Internal Staff Presentation

Cybersecurity & AI

Daiwa Internal Policy
and Guidelines

Safe and practical use of AI tools and cybersecurity awareness in the workplace — for the Daiwa Australia sales team.

$4.26M
Average cost of a data breach in Australia (AUD, 2024)
266
Days on average to detect a breach in Australia
$152.6M
Stolen via Business Email Compromise in Australia in 2024
99.9%
Of account attacks blocked by multi-factor authentication
March 2026
Sales Team & Management
Daiwa Australia — Internal Use Only
Meeting Objective

Why We're Here Today

Cyberattacks are no longer something that happens to other companies. In Australia alone, $152.6 million was stolen through Business Email Compromise in 2024 — a 66% increase year-on-year. At the same time, 77% of employees share sensitive company data through AI tools without realising the risk. This session is about equipping our team to work confidently, not fearfully.

01 — Goal

Use AI Confidently and Correctly

Understand exactly what ChatGPT Team and Microsoft Copilot do with your data, what tasks are approved, and how to get real value from them every day — without exposing Daiwa or our clients.

02 — Goal

Build Practical Security Habits

Understand the real threats — phishing, Business Email Compromise, fake Wi-Fi networks — and the simple daily habits that stop them. Small actions like locking your screen and enabling MFA make a measurable difference.

03 — Goal

Know Daiwa's Rules and Why

Understand our internal AI and device policies — not as restrictions, but as guardrails that protect you, your clients, and the company. Know when to ask before you act, so you're always on the right side of the line.

Agenda

What We'll Cover Today

1

Cybersecurity & AI — Daiwa Internal Policy & Guidelines

Our governance approach to AI, the tools we've approved and why, how Microsoft Intune keeps our devices secure, and what the policy expects of each of us day to day

~15 min
2

Using AI Safely to Enhance Your Sales Workflow

Real use cases for ChatGPT Team and Copilot — including example prompts, what not to enter and why, and four realistic sales day scenarios showing safe AI use in practice

~15 min
3

Cybersecurity 101 — Stay Vigilant, Stay Protected

How to spot phishing emails, the real cost of a weak password, what hackers do on public Wi-Fi, how to share files safely, and how to report anything suspicious without delay

~15 min
+

Common Mistakes to Avoid, Best Practices Checklist & Key Takeaways

The six most common security slip-ups in a sales environment, a printable checklist, and open Q&A

~10 min
01
Section One

Daiwa Internal Policy
& Guidelines

What Daiwa expects when using AI tools at work — our governance approach, how Microsoft Intune keeps devices secure and compliant, and the clear rules that protect everyone including our clients.

Our AI Governance Approach
Intune, BitLocker & Compliance
Clear Policy Do's and Don'ts
Section 1 · Our Approved AI Tools

The AI Tools Available to Us — and How They Protect Our Data

Daiwa has approved two enterprise-grade AI platforms. The critical difference from free consumer tools: your data is not used to train AI models and stays within controlled business boundaries.

ChatGPT Team — What It Is and How It Handles Your Data

ChatGPT Team is OpenAI's paid business plan. Unlike the free version where your conversations may be used to train OpenAI's models, ChatGPT Team data is excluded from training by default. Deleted or unsaved conversations are purged within 30 days. Workspace admins at Daiwa can set retention policies and review usage. Your prompts stay within our business account — they are not visible to other companies or members of the public. Think of it as a private, walled version of ChatGPT that only Daiwa controls.

Drafting emails · Summarising notes · Brainstorming ideas · Rewriting & improving text · Research prompts · Proposal structure

Microsoft Copilot (M365) — Integrated Into the Tools You Already Use

Copilot is built directly into Outlook, Teams, Word, Excel, and PowerPoint. It uses your existing Microsoft 365 permissions — so it can only access files and emails that you already have the right to see. Microsoft commits that Copilot data is not used to train its foundation AI models and data is stored in your regional Microsoft data centre (Australia/Singapore). In Outlook it can draft and summarise emails. In Teams it generates meeting summaries and action items. In Word it drafts documents. In Excel it analyses data. All of this happens within Microsoft's existing enterprise data protection commitments.

Meeting summaries · Email drafts in Outlook · Proposal structure in Word · Data analysis in Excel · Document review
Only Use These Approved Tools for Daiwa Work

Free ChatGPT (personal accounts), Google Gemini, Meta AI, and other consumer AI tools do not carry the same data protections. On free plans, your conversations can be used as training data. Using them for Daiwa work is a breach of policy — not because we want to restrict you, but because we've seen what goes wrong when there are no guardrails.

Section 1 · Device Management

Microsoft Intune — What It Does and Why It Matters

Intune is Daiwa's Mobile Device Management (MDM) system. It ensures every company device meets a defined security baseline before it can access company email, SharePoint, Teams, or any other data. If your device falls out of compliance, access is automatically blocked by Conditional Access policies until it's resolved.

Device Compliance Checks

Before your device can access company data, Intune verifies: OS version is current, screen lock is set, antivirus is running, and encryption is enabled. If any check fails, Conditional Access blocks access to Outlook, Teams, and SharePoint until resolved.

BitLocker Encryption

BitLocker encrypts everything on your hard drive using your computer's built-in security chip (TPM). If someone steals your laptop and removes the hard drive, all data is completely unreadable — they cannot access a single file without your login credentials.

Automatic Security Updates

Security patches are pushed to your device automatically via Intune. Most patches fix vulnerabilities that hackers are already actively exploiting in the wild. Deferring updates for weeks leaves a known open door into your device.

Remote Wipe Capability

If your device is lost or stolen, Daiwa IT can remotely wipe all company data from the device within minutes — email, files, app data, everything. This is only possible because the device is enrolled in Intune. Without it, lost devices become data breaches.

Conditional Access

Intune works with Entra ID to enforce "if/then" access rules. Example: if a device is marked non-compliant, then block access to SharePoint. A device marked as having an insider risk cannot access Copilot. Access is restored automatically when compliance is regained.

App Policy Enforcement

Intune ensures work apps — Outlook, Teams, OneDrive — are configured with Daiwa's security settings. For example, it can prevent copying work email content into a personal app, and enforce that Outlook on your phone requires a PIN before opening.

Your Responsibility: Keep Your Device Compliant

Approve Intune compliance prompts promptly. Apply security updates — don't defer indefinitely. If your device shows a compliance warning or you suddenly can't access Outlook or Teams, contact IT immediately. Waiting doesn't fix it; the block stays until the device is compliant.

Section 1 · Policy Reference

Daiwa AI & Device Policy — Do's & Don'ts

These rules exist to protect our clients, our data, and each other. They are not bureaucracy — they are the difference between a routine day and a $4 million incident.

Do — Expected of All Staff

  • Use only ChatGPT Team or Microsoft Copilot for AI-assisted work tasks — no personal accounts
  • Keep your device enrolled in Intune and showing as compliant at all times
  • Lock your screen (Win+L on PC / Ctrl+Cmd+Q on Mac) whenever you step away — even briefly
  • Enable Multi-Factor Authentication (MFA) on all work accounts — Microsoft 365 and ChatGPT Team
  • Review all AI-generated content carefully before sending or acting on it — AI can produce confidently wrong information
  • Share work documents via SharePoint or OneDrive — not email attachments or USB drives
  • Report security concerns, unusual AI outputs, or suspicious emails to IT promptly
  • Use Daiwa VPN when accessing company systems on public or untrusted Wi-Fi networks

Don't — Policy Violations

  • Use personal or free AI accounts (free ChatGPT, Gemini, etc.) for any Daiwa-related work
  • Paste client names, account numbers, financial figures, or contact details into any AI tool
  • Share your passwords or credentials with anyone — including IT staff (we will never ask for your password)
  • Ignore or indefinitely defer Intune compliance alerts or Windows security updates
  • Use personal Dropbox, WeTransfer, USB drives, or personal cloud storage for work files
  • Forward work emails, contracts, or documents to personal email accounts to work from home
  • Access company systems on public Wi-Fi without VPN — hotel, airport, café, conference networks
  • Send or approve payment changes without first calling the requester on a known phone number
The Golden Rule: When In Doubt, Ask Before You Act

Serious or repeated breaches of the Australian Privacy Act can attract penalties of AU$50 million or more. If you're ever unsure whether something is allowed, a 30-second message to your manager or IT is always worth it.

02
Section Two

Using AI Safely to
Enhance Your Workflow

Practical, approved ways ChatGPT Team and Microsoft Copilot can save you time every single day — with real example prompts, clear information boundaries, and four realistic sales scenarios.

Approved Use Cases with Example Prompts
What Never to Enter — and the Samsung Warning
Four Real Sales Day Scenarios
Section 2 · Approved Use Cases

What AI Can Actually Do for Your Sales Day

AI is most valuable when you give it a specific, well-defined task. Below are approved use cases with real example prompts you can use today. Replace the text in [brackets] with your own details, and add any client specifics to the AI's output yourself rather than typing them into the prompt.

Drafting Client Emails

Ask AI to write the structure and tone, then add your specific client details manually. Never paste a client's name or account info into the prompt.

Example prompt
"Write a professional follow-up email after a sales call about [product category]. Key points discussed: [list 2–3 topics]. Tone: friendly but professional. Include a clear call-to-action for a follow-up meeting."

Summarising Meeting Notes

Paste your rough notes (with any sensitive figures replaced or removed) and ask Copilot or ChatGPT to produce a clean summary with action items.

Example prompt
"Summarise these meeting notes into 5 bullet points. Highlight any action items and who owns each one. Keep it under 150 words."

Structuring Proposals

AI can give you a logical structure and persuasive framing. You then fill in the client-specific details, pricing, and terms yourself — never in the AI prompt.

Example prompt
"Create a proposal outline for a [product category] range for a mid-size retail buyer. Include: executive summary, product fit, key benefits, next steps."

Improving Wording & Tone

Paste a draft email or paragraph and ask AI to make it clearer, more professional, or more concise. It's like having a copyeditor on demand.

Example prompt
"Rewrite the following email to be more professional and concise. Remove any repetition and tighten the language: [paste your draft here]"

Market Research Summaries

Ask for publicly available information about industry trends, competitor product categories, or market overviews. Don't ask AI to access confidential competitor data — it can't, and shouldn't try to.

Example prompt
"Summarise the current trends in the Australian sport fishing equipment retail market. What are buyers prioritising in 2025? Keep it to one paragraph."

Preparing Talking Points

Use AI to generate structured talking points for a client presentation, product pitch, or internal briefing — then personalise them with your own knowledge of the client.

Example prompt
"Generate 5 strong talking points for a 10-minute presentation on why a retail buyer should expand their [product category] range. Focus on margin, sell-through, and consumer demand."
Section 2 · Information Boundaries

What Must NEVER Be Entered into AI Tools — and Why

Even on ChatGPT Team, treat every prompt as potentially readable. The enterprise plan protects your data from being used for training — but your prompt is still sent to and processed by external servers. Keep sensitive content out.

Never Enter Any of the Following
  • Client names, account numbers, or contact details — use [Client Name] or [Account] as placeholders instead
  • Financial data — pricing, margins, P&L figures, budget amounts, commission structures
  • Passwords, PINs, API keys, or login credentials — for any system, yours or the company's
  • Personal information — date of birth, home address, ID numbers (yours or anyone else's)
  • Internal documents not approved for external sharing — contracts, agreements, board papers
  • Unreleased product details, company strategy, or M&A activity
  • Client correspondence marked confidential or containing sensitive commercial terms
The Newspaper Test — Use It Every Time

Before typing anything into an AI prompt, ask yourself: "Would I be comfortable if this exact text appeared in a news article about Daiwa?" If not — replace the sensitive details with placeholders. The AI still does the task. You still get the result. But nothing sensitive ever leaves the building.
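For the technically inclined, the placeholder habit can even be semi-automated. The sketch below is purely illustrative (the client name, account string, and pattern list are hypothetical, not a Daiwa tool) and shows the idea: swap known-sensitive strings for placeholders before text goes anywhere near an AI prompt.

```python
# Illustrative sketch of the placeholder habit: replace known-sensitive
# strings with placeholders before text is pasted into an AI prompt.
# The names and account string below are hypothetical examples.
import re

PLACEHOLDERS = {
    "Acme Retail Pty Ltd": "[Client Name]",  # hypothetical client
    "062-000 1234 5678": "[Account]",        # hypothetical BSB/account
}

def redact(text: str) -> str:
    """Substitute sensitive strings, then mask anything resembling a dollar figure."""
    for sensitive, placeholder in PLACEHOLDERS.items():
        text = text.replace(sensitive, placeholder)
    return re.sub(r"\$[\d,]+(?:\.\d+)?", "[Amount]", text)

note = "Meeting with Acme Retail Pty Ltd: agreed $12,500 opening order."
print(redact(note))
# The AI still gets enough context to do the task; nothing sensitive leaves.
```

The point isn't the script itself — it's that redaction is a mechanical, repeatable step, not a judgement call you make under time pressure.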

Real Case Study — Samsung, 2023

Three Data Leaks in Four Weeks

In March 2023, Samsung allowed employees to start using ChatGPT. Within four weeks, three separate confidential data leaks occurred:

  • An engineer pasted proprietary semiconductor source code into ChatGPT and asked it to find bugs
  • Another employee pasted equipment defect identification code for optimisation help
  • A third employee asked ChatGPT to generate meeting minutes from a recorded internal meeting

All three pieces of data were sent to OpenAI's servers and potentially became part of training data. Samsung initially banned ChatGPT company-wide — then discovered employees were simply using it on personal phones at home with zero oversight. They eventually deployed an internal secure AI solution instead.

The lesson: These weren't careless people. They were engineers trying to do their jobs faster. The risk came from not having clear training and guidelines in place beforehand. That's exactly why we're here today.

AI Is Confident — But Not Always Correct

AI tools will state incorrect information with complete confidence. Always verify facts, figures, and names before sending any AI-generated content externally. You are responsible for everything that goes out under your name.

Section 2 · Real Sales Day Scenarios

AI in the Real Sales Day — Four Scenarios

Scenario A — Drafting a Client Follow-Up Email Under Time Pressure

It's 4:45pm. You just got off a call with a potential retailer and want to send a polished follow-up before they go home. You have rough notes but no time to write a perfect email from scratch.

Safe approach: Open ChatGPT Team. Prompt: "Write a professional follow-up email after a sales call about sport fishing tackle availability and ranging. Key discussion points: [list your 3 main talking points]. Tone: friendly but professional. Include a request to meet again next week." — Then manually add the client's name, their business name, and any specifics before sending. The AI wrote the structure; you maintain control of the sensitive details.
Scenario B — Turning Messy Meeting Notes into a Clean Summary

You've just come out of a 90-minute internal product review meeting and have three pages of rough notes, action items, and follow-ups scattered throughout. Your manager wants a summary sent to the team by end of day.

Safe approach with Copilot in Teams: If the meeting was recorded in Teams, use Copilot's built-in "Summarise this meeting" feature — it creates meeting notes automatically within Microsoft's secure environment. If using ChatGPT Team, first review your notes and replace any client names, dollar figures, or confidential product details with placeholders like [Client A] or [Product X]. Then paste and ask: "Summarise into 6 bullet points with action items and owners."
Scenario C — Working at a Trade Show on Public Wi-Fi

You're at a major fishing and outdoor trade show. You need to access your Daiwa emails, check SharePoint for a product catalogue, and reply to a client. The venue is offering free "ShowWiFi" — it's convenient and everyone's using it.

Safe approach: Do not connect to the venue Wi-Fi for company work. Use your phone's mobile data hotspot instead — you control the network name, password, and access. If you must use venue Wi-Fi, connect through Daiwa's VPN first, then access company systems. Without VPN on public networks, an attacker on the same network can intercept your traffic. In 2024, a man in Australia was arrested for running fake airport Wi-Fi networks specifically to steal email and social media credentials from travellers.
Scenario D — Preparing a Proposal Quickly with AI Help

A buyer from a major retail chain has asked for a product proposal by tomorrow morning. You have the product knowledge but need to structure a professional document quickly — it's now 3pm.

Safe approach: Use ChatGPT Team or Copilot in Word. Prompt: "Create a formal product proposal outline for a [sport fishing tackle range] for a mid-to-large retail chain buyer. Sections should include: executive summary, product overview, key consumer benefits, ranging recommendation, support & marketing, and next steps." — AI gives you the skeleton in minutes. You then fill in actual product names, pricing tiers, and the client's specific details directly into the Word document — not into the AI prompt.
Section 3 · Phishing & Business Email Compromise

The Threat That Costs Australian Businesses Millions

Phishing is the single most common way attackers get in — responsible for 16% of all data breaches globally. What makes it dangerous is that it doesn't look like a hacker movie. It looks like a normal email from your client, your CEO, or Microsoft.

The Numbers in Australia

  • $152.6 million stolen via Business Email Compromise in Australia in 2024 — a 66% increase from 2023
  • Average loss per BEC incident: $55,000+
  • 40% of BEC attacks in Q2 2024 were generated using AI — making them harder to spot
  • Average time to detect a breach in Australia: 266 days — that's nearly 9 months of exposure
  • Average cost of a breach in Australia: AUD $4.26 million (2024, up 5.7% from 2023)
  • 1.13 million phishing attacks were recorded globally in Q2 2025 alone — the highest quarterly total in two years
Business Email Compromise — The #1 Sales Team Threat

BEC is when an attacker impersonates a client, supplier, executive, or colleague via email to trick someone into transferring money or changing payment details. It looks completely legitimate. In one real Australian case, a company transferred $55,000 to a fraudster who had impersonated their supplier's email address — changing just one letter in the domain (e.g., daiwas.com instead of daiwa.com). Always call the person on a known number before approving any payment change.

Red Flags in Any Email

  • Urgency and pressure: "Act immediately or your account will be suspended" — legitimate companies give you time
  • Sender address doesn't match: The display name says "ANZ Bank" but the email is from anzbanknotice@gmail.com
  • Hover before you click: The link says "click here to verify" but hovering reveals it goes to a random website
  • Unexpected attachments: A "remittance advice" or "invoice" PDF you weren't expecting — especially Word docs asking to enable macros
  • Requests for credentials: Any email asking for a password, PIN, or MFA code — no legitimate system needs this via email
  • Payment or bank detail changes: Any request to update BSB/account numbers — always verify by phone on a known number, never the one in the email
If In Doubt: Pick Up the Phone

The single most effective defence against BEC and phishing is a 30-second phone call to verify. Use a number you already have stored — never the number provided in the suspicious email or message.

Section 3 · Passwords & Multi-Factor Authentication

Passwords — How Quickly They Can Be Cracked (2025)

81% of data breaches involve weak, stolen, or reused passwords. Modern GPU clusters can test billions of password combinations per second. Here's how long your password would last against a current attack:

Password Type · Example · Time to Crack · Verdict
  • 8 characters, numbers only · 38471029 · 37 seconds · Completely unsafe
  • 8 characters, lowercase only · password · 3 weeks · Not acceptable
  • 8 characters, mixed case + numbers + symbols · P@ssw0rd · Months (but predictable) · Marginal: common patterns are known
  • 12 characters, mixed complexity · Tr0ub4dor&3! · Thousands of years · Strong
  • 15+ lowercase characters (passphrase) · coffee.runs.morning · 477 million years · Excellent, and easier to remember
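The arithmetic behind these numbers is simple exponentiation: an attacker must search a space of alphabet_size to the power of length, so each extra character multiplies the work, while extra symbol complexity only enlarges the base. A minimal sketch (the comparison is illustrative; real cracking times also depend on hardware and attack method):

```python
# Why length beats complexity: the search space is alphabet_size ** length,
# so length grows the keyspace exponentially while a richer character set
# only grows the base.

def keyspace(alphabet_size: int, length: int) -> int:
    """Total number of candidate passwords an attacker must try in the worst case."""
    return alphabet_size ** length

# 8 characters drawn from all 95 printable ASCII characters (upper, lower, digits, symbols)
complex_8 = keyspace(95, 8)

# 15 characters drawn from lowercase letters only (a simple passphrase)
lowercase_15 = keyspace(26, 15)

print(f"8-char complex keyspace:    {complex_8:.2e}")
print(f"15-char lowercase keyspace: {lowercase_15:.2e}")
print(f"The longer, simpler password has {lowercase_15 // complex_8:,}x more combinations")
```

This is why the passphrase row at the bottom of the table beats the heavily symbol-laden rows above it: length is the lever that matters most.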
Multi-Factor Authentication Blocks 99.9% of Account Attacks

Microsoft Security research shows MFA prevents 99.9% of automated account compromise attacks — even when an attacker already has your password. This is the single highest-impact security action any person can take. Enable MFA on Microsoft 365, ChatGPT Team, and every other account that supports it.

MFA Fatigue Attacks — Know the Signs

If you receive unexpected MFA push notifications on your phone that you did not initiate, someone has your password and is trying to brute-force MFA approval. Deny every request and notify IT immediately. Do not approve them to make the notifications stop.

Password Best Practices

  • Use a passphrase: Three or more random words with symbols — e.g., Coffee!Runs#Morning42. Long, easy to remember, almost impossible to crack.
  • Never reuse passwords: 73% of people use duplicate passwords. If one site is breached, attackers immediately try that password on your email, banking, and work accounts.
  • Use a password manager: Generates and stores unique, strong passwords for every site. You only remember one master password. IT can advise on an approved option.
  • Change immediately if breached: Check haveibeenpwned.com with your personal email address to see if your credentials have appeared in known data breaches.
  • Never share passwords: Including with IT. Legitimate IT staff will never ask for your password to fix a problem.
Section 3 · Device Security & Public Networks

Device Security & Public Wi-Fi — What the Threats Actually Look Like

As a sales rep who travels frequently, your device and network habits matter. In 2024, a man in Australia was arrested for setting up fake Wi-Fi networks on commercial flights and at airports, collecting the email credentials and social media logins of dozens of travellers. Here's how these attacks work and what to do.

How Public Wi-Fi Attacks Work

Evil Twin Attack: An attacker sets up a Wi-Fi hotspot with the same name as the legitimate network — e.g., "Airport_Free_WiFi." When you connect, all your traffic goes through their device first. They can see your login credentials, emails, and any unencrypted data you send.

Man-in-the-Middle (MITM): Even on a legitimate network, an attacker on the same Wi-Fi can intercept traffic between your device and the internet. They can capture login sessions, financial data, and email content — often without the network provider knowing.

What attackers can see without VPN: Login credentials, email content, financial transaction details, the websites you visit, and any data you enter into web forms that aren't fully encrypted end-to-end.

The Australian Airport Wi-Fi Arrest — 2024

Australian Federal Police arrested a man who set up portable Wi-Fi devices on domestic commercial flights and at several airports with names designed to look like airline or airport networks. Passengers connecting to check emails had their credentials harvested. Several people lost access to email and social media accounts within hours of landing. The only protection that would have stopped this: VPN.

Your Best Defence on the Road
  • Use your mobile hotspot instead of public Wi-Fi wherever possible — you control who can see the network name and connect to it
  • If you must use public Wi-Fi, always connect through Daiwa's VPN first, before opening any work apps or websites
  • Never connect to a Wi-Fi network that wasn't pre-arranged or verified with the venue

Device Security Habits That Matter

  • Auto-lock set to 5 minutes or less on all work devices — enforce it in Settings
  • Lock manually with Win+L (PC) / Ctrl+Cmd+Q (Mac) every single time you step away
  • Use a privacy screen filter on your laptop when working in public — airports, cafes, client offices, trains
  • Share work documents via SharePoint / OneDrive links with appropriate permission levels, not as email attachments
  • Install security updates promptly — most patches fix actively exploited vulnerabilities

Habits That Create Risk

  • Working on sensitive emails or documents in places where people behind you can see your screen
  • Sending files via personal Gmail, WhatsApp, or iMessage because it's quicker
  • Saving work documents to personal iCloud, Google Drive, or personal Dropbox
  • Leaving devices unattended and unlocked even "for just a minute" in public or shared spaces
  • Connecting to any unverified Wi-Fi — including at client offices — without using VPN
Section 3 · Real-World Scenarios

Four Security Scenarios — What Would You Do?

Scenario A — "Your Microsoft Account Has Unusual Activity" Email

You receive an email with the Microsoft logo saying your account has been flagged for unusual sign-in activity. There's a blue button that says "Verify your account now" and a warning that access will be suspended in 24 hours if you don't act.

Do NOT click the button. Microsoft never asks you to verify credentials via email. Hover over the sender's email address — it will not be from a microsoft.com domain. Report the email to IT using Outlook's "Report Phishing" button. If genuinely concerned about your account, navigate directly to office.com yourself (type it in the browser — don't click any link in the email) and check your account status there.
Scenario B — A Long-Standing Client Emails You to Change Their Bank Details

You receive an email from what appears to be your main client contact — correct name, correct email address, matching signature — asking you to update their BSB and account number for next invoice payment. The email says they've recently changed banks.

Stop completely. This is a Business Email Compromise scenario — one of the most common attacks targeting sales teams. BEC attackers can perfectly spoof email addresses and copy email signatures. Call your contact directly on the phone number you already have stored (not any number provided in the email). If they confirm the change, then proceed. If they have no idea what you're talking about, their email account may have been hacked — inform them immediately and notify IT.
Scenario C — Sharing a Signed Contract with Finance Quickly

A client has just signed a contract and your finance team needs it urgently to process. It's end of day and you want to get it to them fast. Your first instinct is to email the PDF or send it via Teams chat.

Upload the contract to SharePoint in the appropriate project folder first. Then share the SharePoint link with finance via Teams or Outlook — set permissions so only they can access it. This is faster than it sounds, keeps a proper audit trail, and ensures the document is stored securely rather than living in multiple email inboxes. Do not save contracts to personal cloud storage or USB drives at any point.
Scenario D — You Left Your Laptop Unlocked in the Break Room

You went to make a coffee and left your laptop open and unlocked in the break room for about 8 minutes. When you returned, nothing looked touched and no files appeared to be open.

Report it to IT as a precaution — even if nothing appears to have happened. An attacker or an opportunistic insider can install a keylogger, copy files to a USB drive, or access sensitive emails in under 60 seconds. You won't see any evidence unless it's forensically investigated. Use Win+L every single time you step away. Set auto-lock to 5 minutes in Settings so it activates automatically as a backup.
Common Mistakes

The Six Most Common Mistakes — All Well-Intentioned, All Avoidable

These are the mistakes we see most often across organisations. None of them come from carelessness — they come from not having clear guidelines. That's what today is for.

Pasting Client Data into AI "Just to Draft an Email"

An engineer at Samsung pasted proprietary source code into ChatGPT to ask for help fixing a bug. The data was sent to OpenAI's servers, outside Samsung's control, and potentially retained as training data. The same risk applies to client names, account numbers, and financial details — even small details can be sensitive in context.

Use placeholders: [Client Name], [Product], [Amount] — then add real details manually afterwards

Forwarding Contracts and Reports to Personal Email

Sending yourself a sales report, pricing spreadsheet, or client contract to "work on from home over the weekend" via personal Gmail. Once it leaves the Daiwa tenancy, you lose all control over that file — who can access it, how it's stored, and what happens if your personal account is compromised.

Use SharePoint or OneDrive — you can access Daiwa files securely from any compliant device

Clicking Links Without Checking the Destination First

Clicking "click here" in an email from a supplier or courier without hovering over the link to check where it actually goes. Phishing links are designed to look almost identical to legitimate URLs — a single character difference (e.g., daiwa-au.com vs daiwa.com) can be easy to miss when you're busy.

Always hover over every link first. If the destination doesn't match, do not click — report to IT

Clicking "Remind Me Later" on Security Updates for Weeks

Deferring Windows or app security updates for days or weeks because restarting is inconvenient. Most security patches fix vulnerabilities that attackers are already actively exploiting in the wild. The longer you wait, the longer the door is open. Organisations that apply patches quickly suffer significantly fewer incidents.

Apply updates within 24–48 hours. Schedule restarts for a time that doesn't disrupt your day

Using the Same Password for Work and Personal Accounts

Using your Daiwa email password (or a variant of it) for personal accounts like shopping sites, streaming services, or gym apps. When those services are breached — and they regularly are — attackers immediately test those credentials on business email, Microsoft 365, banking, and LinkedIn. This is called "credential stuffing" and it's automated and instant.

Every account needs a unique password. A password manager makes this easy — ask IT for a recommendation

Screen-Sharing the Entire Desktop in Teams Calls

Sharing your full desktop instead of a specific application window in a Teams or Zoom call. An open email or spreadsheet with client data, pricing, or confidential discussions becomes visible to everyone on the call — including people you may not know are recording. In many cases, people don't realise what's visible until someone mentions it.

Always choose "Window" or a specific app when screen-sharing — never share the full desktop in client-facing calls
Best Practices Checklist

Your Daily Security & AI Checklist — Keep This as a Reference

These habits reduce your personal risk and Daiwa's exposure. Screenshot this or ask for a printed copy to keep at your desk.

MFA is enabled on Microsoft 365, ChatGPT Team, and every other work account that supports it
Screen auto-locks in 5 minutes or less — confirmed in device Settings
I lock manually with Win+L or Ctrl+Cmd+Q every time I step away from my device
My device is enrolled in Intune and currently shows as compliant — no outstanding alerts
I use Daiwa VPN before accessing any company system on public or client Wi-Fi networks
I only use ChatGPT Team or Copilot for AI work tasks — never my personal accounts or free AI tools
I never paste client names, account numbers, or financial figures into any AI tool prompt
I review all AI-generated content carefully before sending — AI makes confident mistakes
I use unique passwords for every account and a password manager — no password reuse
I hover over all links in emails before clicking to verify the real destination URL
I report suspicious emails to IT using Outlook's Report Phishing button — without clicking any links first
I share files via SharePoint or OneDrive links — never by attaching documents to personal emails
I apply security updates within 48 hours — I don't defer them indefinitely
I verify any request to change payment details by calling the requester on a known phone number — never trust the number in the email
I use a privacy screen filter in public spaces and avoid discussing sensitive client information in open areas
When unsure, I ask IT or my manager first — there is no such thing as a silly security question
Key Takeaways

You're Now Better Prepared

Three things to walk away with today — and the numbers that back them up:

AI Is a Real Productivity Tool — Use It Right

ChatGPT Team and Copilot can save you real time on emails, proposals, and meeting notes. Use them with the approved prompts. Keep sensitive details out. Review everything before you send it. The tool helps — your judgement is still essential.

Security Is Everyone's Job — and MFA Is Your Biggest Lever

Australian businesses lost $152.6M to BEC in 2024. Breaches take 266 days to detect. But MFA blocks 99.9% of account attacks. Lock your screen. Verify payment changes by phone. Report phishing. Small daily habits create measurable protection.

When In Doubt — Ask Before You Act

Every policy exists because something went wrong somewhere without it. If you're ever unsure whether something is safe — sending a file, using an AI tool, clicking a link — a 30-second message to IT or your manager is always the right call. There are no silly questions here.


Reporting Security Concerns — Early Reports Limit Damage Significantly

If you receive an unexpected MFA prompt, suspect a phishing email, lose a device, accidentally paste something sensitive into AI, or anything just feels "off" — contact IT immediately. The sooner an incident is reported, the smaller the impact. You will not be in trouble for reporting something in good faith. You may be in trouble for not reporting it.

  DAIWA AUSTRALIA  ·  Internal Presentation — Confidential  ·  IT Support for all queries