Safe and practical use of AI tools and cybersecurity awareness in the workplace — for the Daiwa Australia sales team.
Cyberattacks are no longer something that happens to other companies. In Australia alone, $152.6 million was stolen through Business Email Compromise in 2024 — a 66% increase year-on-year. At the same time, 77% of employees share sensitive company data through AI tools without realising the risk. This session is about equipping our team to work confidently, not fearfully.
Understand exactly what ChatGPT Team and Microsoft Copilot do with your data, what tasks are approved, and how to get real value from them every day — without exposing Daiwa or our clients.
Understand the real threats — phishing, Business Email Compromise, fake Wi-Fi networks — and the simple daily habits that stop them. Small actions like locking your screen and enabling MFA make a measurable difference.
Understand our internal AI and device policies — not as restrictions, but as guardrails that protect you, your clients, and the company. Know when to ask before you act, so you're always on the right side of the line.
Our governance approach to AI, the tools we've approved and why, how Microsoft Intune keeps our devices secure, and what the policy expects of each of us day to day
Real use cases for ChatGPT Team and Copilot — including example prompts, what not to enter and why, and four realistic sales day scenarios showing safe AI use in practice
How to spot phishing emails, the real cost of a weak password, what hackers do on public Wi-Fi, how to share files safely, and how to report anything suspicious without delay
The six most common security slip-ups in a sales environment, a printable checklist, and open Q&A
What Daiwa expects when using AI tools at work — our governance approach, how Microsoft Intune keeps devices secure and compliant, and the clear rules that protect everyone including our clients.
Daiwa has approved two enterprise-grade AI platforms. The critical difference from free consumer tools: your data is not used to train AI models and stays within controlled business boundaries.
ChatGPT Team is OpenAI's paid business plan. Unlike the free version where your conversations may be used to train OpenAI's models, ChatGPT Team data is excluded from training by default. Deleted or unsaved conversations are purged within 30 days. Workspace admins at Daiwa can set retention policies and review usage. Your prompts stay within our business account — they are not visible to other companies or members of the public. Think of it as a private, walled version of ChatGPT that only Daiwa controls.
Copilot is built directly into Outlook, Teams, Word, Excel, and PowerPoint. It uses your existing Microsoft 365 permissions — so it can only access files and emails that you already have the right to see. Microsoft commits that Copilot data is not used to train its foundation AI models and data is stored in your regional Microsoft data centre (Australia/Singapore). In Outlook it can draft and summarise emails. In Teams it generates meeting summaries and action items. In Word it drafts documents. In Excel it analyses data. All of this happens within Microsoft's existing enterprise data protection commitments.
Free ChatGPT (personal accounts), Google Gemini, Meta AI, and other consumer AI tools do not carry the same data protections. On free plans, your conversations can be used as training data. Using them for Daiwa work is a breach of policy — not because we want to restrict you, but because we've seen what goes wrong when there are no guardrails.
Intune is Daiwa's Mobile Device Management (MDM) system. It ensures every company device meets a defined security baseline before it can access company email, SharePoint, Teams, or any other data. If your device falls out of compliance, access is automatically blocked by Conditional Access policies until it's resolved.
Before your device can access company data, Intune verifies: OS version is current, screen lock is set, antivirus is running, and encryption is enabled. If any check fails, Conditional Access blocks access to Outlook, Teams, and SharePoint until resolved.
BitLocker encrypts everything on your hard drive using your computer's built-in security chip (TPM). If someone steals your laptop and removes the hard drive, the data is unreadable: without the decryption key, which is only released after a valid login, they cannot recover a single file.
Security patches are pushed to your device automatically via Intune. Most patches fix vulnerabilities that hackers are already actively exploiting in the wild. Deferring updates for weeks leaves a known open door into your device.
If your device is lost or stolen, Daiwa IT can remotely wipe all company data from the device within minutes — email, files, app data, everything. This is only possible because the device is enrolled in Intune. Without it, lost devices become data breaches.
Intune works with Entra ID to enforce "if/then" access rules. Example: if a device is marked non-compliant, then block access to SharePoint. A device marked as having an insider risk cannot access Copilot. Access is restored automatically when compliance is regained.
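The "if/then" model can be sketched in a few lines. This is an illustration of the rule logic only, not Microsoft's actual policy engine; the `Device` fields and resource names are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class Device:
    compliant: bool      # passed the Intune compliance checks
    insider_risk: bool   # flagged by an insider-risk policy

def allowed(device: Device, resource: str) -> bool:
    """Evaluate the two example rules from above: non-compliant devices
    lose access to everything; insider-risk devices lose Copilot."""
    if not device.compliant:
        return False
    if device.insider_risk and resource == "Copilot":
        return False
    return True

print(allowed(Device(compliant=True, insider_risk=False), "SharePoint"))   # True
print(allowed(Device(compliant=False, insider_risk=False), "SharePoint"))  # False: blocked until compliant
print(allowed(Device(compliant=True, insider_risk=True), "Copilot"))       # False: Copilot withheld
```

Note the order of the checks: compliance is evaluated first, so a non-compliant device is blocked from every resource regardless of any other condition, which matches how the access block described above behaves.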
Intune ensures work apps — Outlook, Teams, OneDrive — are configured with Daiwa's security settings. For example, it can prevent copying work email content into a personal app, and enforce that Outlook on your phone requires a PIN before opening.
Approve Intune compliance prompts promptly. Apply security updates — don't defer indefinitely. If your device shows a compliance warning or you suddenly can't access Outlook or Teams, contact IT immediately. Waiting doesn't fix it; the block stays until the device is compliant.
These rules exist to protect our clients, our data, and each other. They are not bureaucracy — they are the difference between a routine day and a $4 million incident.
Australian Privacy Act violations can result in fines of up to AU$50 million for serious or repeated breaches. If you're ever unsure whether something is allowed, a 30-second message to your manager or IT is always worth it.
Practical, approved ways ChatGPT Team and Microsoft Copilot can save you time every single day — with real example prompts, clear information boundaries, and four realistic sales scenarios.
AI is most valuable when you give it a specific, well-defined task. Below are approved use cases with real example prompts you can use today — just replace the text in [brackets] with your own details, and add any client specifics to the finished output yourself, never to the prompt.
Ask AI to write the structure and tone, then add your specific client details manually. Never paste a client's name or account info into the prompt.
Paste your rough notes (with any sensitive figures replaced or removed) and ask Copilot or ChatGPT to produce a clean summary with action items.
AI can give you a logical structure and persuasive framing. You then fill in the client-specific details, pricing, and terms yourself — never in the AI prompt.
Paste a draft email or paragraph and ask AI to make it clearer, more professional, or more concise. It's like having a copyeditor on demand.
Ask for publicly available information about industry trends, competitor product categories, or market overviews. Don't ask AI to access confidential competitor data — it can't, and shouldn't try to.
Use AI to generate structured talking points for a client presentation, product pitch, or internal briefing — then personalise them with your own knowledge of the client.
Even on ChatGPT Team, treat every prompt as potentially readable. The enterprise plan protects your data from being used for training — but your prompt is still sent to and processed by external servers. Keep sensitive content out.
Before typing anything into an AI prompt, ask yourself: "Would I be comfortable if this exact text appeared in a news article about Daiwa?" If not — replace the sensitive details with placeholders. The AI still does the task. You still get the result. But nothing sensitive ever leaves the building.
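The placeholder habit can even be semi-automated before text goes anywhere near a prompt. A minimal Python sketch, where the client list, placeholder names, and regex patterns are all illustrative examples rather than a Daiwa tool or an exhaustive filter:

```python
import re

# Illustrative only: a real list would come from your own accounts,
# and these patterns are examples, not a complete sensitive-data filter.
CLIENT_NAMES = ["Acme Outdoors", "Harbour Fishing Co"]

def redact(text: str) -> str:
    """Swap known client names and number-like identifiers for placeholders."""
    for i, name in enumerate(CLIENT_NAMES, start=1):
        text = text.replace(name, f"[CLIENT_{i}]")
    # Mask runs of 6+ digits (account numbers, phone numbers)
    text = re.sub(r"\b\d{6,}\b", "[NUMBER]", text)
    # Mask dollar amounts like $12,500 or $12500.00
    text = re.sub(r"\$\d[\d,]*(?:\.\d{2})?", "[AMOUNT]", text)
    return text

note = "Acme Outdoors ordered stock worth $12,500; account 12345678."
print(redact(note))
# -> [CLIENT_1] ordered stock worth [AMOUNT]; account [NUMBER].
```

The AI still gets a perfectly usable prompt; you paste the real names and figures back into the result on your own machine.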
In March 2023, Samsung allowed employees to start using ChatGPT. Within four weeks, three separate confidential data leaks occurred:
- An engineer pasted proprietary semiconductor source code into ChatGPT to check it for errors.
- A second engineer pasted code while asking ChatGPT to optimise a test sequence for identifying defective chips.
- A third employee uploaded the transcript of an internal meeting and asked ChatGPT to generate minutes.
All three pieces of data were sent to OpenAI's servers and potentially became part of training data. Samsung initially banned ChatGPT company-wide — then discovered employees were simply using it on personal phones at home with zero oversight. They eventually deployed an internal secure AI solution instead.
The lesson: These weren't careless people. They were engineers trying to do their jobs faster. The risk came from not having clear training and guidelines in place beforehand. That's exactly why we're here today.
AI tools will state incorrect information with complete confidence. Always verify facts, figures, and names before sending any AI-generated content externally. You are responsible for everything that goes out under your name.
It's 4:45pm. You just got off a call with a potential retailer and want to send a polished follow-up before they go home. You have rough notes but no time to write a perfect email from scratch.
You've just come out of a 90-minute internal product review meeting and have three pages of rough notes, action items, and follow-ups scattered throughout. Your manager wants a summary sent to the team by end of day.
You're at a major fishing and outdoor trade show. You need to access your Daiwa emails, check SharePoint for a product catalogue, and reply to a client. The venue is offering free "ShowWiFi" — it's convenient and everyone's using it.
A buyer from a major retail chain has asked for a product proposal by tomorrow morning. You have the product knowledge but need to structure a professional document quickly — it's now 3pm.
Phishing is the single most common way attackers get in — responsible for 16% of all data breaches globally. What makes it dangerous is that it doesn't look like a hacker movie. It looks like a normal email from your client, your CEO, or Microsoft.
BEC is when an attacker impersonates a client, supplier, executive, or colleague via email to trick someone into transferring money or changing payment details. It looks completely legitimate. In one real Australian case, a company transferred $55,000 to a fraudster who had impersonated their supplier's email address — changing just one letter in the domain (e.g., daiwas.com instead of daiwa.com). Always call the person on a known number before approving any payment change.
The single most effective defence against BEC and phishing is a 30-second phone call to verify. Use a number you already have stored — never the number provided in the suspicious email or message.
81% of data breaches involve weak, stolen, or reused passwords. Modern GPU clusters can test billions of password combinations per second. Here's how long your password would last against a current attack:
| Password Type | Example | Time to Crack | Verdict |
|---|---|---|---|
| 8 characters, numbers only | 38471029 | 37 seconds | Completely unsafe |
| 8 characters, lowercase only | jfqmxvze | 3 weeks | Not acceptable (dictionary words like "password" fall instantly) |
| 8 characters, mixed case + numbers + symbols | P@ssw0rd | Months (but predictable) | Marginal — common patterns known |
| 12 characters, mixed complexity | Tr0ub4dor&3! | Thousands of years | Strong |
| 15+ lowercase characters (passphrase) | coffee.runs.morning | 477 million years | Excellent — and easier to remember |
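The exact "time to crack" figures depend on the attacker's hardware and the hashing algorithm protecting the password, but the underlying arithmetic is simple: the keyspace is the character-set size raised to the password length, divided by the guess rate. A short Python sketch, where the 10-billion-guesses-per-second rate is an assumed figure for a fast, unsalted hash:

```python
def crack_time_seconds(charset_size: int, length: int,
                       guesses_per_sec: float = 1e10) -> float:
    """Worst-case time to exhaust the full keyspace at the given guess rate."""
    return charset_size ** length / guesses_per_sec

# 8 digits: only 10^8 combinations, gone in a fraction of a second at this rate
print(f"{crack_time_seconds(10, 8):.4f} s")

# 8 random lowercase letters: 26^8 ~ 2.1e11, still well under a minute
print(f"{crack_time_seconds(26, 8):.0f} s")

# 16-char lowercase passphrase: 26^16 ~ 4.4e22, over a hundred thousand years
years = crack_time_seconds(26, 16) / (3600 * 24 * 365)
print(f"{years:,.0f} years")
```

The takeaway matches the table: every extra character multiplies the attacker's work by the size of the character set, which is why a long passphrase beats a short "complex" password.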
Microsoft Security research shows MFA prevents 99.9% of automated account compromise attacks — even when an attacker already has your password. This is the single highest-impact security action any person can take. Enable MFA on Microsoft 365, ChatGPT Team, and every other account that supports it.
If you receive unexpected MFA push notifications on your phone that you did not initiate, someone has your password and is spamming approval requests in the hope that you eventually tap Approve (a tactic known as "MFA fatigue"). Deny every request and notify IT immediately. Do not approve them to make the notifications stop.
As a sales rep who travels frequently, your device and network habits matter. In 2024, a man in Australia was arrested for setting up fake Wi-Fi networks on commercial flights and at airports, collecting the email credentials and social media logins of dozens of travellers. Here's how these attacks work and what to do.
Evil Twin Attack: An attacker sets up a Wi-Fi hotspot with the same name as the legitimate network — e.g., "Airport_Free_WiFi." When you connect, all your traffic goes through their device first. They can see your login credentials, emails, and any unencrypted data you send.
Man-in-the-Middle (MITM): Even on a legitimate network, an attacker on the same Wi-Fi can intercept traffic between your device and the internet. They can capture login sessions, financial data, and email content — often without the network provider knowing.
What attackers can see without VPN: Login credentials, email content, financial transaction details, the websites you visit, and any data you enter into web forms that aren't fully encrypted end-to-end.
Australian Federal Police arrested a man who set up portable Wi-Fi devices on domestic commercial flights and at several airports with names designed to look like airline or airport networks. Passengers connecting to check emails were presented with fake login pages and had their credentials harvested. Several people lost access to email and social media accounts within hours of landing. Two habits would have stopped this: never typing account credentials into a Wi-Fi "login" page, and using a VPN so traffic cannot be read.
You receive an email with the Microsoft logo saying your account has been flagged for unusual sign-in activity. There's a blue button that says "Verify your account now" and a warning that access will be suspended in 24 hours if you don't act.
You receive an email from what appears to be your main client contact — correct name, correct email address, matching signature — asking you to update their BSB and account number for next invoice payment. The email says they've recently changed banks.
A client has just signed a contract and your finance team needs it urgently to process. It's end of day and you want to get it to them fast. Your first instinct is to email the PDF or send it via Teams chat.
You went to make a coffee and left your laptop open and unlocked in the break room for about 8 minutes. When you returned, nothing looked touched and no files appeared to be open.
These are the mistakes we see most often across organisations. None of them come from carelessness — they come from not having clear guidelines. That's what today is for.
An engineer at Samsung pasted proprietary source code into ChatGPT to ask for help fixing a bug. The data was sent to OpenAI's servers and potentially became training data. The same risk applies to client names, account numbers, and financial details — even small details can be sensitive in context.
Sending yourself a sales report, pricing spreadsheet, or client contract to "work on from home over the weekend" via personal Gmail. Once it leaves the Daiwa tenancy, you lose all control over that file — who can access it, how it's stored, and what happens if your personal account is compromised.
Clicking "click here" in an email from a supplier or courier without hovering over the link to check where it actually goes. Phishing links are designed to look almost identical to legitimate URLs — a single character difference (e.g., daiwa-au.com vs daiwa.com) can be easy to miss when you're busy.
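"Hovering to check the link" comes down to one question: what hostname does the URL actually point to? A minimal Python sketch of that check, where the trusted-domain list is an illustrative example, not a Daiwa allowlist:

```python
from urllib.parse import urlparse

TRUSTED = {"daiwa.com", "microsoft.com"}  # illustrative allowlist only

def looks_trusted(url: str) -> bool:
    """True only if the URL's hostname is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED)

print(looks_trusted("https://portal.daiwa.com/login"))    # True: real subdomain
print(looks_trusted("https://daiwa-au.com/login"))        # False: lookalike domain
print(looks_trusted("https://daiwa.com.evil.example/x"))  # False: trusted name buried in a fake host
```

The third case is the one people miss when busy: the familiar name appears at the start of the address, but the part that matters is the registered domain at the end.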
Deferring Windows or app security updates for days or weeks because restarting is inconvenient. Most security patches fix vulnerabilities that attackers are already actively exploiting in the wild. The longer you wait, the longer the door is open. Organisations that apply patches quickly suffer significantly fewer incidents.
Using your Daiwa email password (or a variant of it) for personal accounts like shopping sites, streaming services, or gym apps. When those services are breached — and they regularly are — attackers immediately test those credentials on business email, Microsoft 365, banking, and LinkedIn. This is called "credential stuffing" and it's automated and instant.
Sharing your full desktop instead of a specific application window in a Teams or Zoom call. An open email or spreadsheet with client data, pricing, or confidential discussions becomes visible to everyone on the call — including people you may not know are recording. In many cases, people don't realise what's visible until someone mentions it.
These habits reduce your personal risk and Daiwa's exposure. Screenshot this or ask for a printed copy to keep at your desk.
Three things to walk away with today — and the numbers that back them up:
ChatGPT Team and Copilot can save you real time on emails, proposals, and meeting notes. Use them with the approved prompts. Keep sensitive details out. Review everything before you send it. The tool helps — your judgement is still essential.
Australian businesses lost $152.6M to BEC in 2024. Breaches take 266 days to detect. But MFA blocks 99.9% of account attacks. Lock your screen. Verify payment changes by phone. Report phishing. Small daily habits create measurable protection.
Every policy exists because something went wrong somewhere without it. If you're ever unsure whether something is safe — sending a file, using an AI tool, clicking a link — a 30-second message to IT or your manager is always the right call. There are no silly questions here.
If you receive an unexpected MFA prompt, suspect a phishing email, lose a device, accidentally paste something sensitive into AI, or anything just feels "off" — contact IT immediately. The sooner an incident is reported, the smaller the impact. You will not be in trouble for reporting something in good faith. You may be in trouble for not reporting it.