This week on The Awareness Angle:
- Deloitte’s AI blunder – The firm refunds part of a $440,000 government report after using ChatGPT to generate fake references.
- ChatGPT data leaks – A new report says 77% of employees have shared company secrets with AI tools outside company controls.
- Cloud missteps – Invoicely exposes 178,000 financial records after leaving a backup bucket wide open online.
Also this week, Capita is fined £14 million for a major data breach, Discord and its vendor argue over who was really responsible for an ID leak, and the NCSC reminds organisations to keep contingency plans on paper. Plus, a school data scare hits close to home, and HuFiCon and Layer 8 continue to champion people-first security.
🎧 Listen on your favourite podcast platform - Spotify, Apple Podcasts and YouTube
Listen Now
Podcast · Risky Creative
Cyber Security Awareness Month videos with Hoxhunt
We’ve teamed up with Hoxhunt again this year to create a series of short, snappy videos for Cyber Security Awareness Month. Each one is just one to two minutes long and covers social engineering in messaging apps, the psychology behind social engineering, how AI is powering spear phishing, and how to spot deepfakes. They’re quick, practical, and perfect for sharing with your colleagues, friends, or family. You can grab them directly from the Hoxhunt toolkit, and there are unbranded versions if you’d like to use them in your own awareness programmes.
Get the toolkit here - https://hoxhunt.com/cybersecurity-awareness-month-toolkit-2025
This week's stories...
Deloitte’s AI Blunder – $440K Refund Over Fake References
Watch the discussion - https://youtu.be/9UGNlB2n2W4?t=2308
Deloitte is refunding part of a $440,000 contract to the Australian government after admitting it used generative AI to help write a report that contained multiple errors, including fake references and incorrect data. The report, which reviewed a welfare compliance system, has since been updated to acknowledge the use of Azure OpenAI GPT-4o.
While Deloitte insists the findings are still valid, the fallout has been fierce. One senator accused the firm of having “a human intelligence problem, not an artificial one.” The incident highlights a growing issue for professional services: when AI is involved in client-facing work, transparency and human review are critical.
Watch the report - https://youtu.be/oN0nViY4gn4
∠The Awareness Angle
- AI Accountability – If AI helps produce work for clients or the public, its use must be disclosed and reviewed. Hidden automation destroys trust.
- Human Oversight – Generative tools can hallucinate facts, so quality control and fact-checking can’t be skipped to save time.
- Integrity Risk – Fake citations might seem small, but they damage credibility and raise questions about governance and ethics.
77% of Employees Leak Data via ChatGPT
Watch the discussion - https://youtu.be/9UGNlB2n2W4?t=626
A new report from LayerX Security found that 77% of employees have shared company secrets through ChatGPT and other AI tools, often using personal accounts that sit completely outside company controls. Generative AI platforms now make up 32% of all unauthorised data movement, with almost half of users uploading files containing personal or financial information.
In the episode, we talked about how banning these tools doesn’t solve the problem; it just pushes them underground. People want to use them because they make their work easier, and if they can’t do that safely, they’ll find another way. It’s not about fear or enforcement; it’s about helping people understand the risks and giving them safe, approved options.
Read more - https://www.esecurityplanet.com/news/shadow-ai-chatgpt-dlp/?&web_view=true
∠The Awareness Angle
- Shadow AI – Blanket bans don’t stop AI use; they push it onto personal accounts that sit completely outside company visibility and controls.
- Safe alternatives – People use these tools because they make work easier, so give them approved options rather than forcing workarounds.
- Education over enforcement – Help people recognise which data is sensitive and why pasting it into AI tools puts the company at risk.
Invoicely Leak Exposes 178,000 Financial Records
Watch the discussion - https://youtu.be/9UGNlB2n2W4?t=398
A cybersecurity researcher discovered an unsecured Amazon S3 bucket linked to invoicing platform Invoicely, exposing almost 180,000 documents including invoices, tax records, and scanned cheques. The database was completely open to the public with no authentication or encryption in place.
We spoke about how these kinds of mistakes keep happening even though they’re avoidable. Misconfigurations like this often come down to human error, testing environments being pushed live, or simple oversight. It is a reminder that cloud platforms do not fail on their own. People do. Regular checks, peer reviews, and clear ownership of cloud assets are what make the difference.
Read more - https://cybersecuritynews.com/178000-invoices-with-customers-personal-records-exposes/
∠The Awareness Angle
- Cloud Misconfigurations – The biggest cloud security risks often come from small setup mistakes. Always check who can access what and from where.
- Real-World Consequences – Leaked invoices and tax details can easily be used in social engineering and fraud attempts. Authentic data makes scams more convincing.
- Shared Responsibility – Using SaaS tools does not mean the vendor handles everything. Businesses still need to review how their data is stored and protected.
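The “check who can access what” advice above is something you can automate. Here’s a minimal, hypothetical sketch of the kind of check a scheduled cloud audit could run. The dictionary shapes mirror what AWS returns from its S3 `GetPublicAccessBlock` and `GetBucketAcl` APIs (e.g. via boto3), but the function itself is illustrative and not tied to Invoicely’s actual setup.

```python
# Hypothetical audit helper: flag S3-style buckets that look publicly readable.
# In a real audit you would feed this the responses from boto3's
# get_public_access_block() and get_bucket_acl() for each bucket.

# AWS's well-known "everyone" grantee groups.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_is_public(public_access_block: dict, acl_grants: list) -> bool:
    """Return True if the bucket settings look publicly readable."""
    # If all four Public Access Block flags are on, open ACLs are overridden.
    flags = ("BlockPublicAcls", "IgnorePublicAcls",
             "BlockPublicPolicy", "RestrictPublicBuckets")
    if all(public_access_block.get(f) for f in flags):
        return False
    # Otherwise, look for ACL grants to the "everyone" groups.
    return any(
        grant.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
        for grant in acl_grants
    )
```

Running a check like this regularly, with a named owner for each bucket, is one way to turn “regular checks and clear ownership” from advice into routine.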
Do you have something you would like us to talk about? Are you struggling to solve a problem, or have you had an awesome success? Reply to this email telling us your story, and we might cover it in the next episode!
Awareness Awareness
Security Champions Research Project
If you run or support a Security Champions or Ambassador Programme, this one’s for you. The team at Layer 8 are running an open-source research project throughout October to better understand what makes these programmes work.
They’re looking to uncover:
- What the most successful programmes have in common
- The biggest challenges and how organisations are overcoming them
- How teams measure the impact of their champions
- What real-world results these programmes are delivering
The goal is to create a shared, open dataset that anyone in the community can use. Your contribution is completely anonymous, and the insights could help raise the bar for champion networks everywhere.
Take a few minutes to add your experience at the link below -
https://layer8champions.scoreapp.com/
Watch the discussion – https://youtu.be/9UGNlB2n2W4?t=2579
Human Firewall Conference
The Human Firewall Conference (HuFiCon) takes place in Cologne this November, bringing together awareness professionals, behaviour experts, and security leaders from across Europe. Hosted by SoSafe, it’s all about the human side of cyber — how we engage, motivate, and influence secure behaviour at scale.
Ant will be there, contributing to one of the sessions, and the line-up looks brilliant: from industry researchers to F1’s Ralf Schumacher. The event blends talks, panels, and interactive experiences in one of the most creative security awareness gatherings of the year.
If you work anywhere near human risk, culture, or awareness, this is one to get to.
Register at http://www.humanfirewallconference.com/
Watch the discussion - https://youtu.be/9UGNlB2n2W4?t=2631
This Week's Discussion Points...
Main stories
Have plans on paper in case of cyber-attack, firms told
Watch | Read
178K Invoicely records exposed in cloud data leak
Watch | Read
77% of employees leak data via ChatGPT, report finds
Watch | Read
SimonMed Imaging: 1.27M individuals affected by January 2025 cyberattack
Watch | Read
Hackers use court-themed phishing to deliver info-stealer malware
Watch | Read
Discord blamed a vendor for its data breach — now the vendor says it wasn’t hacked
Watch | Read
Capita fined £14m for cyber-attack which affected millions
Watch | Read
Cyber giant F5 Networks says government hackers had long-term access
Watch | Read | Tenable Blog FAQ
Deloitte’s AI report refund after using ChatGPT
Watch | Read
Extras
Security Champions Research Project – Layer 8
Watch | Read
HuFiCon 2025 (Cologne, Germany)
Watch | Read
Sarah Carty: A hacker walks into a meeting…
Watch | Read
Windows + L “Security Awareness Fail” (Resident Evil trailer clip)
Watch | Read
Local school data breach – Edulink login incident
Watch
Japan digital ID and Fujitsu controversy
Watch | Watch More
The Guardian launches secure messaging tool “CoverDrop”
Watch | Watch More | Read more
Thanks for reading! If you’ve spotted something interesting in the world of cyber this week — a breach, a tool, or just something a bit weird — let us know at hello@riskycreative.com. We’re always learning, and your input helps shape future episodes.
And finally… Local school data scare
Watch the discussion - https://youtu.be/9UGNlB2n2W4?t=3033
A local school had to report a potential data breach to the ICO after it emerged that a student may have accessed a teacher’s Edulink account, which contains pupil records and personal details. The school acted quickly, asking all staff to reset passwords and temporarily shutting down the system for parents and students.
The incident reportedly began when a student spotted a teacher’s password appearing briefly on screen as it was typed, then shared it with others. While there’s no confirmed evidence of data misuse, the event led the school to migrate logins to Google with MFA enabled to prevent it from happening again.
We spoke about how even small flaws like this show how fragile security can be in the real world. One moment of curiosity or convenience can expose a whole network. It’s a good reminder that basic controls, like MFA and privacy screens, are just as important in schools as they are in businesses.
∠The Awareness Angle
- Small mistakes, big consequences – A brief on-screen password was all it took to trigger an ICO report and system-wide reset.
- Education beyond the classroom – Incidents like this are teachable moments about accountability and respect for data.
- Simple safeguards – MFA, privacy screens, and quick reactions can prevent an embarrassing story from becoming a serious breach.