This week, the threats got personal. A fake Google Meet update that hands attackers the keys to your PC. An SMS that pinged Luke's phone at a hospital and turned out to be a live scammer at the other end of the line. A banking glitch that let strangers see your salary, your benefits, and your child payments. And a former government insider who allegedly walked out with the personal data of almost every living American on a thumb drive.

Oh, and if you've got an old iPhone? Stop reading this and go update it first.

The full episode is an hour well spent. Watch on YouTube, listen on Spotify, Apple Podcasts, or wherever you get your podcasts. Ant and Luke don't do death by PowerPoint, just straight talking cyber news for people who actually care about the human side of security.


Watch or listen to the episode today - YouTube | Spotify | Apple Podcasts

Visit riskycreative.com for past episodes, our blog, and our merch.

SANS is off to Vegas, baby!

If you work in security awareness and you've got something worth saying, this is the room to say it in.

The SANS Workforce Security & Risk Training Security Awareness and Culture Summit Call for Presentations is open right now, and the deadline is Friday 3rd April at 5pm ET. The summit itself runs on the 27th and 28th of August in Las Vegas at Caesars Palace, and it is the biggest gathering of security awareness, behaviour and culture professionals on the planet, now in its 13th year.

The summit is looking for talks, research and case studies that focus on shifting not just behaviour, but attitudes and beliefs around cybersecurity. If you've got something that's worked in your organisation, something you've learned the hard way, or a genuinely new idea worth sharing with thousands of your peers, they want to hear from you.

And if you've never presented at a conference before, this is a brilliant place to start. Mentoring is available for first time speakers, so you won't be thrown in at the deep end on your own.

If Vegas isn't on the cards, that's not a reason to miss out either. You can present remotely, so there's really no barrier to getting involved.

The deadline is the 3rd of April. Two weeks. Get your submission in.

Submit your proposal here. Get more information on the summit here.

This Week's Stories...

One click on a fake Google Meet update hands attackers the keys to your PC

Watch | Read

A phishing page disguised as a Google Meet update notice is being used to silently enroll victims' Windows PCs into an attacker controlled device management system. No malware, no stolen passwords, just a single click.

The page mimics a genuine Google Meet update prompt, but clicking the button triggers a built in Windows feature called MS Device Enrollment, the same legitimate tool your IT department would use to manage a company device. A victim who clicks through hands full remote control of their machine to the attacker, who can then silently install software, change settings, read files, or wipe the device entirely. Because the attack works entirely through the operating system, traditional antivirus tools have nothing to flag. There is no malicious file. No suspicious download. Nothing to scan for.

The best defence here is a human one. Why is Google Meet asking me to update through a webpage? Is this normal? Those two questions, asked out loud, stop this attack dead.

Awareness Angles

  • Your antivirus will not save you here - This attack uses a genuine Windows feature to hand over control of your machine. If your only defence is a security tool, you have a gap that only a questioning mindset can fill.
  • Knowing what normal looks like matters - Google Meet does not push updates through a webpage like this. Neither do most legitimate apps. If something prompts you to do something you have never seen before, that instinct to pause is worth listening to.
  • If you think you might have clicked it - Go to Settings, Accounts, Access Work or School. If you see anything you do not recognise, especially anything referencing sunlife-finance[.]com or esper[.]cloud, disconnect it immediately.



The SMS that pinged Luke's phone at a hospital turned out to be a live scammer on the other end of the line

Watch | Read

SMS blasters are portable rogue devices that mimic legitimate mobile towers, force nearby phones to downgrade to 2G, and deliver phishing text messages that bypass your carrier's spam filters entirely. They sound like something out of a spy thriller, but three people were convicted of using one on the London Underground just a few weeks ago.

This week it got personal. Luke received a suspicious SMS at a local hospital, appearing to come from Google, complete with a verification code he never requested and a support number to call if he didn't recognise the activity. Ant called the number, and the recording is in this week's episode. It wasn't a call centre with background noise and a script. It sounded like one person in a bedroom, running the whole operation solo, building trust quickly without ever asking for account details, and steering the conversation toward a password reset that would have handed over full account access if a real email address had been given. The whole attack is engineered around panic. Someone sees an unexpected verification code, worries their account has been compromised, calls the number in the message, reads out the recovery code that lands on their phone moments later, and it is over before they realise what has happened.

Awareness Angles

  • A text that appears to be from a legitimate sender is not proof that it is - SMS blasters spoof sender names, bypass carrier filters, and can drop a message into an existing thread with real previous messages from that contact. The name at the top means nothing.
  • The script relies on you being worried - The call is designed to feel urgent and helpful at the same time. If you receive an unexpected verification code and feel the urge to call a number in the message, stop. Find the real support number from the official website and call that instead.
  • Android users can disable 2G right now - Go to Settings, Network, and look for the option to avoid 2G networks. It is often switched off by default. Turning it on removes the mechanism these devices exploit entirely.



A whistleblower says a former government staffer walked out of the Social Security Administration with the personal data of almost every living American on a thumb drive

Watch | Read

The Social Security Administration's inspector general is investigating a whistleblower complaint alleging that a former DOGE software engineer left his role and took two tightly restricted government databases with him, with at least one stored on a personal thumb drive. One of those databases, NUMIDENT, contains Social Security numbers, dates of birth and parents' names for virtually every living American. He also allegedly claimed to have retained what he described as "god-level" access to SSA systems after leaving. The SSA and the former employee's lawyer have both denied wrongdoing, but investigations are open.

No firewall stops someone walking out of the door with a thumb drive. If the allegations are true, the failure here wasn't technical at all. It was human, procedural and organisational, and the lessons apply just as much to a small business as they do to a government agency.

Awareness Angles

  • Revoking access when someone leaves is a critical security control, not an admin task - When did you last audit who still has access to systems they no longer need?
  • Insider threats are harder to detect and harder to talk about than external attacks - but they are just as real and no security tool will catch them if the right processes aren't in place.
  • The ability to plug a personal device into a government machine should never have been possible - USB port restrictions are unglamorous, but this is exactly why they exist.



Starbucks disclosed a data breach this week affecting nearly 900 employees after attackers created fake login pages to steal their credentials

Watch | Read

Attackers gained access to Partner Central, Starbucks' internal HR platform, by building convincing imitations of the login page and harvesting employee credentials. Once in, they had access to names, Social Security numbers, dates of birth and financial account and routing numbers. The breach ran for 23 days before it was fully resolved, with Starbucks discovering the intrusion on the 6th of February but not fully removing the attackers until the 11th, leaving a five day window where they knew someone was in but couldn't get them out. Affected employees are being offered two years of free identity theft protection through Experian.

The reason this one is worth highlighting isn't the scale, it's the method. Fake login page, stolen credentials, walk straight in through the front door. It's one of the oldest tricks going and it still works, including against large well resourced organisations with dedicated security teams.

Awareness Angles

  • This attack didn't exploit a technical vulnerability, it exploited a human one - A convincing fake login page is often all it takes. Knowing what the real login page looks like and being suspicious of anything that asks for your credentials is a habit worth building.
  • Financial account and routing numbers are a different category of risk - Unlike an email address or even a password, these create a direct route to fraud. If you've been notified of this breach, contact your bank directly rather than just monitoring.
  • Third party platforms expand your attack surface whether you like it or not - Payroll, HR, pensions, training. Every platform your organisation uses is another login screen that can be faked. MFA on all of them isn't optional anymore.


Phish Of The Week

A legitimate Google email was used to deliver a phishing message, and the trick was hidden in plain sight

It's clever, but we do wonder how successful this will be

This one is genuinely clever. The attacker submitted a Google account recovery request, but instead of using a normal email address, they put the entire phishing message into the email address field. It looked something like this: unauthorized_order_of_bitcoin_965usd_on_gpay_if_not_you_call_08XXXXXXXXX@domain[.]com. Because it's formatted like an email address, it passed Google's form validation. Because it came from Google's own systems, it landed in inboxes looking completely legitimate.

The goal is to panic the recipient into calling the number, at which point the scam moves off email entirely and onto a phone call where the real manipulation happens. We've seen this pattern before with PayPal, and it's becoming a recurring technique. Get the victim to make contact on a different platform where there are no spam filters, no warnings and no safety net.
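The trick works because the scam text is, syntactically, a perfectly valid email address. A minimal sketch below shows how a typical form check waves it through; the regex and the stand-in domain are illustrative assumptions, not Google's actual validator.

```python
import re

# A simplified email pattern, similar in spirit to what many web forms use.
# (Hypothetical: real validators vary, and Google's exact check is not public.)
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def looks_like_email(address: str) -> bool:
    return EMAIL_PATTERN.fullmatch(address) is not None

# The entire scam message, packed into the local part of a hypothetical address:
scam = ("unauthorized_order_of_bitcoin_965usd_on_gpay_"
        "if_not_you_call_08XXXXXXXXX@domain.example")

print(looks_like_email(scam))  # True: syntactically valid, so it sails through
```

Because the string satisfies the grammar of an address, validation passes. Only a human reading the content notices that it is a message, not a mailbox.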

Awareness Angles

  • A legitimate sender does not mean a legitimate message - This email came from Google. The domain was real, the formatting was real, and it would pass most technical checks. The content is the only thing that gave it away.
  • When something tries to move you to a phone call, that's a red flag - Email, text, fake notification. The platform doesn't matter. If the end goal is getting you on a phone call to a number you didn't go looking for yourself, pause.
  • Panic is the whole mechanism - Unauthorised Bitcoin purchase, urgent action required, call now. Every word is designed to stop you thinking clearly. Slowing down for ten seconds is genuinely a security control.


Thank you to the Hoxhunt Threat Intelligence team for sharing this with us!

This Week's Talking Points...

Starbucks discloses data breach affecting hundreds of employees Watch | Read

Iran-linked hackers wipe data across 200,000 Stryker devices Watch | Read

Lloyds, Halifax and Bank of Scotland apps exposed strangers' transactions Watch | Read

One click on this fake Google Meet update can give attackers control of your PC Watch | Read

Google Messages may soon get built-in protection against SMS blasters Watch | Read

A whistleblower says a former DOGE staffer walked out of the SSA with Americans' data on a thumb drive Watch | Read

Apple rushes out patches for older iPhones and iPads against the Coruna exploit kit Watch | Read

Topics: ClickFix evolves with a new variant that bypasses Microsoft Defender Watch | Read

Topics: Darren Jones MP accidentally shares his passcode on camera Watch | Watch on Instagram

Topics: Tricking an AI scam caller Watch | Watch on Instagram

Topics: Apple MacBook Neo Touch ID ad Watch | Watch on TikTok

And Finally...

The scam caller that got asked for a Bolognese recipe


Watch

Someone received one of those relentless car finance cold calls this week and decided to have a bit of fun with it. From the start it became pretty clear the caller wasn't human, so they started pushing it. Ask it an off script question, see what happens. Eventually they got it to recite a full Bolognese recipe mid sales pitch, complete with the markdown formatting still intact, hashtags and all, read out loud in a completely earnest robotic voice.

It is funny, and it is worth sharing with people in your life who might not realise how convincing these AI calling systems have become. Because the flip side of that video is that plenty of people who received the same call had no idea they were talking to a machine. If you ask it whether it is human, it says yes. It gives a name. It says it is from Manchester. And that is enough to keep a lot of people on the line.

Show this to someone who needs to hear it. It is a lot easier to hang up on a robot when you know it is a robot.


This week on The Awareness Angle, attackers ditch malware and pick up the phone. Optimizely confirms a breach after a vishing attack, proving again that the helpdesk is now the attack surface.

We’ve got fake QR codes stuck on real parking meters, Samsung’s weather app quietly fingerprinting devices, and the UK fining Reddit over children’s data.

Plus mental health apps with serious security flaws, a researcher accidentally taking control of 7,000 robot vacuums, and a brilliant example of using AI to build interactive awareness training in minutes.

The Awareness Angle makes more sense in full. Watch on YouTube, listen on Spotify, Apple Podcasts, or wherever you get your podcasts. If you prefer your cyber news with context, challenge and a bit of straight talking, this one’s worth your time.

🎧 Listen on your favourite podcast platform - Spotify, Apple Podcasts and YouTube

Listen Now

Podcast · Risky Creative

This week's stories...

Optimizely confirms breach after vishing attack

Watch | Read

This wasn’t some cutting edge exploit. It was a phone call.

Attackers impersonated IT support, convinced staff to hand over SSO and MFA details, and got access to internal systems and CRM records. Optimizely says they didn’t escalate privileges or deploy backdoors, but the real story is how they got in.

We keep talking about this. MFA isn’t failing. People are being redirected around it.

If someone sounds credible, creates urgency, and claims to be internal support, most people don’t switch into “threat actor” mode. They switch into “helpful colleague” mode and that’s the gap.

For awareness teams, this is a great reminder about verification scripts, call back policies, and a chance to emphasise that support staff have permission to challenge authority.

The Awareness Angle

  • Authority Is a Shortcut – When someone claims to be internal IT, most people default to cooperation. Attackers know that.
  • MFA Can Be Socially Engineered – The control works, until someone convinces you to approve or share it.
  • Support Teams Need Different Training – Helpdesks and IT aren’t just defenders. They are targets. Treat them that way in your awareness strategy.

Fake QR codes stuck on real parking meters

Watch | Read

Cybercriminals placed fake QR stickers on 75 parking meters. Drivers scanned, landed on a convincing payment page, and almost handed over their details. No inbox. No malware. Just a sticker and a bit of time pressure.

When you’re paying for parking, you’re not thinking about threat modelling. You’re thinking about not getting a fine.

This is a brilliant story to use internally because it shows that the risk of QR codes hasn't gone away, and it must be getting results, or the cybercriminals wouldn't keep doing it.

The takeaway is simple. Slow down. Check the URL. Use the official app or go to the web page instead of scanning whatever is in front of you.
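For anyone building the awareness side of this, "check the URL" boils down to one question: is the hostname one you actually trust? A minimal sketch in Python, with hypothetical domains standing in for a real parking operator's:

```python
from urllib.parse import urlparse

# Hypothetical allow-list: the domains a genuine parking operator might use.
TRUSTED_HOSTS = {"parking.example", "city-meters.example"}

def is_trusted(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Accept the exact host or a genuine subdomain of a trusted host.
    return any(host == t or host.endswith("." + t) for t in TRUSTED_HOSTS)

print(is_trusted("https://parking.example/pay/123"))           # True
print(is_trusted("https://parking.example.pay-meter.xyz/"))    # False: lookalike
```

The lookalike case is the one that catches people out. A fraudulent page can put the real brand at the front of a hostname it controls, which is exactly why the check has to run from the right-hand end of the domain.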

The Awareness Angle

  • Context Changes Behaviour – People don’t apply the same caution in a car park as they do in their inbox.
  • Convenience Is the Bait – Quick pay shortcuts are designed to reduce friction. Attackers ride that same instinct.
  • Teach Verification, Not Fear – The behaviour to reinforce is simple. Check the URL. Use official apps. Slow down before entering details.

Mental health apps with millions of installs and hundreds of flaws

Watch | Read

Researchers found over 1,500 vulnerabilities across ten Android mental health apps, including AI therapy companions and CBT trackers. Collectively, they’ve been installed 14.7 million times.

People are using these apps at their lowest points. Logging thoughts. Sharing deeply personal struggles. And behind the scenes, insecure storage, weak session handling, and other issues are sitting there waiting to be abused.

This is not a “delete all apps” panic story. It’s a reminder that popularity isn’t the same as security. Nor is it about laying blame at the developers’ door. Maybe, with all of the AI coding tools available, it’s just become too easy to build something that isn’t secure.

If you’re in awareness, this opens up a bigger conversation with some important things to check. App permissions. Update frequency. Who built this thing. When was it last maintained.

The Awareness Angle

  • Sensitivity Should Raise Standards – The more personal the data, the higher the security bar should be.
  • Install Numbers Mean Nothing – Millions of downloads create false confidence.
  • Awareness Goes Beyond Email – App hygiene, updates, permissions and developer credibility are part of modern security literacy.

This Week's Discussion Points...

Ad Tech Firm Optimizely Confirms Data Breach After Vishing Attack Watch | Read

Fraudulent QR Codes Found on 75 Kelowna Parking Meters Watch | Read

Your Samsung Weather App Is a Fingerprint Watch | Read

UK Fines Reddit £14.47M for Using Children’s Data Unlawfully Watch | Read

Android Mental Health Apps With 14.7M Installs Found With Security Flaws Watch | Read

Instagram to Alert Parents if Teens Search for Self-Harm and Suicide Content Watch | Read

Security Flaw Allows Man to Accidentally Gain Control of 7,000 Robot Vacuums Watch | Read

Building Interactive Security Training With Gemini Watch

We Invented the Dacia Sandman and the Internet Fell for It Watch | Read

ClickFix Pop-Ups in the Wild Watch | Read

Samsung Privacy Display Feature Watch

Protect Yourself From This Latest Ahrefs Phishing Attack Watch

And finally...Building Interactive Security Training With Gemini

Watch

Luke shows how he used Google Gemini to build an interactive security awareness module in minutes.

With a simple prompt, Gemini generated a ClickFix training page in HTML, complete with explanations, red flags, and a knowledge check. He then refined the look and even built a retro-style phishing game with multiple levels and feedback.

No specialist tools. No complex setup. Just prompts and iteration.

The big takeaway is this. The barrier to creating engaging, customised awareness content is lower than ever. You still need to sense check, validate, and tidy things up, but as a rapid prototyping tool, it is seriously powerful.

This episode is packed with leaked customer data, another employee phishing story that turned into a full blown breach, and some awkward questions about how much we really trust our password managers.

This week on The Awareness Angle, ShinyHunters are back with more stolen data, Canada Goose is investigating after hundreds of thousands of customer records were leaked, and Eurail has confirmed traveller information is now up for sale on the dark web. Different brands. Same story. Collect loads of data. Store it. Hope it never gets out.

We also talk about a fintech firm that disclosed a breach after a single employee was phished. One inbox. One click. Real consequences. The human layer is still where this starts.

Then we get into password managers. What do they actually see? Where are the weak spots? And are we a bit too comfortable assuming the vault is untouchable?

All of that, and a few opinions from us along the way, in this week’s edition of The Awareness Angle.

The Awareness Angle is best served in full. Watch on YouTube, listen on Spotify, Apple Podcasts, or wherever you get your podcasts. If you like your cyber news with context, challenge, and a few raised eyebrows, this one’s for you.

Watch or listen to the episode today - YouTube | Spotify | Apple Podcasts

Visit riskycreative.com for past episodes, our blog, and our merch.

This Week's Stories...

Phishing Led Breach at Figure

Watch | Read

Fintech firm Figure has disclosed a data breach after an employee fell victim to a phishing email.

According to the company’s filing, the attack began with a successful phishing email that compromised an employee account. From there, the attacker gained access to internal systems and certain customer files.

Figure says there is currently no evidence that financial account credentials or customer funds were accessed. However, names, contact details and other personal information linked to customer accounts were exposed. Impacted individuals are now being notified.

ShinyHunters has reportedly claimed responsibility and says the breach is linked to a wider campaign targeting organisations using single sign on providers.

No zero day. No nation state. Just one convincing email.

The Awareness Angle

  • Phishing still works – Even in fintech, even with mature security teams, one well crafted email can open the door.
  • Access pathways matter – Inbox compromise is only step one. The real question is what that account can reach once inside.
  • Human risk is business risk – This started with a person. Controls, monitoring, and response speed determine how far it spreads.

AI Generated Passwords Might Not Be as Smart as You Think

Watch | Read

There’s been a bit of noise this week around AI generated passwords, and it’s worth paying attention to.

Researchers looked at passwords created by tools like ChatGPT, Claude and Gemini and found something interesting. They looked strong. They had symbols, numbers, upper and lower case. They passed basic strength tests. But they weren’t truly random.

Because large language models generate likely patterns, not true entropy, some passwords followed very similar structures. In some cases, near identical formats were repeated across tests. That means an attacker who understands how these models tend to construct strings could reduce the guesswork significantly.

It’s not that AI is useless. It’s just not built to be a cryptographic random number generator. So, if you’ve ever asked a chatbot to “give me a strong password”, it might be worth changing it.
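If you want actual randomness rather than model-shaped plausibility, use a source built for it. A minimal sketch using Python's standard secrets module, which draws from the operating system's cryptographic random number generator:

```python
import secrets
import string

# Draw each character independently and uniformly from the full printable set,
# using the OS's cryptographic RNG rather than a language model's "likely" output.
ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def random_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # e.g. a 20 character string, different every run
```

Each character is drawn uniformly from 94 symbols, so a 20 character password carries roughly 131 bits of entropy, with no structure for an attacker to learn. A password manager does exactly this under the hood, which is why it remains the better tool for the job.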

The Awareness Angle

  • Complex looking isn’t the same as secure – If something follows a pattern, attackers can learn that pattern.
  • AI generates probability, not randomness – That works brilliantly for language. Not so brilliantly for passwords.
  • Don’t outsource security decisions to convenience – Use a password manager, a long passphrase, or passkeys. Let tools designed for randomness handle randomness.

Infostealer Malware Now Targeting OpenClaw Secrets

Watch | Read

We spoke more than once over the past few weeks about OpenClaw and the rise of agent based AI tools. This week, that story moved on yet again.

Security researchers have identified the first real world case of infostealer malware specifically harvesting OpenClaw configuration files. Not just browser passwords. Not just cookies. But API keys, authentication tokens and private cryptographic material tied to AI agents.

The important bit here is this.

People are wiring these agents into email, apps, local files and workflows. They are giving them memory. They are giving them access. And that means a single malware infection can now expose not just accounts, but the operational identity of someone’s AI assistant.

This is not a futuristic attack. It is infostealer malware doing what infostealers do. It just found a new goldmine of data sitting locally on machines.

AI agents are quickly becoming high value identity hubs.

The Awareness Angle

  • AI agents centralise access – Email, tokens, apps and history all in one place makes them incredibly powerful, and incredibly attractive to attackers.
  • Malware evolves fast – Infostealers are not targeting “AI” as a concept. They are simply harvesting files that contain keys and secrets. AI tools just happen to store lots of them.
  • Experimentation needs guardrails – Curiosity is good. But when employees plug new tools into core systems without visibility, risk expands quietly.

Eurail and Canada Goose – Contact Data Still Has Teeth

Watch | Read

Two very different brands this week, same underlying issue.

Eurail has confirmed that stolen traveller data is now being offered for sale online. The data includes names, email addresses, country of residence and booking details. Around the same time, Canada Goose began investigating claims that roughly 600,000 customer records were leaked, including names, email addresses, phone numbers and mailing addresses.

In both cases, you see the familiar reassurance. No payment data accessed. But if you know someone recently booked travel or bought something expensive, you do not need their card number. You just need enough context to send a believable message. “Problem with your booking.” “Issue with your delivery.” “Click here to avoid cancellation.”

That is where the real risk sits. Follow on phishing, smishing and impersonation campaigns that feel legitimate because they are built on real events.

The Awareness Angle

  • Context is leverage – Real booking or purchase data makes phishing dramatically more convincing.
  • Contact data is currency – Names, emails and phone numbers are more than enough to fuel targeted fraud.
  • The second wave matters – The breach itself is often only the start of the story.

This week's discussion points...

Main Stories

73,000+ Patients Hit in Arizona Urology Data Breach Watch | Read

Eurail Says Stolen Traveller Data Is Now for Sale Watch | Read

Figure Discloses Breach After Employee Phishing Attack Watch | Read

Canada Goose Investigates 600,000 Customer Record Leak Watch | Read

ShinyHunters Claims CarGurus Breach Watch | Read

US Plans Portal to Bypass Content Bans Watch | Read

Vulnerabilities Found in Popular Password Managers Watch | Read | Read (Reddit discussion)

Infostealer Malware Targeting OpenClaw Secrets Watch | Read

AI Generated Passwords May Be Predictable Watch | Read

Extras

TikTok – Review Scam News Clip Watch | Watch on TikTok

And Finally...Online Review Blackmail Scam Hits Small Business

Watch | Watch on TikTok

An ITV News clip highlighted a small business owner who was targeted with a different kind of scam. Criminals demanded payment, threatening to flood his company with fake one star reviews if he refused. They followed through.

Dozens of negative reviews appeared online, damaging his rating and threatening his livelihood. Instead of paying, he worked with Google to challenge the fake reviews. Eventually, the attackers stopped and moved on.

It is a reminder that not all cyber attacks involve malware or data theft. Sometimes the weapon is reputation.

The Awareness Angle

  • Reputation is attack surface – Reviews, ratings and search results can be manipulated and weaponised. Your digital presence is part of your security footprint.
  • Panic is the pressure point – Scammers rely on urgency and fear. The goal is to trigger a quick payment before you think clearly.
  • Do not reward the behaviour – When there is no financial return, attackers often move on to easier targets. Reporting and persistence matter.

Thanks for reading! If you’ve spotted something interesting in the world of cyber this week, a breach, a tool, or just something a bit weird, let us know at hello@riskycreative.com. We’re always learning, and your input helps shape future episodes.

Ant Davis and Luke Pettigrew write this newsletter and podcast.

The Awareness Angle Podcast and Newsletter is a Risky Creative production.

All views and opinions are our own and do not reflect those of our employers.

This week on The Awareness Angle, 70,000 government ID images are exposed in a Discord age verification breach, staff data is hit at the European Commission, and supplier fallout ripples out to Volvo Group after a third party incident. More data. More dependency. More risk.

We also cover Apple’s emergency zero day patch already being exploited in the wild, a devastating AI deepfake investment scam that cost an 82 year old nearly £200,000, and fresh concerns around autonomous AI agents expanding enterprise attack surfaces faster than governance can keep up.

On top of that, we get into the backlash around Ring’s Super Bowl advert and surveillance partnerships, why some organisations are banning Notepad++ instead of simply patching it, and how email bombing is still being used to quietly bury real account compromise in a flood of noise.

All of that, and a few strong opinions along the way, in this week’s edition of The Awareness Angle.

The Awareness Angle is best served in full. Watch on YouTube, listen on Spotify, Apple Podcasts, or wherever you get your podcasts. If you like your cyber news with context, challenge, and the occasional raised eyebrow, this one’s for you.

Watch or listen to the episode today - YouTube | Spotify | Apple Podcasts

Visit riskycreative.com for past episodes, our blog, and our merch.

This Week's Stories...

Discord Faces Backlash After Age Verification Breach

Watch | Read

Around 70,000 government issued ID images were exposed after a third party provider used for age verification was compromised. These were not usernames. Not email addresses. Actual passport and driving licence images.

This is where the age verification debate gets uncomfortable.

We said on the podcast that this is the trade off problem in real time. If platforms require more sensitive data to prove age, the impact of failure increases massively. And crucially, it is not just about trusting the platform. It is about trusting who they trust.

This was not Discord’s core infrastructure being breached. It was a supplier in the chain. But to the user whose passport is now exposed, that distinction does not matter.

Searches for Discord alternatives reportedly spiked after the story broke. That is what trust erosion looks like.

The Awareness Angle

  • More Data, More Risk – The more sensitive the data collected, the higher the impact if breached.
  • Third Parties Matter – Your risk extends to every supplier in the chain.
  • Trust Has a Cost – Safety controls must not create bigger privacy problems.

European Commission Discloses Staff Data Breach

Watch | Read

The European Commission confirmed a breach affecting a system used to manage staff mobile devices. Personal data such as names and contact details may have been accessed. There is currently no indication that classified systems were compromised.

The bigger issue is what happens next.

Internal directories and HR data are high value targets. Once exposed, they fuel phishing, impersonation and social engineering.

Containment reportedly happened within hours. But the exposure still matters.

The Awareness Angle

  • Staff Data Is High Value – Internal directories and HR data are prime targeting fuel.
  • Breaches Enable Follow On Attacks – Exposure often leads to phishing and impersonation.
  • Compliance Is Not Immunity – Even major institutions remain attractive targets.

Volvo Group Impacted by Conduent Supplier Breach

Watch | Read

Volvo Group has been named among organisations impacted by a cyberattack at IT services provider Conduent.

This is another reminder that your organisation’s risk surface is bigger than your own firewall.

Conduent provides back office services such as document processing and administrative support. When a service provider like that is breached, the impact cascades outward. One breach can affect dozens, sometimes hundreds, of downstream organisations.

We have said it before, but this is third party concentration risk in action. If one supplier services many large brands, the blast radius expands dramatically.

Volvo is not alone here. And that is the point.

The Awareness Angle

  • Third Party Risk Is Shared Risk – Your exposure includes your suppliers.
  • One Breach, Many Victims – Service providers create amplified blast radius.
  • Supply Chain Visibility Matters – Know who holds your data, and how it is protected.

Apple Fixes Actively Exploited Zero Day

Watch | Read

Apple released emergency updates to patch a zero day vulnerability described as being used in “extremely sophisticated” attacks.

When a vendor confirms exploitation is already happening, patching becomes urgent.

These flaws are rarely theoretical. They are used in targeted campaigns. Targeted does not mean rare. It means deliberate.

The Awareness Angle

  • Zero Days Are Real World – These are not theoretical flaws. They are exploited.
  • Targeted Does Not Mean Safe – Sophisticated attacks still affect everyday users.
  • Update Culture Matters – Fast patching is still one of the strongest defences.

82 Year Old Loses £200k in AI Deepfake Investment Scam

Watch | Read

An 82 year old grandmother lost nearly £200,000 after seeing what appeared to be a trusted doctor promoting an investment opportunity in a professional looking video.

It was AI generated.

The scam did not rely on broken English or obvious red flags. It relied on authority bias, emotional manipulation, and realism. Conversations continued over Messenger. Funds were moved into cryptocurrency. The emotional driver was securing care for her autistic grandson.

We said this on the show. This is not clumsy phishing. This is AI realism combined with psychology.

One comment we discussed summed it up well. It is easy to look at stories like this and think gullible old people. But the speed at which AI is improving should make all of us pause. The bar for deception is rising quickly.

The Awareness Angle

  • Trust Can Be Faked – Familiar faces are no longer proof.
  • Crypto Is Hard to Reverse – Once funds move, recovery is unlikely.
  • Emotion Drives Decisions – Scammers exploit care, not just greed.

This Week's Discussion Points...

🔎 Breach Watch

Discord Age Verification Breach Exposes 70,000 Government IDs Watch | Read

European Commission Discloses Staff Data Breach Watch | Read

Volvo Group Impacted by Conduent Data Breach Watch | Read

Apple Fixes Zero Day Used in Highly Sophisticated Attacks Watch | Read

Our Org Is Banning Notepad++ After Supply Chain Concerns Watch | Read

📰 News

82 Year Old Loses £200k in AI Deepfake Doctor Investment Scam Watch | Read

Reddit discussion: Read

Amazon Distances Itself From Flock Safety After Ring Super Bowl Backlash Watch | Read

How to Recognise a Deepfake, and Why It Is Getting Harder Watch | Read

OpenClaw Integrates VirusTotal After Enterprise Risk Warnings Watch | Read

💬 Discussion & Extras

Cloudflare “ClickFix” Style Fake Verification Page Watch | Read

Email Bomb Used to Hide a Real Security Alert Watch | Read

The CivDiv No.1 TikTok Account Recommendation Watch | TikTok

Most Common 4 Digit PIN Numbers Visualised Watch

QR Code Binder for Child Safe YouTube Access Watch

And finally...LinkedIn AI Caricature Trend Raises Oversharing Questions

Can you guess my password from this?

Watch | Read

A new trend circulating on LinkedIn has people using AI to generate caricature style action figure versions of themselves. These posts often include job titles, hobbies, favourite sports teams, pets, cities, personality traits and sometimes even family details.

The trend itself feels creative and harmless. But a post this week from Matthew Jary highlighted something worth pausing on. When you scroll through a feed full of these, you start learning a surprising amount about complete strangers.

Individually, each detail seems insignificant. Collectively, they form a profile.

Many of the attributes being shared mirror the kinds of prompts commonly used in password reset questions and social engineering attempts. First pet. Favourite team. Hometown. Employer.

But here’s the alternative view.

Is this actually an issue?

Most of us openly share our job titles, employers, locations and interests on LinkedIn every day. That is the whole point of the platform. So is this genuinely risky, or is this just the latest “security people hate fun” moment? Is this simply anti bandwagon commentary?

Maybe.

The difference might not be the individual data point. It might be the packaging. When everything is neatly summarised in one visual snapshot, it lowers the effort required to profile someone.

This is not about banning fun. It is about understanding aggregation. Attackers do not always need a breach when information is voluntarily shared and easily searchable.

The risk is rarely one post. It is the accumulation.
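The aggregation point can be made concrete with a toy sketch. All of the names and attributes below are invented for illustration; the pattern is what matters: each post reveals one harmless fact, but merging them recreates the classic password reset question set.

```python
# Invented data: each dict is one fact gleaned from a separate public post.
SECURITY_QUESTIONS = {
    "first_pet": "What was the name of your first pet?",
    "favourite_team": "What is your favourite sports team?",
    "hometown": "In what city were you born?",
    "employer": "Who is your employer?",
}

def build_profile(posts):
    """Merge the facts scattered across many posts into one profile."""
    profile = {}
    for post in posts:
        profile.update(post)  # each post fills in a little more detail
    return profile

def answerable_questions(profile):
    """Which reset questions could now be answered from public posts alone?"""
    return [q for key, q in SECURITY_QUESTIONS.items() if key in profile]

posts = [
    {"employer": "Example Corp"},                              # job update
    {"first_pet": "Biscuit"},                                  # caricature post
    {"hometown": "Leeds", "favourite_team": "Leeds United"},   # match day post
]

profile = build_profile(posts)
print(len(answerable_questions(profile)))  # all four questions are now answerable
```

No single post gives an attacker anything. Three posts together answer every question.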

The Awareness Angle

  • Small Data Adds Up – Individual facts feel harmless. Combined, they become profile building fuel.
  • OSINT Is Powerful – Attackers do not need a database leak if the information is public.
  • Aggregation Changes Context – One detail is normal. A curated snapshot lowers the barrier for profiling.

Thanks for reading! If you’ve spotted something interesting in the world of cyber this week, a breach, a tool, or just something a bit weird, let us know at hello@riskycreative.com. We’re always learning, and your input helps shape future episodes.

Ant Davis and Luke Pettigrew write this newsletter and podcast.

The Awareness Angle Podcast and Newsletter is a Risky Creative production.

All views and opinions are our own and do not reflect those of our employers.

Supply Chain Hacks. Fake Encryption. Phones That Track You

This week on The Awareness Angle, a developer tool update chain gets quietly hijacked, ransomware actors claim access to airport systems, and law enforcement moves in on a major hacking forum, with questions over how much impact that will really have.

We also look at how phones can be tracked at the network level without apps or permission, why McDonald’s felt the need to call out terrible password habits, and how a chaotic extortion group is turning data breaches into deeply personal harassment campaigns. On top of that, Spain is moving to ban social media for under 16s, and questions are resurfacing about whether end to end encryption really means what people think it does.

All of that, and more, in this week’s edition of The Awareness Angle.

The Awareness Angle is best served in full. Watch on YouTube, or listen on Spotify or your favourite podcast platform to get the complete discussion and context.

🎧 Listen on your favourite podcast platform - Spotify, Apple Podcasts and YouTube

Listen Now

Podcast · Risky Creative

This week's stories...

Notepad++ update chain compromised

Watch | Read

Notepad++, a tool a lot of developers use without a second thought, was caught up in a supply chain attack that didn’t touch the code at all. Instead, attackers went after the update process. Between June and December 2025, a small number of users were redirected to malicious update files through the hosting infrastructure.

This wasn’t random. It looks deliberate and targeted, likely aimed at developers or organisations working on sensitive projects. The software itself was fine, but the trust people place in automatic updates was the weak point. Notepad++ has since moved hosting providers, tightened up how updates are verified, and confirmed that versions 8.8.9 and above are safe.

It’s one of those stories that feels uncomfortable because it hits a blind spot. We trust tools like this precisely because they are familiar and boring.

The Awareness Angle

  • The risk lived outside the app - The problem wasn’t what people installed, it was what they never see, the update mechanism.
  • Targeted still counts - You don’t need to hit everyone, just the right few people.
  • Choice brings exposure - Every extra tool adds convenience and risk, which is why organisations try to limit what gets installed.

Ransomware group claims access to airport systems

Watch | Read

A ransomware group is claiming it breached systems linked to Tulsa International Airport and has begun dumping internal files online as proof. The attackers say the data includes internal emails, employee IDs, passports, and financial documents. At the time of reporting, the airport has not publicly confirmed the breach and the leaked material has not been independently verified.

That uncertainty is part of the tactic. Modern ransomware groups do not just rely on encryption or extortion notes. They use public claims and data leaks to create pressure, force attention, and shape the narrative before facts are fully known. Airports are particularly exposed to this kind of pressure because disruption, even perceived disruption, carries immediate reputational and operational weight.

Verified or not, once claims and files are public, the human impact starts straight away.

The Awareness Angle

  • Pressure starts before proof - Publishing claims and documents is designed to trigger panic and rushed decisions.
  • Visibility increases impact - Highly visible organisations feel the reputational damage faster, even when details are unclear.
  • Pause is a defence - Calm, verification, and controlled communication matter more than speed in moments like this.

Your phone can be tracked without your permission, and most people do not realise it

Watch | Read

Most people think they understand how location tracking works. If an app does not have permission, or GPS is turned off, they assume their phone is no longer sharing where they are. This story shows that is not how it actually works.

Mobile networks can locate phones at the carrier level using systems originally built for emergency services. This sits below iOS and Android, which means your phone never asks you, and you never see it happening. It is not malware and it is not a bug. It is how mobile infrastructure has worked for years.

When we talked about this on the podcast, the bit that really landed was how normal this feels once you realise it has been there the whole time. The technology did not change. Our assumptions did.

The Awareness Angle

  • Permissions feel reassuring - Turning things off gives a sense of control, even when it does not change the outcome.
  • The real risk is invisible - Tracking can happen below apps and operating systems people interact with.
  • Assumptions shape behaviour - When beliefs are wrong, people take risks without realising it.

This Week's Discussion Points...

Notepad++ supply chain attack Watch | Read

Ransomware group claims access to airport systems Watch | Read

FBI seizes RAMP hacking forum Watch | Read

Lawsuit claims WhatsApp encryption is a lie Watch | Read

Spain announces social media ban for under 16s Watch | Read

Your phone can be tracked without your permission Watch | Read

Scattered Lapsus ShinyHunters extortion tactics Watch | Read

Ransomware attacks up 30 percent Watch | Read

Ant's mum targeted by follow up scam call Watch

McDonald’s calling out weak passwords Watch | Read

Getting your first job in cybersecurity Watch | Read

Real or phishing, shockingly bad campaign emails Watch | Read

And finally...McDonald’s calling out weak passwords, and it lands because it’s honest

Watch

McDonald’s Netherlands used Change Your Password Day to highlight something security teams have been saying for years. People choose passwords based on things they like, recognise, or can remember. BigMac, HappyMeal, McNuggets, and endless variations of them showed up tens of thousands of times in breached password data.

As we said on the show, this works because it doesn’t pretend people are suddenly going to behave like security professionals. It accepts reality and designs around it.

Predictability is the real problem. Swapping letters for numbers or adding a symbol feels clever, but attackers expect it. Tools try those combinations automatically. The habit hasn’t changed, even though the threat has.
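Just how predictable those tweaks are is easy to show. This is a minimal sketch, not any real cracking tool, but it demonstrates the principle: the substitutions people think are clever are just a small list that software enumerates exhaustively.

```python
from itertools import product

# Common "leetspeak" substitutions. Each string lists the characters a
# cracking wordlist would try in that position.
SUBS = {"a": "a@4", "e": "e3", "i": "i1!", "o": "o0", "s": "s$5"}

def leet_variants(word):
    """Yield every combination of common character substitutions for a word."""
    choices = [SUBS.get(ch.lower(), ch) for ch in word]
    for combo in product(*choices):
        yield "".join(combo)

variants = set(leet_variants("bigmac"))
print("b1gm@c" in variants)  # True: the "clever" variant is generated immediately
```

Six characters, two substitutable letters, nine total variants. A human feels like they made the password stronger; the tool tried all nine in a fraction of a second.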

What’s interesting is how transferable this idea is. Almost any organisation could do a version of this with their own language, products, acronyms, or in jokes. When people see themselves reflected in the message, it lands very differently.

The Awareness Angle

  • Familiar beats secure - People choose passwords that feel personal and memorable, not resilient.
  • Old advice lingers - Leetspeak and small tweaks still feel protective, even though they stopped working years ago.
  • Make it local - Campaigns are more effective when people recognise their own habits and language in the message.

Would you try this in your organisation? Let us know by getting in touch at hello@riskycreative.com

This week on The Awareness Angle, we cover hundreds of exposed Clawdbot and Moltbot AI agent gateways leaking credentials and private chats, a new malware service selling guaranteed phishing extensions through the Chrome Web Store, and sensitive government documents uploaded to ChatGPT by the acting head of the US cybersecurity agency.

We also look at Google rolling out stronger ransomware protections in Drive, France accelerating plans to ban social media for under 15s, and what recent incidents involving AI powered toys reveal about data exposure risks for children.

All of that, and more, in this week’s episode of The Awareness Angle.

The Awareness Angle is best served in full. Watch on YouTube, or listen on Spotify or your favourite podcast platform to get the complete discussion and context.

Watch or listen to the episode today - YouTube | Spotify | Apple Podcasts

Visit riskycreative.com for past episodes, our blog, and our merch.

Support the show with all new Awareness Angle merch. Stickers, notebooks, mugs, and bits that quietly say you care about people, not just passwords. Click here to visit the shop.

Just some of the exciting new merchandise you can buy!

This week's stories...

Hundreds of exposed Clawdbot gateways leave credentials and private chats exposed

Watch | Read

Security researchers have identified more than 900 exposed Clawdbot gateways online, caused by poor setup and insecure default settings. These exposed systems allowed access to private conversations, API keys, and other sensitive information.

Clawdbot, also known as Moltbot, is an AI agent designed to make work easier by remembering information and acting on a user’s behalf inside messaging apps. Because it runs continuously and stores context over time, mistakes in setup can quietly expose far more than people realise.

Incidents like this often happen without malicious intent. Tools are adopted quickly to save time, experiments move into daily use, and security steps are skipped under pressure. The result is exposure created by normal human behaviour, not bad actors.
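One practical counter to insecure defaults is a check that fails loudly before anything goes online. The sketch below is hypothetical (the setting names are invented, not Clawdbot's real configuration), but it shows the pattern: compare a config against known-bad defaults and refuse to pass until every one has been changed.

```python
# Hypothetical insecure defaults for an AI agent gateway. The keys are
# illustrative only; the audit pattern is the point.
INSECURE_DEFAULTS = {
    "bind_address": "0.0.0.0",  # listening on every network interface
    "auth_required": False,     # no authentication on the gateway
    "tls_enabled": False,       # traffic sent in plaintext
}

def audit_config(config):
    """Return findings for every setting still matching an insecure default.

    A missing key counts as a finding too, because absence means the
    insecure default is still in effect.
    """
    findings = []
    for key, bad_value in INSECURE_DEFAULTS.items():
        if config.get(key, bad_value) == bad_value:
            findings.append(f"{key} is still set to insecure default {bad_value!r}")
    return findings

config = {"bind_address": "0.0.0.0", "auth_required": True}
for finding in audit_config(config):
    print(finding)  # two findings: open bind address, and tls_enabled left unset
```

Run as a pre-deployment gate, a check like this turns "we skipped the security steps" from a silent exposure into a visible blocker.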

The Awareness Angle

  • People prioritise speed and convenience – Security steps are often skipped to get work done
  • Assumptions replace checks – If a tool feels helpful and familiar, risk is easily overlooked
  • Psychological safety matters – People need to feel safe admitting mistakes before exposure grows

New malware service pushes phishing extensions into the Chrome Web Store

Watch | Read

Researchers have uncovered a new malware service called Stanley that allows criminals to create phishing browser extensions and successfully publish them to the Chrome Web Store. These extensions are designed to overlay legitimate websites with fake content while keeping the real web address visible, making them difficult to spot.

The service is sold in tiers, offering features such as silent installation, custom branding, and a management panel for attackers. Because the extensions pass official store checks, users are more likely to trust them, install them, and continue using them without suspicion.

This type of attack relies less on technical exploitation and more on habit. People install extensions to save time, solve small problems, or boost productivity, often without revisiting what access those extensions still have later on.

The Awareness Angle

  • Trust is built on familiarity – Official stores and recognisable browsers lower people’s guard
  • Convenience drives behaviour – Small productivity gains can outweigh perceived risk
  • Unused access is rarely questioned – Extensions often stay installed long after they are needed

France moves to fast track a social media ban for under 15s

Watch | Read

France has announced plans to fast track a ban on social media use for children under 15, with the aim of having new rules in place before the next school year. The proposal includes stricter age verification and builds on existing restrictions around mobile phone use in schools.

The move follows similar action in Australia, where millions of under 16 social media accounts have already been removed. French officials have acknowledged that age limits can be bypassed, but see this as an important first step in reducing exposure to online harm and emotional manipulation.

Rather than focusing on individual behaviour, the approach shifts responsibility toward platforms and regulation, recognising that expecting children to self regulate in highly persuasive online environments has not worked.

The Awareness Angle

  • Children are not the problem – Platforms are designed to capture attention, not protect wellbeing
  • Rules fill the gaps left by design – Regulation steps in where controls and safeguards fall short
  • Adults set the environment – Safety improves when responsibility moves away from the user

US cybersecurity chief uploaded sensitive government documents to ChatGPT

Watch | Read

The acting head of the US Cybersecurity and Infrastructure Security Agency (CISA) uploaded internal government documents marked “for official use only” into ChatGPT. The uploads triggered automated warnings, and an internal review is now assessing any potential impact.

The documents were described as internal but unclassified, and the use of ChatGPT was said to be short term and previously approved as an exception. Following the incident, multiple staff members were suspended from accessing classified systems while investigations continue.

The story highlights how quickly everyday tools can blur boundaries at work, especially when people are under pressure to move fast or solve problems efficiently.

The Awareness Angle

  • People default to familiar tools – Convenience often overrides caution
  • Exceptions create confusion – One off permissions weaken shared understanding of risk
  • Hierarchy does not prevent mistakes – Senior roles are not immune to everyday human error

Discussion Points...

ShinyHunters swipes right on 10M records in alleged dating app data grab Watch | Read

US cybersecurity chief uploaded sensitive documents to ChatGPT Watch | Read

What is Clawdbot and why it matters Watch | Read

Hundreds of exposed Clawdbot gateways leave data vulnerable Watch | Read

The AI agent craze is turning into a security nightmare Watch | Read

Phishing malware sold as Chrome extensions Watch | Read

Google Drive adds better ransomware protection Watch | Read

France moves to ban social media for under 15s Watch | Read

Exposed admin panel found in AI toy Watch | Read

Awareness, spotting phishing and AI content Watch | Read

Misleading breach headlines and fake panic Watch | Read

Reverse image search exposing fake profiles Watch | Read

Gift card scam warnings appearing in stores Watch | Read

Covering phone cameras as a security habit Watch | Read

Free WiFi on flight QR code prank Watch | Read

TikTok Argos MacBook discount scam Watch | Read

Real world phishing and family account compromise Watch

And finally...This Week I Messed Up!

I messed up and didn't protect those closest to me!

Watch

This week, the story that hit closest to home wasn’t a breach headline or an AI scare. It was my mum.

Her email account was compromised, no two factor authentication, a password she’d used for years, and attackers quietly sending gift card scam emails to people she trusts. I only spotted it once messages started disappearing from her inbox.

When I got proper access, the reason was obvious. The attackers had set up inbox rules to automatically mark messages as read, move them into hidden folders, and silently redirect copies to a Gmail account they controlled. From the outside, everything looked normal.
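The hiding pattern described here (mark as read, move to a hidden folder, forward a copy externally) is detectable. This is a hedged sketch assuming the rules have been exported to simple dictionaries; the field names are illustrative, not any email provider's real API.

```python
# Domains this mailbox is allowed to auto-forward to. Anything else is
# treated as suspicious. "example.com" is a placeholder.
TRUSTED_DOMAINS = {"example.com"}

def suspicious_rules(rules):
    """Flag rules that hide mail (mark read + move) or forward it externally."""
    flagged = []
    for rule in rules:
        hides_mail = rule.get("mark_as_read") and rule.get("move_to_folder")
        forward = rule.get("forward_to", "")
        external = forward and forward.split("@")[-1] not in TRUSTED_DOMAINS
        if hides_mail or external:
            flagged.append(rule["name"])
    return flagged

rules = [
    # A normal user-created rule: moves newsletters, nothing else.
    {"name": "Newsletters", "move_to_folder": "Reading"},
    # The attacker pattern: near-invisible name, hides mail, forwards out.
    {"name": ".", "mark_as_read": True, "move_to_folder": "RSS Feeds",
     "forward_to": "attacker123@gmail.com"},
]
print(suspicious_rules(rules))  # only the hide-and-forward rule is flagged
```

Even without tooling, the same logic works as a manual checklist: open the rules screen, and question anything that marks mail as read, files it somewhere obscure, or forwards it to an address you don't recognise.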

I spend my life talking about security awareness, and I still hadn’t locked down the person closest to me.

The Awareness Angle

  • Inbox rules are a red flag – attackers often use filters and redirects to hide their activity and stay undetected
  • No 2FA is still a big risk – even “quiet” email compromises can run for days without being noticed
  • Check your family, not just your workplace – the people closest to you are often the least protected

It’s a reminder that security isn’t just an organisational problem. It’s personal. Take five minutes this week to check in on someone you care about.

Thanks for reading! If you’ve spotted something interesting in the world of cyber this week, a breach, a tool, or just something a bit weird, let us know at hello@riskycreative.com. We’re always learning, and your input helps shape future episodes.

Ant Davis and Luke Pettigrew write this newsletter and podcast.

The Awareness Angle Podcast and Newsletter is a Risky Creative production.

All views and opinions are our own and do not reflect those of our employers.

This week on The Awareness Angle, we cover a ransomware attack at Ingram Micro that disrupted a major part of the global IT supply chain, alongside a breach at Grubhub where customer, driver, and merchant data was accessed through a third party support system. We also look at a data breach at the Minnesota Department of Human Services affecting nearly 304,000 people, and a UK secondary school forced to close after a cyber attack knocked critical systems offline.

In the news, Microsoft issued emergency out of band Windows updates after Patch Tuesday caused shutdown and Cloud PC issues, while researchers uncovered malicious browser extensions designed to crash browsers and push fake fixes. We also discuss reports of criminals selling ready made voice phishing kits, a new EU vulnerability database launched as an alternative to CVE, and a phishing campaign targeting LastPass users with fake security alerts.

We round out the episode with policy and platform updates, including the UK government consulting on banning social media for under 16s, and TikTok finalising a deal to split its US operations into a new joint venture.

The Awareness Angle is best served in full. Watch on YouTube, or listen on Spotify or your favourite podcast platform to get the complete discussion and context.

Watch or listen to the episode today - YouTube | Spotify | Apple Podcasts

Visit riskycreative.com for past episodes, our blog, and our merch.

Support the show with all new Awareness Angle merch. Stickers, notebooks, mugs, and bits that quietly say you care about people, not just passwords.

Just some of the stuff you can buy!

This week's stories...

Voice phishing kits sold as a service

Watch | Read

Cybercriminals are now selling ready made voice phishing kits that let almost anyone run convincing phone scams. These kits bundle scripts, call flows, dashboards, and in some cases AI generated voices that sound like banks or internal IT teams. This is not someone freelancing a scam call. This is packaged, repeatable, and designed to scale.

The kits guide attackers through the entire interaction. Who to call. What to say. When to apply pressure. Victims are coached into handing over credentials, one time passcodes, or approving actions that lead to account access. It is phishing, just delivered over the phone instead of email.

The problem is that phone calls still get a free pass. Many organisations have trained people to be cautious with links and emails, but far fewer have clear rules for handling unexpected calls. Attackers are leaning into that gap hard.

This is social engineering getting easier and more normal. And it is aimed squarely at busy humans.

The Awareness Angle

  • Vishing is now off the shelf – Anyone can buy the tooling
  • Calls still bypass suspicion – The channel carries trust
  • Call back breaks the scam – Verification beats confidence

CrashFix browser attacks push fake fixes

Watch | Read

CrashFix is a browser based attack where a malicious extension deliberately crashes the browser, then tells the user they need to install a fix. That fix is malware. Nothing is broken. The crash is the whole point.

After the browser fails, users are shown clear, step by step instructions telling them what to do next. Run this. Install that. It works because this is exactly how people normally deal with software problems. Get it working and carry on.

This is not a clever technical exploit. It is frustration as a delivery mechanism. When something breaks, people stop thinking about risk and start thinking about recovery. CrashFix is designed to catch people in that moment.

The Awareness Angle

  • The crash is intentional – Failure is the lure
  • Fixing mode bypasses caution – Urgency beats scepticism
  • Running commands is a red flag – Pause before you act

UK secondary school forced to close after cyber attack

Watch | Read

A secondary school in England was forced to close after a cyber attack took out its IT systems. There was no big data breach story and no suggestion that grades were tampered with. The school closed because it could not function safely without its systems.

Too much failed at once. Attendance, communications, access control, and safety related systems were all affected. That only happens when everything is tied together. Systems that should be dull, isolated, and resilient were clearly part of the same environment, so when one thing went down, everything followed.

This is what happens when convenience drives design. Things get connected because it is easier, cheaper, or sold as “modern”, not because it makes sense. Then something breaks, and suddenly the impact is far bigger than anyone expected.

The Awareness Angle

  • Not everything should be connected – Convenience quietly increases risk
  • Availability is a safety issue – Offline systems force closure
  • Design decisions matter – Architecture shapes impact

This week's discussion points...

Ingram Micro ransomware attack knocks global IT supply chain offline Watch | Read

Grubhub breach exposes customer, driver, and merchant data via third party support system Watch | Read

Minnesota Department of Human Services breach exposes demographic records of nearly 304,000 people Watch | Read

UK secondary school forced to close after cyber attack disrupts systems Watch | Read

Microsoft releases emergency Windows updates after Cloud PCs fail to shut down properly Watch | Read

Criminals are now selling ready made voice phishing kits Watch | Read

Malicious Chrome extension crashes browsers to push fake “fix” in ClickFix variant Watch | Read

EU launches new vulnerability database as alternative to CVE Watch | Read

Phishing campaign targets LastPass users with fake security alerts Watch | Read

Government consults on banning social media for under-16s in the UK Watch | Read

TikTok seals deal to split US app into new joint venture, keeps platform running in America Watch | Read

AI snowstorm videos show the current state of the internet Watch

Five ways to spot AI generated accounts on social media Watch

And finally...Action Fraud becomes “Report Fraud”, but the experience still breaks trust

Ant and Luke discuss Report Fraud's account issues

Watch

The UK’s fraud reporting service has been rebranded from Action Fraud to Report Fraud. The new name is clearer and does exactly what it says. The problem is what happens next.

When users try to sign in or create an account, they are redirected to a completely different domain to complete the process. For some people, antivirus tools flag that page as suspicious or phishing. That puts users in an impossible position. They are doing the right thing by reporting fraud, and the experience immediately tells them not to trust it.

This is how trust gets damaged. Not by attackers, but by confusing design. People are told to be cautious about links and domains, then asked to ignore their own instincts when it really matters. Many will simply abandon the report.

If we want people to report scams and cybercrime, the process has to feel safe and consistent all the way through.

The Awareness Angle

  • Trust is fragile – Mixed signals stop people acting
  • Design shapes behaviour – Confusion leads to drop off
  • Security advice must align – We cannot teach one thing and do another

Thanks for reading! If you’ve spotted something interesting in the world of cyber this week, a breach, a tool, or just something a bit weird, let us know at hello@riskycreative.com. We’re always learning, and your input helps shape future episodes.

Ant Davis and Luke Pettigrew write this newsletter and podcast.

The Awareness Angle Podcast and Newsletter is a Risky Creative production.

All views and opinions are our own and do not reflect those of our employers.

This week on The Awareness Angle, we cover a busy mix of breaches, claims, and security moments that blurred the line between what happened and what people thought happened. Instagram password reset emails caused widespread confusion, ransomware groups made high-profile breach claims without releasing data, and a well-known hacking forum found itself dealing with a leak of its own.

We also look at cyber incidents with real-world impact, including attacks linked to drug smuggling at major European ports and attempted intrusions targeting national energy infrastructure. On the technology side, we discuss Microsoft’s latest Patch Tuesday, growing control over AI tools on work devices, and why some organisations want clearer choices around when those tools appear.

The episode also explores emerging questions about identity and trust, from reused passwords and long-lived leaked data to eye-scanning technology promoted as a way to prove you are human online.

The Awareness Angle is best served in full. Watch on YouTube, or listen on Spotify or your favourite podcast platform to get the complete discussion and context.

Watch or listen to the episode today - YouTube | Spotify | Apple Podcasts

Visit riskycreative.com for past episodes, our blog, and our merch.

This week's stories...

Instagram password reset emails and data leak claims

Watch | Read

A large number of Instagram users reported receiving password reset emails they did not request. Meta confirmed it fixed an issue that allowed an external party to trigger legitimate password reset emails at scale and said there was no breach of Instagram systems. According to Meta, user accounts were not compromised, and the emails were caused by abuse of a feature rather than a hack.

At the same time, security firm Malwarebytes reported that data linked to around 17.5 million Instagram accounts was being advertised online. The dataset is said to include usernames, email addresses, phone numbers, and, in some cases, physical addresses. Meta has denied any link between the password reset emails and the data, stating that it likely came from older scraping activity rather than a new Instagram breach.

While there is no public evidence tying the two events together, the timing created widespread confusion. Unexpected security emails combined with reports of leaked data looked and felt like a breach to many users, regardless of the technical explanation.

The Awareness Angle

  • Timing shapes perception - When alerts and leak claims land together, people assume the worst
  • Users see impact, not root cause - Bug or breach matters less than how it feels
  • Old data still circulates - Historic scraping can resurface and fuel new scams

Ports hacked to support drug smuggling, hacker jailed

Watch | Read

A hacker has been sentenced to 7 years in prison for cyberattacks that disrupted operations at the Port of Rotterdam and the Port of Antwerp. The attacks took place between 2021 and 2023 and involved unauthorised access to container logistics systems.

Prosecutors said the access was used to manipulate the release and movement of shipping containers, enabling organised crime groups to collect drug shipments without detection. The case highlights how cyber access can directly enable real-world criminal activity rather than just data theft.

Authorities said the sentence reflects the seriousness of targeting critical infrastructure and the wider risks posed to safety, trade, and national security.

The Awareness Angle

  • Cyber enables physical crime - Access to systems can unlock real-world harm
  • Logins are high-value targets - Human access often matters more than malware
  • Impact goes beyond IT - Disruption affects supply chains and public safety

Microsoft may allow Copilot to be uninstalled on managed devices

Watch | Read

Microsoft is planning to give IT administrators the option to uninstall Copilot from managed Windows devices, rather than just hide or disable it. The change would apply to enterprise-managed devices and address concerns about control, data handling, and readiness.

The move gives organisations more choice over when and how AI tools appear on work devices, particularly as teams continue to work through policies, training, and acceptable use. Copilot remains positioned as a productivity feature, but many organisations are still deciding how to introduce it safely.

The Awareness Angle

  • Control matters - IT teams want clear choices, not forced rollouts
  • AI affects behaviour - Tools change how people work, not just systems
  • Readiness comes first - Introducing AI before guidance creates risk

AI isn't selling - is interest waning?

Watch | Read

Despite heavy investment in AI-powered PCs and tools, some manufacturers are reporting weaker-than-expected demand. Executives at Dell said consumers are not buying devices for AI features, and that AI-focused messaging often creates confusion rather than clarity.

The comments suggest a gap between how vendors promote AI and how everyday users understand its value. While AI continues to be embedded across products, its presence alone does not appear to be driving purchasing decisions.

This comes as organisations continue to balance innovation with concerns about data use, trust, and whether people actually want AI involved in their daily work.

The Awareness Angle

  • AI does not automatically sell - Features need clear, practical value
  • Confusion slows adoption - Unclear benefits create hesitation
  • Trust still matters - Data questions shape acceptance

This week's discussion points...

Everest Ransomware Claims Nissan Data Breach – Watch | Read

Spanish Energy Giant Endesa Reports Major Customer Data Breach – Watch | Read

Instagram Password Reset Emails – Watch | Read

Breachforums Data Leak – Watch | Read

Microsoft Patch Tuesday – Watch | Read

Microsoft Copilot Removal Option – Watch | Read

AI PCs Not Selling – Watch | Read

Hacker Jailed for Attacks on Rotterdam and Antwerp Ports – Watch | Read

Poland Cyber Attack on Energy Infrastructure Stopped – Watch | Read

Scam Email Knows My Password – Watch | Read

Worldcoin and Eye Scans for Human Verification – Watch | Read

And finally...Scanning your eyes to prove you are human, Sam Altman’s Orb

Watch | Read

This one is proper Black Mirror territory, because it takes a real problem (bot spam, fake accounts, AI-generated nonsense everywhere) and answers it with something that feels way too permanent. Worldcoin’s Orb scans your iris to create a unique digital identifier, a World ID: basically a way to prove you are a real human online. In some places, they even pay you in crypto to do it.

The pitch is “we do not store your eye images, we just turn them into a cryptographic code”, but the bit that makes my skin crawl is the direction of travel. Once you normalise scanning bodies to access digital services, it is hard to un-invent that. Passwords can be changed, devices can be replaced, but biometrics are forever. If a system like this ever gets abused, breached, repurposed, or linked up with other data sources, you do not get to rotate your eyeballs and start again.
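The “cryptographic code” claim is essentially a one-way transformation: the same input always produces the same identifier, and the identifier cannot be reversed back into the image. Worldcoin’s real scheme is far more involved, but a toy sketch with an ordinary hash shows why that permanence cuts both ways:

```python
import hashlib

def identifier_from_template(biometric_template: bytes) -> str:
    """Toy stand-in for deriving a stable ID from a biometric template.
    One-way: you cannot recover the template from the digest."""
    return hashlib.sha256(biometric_template).hexdigest()

# The same body part always produces the same identifier...
iris = b"simulated-iris-template"
assert identifier_from_template(iris) == identifier_from_template(iris)

# ...which is exactly the problem: unlike a password, there is no
# "new" input to switch to if the identifier is ever leaked or abused.
```

The determinism that makes it useful as proof of personhood is the same property that makes compromise unrecoverable.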

And the crypto incentive matters. Paying people to hand over biometric data is not neutral; it changes the deal. It nudges adoption through cash, not through genuine understanding or informed consent. And if the goal is to build trust online, starting with “here is some money, let a shiny sphere scan your iris” is a weird way to do it.

This story is not just about one gadget in a shopping centre. It is about what comes next. If “prove you are human” becomes a standard requirement, who controls that proof, who decides when it is needed, and who gets locked out if they do not want to play along?

The Awareness Angle

  • Biometrics are permanent - If something goes wrong, you cannot reset it like a password
  • Incentives change consent - Paying people to sign up shifts behaviour faster than understanding
  • This will not stay niche - If it works once, it will get pushed into more places


This week on The Awareness Angle, it is a reminder of just how much data follows us around, and how often it ends up exposed in places we barely think about. From magazine subscriptions and radio stations holding millions of records, to healthcare providers, gas stations, and even space agencies dealing with serious breaches, the theme this week is scale, and how quickly it can spiral.

We look at incidents that were first reported as small, only to grow into hundreds of thousands or millions of affected people months later. We also dig into the way modern attacks blend into normal work, fake blue screens, booking emails, sideloaded apps, and even trusted security tools being used as a way in.

There is a longer view, too, with Equifax still discussing culture years after its breach, new government cyber plans taking shape, and insurers quietly spelling out what they will not cover when cyber incidents spill into the physical world.

It is a packed episode, full of practical lessons and uncomfortable reminders about trust, habit, and the digital footprints we all leave behind.

This week's stories...

Condé Nast breach and the risk hiding in forgotten subscriptions

Watch | Read

Condé Nast is responding to a breach claim that could affect up to 40 million users across brands, including Vogue, GQ, Wired, and The New Yorker. An attacker using the name “Lovely” shared data samples allegedly taken from subscription systems and claimed to have access across multiple Condé Nast properties. The exposed information reportedly includes names, email addresses, usernames, phone numbers, dates of birth, and location data. According to reports, the attacker alleged they attempted to flag vulnerabilities before releasing proof, though Condé Nast disputes parts of that account and says it has taken steps to disable the accounts involved in the unlawful access.

During the discussion on the show, the focus was less on the headline number and more on how ordinary this type of data feels. Subscription accounts like these are often created years earlier and then forgotten entirely. They don’t feel sensitive or important, yet the data persists long after interest fades. That long-lived, low-attention data is what makes incidents like this so uncomfortable: it surfaces quietly and is easy to abuse without ever feeling like a major breach at the time.

The Awareness Angle

  • Subscription data is still valuable - names and email addresses alone can fuel phishing and scams
  • Forgotten accounts create blind spots - users move on while data remains
  • Proof leaks are rarely the end - small samples often point to wider exposure

European Space Agency breach shows even critical organisations aren’t immune

Watch | Read

The European Space Agency confirmed a cyber incident that is now under criminal investigation, after attackers gained unauthorised access to parts of its internal IT environment. Reporting suggests a public vulnerability was exploited, with attackers claiming to have taken hundreds of gigabytes of internal files. ESA said mission-critical spacecraft operations were not affected, but the incident was serious enough to involve law enforcement and trigger a wider forensic review.

The discussion wasn’t really about whether ESA should be better protected, it was more about frustration. There was a sense that some things just shouldn’t be messed with at all. Space, like healthcare or charities, doesn’t feel like fair game. But that feeling clashes with reality. Attackers don’t draw ethical lines. If a vulnerability exists and remains open, it becomes an opportunity, regardless of how harmless or important the organisation feels.

The Awareness Angle

  • Attackers don’t respect boundaries - ethical lines don’t factor into targeting decisions
  • Unpatched weaknesses still get exploited - it only takes one open door
  • Sensitive data isn’t limited to operations - internal documents and partner information still carry risk

Fake blue screens are being used to trick hotel staff into installing malware

Watch | Read

Hotels across Europe are being targeted by phishing emails that impersonate booking-related messages, often posing as reservation updates or cancellations. The emails lead staff to malicious pages that display a fake Windows blue screen and instruct users to follow recovery steps. Those steps involve running commands that install malware directly onto the system. It is a ClickFix-style attack, but disguised as a system failure rather than a security warning.

The conversation focused on how easy this is to fall into when it lands in the middle of a normal working day. Hotel staff deal with booking emails constantly, and fixing problems quickly is part of the job. When something looks technical and urgent, the instinct is to resolve it and move on, not stop and question whether it should be escalated. That pressure, combined with something that looks familiar, is what makes this technique effective.

The Awareness Angle

  • Urgency drives behaviour - fake system errors push people into fast decisions
  • Normal workflows lower scepticism - familiar-looking emails get less scrutiny
  • ClickFix keeps evolving - attackers rely on users to run the malware for them

ChatGPT Health raises the stakes for account security

Watch | Read

OpenAI announced ChatGPT Health, a feature that allows users to connect medical records and wellness apps to their ChatGPT account. The company says the feature is not intended for diagnosis or treatment, and that connected health data won’t be used to train models. The goal, according to OpenAI, is to make responses more useful by grounding them in a user’s own health context.

The discussion wasn’t really about whether this is a good or bad feature, it was about concentration of value. On the show, the point was made that for many people ChatGPT is already a second brain. It holds questions, ideas, work context, and personal thinking. Adding health data into that mix means a single account can now represent a very complete picture of someone. That makes the impact of account compromise much higher than it used to be, even if the feature itself is well intentioned.

The Awareness Angle

  • Accounts are becoming life hubs - more context means higher impact if compromised
  • Login security matters more than ever - strong MFA and recovery controls are critical
  • Convenience quietly expands risk - connecting data should always be a conscious choice

This week's discussion points...

Condé Nast breach claims and subscriber data risk – Watch | Read

Covenant Health breach grows to nearly half a million people – Watch | Read

Tokyo FM breach highlights how radio stations hold vast listener data – Watch | Read

US gas station operator breach exposes payment cards and ID data after delayed notification – Watch | Read

European Space Agency breach placed under criminal investigation – Watch | Read

Equifax says security culture is now built in, after one of the biggest breaches on record – Watch | Read

Fake Blue Screen of Death attacks targeting hotel staff – Watch | Read

HSBC blocks customers using sideloaded Bitwarden apps – Watch | Read

OpenAI launches ChatGPT Health and raises questions about account value – Watch | Read

UK government publishes new cyber action plan – Watch | Read

And finally...Cybersecurity training that ticks boxes but changes nothing

We discussed the NCSC's training for schools.

Watch

This week we talked about NCSC cybersecurity training being issued to school staff: a 36-minute video, stock slides, synthetic narration, no interaction, and no assessment. Everyone completes it, signs it off, and moves on. On paper, the risk is managed. In reality, very little of that content will be remembered when someone receives a real scam, a fake text, or a convincing phishing email. It is a familiar pattern in security awareness: training designed to satisfy a requirement rather than change behaviour. The problem is not that people do not care; it is that long, generic training delivered once a year does not reflect how threats actually show up in daily life.

The Awareness Angle

  • Completion is not protection - Watching a video does not mean someone can spot a scam under pressure
  • Relevance beats length - Five minutes of current, relatable examples beats 36 minutes of theory every time
  • Engagement is the control - If people do not remember it, it cannot protect them
