At 82 myself, this story resonated! An 82-year-old grandmother lost nearly $200,000, not to a cartoonish prince in Nigeria but to a popular “doctor” she thought she had been seeing live on social media. Scammers used AI to create an utterly convincing deepfake video of the soft-spoken physician, complete with a compassionate face and voice. It persuaded her to move her investments into “safer,” more profitable accounts . . .
AI has radically changed the game. It can now clone trusted voices and faces, fabricate experts and news clips, and generate flawless bank-style emails and social media posts tailored to your age, profession, interests, and financial profile. For those of us in education, often on part-time or year-to-year contracts and already navigating fluctuating incomes, one well-crafted, AI-powered scam can spell disaster.
A second example: the message that cost Lena her retirement savings didn’t sound foreign, clumsy, or obviously fake. It looked exactly like the security alerts she had been getting from her bank for years, down to the logo, the subject line, and the urgent but polite tone. At 52, a senior project manager who spends her days reading contracts and risk reports, she is neither naïve nor technologically unsavvy. Yet over three weeks, guided by a smooth, AI-polished voice that claimed to be “from the bank’s fraud department,” she moved almost her entire retirement savings into what turned out to be “ghost” accounts.
How AI changes the scam game
• AI makes messages and websites look and sound professional, eliminating many of the red flags (bad spelling, odd phrasing) we were trained to notice in older scams.
• AI voice tools and deepfakes allow scammers to mimic the tone and cadence of bank staff, government agents, or even relatives, making phone-based social engineering far more convincing.
• AI-driven scripts help scammers respond quickly and confidently to hesitation, giving victims the sense they’re talking to a well-trained professional following a real protocol.
The real question isn’t “Can I afford protection?” It is “Can I afford a major financial hit with little or no backup?”
Three basic rules:
• Never act on a financial request from a message alone; always verify through an official channel.
• Slow down anything labeled “urgent.”
• Put a professional safety net in place for identity monitoring and legal/financial guidance.
I’ve put that in place for myself and my family, especially that third point, because my position as a semi-retired educator is precarious. Yours?
If you’d like a simple one page checklist on “Scams, AI and otherwise, that target educators and retirees—and options,” send me a message at wracton@gmail.com. (Disclaimer: One of the options I include is the company I’m with, LegalShield.)
Note: The 82-year-old deepfake-doctor story was reported by SWNS and picked up by multiple outlets. The agency mixes tabloid and news content, but this case is consistent with warnings that law enforcement and aging advocacy groups are now issuing about AI-driven scams.
This blogpost was created with the assistance of Perplexity AI.
