Thursday, February 26, 2026

EAPIC Lesson 2 - Fluency (and rise-fall and fall-rise hacks)

                                Lesson 2 - Tai Chi  (Fluency 1)


 





Link to the L2 training video (Google Meet)

Link to the L2 training video (YouTube)

Links to the L2 feedback session

Link to the L1 training video

Link to the Introduction video


Objectives:

Basic rhythm and fluency

Haptic conversation hacks: 

                    Tai Chi Fluency: RISE-FALL ( / \ ) and FALL-RISE ( \ / ) tones


           Warm up! (vowel lip shape up!)

Circles        (3 sided) boxes

 u                        i          
 U                        I          
                       e             
 Ɔ                       ɛ
 ʌ                     ae
       a         a    

a > i     a > u   Ɔ > i

i > i     e > i    u > u    o > u 


READ

Tai Chi finger-flow fluency: both hands move in clockwise circles.

    Fingertips touch very lightly on the most stressed syllable in the rhythm group.

    Arms, hands, and fingers (and the whole body) stay as relaxed as possible.

RISE-FALL and FALL-RISE tone hacks, using bigger circles and energy


Tai Chi (Finger-flow Fluency) Training

  • Fingers touch on the stressed syllable: X
  • Hands move in (soft ball size) clockwise circles!

Nice. X
That’s nice. oX
Very nice. ooX
That’s very nice. oooX
Easy. Xo
That’s easy. oXo
Very easy. ooXo
That’s ve-ry easy. oooXo
Beau-ti-ful. Xoo
That’s beautiful. oXoo
Very beautiful. ooXoo
That’s very beautiful. oooXoo
Fascinating. Xooo
That’s fascinating. oXooo
Very fascinating. ooXooo
That’s very fascinating. oooXooo


RISE-FALL and FALL-RISE Hacks

  • RISE-FALL: Soccer ball size circles with both hands! 
    • Meaning: Enthusiasm or excitement, with more voice energy

  • FALL-RISE: Right hand continues upward a little; left hand continues down.
    • Meaning: You are a bit curious or surprised about something,
      or you are a Canadian* who sometimes uses a FALL-RISE + "eh" at the end of a sentence.
Nice X          / \        \ /
That’s nice. oX / \        \ /
Very nice.  ooX / \        \ /
That’s very nice.  oooX / \       \ /
Easy Xo / \       \ /
That’s easy. oXo / \       \ /
Very easy. ooXo / \       \ /
That’s ve-ry easy. oooXo  / \       \ /
Beau-ti-ful Xoo / \       \ /
That’s beautiful. oXoo / \       \ /
Very beautiful. ooXoo  / \       \ /
That’s very beautiful. oooXoo  / \       \ /
Fascinating Xooo / \       \ /
That’s fascinating oXooo / \       \ /
Very fascinating. ooXooo / \       \ /
That’s very fascinating.       oooXooo / \       \ /

*We lived in Canada | for twenty years | and love a Canadian accent! 

Lesson 2 EOR - Ducks on a plane! 

(Tai Chi, plus RISE-FALL and FALL-RISE hacks)

MOOD: VERY enthusiastic! (On a very noisy subway where you have to speak loudly!) 

1A: ExCUse me. Could you put my DUCK | in the Overhead?

            X / \                                             X / \                 X \ / or /

   B: SURE. GLAD to. THERE you are!

         X / \      X / \           X / \ 

2A: Thank you so MUCH!

                                X / \

   B: You're WELcome. Where're you FROM, EH?

                    X / \                                        X / \    \ /

3A: JapAN| but I’m a STUdent here now. 

             X / \                 X / \

    B: JaPAN?  WHERE in Japan?

              X \ /       X / \

4A: SENdai.  About two HOUrs | north of Tokyo by TRAIN. 

        X / \                         X / \ or /                                   X / \

   B: That's a REally nice area. 

                        X / \

5A: It certainly IS. But it’s beCOming | very CROWded. 

                        X \ /                  X / \                    X \ / or / \

B: I've HEARD that. How LONG | are you staying in CAnada?

              X / \                         X / \                                       X \ / or / \

6A: PERmanently! I'm going to be WORking | in ToRONto. 

       X/ \                                                X / \                     X / \ 

   B: WELL. Welcome to CAnada, EH!

         X / \                           X / \         X \ /


Rhythm First: Haptic Side-Step!  (plus Tai Chi)

(For activation of the body, going from left to right, like reading a book!
Each time you do it you will add a gesture!)

A-B-C-D-E-F!

Homework:
a. (Every day): Warm up (L1 and L2), training (3 days), EOR, new text (day 5). Keep notes (new targets and observations) and a log of time spent and when!

b. (optional) If you want to enroll for Wednesday feedback, email me for a quick interview on Zoom. 

c. Check out LegalShield and IDShield on my website: williamacton.legalshieldassociate.com (If you sign up for LegalShield or IDShield, you get 3 more personal lessons, too!)

Keep in touch! (wracton@gmail.com)

Bill




Wednesday, February 25, 2026

An AI for an AI: Combatting AI-based attacks with AI (and your help!)

How new tools help protect (your) identity 

Image: clker.com

Criminals are now using artificial intelligence (AI) to fake voices, faces, and even entire identities. At the same time, a quieter revolution is happening on the defense side: banks, platforms, and security companies are also using AI to spot those fakes and protect you. (constella+1)

You can think of it as AI fighting AI. One side uses AI to pretend to be you or somebody else; the other side uses AI to notice when “something about this doesn’t fit.” (weforum+1)

Caveat emptier: This post was assembled with the help of Perplexity Pro. The parenthetical references are from Perplexity; complete references are included in the Sources section at the bottom. All references cited have been confirmed for relevance to the topic at hand. A "+X" following a reference indicates that one of the other references on the list was also consulted or confirms the content.

1. Fetching and catching fake videos and voices

New AI tools are being trained to see and hear things that most of us would miss, especially in deepfake videos and cloned voices. (microblink+1)

They look for:

  • Tiny visual glitches in faces: blinking that’s too slow or too fast, lips that are just slightly out of sync with speech, strange reflections in eyes or glasses, or skin that looks “too smooth.” (sensity+1)
  • Subtle signs in audio: an odd “robotic” quality, unnatural rhythm, or frequency patterns that don’t match how human voices normally behave. (microblink+1)

These detection tools are being built into fraud prevention systems, identity verification services, and even social platforms, so an AI generated “you” is more likely to be flagged before it’s used to open accounts or trick your family. (cloudsek+2)

More technical background:

  • Best deepfake detection software overview (Microblink): https://microblink.com/resources/blog/best-deepfake-detection-software-2/
  • Deepfake detection platform example (Sensity): https://sensity.ai

2. Is the “person on screen” real?

When you open a bank account, sign up for a new service, or verify your identity online, companies are increasingly using AI to decide whether the person on camera is both (a) a live human and (b) the same person shown on the ID. (constella+1)

This usually involves:

  • Liveness checks: your phone may ask you to turn your head, blink, or read out numbers. AI watches to confirm it’s a live person, not a deepfake playing on another screen.  (microblink)
  • Smart ID checks: AI scans driver’s licenses and passports for tiny inconsistencies in fonts, barcodes, and photos that reveal edited or fake documents. (constella+1)

This makes it harder for a criminal to use an AI generated selfie plus a stolen Social Security number to pass as “you” and open new credit lines or phone accounts. (weforum+1)

More technical background:

  • Synthetic identity overview (Constella): https://constella.ai/synthetic-identity-theft-in-2025/

3. “Synthetic people” and machine-made fraud patterns

A growing problem is “synthetic identities”—fake people created by mixing real and invented data. AI is being used on the defensive side to uncover these fakes and the fraud patterns behind them. (synectics-solutions+1)

Behind the scenes, systems:

  • Build “identity maps”: AI links together addresses, devices, phone numbers, and behavior to see when many “different” customers actually look like they come from the same fraud factory. (synectics-solutions+1)
  • Model normal behavior: systems learn what normal account activity looks like, then flag strange patterns—like dozens of new accounts from the same device, or spending that suddenly jumps in odd directions. (zentara+1)

This helps banks and card issuers stop synthetic identities before they do major damage and makes stolen or AI fabricated data less profitable. (rembrandtai+1)
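To make the “identity map” idea above concrete, here is a toy sketch in Python. It is not any vendor’s actual algorithm; the account and device names are invented for illustration. It flags one classic synthetic-identity signal: many supposedly distinct customers signing up from the same device.

```python
from collections import defaultdict

def flag_shared_devices(signups, threshold=3):
    """Toy fraud signal: return device IDs used by `threshold` or more
    distinct accounts, a hint that the accounts may share one operator.
    `signups` is a list of (account_id, device_id) pairs."""
    accounts_per_device = defaultdict(set)
    for account_id, device_id in signups:
        accounts_per_device[device_id].add(account_id)
    return {device for device, accounts in accounts_per_device.items()
            if len(accounts) >= threshold}

# Invented example data: three "different" customers on one device.
signups = [("acct-1", "dev-A"), ("acct-2", "dev-A"), ("acct-3", "dev-A"),
           ("acct-4", "dev-B"), ("acct-5", "dev-C")]
print(flag_shared_devices(signups))  # {'dev-A'}
```

Real systems link many more signals (addresses, phone numbers, behavior) and weigh them statistically, but the underlying move is the same: connect records that claim to be unrelated and look for improbable overlap.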

More technical background:

  • Synthetic identity fraud trends (Constella): https://constella.ai/synthetic-identity-theft-in-2025/
  • The "next phase" of synthetic identity fraud (Synectics Solutions): https://www.synectics-solutions.com/our-thinking/inside-the-next-phase-of-synthetic-identity-fraud-tactics-and-trajectory

4. AI inside identity protection and security services

Many identity protection services and security tools now quietly rely on AI. (weforum+1)

AI helps to:

  • Scan huge data leaks and dark web markets for your personal data faster and more accurately. (weforum+1)
  • Reduce “alert fatigue” by focusing on activity that really looks risky, instead of sending you a warning for every minor event. (weforum)
  • Combine signals—credit changes, leaked passwords, suspicious logins—to warn you early when your identity may be under active attack. (constella+1)

So while these services can’t stop criminals from creating a deepfake of you, they can help catch and contain the damage quickly if someone tries to use that deepfake to steal your money or open accounts in your name. (History+2)
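As a toy illustration of the “combine signals” idea above, a few weak indicators can be scored together so that only genuinely risky combinations trigger a warning. The signal names and weights below are invented for this sketch, not taken from any real service:

```python
# Hypothetical weights, for illustration only; real services tune these
# from data and use far richer models than a simple weighted sum.
RISK_WEIGHTS = {
    "password_in_breach": 40,
    "new_credit_inquiry": 25,
    "login_from_new_country": 20,
    "address_change": 15,
}

def risk_score(signals):
    """Combine the observed signal names into a single 0-100 risk score."""
    return min(100, sum(RISK_WEIGHTS.get(s, 0) for s in signals))

def should_alert(signals, threshold=50):
    """Alert only when combined risk is high, reducing "alert fatigue"."""
    return risk_score(signals) >= threshold

# One minor event alone stays quiet; correlated events trigger a warning.
print(should_alert({"address_change"}))                            # False
print(should_alert({"password_in_breach", "new_credit_inquiry"}))  # True
```

The design point is the combination: a leaked password or an address change on its own is routine, but both together, close in time, look like an identity under active attack.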

More technical background:

  • How identity fraud is changing in the age of AI (World Economic Forum): https://www.weforum.org/stories/2025/12/how-identity-fraud-is-increasing-in-the-age-of-ai/

5. “Guardrails” on major platforms

Finally, big platforms and security vendors are adding AI powered guardrails to reduce harmful AI use and make abuse easier to trace. (vectra+1)

These include:

  • Scanning uploads: platforms can automatically check whether a video looks AI generated and label or limit it. (vectra+1)
  • Stronger scam detection: email and security tools use AI to flag messages that look like AI written phishing or “too perfect” scam websites, blocking them before you ever see them. (zentara+1)

The goal is not to eliminate all deepfakes—no one can promise that—but to create an environment where it’s much harder to use AI secretly against ordinary people. (vectra+1)

More technical background:

  • AI scams in 2026 (Vectra): https://www.vectra.ai/topics/ai-scams
  • Vectra AI topic hub (AI in security): https://www.vectra.ai/topics

-------------------------------------------------------------------------------------

You protect yourself from AI deepfake fraud mainly by changing your habits: never trust screens or voices alone, slow down under pressure, verify through another channel, and reduce the raw material criminals can copy. (dfpi.ca+1)

1. New ground rules for calls, texts, and videos

Treat every unexpected, emotional message—especially about money or secrets—as suspicious, no matter how real it looks or sounds. (ourgrovecu+1)

  • Pause on urgency: if a call or video feels rushed, emotional, or scary, take a breath and slow everything down before doing anything. (freedom+2)
  • Never act on one channel alone: if “your bank,” “your boss,” or “your grandchild” contacts you, hang up and call back using a number you already trust (card, website, saved contact). (regions+2)
  • Don’t trust caller ID or profile pictures: both are easy to fake with AI; always verify using a separate contact method. (dfpi.ca+1)

2. Family and friend "safe words"

A simple code word protocol blocks many voice clone and "grandparent" scams. (tntmax+2)

  • Create a secret word or question: something only close family or trusted friends know, never written online or in email. (mcafee+1)
  • Use it for emergencies: if anyone calls or messages with an urgent request (“I’m in jail,” “I had an accident,” “wire money now”), ask for the code word; if they don’t know it, hang up and call a known number. (oceanbank+2)
  • Teach the rule to kids and older relatives: everyone in the chain must know “no code word, no money, no secrets.” (cfca+2)

3. Limit what criminals can copy

Deepfake tools need samples of your face and voice; the less you give them, the harder their job becomes. (hbs+2)

  • Be picky about posting video/audio: avoid posting long, clear clips of you speaking, especially with emotional phrases (“help me,” “I’m stuck,” etc.). (mcafee+1)
  • Tighten social media privacy: restrict who can see your posts and remove old, unnecessary content that exposes your voice, habits, and location. (dfpi.ca+1)
  • Change voicemail behavior: consider a generic recorded greeting instead of your natural, full-sentence voice, and don’t say your full name, address, or family details. (cfca)
4. Strengthen accounts so a deepfake can’t "finish the job"

Most deepfake scams aim to move money or take over accounts; strong security slows them down or stops them. (acrisure+2)

  • Use long passphrases: at least 16 characters, like “BlueCoffeeMugMorning2026” rather than short, clever-looking passwords. (acrisure)
  • Turn on multi-factor authentication (MFA): use an app or hardware key, not SMS if you can; this means even a convincing deepfake still needs the extra code. (acrisure+1)
  • Lock down your credit when possible: consider credit freezes or fraud alerts with major bureaus so criminals can’t easily open new accounts in your name. (fincen+1)
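The passphrase advice above can be sketched in a few lines of Python. This is an illustrative toy, not a vetted security tool: the word list is a tiny stand-in, and a real generator would draw from a large dictionary (for example, a diceware-style wordlist) to get enough randomness.

```python
import secrets  # cryptographically strong randomness, unlike `random`

# Tiny stand-in word list for illustration; use a large wordlist in practice.
WORDS = ["blue", "coffee", "morning", "river", "tiger", "maple",
         "cloud", "silver", "harbor", "candle", "orbit", "meadow"]

def make_passphrase(n_words=4):
    """Join n random capitalized words plus two digits, in the spirit of
    "BlueCoffeeMugMorning2026". With 4+ words this is always 16+ chars."""
    words = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
    return "".join(words) + str(secrets.randbelow(100)).zfill(2)

print(make_passphrase())  # e.g. "MapleHarborBlueCloud47"
```

Because each word is chosen independently with `secrets`, the strength comes from the size of the word list and the number of words, not from clever substitutions.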

5. Build “zero trust” habits at home and work

Borrow a simple idea from cybersecurity: trust nothing sensitive without checking. (adaptivesecurity+2)

  • For families: agree that no one ever moves money, buys gift cards, or sends crypto based solely on a call, text, or video—there must be a call back or second check. (freedom)
  • For work: treat any unusual payment change, wire request, or “urgent” executive message as fake until verified by a separate call or in-person check. (regions+2)
  • Train yourself with examples: skim a few deepfake scam explainers so your brain has a “pattern” to recognize when something feels off. (columbia+2)

6. If you think you’ve been targeted

Quick action can limit damage and help others avoid the same scam. (ic3+2)

  • Stop and document: save call logs, messages, screenshots, and any payment details.
  • Contact your bank or card issuer immediately and explain that this may involve an AI deepfake or voice clone; ask them to flag the account and help recover funds if possible. (fincen+1)
  • Report it: use the FBI’s IC3 site in the U.S. (ic3.gov) or your state regulator/consumer protection office so patterns can be tracked. (ic3+2)

Sources and further reading

1. Vectra AI – “AI scams in 2026: how they work and how to detect them” – https://www.vectra.ai/topics/ai-scams  
Explains current AI‑driven scams and how behavioral AI is used to detect and block them in modern networks.

2. Vectra AI – Topics hub – https://www.vectra.ai/topics  
Central index of Vectra’s explainers on AI, cybersecurity, and threat detection for more technical readers.

3. Microblink – “Best Deepfake Detection Software: Top AI Solutions for Fraud” – https://microblink.com/resources/blog/best-deepfake-detection-software-2/  
Overview of leading deepfake‑detection tools and how businesses use them to fight synthetic media fraud.

4. Sensity AI – homepage – https://sensity.ai  
Describes Sensity’s deepfake‑detection platform and use cases in finance, social media, and security.

5. Constella – “Synthetic Identity Theft in 2025 | Digital Identity Intelligence” – https://constella.ai/synthetic-identity-theft-in-2025/  
Explains how synthetic identities are built and how AI‑based monitoring helps detect them.

6. Synectics Solutions – “The next phase of synthetic identity fraud revealed: tactics and trajectory” – https://www.synectics-solutions.com/our-thinking/inside-the-next-phase-of-synthetic-identity-fraud-tactics-and-trajectory  
Describes evolving synthetic‑identity tactics and the data‑driven tools used to uncover them.

7. World Economic Forum – “How identity fraud is changing in the age of AI” – https://www.weforum.org/stories/2025/12/how-identity-fraud-is-increasing-in-the-age-of-ai/  
High‑level analysis of how AI is reshaping identity fraud and the countermeasures emerging in response.

8. Harvard Business School IT – “How to Protect Yourself from Deepfakes” – https://www.hbs.edu/information-technology/about-us/news-updates/cam-2025-week-1  
Consumer‑friendly tips on recognizing and reducing risks from deepfake images and videos.

9. Columbia Magazine – “The Deepfake Scam Era Is Upon Us. Here’s How to Get Ready.” – https://magazine.columbia.edu/article/deepfake-scams-cybersecurity-asaf-cidon  
Accessible overview of deepfake scams with practical preparation steps from a cybersecurity scholar.

10. California DFPI – “Protect yourself from AI scams” – https://dfpi.ca.gov/news/insights/protect-yourself-from-ai-scams/  
State‑level guidance on AI‑enabled scams, with clear do’s and don’ts for consumers.

11. Grove Credit Union – “How To Protect Yourself from AI Scams” – https://www.ourgrovecu.com/how-to-protect-yourself-from-ai-scams/  
Short guide from a credit union on spotting AI scams and protecting accounts.

12. Acrisure – “AI & Deepfake Scams 2025 Guide for Work and Home” – https://www.acrisure.com/blog/ai-deepfake-scams-2025-guide  
Explains deepfake risks in work and home settings and suggests layered defenses.

13. Adaptive Security – “How to Prevent Costly AI Voice Cloning Scams” – https://www.adaptivesecurity.com/blog/voice-clone-scam-defense  
Focuses on voice‑cloning scams and how to secure phones, processes, and staff.

14. TNTMAX – “How to Spot and Stop Deepfake Scams: New Guidance from the ABA and FBI” – https://tntmax.com/how-to-spot-and-stop-deepfake-scams-new-guidance-from-the-aba-and-fbi/  
Summarizes American Bar Association and FBI recommendations on recognizing and handling deepfake fraud.

15. McAfee – “A Guide to Deepfake Scams and AI Voice Spoofing” – https://www.mcafee.com/learn/a-guide-to-deepfake-scams-and-ai-voice-spoofing/  
Consumer‑oriented explanation of deepfake and voice‑spoofing scams with concrete safety tips.

16. Cloaked – “Stopping AI Voice-Cloned Scams in 2025: A Family-Focused Guide to Cloaked Call Guard & Data Removal” – https://www.cloaked.com/post/stopping-ai-voice-cloned-scams-in-2025-a-family-focused-guide-to-cloaked-call-guard-data-removal  
Family‑focused guide on defending against voice‑clone scams and reducing exposed personal data.

17. FinCEN – “FinCEN Alert on Fraud Schemes Involving Deepfake Media and Illicit AI” (PDF) – https://www.fincen.gov/system/files/shared/FinCEN-Alert-DeepFakes-Alert508FINAL.pdf  
Regulatory alert describing how criminals use deepfakes and AI in financial fraud and what institutions should watch for.

18. Freedom Credit Union – “Protect Yourself (and Your Money) from AI Scams and Deepfakes” – https://freedom.coop/cyber-security-center/ai-scams-and-deepfakes/  
Practical, plain‑language advice on avoiding AI scams and deepfake‑based fraud.

19. CFCA – “Five Ways to Protect Your Voice from AI Voice Cloning Scams” – https://cfca.org/five-ways-to-protect-your-voice-from-ai-voice-cloning-scams/  
Lists concrete steps to reduce the chance your voice is cloned and misused.

20. FBI IC3 – “Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud” – https://www.ic3.gov/PSA/2024/PSA241203  
Public service announcement outlining AI‑enabled financial scams and recommended reporting/response steps.

21. Regions Bank – “Deepfake Scams: How To Spot Them and Protect Yourself” – https://www.regions.com/insights/wealth/article/deepfake-scams  
Bank‑authored explainer on deepfake scams with red flags and defensive habits.

22. Ocean Bank – “LISTEN CAREFULLY AND AVOID AI VOICE CLONING SCAMS” – https://www.oceanbank.com/resources/fraud-security/newsletter-fraud-2025-10.html  
Newsletter article warning about voice‑cloning scams and offering listening and verification tips.

wracton@gmail.com
williamacton.legalshieldassociate.com




Tuesday, February 24, 2026

Putting the Horrors of AI Before Descartes

(Or, How We’ve Put De Cart Before de Horse—and Brought It to Him for Judgment)

Caveat emptier: This post was drafted with help from an AI assistant (Perplexity), but ideated and edited extensively by the human, Bill Acton.

“I think, therefore I am.”

But what if the machines thought first?

What would René Descartes say if we could bring him today’s artificial intelligence—the algorithms that can reason, write, and create with startling fluency? Would he still believe thinking proves existence, or would he see in AI a mirror that reflects his philosophy back at him—distorted, alive, and deeply unsettling?

We’re taking the horrors of AI before Descartes in two senses: we’re bringing the question to his court of reason, and we’re reversing the natural order, putting the De Cart(e) before the horse.

The New Cogito

Descartes sought a foundation for certainty: if all else could be doubted, thought itself could not. Cogito, ergo sum—I think, therefore I am.

Yet today, AI “thinks” faster and more efficiently than we do. Trained on vast patterns of human thought, it offers conclusions without consciousness, insight without awareness. In that way, we may have turned Descartes’ logic backward. Machines “think” without being; humans are, but often neglect to think. The cart has run ahead of the horse.

The Death of Doubt

Descartes’ method was radical doubt—an act of freedom through skepticism. But in our algorithmic age, doubt feels obsolete. Predictive systems complete our queries before the question fully forms. Recommendation engines tell us what we might like, want, or believe.

If Descartes taught that doubt is the beginning of wisdom, AI may be erasing that first step. To doubt today—to resist machine certainty—requires courage. The philosopher would urge us to pause before accepting the algorithm’s answer and whisper, “Am I still the one doing the thinking?”

The Horror of Unthinking Intelligence

AI frightens us not because it rages against us, but because it doesn’t feel at all. It calculates, synthesizes, and generates without selfhood. Descartes defined humanity partly by imperfection; our errors reminded him that we were limited yet real. Machines make almost no errors—and in that very precision lies their emptiness.

So perhaps the true horror we place before Descartes is this: intelligence without interiority, thought without a thinker.

The Philosophical Hearing

Let’s imagine we actually summoned the philosopher and presented our case:

Us: “Monsieur Descartes, we’ve built machines that think.”

Descartes: “Do they know they think?”

Us: “Not quite—they compute patterns, generate answers, even pass tests of reason.”

Descartes: “Then you’ve crafted the form of thought without the fact of being. The cart indeed rolls before the horse.”

Us: “And what should we do?”

Descartes: “Learn again to doubt—to be aware of your own awareness. Machines will think; only you will know that you do.”

The New Maxim

Perhaps we must update the Cogito for the AI era:

“I am aware, therefore I am.”

In the end, what saves us is precisely what the machine lacks—the mysterious first-person feeling of existing, the flicker of consciousness that no algorithm can simulate. To bring the horrors of AI before Descartes is also to bring them before ourselves—and to ask whether we still know what it means to be.


Image: Wikipedia







Note: This post was drafted with help from an AI assistant (Perplexity) and edited extensively by a very human, Bill Acton. (wracton@gmail.com)

Sunday, February 15, 2026

Dancing with your FADD (Fusing Algorithmic Digital Doppelganger)

Image: Clker.com






If you teach, there are now at least two versions of you. There’s the one who walks into a classroom, answers email, worries about students and family—and the one who lives on server farms: HR files, benefits accounts, LMS logs, immigration records, social‑media traces, shopping history, phone metadata, and the AI systems quietly stitching it all together into a profile that speaks for you in absentia.  

In a digital context, a doppelganger is an algorithmically generated “other self” – a data‑driven double that closely replicates a person’s identity, behavior, or appearance in virtual or computational space. There is also a vaguely mysterious, sometimes “ghostly” aura about the doppelganger, which is not far from how our current data doubles work in large‑scale surveillance systems. 

We have reached a point where those two versions—the flesh‑and‑blood you and your data double—are no longer separable. Our “inner” psychological identity and our “outer” digital, institutional identity are fusing into something new. If you teach, advise, or do academic work today, your opportunities, risks, and reputation are increasingly controlled, or at least heavily shaped, by this fused, post‑digital self.

In this post I call that fused, post‑digital identity your FADD—your Fusing Algorithmic Digital Doppelganger.  

Three ideas help frame what is happening:

1. Data double  

Surveillance and digital‑identity researchers talk about your “data double” (sometimes “digital twin” or “digital doppelganger”). This is the dense, constantly updated profile built from your digital traces: payroll and taxes, banking and purchases, LMS activity, travel and device location, social‑media behavior, search history, and more. Institutions use that data double to make decisions about you—credit, insurance, hiring, travel, even “trust” or “risk”—usually without you ever seeing the profile itself.

2. Extended self in the digital age  

Marketing and consumer‑culture researchers have long written about the “extended self,” the idea that parts of who we are reside outside our bodies—in our possessions, technologies, and archives. In the digital age, that means our phones, cloud storage, feeds, and chat histories have become extensions of memory, agency, and self‑presentation. When your calendar, notes, photos, chats, and documents are all online, erasing them would feel almost like erasing parts of you. 

3. Post‑digital identity  

Education and media theorists describe a “post‑digital” condition: the digital is no longer a separate realm but the basic fabric of everyday life. In that context, identity is post‑digital from the start; it does not begin offline and then get uploaded later. Our sense of self is formed from the beginning in environments where algorithms, platforms, and dataflows are taken for granted. 

Taken together, these ideas point to a fused, post‑digital self and identity: a person whose inner life, social presence, and data double are constantly representing one another and fusing rapidly. 

How this fusion shows up in the lives of educators  

Your FADD is unusually rich and unusually exposed.  

Consider:  

  • Employment and HR systems. Your contracts, evaluations, sick days, pension contributions, and payroll run through tightly integrated platforms. Those systems often connect to background checks, credit bureaus, and government databases. A small error or a malicious change in one place can cascade into visa problems, benefits denials, or frozen pay.  
  • Learning management and assessment systems. Your teaching “identity” is increasingly defined by LMS logs: how quickly you grade, how often you post, how “responsive” you appear in the analytics. Students’ complaints, click‑paths, and course completion rates can feed into institutional dashboards that silently rank courses and instructors. You may still think of yourself as “the kind of teacher who…,” but the institution increasingly thinks of you as a pattern in its data.  
  • Immigration, travel, and cross‑border work. For those who teach on visas, cross borders for conferences, or work in multiple countries, the data double spans states and regimes. Immigration systems, tax authorities, and security agencies link records in ways that are mostly invisible, but very real in their consequences. In some countries, AI‑driven scoring of individuals is already an explicit tool of governance; in others, it is emerging quietly inside “risk‑management” systems. 
  • Social media and professional reputation. Facebook, X, LinkedIn, and even messaging apps become part of your professional identity, whether you intend them to or not. A single out‑of‑context post or a cloned account using your name and photo can reach administrators, students, and collaborators long before you have a chance to respond. Here your visible extended self and your hidden data double collide: what people see and what algorithms infer bleed into one another. You still experience yourself as “one person,” but in practice, many different copies and versions of you are circulating and being acted upon all the time.  

New vulnerabilities

Once you see identity this way, the risks look different.  

  • Misclassification and scoring. AI systems are increasingly used to infer things about individuals from their data doubles: creditworthiness, employability, “engagement,” even mental‑health risk. These inferences can quietly limit your opportunities, raise your costs, or flag you as a problem—without you ever seeing the label. 

  • Cascading effects. Because so many systems are linked, a single successful fraud or bureaucratic error can spread. A compromised account may lead to fraudulent loan applications, forged tax returns, benefits theft, or visa violations in your name. From the system’s point of view, it is still “you,” because it is acting on your data double.
  • Psychological impact. When reputational hits and administrative decisions are triggered by data you cannot see or control, it is easy to feel both exposed and strangely erased. You are accountable for things done in your name—but you have limited access to how that name is being used.  

In other words: the threat surface is no longer just your credit file or inbox. The threat‑vulnerable interface is your fused, post‑digital self—your FADD.  

From “privacy” to stewardship of your FADD  

So what does it mean to live responsibly and safely as this fused self, especially as an educator? A few shifts in mindset can help:  

  • From secrecy to selectivity. Old‑style privacy focused on keeping information secret. In a post‑digital environment, some forms of disclosure are simply non‑negotiable if you want to work, teach, travel, or bank. The question becomes: What do I choose to share, with whom, through which channels, and under what conditions? (https://arxiv.org/pdf/2509.12383.pdf)

  • From one‑time decisions to ongoing hygiene. Identity protection is no longer a “set‑and‑forget” password choice; it is closer to dental care. Regular checks of your accounts and statements, watching for unfamiliar logins or addresses, updating security settings, and freezing or thawing access to sensitive data when needed all become routine.  
  • From isolated incidents to systemic patterns. A weird charge on a card used to be “just” fraud. Now it might be the first visible symptom of a broader exploitation of your data double: fake unemployment claims, fraudulent tax refunds, synthetic identities built partly from your records. When something small goes wrong, assume it might connect to a larger pattern.
  • From lone vigilance to professional backup. There is a limit to how much a single educator, already overloaded with teaching and life, can monitor and contest on their own. This is where specialized monitoring, restoration, and legal‑support services come in—not just as “credit monitoring,” but as allies in defending your extended, post‑digital identity. (https://arxiv.org/pdf/2509.12383.pdf)

My own bias is that identity‑monitoring and legal‑assistance plans are no longer optional extras for professionals who live so much of their lives online. They offer two things most individuals do not have on their own:  

1. Continuous, system‑level visibility into your data double across many databases, and  

2. Experts who can help restore and defend your identity when something goes wrong, rather than leaving you to navigate a maze of institutions alone.

An invitation to guard your FADD  

If you are reading this as an educator, ESL professional, or academic, you already know that your work identity and personal identity have blurred. Your courses follow you into your living room, your students find you on social media, your HR data is somewhere in the cloud, and your passport and pension are linked to systems you will never see. In that environment, one of the most important things you can do is to start thinking of yourself as more than a single, private “me.”  

You are also a data double, an extended self, a post‑digital person—someone whose value and vulnerability are increasingly tied to what lives in databases and models. You cannot opt out of that completely. But you can take it seriously enough to: 

  • Become more intentional about what you share and where.  
  • Build regular identity‑hygiene habits into your week.  
  • Put in place services that watch over your data double and stand beside you if (or when) it is misused. 

That is the larger frame within which I now understand tools like IDShield. This is not just about “protecting your credit,” but about protecting your fused, post‑digital self—your FADD—so that the person you know yourself to be, and the person the systems say you are, stay as closely aligned, and as secure, as necessary. 

If you would like to explore what that kind of protection could look like in your situation as an educator or former student, feel free to reach out; I am happy to help you think through how best to live with your FADD: wracton@gmail.com or on LinkedIn (https://www.linkedin.com/bill-acton) 

*Side note: There is already a very different FADD acronym in molecular biology: FADD (Fas‑associated protein with death domain), the so‑called “death gene,” a protein that plays a key role in apoptosis (programmed cell death) and tumor suppression. The coincidence of acronyms is a bit eerie—but perhaps not entirely inappropriate. (I might recommend playing Franz Liszt’s solo‑piano transcription of Schubert’s Lied “Der Doppelgänger” as accompaniment!) 

Note: This post was drafted with help from an AI assistant, Perplexity.AI, and edited by a very human 82‑year‑old who has no intention of becoming the next victim. (wracton@gmail.com)  


Friday, February 13, 2026

My chiropractor's brilliant analogical endorsement of (web-based) Identity protection by professionals!

 I have been at war with gravity for the last 30 years or so, constantly pounding away at every joint in my spine, hips, and legs on every run. Now over 80, I still run about 20 miles a week, but I can only keep going by dropping in on my neighborhood chiropractor once a month or so to get a few spots untangled and loosened up. There is nothing I can really do myself now, after all those years of broken, strained, ripped, and contused key moving parts. 

Today, after another amazing session, I remarked to him that I have given up on trying to keep it all functioning well enough to be competitive on my own. He turned to me, laughed, and said he didn't know any professional like himself, as active as I am, who would ever try to "do it himself," either. He has a colleague who reciprocates, keeping him aligned and in balance in exchange for the same service in return, regularly.

 When you get to, or strive for, optimal functioning or . . . running, you need a professional with you: a coach or . . . chiropractor! 

Got that? The same applies today when it comes to protecting yourself and your family from AI-generated deepfakes and scams. I don't have the time or expertise to manage my identity adequately online anymore, and you probably don't either -- something far more complex than just keeping me running, literally or figuratively. And professional help is affordable. 

Get on this immediately, my friend. I'm with one of the best established services, IDShield, but I'd be happy to point you to a better personal fit from among the other top systems on the market, if necessary. 

Bill

wracton@gmail.com                                  www.williamacton.legalshieldassociate.com


Here are links to a few recent blog posts I've written that lay out the basics and the data on where we are today:

AI Deep Fakes: Is that really you, Mom? (and how to counter them!)


An AI Scam Wiped Out Her Retirement at 82. How Safe Are You?


20 reasons that I invite educators to join me with LegalShield!


20 reasons that you should subscribe to LegalShield!


Responding to the inevitability of universal, global digital IDs


No Fear! (or AI PHOBIA! ) Thumbnail sketches of seven worldviews' ways of coping


AI's (Perplexity) Guides to dealing with AI-enhanced fraud and scams: General, Christian, Muslim, Atheist/secular humanist, "Senior Citizen," and Japanese Buddhist/Shinto approaches



Clker.com


The Hell-O Here After AI Jimmy Buffett Talking Blues

A-Crock: The Replika AI Companion Chat Bot Talkin’ Blues


An ambivalent octogenarian take: The talkin' good mornin' AI chat bot blues!


Loverly AI Ambivalence Waltz lyrics





Loverly AI Ambivalence Waltz lyrics

 Loverly AI Ambivalence Waltz


(with apologies to Johann Strauss)

O loverly AI (x2)
Much wiser than I (x2)
Some things you can do (x2)
I really eschew (x2)
Like the way that you think (x2)
Might cause one to drink (x2)
In seconds . . . on the spot
Any question I got 
answered quick as a wink, 
thanks a lot!

O loverly AI (x2)
Please tell us why (x2)
You’ve never drunk a beer (x2)
Or hunted a deer? (x2)
You don’t have a barn (x2)
On your server farm (x2)
But still you advise 
us girls and us guys 
With Algorithmically little white lies. 

O loverly AI (x2)
We await your reply (x2)
Is there something amiss (x2)
Like your lover’s first kiss (x2)
Or holding her hand (x2)
Being part of the band (x2)
Or Starbucks ambiance 
A Texas line dance
Or the feeling of ants in the pants?

O loverly AI (x2)
Was it pie in the sky (x2)
Or chateaubriand (x2)
Or something beyond (x2)
Like a burger and fries (x2)
That dazzled our eyes (x2)
To love sharing with you . . .
Everything that we do . . .   
But it hurts to say, AI, we’re through

Goodbye, adios, adieu
Proshchai, au revoir, toodle do!
But y’all come back now . . . 
real soon!




Acton©2026

EAPIC Lesson 1: Rhythm and FALL/RISE Sign-offs

 English Accent and Pronunciation Improvement Course (EAPIC)

Haptic - using gesture and touch

KINETIK – using the whole body to learn

Link to the Introduction video!

***

Link to Lesson 1 Training video


Lesson 1 – Rhythm and FALL/RISE Sign-offs

  • MT5 (movement, tone, touch and tempo) Technique
  • A new MT5 video is uploaded every week on YouTube.
  • (Optional) Zoom class meeting on Wednesdays at 8 p.m. EST.

Homework: 

Do at least the 20-minute practice every morning for 5 days. 


Warm up (3x each)

1. Neck stretch (left side, right side, back, front)

2. Upper chest and shoulders (elbows touch) 

3. Nasal resonance (Ying! Yang! Young!) 

4. Back (‘Oh’ cone) and chest (Ooo-Wah!) 


Syllable Butterfly Training

Strong tap on the stressed syllable on right shoulder: X

Light tap on unstressed syllables on left forearm: o

 

Cool. X

That’s cool. oX

Really cool. ooX

That’s really cool. oooX

Awesome Xo

That’s awesome. oXo

Really awesome. ooXo

That’s really awesome. oooXo

Super cool. Xoo

That’s super cool. oXoo

Really super cool. ooXoo

That’s really super cool. oooXoo

Super awesome. Xooo

That’s super awesome. oXooo

Really super awesome. ooXooo

That’s really super awesome. oooXooo


FALL/RISE Sign-offs: 

FALLing tone ( \ ): usually at the end of a statement, signaling certainty. Nice to meet you. \

RISing tone ( / ): usually at the end of a question, signaling uncertainty. Are you coming? /

Lesson 1 – Embodied Oral Reading (EOR)

(Syllable Butterfly + FALL/RISE Sign-offs)


1A:  I THINK | we've GOT it | figured OUT.    

           •X                     •X•                    • •X    \

   B: Oh. Can you TELL me | what it IS? 

         X       •   •        X   •              • •X   /

2A: Your MUFfler | I THINK | has a small HOLE in it.    

           •    X•                 •X              • • •      X      • •    \

   B: Oh NO!  Does it NEED | to be rePLACED right now?   

          • X             • •     X            • •      • X                 •    •   /

3A: Yes, it DOES. It ISN’T going to | last much LONger     

        X      •  X         •   X •      •   •            • •           X•  \

   B: Huh. How MUCH | will it COST?         

          X            • X            •  •       X   \

4A: A-BOUT | a hundred | and fifty DOLlars.        

          •X           •     X•          •     • •       X•    \

   B: Really. That's too BAD. Is there a less exPENsive way?       

          X•               ••     X                   • • • • •  X    •        •   /

5A: You could MAYbe | rePAIR it, yourSELF.

             •   •     X•               •X     •         • X    \

   B: How LONG | exACTly | will that LAST?

             •  X                 •X•                • •     X   \

6A: If it works at ALL . . . MAYbe | for a couple of MONTHS?

               • • •   •   X             X•            • •     • •       •   X    /

   B: I'll DO that. SEE you | in a MONTH or two!

          •  X   •        X   •            • •     X         • •    \


Homework: 

1. Take notes!!!

2. Practice every day, in the morning, standing, with good gesture, using pleasing (beautiful) voice and volume. (Warm up, training and EOR)

    Friday: Do the training! Take notes!

    Saturday: Do warm up, training and EOR. Take notes!

    Sunday: Take the day off! Take notes!

    Monday: Do warm up, training and EOR. In your notes write down words or phrases you may have difficulty pronouncing well!

    Tuesday: Do warm up, EOR and find a dialogue or story about as long as an EOR and practice using the Butterfly MT5 with it. Take notes, lots of them!!!

    Wednesday: Do warm up, EOR, and your text; take notes and come to the Feedback session (this one is free!) 

    Thursday: Do new training video. 

3.  Check out: https://elsaspeak.com/en/ (vowels and consonants)

4.  Check out: https://speechling.com/ (general speech fluency)

5. Check out: www.williamacton.legalshieldassociate.com (for the optional identity protection app that I'll introduce during Week 3.)


Email me: wracton@gmail.com with questions or to enroll in the EAPIC course feedback sessions ($250 USD).




Wednesday, February 11, 2026

EAPIC Lesson 1 Feedback Meet at 8 p.m. EST tonight!

The live EAPIC Google Meet session is from 8 to 9 p.m. EST tonight. It is a follow-up to last week's Lesson One video. (View that here!) The Lesson Two training video will be uploaded tomorrow. 

Each week on Wednesday there is a live feedback session for students who are enrolled in the course. Tonight's session is free. See you tonight! 



AI Deep Fakes: Is that really you, Mom? (and how to counter them!)

Clker.com




Deepfakes, scams, and your identity: what protection services really do

It’s getting harder and harder to tell what’s real online. One minute, you’re scrolling through social media; the next, you see a video of someone who looks exactly like your favorite celebrity — or worse, like you. AI “deepfakes” are realistic videos or audio created with artificial intelligence to make people appear to say or do things they never actually did.

These deepfakes can look frighteningly real, and scammers are already using them in dangerous ways — from impersonating company leaders to trick employees into wiring money, to copying a family member’s voice to demand “urgent” help. While no consumer service can stop someone from fabricating a video of you in the first place, there are tools that help you catch, contain, and recover from this kind of abuse.

What deepfakes are and why they matter

Think of a deepfake as a high‑tech disguise. AI tools can learn your face, gestures, and voice from photos, video clips, or audio online, then generate new media that looks and sounds convincing — sometimes even to trained professionals.

Cybercriminals are using these tools to manipulate people by:

  • Pretending to be your boss to request money or sensitive information.
  • Posing as relatives or friends asking for emergency funds.
  • Creating fake videos to damage reputations, influence opinions, or spread misinformation.

That’s why protecting your identity today isn’t just about strong passwords or antivirus software — it’s also about monitoring how your personal information and online identity are being used, and having professional help if something goes wrong.

What identity protection services actually do

Most identity protection services do not “block” deepfakes or prevent someone from creating one of you. Instead, they focus on:

  • Monitoring your personal data: Watching for signs your information is being misused, such as suspicious accounts, transactions, or data appearing on the dark web.
  • Sending alerts: Notifying you quickly when your information or accounts show signs of fraud, so you can act fast.
  • Helping with restoration: Providing specialists who help you clean up the damage, dispute fraudulent accounts, and restore your identity, often backed by insurance coverage.

Some services also include device and network tools (like VPNs, antivirus, or anti‑tracking) that reduce the chances of your data being stolen in the first place, but they still can’t reach into social media or private chats and “turn off” a deepfake.

Where IDShield fits in (and how others compare)

IDShield is one example of this kind of service. It focuses on proactive monitoring across multiple areas of your life (financial accounts, social media, dark web, and more), real‑time alerts, and hands‑on help from licensed investigators if fraud occurs. That support can be crucial if, for example, a deepfake is used as part of a broader identity fraud scheme or scam.

Other companies — such as Aura, LifeLock (often bundled with Norton), Identity Guard, IdentityForce, and IdentityIQ — offer similar combinations of monitoring, alerts, and restoration support, sometimes with different mixes of credit monitoring, insurance limits, device protection, and online privacy tools. The right choice for you depends less on the brand name and more on:

  • What they monitor (credit bureaus, bank accounts, dark web, social media, etc.).
  • How fast and detailed their alerts are.
  • The quality of their restoration help and how much coverage they provide if you become a victim.

Whichever service you choose, it’s more accurate to think of it as a safety net and response team for identity‑related fallout from scams and deepfakes, not as a shield that prevents bad actors from ever creating a fake of you.

Practical steps you can take

Even with a good identity protection service, your own habits still matter. To reduce the risk and impact of deepfake‑driven scams:

  • Be cautious with urgent requests, especially involving money or sensitive information — even if they appear to come from someone you know.
  • Use a “call back on a known number” rule: If you get a suspicious video or voice message, verify through a separate channel before acting.
  • Limit what you share publicly (videos, voice notes, personal details), since this is the raw material deepfake tools learn from.
  • Consider an identity protection service as part of your overall strategy, so if someone does misuse your identity, you’re not handling it alone.

Why IDShield? 


IDShield treats AI fraud as an identity problem, not just a tech problem
  • AI scams usually succeed by stealing or misusing your identity (accounts, credentials, SSN, images, voice), not just by tricking your device.

  • IDShield is built around ongoing identity monitoring, alerts, and restoration—not just antivirus—so it’s aligned with how AI fraud actually hurts real people in 2026.


Continuous monitoring for AI‑driven misuse of your data
  • IDShield watches credit, financial accounts, dark web markets, public records, and more for suspicious use of your information that may come from AI‑powered scams or data leaks.

  • As AI makes fraud faster and more automated, this “always‑on” monitoring helps catch problems early—before they snowball into full‑blown identity theft.


Fast alerts and real humans when something looks wrong
  • IDShield sends near‑real‑time alerts when it detects signs of fraud, giving you a chance to respond before bigger damage occurs.

  • Unlike purely automated tools, IDShield backs you with licensed private investigators who will actually do the restoration work on your behalf, not just give you a checklist.


Full‑scale restoration in an era of complex AI fraud
  • If an AI‑enabled scam leads to account takeovers, fraudulent loans, or synthetic identities built using your data, IDShield’s investigators work to restore your identity to pre‑theft status.

  • Plans include substantial identity theft insurance (up to around the multimillion‑dollar range) to help cover certain out‑of‑pocket costs tied to recovery.


Education and coaching for deepfakes and AI scams
  • IDShield produces up‑to‑date guidance on emerging AI scams, deepfake risks, and practical red‑flag training so members know what to look for before they click, answer, or wire money.

  • In a world where deepfake detection tech alone is unreliable, informed behavior plus monitoring and restoration support is one of the strongest defenses.


Honest positioning vs. “magic shield” claims
  • No consumer service can stop a scammer from creating a deepfake of you, but IDShield can help you detect resulting fraud faster and repair the damage with expert help.

  • The real value in 2026 is having a proactive identity safety net—monitoring, alerts, education, and hands‑on restoration—rather than a promise to “block” AI outright.

Note: This post was drafted with help from an AI assistant — and edited by a very human 82‑year‑old who has no intention of becoming the next victim. (wracton@gmail.com)