It’s time to teach our parents how to spot AI fakes



By Ryan General
As artificial intelligence (AI) tools develop at an unprecedented rate, it is becoming increasingly difficult for people to tell synthetic faces from real ones. Participants in a recent study by researchers from the University of Reading and partner institutions correctly identified AI-generated faces only 31% of the time, even when looking for obvious flaws.
The findings suggest that untrained viewers miss the very visual cues scammers and misinformation spreaders rely on, making it essential for older adults, including our own parents, to learn to recognize those patterns.
Knowing where to look
The University of Reading study tested 664 people on their ability to distinguish real faces from images generated by StyleGAN3, a widely used neural network that produces highly realistic synthetic portraits. Typical participants identified the synthetic faces correctly only 31% of the time, while super-recognizers (individuals with unusually strong face recognition ability) reached just 41%, showing how closely current AI systems can mimic natural features.
When researchers gave a short tutorial pointing out common trouble spots, such as uneven lighting or items that blend into the background, accuracy improved, with typical viewers reaching 51% and the strongest recognizers 64%. Study lead Katie Gray said, “Computer generated faces pose genuine security risks. They have been used to create fake social media profiles, bypass identity verification systems and create false documents.”
AI-powered crime networks are here
One reason identifying fakes is so important is that organized crime groups are adopting AI to increase the scale and efficiency of cybercrime, to which older adults are particularly susceptible. A technical brief from the United Nations Office on Drugs and Crime reported that criminal groups in Southeast Asia are using AI-generated faces, cloned voices and synthetic identities to support large-scale fraud.
These automated tools are increasingly responsible for phishing, malware distribution and illicit financial transfers. The brief also reports that these scammers have begun integrating multilingual chatbots, automated outreach systems and coerced labor into their operations to target victims around the world.
U.S. officials estimate that Americans lost “tens of billions” to scam operations based in the region last year. The data also shows that older adults make especially attractive financial targets and experience significantly higher losses when deceived.
Fake news in the AI era
A recent study by the University of Colorado Boulder and the Rochester Institute of Technology found that adults aged 55 and older were significantly more likely to share inaccurate political headlines than younger users. The research, which surveyed nearly 2,500 adults in the U.S. and Brazil, reported that older participants were more likely to treat misleading headlines as true.
Even 79-year-old President Donald Trump, who is surrounded by professional advisers and media staff, recently circulated an AI-generated video promoting so-called “medbeds,” a fictitious healing device whose claims have circulated for years within wellness and tech hoax communities. The fact that a high-profile figure with extensive support can be misled by synthetic content underscores how challenging detection can be for older adults who navigate these messages without similar resources.
Guidance for older adults
Asian American seniors often manage family savings, remittances or multigenerational messaging groups, which gives them a central role in financial and communication decisions. These responsibilities increase the likelihood that they will respond quickly when a familiar name or voice appears to ask for help, and the same urgency applies when they encounter headlines or posts that seem credible. As more of these interactions move to digital channels, seniors are left with far fewer signals to tell when a familiar face or voice is not genuine.
The training approach used in the study offers a practical model families can adapt at home. Begin by showing a real face and an AI-generated one and asking what looks unusual. Point to details such as mismatched teeth, blended background edges or lighting that does not align, since these inconsistencies appear in many synthetic images.
Encourage parents to pause when a message or image seems urgent and to confirm identity through a separate channel before responding. The goal is to build a routine of checking rather than perfect accuracy; regular practice makes it easier for older relatives to notice when something feels off.
This story is part of The Rebel Yellow Newsletter — a bold weekly newsletter from the creators of NextShark, reclaiming our stories and celebrating Asian American voices.