Generative AI, once largely the domain of research laboratories, is now ubiquitous, offering powerful tools not only to creatives and innovators but also to those seeking to deceive.
The proliferation of deepfake technology, capable of synthesizing highly realistic videos, images, and audio, has made social media fertile ground for novel forms of deception.
Beyond celebrity impersonations or political misinformation, a new trend leverages AI avatars falsely presenting themselves as licensed medical professionals to sell dubious supplements and unverified health products.
AI-Powered Deepfakes Pose Growing Risks on Social Media Platforms
Security researchers from ESET in Latin America recently uncovered a coordinated campaign on platforms such as TikTok and Instagram, where AI-generated avatars masquerade as gynecologists, dietitians, and other healthcare experts.

These avatars dispense health and wellness advice, often accompanied by polished video production and an authoritative tone designed to instill trust.
The content routinely steers viewers toward specific commercial products, masking marketing motives as credible medical guidance.
This exploitation of public trust in the medical profession is not only unethical but alarmingly effective.
Analysis of these campaigns reveals a recurring playbook: an AI-generated talking head offers beauty or health tips from a corner of the screen, recommending “natural” remedies or lifestyle changes.
These pitches typically direct users to online marketplaces, such as Amazon, to purchase supplements that are deceptively described with terms like “relaxation drops” or “anti-swelling aids.”
In one instance, a purported doctor promotes a “natural extract” as a superior alternative to Ozempic (the prescription drug popularly associated with weight loss), making unsubstantiated claims of miraculous results.
Closer inspection of both the products and the avatars exposes a lack of legitimate medical backing and, in some cases, the unauthorized use of real doctors’ likenesses to lend further credibility to the pitches.
Deceptive Medical Advice Fuels Misinformation and Fraud
The technical ease with which these deepfakes can be created is especially concerning.
Commercial AI tools, which typically allow users to generate realistic avatars and synthesize speech from text or short video samples, are being repurposed to scale up the production of deceptive content.
ESET researchers identified over 20 TikTok and Instagram accounts involved in peddling these AI-powered scams.
One account, which impersonated a gynecologist claiming more than a decade of experience, was traced directly to avatar libraries publicly available within popular AI applications.
This misuse flagrantly violates the terms of service of AI platforms, yet enforcement struggles to keep pace with the speed and sophistication of emerging tactics.
The ramifications extend beyond financial harm to consumers: the widespread dissemination of false health information erodes confidence in legitimate medical advice online and may delay individuals’ access to appropriate treatment.
Vulnerable users seeking trustworthy guidance may fall victim to ineffective or even hazardous remedies, amplifying the public health risks associated with digital misinformation.
While advances in AI make the identification of deepfakes increasingly challenging, there remain some telltale signs of manipulation.
Technical artifacts such as mismatched lip-syncing, unnatural facial expressions, visual glitches around edges, and synthetic-sounding voices can sometimes reveal a video’s inauthenticity.
Additionally, new social media accounts with minimal history, hyperbolic health claims, or a lack of credible citations should be treated with skepticism.
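To make these heuristics concrete, the sketch below scores a post against the red flags described above. It is a minimal illustration in Python, not a production detector: the AccountPost fields, keyword list, and thresholds are hypothetical assumptions for this example, not any platform’s actual schema, and real moderation systems combine far richer signals.

```python
from dataclasses import dataclass

# Hypothetical post metadata; field names are illustrative assumptions,
# not any platform's actual API schema.
@dataclass
class AccountPost:
    account_age_days: int   # how long the account has existed
    post_count: int         # total posts from the account
    caption: str            # text accompanying the video
    cites_sources: bool     # links to studies or medical bodies

# Hyperbolic phrasing often seen in dubious health pitches
# (assumed keyword list, for illustration only).
HYPE_TERMS = ("miracle", "cure", "instant results", "better than ozempic")

def red_flag_score(post: AccountPost) -> int:
    """Count the heuristic red flags described in the article.

    A higher score means more signals of a potentially deceptive
    health pitch; the thresholds here are arbitrary examples.
    """
    score = 0
    if post.account_age_days < 30:   # brand-new account
        score += 1
    if post.post_count < 5:          # minimal posting history
        score += 1
    caption = post.caption.lower()
    if any(term in caption for term in HYPE_TERMS):  # hyperbolic claims
        score += 1
    if not post.cites_sources:       # no credible citations
        score += 1
    return score

# Example: a week-old account pushing a "miracle" supplement with no sources.
post = AccountPost(account_age_days=7, post_count=2,
                   caption="Miracle natural extract, instant results!",
                   cites_sources=False)
print(red_flag_score(post))  # 4 -> treat with heightened skepticism
```

Such a score is only a triage aid for prioritizing skepticism; it cannot confirm a deepfake and should complement, not replace, human review and platform-level detection.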
Experts urge users to verify medical claims against trusted resources, refrain from sharing unvetted content, and promptly report misleading material to platform moderators.
As generative AI evolves, distinguishing genuine expertise from sophisticated digital fakery will require both technological countermeasures and heightened digital literacy.
Without these safeguards, the threat posed by AI-generated disinformation, including the emerging “TikDocs” phenomenon, stands to undermine public trust in vital online health resources and enable a new frontier of scams.