1) Algorithmic Amplification: “What spreads” becomes “what seems true.”
On major platforms, the most consequential editorial decisions are often made by recommendation and ranking systems, not human editors.
A 2024 observational study of X (Twitter) found that posts linking to low-credibility domains generated more impressions in aggregate than comparable posts, with amplification especially pronounced among high-engagement, high-follower accounts and for high-toxicity content.
Takeaway: When algorithms reward engagement, distortion spreads faster than nuance, and what spreads starts to feel true.
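To make the mechanism concrete, here is a minimal sketch of engagement-weighted ranking in Python. It is illustrative only, not any platform's actual formula; the weights, the Post fields, and the credibility score are assumptions.

```python
# Minimal sketch of engagement-weighted ranking (illustrative only; not any
# platform's actual formula). Weights and post fields are assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    reposts: int
    replies: int
    source_credibility: float  # assumed 0.0-1.0 rating, e.g., from a ratings provider

def engagement_score(post: Post) -> float:
    # Pure engagement signal: credibility plays no role in the score.
    return post.likes + 3.0 * post.reposts + 2.0 * post.replies

posts = [
    Post("Nuanced explainer with sources", likes=120, reposts=10, replies=15, source_credibility=0.9),
    Post("Outrage bait from a low-credibility site", likes=400, reposts=220, replies=300, source_credibility=0.2),
]

# Ranking by engagement alone surfaces the low-credibility post first.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):7.1f}  cred={p.source_credibility}  {p.text}")
```

A real system blends far more signals, but the sketch captures the incentive the study describes: engagement, not credibility, decides reach.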
2) Coordinated Inauthentic Behavior (CIB): Fake “publics” that simulate consensus
Influence campaigns increasingly work by manufacturing the appearance of organic agreement—using coordinated networks of fake or deceptive accounts to seed narratives, amplify them, and make them look “popular.”
Meta’s regular threat reporting documents repeated takedowns of such networks across regions and languages; the company defines CIB as coordinated efforts to manipulate public debate in which fake accounts are central to the operation.
Takeaway: If consensus is cheap to fabricate, “everyone is saying” becomes a tactic—not evidence.
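One common research heuristic for surfacing this kind of coordination is to look for many distinct accounts posting near-identical text within a short time window. The sketch below is a toy version of that idea; the post tuples, normalization, and thresholds are all assumptions, not a production detector.

```python
# Toy coordination heuristic (a sketch, not a production detector): flag
# clusters of near-identical posts from many distinct accounts within a
# short time window. The example posts and thresholds are assumptions.
from collections import defaultdict
from datetime import datetime

posts = [
    ("acct_a", "2024-05-01T12:00:05", "Everyone is saying candidate X lied!"),
    ("acct_b", "2024-05-01T12:00:09", "everyone is saying Candidate X lied"),
    ("acct_c", "2024-05-01T12:00:14", "Everyone is saying candidate X lied!!"),
    ("acct_d", "2024-05-03T09:30:00", "I baked bread today."),
]

def normalize(text: str) -> str:
    # Collapse case and punctuation so trivially edited copies cluster together.
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

clusters = defaultdict(list)
for account, ts, text in posts:
    clusters[normalize(text)].append((account, datetime.fromisoformat(ts)))

MIN_ACCOUNTS = 3        # assumed threshold
MAX_WINDOW_SECONDS = 60  # assumed threshold

for text, items in clusters.items():
    accounts = {a for a, _ in items}
    times = sorted(t for _, t in items)
    if len(accounts) >= MIN_ACCOUNTS and (times[-1] - times[0]).total_seconds() <= MAX_WINDOW_SECONDS:
        print(f"Possible coordination: {len(accounts)} accounts posting '{text}'")
```

Real investigations layer many more signals (account creation dates, shared infrastructure, posting rhythms), but the core logic is the same: organic agreement rarely arrives in identical words at identical times.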
3) “Pink Slime” Local News: Partisan or pay-for-play outlets wearing a hometown mask
One of the most effective forms of manipulation isn’t national—it’s local.
Research and reporting from the Tow Center/CJR describe “pink slime” as content that mimics local journalism while obscuring funding, intent, and authorship; CJR’s 2024 investigation traced millions in political spending flowing into an extensive network of such sites.
A 2024 report noted that the number of “pink slime” sites identified by NewsGuard roughly rivaled the count of genuine local daily newspaper sites—highlighting how the decline of local news creates a vacuum for imitation.
Takeaway: The most persuasive propaganda looks like the neighborly newsroom you miss.
4) Native Advertising & “Sponsored” Storytelling: Ads that borrow journalistic authority
A subtler manipulation technique is deceptively formatted advertising—marketing content designed to resemble reporting, reviews, or features.
The FTC explicitly warns that “native advertising” can mislead when readers can’t readily distinguish ads from editorial content, emphasizing that disclosures must be clear and conspicuous based on the net impression of the format.
Takeaway: If the label is easy to miss, persuasion gets to pose as information.
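As a rough illustration of the detection problem, the sketch below checks whether a page carries a disclosure label near the top of its text. It is a toy heuristic: the label list and the character cutoff are assumptions, and the FTC's actual "net impression" standard turns on layout and prominence that plain text cannot capture.

```python
# Sketch of a disclosure check (illustrative only). The label list and the
# "first 300 characters" cutoff are assumptions; visual prominence, placement,
# and typography matter in practice and are not modeled here.
DISCLOSURE_LABELS = ("sponsored", "paid content", "advertisement", "paid partnership")

def has_prominent_disclosure(page_text: str, window: int = 300) -> bool:
    head = page_text[:window].lower()
    return any(label in head for label in DISCLOSURE_LABELS)

article_like_page = (
    "Ten reasons this mattress changed my life\n"
    "By Staff Writer\n"
    "When I first tried the product, I was skeptical..."
)

if not has_prominent_disclosure(article_like_page):
    print("No disclosure label found near the top; the format may read as editorial.")
```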
5) AI Content Farms: Industrial-scale “news” with little or no human oversight
Generative AI has made it dramatically cheaper to mass-produce credible-sounding articles—often for ad revenue—creating a new supply chain for misinformation and “garbage-in, garbage-out” journalism.
NewsGuard reports identifying thousands of undisclosed AI-generated news and information sites operating with minimal human oversight, often using generic names that appear legitimate and sometimes publishing false claims.
These sites can be economically sustained through programmatic advertising, which can place mainstream brand ads regardless of the site’s quality—creating incentives to scale the model.
Takeaway: When “articles” become cheap to manufacture, information pollution becomes a business model.
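One coarse signal that has appeared in public reporting on such sites is leftover chatbot boilerplate published verbatim. The sketch below scans article text for a few such phrases; the phrase list is an assumption, it catches only the sloppiest cases, and the absence of a match proves nothing about how a page was produced.

```python
# Sketch of one coarse signal for unedited AI output in published articles:
# leftover chatbot boilerplate. The phrase list is an assumption and only
# flags the sloppiest cases; no match does not mean human-written.
AI_BOILERPLATE = (
    "as an ai language model",
    "i cannot fulfill this request",
    "as of my knowledge cutoff",
    "i'm sorry, but i can't",
)

def leftover_ai_boilerplate(article_text: str) -> list[str]:
    lowered = article_text.lower()
    return [phrase for phrase in AI_BOILERPLATE if phrase in lowered]

sample = "Local officials met Tuesday. As an AI language model, I cannot predict the outcome."
print(leftover_ai_boilerplate(sample))  # ['as an ai language model']
```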
6) Synthetic Audio & Deepfake Persuasion: The weaponization of “I heard it myself.”
Deepfakes exploit a cognitive shortcut: people tend to treat audio and video as stronger evidence than text.
In the U.S., authorities investigated AI-generated robocalls that used a cloned voice to discourage voting in New Hampshire’s January 2024 primary; the state Attorney General described the calls as using an AI-generated voice clone and urged voters to disregard them.
The FCC subsequently clarified that AI-generated voices qualify as “artificial” under the TCPA and took enforcement action against deepfake election robocalls.
Internationally, the widely cited Slovak case involved a viral fake audio clip released just before the country's elections, illustrating how timing, low-trust environments, and distribution channels can amplify impact even when authenticity is disputed.
Takeaway: In the deepfake era, “seeing is believing” becomes “seeing is being targeted.”
7) Cloned Websites & Brand Impersonation: When the “map” counterfeits the territory
Some operations don’t argue with mainstream media—they forge it.
The Russia-linked “Doppelgänger” campaign has been documented as using spoofed domains and cloned websites that mimic legitimate outlets and institutions, coupled with social distribution tactics to drive traffic and legitimacy.
EU DisinfoLab maintains a running timeline of public reporting on the operation across multiple platforms and investigations, reflecting the persistence and evolution of these tactics.
Takeaway: When counterfeit media looks authentic, credibility becomes a commodity that can be stolen.
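One layer of defense can be illustrated with a lookalike-domain check against a short allowlist of known outlet domains. The sketch below uses similarity scoring from Python's standard library; the allowlist, threshold, and example domains are assumptions, and real spoofed domains may also swap top-level domains or use homoglyphs that a plain string comparison misses.

```python
# Sketch of a lookalike-domain check (illustrative). The allowlist, threshold,
# and suspect domains are assumptions; real spoofs also abuse alternate TLDs
# and homoglyph characters that simple similarity scoring can miss.
from difflib import SequenceMatcher

KNOWN_OUTLETS = ["spiegel.de", "lemonde.fr", "washingtonpost.com"]

def closest_outlet(domain: str) -> tuple[str, float]:
    scored = [(outlet, SequenceMatcher(None, domain, outlet).ratio()) for outlet in KNOWN_OUTLETS]
    return max(scored, key=lambda pair: pair[1])

for suspect in ["spiegel.ltd", "lemonde.ltd", "example-news.org"]:
    outlet, similarity = closest_outlet(suspect)
    if similarity > 0.75 and suspect != outlet:
        print(f"{suspect} closely resembles {outlet} (similarity {similarity:.2f})")
```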
8) Encrypted Messaging as a “Dark Social” Distribution Layer
Even when public platforms label or remove content, narratives can continue to spread via closed or encrypted channels where moderation and public visibility are limited.
Research commentary on election misinformation highlights how encrypted messaging apps can play an outsized role in influence operations because content circulates through trusted interpersonal networks rather than public feeds.