Introduction
For years, we’ve let algorithms determine what we see and how we see it. Now, with artificial intelligence in the hands of the same forces that once profited from Nigerian prince scams and crypto frauds, the vulnerability of the human mind has never been more apparent. The days of bots merely spamming scam messages are over—today, they interact like humans, subtly influencing opinions, preferences, and beliefs over time.
From Spam to Subtle Influence
It used to be easy to spot a bot. Poor grammar, generic messages, and obvious scams made them laughably transparent. But today’s AI is different. Large Language Models (LLMs) can now craft detailed, personalized responses that pass as genuine human interaction. Instead of dumping links or spamming nonsense, AI-driven bots are engaging in conversations, building trust, and nudging public opinion in nearly undetectable ways.
Some key developments:
- AI-powered political discourse: Studies show that bot accounts make up a significant portion of the online conversation during major political events. During Trump’s first impeachment, bots represented less than 1% of users but contributed over 31% of impeachment-related content (arXiv).
- Bots shaping financial markets: Crypto trading groups on platforms like X (formerly Twitter) and Discord are flooded with AI-generated commentary, fueling hype cycles and panic selling. Bots accounted for almost 80% of crypto-related discussions on social media during the Bitcoin boom.
- OnlyFans and AI companionship: Subscribers are paying thousands of dollars to chat with creators, unaware that the replies they receive are AI-generated. Platforms like ChatPersona have over 6,000 creators using AI to automate interactions with subscribers.
AI in the Hands of Manipulators
AI technology is not neutral. It is a tool, and like any tool, its impact depends on who wields it. The same forces that once used mass spam emails to scam the gullible now use sophisticated AI to manipulate discourse on a global scale. Bot farms, once used to artificially inflate likes and views, are now embedded in comment sections, guiding conversations and slowly shifting narratives.
Examples of AI-driven manipulation:
- Election interference: A Russian bot farm recently used AI-generated American profiles to push political propaganda, successfully deceiving thousands before being exposed.
- False consensus creation: AI bots upvote and respond to specific viewpoints, making them appear more widely accepted than they actually are.
- Social engineering at scale: Instead of pushing blatant disinformation, AI bots now blend in with real users, subtly influencing opinions over time.
The Vulnerability of the Human Mind
Humans are social creatures, and those around us shape our perceptions. But what happens when the ‘people’ we interact with aren’t people at all? Studies on human-AI interaction show that people struggle to differentiate between human and AI-generated responses, and 58% fail to correctly identify AI bots in political discussions (Notre Dame).
This is no longer just about fake engagement—it’s about engineered perception.
- When AI bots push a particular viewpoint, it creates the illusion that this perspective is widely held.
- When they engage in slow, conversational persuasion, they influence opinions over time.
- When they flood the discourse, they drown out dissenting voices.
Conclusion: Taking Back Control
The rise of AI-driven manipulation forces us to be more skeptical about what we see and who we engage with online. To counteract this:
- Question engagement metrics – High likes and comments don’t always mean organic interest.
- Look for patterns – If many accounts push the same message in slightly different wording, the activity may be coordinated.
- Engage critically – Avoid letting online interactions be your sole source of truth.
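The "look for patterns" step above can be sketched as a naive text-similarity check: flag messages from different accounts that are suspiciously close to identical. This is only an illustrative toy, not a real detection method; the sample posts, the 0.8 threshold, and the function names are assumptions for the sake of the example, and real coordinated campaigns use paraphrasing that defeats simple string matching.

```python
# Toy sketch: flag near-duplicate messages from different accounts,
# a naive signal of possible coordination. Threshold, data, and
# function names are illustrative assumptions, not a real method.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Text similarity ratio in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_coordinated(posts, threshold=0.8):
    """posts: list of (account, message) pairs.
    Returns (account1, account2, score) for suspiciously similar pairs."""
    flagged = []
    for (acct1, msg1), (acct2, msg2) in combinations(posts, 2):
        score = similarity(msg1, msg2)
        if acct1 != acct2 and score >= threshold:
            flagged.append((acct1, acct2, round(score, 2)))
    return flagged

posts = [
    ("user_a", "This policy is a disaster for everyday families."),
    ("user_b", "This policy is a total disaster for everyday families!"),
    ("user_c", "Had a great time at the park today."),
]
print(flag_coordinated(posts))
```

Even this crude check illustrates the principle behind the advice: one opinion echoed near-verbatim by many "different" users is a pattern, not a consensus.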
The digital world is no longer just curated by algorithms—AI-driven conversations actively shape it. Recognizing this reality is the first step in reclaiming control over our own perceptions.

