Game of Hide and Seek

Photo: Artificial Hand. Credit: Possessed Photography/Unsplash

Future of AI-Fueled Non-Authentic Behavior Online

Walter Schmidt. The profile looked like that of any other middle-aged father of two from the Dresden area in Germany. Despite this first impression, NADS (Non-Authentic Defence System), AI-driven software deployed by NATO to identify non-authentic behavior online, had tagged it as highly suspicious, assigning a 73% probability that it was a bot. Ginny Berwick, an officer in NATO's disinformation unit, sighed. Since bots had been merged with advanced AI and machine-learning capabilities, it had become increasingly difficult to distinguish them from real profiles.

Problems began to arise around mid-2025, when secret services learned of a new and powerful cyber instrument called simply AIB (Artificial Intelligence Bots). AIB most likely originated in Eastern Europe and dramatically increased bots' ability to camouflage their activities. The software proliferated rapidly, and soon a number of rogue international actors were using it to shift public opinion in target countries.

It became a game of cat and mouse. With the new system, bots could build extremely sophisticated networks that helped them remain undetected. What is more, machine learning let their tactics evolve continuously, reacting to NATO members' attempts to fight them and forcing the constant development of new anti-bot strategies to limit their numbers.

Previous ways of identifying inauthentic behavior rapidly became obsolete. Bots began using untraceable profile images and surrounded themselves with seemingly credible “friends”. AIB also introduced an automatic messaging system: bots now held conversations with one another on various subjects, making it even harder for researchers to prove they were inauthentic. Worst of all was their patience, far greater than that of bots from just a few years earlier. They no longer spammed dozens of comments to provoke heated exchanges with the target audience, nor flooded their profiles with propagandist content. Instead, they adopted a “lingering” approach, slowly but surely spreading their misleading messages.
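To see why these changes broke existing defenses, consider a sketch of the static, rule-based scoring that pre-AIB detectors might have relied on. Nothing below comes from the fictional NADS system; every field name, weight, and threshold is invented for illustration, but each rule maps to a signal the story says AIB bots learned to neutralize:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Hypothetical account signals; all fields are invented for this sketch."""
    image_found_elsewhere: bool   # reverse image search finds the profile photo on other sites
    credible_friend_ratio: float  # fraction of connections that look authentic, 0.0 to 1.0
    posts_per_day: float          # posting cadence
    propaganda_share_rate: float  # fraction of posts pushing propagandist content, 0.0 to 1.0

def bot_probability(p: Profile) -> float:
    """Crude additive heuristic: each classic bot tell adds weight to the score."""
    score = 0.0
    if p.image_found_elsewhere:
        score += 0.25  # reused or stolen photos were a tell; untraceable images defeat this check
    score += 0.30 * (1.0 - p.credible_friend_ratio)  # thin or fake friend networks
    if p.posts_per_day > 50:
        score += 0.25  # spam-like cadence; a "lingering" bot posts rarely
    score += 0.20 * p.propaganda_share_rate  # overt propaganda sharing
    return min(score, 1.0)

# An AIB-style bot defeats every rule above and scores as nearly authentic:
walter = Profile(image_found_elsewhere=False, credible_friend_ratio=0.9,
                 posts_per_day=3.0, propaganda_share_rate=0.05)
print(f"Estimated bot probability: {bot_probability(walter):.2f}")  # prints 0.04
```

Once the adversary's behavior is itself generated by a learning system, any fixed weighting like this decays quickly, which is why, in the scenario, defenders like Berwick are left weighing opaque probability scores rather than clear-cut evidence.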

After a few minutes of deliberation, officer Ginny Berwick finally decided what to do about the Schmidt profile and clicked the red “suspend” button on her screen. “Are you sure you want to complete this action?” popped up. She hesitated for a moment, but eventually confirmed her decision.

The social media platform immediately received the request and, trusting NATO’s analysis, approved it, temporarily disabling the account. Walter Schmidt soon complained on other platforms that he was a real person and an anti-Western political activist. His case was quickly shared by a number of actual bots to showcase “western censorship” and fuel anti-NATO sentiment. The platform later apologized and made the account accessible again.

In the end, two questions remain unanswered: was Walter Schmidt a real person or a bot? And either way, would it make any difference in this scenario?