Discussions on social media do not always reflect the real opinion of society. They are increasingly becoming a field of coordinated activity for ‘bots’ and ‘trolls’, where artificial tension is created, fear is spread, trust in institutions is undermined, discord is fomented, and social conflict is deepened.

What are ‘bots’ and ‘trolls’? What are the fundamental principles and objectives of their activities? Why do these activities pose a threat to Lithuania and its citizens? Finally, how can coordinated inauthentic behaviour be identified online?

These and other fundamental questions about the activities of bots and trolls are answered by experts from the Lithuanian National Crisis Management Centre (NCMC) and Marius Laurinaitis, professor at the Mykolas Romeris University (MRU) Law School and LegalTech Centre.

Experts: bots aim to spread fear, distrust and discord 

The NCMC classifies the activity of both so-called ‘bots’ (short for ‘robots’) and ‘trolls’ as coordinated inauthentic behaviour, characterised by attempts to manipulate, distort, or artificially manufacture public opinion on certain issues through deceptive methods such as fake accounts, identity concealment, and automated comments or reactions.

The NCMC also notes that, in general, the main objectives of bots are to artificially shape public opinion, spread fear and mistrust, escalate sensitive issues, stir up discord, and influence democratic processes such as elections.

“So-called ‘bots’ and ‘trolls’ differ from real people in that they hide their identity and seek to imitate discussion and diversity of opinion on issues of public importance, creating the misleading impression that, for example, a particular issue has broad public support or disapproval,” the institution clarifies in a comment to the news portal Delfi.

Even a limited network of bots can have a disproportionately large impact on social fragmentation

In turn, M. Laurinaitis, professor at the MRU Law School and LegalTech Centre, adds that in Lithuania, the activities of bots pose a serious threat to public e-services, the financial sector, information security and critical infrastructure, especially in the current context of hybrid threats and geopolitical instability.

“Bots artificially amplify emotional and divisive content, distorting public discourse and undermining trust in institutions. We see that sensitive topics are systematically exploited through synchronised accounts, repetitive narratives, and sudden spikes in activity, which points to deliberate escalation,” he emphasises.

According to the MRU professor, even a limited network of bots can have a disproportionately large impact on social fragmentation and resistance in a small information space.

“In Lithuania, bot activity most often coincides with topics related to national security, the war in Ukraine, NATO, migration, vaccination, health crises, energy prices, inflation, education reforms, and cultural conflicts. These topics are inherently emotionally sensitive and prone to conflict,” he clarifies.

Professor Paulius Pakutinskas, head of the MRU Law School and LegalTech Centre, notes in a press release that the digital space in general is leaving less and less room for real people.

“After all, most people do not write aggressive comments on the internet, do not create hundreds of posts a day, and do not get involved in conflicts. That is precisely the kind of social media noise that bots thrive on. Bots, which never tire and always respond quickly, drown out real users, creating the false impression that society is angrier, more divided, and more radical than it actually is. We can see this in Lithuania’s social space as well,” he says.

The press release also draws attention to a study by the global technology and security company Thales, which revealed that automated bot activity accounted for 51% of all internet traffic in 2024, the first time bot-generated traffic has surpassed human activity on the internet. Within that share, harmful, often AI-based bots accounted for as much as 37% of all internet traffic, while the remainder consisted of neutral or useful bots, such as search engine crawlers.

“This process happens very quickly and is often unnoticed. Much of the content on the internet is already being created by bots – and this is not necessarily dangerous or false, misleading information. Often these are simple, entertaining videos or neutral content. However, it is becoming increasingly difficult for people to distinguish between what is real and what is artificial. This is gradually changing people’s relationship with information,” notes Pakutinskas.

Coordinated inauthentic behaviour: principles of bots and trolls

The NCMC explains that bots are characterised by complete automation, meaning that an account’s actions are fully programmed and confined to whatever the programmer’s code allows. An example is the so-called ‘chatbot’ used by various businesses to handle user queries.

“Accounts of this type on social networks are often characterised by unnatural language constructions, unusual grammatical errors, and strange punctuation, such as a period or comma in the middle of a word or missing spaces between words or sentences, as well as time intervals between actions that are not typical of human behaviour. Bots can have very specific functions: reacting, liking, sharing posts, or writing comments,” the institution clarifies.

“Sometimes it can be observed that these accounts, after posting one or two comments, do not respond any further, do not participate in discussions, and do not reply to private messages. However, when a specific keyword is used, such as a public figure’s surname, an abbreviation, or a name, they activate and generate a template response,” adds the NCMC.
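The mechanism the NCMC describes can be illustrated with a minimal, hypothetical sketch in Python: a fully automated account stays silent until a monitored keyword appears and then posts a canned reply. The keyword list, reply templates, and the post_reply helper below are illustrative assumptions only, not code from any real operation.

```python
# Minimal illustrative sketch of the keyword-triggered "template response"
# behaviour described above. Keywords, templates, and post_reply() are
# hypothetical placeholders.
import random

TRIGGER_KEYWORDS = {"nato", "election", "minister"}   # assumed watchlist
TEMPLATE_REPLIES = [                                  # assumed canned texts
    "Everyone already knows the truth about this!",
    "The government is hiding something again.",
]

def post_reply(comment_id: str, text: str) -> None:
    # Stand-in for a call to a social platform API.
    print(f"replying to {comment_id}: {text}")

def handle_comment(comment_id: str, text: str) -> None:
    """Stay silent unless a trigger keyword appears, then fire a template."""
    words = set(text.lower().split())
    if TRIGGER_KEYWORDS & words:
        post_reply(comment_id, random.choice(TEMPLATE_REPLIES))
    # Otherwise do nothing: no discussion, no replies to messages, matching
    # the "posts a comment or two and goes quiet" pattern described above.

handle_comment("c-1", "New NATO exercise announced in Lithuania")
```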

According to the institution, trolls are accounts that operate independently or within a network of other accounts and are controlled by a real person seeking to provoke a negative reaction from users or simply to mislead them.

“The fundamental difference between a troll and a bot is that the troll is not programmed, i.e., it behaves in the same way as an ordinary user or real person, but differs from one in its desire to conceal its true identity and its intention to cause harm, for example, to shame, insult, or silence opinions that are unfavourable to the people who control the troll accounts. Trolls can have and manage several different accounts, but their scale is usually small (approximately 50–100 accounts), so their activities are directed specifically at escalating one or a few issues and pitting opinions against each other,” explains the NCMC in its commentary.

The Centre’s experts also note that, from a data collection perspective, the distinction between so-called bots and trolls matters mainly at the technical level, where it helps to identify networks of related inauthentic accounts and the specific actors involved in these harmful activities. However, according to them, classifying individual accounts as bots or trolls should not be the main task for ordinary users.

“It is more important to pay attention to the phenomenon of coordinated inauthentic behaviour, the behaviour of accounts, and to report it to social platform administrators, because the activity itself, whether carried out by a troll or a bot, is inherently problematic and damages public discourse, democratic values, and society’s ability to discuss important issues,” emphasizes the NCMC.

Involvement of social media platforms themselves should be much greater 

MRU professor Laurinaitis says that existing legal and technological mechanisms do allow bots and coordinated inauthentic behaviour to be combated, but they are insufficient to ensure systematic and complete prevention, transparency, and a rapid response from institutions.

“Current technological solutions are finding it increasingly difficult to distinguish automated activity from human behaviour, and legal procedures often lag behind the actual threats and their dynamics,” he says.

Laurinaitis also emphasises that social networks themselves must take responsibility for the activities of inauthentic accounts and the damage they cause by implementing continuous risk management, ensuring the transparency of the algorithms used, strengthening the authenticity of accounts, and cooperating with state institutions and investigators.

“Why is the law lagging behind? Why is technology no longer keeping pace? One problem is generative AI: bots already write in natural language that is no longer easy to distinguish from real human speech. Another problem is hybrid accounts, which are half human, half automated. Privacy boundaries also matter in this context, since platforms cannot analyse user behaviour without restriction. Economic interests matter as well: the activity generates advertising revenue, so the platforms themselves have no interest in limiting this phenomenon,” he clarifies.

The NCMC, for its part, notes that although social network administrators are showing interest and a willingness to cooperate in preventing disinformation, in the view of the Centre and its analysts, their involvement in stopping inauthentic activities, i.e. bots and trolls, should be much greater.

What should social network users pay attention to?

The NCMC identifies the signs that most often reveal bot activity and advises residents on what to pay attention to when evaluating comments and profiles, so that coordinated or manipulative activity can be recognised and its impact reduced. A simple illustrative sketch of how several of these signals might be combined follows the list.

  • Pay attention to the account. If the profile is new, has no personal photos or posts, and only comments on politics or controversial topics, it is suspicious.
  • Observe its behaviour. Bots often comment very quickly, en masse, and on many different posts.
  • Look for repetitions. If many accounts are posting almost identical text, this is a sign of coordinated activity.
  • Evaluate the objective. If commentators are not discussing but are inciting anger and panic and disparaging institutions, this is manipulation.
  • Be careful with links. Especially if they urge you to “click now”, “share now”, or promise to reveal “the hidden truth.”
  • Do not rush to share. Emotion is the first signal to stop. Check the information in other sources.
  • Do not feed the algorithm. Do not get involved in arguments – the more reactions, the more the content spreads.
  • Report it to the platform. If you see coordinated behaviour, use the Report function.
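As a purely illustrative aid, several of the checklist signals above can be combined into a rough rule-of-thumb score. The sketch below is not an NCMC tool; the field names and thresholds are assumptions chosen for demonstration only.

```python
# Illustrative heuristic that scores a profile against the checklist above.
# Field names and thresholds are assumptions for demonstration only.
from dataclasses import dataclass, field

@dataclass
class Profile:
    account_age_days: int
    has_personal_posts: bool
    comments_per_hour: float
    topics: set = field(default_factory=set)   # e.g. {"politics", "migration"}
    urgent_link_phrases: int = 0               # occurrences of "click now" etc.

HOT_TOPICS = {"politics", "war", "migration", "vaccines", "energy"}

def suspicion_score(p: Profile) -> int:
    """Count how many checklist signals the profile triggers (0-4)."""
    score = 0
    if p.account_age_days < 30 and not p.has_personal_posts:
        score += 1                             # new, empty profile
    if p.topics and p.topics <= HOT_TOPICS:
        score += 1                             # comments only on "hot" topics
    if p.comments_per_hour > 20:
        score += 1                             # mass, very fast commenting
    if p.urgent_link_phrases > 0:
        score += 1                             # "click now" / "share now" links
    return score

example = Profile(account_age_days=5, has_personal_posts=False,
                  comments_per_hour=45.0, topics={"politics"},
                  urgent_link_phrases=2)
print(suspicion_score(example))  # 4: worth reporting via the Report function
```

A high score does not prove an account is a bot; it simply flags a profile that matches several of the signs the NCMC lists and may be worth reporting.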

The NCMC also highlights the five most prominent “red flags” that indicate coordinated bot activity on social networks; a short illustrative sketch of how two of these signals can be checked follows the list.

  1. Identical comments from different profiles. Same text, same words – different names.
  2. Empty profile + political comments. No personal content, only “hot” topics.
  3. Unnatural activity. Comments every few seconds, even at night.
  4. Emotional slogans. “Everyone already knows”, “the government is lying”, “tomorrow will be too late”.
  5. Suspicious links. Unknown websites, petitions, “shocking” videos.
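The first and third red flags, identical comments from different profiles and unnaturally rapid bursts of activity, can also be sketched in code. The comment data, format, and thresholds below are hypothetical; real coordination analysis is considerably more involved.

```python
# Illustrative check for two red flags: identical comments from different
# profiles and machine-like bursts of activity. Data and thresholds are
# fabricated for demonstration.
from collections import defaultdict

comments = [  # (author, unix_timestamp, text) - made-up example data
    ("user_a", 1700000000, "Everyone already knows the government is lying!"),
    ("user_b", 1700000003, "Everyone already knows the government is lying!"),
    ("user_c", 1700000006, "everyone already knows the government is lying"),
]

def normalise(text: str) -> str:
    """Lower-case and strip punctuation so near-identical texts collide."""
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

# Red flag 1: same text, same words - different names.
by_text = defaultdict(set)
for author, _, text in comments:
    by_text[normalise(text)].add(author)
for text, authors in by_text.items():
    if len(authors) >= 3:
        print(f"identical comment from {len(authors)} profiles: {text!r}")

# Red flag 3: comments posted only seconds apart.
timestamps = sorted(ts for _, ts, _ in comments)
gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
if gaps and max(gaps) <= 10:
    print("burst: all comments posted within seconds of each other")
```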
