Social bot

A social bot, also described as a social AI or social algorithm, is a software agent that communicates autonomously on social media. The messages (e.g. tweets) it distributes can be simple, and bots can operate in groups and various configurations with partial human control (hybrid bots) via algorithm. Social bots can also use artificial intelligence and machine learning to express messages in more natural human dialogue.

Uses

Some of the uses for social bots are:

  • To persuade people, e.g. to advertise a product, support a political campaign, or boost social media engagement.[1] Messages with similar content can influence fads or trends.[2]
  • To offer affordable customer service agents or automatic responses to frequently asked questions on social media platforms like Discord.

Bots can also be used for algorithmic curation, algorithmic radicalization, or influence-for-hire, a term that refers to the selling of accounts on social media platforms.

History

Bots have coexisted with computer technology since its creation, and social bots rose in popularity alongside social media. Besides being able to (re)produce or reuse messages autonomously, social bots share many traits with spambots in their tendency to infiltrate large user groups.[3] Artificial Social Networking Intelligence (ASNI) refers to the application of artificial intelligence within social networking services and social media platforms, and it is expected to evolve rapidly.

Twitterbots are already well-known examples, but corresponding autonomous agents on Facebook and elsewhere have also been observed. Using social bots is against the terms of service of many platforms, such as Twitter and Instagram, although it is allowed to some degree by others, such as Reddit and Discord. Even for social media platforms that restrict social bots, a certain degree of automation is intended by making social media APIs available. Social media platforms have also developed their own automated tools to filter out messages that come from bots, although they cannot detect all bot messages.[4]

[Image: Twitter bots posting similar messages during the 2016 United States elections]

Due to the difficulty of recognizing social bots and separating them from "eligible" automation via social media APIs, it is unclear how legal regulation can be enforced. Social bots are expected to play a role in the future shaping of public opinion by acting autonomously as incessant influencers. Social bots have already been used to manipulate public opinion (especially in a political sense), manipulate stock markets, spread advertisements, and support malicious spear-phishing attempts.[5]

Detection

The first generation of bots could sometimes be distinguished from real users by their often superhuman capacity to post messages. Later developments have succeeded in imprinting more "human" activity and behavioral patterns on the agents. With enough bots, it might even be possible to achieve artificial social proof. To unambiguously identify social bots, a variety of criteria[6] must be applied together using pattern detection techniques, some of which are:[7]

  • cartoon figures as user pictures
  • profile pictures captured from random real users (identity fraud)
  • reposting rate
  • temporal patterns[8]
  • sentiment expression
  • followers-to-friends ratio[9]
  • length of user names
  • variability in (re)posted messages
  • engagement rate (like/followers rate)
  • analysis of the time series of social media posts[10]
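Several of the criteria above are simple numeric account features that can be combined into a heuristic score. A minimal sketch follows; the account fields loosely mirror a typical social media profile, and the thresholds are illustrative assumptions, not values taken from any published detector:

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical profile fields, loosely mirroring a social media API
    followers: int
    friends: int            # accounts this user follows
    posts_per_day: float
    username: str
    repost_fraction: float  # share of posts that are reposts/retweets

def bot_score(acct: Account) -> float:
    """Combine a few heuristic criteria into a 0..1 score.

    Each signal corresponds to one of the detection criteria above;
    the cutoffs are illustrative only.
    """
    signals = [
        acct.followers / max(acct.friends, 1) < 0.1,  # skewed followers-to-friends ratio
        acct.posts_per_day > 100,                     # superhuman posting rate
        acct.repost_fraction > 0.9,                   # little variability: almost only reposts
        len(acct.username) > 15,                      # long, random-looking user name
    ]
    return sum(signals) / len(signals)

suspect = Account(followers=12, friends=4800, posts_per_day=400,
                  username="xk29dmq0a1b7zpq4w", repost_fraction=0.97)
print(bot_score(suspect))  # 1.0: every heuristic fires
```

Real detectors such as those cited above combine hundreds of such features with trained classifiers rather than fixed cutoffs; this sketch only shows how individual criteria contribute to an overall judgment.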

Social bots are becoming increasingly difficult to detect and understand. Their human-like, ever-changing behavior and the sheer volume of bots across every platform all contribute to the difficulty of removing them.[11] Social media sites like Twitter are among the most affected: CNBC reported that up to 48 million of Twitter's 319 million users (roughly 15%) were bots in 2017.[12]

Botometer[13] (formerly BotOrNot) is a public web service that checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. The system leverages over a thousand features.[14][15] An early active method for detecting spam bots was to set up honeypot accounts that post nonsensical content, which may get reposted (retweeted) by bots.[16] However, bots evolve quickly, and detection methods have to be updated constantly, as they may otherwise become useless after a few years.[17] Another method uses Benford's Law, which predicts the frequency distribution of significant leading digits, to detect malicious bots online; this approach was first introduced at the University of Pretoria in 2020.[18] Detection can also be driven by artificial intelligence, with sub-categories including active learning loop flow, feature engineering, unsupervised learning, supervised learning, and correlation discovery.[11]
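The intuition behind the Benford's Law approach can be sketched as follows: leading digits of organically generated quantities (for example, the follower counts of the accounts a user follows) tend to match Benford's distribution P(d) = log₁₀(1 + 1/d), while artificially inflated or uniform counts often do not. The feature choice and the deviation measure below are illustrative assumptions, not the published method:

```python
import math
from collections import Counter

def benford_deviation(values):
    """Measure how far the leading-digit distribution of positive
    values departs from Benford's Law: P(d) = log10(1 + 1/d).

    Returns the sum of squared differences between observed and
    expected leading-digit frequencies; larger values mean less
    Benford-like data, a possible (not conclusive) bot signal.
    """
    digits = [int(str(abs(v))[0]) for v in values if v > 0]
    if not digits:
        return 0.0
    n = len(digits)
    counts = Counter(digits)
    deviation = 0.0
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)
        observed = counts.get(d, 0) / n
        deviation += (observed - expected) ** 2
    return deviation

# Organic-looking data: leading digits of 10**x for x uniform in [0, 1)
# follow Benford's distribution almost exactly.
organic = [10 ** (i / 997) for i in range(997)]
# Purchased followers often cluster: here every count leads with digit 1.
inflated = [1000 + i for i in range(100)]
print(benford_deviation(organic) < benford_deviation(inflated))  # True
```

In the cited work this kind of digit statistic is one feature among several fed to a classifier; a single deviation score on its own would produce many false positives.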

Some bots operate together in a synchronized way. For example, ISIS used numerous orchestrated Twitter accounts to amplify its content, pushing items onto trending news lists[19] and thereby reaching a larger audience.[20] Such synchronized bot accounts can be used as a tool of propaganda as well as of stock market manipulation.[21]

Platforms

Instagram

Instagram reached a billion monthly active users in June 2018,[22] and it was estimated that up to 10% of those accounts were run by automated social bots. While malicious propaganda bots remain popular, many individual users employ engagement bots to propel themselves to a false virality, making them seem more popular on the app. These engagement bots can like, watch, follow, and comment on other users' posts.[23]

Around the time the platform reached the 1 billion monthly user mark, Facebook (Instagram and WhatsApp's parent company) planned to hire 10,000 employees to provide additional security for its platforms, including combatting the rising number of bots and malicious posts.[24] Due to increased security and the detection methods used by Instagram, some botting companies have reported problems with their services: Instagram imposes interaction limit thresholds based on past and current app usage, and many payment and email providers deny these companies access to their services, preventing potential clients from purchasing them.[25]

Twitter

Twitter's bot problem stems from the ease of creating and maintaining bots. The simplicity of account creation and the many APIs that allow complete automation of accounts have led large numbers of organizations and individuals to use these tools to push their own agendas.[12][26] CNBC reported that as many as 48 million of the 319 million Twitter users in 2017, about 15%, were bots.[12] As of July 7, 2022, Twitter claimed to remove 1 million spam bots from its platform every day.[27]

Some bots are used to automate scheduled tweets, download videos, set reminders, and send warnings of natural disasters.[28] These are examples of bot accounts, but Twitter's API also allows real accounts (individuals or organizations) to use certain levels of bot automation, and even encourages their use to improve user experiences and interactions.[29]

See also

  • Astroturfing – Public relations tactic using fake grassroots movements
  • Chatbot – Program that simulates conversation
  • Dead Internet theory – Conspiracy theory on online bot activity
  • Devumi – Former social media company
  • Fake news website – Website that deliberately publishes hoaxes and disinformation
  • Ghost followers – Users on social media platforms who remain inactive
  • Internet bot – Software that runs automated tasks over the Internet
  • Social spam – Spam on social networking services
  • Sybil attack – Network service attack performed by multiple fake identities
  • Twitter bomb – Posting numerous tweets with the same hashtags
  • Whispering campaign – Method of persuasion

References

  1. ^ "The influence of social bots". www.akademische-gesellschaft.com. Retrieved March 1, 2022.
  2. ^ Frederick, Kara (2019). "The New War of Ideas: Counterterrorism Lessons for the Digital Disinformation Fight". Center for a New American Security. {{cite journal}}: Cite journal requires |journal= (help)
  3. ^ Ferrara, Emilio; Varol, Onur; Davis, Clayton; Menczer, Filippo; Flammini, Alessandro (June 24, 2016). "The rise of social bots". Communications of the ACM. 59 (7): 96–104. arXiv:1407.5225. doi:10.1145/2818717. ISSN 0001-0782. S2CID 1914124.
  4. ^ Efthimion, Phillip; Payne, Scott; Proferes, Nicholas (July 20, 2018). "Supervised Machine Learning Bot Detection Techniques to Identify Social Twitter Bots". SMU Data Science Review. 1 (2).
  5. ^ Gorwa, Robert; Guilbeault, Douglas (June 2020). "Unpacking the Social Media Bot: A Typology to Guide Research and Policy". Policy & Internet. 12 (2): 225–248. arXiv:1801.06863. doi:10.1002/poi3.184. ISSN 1944-2866. S2CID 51877148.
  6. ^ Dewangan, Madhuri; Rishabh Kaushal (2016). "SocialBot: Behavioral Analysis and Detection". International Symposium on Security in Computing and Communication. doi:10.1007/978-981-10-2738-3_39.
  7. ^ Ferrara, Emilio; Varol, Onur; Davis, Clayton; Menczer, Filippo; Flammini, Alessandro (2016). "The Rise of Social Bots". Communications of the ACM. 59 (7): 96–104. arXiv:1407.5225. doi:10.1145/2818717. S2CID 1914124.
  8. ^ Mazza, Michele; Stefano Cresci; Marco Avvenuti; Walter Quattrociocchi; Maurizio Tesconi (2019). "RTbust: Exploiting Temporal Patterns for Botnet Detection on Twitter". In Proceedings of the 10th ACM Conference on Web Science (WebSci '19). arXiv:1902.04506. doi:10.1145/3292522.3326015.
  9. ^ "How to Find and Remove Fake Followers from Twitter and Instagram : Social Media Examiner".
  10. ^ Weishampel, Anthony; Staicu, Ana-Maria; Rand, William (March 1, 2023). "Classification of social media users with generalized functional data analysis". Computational Statistics & Data Analysis. 179: 107647. doi:10.1016/j.csda.2022.107647. ISSN 0167-9473. S2CID 253359560.
  11. ^ a b Zago, Mattia; Nespoli, Pantaleone; Papamartzivanos, Dimitrios; Perez, Manuel Gil; Marmol, Felix Gomez; Kambourakis, Georgios; Perez, Gregorio Martinez (August 2019). "Screening Out Social Bots Interference: Are There Any Silver Bullets?". IEEE Communications Magazine. 57 (8): 98–104. doi:10.1109/MCOM.2019.1800520. ISSN 1558-1896. S2CID 201623201.
  12. ^ a b c Newberg, Michael (March 10, 2017). "As many as 48 million Twitter accounts aren't people, says study". CNBC. Retrieved November 22, 2022.
  13. ^ "Botometer".
  14. ^ Davis, Clayton A.; Onur Varol; Emilio Ferrara; Alessandro Flammini; Filippo Menczer (2016). "BotOrNot: A System to Evaluate Social Bots". Proc. WWW Developers Day Workshop. arXiv:1602.00975. doi:10.1145/2872518.2889302.
  15. ^ Varol, Onur; Emilio Ferrara; Clayton A. Davis; Filippo Menczer; Alessandro Flammini (2017). "Online Human-Bot Interactions: Detection, Estimation, and Characterization". Proc. International AAAI Conf. on Web and Social Media (ICWSM).
  16. ^ "How to Spot a Social Bot on Twitter". technologyreview.com. July 28, 2014. Social bots are sending a significant amount of information through the Twittersphere. Now there's a tool to help identify them
  17. ^ Grimme, Christian; Preuss, Mike; Adam, Lena; Trautmann, Heike (2017). "Social Bots: Human-Like by Means of Human Control?". Big Data. 5 (4): 279–293. arXiv:1706.07624. doi:10.1089/big.2017.0044. PMID 29235915. S2CID 10464463.
  18. ^ Mbona, Innocent; Eloff, Jan H. P. (January 1, 2022). "Feature selection using Benford's law to support detection of malicious social media bots". Information Sciences. 582: 369–381. doi:10.1016/j.ins.2021.09.038. hdl:2263/82899. ISSN 0020-0255. S2CID 240508186.
  19. ^ Giummole, Federica; Orlando, Salvatore; Tolomei, Gabriele (2013). "Trending Topics on Twitter Improve the Prediction of Google Hot Queries". 2013 International Conference on Social Computing. IEEE. pp. 39–44. doi:10.1109/socialcom.2013.12. ISBN 978-0-7695-5137-1. S2CID 15657978.
  20. ^ Badawy, Adam; Ferrara, Emilio (April 3, 2018). "The rise of Jihadist propaganda on social networks". Journal of Computational Social Science. 1 (2): 453–470. arXiv:1702.02263. doi:10.1007/s42001-018-0015-z. ISSN 2432-2717. S2CID 13122114.
  21. ^ Sela, Alon; Milo, Orit; Kagan, Eugene; Ben-Gal, Irad (November 15, 2019). "Improving information spread by spreading groups". Online Information Review. 44 (1): 24–42. doi:10.1108/oir-08-2018-0245. ISSN 1468-4527. S2CID 211051143.
  22. ^ Constine, Josh (June 20, 2018). "Instagram hits 1 billion monthly users, up from 800M in September". TechCrunch. Retrieved November 24, 2022.
  23. ^ "Instagram Promotion Service (Real Marketing) – UseViral". August 15, 2021. Retrieved November 24, 2022.
  24. ^ "Instagram's Growing Bot Problem". The Information. July 18, 2018. Retrieved November 24, 2022.
  25. ^ Morales, Eduardo (March 8, 2022). "Instagram Bots in 2021 — Everything You Need To Know". Medium. Retrieved November 24, 2022.
  26. ^ Gilani, Zafar; Farahbakhsh, Reza; Crowcroft, Jon (April 3, 2017). "Do Bots impact Twitter activity?". Proceedings of the 26th International Conference on World Wide Web Companion - WWW '17 Companion. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee. pp. 781–782. doi:10.1145/3041021.3054255. ISBN 978-1-4503-4914-7. S2CID 33003478.
  27. ^ Dang, Sheila; Paul, Katie (July 7, 2022). "Twitter says it removes over 1 million spam accounts each day". Reuters. Retrieved November 23, 2022.
  28. ^ Azhar, Huzaifa (December 10, 2021). "10 Best Twitter Bots You Should Follow in 2022 - TechPP". techpp.com. Retrieved November 24, 2022.
  29. ^ "Twitter's automation development rules | Twitter Help". help.twitter.com. Retrieved November 24, 2022.