This user is a student editor in SUNY_Polytechnic_Institute/Digital_Media_and_Information_in_Society_(Fall_2023).
In some instances, AI may be used to generate convincing misinformation with the intent of deceiving others, often called "AI misinformation." Because AI models are trained on human-generated content, including professional and academic papers, they can mimic the style of such writing and produce very convincing output. Beyond this increase in the quality of misinformation, the quantity of AI-generated misinformation also increases, because articles can be produced quickly and easily.[1]
Bots are also used on social media to generate and spread misinformation. The most notable example was the use of Russian bots to influence the 2016 US presidential election. By creating a post containing misinformation and using additional bots to inflate its interactions, operators can give the post enough traction to draw the attention of real users who are deceived by it, including celebrities it is targeted toward.[2]
There is also increasing demand for AI tools that counter misinformation. Many detection models have been built, but creating a model that can reliably distinguish misinformation from opinion is difficult, as there will always be some bias in the pool of information provided to the model. The use of AI to detect and handle bots has been more successful.[1]
- ^ a b Monteith, Scott; Glenn, Tasha; Geddes, John R.; Whybrow, Peter C.; Achtyes, Eric; Bauer, Michael (2023-10-26). "Artificial intelligence and increasing misinformation". The British Journal of Psychiatry: 1–3. doi:10.1192/bjp.2023.136. ISSN 0007-1250.
- ^ Aïmeur, Esma; Amri, Sabrine; Brassard, Gilles (2023-02-09). "Fake news, disinformation and misinformation in social media: a review". Social Network Analysis and Mining. 13 (1): 30. doi:10.1007/s13278-023-01028-5. ISSN 1869-5469. PMC 9910783. PMID 36789378.