Identification and correction
Information conveyed as credible but later amended can affect people's memory and reasoning even after retraction.[14] Misinformation differs from concepts like rumors because misinformation is inaccurate information that has previously been disproved.[4] According to Anne Mintz, editor of Web of Deception: Misinformation on the Internet, the best way to determine whether information is factual is to use common sense.[15] Mintz advises that the reader check whether the information makes sense and whether the founders or reporters of the websites that are spreading the information are biased or have an agenda. Professional journalists and researchers look at other sites (particularly verified sources like news channels[16]) for that information, as it might be reviewed by multiple people and heavily researched, providing more concrete details.
Martin Libicki, author of Conquest in Cyberspace: National Security and Information Warfare,[17] noted that readers must strike a balance between credulity and suspicion: they cannot be gullible, but they also should not be paranoid that all information is incorrect. Even readers who strike this balance may believe an error to be true or disregard factual information as incorrect. According to Libicki, readers' prior beliefs or opinions also affect how they interpret new information: when readers believe something to be true before researching it, they are more likely to believe information that supports those prior beliefs or opinions. This phenomenon may lead readers who are otherwise skilled at evaluating credible sources and facts to believe misinformation.
According to research, the factors that lead to recognizing misinformation are a person's level of education and information literacy.[18] This means that people who have more knowledge of the subject being investigated, or are familiar with the process of how information is researched and presented, are more likely to identify misinformation. Further research reveals that content descriptors can have a varying effect on people's ability to detect misinformation.[19]
Prior research suggests it can be very difficult to undo the effects of misinformation once individuals believe it to be true, and that fact-checking can even backfire.[20] Correcting a wrongly held belief is difficult because the misinformation may suit a person's motivational or cognitive reasons. Motivational reasons include the desire to arrive at a foregone conclusion, which means accepting information that supports that conclusion. Cognitive reasons may be that the misinformation provides scaffolding for an incident or phenomenon and is thus part of the mental model under consideration. In this instance, it is necessary to correct the misinformation not only by refuting it, but also by providing accurate information that can function in the mental model.[21] One suggested solution that would focus on primary prevention of misinformation is the use of a distributed consensus mechanism to validate the accuracy of claims, with appropriate flagging or removal of content that is determined to be false or misleading.[22]
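The distributed consensus idea described above can be illustrated with a minimal sketch. The class name, quorum size, and supermajority threshold below are all illustrative assumptions, not a description of any deployed system: independent validators vote on a claim, and the claim is flagged only once a minimum number of votes has accumulated and a supermajority judges it false.

```python
from collections import defaultdict

class ClaimRegistry:
    """Toy consensus check (hypothetical): validators vote on a claim's
    accuracy; a claim is flagged only when a supermajority of a minimum
    quorum judges it false. Thresholds are illustrative assumptions."""

    def __init__(self, quorum=5, supermajority=0.8):
        self.quorum = quorum              # minimum number of votes required
        self.supermajority = supermajority
        self.votes = defaultdict(list)    # claim id -> list of bool (True = "judged false")

    def vote(self, claim_id, judged_false):
        self.votes[claim_id].append(judged_false)

    def status(self, claim_id):
        ballots = self.votes[claim_id]
        if len(ballots) < self.quorum:
            return "pending"              # not enough independent reviews yet
        share_false = sum(ballots) / len(ballots)
        return "flagged" if share_false >= self.supermajority else "accepted"

registry = ClaimRegistry()
for verdict in [True, True, True, True, False]:  # 4 of 5 validators judge the claim false
    registry.vote("claim-42", verdict)
print(registry.status("claim-42"))  # -> flagged (4/5 = 0.8 meets the supermajority)
```

A real mechanism would also need to weigh validator independence and reputation; the sketch only shows the vote-and-threshold core of the idea.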
Another approach to correcting misinformation is to "inoculate" against it: delivering the misinformation in a weakened form, warning of its dangers, and including counterarguments that show the misleading techniques at work. One way to apply this approach is parallel argumentation, in which the flawed logic is transferred to a parallel, sometimes extreme or absurd, situation. This approach exposes bad logic without the need for complicated explanations.[23]
Flagging or eliminating news media containing false statements using algorithmic fact-checkers is becoming the front line in the battle against the spread of misinformation. Computer programs that automatically detect misinformation are only beginning to emerge, but similar algorithms are already in place at Facebook and Google. Facebook's algorithms detect and alert users that what they are about to share is likely false, in the hope of reducing the chance that they share it.[24] Likewise, Google provides supplemental information pointing to fact-check websites in response to searches for controversial terms.
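The pre-share warning described above can be sketched as a simple lookup against a fact-check database. The claim list, the normalization rule, and the exact-match check below are hypothetical simplifications; production systems use far more sophisticated matching, but the flag-before-share flow is the same.

```python
# Hypothetical database of claims already rated false by fact-checkers.
FACT_CHECKED_FALSE = {
    "miracle cure reverses aging overnight",
    "voting machines secretly change ballots",
}

def normalize(text):
    """Lowercase and collapse whitespace so trivially reworded posts still match."""
    return " ".join(text.lower().split())

def share_warning(post_text):
    """Return a warning string if the post matches a known false claim,
    otherwise None (the share proceeds without interruption)."""
    if normalize(post_text) in FACT_CHECKED_FALSE:
        return "This post contains information disputed by fact-checkers."
    return None

print(share_warning("Miracle  cure reverses aging overnight"))  # warning string
print(share_warning("Local library extends weekend hours"))     # None
```

Note that the warning interrupts the share rather than blocking it, mirroring the "alert, then let the user decide" behavior the paragraph describes.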
Causes
Historically, people have relied on journalists and other information professionals to relay facts and truths. Many different things cause miscommunication, but the underlying factor is information literacy. Because information is distributed by so many different means, it is often hard for users to question the credibility of what they are seeing. Many online sources of misinformation use techniques to fool users into thinking their sites are legitimate and the information they generate is factual. Misinformation is often politically motivated; websites such as USConservativeToday.com have previously posted false information for political and monetary gain. Misinformation also serves to distract the public from negative information about a given person or from larger policy issues, which can then go unremarked while the public is preoccupied with fake news. In addition to being shared for political and monetary gain, misinformation is also spread unintentionally: advances in digital media have made it easier to share information, though not always accurate information. The next sections discuss the role social media plays in distributing misinformation, the lack of internet gatekeepers, the implications of censorship in combating misinformation, inaccurate information from media sources, and competition in news and media.
Social media
Social media can be a cause of misinformation, as users sometimes unknowingly share false information. The rapid pace of sharing on social media encourages this: users pass along information, often by simply resharing a post, without first checking its legitimacy.[2]
Contemporary social media platforms offer a rich ground for the spread of misinformation. Exactly why misinformation spreads through social media so easily remains unknown. A 2018 study of Twitter determined that, compared to accurate information, false information spread significantly faster, further, deeper, and more broadly. Combating its spread is difficult for two reasons: the profusion of information sources and the generation of "echo chambers". The profusion of information sources makes the reader's task of weighing the reliability of information more challenging, heightened by the untrustworthy social signals that accompany such information. The inclination of people to follow or support like-minded individuals leads to the formation of echo chambers and filter bubbles. With no differing information to counter the untruths, or general agreement within isolated social clusters, some writers argue the outcome is a dearth, or worse, the absence, of a collective reality. Although social media sites have changed their algorithms to prevent the spread of fake news, the problem still exists. Furthermore, research has shown that while people may know what the scientific community has established as fact, they may still refuse to accept it as such.
Misinformation thrives in a social media landscape frequently used by college students. This is supported by scholars such as Ghosh and Scott (2018), who indicated that misinformation is "becoming unstoppable". It has also been observed that misinformation and disinformation return to social media sites repeatedly: one research study tracked thirteen rumors appearing on Twitter and found that eleven of those stories resurfaced multiple times after much time had passed. It is unclear whether users share misinformation on social media with malicious intent; one of the top reasons appears to be to attract attention and start a conversation.
Twitter is one of the most concentrated platforms for engagement with political fake news: 80% of fake news sources are shared by 0.1% of users, the "super-sharers". Older, more conservative social media users are also more likely to interact with fake news. Over 70% of adults in the United States have Facebook accounts, and 70% of those with accounts visit the site daily.[3] On Facebook, adults older than 65 were seven times more likely to share fake news than adults ages 18–29. While misinformation and fake news interactions have started to lessen on Facebook, they have increased on Twitter.[4] Another source of misinformation on Twitter is bots, which share stories containing misinformation; some misinformation, especially surrounding climate change, is centered around bots on Twitter sharing stories.[5] Facebook has taken measures to stop the spread of misinformation, such as flagging posts that may contain fake news. Since Facebook implemented this change, the misinformation appearing on the platform has dropped, but it is still present.[6]
Some misinformation on social media spreads spontaneously, as users share posts from friends or other mutual followers; these posts are often shared from someone the sharer believes they can trust. Other misinformation is created and spread with malicious intent, sometimes to cause anxiety and other times to deceive audiences.[7] There are also times when rumors created with malicious intent are shared by unknowing users.
Social media may also be a way to correct misinformation. With the large audiences that can be reached and the experts on various subjects active on these platforms, social media may be key to correcting misinformation.[1]
Citations assignment
Satirical news developed alongside media technologies such as television and radio. Some people misunderstood satirical news and believed it to be true news.[8]
Rumors spread through social media are also a source of misinformation.[9]
References
- ^ a b Bode, Leticia; Vraga, Emily K. (2018-09-02). "See Something, Say Something: Correction of Global Health Misinformation on Social Media". Health Communication. 33 (9): 1131–1140. doi:10.1080/10410236.2017.1331312. ISSN 1041-0236. PMID 28622038.
- ^ Cite error: the named reference ":3" was invoked but never defined.
- ^ Bode, Leticia; Vraga, Emily K. (2015-08-01). "In Related News, That was Wrong: The Correction of Misinformation Through Related Stories Functionality in Social Media". Journal of Communication. 65 (4): 619–638. doi:10.1111/jcom.12166. ISSN 0021-9916.
- ^ Allcott, Hunt; Gentzkow, Matthew; Yu, Chuan (2019-04-01). "Trends in the diffusion of misinformation on social media". Research & Politics. 6 (2): 2053168019848554. doi:10.1177/2053168019848554. ISSN 2053-1680.
- ^ a b "Revealed: quarter of all tweets about climate crisis produced by bots". The Guardian. 2020-02-21. Retrieved 2021-03-30.
- ^ Chou, Wen-Ying Sylvia; Oh, April; Klein, William M. P. (2018-12-18). "Addressing Health-Related Misinformation on Social Media". JAMA. 320 (23): 2417–2418. doi:10.1001/jama.2018.16865. ISSN 0098-7484.
- ^ Thai, My T.; Wu, Weili; Xiong, Hui (2016-12-01). Big Data in Complex and Social Networks. CRC Press. ISBN 978-1-315-39669-9.
- ^ Posetti, Julie; Matthews, Alice. "A Short Guide to the History of 'Fake News' and Disinformation: A New ICFJ Learning Module". International Center for Journalists. Retrieved 2021-03-30.
- ^ "Wayback Machine" (PDF). web.archive.org. 2019-04-29. Retrieved 2021-03-30.