It’s impossible to overstate how much the internet has changed our lives over the past three decades. Internet-based technologies and products have unleashed exponential economic growth and efficiencies. It is no accident that the five largest companies in the S&P 500 are Apple, Microsoft, Amazon, NVIDIA and Alphabet (Google). The internet has been, and will continue to be, a driver of economic growth, and internet-based innovations promise to keep improving health, education and economic opportunity for all of us. However, as has been true of every new technology, there is a dark side to the internet: challenges that, if ignored, will substantially reduce the benefits expected from universal access to broadband and its applications. These challenges need to be understood and addressed, particularly as we work to connect the remaining homes and businesses in the United States to broadband by 2030.
One of these challenges was described earlier this year in a three-part blog on cybersecurity. More recently, I explored some of the potential questions and challenges associated with generative artificial intelligence. This blog discusses yet another challenge: How do we spot internet disinformation and counter internet-based algorithms that tend to confirm our preexisting biases and blind us to opposing viewpoints?
For representative democracies like the United States, particularly now when most of us rely on the internet to get our news and form our opinions, the ability to analyze and test the accuracy of sources of information on the web (to engage in critical thinking) has been, and will continue to be, a vital skill for using the internet effectively. While there is no “magic bullet” solution, ignoring this issue risks more than just continued economic progress; it could threaten the very institutions that sparked the creation of the internet itself.
Of course, the goal of this blog (and the others that preceded it) is not to discourage the development of internet infrastructure and internet-based applications. But the degree to which our goals of better health, education and economic opportunity will be realized depends in large part on how well we adapt to use these new internet-based applications and technologies effectively. Developing these skills is an important part of the University’s Broadband Initiative, and programs such as the pilot Digital Ambassadors project are expected to be an important part of that effort.
Defining the Challenge
At the outset, it’s worthwhile to spend time defining the challenge. The term “disinformation” is closely related to its companion, misinformation. Misinformation is simply inaccurate or false information. Disinformation, on the other hand, is misinformation put to a purpose: the use of misinformation in a way specifically designed to deceive or hide the facts.
The motives that lead a person (or, more recently, an artificial intelligence algorithm) to place disinformation on the web are most often not perceived as morally wrong by the person responsible for posting it. In fact, in many cases the opposite is true; disinformation is used to serve what is perceived as a higher purpose or objective. In other words, the means (an intentionally false or misleading story or headline) are justified by the belief that they support a view or position that is in the best interest of society.
Unlike disinformation, “confirmation bias” requires no intentional act. Instead, it describes our unconscious tendency to seek out and treat information as true if it supports our biases and predispositions. We all use confirmation bias to make decisions in our daily lives, and it is not necessarily a bad thing. For example, most of us have a “confirmation bias” that would make us hesitant to climb into an enclosure to get a better view of a grizzly bear at the zoo. It may well be that the particular bear is trained and well-behaved, but we know from books or film that these animals can be dangerous, and we run the risk of becoming “lunch” if we get too close. Research indicates that internet-based algorithms make extensive use of our tendency toward confirmation bias in ways we may not fully understand or appreciate, usually with the goal of keeping us engaged and online.
Disinformation – How Does the Internet Differ from Earlier Forms of Mass Communication?
History contains many examples of individuals who used mass media to disseminate disinformation. In addition to charlatans, disinformation has been used by some we hold in high regard. In fact, none other than Benjamin Franklin is apparently guilty. To stir up revolutionary fervor, he made up a story accusing King George III of promoting attacks on colonists, offering a cash bounty for each colonist scalp collected! Disinformation in mass media has also always been difficult to stop. This is particularly true in societies like ours that value freedom of expression. Early attempts by our government to rein in false or “fake” news, even when motivated by a noble purpose, have been unpopular and ineffective.
So it’s fair to point out that disinformation in public media (whether a printed pamphlet, a newspaper, television or radio) is nothing new. However, several unique aspects make disinformation on the internet more challenging to identify and counter. The same characteristics that make the internet such an effective tool for learning and disseminating information have also made it far more effective at spreading disinformation. One reason the internet spreads disinformation so effectively is that multimedia (video and audio) can be used along with text to get ideas across. It is not surprising, then, that researchers have found that most disinformation on websites today consists of images and videos. A second reason is that the internet permits information and disinformation to be shared far more easily and quickly than earlier technologies allowed. In the past, few could afford a printing press, and, more recently, television and radio stations could broadcast audio and visual images only after obtaining a license from the Federal Communications Commission. Today, anyone with an online connection can create and share text and full-resolution video content with millions in just a few hours, and do so anonymously.
The risks posed by disinformation on the web seem destined to grow. For example, software has been developed that permits almost anyone to create near-perfect video imitations of public figures that can say anything the programmer desires. The age-old adage “seeing is believing” seems destined to become a quaint anachronism.
Internet Algorithms & Confirmation Bias – “There’s No Such Thing as a Free Lunch”
Most of us don’t reflect on why there is so much information available to us on the web “free of charge.” Of course, in some cases governments, nonprofits and public-spirited individuals have provided content as a public service, and there are many subscription fee-based websites, but that does not explain the millions of commercial websites that provide news, entertainment, and personal connections free of charge.
These websites have a profit motive, and most exist to sell advertising. Many of these ads are structured as a “cost per click” arrangement, meaning that advertisers pay the website owner a set amount each time someone clicks on a hyperlink that directs the web browser to the advertiser’s content. Again, many of us intellectually realize this is happening, but we may not fully appreciate just how significant this revenue has become. In 2022, ads of this type were estimated to generate $95.2 billion. To put this in perspective, that’s over $275 of cost-per-click revenue for every man, woman, and child in the United States.
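For readers who want to check that figure, the arithmetic is simple. Here is a minimal sketch in Python; the 2022 U.S. population of roughly 333 million is an assumption used for illustration, while the $95.2 billion estimate comes from the figure above.

```python
# Back-of-the-envelope check of the per-capita cost-per-click figure.
# Assumption (not from the original source): 2022 U.S. population of
# roughly 333 million people.
cpc_revenue_2022 = 95.2e9   # estimated cost-per-click ad revenue, USD
us_population = 333e6       # assumed 2022 U.S. population

per_capita = cpc_revenue_2022 / us_population
print(f"~${per_capita:.0f} per person")  # prints ~$286, i.e., "over $275"
```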
With this much money at stake, it is not surprising that a primary goal of many commercial websites, particularly social media websites, is to keep us online and engaged with the website’s content for as long as possible. Like the print media advertisers that came before them, today’s web designers know they can do this most effectively by keeping us emotionally engaged. Again, the ultimate goal of these efforts is most often to increase ad revenue. After all, the longer you are online and looking at website content, the more ads you will see, and the greater the chance that you will click on at least one and earn revenue for the website’s owners. There’s a name for these targeted efforts to keep us on a website: “clickbait.” Anyone who has suddenly realized they are late for a meeting because they spent the past 30 minutes watching “50 cute kitten” videos knows clickbait can be very effective.
Applying Critical Thinking Skills to the Internet
We tend to lose sight of how quickly the internet has become a part of our lives. Thirty years ago blogs like this one did not exist. Google was founded in 1998. The first Facebook page was created by Mark Zuckerberg less than 20 years ago, and Jack Dorsey posted the first “tweet” on Twitter in 2006. Given the speed of these developments, it’s not all that surprising that critical thinking skills training may not adequately address web-based disinformation or the impact of web-based search algorithms on our confirmation bias.
Certainly there is nothing new in the idea that critical thinking skills are essential to the sound functioning of democratic institutions. Skilled critical thinkers are able to:
- Raise vital questions and problems, formulating them clearly and precisely.
- Gather and assess relevant information, using abstract ideas to interpret it effectively.
- Come to well-reasoned conclusions and solutions, testing them against relevant criteria and standards.
- Think open-mindedly within alternative systems of thought, recognizing and assessing, as need be, their assumptions, implications, and practical consequences.
- Communicate effectively with others in figuring out solutions to complex problems.
Even if critical thinking is not taught as a stand-alone subject in elementary and secondary schools, most children do receive instruction in basic critical thinking skills as part of their education, although the amount of training varies and, surprisingly, may decline once the student enters middle school and high school. Most colleges and universities, including several campus libraries and schools in the University of Missouri System, provide critical thinking training designed to help students with internet research.
Yet a comprehensive 2019 study found that students are not well equipped to ferret out disinformation on the internet. This, too, is not all that surprising. There may be other reasons, but one suggested by the study and related research is that the critical thinking methods traditionally taught for identifying disinformation on the web may not be particularly effective today. Those methods included:
- Assigning credibility to information on “.org” websites and discounting websites with a “.com” designation.
- Relying on the statement on a website’s “about” page to understand its mission.
- Assigning more value to content on websites with professional-looking, “error free” layouts.
- Giving credence to web-based materials that include footnotes or hyperlinks referencing journals that are not generally well-known but have professional-sounding names.
While all of these may seem reasonable, or have direct corollaries in fact-checking traditional printed material, the study discounted their value in discovering disinformation on the web.
For example, the fact that a website has a “.org” label does not mean it is sponsored by an unbiased nonprofit organization; it is simply an alternate catch-all designation available to any website that does not wish to be classified as having a commercial (.com), government (.gov), or educational institution (.edu) sponsor. Nor is the website’s “about” page or its professional design particularly helpful in ferreting out disinformation. After all, the text of the “about” page was written by the same people who wrote the content being checked, and modern website design programs enable almost anyone to prepare a very professional-looking website.
Distinguishing disinformation based simply on how the content appears will likely become even more difficult because of the development and widespread availability of new programs using sophisticated graphic design and artificial intelligence. This was illustrated just this past month, when several attorneys were sanctioned and fined $5,000 for filing a legal brief authored by a generative artificial intelligence program. The problem wasn’t that the attorneys used an AI program to write the brief; they were sanctioned because the AI program had “made up” the names and legal citations of several cases to support the brief’s legal position! The attorneys made the mistake of assuming the information was accurate because the legal citations appeared to be in the proper format. In other words, they relied on the superficial appearance of the information rather than taking the time to check that it came from a legitimate source.
A Stanford University study recommends applying a different mindset to web-based publications, one that takes into account how easy it now is for anyone to impersonate legitimate resources and post false information. The recommended approach is not to look to the website or its materials to validate the information, but instead to consult external, unrelated sources to evaluate the website’s sponsoring organization and the materials it contains. To do this effectively and efficiently, web-based search engines (such as Google) and multiple fact-checking websites can be used. While this approach may not be foolproof, it capitalizes on the ease of finding independent third-party resources to evaluate both the legitimacy of the website’s sponsor and the accuracy of the information it contains.
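To make the approach concrete, here is a minimal sketch of what that kind of “lateral” checking might look like as a small script. It is not a tool from the Stanford study; the fact-checking domains and the use of a search engine’s `site:` operator are illustrative assumptions. The script simply builds search URLs a reader could open to see what independent sources say about a site or claim.

```python
from urllib.parse import quote_plus

# Illustrative fact-checking services; any independent sources would do.
FACT_CHECKERS = ["snopes.com", "factcheck.org", "politifact.com"]

def lateral_reading_links(site_or_claim: str) -> list[str]:
    """Build search URLs that ask independent sources about a website or
    claim, rather than relying on the site's own "about" page."""
    query = quote_plus(site_or_claim)
    links = [
        f"https://www.google.com/search?q={query}+site:{checker}"
        for checker in FACT_CHECKERS
    ]
    # Also search the open web while excluding the site itself, so the
    # results reflect what unrelated third parties say about it.
    links.append(f"https://www.google.com/search?q={query}+-site:{site_or_claim}")
    return links

if __name__ == "__main__":
    for url in lateral_reading_links("example.org"):
        print(url)
```

Even without a script, the habit is the same: open a new tab and search for the organization before trusting what it says about itself.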
Social & News Media Sites: Applying Critical Thinking Skills to Overcome Confirmation Bias Algorithms
No amount of fact and source checking can fully counter the internet’s ability, through search engine algorithms, to feed us a nearly unending supply of whatever information we ask for. These algorithms have been very beneficial. Most of us use them every day for a variety of tasks, such as evaluating a product we are considering purchasing, finding instructions on how to repair or use an appliance, selecting a hotel or vacation resort, or even finding source material for a blog. However, these algorithms are most useful if we do not lose sight of the fact that they are most often optimized to raise revenue for the sponsor. We also need to consider that the same algorithm that feeds us an endless supply of cute kitten videos may also be used to keep us engaged on websites featuring news and social issues.
Again, no conspiratorial motive seems to be at work here; it’s just the application of a time-honored principle of advertising to this new form of mass media: if you want to get someone’s interest, use flashy, emotion-based content, and if you want to keep them interested, show them more and more of it, making each subsequent “click” just a little flashier and more emotional. Today this happens with little or no human input at all; it’s the product of algorithms that couldn’t care less whether the topic you are viewing is cute kittens or gun control.
A 2022 study published by researchers at the University of California-Davis illustrates how this technology operates when the topic is a political or social issue. The goal of the study, described in an August 2022 article, was to see how YouTube’s recommended content changed over time when viewers initially selected a political topic: what happens if a user viewing a political video follows YouTube’s recommendation for the next video, and the video after that one?
Researchers in the study created fictitious YouTube accounts (“sock puppets”) programmed to access and “view” a video initially tagged by the researchers as having either a slightly conservative or a slightly liberal/progressive viewpoint. After viewing the video, each sock puppet automatically accessed the next recommended video and viewed it. The sock puppets repeated this process over a number of days, accessing many videos.
One result that is not all that surprising was that sock puppets that initially viewed a video with a conservative or a liberal/progressive bias tended to be recommended only videos that matched the original view. That makes sense; that is how confirmation bias operates. Someone who initially prefers and connects with content that has a liberal/progressive bias is more likely to like and view more videos that support or “confirm” that bias, as opposed to videos promoting an alternative “conservative” viewpoint. Of course, the same principle holds true for those who prefer content with a more conservative bias. Muhammad Haroon, the leader of the study, noted: “Unless you willingly choose to break out of that loop, all the recommendations on that system will be zeroing on that one particular niche interest that they’ve identified.”
What was more disturbing was that the YouTube algorithm tended not only to limit views to conservative or liberal/progressive content (depending on which bias was initially selected); the content selected also tended to become increasingly radical the longer the program ran. Again, this doesn’t imply any nefarious intent on the part of the YouTube algorithm’s programmers; it seems to be the logical extension of a program designed to give viewers more and more content that supports their original bias. The purpose may be solely to maximize time spent on the website and the advertising revenue it generates, but for the viewer, the algorithm shuts out competing voices and, over time, apparently tends to emphasize more extreme positions.
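To see the feedback loop in miniature, here is a hedged toy simulation; it is not the UC Davis researchers’ code, and the recommender rule is an invented assumption. Each video is reduced to a single lean score between -1 (liberal/progressive) and +1 (conservative), and the stub recommender favors content near the puppet’s viewing history while nudging each pick slightly outward, which is enough to reproduce the two behaviors the study reported: the puppet stays on its original side, and its feed slowly drifts toward the extreme.

```python
import random

def recommend_next(history: list[float]) -> float:
    """Hypothetical recommender: return a lean score near the average of
    what the puppet has already watched, nudged slightly outward."""
    avg = sum(history) / len(history)
    drift = 0.02 if avg > 0 else -0.02   # push away from the center
    noise = random.uniform(-0.05, 0.05)  # recommendations vary a little
    return max(-1.0, min(1.0, avg + drift + noise))

def run_sock_puppet(seed_lean: float, steps: int = 200) -> list[float]:
    """Start from one seed video, then always follow the recommendation."""
    history = [seed_lean]
    for _ in range(steps):
        history.append(recommend_next(history))
    return history

if __name__ == "__main__":
    random.seed(42)
    for seed in (-0.2, 0.2):  # slightly liberal vs. slightly conservative seed
        trail = run_sock_puppet(seed)
        print(f"seed {seed:+.2f} -> final recommendation lean {trail[-1]:+.2f}")
```

Even this crude rule never recommends across the aisle once a side is established, and each feed ends further from the center than it began, which is the dynamic the study measured at much larger scale.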
Another university study, published in 2023, found that the risk of being caught up in groupthink and misleading or false information on the web tends to be directly related to the viewer’s analytic reflection skills. The study compared the web browsing activities of individuals who scored higher in analytic reflection on a standardized Cognitive Reflection Test (CRT) to those of a second group that tended to rely primarily on “intuitive reasoning” to reach conclusions. Individuals with higher analytic reflection skills, as measured by the CRT, appeared to be better able to counteract the tendency to view only websites that confirmed their initial bias.
The CRT used in the study was designed to measure analytic reflection skills by asking participants a series of questions, each offering an answer that at first seems intuitively correct but, on reflection, is obviously wrong. For example:
“If you’re running a race and you pass the person in second place, what place are you in?”
The “intuitive” answer – the one that initially seems most appealing – is “first place.” However, after a little reflection one recognizes this answer is clearly wrong. After all, if the person you were trailing in the race was in second place, passing them only means you are now in second place, and you still need to catch the person who is in the lead.
When comparing the web browsing activities of the two groups, the study found that individuals who scored higher in analytic reflection skills also tended to rely on more traditional and reliable sources of news and information. The study concluded that these participants tended “to be more discerning in their social media use: they followed fewer [Twitter] accounts, shared higher quality content from more reliable sources, and tweeted about weightier subjects (in particular, politics).”
Critical Thinking is a Skill, Not a Measure of Intelligence
One misconception about critical thinking is that it is somehow directly related to one’s level of education or innate intelligence; in other words, that individuals with higher levels of formal education will be good “critical thinkers.” Academic research does not support this view. While critical thinking can be a complex process that includes clearly defining the issue, identifying and analyzing the sources of information, testing and seeking confirmation from alternative sources, and checking for alternative viewpoints, it is a skill. Equally important, hundreds of separate studies over many decades have shown that critical thinking can be learned, regardless of an individual’s age or education level. Learning that skill is not limited to those with a college degree, and having a college degree is certainly no guarantee that an individual is a good critical thinker.
Applying Critical Thinking Skills to the Internet – A Path Forward
The challenges posed by internet disinformation and confirmation bias algorithms can seem insurmountable. We live in a free and open society, and any form of censorship or control over web content runs against our core belief in freedom of speech and expression. As a practical matter, the internet itself is structured in ways that make regulation of its content, whether by government or industry, difficult to implement even in authoritarian societies.
Responsible internet content providers may be able to offer tools useful for identifying disinformation on the internet. However, internet experts polled in a 2017 Pew Research Center study were evenly split on whether misinformation on the internet can be reduced in the future, and there is little reason to believe the results would be different today. Real improvement will likely depend in part on us, the individuals who use the internet every day. If we develop and use the skills necessary to avoid falling prey to disinformation, the rationale for posting it in the first place will be reduced. But this will happen only if critical thinking skills training is a core part of improving adoption of the internet and internet-based applications.
The internet is a relatively new medium of mass communication, and new products and innovations that enhance its capabilities to provide information come to market almost daily. It is a much more powerful means of conveying information – and disinformation – than anything that has come before. There are few legal or practical restrictions on what content can be added to the internet, or on who is able to add it. But most of us would not want it any other way. As was true in the past, critical thinking skills can be learned by anyone, at any age, and there is evidence that they can be highly effective in identifying disinformation, particularly if the skills learned are adapted to account for ways in which the web differs from other forms of mass media technology that came before.
Yet it also is evident that developing the skills necessary to separate facts from false or misleading content on the web may not be enough to reduce society’s polarization and help us find common ground to peacefully resolve our most difficult policy issues. As a society, we have always had issues of disagreement that require compromise, but we never have had to reach compromises when large segments of the population have isolated themselves in “opinion silos” created by web-based algorithms.
More of us now rely on the internet as our primary – and in some cases our sole – source for news and opinion. Limiting consumption of news and opinions on the web to one or two social media websites seems to create the very real possibility that, like the “sock puppets” in the YouTube video research experiment, we will be fed only a steady diet of confirmation-biased information that resonates with our world view and suppresses all others.
In the past, our parents and grandparents could counter this risk by subscribing to multiple newspapers, or at least they could read and consider opposing viewpoints on the op-ed page of their newspaper. Our generation can find alternative viewpoints as well. In fact, an important advantage of the internet is that it contains many points of view and information sources. However, accessing those views today likely requires that we take affirmative steps to break out of the “comfort zone” of algorithm-generated content, so that we at least understand the views of those with whom we disagree.
Over the next several years, government and private business will invest more than $100 billion to bring affordable, reliable internet infrastructure to every home and business in the United States. Funding through the Affordable Connectivity Program is available to help make the internet affordable for everyone, regardless of income. However, to take full advantage of this resource, and to use it safely and effectively, we must also develop and implement programs to make skills-based critical thinking training generally available.