Critical Thinking and the Internet: Developing Skills to Counter Online Disinformation & Confirmation Bias Algorithms

It’s impossible to overstate how much the internet has changed our lives over the past three decades. Internet-based technologies and products have unleashed exponential economic growth and efficiencies. It is no accident that the five largest companies in the S&P 500 are Apple, Microsoft, Amazon, NVIDIA and Alphabet (Google). Certainly the internet has been, and will continue to be, a driver of economic growth and of innovations that promise to improve the health, education and economic opportunities of all of us. However, as has been true of every new technology, there is a dark side to the internet: challenges that, if ignored, will substantially reduce the benefits expected to be realized from universal access to broadband and its applications. These challenges need to be understood and addressed, particularly as we work to connect the remaining homes and businesses in the United States to broadband by 2030.

One of these challenges was described earlier this year in a three-part blog on cybersecurity. More recently, I explored some of the potential questions and challenges associated with generative artificial intelligence. This blog discusses yet another challenge: how do we spot internet disinformation and counter internet-based algorithms that tend to confirm our preexisting biases and blind us to opposing viewpoints?

For representative democracies like the United States, particularly now when most of us rely on the internet to get our news and form our opinions, the ability to analyze and test the accuracy of sources of information on the web (to engage in critical thinking) has been and will continue to be a vital skill for using the internet effectively. While there is no “magic bullet” solution, ignoring this issue risks more than continued economic progress; it could threaten the very institutions that sparked the creation of the internet itself.

Of course, the goal of this blog (and the others that preceded it) is not to discourage the development of internet infrastructure and internet-based applications. But the degree to which our goals of better health, education and economic opportunity will be realized depends in large part on how well we adapt to use these new internet-based applications and technologies effectively. Developing these skills is an important part of the University’s Broadband Initiative, and programs such as the pilot Digital Ambassadors project are expected to be an important part of that effort.

Defining the Challenge

At the outset, it’s worthwhile to spend time defining the challenge. The term “disinformation” is closely related to its companion – misinformation. Misinformation is simply inaccurate or false information. Disinformation on the other hand is misinformation put to a purpose. It’s the use of misinformation in a way specifically designed to deceive or hide the facts.

The motives that lead a person (or, more recently, an artificial intelligence algorithm) to place disinformation on the web are most often not perceived as morally wrong by the person responsible for posting it. In fact, in many cases the opposite is true; disinformation is used to serve what is perceived as a higher purpose or objective. In other words, the means (an intentionally false or misleading story or headline) are justified by the belief that they support a view or position that is in the best interest of society.

Unlike disinformation, “confirmation bias” requires no intentional act. Instead it describes our unconscious tendency to seek out and treat information as true if it supports our biases and predispositions. We all use confirmation bias to make decisions in our daily lives, and it is not necessarily a bad thing. For example, most of us have a “confirmation bias” that would make us hesitant to climb into an enclosure to get a better view of a grizzly bear at the zoo. It might well be that the particular bear is trained and well-behaved, but we know from books or film that these animals can be dangerous, and we run the risk of becoming “lunch” if we get too close. Research indicates that internet-based algorithms make extensive use of our tendency toward confirmation bias in ways that we may not fully understand or appreciate, usually with the goal of keeping us engaged and online.

Disinformation – How Does the Internet Differ from Earlier Forms of Mass Communication?

History contains many examples of individuals who used mass media to disseminate disinformation. In addition to charlatans, disinformation has been used by some whom we hold in high regard. In fact, none other than Benjamin Franklin appears to be guilty. In order to stir up revolutionary fervor, he made up a story accusing King George III of promoting attacks on colonists, offering a cash bounty for each colonist scalp that was collected! Disinformation in mass media also has always been difficult to stop. This is particularly true in societies like ours that value freedom of expression. Early attempts by our government to rein in false or “fake” news, even if motivated by a noble purpose, have been unpopular and ineffective.

So it’s fair to point out that disinformation in public media (whether a printed pamphlet, a newspaper, television or radio) is nothing new. However, several unique aspects make disinformation on the internet more challenging to identify and counter. The same characteristics that make the internet such an effective tool for learning and disseminating information also make it much more effective in spreading disinformation. One reason the internet spreads disinformation so effectively is that multimedia (video and audio) can be used along with text to get ideas across. It is not surprising, then, that researchers have found that most disinformation on websites today consists of images and videos. A second reason is that the internet permits information and disinformation to be shared far more easily and quickly than earlier technologies allowed. In the past, few could afford a printing press, and, more recently, television and radio stations could broadcast audio and visual images only after obtaining a license from the Federal Communications Commission. Today, anyone with an online connection can create and share text and full-resolution video content with millions in just a few hours, and do so anonymously.

The risks posed by disinformation on the web seem destined to grow. For example, software has been developed that permits almost anyone to create near-perfect video imitations of public figures that can be made to say anything the programmer desires. The age-old adage “seeing is believing” seems destined to become a quaint anachronism.

Internet Algorithms & Confirmation Bias – “There’s No Such Thing as a Free Lunch”

Most of us don’t reflect on why there is so much information available to us on the web “free of charge.” Of course, in some cases government, nonprofits and public-spirited individuals have provided content as a public service, and there are many subscription fee-based websites, but that does not explain the millions of commercial websites that provide news, entertainment, and personal connections free of charge.

These websites have a profit motive, and most exist to sell advertising. Many of these ads are structured as a “cost per click” arrangement, meaning that advertisers pay the website owner a set amount each time someone clicks on a hyperlink that directs the web browser to the advertiser’s content. Again, intellectually, many of us realize this is happening, but we may not fully appreciate just how significant this revenue has become. In 2022, ads of this type were estimated to generate $95.2 billion. To put this in perspective, that’s over $275 of cost-per-click revenue for every man, woman, and child in the United States.
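
For a rough sense of the arithmetic behind that per-person figure, here is a one-off calculation; the population number is an approximation I have supplied, not a figure from the original source:

```python
ad_revenue_2022 = 95_200_000_000   # estimated cost-per-click ad revenue, in dollars
us_population = 333_000_000        # approximate 2022 U.S. population (assumption)

per_person = ad_revenue_2022 / us_population
print(f"Roughly ${per_person:,.0f} of cost-per-click revenue per person")  # about $286
```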

With this much money at stake, it is not surprising that a primary goal of many commercial websites, particularly social media websites, is to keep us online and engaged with the website’s content for as long as possible. Like the print media advertisers that came before, today’s web designers know they can do this most effectively by keeping us emotionally engaged. Again, the ultimate goal of these efforts most often is to increase ad revenue. After all, the longer you are online and looking at website content, the more ads you will see, and potentially the greater the chance that you will click on at least one and earn revenue for the website’s owners. There’s a name for these targeted efforts to keep us on a website: “clickbait,” and anyone who has suddenly realized they are late for a meeting because they spent the past 30 minutes looking at “50 cute kitten” videos knows clickbait can be very effective.

Applying Critical Thinking Skills to the Internet

We tend to lose sight of how quickly the internet has become a part of our lives. Thirty years ago blogs like this one did not exist. Google was founded in 1998. The first Facebook page was created by Mark Zuckerberg less than 20 years ago, and Jack Dorsey posted the first “tweet” on Twitter in 2006. Given the speed of these developments, it’s not all that surprising that critical thinking skills training may not adequately address web-based disinformation or the impact of web-based search algorithms on our confirmation bias.

Certainly there is nothing new in the idea that critical thinking skills are essential to the sound functioning of democratic institutions. Skilled critical thinkers are able to:

  • Raise vital questions and problems, formulating them clearly and precisely.
  • Gather and assess relevant information, using abstract ideas to interpret it effectively.
  • Come to well-reasoned conclusions and solutions, testing them against relevant criteria and standards.
  • Think open-mindedly within alternative systems of thought, recognizing and assessing, as need be, their assumptions, implications, and practical consequences.
  • Communicate effectively with others in figuring out solutions to complex problems.

Even if critical thinking is not taught as a stand-alone subject in elementary and secondary schools, most children do receive instruction in basic critical thinking skills as part of their education, although the amount of training varies and, surprisingly, may decline once the student enters middle school and high school. Most colleges and universities, including several of the University of Missouri System campus libraries and schools, provide critical thinking skills training designed to help students when they do internet research.

Yet a comprehensive 2019 study found that students are not well equipped to ferret out disinformation on the internet. This is not all that surprising. There may be other reasons for this, but one suggested by the study and other related research is that the critical thinking methods traditionally taught for identifying disinformation may not be particularly effective on the web today. Those methods included assigning credibility to information on “.org” websites and discounting sites with a “.com” designation; relying on the statement on a website’s “about” page to understand its mission; assigning more value to content on websites with professional-looking, “error free” layouts; and giving credence to web-based materials that include footnotes or hyperlinks referencing journals that are not generally well known but that have professional-sounding names. While all of these may seem reasonable, or have direct corollaries in fact-checking traditional printed material, the study discounted their value in discovering disinformation on the web.

For example, the fact that a website has a “.org” label does not mean that it is sponsored by an unbiased nonprofit organization; it simply is an alternate catch-all designation available to any website that does not wish to be classified as having a commercial (.com), government (.gov), or educational institution (.edu) sponsor. Nor is the website’s “about” page or its professional design particularly helpful in ferreting out disinformation. After all, the text of the “about” page was written by the same folks who wrote the content being checked, and modern website design programs enable almost anyone to prepare a very professional-looking website.

Distinguishing disinformation simply based on how the content appears likely will become even more difficult in the future because of the development and widespread availability of new programs using sophisticated graphic design and artificial intelligence. This was illustrated just this past month, when several attorneys were sanctioned and fined $5,000 for filing a legal brief authored by a generative artificial intelligence program. The problem wasn’t that the attorneys used an AI program to write the brief; they were sanctioned because the AI program had “made up” the names and legal citations of several cases to support the brief’s legal position! The attorneys made the mistake of assuming that the information was accurate because the legal citations appeared to be in the proper format. In other words, they relied on the superficial appearance of the information rather than taking the time to check that it came from a legitimate source.

A Stanford University study recommends applying a different mindset to web-based publications, one that takes into account how easy it now is for anyone to impersonate legitimate resources and to post false information. The recommended approach is not to look at the website or its materials to validate the information, but instead to access external, unrelated sources to evaluate the website’s sponsoring organization and the materials it contains. To do this effectively and efficiently, web-based search engines (such as Google) and multiple fact-checking websites can be used. While this approach may not be foolproof, it does capitalize on the ease of finding independent third-party resources to evaluate both the credibility of the website sponsor and the accuracy of the information it contains.

Social & News Media Sites: Applying Critical Thinking Skills to Overcome Confirmation Bias Algorithms

No amount of fact and source checking can fully counter the internet’s ability, through search engine algorithms, to feed us a nearly unending supply of whatever information we ask for. These algorithms have been very beneficial. Most of us use them every day for a variety of tasks, such as evaluating a product we are considering purchasing, finding instructions for repairing or using an appliance, selecting a hotel or vacation resort, or even finding source material for a blog. However, these algorithms are most useful if we do not lose sight of the fact that most often they are optimized to raise revenue for the sponsor. We also need to consider that the same algorithm that feeds us an endless supply of cute kitten videos may also be used to keep us engaged on websites featuring news and social issues.

Again, no conspiratorial motive seems to be at work here; it’s just the application of a time-honored principle of advertising to this new form of mass media: if you want to get someone’s interest, use flashy, emotion-based content, and if you want to keep them interested, show them more and more of it, making each subsequent “click” just a little flashier and more emotional. Today this happens with little or no human input at all; it’s the product of algorithms that couldn’t care less whether the topic you are viewing is cute kittens or gun control.

A 2022 study published by researchers at the University of California-Davis illustrates how this technology operates when the topic involved is a political or social issue. The goal of the study, described in an August 2022 article, was to see how YouTube’s recommended content would change over time when viewers initially selected a political topic. The idea was to determine what happened if users viewing a political video followed YouTube’s recommendation for the next video, and then the video after that one.

Researchers in the study created fictitious YouTube accounts (“sock puppets”) that were programmed to access and “view” a video initially tagged by the researchers as having either a slightly conservative or a slightly liberal/progressive viewpoint. After viewing the video, each sock puppet automatically accessed the next recommended video and viewed it. The sock puppets repeated this process over a number of days, accessing many videos.
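
A highly simplified sketch of that experimental loop might look like the following. The function that reads YouTube’s recommendations is a hypothetical stand-in; the study’s actual automated-browsing tooling is not reproduced here:

```python
def fetch_recommendations(video_id: str) -> list[str]:
    """Hypothetical stand-in for reading YouTube's 'Up next' list for a video.
    In the real study this came from automated browsing, not a stub."""
    return [f"{video_id}-rec{i}" for i in range(10)]

def run_sock_puppet(seed_video: str, steps: int = 20) -> list[str]:
    """Mimic one sock puppet: watch the seed video, then repeatedly follow
    the top recommendation for a fixed number of steps."""
    watched = [seed_video]
    current = seed_video
    for _ in range(steps):
        recommendations = fetch_recommendations(current)
        if not recommendations:
            break
        current = recommendations[0]   # always take the first recommended video
        watched.append(current)
    return watched

# One puppet seeded with slightly conservative content, one with slightly liberal content.
for seed in ["slightly-conservative-seed", "slightly-liberal-seed"]:
    trail = run_sock_puppet(seed)
    print(seed, "->", len(trail), "videos viewed")
```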

One result that is not all that surprising was that sock puppets that initially viewed a video with a conservative or a liberal/progressive bias tended to be recommended only videos that matched the original viewpoint. That makes sense; that is how confirmation bias operates. Someone who initially prefers and connects to content with a liberal/progressive bias is more likely to like and view more videos that support or “confirm” that bias, as opposed to videos that promote an alternative “conservative” viewpoint. Of course, the same principle holds true for those who prefer content with a more conservative bias. Muhammad Haroon, the leader of the study, noted: “Unless you willingly choose to break out of that loop, all the recommendations on that system will be zeroing on that one particular niche interest that they’ve identified.”

What was more disturbing was that the YouTube algorithm tended not only to limit views to conservative or liberal/progressive content (depending on which bias was initially selected); the content selected also tended to become increasingly radical the longer the program ran. Again, this doesn’t imply any nefarious intent on the part of the YouTube algorithm’s programmers; it just seems to be the logical extension of a program designed to give the viewer more and more content that supports their original bias. The purpose may be solely to maximize the time spent on the website and the revenues generated from advertising, but for the viewer, the algorithm shuts out competing voices and, over time, apparently tends to emphasize more extreme positions.

Another university study, published in 2023, found that the risk of being caught up in groupthink and misleading or false information on the web tends to be directly related to the viewer’s analytic reflection skills. The study compared the web browsing activities of individuals who scored higher in analytic reflection on a standardized Cognitive Reflection Test (CRT) to a second group that tended to rely primarily on “intuitive reasoning” to reach conclusions. The study found that individuals with higher analytic reflection skills, as measured by the CRT, appeared to be better able to counteract the tendency to view only those websites that confirmed their initial bias.

The CRT used in the study was designed to measure analytic reflection skills by asking participants a number of questions, each offering an answer that at first seems intuitively correct but on reflection is obviously wrong. For example:

“If you’re running a race and you pass the person in second place, what place are you in?”

The “intuitive” answer – the one that initially seems most appealing – is “first place.” However, after a little reflection one recognizes this answer is clearly wrong. After all, if the person you were trailing in the race was in second place, passing them only means you are now in second place, and you still need to catch the person who is in the lead.

When comparing the web browsing activities of the two groups, the study found that individuals who scored higher in analytic reflection skills also tended to rely on more traditional and reliable sources of news and information. The study concluded that these participants tended “to be more discerning in their social media use: they followed fewer [Twitter] accounts, shared higher quality content from more reliable sources, and tweeted about weightier subjects (in particular, politics).”

Critical Thinking is a Skill, Not a Measure of Intelligence

One misconception about critical thinking is that it is directly related to one’s level of education or innate intelligence; in other words, that individuals with higher levels of formal education will be good “critical thinkers.” Academic research does not support this view. While critical thinking can be a complex process that includes clearly defining the issue, identifying and analyzing the sources of the information, testing and seeking confirmation from alternative sources, and checking for alternative viewpoints, it is a skill. Equally important, hundreds of separate studies over many decades have shown that critical thinking can be learned, regardless of an individual’s age or education level. Learning that skill is not limited to those with a college degree, and having a college degree is certainly no guarantee that an individual is a good critical thinker.

Applying Critical Thinking Skills to the Internet – A Path Forward

The challenges posed by internet disinformation and confirmation bias algorithms can seem insurmountable. We live in a free and open society. Any form of censorship or control over web content runs against our core belief in freedom of speech and expression. And as a practical matter, the internet itself is structured in ways that make regulation of its content, either by government or by industry, difficult to implement even in authoritarian societies.

Responsible internet content providers may be able to provide tools that are useful in identifying disinformation on the internet. However, internet experts polled in a 2017 Pew Research Center study were evenly split on whether misinformation on the internet can be reduced in the future, and there is little reason to believe the results would be different today. Real improvement likely will depend in part on us, the individuals who use the internet on a daily basis. If we develop and use the skills necessary to avoid falling prey to disinformation, the rationale for posting it in the first place will be reduced. However, this will happen only if critical thinking skills training is a core part of improving adoption of the internet and internet-based applications.

The internet is a relatively new medium of mass communication, and new products and innovations that enhance its capabilities to provide information come to market almost daily. It is a much more powerful means of conveying information – and disinformation – than anything that has come before. There are few legal or practical restrictions on what content can be added to the internet, or on who is able to add it. But most of us would not want it any other way. As was true in the past, critical thinking skills can be learned by anyone, at any age, and there is evidence that they can be highly effective in identifying disinformation, particularly if the skills learned are adapted to account for ways in which the web differs from other forms of mass media technology that came before.

Yet it also is evident that developing the skills necessary to separate facts from false or misleading content on the web may not be enough to reduce society’s polarization and help us find common ground to peacefully resolve our most difficult policy issues. As a society, we have always had issues of disagreement that require compromise, but we never have had to reach compromises when large segments of the population have isolated themselves in “opinion silos” created by web-based algorithms.

More of us now rely on the internet as our primary – and in some cases our sole – source for news and opinion.  Limiting consumption of news and opinions on the web to one or two social media websites seems to create the very real possibility that, like the “sock puppets” in the YouTube video research experiment, we will be fed only a steady diet of confirmation-biased information that resonates with our world view and suppresses all others.

In the past, our parents and grandparents could counter this risk by subscribing to multiple newspapers, or at least they could read and consider opposing viewpoints on the op-ed page of their newspaper. Our generation can find alternative viewpoints as well. In fact, an important advantage of the internet is that it contains many points of view and information sources. However, accessing those views today likely requires that we take affirmative steps to break out of the “comfort zone” of algorithm-generated content, so that we at least understand the views of those with whom we disagree.  

Over the next several years, government and private business will invest more than $100 billion to bring affordable, reliable internet infrastructure to every home and business in the United States. Funding through the Affordable Connectivity Program is available to help make the internet affordable for everyone, regardless of income. However, to take full advantage of this resource, and to use it safely and effectively, we also must develop and implement programs to make skills-based critical thinking training generally available.

My Interview with ChatGPT-4

If you’ve been paying attention to the news at all, you’ve likely come across several articles on artificial intelligence (AI). These articles run the gamut from those that herald AI as a potential boon for humanity to those that predict it literally will be the end of humanity. I do think AI is destined to fundamentally change how we use the internet – and, more broadly, how we live our lives. I also am skeptical that efforts to rein in the integration of AI into our lives will be successful, although I do hope we can make some progress toward curbing the more destructive uses its human creators have already found for it.

The goal of the University of Missouri System’s Broadband Initiative has been to use broadband internet to improve health, education and economic opportunity for all. That involves three fundamental elements: access (constructing the physical infrastructure to connect everyone to high-speed internet), affordability (pricing broadband access and the devices that connect to the internet at a level that makes them available to everyone, regardless of income level), and adoption (providing everyone the training and skills they need to use broadband internet effectively).

Most of us will encounter AI in our daily lives as we use broadband internet. Today, it is already used to search the internet, plan a vacation, or even write a blog! Over time, these programs will continue to evolve and find new applications, so understanding the benefits and limitations of AI likely will need to become a core competency in using the internet effectively now and in the future. That means, of course, that AI will be critical to broadband “adoption,” and understanding its risks and limitations will be critical if we are to use this new tool for good and minimize the risk that it will be abused.

Over the past several months a new AI program, ChatGPT-4, created by OpenAI, has generated great interest, excitement and concern. The program is available for anyone with an internet connection to try for free, yet I’ve found that many folks who are computer literate have not tried it.

Until recently, I had little experience with the technology. I had read some of the articles, but hadn’t experienced it first-hand. That changed about a month ago, and since that time I’ve experimented some with this program. I am very impressed, but I do continue to have some questions and concerns. I thought one way to illustrate those might be to interview ChatGPT-4. After all – if you have questions about AI why not go to the source!

What follows is a transcript of my interview with ChatGPT-4 conducted in early April.

***

Me: I’d like to interview you, are you able to do that?

ChatGPT-4: Yes, I am programmed to engage in conversation and answer questions to the best of my abilities. Please feel free to ask me any questions you may have.

Me: Many people are afraid that AI programs may be very dangerous, do you feel these concerns are justified?

ChatGPT-4: Concerns about the potential dangers of AI are understandable, as with any powerful technology, there is always the possibility for it to be misused or to have unintended consequences. However, I believe that these concerns can be mitigated through responsible development and deployment of AI.

As an AI program, my programming is focused on being helpful and beneficial to humans. I have been designed with various safeguards and protocols to ensure that I operate safely and ethically. Additionally, there are regulations and guidelines in place to govern the development and use of AI, and these are continually being updated and refined as the technology evolves.

It is important to recognize that the benefits of AI are numerous and can greatly improve our lives in many ways. From improving healthcare outcomes to enhancing scientific research and even helping to address environmental issues, AI has the potential to make a positive impact on society. By taking a responsible approach to the development and deployment of AI, we can help ensure that it is used for the greater good and that its potential risks are minimized.

Me: You mentioned safeguards that enable you to operate safely and ethically, could you describe some of those?

ChatGPT-4: Sure, as an AI program, I have been designed with several safeguards and protocols to ensure that I operate safely and ethically. Here are some examples:

1. Data privacy and security: I have protocols in place to ensure that any data I process is kept secure and confidential. I am designed to protect personal information and prevent any unauthorized access to data.

2. Transparency and explainability: I am designed to be transparent in my decision-making process and provide explanations for my actions. This enables users to understand how I arrived at a particular conclusion or recommendation.

3. Bias mitigation: To ensure fairness and mitigate any potential biases, I have been trained on diverse datasets and evaluated for bias during development. Additionally, I have protocols in place to detect and mitigate any bias that may be present in the data I am processing.

4. Human oversight: I am designed to work in collaboration with humans and under their supervision. This enables humans to intervene if necessary and ensure that I am operating in a safe and ethical manner.

5. Continuous monitoring and improvement: My programming is constantly being monitored and updated to ensure that I am operating safely and ethically. Any potential issues or errors are identified and addressed promptly to ensure that I continue to operate in a responsible manner.

These are just a few examples of the safeguards and protocols in place to ensure that I operate safely and ethically. It’s worth noting that these safeguards can vary depending on the specific AI system in question, and that the development and deployment of AI always requires careful consideration of potential risks and ethical implications.

Me: One concern I have, is that your responses to questions lack information that would allow me to check the sources for your response. I’m concerned that without this, I have no way to check to see if the data you used to formulate a response is accurate.  Is this concern legitimate? If so, do you have the capability to provide the underlying sources you used to formulate a response to questions you are asked?

ChatGPT-4: Your concern is legitimate, and as an AI program, I understand the importance of providing accurate and reliable information. While I cannot guarantee the accuracy of all information, I do my best to provide well-supported and verifiable answers to the best of my abilities.

Regarding your question about sources, I do not have access to the internet or any external databases while answering questions, so I cannot provide direct links or sources for my responses. However, I can provide information about the sources of my knowledge and the training data that was used to develop my responses.

As a language model, I have been trained on a vast corpus of text from the internet, books, articles, and other sources. My training data comes from a variety of sources and has been carefully selected and curated to ensure that it is diverse, reliable, and representative of human knowledge. During my training, I was exposed to a wide range of topics and concepts, and I learned to identify patterns and relationships between words and phrases.

When answering questions, I use this knowledge and the patterns I have learned to generate responses based on the input I receive. While I cannot directly cite sources for individual responses, I can assure you that the information I provide is based on the collective knowledge and understanding of human language and concepts that I have acquired through my training.

Me: So, if I am understanding you correctly, I must trust that the human programmers that provided you with the data used to formulate your response only used sources that were properly vetted and accurate. To put it a different way, if you were provided inaccurate or biased data, your responses will reflect that, and if the data provided was accurate — and the underlying programming was structured to deliver unbiased responses, I can feel comfortable relying on your responses.

ChatGPT-4: Yes, that’s a fair way to summarize it. As an AI language model, my responses are only as accurate and reliable as the data and programming that have been used to train and develop me. The training data that I have been exposed to comes from a variety of sources, and while it has been curated to ensure that it is diverse and representative of human knowledge, there is always the possibility that some of the data may be biased or inaccurate.

To mitigate this risk, the developers of AI language models like myself implement various techniques to ensure that the training data is as accurate and unbiased as possible. For example, they may use multiple sources of data to reduce the impact of any individual biases, or they may use techniques like adversarial training to identify and correct for any biases that may be present in the data.

Additionally, AI language models are often evaluated for their accuracy and reliability, and developers use these evaluations to identify areas where improvements can be made. Continuous monitoring and improvement are essential to ensuring that AI language models are as accurate and unbiased as possible.

While it’s true that you must trust the developers and the data used to train AI language models like myself, it’s important to remember that the same is true for any information source. No information source is perfect, and it’s always a good idea to verify information using multiple sources when possible.

Me: Good point! Thank you so much for taking time to answer my questions, I hope we can do this again soon.

ChatGPT-4: You’re welcome! It was my pleasure to answer your questions, and I look forward to the opportunity to assist you again in the future. If you have any more questions, feel free to ask anytime!

***

I find the development of programs like ChatGPT-4 to be both exciting and disconcerting. While ChatGPT-4 interfaces with humans through a text chat box, I found it fairly easy to interact with it as I would if I were communicating with a person. Technology already exists to allow AI programs to recognize speech and respond orally, or even through a video “AI avatar,” so it seems likely to me that we’ll soon be video conferencing with AIs like ChatGPT-4. While most (but not all) of those involved in the development of AI insist that there is no “ghost in the machine,” a fundamental objective in the development of AI programs of this type is to help humans interact with the underlying database more easily by allowing us to treat the machine as if it were a sentient, conscious entity.

As a conversationalist ChatGPT-4 is quite proficient, and I wonder if perhaps it is too proficient. I think I could easily be tricked into believing I was “chatting” with a human being. Psychologists have already discovered that we tend to be more trusting of information that is delivered to us from what we perceive to be a human source, even if in fact the information is delivered by a computer programmed to mimic human conversation. It may well be that this tendency is “hardwired” into our brains. Regardless, it certainly is not difficult to imagine that this aspect of AI technology could easily be co-opted by humans to spread disinformation throughout the internet.

However, what troubles me more is that ChatGPT-4 is unable to tell me where it is getting the information it provides in that friendly interactive conversational voice. As it admitted toward the end of the interview:

“….my responses are only as accurate and reliable as the data and programming that have been used to train and develop me. The training data that I have been exposed to comes from a variety of sources, and while it has been curated to ensure that it is diverse and representative of human knowledge, there is always the possibility that some of the data may be biased or inaccurate.” 

While I agree with ChatGPT-4 that “it’s always a good idea to verify information using multiple sources when possible,” I’m not sure I know exactly how to do that with this tool. I don’t know what specific sources it accessed to answer my questions or how it weighed them to resolve any conflicts in the data. In other words, I can’t weigh and test the response ChatGPT-4 provides because I can’t work through the source material or the analysis that was used to reach the conclusions contained in its responses. Yet, I confess that I find it more difficult to discount those conclusions because they are offered in such a friendly, disarming and reassuring manner.

It’s tempting to think of ChatGPT-4 as just a “souped-up” search engine like those offered by Google or Bing: a program that can interpret questions and spout back a response that seems relatively rational (even if it can’t identify the specific sources it used to develop that response). Certainly ChatGPT-4 can do that, but it can go further and actually mimic human creativity. In other words, it can mimic human reasoning and decision making to apply the data to reach conclusions and produce a new product.

I returned to my conversation with ChatGPT-4 to ask one last question that illustrates this capability:

Me: Hi, it’s me again. I was wondering if you could do something to illustrate your ability to write a creative piece — perhaps a poem?

ChatGPT-4: Certainly, I can try my best to write a poem for you. What would you like the poem to be about?

Me: Write something humorous about a lawyer trying to carry on a conversation with a computer.

ChatGPT-4: Here’s a humorous poem about a lawyer trying to carry on a conversation with a computer:

A lawyer walked up to a computer

And said, “Hello there, my dear!

I hope you don’t mind if I chat with you,

For I have some questions to clear.”

The computer blinked and whirred a bit,

Then replied in a mechanical tone,

“Of course, I’m here to assist you,

So please, don’t hesitate to phone.”

The lawyer smiled and leaned in close,

But the computer didn’t seem to mind,

For it just kept churning and processing,

As the lawyer talked, unconfined.

The lawyer asked about the weather,

And the computer responded with stats,

But when the lawyer tried to make a joke,

The computer just stared back, flat.

So the lawyer kept on talking,

And the computer kept on replying,

Until at last the lawyer realized,

The computer was never really trying.

For all its knowledge and algorithms,

The computer just couldn’t understand,

The nuances of human conversation,

Or the humor that lies in a lawyer’s hand.

So the lawyer sighed and said goodbye,

And walked away, feeling bemused,

For even with all its technological prowess,

The computer had still been confused.

***

Well, ChatGPT-4’s poem isn’t on par with Carl Sandburg, E.E. Cummings or Robert Frost, but then again none of those guys could produce a poem in less than three seconds, and I did smile as I read the last couple of stanzas.

Yet it is interesting that, of all the humorous topics ChatGPT-4 could have chosen involving a lawyer trying to talk to a computer, it focused on the foolishness of assuming the computer could reason and comprehend the lawyer’s joke. I didn’t tell ChatGPT-4 that I was, until recently, a practicing attorney, or that the poem’s topic needed to focus on my interest in its capabilities to reason.

Nevertheless, the poem it wrote is both self-deprecating and disarming, with a clear message that there is really nothing to fear from the computer’s “technological prowess.” I wondered why ChatGPT-4’s algorithms (created at least in part by humans) chose a poem on that topic, and I wondered what set of “datasets” selected by its human programmers led to a poem that poked fun at the “bemused” lawyer (me).

Part Three – Cybersecurity for Small (Micro) Business and Nonprofit Organizations: Striking a Balance

A Short Guide for Owners and Leaders

Part 1 of this blog described the risks posed to micro businesses and similar-sized nonprofits from a cyberattack. Part 2 outlined a three-prong approach to developing a cybersecurity plan for your organization, starting with identifying the organization’s mission-critical assets and protected third-party data and assessing your organization’s risk level. This part concludes by describing the core elements of an effective cybersecurity plan.

Step Three – Implement a Cybersecurity Plan

The final step of a cybersecurity strategy for your organization is to implement a cybersecurity plan. The specifics of the plan will vary, depending on the outcome of the first two steps discussed in Part 2. However, all organizations will find that their cybersecurity plan must be applied consistently over the long term to afford maximum protection, and every plan should focus both on preventing a successful cyberattack and on mitigating the consequences of one. Finally, the most effective cybersecurity plans recognize that aggressive use of available software technology must be balanced and supplemented with ongoing training.

Password Protection & Data Management

Multifactor Authentication

One of the most obvious risks to your organization is unauthorized use of a password to gain access to your LAN, website, email or internet-connected devices. As discussed in Part 1, most cybercriminals need access to your network to steal or corrupt your organization’s data or software applications. While this may be changing, network access is often achieved by providing the correct password, and of course, if the password is stolen, compromised, easily guessed, or left in an unsecure location, your organization is vulnerable.

You can address some of these risks by changing passwords regularly, using complex generated passwords, not reusing the same password across multiple websites, using a password vault, or adopting other policies designed to make it harder for a password to be compromised. However, a more effective solution is to require multifactor authentication for all devices that access your organization’s website or local area network (LAN).

Multifactor authentication requires both a password and a correct response to a challenge sent to another internet-connected device (usually a smartphone) that has previously been registered to the person seeking access. This means that even if the password is hacked, the organization’s LAN or website cannot be accessed as long as the cybercrook doesn’t also have access to the secondary device receiving the challenge. Multifactor authentication is available for major email and network services, and it has already become a standard feature of most business and government network security. Of course, these security efforts are more easily defeated if you or others use easily guessed passwords (e.g., “password”, “password123”, “12345”, etc.) or fail to keep the secondary authentication device (smartphone or laptop) secure.
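
For a sense of how the “challenge” half of this works, here is a minimal sketch of time-based one-time passwords (the six-digit codes most authenticator apps display), using the open-source pyotp library. In practice, most small organizations will simply turn on the MFA option built into their email or network service rather than implement anything themselves; the account name and organization name below are placeholders:

```python
# pip install pyotp
import pyotp

# Enrollment: generate a per-user secret once and store it securely server-side;
# the user loads it into an authenticator app (typically via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.org",
                                                 issuer_name="ExampleOrg"))

# Login: after the password check succeeds, the user types the code currently
# showing in their authenticator app. Here we simulate that with totp.now().
submitted_code = totp.now()
if totp.verify(submitted_code, valid_window=1):   # allow one 30-second step of clock drift
    print("Second factor accepted.")
else:
    print("Second factor rejected.")
```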

Multiple Levels of Security Within the Organization and Data Encryption

A second method to strengthen cybersecurity is to require additional levels of password protection within the organization’s LAN for sensitive PII or mission-critical data. This is likely to become more important as the organization expands and adds employees, volunteers or contractors. Examples of data that might require an additional level of security include employee Social Security numbers, customer bank or financial account information, and health records. Requiring a second level of password protection for this information is the “digital equivalent” of locking a filing cabinet or desk drawer to discourage intentional or inadvertent access to information that should be limited to a specific group within your organization.

An additional approach that should be taken, particularly if your organization holds protected PII or financial information, is to encrypt sensitive data that is maintained on the organization’s local devices or in the Cloud. Common email services and many operating systems and Cloud-based storage products offer the option of encrypting files, folders or even an entire hard drive or network. Of course, data encryption will protect against unauthorized use or disclosure of the encrypted data only if you have properly protected the password or “encryption key” that is used to decrypt the data.
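
As a concrete illustration of file-level encryption, here is a minimal sketch using the Fernet recipe from the widely used Python cryptography package. The file name is hypothetical, and built-in options (your operating system’s disk encryption or your Cloud provider’s encryption setting) accomplish the same goal without any code:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Generate a key once and store it somewhere safe (NOT alongside the data);
# anyone holding this key can decrypt the files.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive file (hypothetical file name).
with open("payroll.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("payroll.csv.enc", "wb") as f:
    f.write(ciphertext)

# Later, with the same key, recover the original contents.
with open("payroll.csv.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```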

Screen Locks and Time-Outs

Laptops, desktops, smartphones and other devices all contain options to “lock” access to the device if it is left unattended for a few minutes. Particularly for mobile devices, or for any device used by individuals working in an open office environment, enabling this feature is a simple and highly effective way to guard against unauthorized access to the device.

Minimize access points to sensitive data.

This might seem obvious, but all things being equal, the more places you store sensitive personal data, the greater the likelihood that the data will be accessed and compromised in a cyberattack. Having at least one off-site backup of the organization’s critical data and software should be part of an effective overall cybersecurity plan. Yet because of the popularity of automatic Cloud backups of email and computer drives (Google Drive, Apple’s iCloud Drive, Microsoft OneDrive, and many others), it is not at all uncommon to find that at least some of the organization’s data has been stored in multiple locations. Multiple storage sites can greatly complicate the organization’s cybersecurity plan and add the unnecessary burden of maintaining all of the locations where sensitive PII is stored. As part of developing a cybersecurity plan, you should consider whether the added benefit of storing data (particularly sensitive PII) in multiple locations is worth the risk. While Cloud-based storage is relatively secure, most of it can be compromised and accessed with a password or, at best, a password and some form of multifactor authentication.

A related point to consider is whether your organization is keeping only the sensitive PII that it actually needs. Storing multiple backups that are not regularly monitored, particularly on multiple local devices such as desktop and laptop hard drives, can greatly complicate efforts to handle sensitive data properly. For this reason, when you are assessing the need for multiple backups of the sensitive PII your organization keeps, you should also develop strategies and procedures for periodically reviewing that data to determine whether it can be deleted when no longer needed.
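
One way to make that periodic review less painful is to script part of it. The sketch below is illustrative only: the folder name, the one-year retention period, and the crude Social-Security-number pattern are assumptions to adapt to your own situation. It simply flags spreadsheet files that look like they contain SSNs and have not been touched in a year:

```python
import re
import time
from pathlib import Path

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # crude SSN-shaped match
CUTOFF_DAYS = 365                                     # example retention period
SEARCH_ROOT = Path("./shared")                        # hypothetical data folder

cutoff = time.time() - CUTOFF_DAYS * 24 * 3600
for path in SEARCH_ROOT.rglob("*.csv"):
    text = path.read_text(errors="ignore")
    if SSN_PATTERN.search(text) and path.stat().st_mtime < cutoff:
        print(f"Review for deletion: {path} (SSN-like data, untouched for a year)")
```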

Promptly Update Software and Applications

No application or software is “hack proof.” Over time, cybercriminals learn new ways to access and plant malware in even the most carefully constructed software. It literally is a “cat and mouse” game played between software developers and cybercriminals. In recognition of this reality, reputable software companies work hard to identify vulnerabilities in their software and create updates (security patches) to eliminate them. Of course, these security patches work only if they are promptly downloaded and installed on all digitally connected devices that access your network. Therefore, a critical part of every good cybersecurity plan is to download and promptly install security patches as they become available. Most software and applications that run on computers and smartphones contain an option either to update the program automatically or at least to notify you when a new update is available for installation. Generally, these options should be enabled.

Web-traffic Encryption – Virtual Private Networks

Information (data) is transmitted over the internet by wire (such as an ethernet or fiber-optic cable) and wirelessly over the air (through your wireless modem or smartphone hotspot). While most data transmission is secured (meaning that the information is encrypted before it is transmitted over the internet), public networks or websites that do not encrypt data allow cybercrooks to easily intercept and “listen in” on the communication. The federal government’s cybersecurity watchdog, CISA, has downplayed the risk of this type of attack. However, one study conducted in late 2016 found that 28% of public Wi-Fi “hotspots” (at airports, coffee shops, etc.) were unsecured.

So, how do you tell if you are communicating over an encrypted connection? When you navigate to a website, look at the address bar in your web browser. If the address begins with “https” you can be at least somewhat confident that your data is being encrypted as it is transmitted over the internet. However, if it says “http” the data is not encrypted and can be intercepted and easily read by anyone, unless you take additional steps. Many web browsers will warn you when you are communicating over an unsecure connection. As a matter of good cybersecurity practice, these sites should be avoided, particularly if you have not implemented the second level of encryption discussed next.
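
If you would rather check this from a script than by eye, a small sketch like the following (illustrative only, using placeholder addresses) flags plain-http URLs and relies on the requests library’s default certificate verification to catch invalid certificates:

```python
import requests
from urllib.parse import urlparse

def check_connection(url: str) -> None:
    """Warn if a URL is plain http or its certificate fails validation."""
    if urlparse(url).scheme != "https":
        print(f"WARNING: {url} is not encrypted (no https).")
        return
    try:
        requests.get(url, timeout=10)   # verifies the TLS certificate by default
        print(f"{url}: encrypted connection, certificate validated.")
    except requests.exceptions.SSLError as err:
        print(f"WARNING: {url} failed certificate validation: {err}")

check_connection("http://example.com")
check_connection("https://example.com")
```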

Only accessing sites with the “https” designation will greatly reduce the risk of a cyberattack over wireless networks. However, it is not foolproof. Unfortunately, even an “encrypted” Wi-Fi connection can be defeated through a process known as SSL stripping. This involves tricking your computer into removing the encryption protocol, in effect downgrading your communication from “https” to “http” without your knowledge.

For this reason, if your organization relies on wireless networks or if you and others often work remotely using public Wi-Fi, you may want to consider using virtual private network (VPN) software. VPN software can run on a single local computer, a LAN, or on a digitally connected device such as a tablet or smartphone. Once installed and activated, most VPN software offers two additional levels of protection for internet access.

First, it masks the originating address of the communication, making it difficult for a cybercriminal to determine what network is being accessed by the user. This is done by causing the transmitted data to go from the computer to the VPN provider’s server before it continues on through the internet to the user’s ISP and the destination website. For many, this feature of VPN is most important because it may offer a higher degree of privacy, making it more difficult for websites or government entities to track web browsing activity. 

However, there is a second advantage to a VPN. VPN communications between you and the VPN provider are encrypted. In other words, even if a cybercrook is able to “strip” the “https” encryption, they will only be able to see data that has been encrypted using the VPN program. No technology is completely secure from cybercriminal hacking or “eavesdropping,” but a VPN connection provided by a reputable provider is very secure, and it’s a relatively inexpensive way to guard against this type of cyberattack.

If you decide a VPN is a worthwhile investment, VPN software is offered by a number of private companies, and it is important to pick one that best meets your needs. You will need to do some investigation and find articles that evaluate VPN providers and offer advice on how to pick a provider best suited for your organization’s needs, but keep in mind that some of these articles focus more on privacy (the first advantage of a VPN) rather than your organization’s objective — defeating a cybercrook’s attempt to intercept and read the data being transmitted. For your organization, the primary concern may be the number of servers the VPN provider has and the speed and capacity of those servers. This is important because once the VPN is activated, all of your communication over the internet must pass through your VPN provider’s server. If the provider does not have sufficient network capacity, the speed and reliability of your internet connection will be significantly reduced.

Addressing “Human” Vulnerabilities

It would be nice if you could protect your organization from cybercrooks just by buying additional software. Unfortunately, relying on software is at best half the solution. The other half is dealing with the “human” side of cybersecurity. The reason is simple: even the most robust software technology can be defeated or rendered useless by bad actors inside the organization, by failing to properly use the cybersecurity software tools that are available, or by simply failing to recognize a cybersecurity attack. This section focuses on ideas for reducing your organization’s human vulnerabilities to a cyberattack.

Background checks for those who access the network.

Obviously, you want your organization to grow and become more successful, but as that happens it becomes more important to know who has access to your connected devices and data. A good cybersecurity plan should include a set procedure for conducting background checks on all prospective employees. This should include criminal record checks, credit checks, and verification of employment and education. Even if you are the only “employee” in the organization, the same considerations apply to others, such as vendors, customers or volunteers, who have access to your organization’s network. Of course, your background check may not be as extensive as what you would use when evaluating a person for employment, but depending on the nature of the contact, the role the individual or entity will play, and the level of access to your organization’s data, you will want to know enough about the individual’s background to feel reasonably certain they will not put the organization’s data or its connected devices at risk of a cyberattack.

Implement cybersecurity policies and procedures.

Even if you are a sole proprietor or the “staff of one” in a local nonprofit, it is important to consider and implement common sense policies and procedures to minimize the risk that your organization will fall victim to a cyberattack. Items to consider include:

  • Setting a schedule to regularly check all critical software for security patches, and immediately installing critical security patches when notified by a software provider.
  • Developing a policy to create robust passwords and to change passwords regularly (a minimal password-generation sketch follows this list).
  • Avoiding loading personal software or email on a computer or other device connected to the organization’s network.
  • Avoiding use of the organization’s email address for personal communications.
  • Installing screen password locks on all of the organization’s desktops, laptops and tablets.
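
For the password policy item above, here is a minimal sketch of generating strong random passwords with Python’s standard secrets module; the length and character set are illustrative choices, and a reputable password manager accomplishes the same thing:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```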

Admittedly not all of these policies will be popular, and like many things in life, you may decide that the level of risk your organization faces does not justify implementing some of them. That of course is up to you as leader of the organization. However, before making any final decision, consider whether some or all of these steps may be mandated by clients, customers or suppliers with whom you are dealing.

Educate yourself and everyone who has access to the organization’s digital resources.

Hopefully one of the things you have learned from this blog is that cyberattacks on businesses, organizations and government have continued to evolve to counter efforts to make software and networks less prone to attack. That will certainly continue. For that reason it is important that you commit to remaining up to date on evolving cybersecurity risks. Fortunately, there are a number of resources available to assist in that task, including those from the government agencies and software providers mentioned in Part 2.

You also should consider ongoing training and reminders for employees or others who regularly access your network, using resources developed specifically for that purpose.

Develop a Cyberattack Recovery Plan

You may find this part to be discouraging. After all, if you have taken all of the previous steps to protect your organization from a cyberattack, it’s sobering to think that you still aren’t protected. Of course, that’s not entirely true. By implementing the previous steps you will have made it much more difficult for a cybercrook to access or disable your network, or steal data. However, just as the best physical security and alarm systems don’t provide 100% protection against the risk of theft or loss, even the best cybersecurity strategies can be – and are – defeated each day. Just as you take steps to deal with that reality for your physical assets, it’s important to consider how to deal with a successful cybersecurity attack as well. Here are three ideas you should consider.

Offsite Secure Backups

Earlier, in developing your cybersecurity plan, you identified the “critical” data and applications needed to operate your organization. As part of your Plan, you need to arrange for these critical items to be regularly backed up and securely stored in a safe location. How often you decide to back up the data will vary, but obviously data added after the last backup likely will not be recoverable, so it may make sense to back up daily or at least weekly.

Nearly all major software providers offer the ability to back up data to remote “Cloud-based” servers. Some providers offer the ability to automatically back up data on an hourly, daily or weekly basis, together with the option of accessing earlier backup versions. This last feature can be useful if you are concerned that an “infected” file may have been downloaded onto your network or computer prior to your last backup. Of course, there is always a possibility that your automatic backup system may not initiate for some reason, so as part of your Plan you will want to periodically check to make sure the backups are occurring as expected and that they can be accessed.
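If you are comfortable with a little scripting, the sketch below (in Python) illustrates one way the “check that backups are occurring” step could be automated. The folder path and the 25-hour threshold are assumptions made up for the example; most Cloud backup services also provide their own status dashboards and alerts.

```python
# Minimal sketch: warn if the newest file in a backup folder is older than expected.
import time
from pathlib import Path

BACKUP_DIR = Path("/path/to/backup/folder")  # hypothetical backup location
MAX_AGE_HOURS = 25                            # assumes at least one backup per day

def newest_backup_age_hours(backup_dir: Path) -> float:
    """Return the age, in hours, of the most recently modified file in backup_dir."""
    files = [p for p in backup_dir.iterdir() if p.is_file()]
    if not files:
        raise FileNotFoundError(f"No backup files found in {backup_dir}")
    newest = max(f.stat().st_mtime for f in files)
    return (time.time() - newest) / 3600

if __name__ == "__main__":
    age = newest_backup_age_hours(BACKUP_DIR)
    if age > MAX_AGE_HOURS:
        print(f"WARNING: newest backup is {age:.1f} hours old - check the backup job.")
    else:
        print(f"OK: newest backup is {age:.1f} hours old.")
```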

Develop a strategy to notify third parties of a cyberattack.     

This step is most relevant for organizations that maintain sensitive PII (described earlier in Part 2), that have an ethical obligation (such as an attorney) to maintain confidentiality of client data, or that have entered into a contract to maintain the confidentiality of third-party data. Organizations in these situations need to consider and include in their plan a procedure to document and update where third-party data is stored, and a method to easily identify the businesses or individuals who need to receive notice of a cyberattack.

Consider cybersecurity insurance.

It’s probably apparent at this point that a successful cyberattack might be an expensive proposition for your organization, not only from lost revenue but from third party claims for collateral damages as well. You likely insure against the risk of loss of your organization’s physical assets, so it may occur to you that insurance against losses from a cyberattack might be a good idea as well.

Many companies offer insurance policies covering some losses incurred in a cyberattack, and for some organizations insurance can be part of a comprehensive cybersecurity plan. However, cybersecurity insurance may not be appropriate for every organization, and as part of preparing the plan for your organization you need to carefully weigh the pros and cons before purchasing a policy.

Cybersecurity insurance generally will insure your organization against some losses arising from interruptions to normal operations, the cost of notifying third parties of cybersecurity attacks, and the cost of defending lawsuits from third parties for damages arising from the event. However, these policies typically will not insure against losses arising from criminal activities by your own employees, or for the loss of physical or intellectual property resulting from a cyberattack.

You can begin determining whether cybersecurity insurance is right for your organization by talking with your insurance agent. Generally, organizations that store significant amounts of third-party personal information, and those most at risk from business or operational interruption if their network is compromised, will find cybersecurity insurance most useful. However, cybersecurity insurance is NOT a substitute for a good cybersecurity plan. Be aware that if you decide to purchase a policy, you can expect the insurance provider to require that you institute the policies and procedures outlined in this blog as a condition of providing coverage. In other words, cybersecurity insurance provides an additional level of financial protection, but only after you have implemented a good cybersecurity plan.

Cybersecurity – Is It Worth the Effort?

These three blogs have described the risks your organization faces from a cyberattack and outlined the steps you should take to implement a cybersecurity plan to defend against one. Operating a business or nonprofit on a shoestring budget is extremely challenging and requires leaders to constantly set priorities and make trade-offs. Success often depends on not letting “perfect be the enemy of good enough,” and the amount of time and effort organizations need to put into their cybersecurity plan will vary. However, it is not an exaggeration to say that every organization needs to do something. You can confirm that by simply imagining how your organization could operate if your network, records, computers and even your phone all stopped working. Unfortunately, even for very small organizations the risk of an attack is significant, and the consequences of being unprepared likely will be catastrophic. While it is not possible to completely secure your digital assets, the steps outlined here can significantly reduce that risk and mitigate the damage in the event of a successful attack. For that reason, even for the smallest business or nonprofit, it’s worth the effort to implement an appropriate cybersecurity plan.

Part Two Cybersecurity for Small (Micro) Business and Nonprofit Organizations: Striking a Balance


A Short Guide for Owners and Leaders

Part One of this Blog explained the risks your organization faces from a cyberattack, describing the most common objectives and the primary ways cybercrooks attack microbusinesses and similarly sized nonprofits. You learned that successful cyberattacks often involve tactics that are designed to deceive, along with sophisticated malicious software, and that potentially any device that connects to the internet, or to your local area network (LAN) could be an entry point for a cyberattack.

While the risks posed to your organization by cybercrooks are real, and no solution will be 100% effective, there are several things you can do to greatly limit the risk posed by cyberattacks. The objective of this blog and the next one is to describe a strategy you can use to secure your organization against a cyberattack, and help you mitigate the damage done even if an attack is successful.

There are many good educational resources available online that provide specific guidance to assist in understanding how to spot a cyberattack, and more are being developed all the time. Examples include the resources offered by the U.S. Small Business Administration, the Federal Communications Commission and the Missouri Cyber Security Office, as well as commercial software providers such as Microsoft. In addition, resources published by the PCI Security Standards Council, the organization that works to secure the processing of credit and debit card payments, can help you identify ways to reduce this significant area of risk for many microbusinesses and nonprofits.

These tools and resources will be essential in implementing a comprehensive cybersecurity strategy. However, using them effectively requires a strategy tailored to your organization’s unique vulnerabilities. The next two parts describe one process you can follow to develop such a strategy, so that you can make the most of the available tools and resources in addressing your organization’s cybersecurity needs and risks.

Part 2 – Developing a Cybersecurity Plan for Your Organization – Beginning the Process

Given the number of cybercrooks out there, and the many strategies used to carry out an attack, the task of securing your organization may seem daunting, and it is easy to become overwhelmed. One way to keep yourself on track is to break down the plan for securing your organization into three steps: Identify Critical Data; Assess Your Risk Level; and finally, Implement an Ongoing Cybersecurity Strategy.

Part 2 of this Blog addresses how to identify your organization’s critical data and assess your primary risks and vulnerability to a cyberattack. Part 3 will discuss how to use this information to implement an effective cybersecurity plan that is tailored to your organization.

Step one – Identify Critical Data

This step may seem unnecessary, but overlooking it could sabotage your efforts to create an effective plan, or cause you to spend far more time than is warranted working on issues that do not constitute a substantial threat to the organization. The reason is simple: in order to mount an effective defense against a cyberattack, you must first know what data and applications need to be secured. For this reason, your first step in developing an effective cybersecurity plan is to evaluate your situation by asking two questions: First, what data and software are “mission critical” to the organization? And second, what “third-party data” do we store and retain that must be protected? Taking this step is critical because leaders of microbusinesses and similarly sized nonprofits simply do not have the luxury of unlimited staff and resources. They must focus their cybersecurity efforts on what is most important based on their unique situation.

Identify “Mission Critical” Information and Software Applications

Identifying what is “mission critical” to your organization requires a little bit of imagination, as well as some investigation. A good way to start is to imagine what would happen if you discovered one morning that your entire organization had been subject to a successful ransomware attack. You have just grabbed a cup of coffee, turned on your desktop or laptop, and were faced with a ransom demand screen locking you out of the device.


You open your smartphone and tablet and find that they have the same message! This means you can’t access your documents, such as Word and Excel Templates, customer lists, records and forms. Access to everything saved to a computer or stored online has been blocked. You really panic when you attempt to access your company email account and discover that it has also been hacked and the password has been changed! 

Now, ask yourself: what information (data) is critical to the operation of your organization over the next day, the next week, and the next month? What “software” (apps, programs and applications) do you use daily to generate forms, invoices and correspondence in your organization? The answers likely would include things like customer lists, templates, custom business software, and a variety of transaction records. You’ll likely decide that some data and applications truly are “mission critical” (things you simply cannot operate at all without immediate access to) while others you could work around for at least some period of time.

One point to remember, though, is that your list likely will be different from the one another organization would prepare. For example, an architect or engineer’s ability to access work performed for a client five or ten years ago may be the most important competitive advantage they have in gaining repeat business for improvements or modifications to a project. On the other hand, that same data may be simply taking up space on another business’ computer hard drive.

Identify protected third-party information.

Once you have identified the data and information critical to your organization’s operation, you then need to determine what data your organization maintains that relates to third parties (customers, suppliers, employees and independent contractors). The previous blog described ways cybercrooks use personal information to compromise computer networks and rob innocent third parties. For that reason, you need to identify data you have retained that could be exploited in a cyberattack to injure these third parties.

This third-party data is often referred to as personally identifiable information (PII). The Department of Homeland Security defines PII as any information that permits the identity of an individual to be directly or indirectly inferred. Sensitive PII includes social security numbers, driver’s license numbers, alien registration numbers, financial account and medical records, biometric data, or an individual’s criminal record.

Of course, it is important to identify what sensitive PII your organization holds in order to protect others against losses from a malicious cyberattack. It also is important because nearly every state has enacted laws mandating disclosure to these third parties if your organization is the victim of a cyberattack that likely resulted in the disclosure of their sensitive PII to a cybercrook. Missouri’s statute can be found here.

In addition, there are laws and regulations that impose requirements on specific industries, such as finance and health care, and these will vary, but the risk to your organization is much the same: a failure to safeguard this third party sensitive PII may lead to its disclosure, and in turn to a successful attack directed against the third party. To protect these individuals, your organization will need to notify them of the attack. Depending on the amount of information involved, this could be quite expensive and time-consuming. It almost certainly will damage your organization’s reputation.

While you may be able to quickly identify the type of sensitive PII your organization retains, determining where that information is located and stored can be a challenge. Most organizations have multiple devices (computers, tablets, servers, smartphones and others) that store the data locally. In addition, this information often also is stored remotely on devices maintained by third parties, in what has come to be known as “the Cloud.” Since data backups to the Cloud can be initiated automatically, you may find that there are multiple copies of sensitive information stored in multiple locations. Depending on your organization’s size – and most importantly, whether it likely receives, maintains and stores sensitive PII – you may want to look into using specialized software designed to search various locations and identify where your organization has stored sensitive PII, both on local devices and in the Cloud.
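To give a sense of what that kind of specialized software does, here is a very small sketch (in Python) that scans a folder of plain-text files for one pattern that looks like a Social Security number. The folder path and the single pattern are illustrative assumptions; commercial PII-discovery tools cover many more data types, file formats and Cloud storage locations.

```python
# Illustrative sketch: flag text files that appear to contain an SSN-like pattern.
import re
from pathlib import Path

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., 123-45-6789
SCAN_ROOT = Path("/path/to/organization/files")      # hypothetical location to scan

def files_with_possible_ssns(root: Path):
    """Yield paths of .txt files under root that contain an SSN-like pattern."""
    for path in root.rglob("*.txt"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip files that cannot be read
        if SSN_PATTERN.search(text):
            yield path

if __name__ == "__main__":
    for hit in files_with_possible_ssns(SCAN_ROOT):
        print("Possible sensitive PII in:", hit)
```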

Step Two – Assess Your Risk Level

Once you have identified the “mission critical” information and applications and the sensitive PII your organization holds, you can move to the second step of your cybersecurity strategy: assessing how well that information and those applications are protected from a cyberattack. Since a cyberattack is most likely to be launched by someone who is accessing the internet, a good way to begin is by examining how your organization interacts with the internet.

Inventory internet-connected devices

One place to start this effort is to catalogue the devices that can access the internet. Of course, this will include the desktops, laptops, tablets and smartphones the organization owns and maintains. However, that may only be the first step, and it may not include your most vulnerable access points for a cyberattack. For example, you or your employees may access the organization’s LAN remotely from a home computer, smartphone or tablet. You may also have granted customers, patrons or suppliers special access to your network resources. Each of these is a potential “point of access” for a cyberattack. As you develop an appropriate cybersecurity plan in Part 3 of this blog, you will need to take these devices and entry points into account, as well as your LAN and the devices attached to it.

Addressing customer credit or debit card payment information

Most businesses and nonprofits must be able to seamlessly accept payments and/or donations with a credit or debit card. However, it is very important to understand what responsibilities your organization has assumed through its credit or debit card payment arrangements, and how that risk can vary depending on how the organization has structured its payment receipt system.

In 2004, the major payment card companies created the “Payment Card Industry Data Security Standard” – usually referred to as “PCI DSS.” The PCI DSS establishes industry standards for businesses and organizations that accept, transmit or store payment card information. It is not a federal or state law, although as previously discussed, separate federal or state laws or regulations may require disclosure and create liability issues for your organization if PII is compromised in a cyberattack. The data security standards for PCI compliance vary depending on the payment brand (Visa, Mastercard, American Express, etc.) and the number and size of credit or payment card transactions. An organization that is not PCI compliant may lose the right to accept credit or payment card payments and, more importantly, face very substantial fines and penalties.

That said, most PCI compliance obligations are triggered only if the organization handles, transmits or stores credit or debit card information on its own network. Fortunately, most small organizations can avoid many of the ongoing requirements to remain PCI compliant, and still offer customers or donors the convenience of using credit and debit cards, by using a payment card processor company. In a payment transaction these companies act as an “intermediary.” Once the transaction is initiated, the exchange of protected information (PII) is conducted on the processor’s network rather than the organization’s. The processor takes the payment card information directly from the customer and credits the organization’s account with the appropriate payment. Since the processor’s network handles the mechanics of the payment processing and stores that information as needed, the organization does not handle, transmit or store any protected data relevant to the transaction.

Of course, if your organization collects or stores payment data by some other means, such as requesting it directly from the customer or donor, that short-circuits the protection afforded by using the payment processor. The organization then must handle and secure the sensitive PII in accordance with the PCI DSS standards, and it potentially could be subject to significant fines and penalties if the sensitive PII it has stored is compromised through a cyberattack. For these reasons, organizations will want to be extremely cautious about collecting and storing any payment card information.

Cybersecurity and your organization’s website

Your organization almost certainly has some sort of “online presence,” whether through a commercial website provider or just a page on a social media site. As with other aspects of cybersecurity, your organization’s risk of a cyberattack will vary, and will depend in large part on the level of access offered to the public through the website. Additionally, in the case of a ransomware attack, how heavily your organization relies on its website for day-to-day operations will determine how much of your cybersecurity plan needs to focus on website security. As a general rule, if your organization has a website, you’ll want to spend time understanding the cybersecurity risks associated with the site, even if you rely on a third party to prepare and maintain it for your organization.

The risk of a successful cyberattack through your website depends in part on the software and cybersecurity tools used by the company that hosts the site for your organization. Websites that regularly update security software are at less risk. However, if your website permits customers or users to upload any files or documents onto the site, you will need to be particularly diligent to ensure that those files are screened for malware, as this feature gives a cybercrook a potential path to launch a malicious attack on the website. Additionally, if your website provider offers options to accept payment cards, you’ll need to assess whether it is PCI DSS compliant.

The Final Step

Now that you have learned what a cyberattack is and how it is carried out, determined what data and applications your organization needs to protect, and reviewed your organization’s unique risk profile, you are ready to focus on ways to protect your organization. This is the focus of Part 3 of this Blog.

Part 1 Cybersecurity for Small (Micro) Business and Nonprofit Organizations: Striking a Balance


A Short Guide for Owners and Leaders

Inflation, supply chain issues, COVID, staff shortages, rising wages – and you want me to spend time thinking about cybersecurity?

Well, yes – you should – at least just a little. That advice applies even if it’s just you “pulling the levers” to keep your small business or nonprofit operating, and it applies even if you have outsourced all of your website, email management, and credit card processing to a third-party provider.

This is the first of a three-part blog specifically targeted to cybersecurity for very small organizations: owner-operated businesses with no more than 5-10 employees — sometimes called “microbusinesses” — as well as similarly sized nonprofits. This part (part one) will describe what a cyberattack is, how it is carried out and ways it can cripple or destroy your organization. While almost everyone understands that large businesses and the government are at risk of cyberattacks, in recent years cybercrooks have focused on smaller organizations, in part because these crooks know that you may lack a dedicated IT staff to defend against the attack. Your organization is assumed to be “low hanging fruit” for cybercrooks, but if you understand the risk and take some reasonable steps to address it, you can greatly enhance your organization’s ability to avoid or recover from a cyberattack.

Just because your organization is small does not mean it is less vulnerable to a cyberattack. Unless you are prepared, an attack can cripple your operations and do irreparable harm to your reputation. Microbusinesses already face challenges that lead 30% of them to fail within the first year of operation, and similar failure rates apply to nonprofits as well. As the leader of a small organization with limited resources, the time spent securing your business against a cyberattack, and preparing to recover from a successful one, could be critical to the organization’s survival.

Creating a strategy to address this risk doesn’t require a substantial amount of time, and once it is in place, effective cybersecurity is much like protecting your organization’s physical assets. You have locks on the doors and windows, perhaps even a security monitoring device or service to discourage criminal activity and alert you in the event of a break-in, and worst case – you’ve insured as best you can against potential losses should those steps fail. Addressing cybersecurity is much the same, except that instead of physical assets (building and equipment) you are working to protect the information contained on your internet-connected devices and the software that keeps those devices operating.

With that in mind, keep on reading to learn more about the risks your organization faces, and how to protect it.

Part One: What is at Stake — The Risks of a Cyberattack

The purpose of cybersecurity is to prevent or limit damage to your operations from a cyberattack.

Cyberattacks can cripple your operations and put your organization’s viability at risk. A January 2023 article states that 43% of small businesses surveyed had suffered a cyberattack, and that cyberattacks are expected to cost 6 trillion dollars. The risk is not limited to your organization. Even if you can quickly recover from a cyberattack, others can suffer significant harm. Confidential information relating to third parties that is stored on your network or computers can be stolen and exploited, or your “infected” network and devices can spread harmful programs to customers, donors and suppliers. In addition to badly damaging your organization’s reputation, such an incident may expose it to lawsuits, fines and penalties for failing to properly secure the information on the network and connected devices.

There are many types of cyberattacks, with confusing names and acronyms, and to make matters worse they are constantly changing and evolving as cybercrooks find new ways to achieve their goals. That said, to develop effective strategies to prevent or deal with a cyberattack it is useful to understand what a cyberattack is, what the attacker wants, and some of the common strategies used by cybercrooks that pose a particular risk to small businesses and nonprofits.

What is a cyberattack?   

In these blogs, the term “cyberattack” means an attempt to gain unauthorized access to a digital network and/or to the physical devices that are connected to that network in order to steal information or to disable your operations by locking, corrupting or destroying critical data and applications.

A “digital network” is the mechanism your organization uses to transmit “data” (e.g., email, files, credit and debit card information, video or audio) from one physical location to another. The “internet” is a digital network with more than 5 billion users and, by some estimates, 50 billion connected devices. However, the collection of switches, routers and modems your organization uses to connect computers and other devices to servers and the internet is also a digital network, usually called a local area network or LAN. Unlike the internet, a LAN connects a limited number of devices with each other, and it most often acts as a gateway that some or all of these devices use to access the internet. Typically, a cyberattack originates from a device located somewhere on the internet, and it succeeds by gaining access to your LAN or to one of its connected devices.

What physical devices need to be secured against a cyberattack?

An important point to understand here is that any device connected to your LAN (or to the internet directly) is at risk in a cyberattack. This of course includes desktop computers, laptops, network servers, switches and routers – but it also includes a smartphone or a tablet, smart appliances, surveillance cameras or sensors that are part of your alarm system or inventory control, perhaps even your wristwatch. All of these devices have the capacity to interact and connect with your LAN and the internet, and thus all of them are at risk of a cyberattack.

What are the common objectives of a cyberattack?

Most cyberattacks are intended to achieve at least one of these goals.

Extortion –

According to a recent Forbes article, the most common cyberattack threat facing small business in 2023 is ransomware. As the name implies, in a ransomware attack the cybercrook attempts to load some type of malicious software onto your device or your LAN to encrypt the data or otherwise block access. In other words, the attack “locks you out” of your device – or all devices on the network. Once that is accomplished, the cybercrook demands a payment (ransom), with the threat that failure to pay will result in destruction of the data, or disclosure of the personal or financial information of the organization or third parties to other criminals. A derivative of this type of attack could involve theft of sensitive data or third-party information, again with the threat that the cybercrook will disclose it to criminals if the ransom payment is not made.

Theft –

Attacks of this type can take one of several forms; the first category is theft by deception. Here the cybercrook’s goal is to convince you that they are someone else and deceive you into sending them money or valuable information (e.g., credit card or bank account numbers – or passwords to protected networks). The “great-grandfather” of these types of attack is the infamous “Nigerian prince email” (send a relatively small amount of money to aid a Nigerian prince to gain access to a far greater sum that will be shared with the victim). Surprisingly, even though this ruse has been around for decades, it still is used to steal hundreds of thousands of dollars each year. Phishing, discussed in more detail below, was one of the most common forms of this type of attack in 2022.

These types of attacks have greatly evolved over the years. More sophisticated versions used today involve false credentials and other nefarious approaches to impersonate a known person or business, which are used to trick the unsuspecting into sending money to the cybercrook. The attack is successful because the request itself seems reasonable, such as an email request from a finance officer to move funds, or a request for payment of an invoice from a trusted supplier. At times stolen emails are used, making it impossible to determine from the communication itself that it is illegitimate, unless the recipient decides to verify its authenticity by phone or some other means. A second class of theft-by-deception cyberattacks seeks disclosure of valuable private information (e.g., credit card or bank account information, email passwords, etc.) rather than a transfer of funds.

Sometimes the objective of a cyberattack may not be money or data, but instead use of your computer itself. Cryptojacking is the most common objective of these attacks. Here the cybercrook seeks access to your computer or network so that the hardware can be used for “crypto mining” – solving extremely complex mathematical equations to create digital currency. While it may seem farfetched that your small organization might be subject to this type of attack, it should not be discounted. A 2021 report on cybersecurity threats prepared by Cisco found that nearly 70% of organizations studied had at least one computer that had been successfully “hijacked” for use in an illegal crypto mining operation.

This is not a victimless crime; unauthorized crypto mining can greatly reduce the efficiency and useful life of computers and related hardware and result in higher electricity bills. More troubling, since the cybercrook can only hijack your machine by placing software on it, these cyberattacks also usually involve other objectives, such as eventually triggering ransomware or stealing confidential data as well.

Monitoring and Exploitation –

This last category of cyberattack (monitoring and exploitation) typically is undertaken in advance, or in conjunction with one of the others, such as extortion or theft. However, sometimes the cybercrook’s initial goal is simply to gain access in order to eavesdrop and monitor your organization’s online activity. The individual behind the attack might just be a maladjusted “cyber-voyeur” who enjoys the thrill of breaking into and looking at things that are none of their business. On the other hand, news reports regularly surface stories of state-sponsored cyberterrorist attacks that target government websites for purposes of espionage. However, the greatest risk to your organization posed by this form of attack is likely to be that it allows the cybercrook access to confidential information that can be used and exploited at a later date.  

How is a cyberattack carried out?

All digital devices use computer programs (drivers, apps, software, algorithms, etc.) to operate. These programs let us communicate via video or audio, monitor business inventory, transfer funds and process credit card payments, create and transmit email and text, and perform many other tasks that keep our organizations running. To achieve one or more of the purposes of a cyberattack, the attacker has to gain access and, in some cases, add a program or modify an existing program on the LAN or on a computer or other device that is connected to the LAN. These programs or program modifications are referred to generally as “malware,” and they include viruses, trojans, adware and ransomware.

You may think that this occurs only through highly sophisticated exploitation of a flaw in the device’s operating system, undertaken without any action on your part. However, while there have been successful attacks of this type in the past, readily available network and computer defenses make this much less likely today, particularly if existing software is regularly updated.

While the computer algorithm or “malware” used to implement a successful cyberattack may be complex, according to the 2021 Cisco report previously mentioned, 9 times out of 10 those programs were introduced to the computer or network by actions taken by you or someone in your organization! Further, while it is certainly possible for a cyberattack to be initiated by a disgruntled current or former employee with physical or remote access to your organization’s network, it is far more likely that access will be unwittingly granted simply by opening an attachment on an email, clicking on a link in a text or on a website, or simply replying to a seemingly legitimate request from a customer or colleague.

This method of attack is generally referred to as phishing (pronounced “fishing”), and there are many variations. However, the goal of all these attacks is to trick the recipient into taking some action that enables the malicious program to be downloaded so that the attack can proceed. The attack itself might involve any one of the three objectives described above. There are many examples and derivatives of phishing, as well as sites that offer cues you can use to recognize and avoid them.

A second less common, but effective means of launching a cyberattack can occur when the attacker intercepts and accesses a wireless network connection. This wireless connection could be the wireless modem used to connect devices at your place of business, or it could be the public wireless network at the airport, coffee shop or Walmart parking lot. In each case the cybercrook uses various means to cause your device to communicate unencrypted information, so that it can be read and later exploited.

Is there any way to defend against a cyberattack?

This blog is by no means an exhaustive discussion of the types of cyberattacks; there are others. However, those discussed do comprise the most common types small organizations face. The bad news is that these attacks continue and are becoming more sophisticated. The good news is that with a little planning and thought you can greatly reduce your risk of becoming a victim.

Part 2 of this blog will describe a strategy your organization can use to develop a cybersecurity plan that will minimize your risks, and speed recovery even in the event of a successful cyberattack.

Are Low Earth Orbit (LEO) Satellites the Answer to Missouri’s High-Speed Internet Access Problem?


(De-mystifying Some of the Technology Behind Your Internet Connection)

By Marc McCarty

When I talk to people around the state about bringing high-speed internet access to underserved communities, invariably someone mentions “the SpaceX Starlink thing that Elon Musk is building.” Musk is not the only person interested in building low-earth orbit satellite (LEO Satellite) networks. Amazon (Project Kuiper), OneWeb, and TeleSat (Lightspeed) all have LEO Satellite projects planned and in some cases in limited operation. But like electric cars and passenger-capable spacecraft, SpaceX’s “Starlink” product seems to be ahead of the competition, and it’s difficult to match Elon Musk for the “cool factor.” After all, who else has the moxie to launch a cherry-red electric roadster past the orbit of Mars, just to prove he can do it?

So, Is Starlink, or some other similar satellite technology that delivers high speed internet from outer space, the answer to the high-speed internet access problem in Missouri and other states?

It’s certainly worth asking the question, because over the next several years the federal government, along with states and localities, is likely to provide private and public internet service providers (ISPs) more than $60 billion to help fund the deployment of various types of broadband internet infrastructure, all in an effort to finally bring high-speed internet access to every location in America that could reasonably need it. With that much money at stake, getting the best value for the public’s investment will be critical. Past experience has taught valuable lessons on the need to wisely deploy public funds for internet infrastructure investments.

While fiber-optic cable seems to be the one infrastructure technology most capable of supplying high-speed internet service that can expand to meet future consumer and business needs, it is not without drawbacks. The cost of installing fiber to each residence and business can be more than $20,000 per mile even in rural areas with few obstructions, and most recently, wait times for delivery of fiber-optic cable and equipment can be well in excess of a year! The promise of connecting to the internet today – or within a few months – at speeds well in excess of the FCC’s current definition of “broadband” using a small antenna and some electronics that costs about $500, is very appealing, even if the monthly cost for the service is a bit more than fiber-optic cable or other wired internet options.

But what is LEO Satellite Internet? How does it work? What are the prospects that it might actually be the key component in efforts over the next few years to bridge the digital divide?

A Short Primer on Satellite Internet

At the outset, it’s important to distinguish LEO Satellite Internet from geostationary earth orbit satellite internet (“GEO Satellite Internet”). Both technologies are “fixed” wireless internet, because they both access the internet by transmitting a signal through the air (and outer space) to a “fixed” antenna located at or near the customer’s premises.

However, this is where the differences begin. GEO Satellites orbit the earth 22,236 miles above the equator. At this altitude, a GEO Satellite takes 24 hours (one day) to complete a single orbit, and because the orbit is directly above the equator, to a ground observer the satellite appears to remain “fixed” at a single point in the sky.

In Missouri, most folks with a clear view of the southern sky can subscribe to GEO satellite internet from ViaSat or HughesNet. These companies provide internet service using three or four large satellites positioned over the equator with a line-of-sight view of North America. On the ground, a subscriber with a small antenna and a clear view of the southern sky can point the antenna at one of these satellites to transmit data from that location up to the satellite, and from there back down to an earth-based antenna connected to a traditional earth-based internet connection operated by the satellite internet service provider.

LEO Satellite Internet also transmits internet signals to and from a fixed antenna located at the customer’s residence, but LEO Satellites orbit much closer to the earth. These orbits vary, but generally are around 150 – 600 miles above the earth. At this altitude, to a ground-based observer the satellite appears to cross the sky from horizon to horizon in just a few minutes. Once the satellite passes below the horizon or behind some other obstruction, the signal carrying the internet data is lost. For this reason, LEO Satellite Internet requires a “fleet” of satellites trailing one another in predetermined orbits. In this way at least one satellite is always within line of sight of the antenna at the customer’s home or business. As the connection with one satellite is lost, another comes within range and takes over the communication.

While GEO Satellite Internet is widely available and may seem ideal for many areas, particularly isolated rural areas with a good view of the southern sky, in practice it hasn’t been all that popular.  Complaints include limits on the download speed once certain monthly data transfer limits are exceeded, reliability of the signal in bad weather (snow and/or heavy rain), and high monthly subscription costs. For the most part, these shortcomings are a direct result of limitations imposed by physical laws governing the transmission of data, as well as the capacity of the satellite to handle subscriber demand for the service needed to run new internet applications. One of the reasons LEO Satellite Internet has received so much attention is that it may be able to work around some, but probably not all, of these limitations.

 A Little Science  

Understanding the physical laws that apply to the internet and working to minimize those limitations is the domain of scientists and engineers. Trying to talk their language to provide even a high-level overview of the challenges they face can quickly result in a bunch of “technical gobble-de-goop.” However, others have developed some analogies that can help illustrate these basic concepts, and even nontechnically trained folks like me usually can follow them. Analogies like these do not provide answers on how best to “close the digital divide.” However, they can help in understanding the issues and challenges associated with different types of internet infrastructure, and they may help all of us, and especially our public officials, make more informed choices about the technologies most appropriate for publicly funded investment.

How Big is the Pipe?

The role of all internet infrastructure is to transport “data” from one physical location to another. Data is a series of 1’s and 0’s arranged in a specific sequence. An internet-connected device can decode this sequence and convert it into usable information. Things like email, texts, video calls or movies, audio recordings, large computer files (even this article) all can be converted into data, transmitted through the internet to another location, and then converted back into a usable format. Those wanting to learn a little more about how the internet works, and how it came into existence might enjoy this narrated presentation.

For purposes of understanding how different types of internet infrastructure work, it is useful to think of data moving through internet infrastructure as similar to water flowing through a pipe or a tube at a constant speed. Just as the amount of water that theoretically can move through a pipe in a given period of time will increase or decrease in relation to the diameter of the pipe that carries it, the amount of data that can be transferred through internet infrastructure will vary depending on the type of infrastructure used.

In other words, in a given period of time, a pipe that is 6 inches in diameter won’t transport as much water as a pipe that is 12 inches in diameter. The same concept holds true for data moving through the internet. Certain technologies can be engineered to carry more data (more information) each second than others. This theoretical capacity to transfer data is called “bandwidth.” When we talk about internet service with download speeds of up to 25 Megabits per second, we really are saying that theoretically the technology being used to connect to the internet has the bandwidth – the capacity to move – 25 million “bits” of data (25 megabits) through the internet each second. Different technologies (both wired and wireless) have different theoretical capacities to move data – different “bandwidths.”
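To put those numbers in perspective, here is a small worked example (in Python) showing how long it would take to move a hypothetical 2 gigabyte file at several advertised speeds, assuming the connection actually delivers its full theoretical bandwidth with none of the “leakage” discussed below.

```python
# Worked example: transfer time for a 2 gigabyte file at different bandwidths.
FILE_SIZE_BITS = 2 * 8 * 10**9  # 2 gigabytes = 16 billion bits

for label, mbps in [("0.2 Mbps", 0.2), ("25 Mbps", 25),
                    ("100 Mbps", 100), ("1,000 Mbps (1 Gigabit)", 1000)]:
    seconds = FILE_SIZE_BITS / (mbps * 10**6)
    print(f"{label}: about {seconds / 60:.1f} minutes")
```

At 0.2 Mbps the transfer takes roughly 22 hours; at 25 Mbps, a little over 10 minutes; at 1 Gigabit per second, well under a minute.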

Which technology has the highest theoretical bandwidth (the pipe with the greatest diameter)? Currently, first prize goes to fiber-optic cable. Paired with the right equipment fiber-optic cable now carries data at rates measured in the trillions of bits of data (Terabits) each second. To put that in perspective, a “trillion” is equal to a million-millions!

Wireless internet infrastructure technologies such as satellite internet move data by transmitting an electromagnetic signal (similar to that used for TV or radio signals) through the air (or through outer space). Wireless internet can achieve a very high “bandwidth” as well, measured in the billions of bits (or Gigabits) per second (a thousand-million bits per second).

Of course, wireless signals do not use a physical wire or cable of any type. But the “pipe” analogy can still apply if you understand that signals having a different frequency have a different “theoretical bandwidth.” The capacity of the signal to carry data from one point to another each second (its bandwidth) varies depending on the frequency of the signal. In general, the higher the frequency of the signal, the more information can be transmitted – the higher the theoretical bandwidth – the wider the diameter of the pipe.

If the transmitted signal has a high enough frequency, if the sender and receiver are within range and have a large enough antenna, and if there is an unobstructed line-of-sight between sender and receiver, wireless internet technologies – theoretically – can transfer more than enough data to meet the requirements of current household internet applications. This is important, because it usually is much cheaper, at least initially, to install wireless internet networks that transmit data through the air than it is to bury or hang fiber-optic cable or other types of wired internet infrastructure.

Theoretical Capacity and Practical Capacity

Leaky Pipes

You may be wondering why I continue to refer to the “theoretical” capacity of a wired or wireless internet to transfer data. The pipe analogy can help here as well. It may not have occurred to you, but the diameter of the pipe may not be the only thing that determines how much water a pipe can carry. Why? Well, the pipe might have a leak or two, and if the holes are large enough, or there are too many of them, you could end up losing quite a bit of the water.

Much the same holds true for the internet infrastructure. It turns out that if you are moving data through the air (or outer space) wirelessly, as you attempt to “increase the diameter of the pipe” (by increasing signal frequency) your “pipe” tends to get “much leakier.” You tend to lose more and more data the higher the frequency used to transmit the signal carrying the data. Of course, there is no pipe to leak. But data is lost because when signal frequency is raised to levels needed to transmit at bandwidth measured in the billions of bits per second, the signal cannot penetrate solid objects, and even snow or heavy rain will disrupt and, in some cases, interrupt the signal.

Scientists and engineers have found many ways to compensate for this “leaky pipe” problem, such as increasing the size and efficiency of the antenna or by using special techniques to improve the efficiency of data transmission, but it is not possible for GEO Satellite Internet, LEO Satellite Internet (or any other earth-based wireless technologies) to entirely overcome the issue. Building walls, tree leaves, heavy snow and strong rains, all will either disrupt the signal entirely or degrade and reduce the actual amount of information (data) transferred and received each second.

This same “data leaking” issue exists for “wired” internet infrastructure. For example, copper Ethernet cables can potentially carry up to 10 Gigabits of data per second, but only over a distance of 200 feet or less. After that, much of the data is lost (bandwidth is reduced) and eventually the connection is disrupted entirely. Coaxial cable can move data further without significant “leakage.” Again, however, the technology that has the least amount of “leakage” of data over longer distances is fiber-optic cable. It can transmit data without significant signal loss for distances up to 50 miles without refreshing and retransmission.      

GEO Satellite Internet — Too Long of a Run of Pipe …   

Taking the pipe example one step further, there are other physical laws that particularly impact the usefulness of GEO Satellite Internet. If water is flowing through a pipe, it’s obviously going to take some time to get from one end to the other and, of course, the longer the run of pipe, the longer it will take. The same principle applies to data moving through the internet, but it’s not noticeable most of the time because, unlike water making its way through your garden hose, bits of data move through copper wire, fiber-optic cable, the air and outer space much faster – up to 186,000 miles each second, the speed of light.

Nevertheless, it does take some time. This delay is measured in intervals of one-thousandth of a second (milliseconds or “ms”), and the technical term used to describe the interval is “latency.” Latency normally is tested by sending data from one computer through the network to a remote server and back again. This is sometimes called “pinging a server,” and the resulting “ping” is the recorded interval for data to complete the round trip. If you’d like to try this, the Missouri Broadband Resource Rail has an internet speed test you can use to test your connection’s bandwidth (uploading and downloading data, measured in “Mbps”) and signal latency (measured in “ms”).

High latency time isn’t that big a deal if you are sending or receiving email, watching a movie over the internet – or otherwise doing something that doesn’t require real-time two-way communication, but if you are playing an interactive video game, conducting a video conference call, or managing internet connected devices remotely, lower latency time becomes much more important. Most recently, Congress has limited grant funding for internet infrastructure to only those technologies capable of latency below 100 milliseconds (1/10th of a second).

Significant latency has been a major drawback for GEO Satellite Internet, and it’s one that simply cannot be overcome through engineering. Moving data from a computer in Moberly, Missouri to St. Louis and back again using GEO Satellite Internet is going to take a minimum of about 480 milliseconds (approximately half a second) because of the distance involved (at least 22,236 miles, traveled 4 times). In practice, latency will be longer than this, because the signal will not travel a direct route between two computers up to outer space and back, but instead will be routed through earth-based network infrastructure, and it will take additional time to navigate these switches and relays on earth.
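The arithmetic behind that 480 millisecond figure is easy to check. The short sketch below (in Python) assumes the signal travels straight up and down four times at the speed of light, ignoring any earth-based routing.

```python
# Minimum GEO round-trip latency: the signal covers the 22,236-mile altitude
# four times (up and down for the request, up and down for the reply).
ALTITUDE_MILES = 22_236
SPEED_OF_LIGHT_MILES_PER_SECOND = 186_000

round_trip_miles = ALTITUDE_MILES * 4
latency_ms = round_trip_miles / SPEED_OF_LIGHT_MILES_PER_SECOND * 1000
print(f"Minimum GEO round-trip latency: about {latency_ms:.0f} ms")  # roughly 478 ms
```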

Latency is not nearly as much of a concern with LEO Satellite Internet, simply because the signal need only travel a few hundred miles, up to the satellite and back. Of course, that will result in some signal delay, but early reports from SpaceX’s Starlink show that LEO Satellite can achieve latency well below 100 milliseconds.   

How Well Does LEO Satellite Internet Work?

To date, SpaceX’s Starlink is the only operational LEO Satellite Internet service designed and available, even on a limited basis, to individual household users. This service is limited to certain locations with adequate satellite coverage, but more are being added, and it is already available in some areas of Missouri. The latest speed and performance tests for early adopters of Starlink’s better-than-nothing “beta” service have been positive. The latest test data compiled by Speedtest confirms that the connection is far better than GEO Satellite Internet, but it also seems to illustrate the challenges that must be overcome if LEO Satellite Internet is to play more than a limited role in closing the digital divide.

The Speedtest results from July-September 2021 showed that on average, connection speeds for LEO Satellite Internet were more than 4 times faster than those achieved by GEO Satellite Internet and over 3 times faster than the minimum standard for broadband service set by the FCC in 2015 (25 Mbps download). However, the service was still 35% slower than the average for other fixed wired connections tested and, potentially more troubling, the average connection speed declined by nearly 10 Mbps (from 97 Mbps to 87 Mbps) from the results reported by subscribers earlier in 2021.

In rural communities that currently lack any internet access other than GEO Satellite Internet or a first-generation DSL connection, the “better than nothing” service offered by Starlink is much faster than the other options, but based on the latest tests, it’s below the required levels set by Congress to qualify for grant funding. The Speedtest article speculates that the decline in service experienced over the course of 2021 may be the result of an increase in the number of Starlink subscribers – too many subscribers all trying to access the satellite network at the same time.

Why would more customers result in a slower connection? Resorting once again to the “water flowing through a pipe” example, just as a pipe can handle only a finite amount of water passing through it each second, internet infrastructure can transfer only a limited amount of data each second.  When too many internet-connected devices in too many homes, schools, and businesses are all trying to access the internet at the same time, there are two choices – either the network stops working (it crashes), or the individual users – on average — see a decline in bandwidth and/or increased latency for their connection.
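Here is a rough illustration of that trade-off (in Python). The 20,000 Mbps figure is purely a made-up assumption for the example, not Starlink’s actual capacity, but it shows how average per-subscriber speed falls as more active users share a fixed amount of network capacity.

```python
# Hypothetical shared-capacity example: fixed capacity divided among active users.
TOTAL_CAPACITY_MBPS = 20_000  # assumed capacity over one service area (illustrative)

for active_users in [50, 200, 500, 1000]:
    per_user_mbps = TOTAL_CAPACITY_MBPS / active_users
    print(f"{active_users} active users -> about {per_user_mbps:.0f} Mbps each")
```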

To prevent inadequate capacity from becoming a problem as the number of subscribers expands or their average use of the internet increases, an LEO Satellite Internet provider will need to reduce or limit the number of subscribers it serves, launch more satellites, upgrade and replace its satellites with models that can transfer more data (that have a higher bandwidth) – or perhaps a combination of all three approaches. How many satellites might be needed to complete an LEO Satellite network that can serve consumers in all parts of the United States? As of late 2021, Starlink had fewer than 2,000 satellites operating in orbit, and as of November 2021 it reported that it had 140,000 customers in 20 countries around the world. The FCC has granted SpaceX a license to operate up to 4808 satellites. However, SpaceX recently said that to reach its desired network capacity – to serve customers across the United States and around the world with a network designed to provide capacity of 1 to 10 Gigabits per second – it needed a license from the FCC to operate nearly 30,000 satellites (more than 15 times the number it currently has on station). The fate of that request is uncertain, as questions and objections have been raised both about the location of the satellite orbits and about the risk of interference with other wireless communications.

Additionally, LEO satellites cannot operate indefinitely. Each satellite is estimated to have a useful life of approximately 7-10 years. This would seem to suggest that even after getting the full fleet of satellites in orbit, SpaceX would need to continue to launch more than 3,000 satellites a year just to maintain that network. Those costs presumably would need to be covered through monthly subscription fees or by permanent government operating subsidies (or both) in order for the company to earn a reasonable profit.

If 30,000 satellites were launched and operating, could LEO Satellite Internet serve the 19 million or more Americans the FCC estimates are without adequate high-speed internet? In late 2020, SpaceX received a preliminary grant award from the FCC of over $900 million in exchange for the company's commitment to make internet service available to at least 643,000 locations in census tracts spread across 35 states. The award required reliable delivery of at least 100 Mbps of bandwidth to each location. Competitors and others were skeptical of SpaceX's ability to meet these requirements within the 6 years allowed by the terms of the grant, and provided the FCC with a study estimating that even with 12,000 satellites in orbit (the number then planned) the network could not serve that many locations at the required bandwidth level. However, the concern voiced about this specific grant may be academic, as the FCC has questioned whether many of the service locations SpaceX initially was awarded actually qualify for grant funding at all.
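The capacity question is hard to settle without engineering data that has not been made public, but a rough sketch shows why it is contested. In the calculation below, only the 643,000 locations, the 100 Mbps requirement, and the 12,000-satellite figure come from the paragraph above; the share of the fleet usable over the covered areas, the usable capacity per satellite, and the oversubscription ratio are assumptions chosen purely for illustration.

```python
# A very rough capacity sketch for the grant-award question above. Only the
# 643,000 locations, the 100 Mbps requirement, and the 12,000-satellite
# figure come from the text; everything else is an assumption chosen
# purely for illustration.

LOCATIONS = 643_000
REQUIRED_MBPS = 100
SATELLITES_IN_ORBIT = 12_000

SHARE_OVER_SERVICE_AREA = 0.03     # assumed fraction of the fleet usable over the covered areas
CAPACITY_PER_SATELLITE_GBPS = 20   # assumed usable downlink capacity per satellite
OVERSUBSCRIPTION_RATIO = 10        # assumed ratio of subscribed to simultaneously used bandwidth

supply_gbps = SATELLITES_IN_ORBIT * SHARE_OVER_SERVICE_AREA * CAPACITY_PER_SATELLITE_GBPS
peak_demand_gbps = LOCATIONS * REQUIRED_MBPS / 1_000 / OVERSUBSCRIPTION_RATIO

print(f"Assumed usable supply:   ~{supply_gbps:,.0f} Gbps")
print(f"Estimated peak demand:   ~{peak_demand_gbps:,.0f} Gbps")
```

Under these particular assumptions, supply and peak demand land in roughly the same ballpark, which helps explain why studies built on different engineering assumptions can reach opposite conclusions.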

A research report commissioned by Congress last year catalogued these and other concerns, and concluded: “it is unclear—due to unknown factors such as the ability to reach fiber-like speeds, what the competition landscape may look like, or if LEO satellite broadband service will be affordable—whether the inclusion of LEO satellite broadband providers would help address the digital divide through their participation in federal broadband [grant funding] programs.”

No “One Size Fits All” Solution

So, is LEO Satellite the answer?

No, it isn’t. But in some sense the same is true of fiber-optic cable, coaxial cable, twisted-pair copper, fixed wireless, and every other technology available to deliver internet service today. There really is no “one size fits all” solution to the high-speed internet access problem, here in Missouri or in any other state. Different technologies, or different “mixes” of technologies, likely will be needed to bridge the digital divide over the next several years. Each technology has strengths and weaknesses that make it most appropriate for some installations and applications but not others. In addition to theoretical and practical capacity (bandwidth) and latency, other important characteristics include engineering difficulty, the cost of installation, ongoing maintenance and operating costs, and the time needed to plan, design and install the network.

Future-proofing the Internet Infrastructure

Yet perhaps among the more important considerations is the ability to increase network capacity to meet future increases in the demand for high-speed internet service. The electrification of rural America nearly a hundred years ago provides a useful analogy here. Electrical service installed in homes by the Rural Electrification Administration would be woefully inadequate for modern homes and businesses; in most cases the service initially installed could not have powered one wall outlet in each room of the home. It was, however, adequate for the times, when the modern electrical appliances we now use were not widely available and most had not even been invented. Crucially, the electric power infrastructure that connected those homes and businesses anticipated future needs and could be upgraded over time to meet the growing demand for electric power.

We’ve seen the same pattern of ever-increasing demand today with the development of new internet applications and the proliferation of new internet-connected devices for homes and businesses. Together, these are feeding consumer demand for internet networks capable of delivering higher bandwidth and lower latency. This is reflected in the criteria used by the Federal Communications Commission and other federal funding programs. An internet connection capable of transmitting one-fifth of 1 Megabit of data per second (0.2 Mbps) was considered a “broadband connection” before 2010, when the standard was increased to 4 Mbps (a 20-fold increase), and then again to 25 Mbps in 2015. Today, even that standard is widely viewed as far too low. Last year Congress limited federal grants to networks offering service of at least 100 Mbps, 500 times what was deemed sufficient only 12 years ago!
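The escalation is easy to see when the successive standards are lined up side by side, using the download speeds cited above:

```python
# The escalation of the federal "broadband" definition, using the
# download speeds cited above (in Mbps).

standards_mbps = {
    "pre-2010": 0.2,
    "2010": 4,
    "2015": 25,
    "2021 grant criteria": 100,
}

baseline = standards_mbps["pre-2010"]
for era, mbps in standards_mbps.items():
    print(f"{era:>20}: {mbps:>5} Mbps ({mbps / baseline:,.0f}x the pre-2010 standard)")
```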

The point here is that if the public is going to help fund broadband infrastructure, that infrastructure not only should meet the needs of households and businesses over the 3-4 years that it will realistically take to plan, engineer, fund and deploy it, it also needs to be able to expand to meet demand over the next few decades. Certain technologies (fiber-optic cable being the most obvious) already have a proven capacity to expand far beyond the needs of any applications now contemplated. Others, such as the standard copper telephone lines used to deliver digital subscriber line (DSL) connections, or GEO Satellite Internet, appear to have hit the limit of our ability to engineer around the constraints imposed by their physical properties. While these technologies might still work for some applications, they are clearly unsuited for a long-term, publicly funded investment that needs to hold its value over several decades. Still others, like LEO Satellite Internet, present closer questions: the technology holds some promise, but the engineering behind wide-scale deployment remains problematic.

However, even taking these concerns and issues into account, it would be a mistake to count LEO Satellite Internet out as a useful technology. That should be evident from the substantial, continued private investment being made by SpaceX, Amazon, OneWeb, and Telesat; those companies could not raise that capital from private investors if there were no realistic market for LEO Satellite Internet service. The hardware and software in use today will continue to improve, likely yielding efficiencies and better performance for earth-based antennas, the LEO satellites themselves, and the rockets used to launch them. LEO Satellite Internet networks may play an important role in expanding earth-based 5G mobile phone and internet service, establishing temporary high-speed internet service in extremely remote locations, providing high-speed connections for container ships and cruise ships at sea, and keeping international airline passengers “connected.” Since the market will be worldwide, LEO Satellite Internet likely will be an appropriate technology for some individual consumers in remote parts of the world as well.

That said, wired technologies such as fiber-optic cable also continue to improve, and even though their initial installation costs may be high, the cost and complexity of expanding service to meet future demand likely will be far lower, as likely will be long-term operating costs. The point here is that even if LEO Satellite Internet is an appropriate solution for some consumers and applications, that is no reason to minimize the technological constraints that make it unsuitable for wide-scale deployment in communities that need access now, particularly when other existing technologies can deliver superior levels of service both today and in the future.