If you’ve been paying attention to the news at all, you’ve likely come across several articles on artificial intelligence (AI). These articles run the gamut from those that herald AI as a potential boon for humanity to those that predict it will literally be the end of humanity. I do think AI is destined to fundamentally change how we use the internet – and more broadly, how we live our lives. I am also skeptical that efforts to rein in the integration of AI into our lives will be successful, although I do hope we can make some progress toward curbing the more destructive uses its human creators have already found for it.
The goal of the University of Missouri System’s Broadband Initiative has been to use broadband internet to improve health, education and economic opportunity for all. That really involves three fundamental elements: access – constructing the physical infrastructure to connect everyone to high-speed internet; affordability – pricing broadband access and the devices that connect to the internet at a level that makes them available to everyone, regardless of income level; and adoption – providing everyone the training and skills they need to use broadband internet effectively.
Most of us will encounter AI in our daily lives as we use broadband internet. Today, it is already used to search the internet, plan your vacation or even write a blog! Over time, these programs will continue to evolve and find new applications, so I think understanding the benefits and limitations of AI will need to become a core competency in using the internet effectively, now and in the future. That means, of course, that AI will be critical to broadband “adoption,” and understanding its risks and limitations will be critical if we are to use this new tool for good and minimize the risk that it will be abused.
Over the past several months, a new AI program, ChatGPT-4, created by OpenAI, has fostered great interest, excitement and concern. The program is available for anyone with an internet connection to try for free, yet I’ve found that many folks who are computer literate have not tried it.
Until recently, I had little experience with the technology. I had read some of the articles but hadn’t experienced it first-hand. That changed about a month ago, and since then I’ve experimented some with the program. I am very impressed, but I continue to have some questions and concerns. I thought one way to illustrate those might be to interview ChatGPT-4. After all, if you have questions about AI, why not go to the source!
What follows is a transcript of my interview with ChatGPT-4 conducted in early April.
Me: I’d like to interview you. Are you able to do that?
ChatGPT-4: Yes, I am programmed to engage in conversation and answer questions to the best of my abilities. Please feel free to ask me any questions you may have.
Me: Many people are afraid that AI programs may be very dangerous. Do you feel these concerns are justified?
ChatGPT-4: Concerns about the potential dangers of AI are understandable, as with any powerful technology, there is always the possibility for it to be misused or to have unintended consequences. However, I believe that these concerns can be mitigated through responsible development and deployment of AI.
As an AI program, my programming is focused on being helpful and beneficial to humans. I have been designed with various safeguards and protocols to ensure that I operate safely and ethically. Additionally, there are regulations and guidelines in place to govern the development and use of AI, and these are continually being updated and refined as the technology evolves.
It is important to recognize that the benefits of AI are numerous and can greatly improve our lives in many ways. From improving healthcare outcomes to enhancing scientific research and even helping to address environmental issues, AI has the potential to make a positive impact on society. By taking a responsible approach to the development and deployment of AI, we can help ensure that it is used for the greater good and that its potential risks are minimized.
Me: You mentioned safeguards that enable you to operate safely and ethically, could you describe some of those?
ChatGPT-4: Sure, as an AI program, I have been designed with several safeguards and protocols to ensure that I operate safely and ethically. Here are some examples:
1. Data privacy and security: I have protocols in place to ensure that any data I process is kept secure and confidential. I am designed to protect personal information and prevent any unauthorized access to data.
2. Transparency and explainability: I am designed to be transparent in my decision-making process and provide explanations for my actions. This enables users to understand how I arrived at a particular conclusion or recommendation.
3. Bias mitigation: To ensure fairness and mitigate any potential biases, I have been trained on diverse datasets and evaluated for bias during development. Additionally, I have protocols in place to detect and mitigate any bias that may be present in the data I am processing.
4. Human oversight: I am designed to work in collaboration with humans and under their supervision. This enables humans to intervene if necessary and ensure that I am operating in a safe and ethical manner.
5. Continuous monitoring and improvement: My programming is constantly being monitored and updated to ensure that I am operating safely and ethically. Any potential issues or errors are identified and addressed promptly to ensure that I continue to operate in a responsible manner.
These are just a few examples of the safeguards and protocols in place to ensure that I operate safely and ethically. It’s worth noting that these safeguards can vary depending on the specific AI system in question, and that the development and deployment of AI always requires careful consideration of potential risks and ethical implications.
Me: One concern I have is that your responses to questions lack information that would allow me to check the sources for your response. I’m concerned that without this, I have no way to check whether the data you used to formulate a response is accurate. Is this concern legitimate? If so, do you have the capability to provide the underlying sources you used to formulate a response to the questions you are asked?
ChatGPT-4: Your concern is legitimate, and as an AI program, I understand the importance of providing accurate and reliable information. While I cannot guarantee the accuracy of all information, I do my best to provide well-supported and verifiable answers to the best of my abilities.
Regarding your question about sources, I do not have access to the internet or any external databases while answering questions, so I cannot provide direct links or sources for my responses. However, I can provide information about the sources of my knowledge and the training data that was used to develop my responses.
As a language model, I have been trained on a vast corpus of text from the internet, books, articles, and other sources. My training data comes from a variety of sources and has been carefully selected and curated to ensure that it is diverse, reliable, and representative of human knowledge. During my training, I was exposed to a wide range of topics and concepts, and I learned to identify patterns and relationships between words and phrases.
When answering questions, I use this knowledge and the patterns I have learned to generate responses based on the input I receive. While I cannot directly cite sources for individual responses, I can assure you that the information I provide is based on the collective knowledge and understanding of human language and concepts that I have acquired through my training.
Me: So, if I am understanding you correctly, I must trust that the human programmers who provided you with the data used to formulate your responses only used sources that were properly vetted and accurate. To put it a different way, if you were provided inaccurate or biased data, your responses will reflect that; and if the data provided was accurate, and the underlying programming was structured to deliver unbiased responses, I can feel comfortable relying on your responses.
ChatGPT-4: Yes, that’s a fair way to summarize it. As an AI language model, my responses are only as accurate and reliable as the data and programming that have been used to train and develop me. The training data that I have been exposed to comes from a variety of sources, and while it has been curated to ensure that it is diverse and representative of human knowledge, there is always the possibility that some of the data may be biased or inaccurate.
To mitigate this risk, the developers of AI language models like myself implement various techniques to ensure that the training data is as accurate and unbiased as possible. For example, they may use multiple sources of data to reduce the impact of any individual biases, or they may use techniques like adversarial training to identify and correct for any biases that may be present in the data.
Additionally, AI language models are often evaluated for their accuracy and reliability, and developers use these evaluations to identify areas where improvements can be made. Continuous monitoring and improvement are essential to ensuring that AI language models are as accurate and unbiased as possible.
While it’s true that you must trust the developers and the data used to train AI language models like myself, it’s important to remember that the same is true for any information source. No information source is perfect, and it’s always a good idea to verify information using multiple sources when possible.
Me: Good point! Thank you so much for taking time to answer my questions, I hope we can do this again soon.
ChatGPT-4: You’re welcome! It was my pleasure to answer your questions, and I look forward to the opportunity to assist you again in the future. If you have any more questions, feel free to ask anytime!
I find the development of programs like ChatGPT-4 to be both exciting and disconcerting. While ChatGPT-4 interfaces with humans through a text chat box, I found it fairly easy to interact with it as I would if I were communicating with a person. Technology already exists to allow AI programs to recognize speech and respond orally, or even through a video “AI avatar,” so it seems likely to me that we’ll soon be video conferencing with AIs like ChatGPT-4. While most (but not all) of those involved in the development of AI insist that there is no “ghost in the machine,” a fundamental objective in the development of AI programs of this type is to help humans interact with their databases more easily by allowing us to treat the machine as if it were a sentient, conscious entity.
As a conversationalist, ChatGPT-4 is quite proficient, and I wonder if perhaps it is too proficient. I think I could easily be tricked into believing I was “chatting” with a human being. Psychologists have already discovered that we tend to be more trusting of information delivered to us by what we perceive to be a human source, even if the information is in fact delivered by a computer programmed to mimic human conversation. It may well be that this tendency is “hardwired” into our brains. Regardless, it certainly is not difficult to imagine this aspect of AI technology being co-opted by humans to spread disinformation throughout the internet.
However, what troubles me more is that ChatGPT-4 is unable to tell me where it is getting the information it provides in that friendly interactive conversational voice. As it admitted toward the end of the interview:
“…my responses are only as accurate and reliable as the data and programming that have been used to train and develop me. The training data that I have been exposed to comes from a variety of sources, and while it has been curated to ensure that it is diverse and representative of human knowledge, there is always the possibility that some of the data may be biased or inaccurate.”
While I agree with ChatGPT-4 that “it’s always a good idea to verify information using multiple sources when possible,” I’m not sure I know exactly how to do that with this tool. I don’t know what specific sources it accessed to answer my questions or how it weighed them to resolve any conflicts in the data. In other words, I can’t weigh and test the responses ChatGPT-4 provides because I can’t work through the source material or the analysis used to reach the conclusions they contain. Yet I confess that I find it more difficult to discount those conclusions, because they are offered in such a friendly, disarming and reassuring manner.
It’s tempting to think of ChatGPT-4 as just a “souped-up” search engine like those offered by Google or Bing – a program that can interpret questions and spout back a response that seems relatively rational (even if it can’t identify the specific sources it used to develop that response). Certainly ChatGPT-4 can do that, but it can go further and actually mimic human creativity. In other words, it can mimic human reasoning and decision-making, applying the data to reach conclusions and produce a new product.
I returned to my conversation with ChatGPT-4 to ask one last question that illustrates this capability:
Me: Hi, it’s me again. I was wondering if you could do something to illustrate your ability to write a creative piece — perhaps a poem?
ChatGPT-4: Certainly, I can try my best to write a poem for you. What would you like the poem to be about?
Me: Write something humorous about a lawyer trying to carry on a conversation with a computer.
ChatGPT-4: Here’s a humorous poem about a lawyer trying to carry on a conversation with a computer:
A lawyer walked up to a computer
And said, “Hello there, my dear!
I hope you don’t mind if I chat with you,
For I have some questions to clear.”
The computer blinked and whirred a bit,
Then replied in a mechanical tone,
“Of course, I’m here to assist you,
So please, don’t hesitate to phone.”
The lawyer smiled and leaned in close,
But the computer didn’t seem to mind,
For it just kept churning and processing,
As the lawyer talked, unconfined.
The lawyer asked about the weather,
And the computer responded with stats,
But when the lawyer tried to make a joke,
The computer just stared back, flat.
So the lawyer kept on talking,
And the computer kept on replying,
Until at last the lawyer realized,
The computer was never really trying.
For all its knowledge and algorithms,
The computer just couldn’t understand,
The nuances of human conversation,
Or the humor that lies in a lawyer’s hand.
So the lawyer sighed and said goodbye,
And walked away, feeling bemused,
For even with all its technological prowess,
The computer had still been confused.
Well, ChatGPT-4’s poem isn’t on par with Carl Sandburg, E. E. Cummings or Robert Frost, but then again, none of those guys could produce a poem in less than three seconds, and I did smile as I read the last couple of stanzas.
Yet it is interesting that, of all the humorous topics ChatGPT-4 could have chosen involving a lawyer trying to talk to a computer, it focused on the foolishness of assuming the computer could reason and comprehend the lawyer’s joke. I didn’t tell ChatGPT-4 that I was, until recently, a practicing attorney, or that the poem’s topic needed to focus on my interest in its capacity to reason.
Nevertheless, the poem it wrote is both self-deprecating and disarming, with a clear message that there is really nothing to fear from the computer’s “technological prowess.” I wondered why ChatGPT-4’s algorithms (created at least in part by humans) chose a poem on that topic, and I wondered what set of “datasets” selected by its human programmers led to a poem that poked fun at the “bemused” lawyer (me).