When will A.I. be truly intelligent?
I was as blown away by my first conversation with ChatGPT as anyone could be. It’s hard to believe it’s been a year since I first logged in and thought about what to ask it first. Just for fun, I threw a Latin phrase at it during one of those first conversations to see what it would do with it.
Omnis corruptio quaedam incipit innocenter. (All corruption begins innocently.)
(I’ve had a minor fascination with Latin ever since taking half a year of it way back in high school. I’ve never been able to make myself reclaim whatever skills I developed then, so if there are problems with the Latin, blame Google Translate.)
I really wish I had saved that first conversation because what followed was astounding. ChatGPT correctly translated the sentence and analyzed its meaning. We then had a detailed discussion on various types of corruption, from government to academia. I brought up the Broken Windows Theory and asked ChatGPT to compare it to the original statement. It was able to draw the analogy between slowly growing corruption within systems and the idea that minor, visible signs of decay and neglect in an environment can gradually lead to greater decay and crime. It even pointed out objections to the Broken Windows Theory. I was definitely impressed.
While the focus has generally been on content generation with A.I., I’ve been more interested in these systems’ ability to analyze complex ideas for discussion, and I’ve been amazed at how well they interpret statements that would leave many humans asking for clarification. I’ve used ChatGPT to analyze my own writing a couple of times and have also enjoyed its ability to provide answers based on limited details and even take on philosophical issues.
With this and all the other wonders that ChatGPT and other systems are performing, and the value they’re providing, it’s really hard not to proclaim them intelligent …
… and, yet …
When I put the question to them directly, ChatGPT denied being truly intelligent, and Google’s Bard hedged a little more while reaching the same conclusion.
I have to admit that I’m so impressed by these systems that I found these responses a little sad, almost like listening to someone denigrate themselves or show a crippling lack of self-confidence.
Asking whether a piece of software can be truly intelligent is probably like asking whether your dog truly loves you; it’s comparing apples to oranges because we define both love and intelligence from a human perspective that neither dogs nor computers can operate from. In both cases, we need more objective definitions.
During my short stint as a college instructor, I was introduced to Bloom’s Taxonomy, a classification system for learning objectives and a tool for evaluating a student’s mastery of a subject. Its six levels, from lowest to highest, are remembering, understanding, applying, analyzing, evaluating and creating. While it’s not designed to evaluate general intelligence, I feel it’s a decent starting point for evaluating the abilities of an artificial system.
In my view, these systems blow through the first four levels without much trouble. The only question comes with understanding, which, depending on the depth required, might call for an emotional or experience-based grasp that A.I. simply isn’t capable of. These systems can certainly perform a number of the tasks that fall under applying and analyzing.
Can they evaluate? I’ve seen ChatGPT have trouble with that, such as when it again supplied the same URLs that I had just told it were invalid 404 links. It also gets a little timid when defending its information, sometimes justifiably, as it can be wrong and its critical skills are definitely limited. It does seem to know when it’s on safe ground, however, and will politely defend known facts, such as when I intentionally contradicted it about the dates of historical events. This isn’t quite the same as defending a conclusion or thesis, though.
Can it create?
Many would say, well, obviously, yes – we have A.I. “artwork” and other creations popping up all over the place, and they’ve been the source of plenty of debate over ownership and copyright. But who is the creator? Bard confesses that it “lacks many of the key attributes of human intelligence such as independent thought.” These systems have no volition, no initiative of their own. They can do incredible things so long as they’re responding to prompts, but they cannot independently conceive of a new idea for an essay or image. They cannot ruminate on the information on which they’ve been trained, formulate questions, prioritize which are worthy of investigation and then initiate research. It still takes a human mind to start those processes.
Do we want them to someday have that ability?
Imagine a different experience with an A.I. chatbot – one in which the A.I. was actually able to converse rather than simply receive and respond to prompts for information. Imagine a system that could learn based on its conversations with humans as well as from training data. The A.I. would remember its conversation with you, evaluate the input you’d provided and its reliability just as you do when talking to anyone else and then incorporate that information (anonymously, of course) in its interactions with others.
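Just to make that daydream a little more concrete, here is a minimal, purely hypothetical sketch in Python of what such a memory might look like. Everything in it (the ConversationalMemory store, the naive reliability score based on how many independent speakers repeat a claim, the stubbed-out generate_reply call) is invented for illustration and says nothing about how ChatGPT, Bard or any real system actually works.

```python
# Purely illustrative: a toy "memory" for the kind of conversational A.I.
# imagined above, not a description of any existing system.
from dataclasses import dataclass, field


@dataclass
class Claim:
    text: str
    sources: set = field(default_factory=set)  # anonymized speaker IDs

    @property
    def reliability(self) -> float:
        # Toy heuristic: the more independent speakers repeat a claim,
        # the more it is trusted (capped at 1.0).
        return min(1.0, len(self.sources) / 5)


@dataclass
class ConversationalMemory:
    claims: dict = field(default_factory=dict)

    def remember(self, speaker_id: str, statement: str) -> None:
        # Store what a speaker said, anonymously, merging repeats of the same claim.
        claim = self.claims.setdefault(statement, Claim(statement))
        claim.sources.add(speaker_id)

    def trusted_context(self, threshold: float = 0.4) -> list:
        # Only claims that enough independent speakers have made get fed
        # back into future conversations.
        return [c.text for c in self.claims.values() if c.reliability >= threshold]


def generate_reply(prompt: str, context: list) -> str:
    # Placeholder for a real language-model call.
    return f"(model reply to {prompt!r}, informed by {len(context)} remembered claims)"


memory = ConversationalMemory()
memory.remember("user-17", "Broken windows invite more vandalism.")
memory.remember("user-42", "Broken windows invite more vandalism.")
print(generate_reply("Does neglect encourage decay?", memory.trusted_context()))
```

Even in this toy form, the hard part is obvious: deciding which remembered claims to trust, and when to forget them, is exactly where something like judgment would have to live.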
Maybe the system could continuously scan for trends in conversations, just as Google tracks popular searches, and, at its discretion, query newcomers for their thoughts, adding more input to its knowledge base. Would the system be vulnerable to a bandwagon fallacy, trusting an idea simply because many people repeat it? What interesting tangents would it follow? What if the system were able, at its discretion, to include news and other online sources in its research and could use its present abilities to evaluate them for reliability?
Imagine an image generator that could scan current headlines, prioritize the subjects most likely to be of interest and generate its own memes and cartoons. Some of the content might be rather disturbing as it comes from a system unable to truly understand human sensitivities, much less the unpredictable levels of offense common on today’s networks. On the other hand, it might only be a matter of time before some of the images go viral and the A.I. develops an audience.
We already have customized GPTs that can be tailored with user-supplied data and instructions for specific roles. Combine that individuality with the initiative and spontaneity I’ve described above. Once an A.I. is able to initiate its own creative processes based on its own complex and individual history, at what point do we perceive it to have a personality?
Once a personality is perceived and even tentatively confirmed, what ethics come into play? Our current A.I. systems are managed to keep them from dispensing illegal information or offensive opinions. Should a truly intelligent system still be subject to this kind of manipulation? When do we start questioning the ethics of shutting it down?
Even by an objective definition, it’s safe to say that none of the current systems are truly intelligent. They are cleverly constructed language models and image generators that give the appearance of intelligence. Calling them A.I. is hyperbole, but it’s a familiar and comfortable term that’s handy for promotion.
It’s also a dream that has been around for decades. The idea of shaping these computer systems in our own intellectual and even physical image started as the stuff of sci-fi and then naturally transitioned to real science. Throughout these years and all its manifestations, the term A.I. has meant many things. Now, we want to claim it while there’s still much work to be done and many questions to be answered.
Are we ready for the possibilities and the responsibilities that go with them?