Like everyone else in the tech world, I’ve been watching the sensation over OpenAI’s new chatbot, ChatGPT. I’ve talked to a few chatbots before, and while a couple were impressive, their limitations always became obvious pretty quickly. When I finally got online with ChatGPT, I was floored by the difference between it and previous efforts.
There’s been a lot of talk about the service’s ability to create new works, including computer code and poetry. I did ask it a programming question at one point and was impressed with ChatGPT’s ability to write an entire C# module in a few seconds. Videos are already popping up on YouTube from people claiming to have created entire websites or programs using the chatbot. Programming influencers are recommending it as a productivity booster. I can see how it might be a useful tool for some of the basic programming tasks that developers face, but it’s good enough that I’m also wary of it being used as a crutch.
I was more interested in having general conversations with it and seeing how the chatbot could analyze ideas. At one point, I threw a Latin phrase at it concerning corruption (“Omnis corruptio quaedam incipit innocenter.” or “All corruption begins innocently.”) and ChatGPT correctly translated the phrase and analyzed its meaning. I questioned it about the origin and influences on the statement, and it couldn’t identify a specific source (probably because I had come up with it myself and used Google Translate to put it into Latin), but it did say that similar sentiments had been expressed throughout history and literature.
ChatGPT was then able to have a detailed conversation with me about the genesis and effects of corruption within many different types of social systems, including business and government. What really impressed me was that I didn’t have to specify examples; the system was able to take an abstract idea and run with it. That’s something humans often have trouble with, but then, when you’re trained on millions of documents, you should have a lot to draw from.
I then asked it about the Broken Windows Theory, which states that small signs of decay within an urban environment can lead to greater problems like vandalism and other crimes. I also asked it to define the principle of entropy and, by the end of the conversation, ChatGPT was able to draw an analogy between the idea of disorder within a system, our original conversation about corruption, and the process of decay described by the Broken Windows Theory. I really wish I’d had my screen recorder going. I tried re-creating the conversation later, but ChatGPT doesn’t respond the same way twice and it wasn’t quite the same.
I have to admit it was actually nice to converse with a computer that’s approaching actual intelligence. I could explore whatever bizarre ideas I wanted without judgement, and I could pull out all the stops on my vocabulary without reading the room first. It was certainly a refreshing change from having Yoast tell me that my writing is too hard to read.
Beyond the hype
There’s been a lot of positive and negative sensationalism since ChatGPT became popular. A lot of the hopes and fears are probably fueled by our preconceptions of A.I. from popular movies over the years. I personally see it as an impressive advancement that we’ll struggle to integrate into our lives; there will be successes and failures along the way, including use of the technology as a crutch, as I mentioned earlier. I understand other companies are already looking for ways to integrate this type of service into their own software.
There’s been some concern about students using ChatGPT to cheat on assignments, and I’m sure it’s already happening. I actually asked the system about this and whether it would amount to plagiarism. The A.I. responded that, yes, it would, and when I asked how this could be prevented, it suggested several ways for teachers to address the problem, including using more varied assignment formats.
If I were still teaching, I wouldn’t allow it as an official source, but I would find a way to grade students on their ability to use the chatbot to explore different concepts and refine ideas. It could also provide good training in fact-checking statements. ChatGPT isn’t always right, and when it’s wrong, it’s very confidently and convincingly wrong; after all, it was trained on Internet content.
During one conversation with ChatGPT, which I did record and link to above, the chatbot admitted that technologies sometimes need to be restricted for the benefit of society, as with advanced encryption and drone usage. ChatGPT itself already has filters and content moderation policies, but I don’t see A.I. being restricted as a technology anytime soon. Language models and A.I. are here to stay, and I hope this one stays around for a while. I look forward to more conversations as it improves.