ChatGPT

Technology has provided us with a lot of fascinating new toys in recent decades — search engines, smartphones, gyroscope-stabilized quadcopter drones with GPS capability, not to mention the Internet and near-gigabit speeds to the home.

Every so often, one of these technologies seems like an almost magical breakthrough. Search engines, to an extent, fit this description: they can search through the unbelievably large amount of data online and return mostly relevant results for most common queries.

ChatGPT and “large language models” are the latest technology to feel like magic — and, in fact, have convinced me that maybe we really are approaching the technological Singularity. ChatGPT is capable of responding to natural-language queries in English (or French, or Spanish) and replying with generally relevant, useful content — rendered in beautiful, correct English grammar.

This, by itself, would be impressive. But ChatGPT’s language skills extend to computer languages like C and BASIC, as well. If asked to write an implementation of Bubble Sort in C, it does so, and also provides a paragraph explaining how its code works. Okay, I thought, that’s a neat trick, but it would be easy enough to listen for the correct terms and then come up with a stock response.

So at a friend’s prompting, I posed it a more difficult task: Write a program in BASIC (which most AI researchers are probably not using) to compute and display images of the Mandelbrot Set. This is my go-to task when learning any new computer language with graphics capability; I’ve been writing it in various languages since the late ’80s, and know the algorithm well.

Its first attempt at the program almost worked — and might have worked if fed to an old-school IBM PC running 1980s-era BASICA. ChatGPT knew that it had to set up two integer FOR loops to iterate over the field, scale these appropriately to produce an image of the relevant part of the complex plane, then implement a simple complex-number multiply-and-add scheme — in a language that doesn’t natively have complex numbers.

Again, this could have been a stock response, except for the fact that it can accept criticism and update the program accordingly. Looking it over, I noticed it using single-precision floating point, as well as inadvertently using two BASIC keywords: “WIDTH” and “CONTINUE.” It also used line numbers, which aren’t supported in modern dialects like FreeBASIC.

I pointed these issues out and provided exactly two lines of code (one to declare the variables, and another to keep the screen from closing at the end), and ChatGPT rewrote the program (far, FAR faster than any human could) with the errors corrected. It worked, producing a black-and-white image of the Mandelbrot Set.

I then asked it to rewrite the program to color the iteration levels. I didn’t specify what colors to use, how to do this, or anything. It just understood the request and did it.

While ChatGPT does not even attempt to pass the Turing test (it claims that it is a simple language model and not self-aware), it’s still the closest candidate I’ve ever seen. There is intelligence here: not quite human or aware, but still capable of complex, “intelligent” processing of ideas.

But don’t take my word for it. Here’s what I got — unedited other than making the title bold — when I asked ChatGPT to peruse the articles on this site and write a blog post about ChatGPT and large language models, in the same style:

ChatGPT and the Rise of Large Language Models

In recent years, there has been a lot of buzz surrounding chatbots and natural language processing (NLP). One particularly interesting development in this field is ChatGPT, a chatbot powered by a large language model called GPT-3 (Generative Pre-trained Transformer 3).

But what exactly is a large language model, and how does it work? Simply put, a language model is a type of machine learning model that is trained to predict the likelihood of a sequence of words. The more data the model is trained on, the more accurate it becomes at predicting the next word in a sequence.

GPT-3 is an example of a large language model, with 175 billion parameters (the values that the model is trying to optimize). This makes it one of the largest language models ever created, and allows it to perform a wide range of natural language tasks with impressive accuracy.

But what makes ChatGPT unique is its ability to engage in conversation with users. By feeding the chatbot a prompt and allowing it to generate responses based on its understanding of language, ChatGPT can hold conversations with users on a variety of topics.

One of the most impressive aspects of ChatGPT is its ability to understand context and maintain coherence in a conversation. For example, if a user asks a question and the chatbot responds with an answer, it will be able to understand that the conversation is still ongoing and respond appropriately to follow-up questions.

Of course, ChatGPT and other large language models are not without their limitations. While they can perform a wide range of tasks with impressive accuracy, they are not capable of understanding and replicating the subtleties of human communication, such as irony or sarcasm.

Despite these limitations, the capabilities of large language models like GPT-3 and ChatGPT are nothing short of impressive. As natural language processing technology continues to advance, it will be interesting to see how these chatbots and other language models will be used in the future.

— ChatGPT, Dec. 15 2022 version (accessed 2022-12-16)

So while it’s not human, it’s a great writer and absolutely more competent at coding than a typical student. The next ten to twenty years are going to be simply fascinating. Buckle up!

