ChatGPT: the future of writing code?

There has been a lot of discussion recently about OpenAI's new large language model, ChatGPT. It is essentially a very advanced chatbot, capable of producing sophisticated, human-like responses to user prompts. It is trained on a vast quantity of text gathered from the internet (the training data for the underlying model, an autocomplete-style text generator named GPT-3.5, reportedly runs to some 800GB), and it can seemingly turn its hand to everything from writing poetry and tutoring to storing and manipulating data and solving crossword puzzles.

But much of the excitement and intrigue has centred around the ability of ChatGPT to write and debug computer code. The obvious question is: how does it compare to a human programmer? Generally, it seems to do pretty well at constructing boilerplate code such as generic utility functions (see the sketch below). In this sense, it certainly has something to offer when it comes to speeding things up and making life easier. But, at least in its current incarnation, it struggles to structure complex programs or to follow the higher-order logic that a programmer must keep in mind.
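As a hypothetical illustration of the kind of boilerplate it handles well, a prompt along the lines of "write a Python function that removes duplicates from a list while preserving order" tends to yield something like the following. This is a representative sketch of a typical answer, not an actual ChatGPT transcript:

```python
def deduplicate(items):
    """Return a new list with duplicates removed, preserving first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result


print(deduplicate([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

Code like this is correct, idiomatic, and tedious to type by hand, which is exactly where an autocomplete-style assistant earns its keep.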

[Image: ChatGPT — /images/chatGPT.jpeg]

Another major issue at present is that, while its answers are generally fairly accurate, it has a persistent tendency to produce responses that sound plausible but are in fact incorrect. For this reason, Stack Overflow recently banned answers generated by ChatGPT.
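To see why this matters for code, consider a classic Python pitfall of the sort a model can reproduce with complete confidence. The snippet below is a hypothetical illustration, not an actual ChatGPT answer: the first function looks reasonable and even works on its first call, but it is subtly wrong.

```python
def append_item(item, items=[]):
    # Plausible-looking but buggy: the default list is created once at
    # definition time and shared across every call, so earlier results
    # leak into later ones.
    items.append(item)
    return items


print(append_item(1))  # [1]     -- looks fine
print(append_item(2))  # [1, 2]  -- surprise: the previous item persists


def append_item_fixed(item, items=None):
    # The conventional fix: use None as a sentinel and build a fresh
    # list inside the function body on each call.
    if items is None:
        items = []
    items.append(item)
    return items


print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]
```

An answer containing the first version would read as perfectly authoritative, which is precisely the failure mode that prompted the Stack Overflow ban.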

For now, then, ChatGPT looks to be no more than a useful complement to learning and writing code. The bigger question is where things will be in two or three years' time. The pace of development is staggering: GitHub's Copilot autocomplete tool is already starting to look dated. This opens up opportunities for software engineers even as it presents challenges. One crucial consideration will be ensuring that open source alternatives continue to be available in the future.