Just last month, a new chatbot burst onto the scene and captured the collective attention of the world. ChatGPT (Chat Generative Pre-trained Transformer) builds upon previous efforts in the space, but seems to respond more realistically than other chat tools. This is likely due to the additional supervised and reinforcement learning that went into training ChatGPT.
As users of ChatGPT have found, conversational exchanges can be quite compelling. In particular, ChatGPT is able to respond in the style of famous people, based on the corpus of their writing and speech that the tool ingested as part of its training set. A prominent example is asking ChatGPT to respond like former President Donald Trump.
However, some reviewers are questioning the depth of the responses, which can turn out to be nonsensical. As a predictive text tool, ChatGPT does not really understand an exchange with a human. The tool has no context or comprehension of the subject being discussed. Rather, it is extremely good at predicting the next piece of text that will best fit the conversation so far.
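To make the idea of "predicting the next piece of text" concrete, here is a minimal sketch in Python of a toy bigram model: it counts which word tends to follow which in a tiny, made-up corpus, then always picks the most frequent successor. The corpus and function names here are hypothetical illustrations; ChatGPT itself uses a vastly larger transformer neural network, but the underlying task, predicting the next token, is the same.

```python
from collections import Counter, defaultdict

# Tiny, hypothetical training corpus (real models train on billions of words).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(text: str) -> str:
    """Return the word most often seen after the last word of `text`."""
    last = text.split()[-1]
    candidates = following.get(last)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the cat sat on"))  # "on" was followed by "the" twice
```

Note that the model has no idea what a cat or a mat *is*; it only knows which word statistically tends to come next, which is precisely the limitation the reviewers point to.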
While a compelling development, ChatGPT’s lack of true understanding of a conversation’s context is its Achilles heel. Often accurate, it can nonetheless lead a reader down a convincing but ultimately false path when responding. A key question this poses for all of us is whether we as humans will be able to tell the difference between real and simulated information.