ChatGPT: Revolutionary Technology


What Is ChatGPT?

ChatGPT is a large language model chatbot developed by OpenAI based on GPT-3.5. It has a remarkable ability to interact in conversational dialogue form and provide responses that can appear surprisingly human.

It is powered by a large language model, an AI system trained to predict the next word in a sentence by ingesting massive amounts of text from the internet and finding patterns through trial and error. ChatGPT was then refined using human feedback until it could hold a conversation, at least as well as a bot in 2022 could reasonably be expected to.
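To make "predict the next word" concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT is actually trained (real models use neural networks over billions of tokens); it is a toy bigram model that counts which word follows which in a small corpus and predicts the most frequent follower. The corpus and function names are illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction (not OpenAI's actual method):
# count which word follows which in a tiny corpus, then predict the
# most frequent follower.
corpus = (
    "the cat sat on the mat "
    "the cat sat on the rug "
    "the dog chased the cat"
).split()

# Map each word to a counter of the words observed after it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
print(predict_next("sat"))  # "on"
```

A real large language model does the same job at vastly greater scale, scoring every possible next token with a learned neural network instead of raw counts.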


OpenAI, an organization founded in 2015 with funding from Elon Musk and others, warns that ChatGPT isn't perfect and will sometimes give offensive or misleading answers. But that hasn't stopped social media users from asking it creative questions and posting the results online.


Who Built ChatGPT?

ChatGPT was created by San Francisco-based artificial intelligence company OpenAI. OpenAI Inc. is the non-profit parent company of the for-profit OpenAI LP.

OpenAI is also known for DALL·E, a deep-learning model that generates images from text instructions called prompts. The CEO is Sam Altman, who was previously president of Y Combinator.

How Accurate Is It?

A disclaimer pops up when you start to use the technology, warning users that ChatGPT is not always accurate.

“While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content,” it reads. “It is not intended to give advice.”

The model can write answers that seem plausible but contain errors under closer scrutiny. The frequency of falsehoods, gibberish, and minor errors has compounded doubts about how soon this type of AI could be relied on without human oversight.

OpenAI itself lists several known limitations:

  • ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there's currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
  • ChatGPT is sensitive to tweaks to the input phrasing and to attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim not to know the answer, but given a slight rephrase, can answer correctly.
  • The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.
  • Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.
  • While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.