GPT-3, OpenAI’s new language model, can program, design and even talk about politics or economics. Trained on a massive crawl of the web, it predicts text and code from just a few instructions.

The beta API of OpenAI’s latest language model, GPT-3, was recently published, and some developers have started to show what the platform is capable of: generating content from a few commands given in plain English, in a way anyone can understand. For example, “create a website with seven buttons and each button with the colors of the rainbow” will generate exactly the HTML code of a website with seven buttons… and color them with the different colors of the rainbow. The technology behind it makes heavy use of natural language processing, particularly prediction with language models, and is trained on hundreds of billions of words of text.

GPT-3 can create the HTML code for a web page from the instruction “7 buttons with rainbow colors”
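As an illustration, here is a minimal sketch of what such a request might look like through the beta API, using the openai Python client. The engine name and parameters are assumptions for the example, not confirmed settings:

    import openai

    openai.api_key = "YOUR_API_KEY"  # access is by invitation only during the beta

    # Describe the page in plain English and let the model write the markup.
    response = openai.Completion.create(
        engine="davinci",  # assumed engine name for this example
        prompt="An HTML page with seven buttons, one in each color of the rainbow:\n\n<html>",
        max_tokens=256,
        temperature=0.3,   # a low temperature keeps the output close to the prompt
    )
    print("<html>" + response["choices"][0]["text"])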

What is a Language Model?

A statistical language model is a probability distribution (the probabilities of different possible outcomes) over sequences of words. This applies to both written text and audio: the model assigns a probability to a whole sequence of words of a given length. A model depends on the type of data it is trained with, and careful data selection avoids “data bias”, for example around gender or formality. In audio, a lack of variety in the training data may mean that certain accents are not recognized.
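A toy example makes this concrete. In the minimal Python sketch below (with invented probabilities, purely for illustration), the probability of a whole sentence is the product of each word’s probability given the word before it:

    # Toy bigram model: invented probabilities, purely for illustration.
    # <s> and </s> mark the start and end of a sentence.
    bigram_prob = {
        ("<s>", "the"): 0.6, ("the", "dog"): 0.2,
        ("dog", "barks"): 0.5, ("barks", "</s>"): 0.7,
    }

    def sentence_probability(words):
        prob = 1.0
        for prev, curr in zip(["<s>"] + words, words + ["</s>"]):
            prob *= bigram_prob.get((prev, curr), 1e-6)  # unseen pairs get a tiny probability
        return prob

    print(sentence_probability(["the", "dog", "barks"]))  # 0.6 * 0.2 * 0.5 * 0.7 = 0.042

GPT-3 does the same thing at a vastly larger scale, conditioning each word on everything that came before it rather than on a single previous word.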

Well, GPT-3 is a language model. This means that (in very general terms) its objective is to predict what comes next based on what came before. It’s a kind of “autocomplete”, like the one we are used to seeing in search engines such as Bing, DuckDuckGo or Google, but of course at a much higher level. For example, you can type the first two or three sentences of an article and GPT-3 will take care of the rest of it for you. You can also generate conversations, and the answers will be based on the context of the previous questions and answers.


Now, understand that GPT-3 works on probability distributions, so each response is only one possibility: not necessarily the only one, nor the most logical one for a human. The same query or request may obtain a different or even contradictory response from GPT-3. It is a model that returns answers based on what has been said before (the data it has harvested from the web, from Wikipedia to newspapers, product reviews, social media, etc.), relating the request to everything it knows in order to produce the most meaningful possible answer. The results are sometimes amazing – only sometimes, because without fine-tuning GPT-3 is in fact not as good as smaller but carefully tuned NLP models on many tasks, and often only matches them on others. Nobody had trained a model this large before, so for now OpenAI is only letting selected developers use it for testing. It plans to monetize it following the Google or Amazon model, that is, charging for API use in the near future.
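The fact that answers are sampled from a distribution, rather than read from a database, is why the same prompt can produce different outputs. Below is a minimal sketch of that sampling step, with an invented next-word distribution and a common “temperature” knob – an illustration of the general technique, not GPT-3’s actual internals:

    import math
    import random

    # Invented scores for a few candidate next words.
    logits = {"dog": 2.0, "cat": 1.5, "president": 0.3}

    def sample_next_word(logits, temperature=0.8):
        # Softmax with temperature: higher temperature flattens the
        # distribution and makes unlikely words more probable.
        weights = {w: math.exp(v / temperature) for w, v in logits.items()}
        total = sum(weights.values())
        r = random.uniform(0, total)
        for word, weight in weights.items():
            r -= weight
            if r <= 0:
                return word

    # Running this several times will not always give the same word.
    print([sample_next_word(logits) for _ in range(5)])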

Where the data comes from – and who owns it

The OpenAI GPT-3 language model required extensive prior training to become what it is. That training consisted of ingesting a huge amount of information available on the Internet: OpenAI fed GPT-3 all the public books it could find freely on the web, all of Wikipedia, and millions of web pages and scientific documents. In essence, it has absorbed a large share of the most relevant human knowledge published online to date.

After reading and analyzing this information, the language model stored what it learned in roughly 700 GB of parameters, spread across 48 GPUs with 16 GB of memory each. To put that in context, last year OpenAI published GPT-2, trained on about 40 GB of text taken from some 45 million web pages. While GPT-2 had 1.5 billion parameters, GPT-3 has 175 billion.
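These figures are consistent with each other if we assume 4 bytes (a 32-bit float) per parameter – an assumption for this back-of-the-envelope check, not a published detail:

    params = 175e9                      # GPT-3's parameter count
    model_size_gb = params * 4 / 1e9    # 4 bytes per parameter -> 700 GB
    gpu_memory_gb = 48 * 16             # 48 GPUs x 16 GB each -> 768 GB available

    print(model_size_gb)   # 700.0
    print(gpu_memory_gb)   # 768, leaving some headroom above the 700 GB model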

The amazing experiments with GPT-3

There have been amazing experiments testing how this kind of NLP could save thousands of hours of writing and programming. Some programmers have created a web generator: we only have to describe in natural language what we want to be shown, and it generates the HTML/CSS code for it. In other experiments, the OpenAI model directly programs an app in React. Amazingly, it is enough to describe to GPT-3 what we want the app to show and what we would like it to do. Based on previously analyzed code, GPT-3 generates all the code and programs its behavior, as in the sketch below.
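These demos are reported to rely on “few-shot” prompting: a couple of description-to-code pairs followed by a new description, which the model then continues. A minimal sketch of the idea – the example pairs are invented, and the API call follows the earlier sketch:

    import openai  # assumes openai.api_key is set as in the earlier sketch

    # Two invented description->code examples, then the description we
    # actually want code for. The model continues the pattern.
    prompt = (
        "description: a button that says hello\n"
        "code: <button>hello</button>\n\n"
        "description: a red heading that says welcome\n"
        "code: <h1 style=\"color:red\">welcome</h1>\n\n"
        "description: a form with a text box and a submit button\n"
        "code:"
    )

    response = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=128, stop=["\n\n"]
    )
    print(response["choices"][0]["text"].strip())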

Programmers and writers testing GPT-3 all over the world are reporting amazing results and a clear improvement over GPT-2, particularly in the pruning of data bias. Specialist blogs on machine learning and AI have reported examples of “intelligent” responses by the language model, as well as some nonsensical conversations, for GPT-3 is programmed to reply to anything, even when the question clearly has no logic for a human. Being a first step in Artificial Intelligence, it can respond to many “common sense” questions, but some answers that would be obvious to a human are unavailable to the machine, and questions which are nonsense are responded to as if they were not.

GPT-3 is pretty good at common sense and historical facts:

Q: What is your favorite animal?
A: My favorite animal is a dog.
Q: Why?
A: Because dogs are loyal and friendly.
Q: What are two reasons that a dog might be in a bad mood?
A: Two reasons that a dog might be in a bad mood are if it is hungry or if it is hot.
Q: Who was president of the United States in 1955?
A: Dwight D. Eisenhower was president of the United States in 1955.
Q: What party did he belong to?
A: He belonged to the Republican Party.
Q: Are there any animals with three legs?
A: No, there are no animals with three legs.
Q: Why don't animals have three legs?
A: Animals don't have three legs because they would fall over.

But it does not pass the Turing test:

Q: How many eyes does my foot have?
A: Your foot has two eyes.
Q: How many eyes does a blade of grass have?
A: A blade of grass has one eye.

This is where GPT-3 enters surreal territory. GPT-3 knows how to have a normal conversation as long as the question makes sense and asks about things it can find knowledge of in its immense library. However, it doesn’t quite know how to say “Wait a moment… your question doesn’t make any sense.” It also doesn’t know when to stop and say “I don’t know” – although some might say many humans also lack that ability and enter discussions about things they don’t really know anything about.

So, in short, when used as an assistant – for requests such as “write like a lawyer”, translating everyday English statements into legalese – the results are quite promising. We cannot take machine-generated legal language at face value, but even our legal experts note that GPT-3 could become an assistant to attorneys and increase their productivity, and the same goes for programmers, blog writers, translators, knowledge-based technologies, etc. This kind of NLP could save thousands of hours of writing and programming; a sketch of the legalese use case follows.
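Such a “write like a lawyer” assistant can be prompted with the same few-shot pattern as the code generator above. A minimal sketch, with an invented example pair:

    import openai  # assumes openai.api_key is set as in the earlier sketches

    prompt = (
        "Plain English: If you break the printer, you pay for it.\n"
        "Legalese: The Lessee shall bear all costs arising from any damage "
        "to the printing equipment caused by the Lessee.\n\n"
        "Plain English: You can cancel with one month's notice.\n"
        "Legalese:"
    )

    response = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=64, stop=["\n"]
    )
    # A lawyer should still review the output before it is used anywhere.
    print(response["choices"][0]["text"].strip())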

Pros and Cons of GPT-3:

The release of the GPT-3 API has created great expectations in the AI community, particularly because of the names of the company’s investors and its vision. Use of the platform, in its pre-beta form, is by invitation only. Our tests conclude that it brings advances in:

  1. Answering knowledge questions and finding results in publicly available data, with potentially good uses in legal research.
  2. Generating large amounts of text in situations where a human stays in the loop as a post-editor before publication or translation.

But it needs more work in several areas:

  1. Privacy. Sensitive queries may be transferred to OpenAI or to the companies using the API, with unexpected results. There is no plan to license the technology for on-premises use.
  2. Bias. This could be an issue in some use cases, especially if the results are published without human verification or a final check.
  3. Nonsense. Again, it can produce weird output – definitely not the kind of message professionals in many areas would want published.
  4. Compute cost. Many applications require on-premises deployments, and models this large and costly are simply not in the budget.

GPT-3 is a very important piece of work, but it won’t change the Natural Language Processing industry overnight: machine translation, text classification and natural language generation are operations for which many organizations still require customization and fine-tuning. Here at PangeaMT we’re not planning to adopt it soon, due to its various shortcomings, particularly around privacy.