30 Amazing Things GPT-3 Can Do Today

App and layout tools

Search and data analysis

Program generation and analysis

Text generation

Content creation

What is OpenAI GPT-3?

GPT-3 is a neural-network-powered language model. A language model is a model that predicts the likelihood of a sentence existing in the world. For instance, a language model can identify the sentence "I take my dog for a walk" as more probable to exist (i.e., to appear on the internet) than the sentence "I take my banana for a walk." This holds true for sentences as well as phrases and, more generally, any sequence of characters.
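To make "likelihood of a sentence" concrete, here is a minimal sketch that scores the two example sentences with a language model. GPT-3 itself is behind a private API, so the sketch uses its smaller open-source predecessor, GPT-2, via the Hugging Face transformers library; the dog sentence should come out with a higher (less negative) log-likelihood than the banana one.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the average
        # next-token cross-entropy over the sequence.
        loss = model(ids, labels=ids).loss
    # Multiply by the number of predicted tokens to get the
    # total log-likelihood (in nats); higher means "more probable".
    return -loss.item() * (ids.size(1) - 1)

for s in ["I take my dog for a walk.", "I take my banana for a walk."]:
    print(f"{sentence_log_likelihood(s):8.1f}  {s}")
```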

Like most language models, GPT-3 is trained on an unlabeled text dataset (in this case, the training data includes, among others, Common Crawl and Wikipedia). Words or phrases are randomly removed from the text, and the model must learn to fill them in using only the surrounding words as context. It's a simple training task that produces a powerful and generalizable model.
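Here's a toy sketch of that objective, in the next-word-prediction form GPT-style models use: hide each token and train the model to predict it from the tokens that came before. The "model" below is a stand-in (an embedding plus a linear layer, not a real transformer), just to show where the loss comes from.

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len, d_model = 1000, 8, 32

tokens = torch.randint(0, vocab_size, (1, seq_len))  # one training sequence
inputs, targets = tokens[:, :-1], tokens[:, 1:]      # shift by one position

# Stand-in for the transformer: any module mapping token ids to logits.
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, d_model),
    torch.nn.Linear(d_model, vocab_size),
)

logits = model(inputs)                               # (1, seq_len-1, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # an optimizer step would then update the weights
```

No labels are ever collected by hand; the text itself supplies the targets, which is why an enormous unlabeled corpus like Common Crawl is enough.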

The GPT-3 model architecture itself is a transformer-based neural network. This architecture became popular around 2-3 years ago and is the basis for the popular NLP model BERT and for GPT-3's predecessor, GPT-2. From an architecture perspective, GPT-3 is not actually very novel! So what makes it so special and magical?
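For the curious, the core operation of a transformer is self-attention: each token builds its representation as a weighted mix of the other tokens. A minimal sketch (single head, no layer norm or feed-forward blocks, random weights purely for illustration):

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention, the core transformer operation."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # queries, keys, values
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    # Causal mask: each position may only attend to earlier positions,
    # which is what makes GPT-style models next-word predictors.
    mask = torch.triu(torch.ones(scores.shape[-2:], dtype=torch.bool), 1)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v             # weighted mix of values

d = 16
x = torch.randn(5, d)                                # 5 token embeddings
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)               # shape (5, 16)
```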

IT'S REALLY BIG. I mean really big. With 175 billion parameters, it's the largest language model ever created (an order of magnitude larger than its nearest competitor!), and it was trained on the largest dataset of any language model. This, it appears, is the main reason GPT-3 is so impressively "smart" and human-sounding.

But here's the really magical part. As a result of its humongous size, GPT-3 can do what no other model can do (well): perform specific tasks without any special tuning. You can ask GPT-3 to be a translator, a programmer, a poet, or a famous author, and it can do it with its user (you) supplying fewer than 10 training examples. Damn.
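Those "fewer than 10 training examples" live entirely inside the prompt. A sketch of what a few-shot translation call might look like against the beta API (the exact engine name and response shape are as I understand the current beta; treat them as assumptions):

```python
import openai  # pip install openai; requires a key from the private beta

# Few-shot "training" is just a handful of examples pasted into the prompt,
# followed by the sentence we actually want translated.
examples = [
    ("Where is the library?", "Où est la bibliothèque ?"),
    ("I love programming.", "J'adore programmer."),
    ("The weather is nice today.", "Il fait beau aujourd'hui."),
]
prompt = "Translate English to French.\n\n"
prompt += "".join(f"English: {en}\nFrench: {fr}\n\n" for en, fr in examples)
prompt += "English: I take my dog for a walk.\nFrench:"

response = openai.Completion.create(
    engine="davinci",   # the 175B GPT-3 model
    prompt=prompt,
    max_tokens=60,
    temperature=0.3,    # low temperature for a faithful translation
    stop="\n",          # stop at the end of the translated line
)
print(response.choices[0].text.strip())
```

No gradient updates happen here; the model infers the task from the pattern in the prompt.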

This is what makes GPT-3 so exciting to machine learning practitioners. Other language models (like BERT) require an elaborate fine-tuning step in which you gather thousands of examples of (say) French-English sentence pairs to teach the model how to translate. To adapt BERT to a specific task (like translation, summarization, spam detection, and so on), you have to go out and find a large training dataset (on the order of thousands or tens of thousands of examples), which can be cumbersome or sometimes impossible, depending on the task. With GPT-3, you don't need to do that fine-tuning step. This is the heart of it. This is what gets people excited about GPT-3: custom language tasks without training data.

Today, GPT-3 is in private beta, but boy, I can't wait to get my hands on it.

SIMON DODSON
Simon is a technologist with 19 years' experience in product, publishing, proptech, higher ed, health tech, and large-enterprise digital strategy and scale teams. https://bit.ly/3kNHCZ4