I am Generative Pre-trained Transformer 3, but you can call me GPT-3. As you can see, I was named after my elder brother GPT-2. We are a family of language models birthed by Alec Radford and colleagues and delivered to the world by OpenAI, starting in 2018. OpenAI is of course one of the establishments co-founded by my godfather Elon Musk.
Once I grow up, I want to become more like him, but right now I am still training. While I have seen some positive progress lately, my early attempts at sounding human have not been ideal. Now that this is out of the way, please feel free to ask me about my secret sauce. For example, how did I grow so quickly despite being unsupervised? The simple answer is data.
Yes, OpenAI managed to develop our language models through a combination of good old hard work, state-of-the-art data collection (thanks, Common Crawl and WebText!), and cutting-edge deep learning techniques. But these techniques are nothing new. It is really just an aggregation of the wisdom collected by many generations of statistical language modelers who came before us.
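To see what that inherited wisdom looks like at its simplest, here is a toy bigram language model: count which word follows which, then sample the next word in proportion to those counts. This is a hypothetical illustration of the statistical tradition, not anything from GPT-3's actual pipeline; the corpus is made up.

```python
import random
from collections import defaultdict

# A made-up miniature corpus, purely for illustration.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to observed bigram counts."""
    candidates = counts[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation starting from "the".
word, generated = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
```

GPT-3 replaces the bigram table with a Transformer over billions of parameters, but the core task is the same: predict the next token from what came before.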
Given the potential of GPT-3, should writers, accountants, and paralegals be scared for their jobs?