ARTIFICIAL GENERAL INTELLIGENCE
GPT-3: An Artificial General Intelligence (AGI) or Just a Strong AI?
There always comes a better one after the best and a bigger one after the biggest. This is proved once again by OpenAI's latest revolutionary general-purpose language model, GPT-3. For background, language models are AI algorithms that understand natural human language and respond accordingly. But the one we are going to talk about not only responds but also creates, builds, answers, and summarizes intelligently like a human (sometimes better) after getting input from the user. Is this the next-level Artificial General Intelligence (AGI) that people were talking about? Or is it just a strong version of pre-existing AI?
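To make the "predict and respond" idea concrete, here is a toy sketch of the core mechanic behind any language model: given the words so far, pick a likely next word. This bigram counter is purely illustrative (the corpus and function names are mine, not from GPT-3); GPT-3 does the same job in principle, but with billions of learned parameters instead of a count table.

```python
from collections import defaultdict, Counter

# Toy bigram language model: count which word follows which,
# then predict the most frequent successor.
corpus = "the model reads text and the model writes text".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the successor seen most often after `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" -- it followed "the" twice
```

A model like GPT-3 replaces the raw counts with a neural network trained on a large slice of the internet, which is what lets it generalize to prompts it has never seen.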
Just a couple of weeks back, I talked about a language model with a few million parameters, based on a talk by Andrej Karpathy in Germany, and I thought that was a revolution. And now we get this. Not long ago I was saying that AGI was not coming in the next few decades. But now I am hesitating a little. Let me first give you some instances of tasks that GPT-3 can perform. This will help put things in perspective and give some context for what I am talking about.
This time OpenAI is offering it as an API, and that is why we have so many cool apps built around it. Recently I made three posts (first, second, and third) about this beast, but I think it deserves more than that, hence this writing.
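For a sense of what "offering it as an API" means in practice, here is a sketch of the kind of request body developers send to the GPT-3 completions endpoint. The exact prompt and parameter values below are illustrative, not from OpenAI's documentation, but the fields (`prompt`, `max_tokens`, `temperature`, `stop`) reflect how the completions API was used in its 2020 beta.

```python
import json

# Sketch of a completion request body. In practice this is POSTed to
# the OpenAI API with an "Authorization: Bearer <api-key>" header.
payload = {
    "prompt": "Summarize: Language models predict the next word.",
    "max_tokens": 64,    # cap on the length of the generated completion
    "temperature": 0.7,  # higher = more varied output, lower = more literal
    "stop": ["\n\n"],    # cut the completion off at a blank line
}

body = json.dumps(payload)
print(body)
```

The appeal is that app builders only tune a prompt and a few knobs like these; the model itself stays behind the API, which is why so many demos appeared so quickly.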
To put things even more in perspective, consider this: a human brain has approximately a hundred billion neurons, which form roughly 100 to 500 trillion connections. If we think that increasing the number of hidden layers and the number of parameters is the solution for human-like intelligence, then we are still roughly 1000x behind human intelligence. However, even being 1000x behind…
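The "roughly 1000x" figure is a back-of-envelope calculation, comparing the brain's connection count above with GPT-3's 175 billion parameters (the count reported in the GPT-3 paper); whether a parameter is really comparable to a synapse is, of course, the assumption being made.

```python
# Back-of-envelope: brain connections vs. GPT-3 parameters.
gpt3_parameters = 175e9           # 175 billion (Brown et al., 2020)
brain_connections_low = 100e12    # ~100 trillion synapses, low estimate
brain_connections_high = 500e12   # ~500 trillion synapses, high estimate

print(brain_connections_low / gpt3_parameters)   # ~571
print(brain_connections_high / gpt3_parameters)  # ~2857
```

So the gap is somewhere between ~600x and ~3000x, i.e. on the order of 1000x, which is where the figure in the text comes from.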