GPT-3 Will Come Out Soon, and There Are Some Risks About It

Social media is awash with people talking about a new piece of software called GPT-3, developed by OpenAI, an Elon Musk-backed artificial intelligence lab in San Francisco.

GPT-3 (Generative Pre-trained Transformer 3) is a language-generation tool capable of producing human-like text on demand.

The software learned how to produce text by analyzing vast quantities of information on the internet and observing which words and letters tend to follow one another.

Last week, OpenAI began releasing it to a select few people who had requested access to an early private version, and they have been blown away.

After testing it, entrepreneur Arram Sabeti wrote in a blog post that it is far more coherent than any artificial-intelligence language system he has ever tried.

He added that all you have to do is write a prompt, and it will add text it thinks plausibly follows. He has gotten it to write technical manuals, essays, interviews, guitar tabs, press releases, stories, and songs. He says it is both frightening and hilarious, and that he feels like he has seen the future.
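To give a sense of the prompt-and-continue workflow Sabeti describes, here is a minimal sketch of how a private-beta developer might ask GPT-3 to continue a prompt through OpenAI's Python client. The engine name, prompt text, and parameter values are illustrative assumptions, not details reported in this article.

```python
# Hypothetical sketch: asking GPT-3 to continue a prompt via OpenAI's Python client.
# Engine name and parameter values are assumptions for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # issued to private-beta testers

response = openai.Completion.create(
    engine="davinci",  # assumed name of the largest GPT-3 engine
    prompt="Write a short press release announcing a new guitar:",
    max_tokens=150,    # how much text to generate after the prompt
    temperature=0.7,   # higher values make the continuation more varied
)

# The model returns text it judges would plausibly follow the prompt.
print(response.choices[0].text)
```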

The company wants developers to play with GPT-3 and see what they can achieve with it before it rolls out a commercial version later this year. It is unclear exactly how the system might benefit businesses or how much it will cost. Nevertheless, it could potentially be used for prescribing medicine, improving chatbots, and designing websites.

OpenAI first described GPT-3 in a research paper published in May. It follows the earlier GPT-2, which OpenAI chose not to release widely last year because it considered the model too dangerous: the company was concerned that people would use it in malicious ways, such as generating spam and fake news in vast quantities.

GPT-3

GPT-3 is roughly a hundred times larger than GPT-2 and far more capable than its predecessor: it has 175 billion trained parameters, versus 1.5 billion for GPT-2.

In its current form, it may be brilliant in many ways, but it certainly has its flaws: developers have noticed that GPT-3 is prone to spouting sexist and racist language, even when the prompt is something harmless. It also churns out complete nonsense from time to time, the kind of thing it is hard to imagine any person saying.

Trevor Callaghan, a former employee at rival lab DeepMind, said that if you assume we get natural language processing (NLP) to a point where most people cannot tell the difference, the real question becomes: what happens next?

At that point, the big issue is whether we can map out those effects, debate them, and figure out what to do about them.

Some also worry that it could end up replacing certain jobs. The chief technology officer of Oculus VR said that the recent, almost accidental discovery that GPT-3 can sort of write does generate a slight shiver.

Jerome Pesenti, Facebook's head of artificial intelligence, wrote on Twitter on Wednesday that he did not understand how OpenAI went from considering GPT-2 too big a threat to humanity to release, to releasing GPT-3 anyway.