Porr's Experiment: GPT-3 Writing Blog Posts

Liam Porr is a college student who recently used GPT-3, a language-generating artificial intelligence tool, to produce a fake blog post that landed in the No. 1 spot on Hacker News, as the MIT Technology Review reported. Porr set out to demonstrate that content generated by GPT-3 could fool people into believing a human wrote it. He told MIT Technology Review that it was super easy, and that was the scariest part.

In case you are not familiar with GPT-3: it is the latest version in a series of artificial intelligence autocomplete tools designed by San Francisco-based OpenAI, and it has been in development for several years. At its most basic, GPT-3 (Generative Pre-trained Transformer 3) auto-completes your text based on prompts from a human writer.

James Vincent explains how it works. Like all deep learning systems, GPT-3 looks for patterns in data. To simplify things, the program has been trained on a vast corpus of text from which it has mined statistical regularities. Those regularities are unknown to humans; they are stored as billions of weighted connections between the different nodes in GPT-3's neural network. Based on the weights in its network, the program knows that the word 'fire' is far more likely to be followed by 'alarm' or 'truck' than by 'elvish' or 'lucid.'
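To make the idea of 'weighted connections' concrete, here is a toy sketch in Python. It is not GPT-3 (whose weights are not public) but a minimal bigram model that mines the same kind of statistical regularities from a tiny made-up corpus and samples the next word in proportion to its learned weight.

```python
import random
from collections import Counter, defaultdict

# Toy illustration (not GPT-3 itself): a bigram model that "mines
# statistical regularities" from a tiny corpus and stores them as
# weighted connections between words.
corpus = (
    "the fire alarm rang and the fire truck arrived "
    "the fire alarm rang again and the truck left"
).split()

# Count how often each word follows another.
weights = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to its learned weight."""
    words, counts = zip(*weights[prev].items())
    return random.choices(words, weights=counts)[0]

print(weights["fire"])    # 'alarm' and 'truck' dominate
print(next_word("fire"))  # always 'alarm' or 'truck', never 'elvish'
```

GPT-3 differs enormously in scale and architecture (a transformer attending over long contexts rather than a single previous word), but the underlying idea of learned, weighted word statistics is the same.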

Here is an example from Porr's blog post, titled 'Feeling unproductive? Maybe you should stop overthinking.'

Definition #2: OT (Over-Thinking) is the act of trying to come up with ideas that have already been thought by someone else. Over-thinking usually results in ideas that are stupid, impractical, or even impossible.

I would like to think that I could tell this was not written by a human. But as one can see, there is a lot of not-great writing on these here internets, and it is certainly possible this could pass as 'content marketing' or some other filler content.

Rather than releasing GPT-3 into the wild at first, OpenAI decided to give researchers access to its API in a private beta. Porr, a computer science student at the University of California, Berkeley, managed to find a Ph.D. student who already had access to the API and who agreed to work with him on the experiment. Porr wrote a script that fed GPT-3 a headline and an intro for a blog post, and the model generated a few complete versions of the post. Porr then chose one for the blog and copy-pasted GPT-3's output with little editing.
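Porr's actual script has not been published, so the following is only a minimal sketch of what such a workflow might look like against the original OpenAI completions API from the 2020 private beta. The headline, intro, and sampling parameters here are illustrative assumptions, not his real settings.

```python
import openai  # classic pre-1.0 OpenAI Python client (pip install "openai<1.0")

openai.api_key = "YOUR_API_KEY"  # beta access required an invite at the time

# Illustrative inputs only -- not Porr's actual headline or intro.
headline = "Feeling unproductive? Maybe you should stop overthinking"
intro = "To get more done, it may help to think less."

prompt = f"Title: {headline}\n\n{intro}\n"

# Request several candidate continuations, then let a human pick one,
# mirroring the generate-then-curate workflow described above.
response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base engine
    prompt=prompt,
    max_tokens=600,     # assumption: roughly a short blog post
    temperature=0.7,    # assumption: moderately creative sampling
    n=3,                # generate a few versions to choose from
)

for i, choice in enumerate(response.choices):
    print(f"--- Candidate {i + 1} ---")
    print(choice.text.strip())
```

The human-in-the-loop step at the end is what made the experiment cheap: the model does the drafting, and the author only curates and lightly edits.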

Porr said that the post went viral in a matter of hours and that the blog drew more than 26,000 visitors. However, only one person reached out to ask if the post was artificially generated. Several commenters did guess that GPT-3 was the author, but Porr says the community downvoted those comments.

He suggests that GPT-3's 'writing' could replace content producers, which may not happen, but we can never really be too sure about the future. Porr writes that the whole point of releasing the project in a private beta is so the community can show OpenAI new use cases that it should either look out for or encourage study of.