Codification of Society: Why AI Fails to Moderate Content

Welcome to the Codification of Humanity, a new Neural series analyzing the machine world's attempts to create human-level AI. Meta has just introduced a new AI system, billing it as an important milestone on the path to more generalized, intelligent AI. But moderating everything is hard. Whether in the expansive open worlds of Twitter and Facebook or in small, niche forums, when people have anonymity, they tend to be mean to each other. The topic never loses its relevance.

The problem seems specific and straightforward: content that crosses the line must be curated. Finding a solution, however, is quite tricky. Even platforms that boast full respect for free speech must moderate content. Unethical, violent videos involving children and animals spread massively, and violent crimes are broadcast on social media by unverified perpetrators.

Decades ago, the answer was simple: online communities appointed moderators to oversee their forums. The more users a site had, the more moderators it needed to keep order. It is worth noting, though, that this system does not work at global scale; a platform would need an army of moderators, plus a consistent policy governing how they moderate. That leaves social media companies with two choices. The first is to invent a means of automated content moderation. The alternative is to limit the number of users to what their moderators can actually oversee.

Effective Failure

Big tech has chosen neither. The most popular social media platforms have simply grown too large for either approach. No AI system currently exists that can function with even a fraction of a human moderator's efficiency. To be precise, artificial intelligence makes content moderation worse. There is no reason to believe this will change until AI reaches a human level of language and social understanding.

Meta's new AI is a few-shot learner, meaning a small pretrained model can be updated and adapted faster than other models. The research is quite impressive; the team's ability to get AI to perform with so little data is remarkable. However, claiming this is another step toward general AI systems is hard to believe. Also in question is the term AGI, used to describe artificial general intelligence: a system that can perform any task a person can, given the same access. That is what an AI would need to successfully moderate content across social media. But a more efficient language model is not a solution to AGI; these are two different things.
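To make the idea of few-shot learning concrete, here is a minimal toy sketch (not Meta's system, and far simpler than any real language model): a classifier that labels short posts from only a handful of labeled examples, using a bag-of-words nearest-example lookup in pure Python. All names, example posts, and labels are hypothetical, invented for illustration.

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class FewShotModerator:
    """Labels a post with the label of its most similar example.

    The 'few-shot' aspect: it needs only a handful of labeled
    examples instead of a large training corpus.
    """
    def __init__(self, examples):
        # examples: list of (text, label) pairs
        self.examples = [(embed(t), lbl) for t, lbl in examples]

    def classify(self, text):
        v = embed(text)
        _, label = max(((cosine(v, e), lbl) for e, lbl in self.examples),
                       key=lambda p: p[0])
        return label

# Hypothetical few-shot "training set": just four labeled posts.
mod = FewShotModerator([
    ("i will hurt you", "remove"),
    ("you are garbage and should disappear", "remove"),
    ("great game last night", "keep"),
    ("thanks for sharing this recipe", "keep"),
])

print(mod.classify("i will hurt him"))              # → remove
print(mod.classify("thanks for the great recipe"))  # → keep
```

The toy works on word overlap, so it trivially fails on paraphrases and sarcasm; the point of real few-shot systems is that a large pretrained model supplies the language understanding that this sketch lacks, which is exactly the capability the article argues is still far from human level.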