‘Human-Like’ Is a Low Bar for Machine Learning Projects

Show me a human-like machine and I will show you a faulty piece of tech. By 2025, the artificial intelligence market is forecast to exceed $300 billion, so most companies are trying to cash in on that bonanza with some form of ‘human-like’ artificial intelligence. Maybe it is time to reconsider that approach.

The big idea is that human-like artificial intelligence is an upgrade: computers compute, but artificial intelligence can learn. Unfortunately, humans are not good at the kinds of tasks a computer makes sense for, and artificial intelligence is not particularly good at the kinds of tasks humans are good at. That is why researchers are moving away from development paradigms that focus on imitating human cognition.

Recently, a pair of New York researchers took a deep dive into how humans and artificial intelligence process words and word meaning. In a study of ‘psychological semantics,’ published on arXiv, the duo set out to explain the shortcomings of machine learning systems in the natural language processing (NLP) domain. They note that many artificial intelligence researchers do not dwell on whether their models are human-like: if someone could develop a highly accurate machine translation system, few would complain that it does not do things the way a human translator would.

In the field of translation, humans have various techniques for keeping multiple languages in their heads and interfacing fluidly between them. Machines, on the other hand, do not need to understand what a word means in order to assign the appropriate translation to it.

Human-Like

Things get tricky as you approach human-level accuracy. It is relatively simple to translate ‘one,’ ‘two,’ and ‘three’ into Spanish: the machine can learn that they map exactly to ‘uno,’ ‘dos,’ and ‘tres,’ and it will most probably get those right a hundred percent of the time. But once you add complex words, concepts with more than one meaning, and slang or colloquial speech, things get more complicated.
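To make the contrast concrete, here is a minimal sketch of that kind of word-for-word lookup. The vocabulary and examples are hypothetical illustrations, not any real translation system: the table nails the exact equivalences every time, but it has no way to choose between senses of an ambiguous word.

```python
# Hypothetical word-for-word lookup table -- an illustration, not a real MT system.
EN_TO_ES = {
    "one": "uno",
    "two": "dos",
    "three": "tres",
    "bank": "banco",  # 'bank' can also mean a riverbank ('orilla'); the table cannot tell
}

def naive_translate(sentence: str) -> str:
    """Translate word by word; mark anything outside the vocabulary."""
    return " ".join(EN_TO_ES.get(w, f"<?{w}?>") for w in sentence.lower().split())

print(naive_translate("one two three"))  # -> 'uno dos tres' (always right)
print(naive_translate("bank"))           # -> 'banco' (right only for the financial sense)
```

The exact mappings never fail, which is why simple phrases translate perfectly; ambiguity and slang fail precisely because no lookup, however large, encodes what a word means in context.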

We begin getting into artificial intelligence’s uncanny valley when developers try to create translation algorithms that can handle anything and everything. Artificial intelligence struggles to keep up with an ever-changing human lexicon, and in that respect it is like humans: a person cannot learn all the slang they might encounter in Mexico City from a few Spanish classes.

Thus, NLP is not capable of human-like cognition yet, and it would be ludicrous to make it exhibit human-like behavior. Imagine, for example, if Google Translate balked at a request because it found the word ‘moist’ distasteful.

That line of thinking is not reserved for NLP. For most machine learning projects, making artificial intelligence appear more human-like is merely a design decision. As the New York University researchers put it, one way to think about such progress is purely in terms of engineering: there is a job to do, and if the system does it well, it is successful. Engineering matters because it can deliver faster and better performance and can relieve humans of dull labor such as making airline itineraries, buying socks, or keying in an answer.