Artificial Intelligence, Robots, and Cause-and-Effect

Let us look at robots. Yoshua Bengio is a Turing Award-winning scientist known for his work in deep learning. In a 2019 interview with IEEE Spectrum, he said that deep learning has so far consisted of learning from static datasets, which makes artificial intelligence good at tasks involving associations and correlations.

Neural nets, however, do not capture cause and effect, nor do they explain why those correlations and associations exist. They are also poor at tasks that involve planning, imagination, and reasoning. That, in turn, limits artificial intelligence's ability to generalize and transfer learned skills to other, related environments.

Ossama Ahmed, a master’s student at ETH Zurich, worked with Bengio’s team to develop a robotic benchmarking tool for transfer and causality learning. Ahmed said that this lack of generalization is a big problem: robots are usually trained in simulation, and when you try to deploy them in the real world, they mostly fail to transfer their learned skills because the physical properties of the simulation differ from those of the real world. The group’s tool, CausalWorld, demonstrates that with some of the currently available methods, robots’ generalization capabilities are not good enough, at least not to the extent that they can be deployed safely in any arbitrary real-world situation.

Robots

The paper on CausalWorld is available as a preprint. It describes benchmarks in a simulated robotic-manipulation environment built on the open-source TriFinger robotics platform. CausalWorld’s main goal is to accelerate research on causal structure and transfer learning in that simulated environment, so that learned skills could potentially carry over to the real world. Robotic agents are given tasks such as placing, pushing, and stacking, informed by how children learn to play with blocks and build complex structures. There is a massive set of parameters, such as the appearance, weight, and shape of the blocks and of the robot itself, and users can intervene on them at any point to evaluate a robot’s generalization capabilities.
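To make the idea of an intervention concrete, here is a minimal toy sketch, not CausalWorld’s actual API: an environment exposes named physical parameters (the names and values below are hypothetical), and a user can set any of them between episodes to probe how well a trained agent copes with the change.

```python
class ToyBlockEnv:
    """Toy stand-in for an intervenable simulation environment."""

    def __init__(self):
        # Hypothetical parameters, loosely mirroring the kinds of
        # properties CausalWorld lets users vary (mass, size, appearance).
        self.params = {
            "block_mass": 0.08,    # kg
            "block_size": 0.065,   # m
            "block_color": "red",
        }

    def do_intervention(self, **changes):
        # Overwrite the chosen parameters; reject unknown names so a
        # typo cannot silently create a new variable.
        for name, value in changes.items():
            if name not in self.params:
                raise KeyError(f"unknown parameter: {name}")
            self.params[name] = value


env = ToyBlockEnv()
env.do_intervention(block_mass=0.2, block_color="blue")
print(env.params["block_mass"])  # 0.2
```

An evaluation loop would then train an agent under one setting, intervene, and measure how much task performance drops under the changed physics.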

In their study, the researchers gave the robots several tasks ranging from simple to extremely challenging, under three different curricula: the first involved no changes in the environment; the second involved changes to a single variable; and the third involved full randomization of all variables in the environment. They observed that as the curricula got more complex, the agents showed less ability to transfer their skills to the new conditions.
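The three curricula can be sketched as a simple sampling rule. This is an illustrative toy, not the paper’s actual protocol: the parameter names and ranges below are invented, and each curriculum decides how many environment variables get re-randomized per episode.

```python
import random


def sample_episode_params(base, curriculum, rng):
    """Sample per-episode parameters under one of three toy curricula:
    'fixed'  - no changes to the environment,
    'single' - randomize exactly one variable,
    'full'   - randomize every variable.
    """
    # Hypothetical variables and ranges for illustration only.
    ranges = {
        "block_mass": (0.05, 0.30),
        "block_size": (0.04, 0.09),
        "friction":   (0.20, 1.00),
    }
    params = dict(base)
    if curriculum == "fixed":
        return params
    names = list(ranges)
    chosen = [rng.choice(names)] if curriculum == "single" else names
    for name in chosen:
        lo, hi = ranges[name]
        params[name] = rng.uniform(lo, hi)
    return params


rng = random.Random(0)
base = {"block_mass": 0.08, "block_size": 0.065, "friction": 0.5}
print(sample_episode_params(base, "fixed", rng) == base)  # True
```

Training under "fixed" and evaluating under "full" is the kind of gap where, per the study, agents lose the most performance.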

That is the current state of affairs for robots.