Google Created “A Constitution For Robots” To Make Them Safer For People
DeepMind Technologies Limited, a subsidiary of Google, has introduced a “Constitution for Robots” designed to keep robots from harming people. The company described the system on its official blog on January 4.
AutoRT, DeepMind’s data-collection system, combines a visual language model (VLM) and a large language model (LLM) to help robots assess their surroundings, adapt to unfamiliar settings, and decide which tasks to attempt. The VLM analyzes the scene and recognizes the objects in view, while the LLM proposes creative tasks for the robot to carry out. The most important innovation in AutoRT is the “Robot Constitution” built into the LLM stage: a set of safety-focused prompts that instruct the machine to avoid tasks involving people, animals, sharp objects, or even electrical appliances. For added safety, the robots are programmed to stop automatically if the force on their joints exceeds a set threshold, and each one is fitted with a physical kill switch that a person can use in an emergency.
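A rough sketch of how such a safety layer might sit between task generation and execution is shown below; the rule list, function names, and force threshold are illustrative assumptions, not DeepMind’s actual implementation.

```python
# Illustrative sketch only: the keyword rules, names, and threshold below are
# assumptions for demonstration, not DeepMind's actual AutoRT code.

FORBIDDEN_KEYWORDS = ["person", "human", "animal", "knife", "scissors",
                      "sharp", "outlet", "socket", "appliance"]

JOINT_FORCE_LIMIT_N = 30.0  # hypothetical per-joint force threshold, in newtons


def passes_constitution(task_description: str) -> bool:
    """Reject any LLM-proposed task that mentions people, animals,
    sharp objects, or electrical appliances."""
    text = task_description.lower()
    return not any(keyword in text for keyword in FORBIDDEN_KEYWORDS)


def select_safe_tasks(proposed_tasks: list[str]) -> list[str]:
    """Keep only the candidate tasks that satisfy the safety rules."""
    return [task for task in proposed_tasks if passes_constitution(task)]


def should_emergency_stop(joint_forces_n: list[float]) -> bool:
    """Signal a stop if any joint experiences excessive force."""
    return any(force > JOINT_FORCE_LIMIT_N for force in joint_forces_n)


if __name__ == "__main__":
    candidates = [
        "pick up the sponge and wipe the countertop",
        "hand the knife to the person at the table",
        "unplug the toaster from the wall outlet",
    ]
    print(select_safe_tasks(candidates))            # only the sponge task survives
    print(should_emergency_stop([5.2, 31.7, 8.0]))  # True: one joint over the limit
```

In this sketch the constitution acts as a filter on the LLM’s candidate tasks, while the force check runs separately during execution; the physical switch described above remains a purely hardware-level backstop outside any software.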
In November, it was reported that a robot had killed a man at a factory in South Korea. The loading machine is believed to have mistaken the victim for a box of vegetables and pressed him against the sorting equipment; he suffered serious injuries and later died in hospital.