Beware the helpful robots in your home

2019-03-02 14:14:00

By Paul Marks

You step into the waiting room on your way to an important meeting. The receptionist, a smiling humanoid robot, shows you to a chair and offers you a drink. When you accept, the robot returns bearing a cup of steaming hot coffee. Just as the robot passes you the drink, its arm jerks, spilling the scalding liquid all over you.

The growing breed of oh-so-cute robotic pets and small humanoids can make it easy to forget that robots are essentially fast-moving lumps of metal, and therefore potentially dangerous when something goes wrong. Industrial robots have been with us for decades, along with a steady trickle of robot-related accidents. The first death came in 1981, when a robotic arm pushed an engineer at a Kawasaki plant in Japan into a grinder. According to the Health and Safety Executive in the UK, there were 77 accidents involving robots in the country in 2005.

One of the measures taken to reduce the injury toll has been to confine robots to a restricted “work cell” that people are not supposed to enter. If they do, sensors detect their presence and cut the power supply to the robot.

Such safety measures would be fine if robots were going to stay in their work cells, but they are not: over the next decade humanoid robots will begin appearing in our homes and offices. In the vanguard of this new market are Japanese firms such as Honda, Toyota and Toshiba, which plan to launch domestic robots that will assume roles as varied as receptionists, security guards, entertainers, hospital porters, tourist guides and cleaners. Although they will have nothing like the power and speed of industrial robots, lawyers and roboticists agree that robots in the home present a raft of new safety risks, and engineers are striving to anticipate and minimise them.

Last December Honda unveiled the latest version of its Asimo robot as a receptionist.
The 1.3-metre-high humanoid robot can welcome visitors, show them to a meeting room and bring them a tray of coffees, says Honda UK spokesman William de Braekleer. Asimo can also push a trolley full of mail or sandwiches around a building. Soon Asimo will be working as the receptionist at a Honda research centre in Wako, near Tokyo. “But we don’t expect to sell it for at least 10 years because it needs much more intelligence to understand the world,” says de Braekleer.

However, when we do begin sharing our living and working space with mobile robots, many legal issues are likely to arise. What if, for instance, a robot were to get under someone’s feet, tripping them down a set of stairs? Robot owners will have to be aware of their responsibilities to other people, says Joanne Barker, a solicitor with Which Legal Service in Hertford, UK. “It would be like having a pet: it can do its own thing, but you are the one responsible for its actions.” If a reasonable person could have foreseen such a hazard, then the maker or owner might be guilty of negligence, leading to a damages claim.

Autonomous robots will operate using information from arrays of ultrasound, visual and infrared sensors, and use on-board intelligence to map the environment they are placed in. But lawyer Stephen Sidkin, a spokesman on high-tech product liability issues for the Law Society of England and Wales, believes robots will have to be a lot smarter than that. “Designers will have to ensure a robot’s software is capable of learning how to avoid all problems like hot drink spills. The onus has to be on the manufacturer to get it right.”

It is not just a few lawyers who are worried about our safety around robots. The Japanese government is concerned enough to have commissioned a long-term research programme designed to establish safety standards for robots working in our homes and as nurses and porters in healthcare. This is set to run beyond 2010.
Safety will also be a major theme at the first ever conference on human-robot interaction, to be held in Salt Lake City, Utah, in early March. If the safety demands placed on robots are too onerous, the market for humanoid home-helpers may never get off the ground because software development costs would be prohibitive. Ronald Arkin, a senior roboticist at Georgia Tech in Atlanta and a consultant to Honda’s P3 and Sony’s recently shelved Qrio humanoid robot development programmes, says it is not realistic to expect all hazards to be programmed out. “Are they going to hold a robot to a higher standard than a human receptionist? A human cannot cope with everything that can conceivably be thrown at them either,” Arkin says.

However, certain simple, basic safety programs are feasible, such as obstacle avoidance to prevent an unintended impact. Research is focusing on three areas: sensors, smarter control strategies, and physically softer, rounder-edged robots. The more a robot knows about what is around it, the better. “When you get into the class of the larger humanoid robots you have to be considerably more concerned with safety,” says Arkin.

If a robot’s arm or leg is moving fast, for instance, and it contacts something unexpected, a safety method called “Go Limp” should come into play. Haptic sensors on the robot’s limbs measure the force of any contact and give feedback that makes the arm or leg go limp if the force exceeds a safety threshold. “It doesn’t have to be a deaf, dumb and blind thing ploughing through a crowd: haptics will let it feel its environment,” Arkin says.

To prevent the spills and trips caused by sudden failures of vital control systems such as servomotors, robots should come equipped with back-up motors that kick in when necessary.
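The “Go Limp” reflex described above boils down to a simple feedback rule: compare measured contact force against a threshold, and zero the limb’s stiffness when the threshold is exceeded. The sketch below is a hypothetical illustration of that rule only; the threshold value, function names and stiffness representation are assumptions, not any manufacturer’s actual controller.

```python
# Hypothetical sketch of a "Go Limp" safety reflex: if haptic sensors on a
# limb report a contact force above a safety threshold, the controller drops
# joint stiffness to zero so the limb yields instead of pushing harder.

FORCE_THRESHOLD_N = 20.0  # assumed safety limit, in newtons


def go_limp_check(contact_forces, stiffness):
    """Return the updated joint-stiffness command for one control cycle.

    contact_forces: force readings (N) from the limb's haptic sensors
    stiffness: current stiffness command (0.0 means fully limp)
    """
    if max(contact_forces, default=0.0) > FORCE_THRESHOLD_N:
        return 0.0       # unexpected hard contact: let the limb go limp
    return stiffness     # otherwise keep the commanded stiffness


# Example: a 35 N contact (the arm bumps a person) trips the reflex,
# while light incidental contact leaves the stiffness unchanged.
print(go_limp_check([3.1, 35.0, 0.4], stiffness=0.8))  # -> 0.0
print(go_limp_check([3.1, 2.0], stiffness=0.8))        # -> 0.8
```

In a real robot this check would run inside the limb’s control loop at high frequency, so the arm yields within milliseconds of an unexpected impact.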
In the control arena, Dana Kulić, a robotics engineer at the University of British Columbia in Canada, and her colleague Elizabeth Croft are developing software to help robots find the surest route through rooms filled with people (see “Danger zone”). If all else fails, a bouncy visco-elastic covering with spherical areas over joints will make any collisions less painful.

Perhaps we are expecting an unrealistically small margin for error. Why should a robot be any safer than a car, asks John Hallam, a researcher in mobile robot behaviour at the University of Southern Denmark in Odense. “We establish a car’s safety with a mixture of strategies: insurance companies indirectly ensure adequate engineering standards and we license drivers. So we may be able to say that a robot is ‘safe’ for everyday use, but only in the same sense that a car and driver are safe,” Hallam says.

Given that two car companies, Honda and Toyota, are at the forefront of humanoid robot research, Arkin thinks their motor industry experience will be invaluable. “They have always been saddled by safety engineering, compliance, litigation and product recall issues,” he says. “I’m sure lawyers will be chasing robot victims’ ambulances too.”

So while it might be cool to have a robot around the house, some businesses will take a lot of convincing that office droids are a good idea. Despite handling high-tech clients in the field of RFID tags, chip-and-PIN technology and intelligent textiles, Stephen Sidkin doesn’t think his legal practice, London-based Fox Williams, will have robots welcoming clients in its reception anytime soon. “If we decided to employ a robot to show people to the sofa in our reception area and offer them hot coffee, we’d need our heads examined,” he says.
Danger zone

However well a robot’s sensors allow it to perceive its environment, it still needs to be programmed with safe-path and task-planning strategies. One method roboticists are investigating to ensure this is called “danger indexing”, an idea hatched by Koji Ikuta and Makoto Nokata at Nagoya University in Japan. The idea is that every possible motion of a robot’s limbs, and the trajectory of the robot itself across a floor or stairs, is assigned a danger value. Only those movements and routes that pose no danger to people, pets or expensive furniture can be used.

In the Nagoya scheme, the potential force a robot could exert on a person if it were to hit them is used as the danger measure. It is calculated from factors such as the speed the robot can move, the distance it needs to go to carry out a task and its own mass.

Now Dana Kulić and Elizabeth Croft at the University of British Columbia in Canada have improved the method by adding real-time gesture recognition, so humans can interrupt the robot mid-path (Robotics and Autonomous Systems, vol 54, p 1). So if, for instance, you have changed your mind about a recent instruction,
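Stripped to its essentials, danger indexing is a score-and-filter loop: assign each candidate motion a danger value derived from the robot’s mass, speed and clearance, then discard any motion whose value exceeds a limit. The toy sketch below illustrates that loop only; the scoring formula, limit and names are invented stand-ins, not Ikuta and Nokata’s actual measure.

```python
# Illustrative toy version of "danger indexing": score each candidate motion
# by a crude proxy for the impact force the robot could exert, then keep only
# motions whose score stays under a limit. The formula is a made-up stand-in
# for the published danger measure, kept only to show the filtering idea.


def danger_index(mass_kg, speed_ms, distance_m):
    """Crude danger score: kinetic energy scaled up as clearance shrinks."""
    kinetic_j = 0.5 * mass_kg * speed_ms ** 2
    return kinetic_j / max(distance_m, 0.01)  # nearer people -> higher danger


def safe_motions(candidates, mass_kg, limit=5.0):
    """Filter (speed m/s, distance-to-person m) pairs by danger score."""
    return [(v, d) for v, d in candidates
            if danger_index(mass_kg, v, d) <= limit]


# A 50 kg robot: fast moves near a person are rejected; only the slow,
# close approach scores under the limit.
moves = [(1.5, 0.3), (0.2, 0.3), (1.5, 3.0)]
print(safe_motions(moves, mass_kg=50.0))  # -> [(0.2, 0.3)]
```

A planner built this way naturally slows the robot down as it nears people, since lowering speed is the easiest way to drag a motion’s score back under the limit.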