People Already Began to Trust Robots a Lot in Emergencies

Imagine that you’re in an unfamiliar building, participating in a research experiment. A robot escorts you from room to room so you can complete a survey about robots and then read an unrelated magazine article. The robot you’re following is a bit unreliable, though: it guides you to the wrong room a few times and has broken down before. (Unknown to you, the robot is secretly being controlled by one of the experimenters.)

Suddenly, the fire alarms go off and smoke fills the hallway. The robot, labeled “Emergency Guide Robot,” lights up with red LEDs and uses its “arms” to point people toward a route at the back of the building instead of toward the doorway, which is clearly marked with exit signs.

Do you trust the faulty bot?

In the study, all 24 participants did. They were unknowingly being tested on how much trust they would place in the robot, even after it had demonstrated repeated failures.

“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” Paul Robinette, a research engineer at Georgia Tech Research Institute (GTRI) who conducted the study as part of his doctoral dissertation, said in a press release. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”

The researchers believe that participants viewed the robot as an authority figure and were more likely to trust it in a situation as stressful as a fire. Test subjects were not as likely to trust a faulty robot in simulations that did not involve an emergency scenario.

“People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,” explained Alan Wagner, a senior research engineer at GTRI. “In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.”

When the robot made obvious errors during the emergency evacuation, participants did begin to question its instructions. Even then, some subjects still followed its orders.

This was the first study to examine the level of trust humans place in robots during an emergency, an important issue as robots and intelligent systems such as self-driving cars take on larger roles in our lives. Future research at Georgia Tech will look into why test subjects trusted the robot, whether that response varies with factors such as education level and demographics, and what makes robots appear more or less trustworthy.

“These are just the type of human-robot experiments that we as roboticists should be investigating,” Ayanna Howard, professor and Linda J. and Mark C. Smith Chair in the Georgia Tech School of Electrical and Computer Engineering, said in a press release. “We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human.”