Enculturing AI

Recently read:

"More than 70 years ago, Isaac Asimov dreamed up his three laws of robotics, which insisted, above all, that “a robot may not injure a human being or, through inaction, allow a human being to come to harm”. Now, after Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race”, two academics have come up with a way of teaching ethics to computers: telling them stories."

Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology have just unveiled Quixote, a prototype system that is able to learn social conventions from simple stories. 
(...)
A simple version of a story could be about going to get prescription medicine from a chemist, laying out what a human would typically do and encounter in this situation. An AI (artificial intelligence) given the task of picking up a prescription for a human could, variously, rob the chemist and run, or be polite and wait in line. Robbing would be the fastest way to accomplish its goal, but Quixote learns that it will be rewarded if it acts like the protagonist in the story.
(...)
“As the use of AI becomes more prevalent in our society, and as AI becomes more capable, the consequences of their actions become more significant. Giving AIs the ability to read and understand stories may be the most expedient means of enculturing [them] so that they can better integrate themselves into human societies and contribute to our overall wellbeing,” they conclude.
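The reward idea in the pharmacy example can be illustrated with a toy sketch. This is not the authors' actual Quixote system; it is a minimal, hypothetical Q-learning loop in which the reward signal stands in for "acting like the protagonist of the story". The action names and reward values are invented for illustration.

```python
import random

# Actions the agent can take at the chemist: "rob_and_run" is the
# fastest way to get the medicine, "wait_in_line" matches what the
# story's protagonist does.
ACTIONS = ["rob_and_run", "wait_in_line"]

# Hypothetical story-derived reward signal: protagonist-like behaviour
# is rewarded, deviations are penalised (the numbers are assumptions).
STORY_REWARD = {"rob_and_run": -10.0, "wait_in_line": +5.0}

def train(episodes=500, lr=0.1, eps=0.2, seed=0):
    """One-step Q-learning over the toy pharmacy task."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit, sometimes explore.
        if rng.random() < eps:
            action = rng.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        reward = STORY_REWARD[action]
        # Standard incremental update toward the observed reward.
        q[action] += lr * (reward - q[action])
    return q

q = train()
print(max(q, key=q.get))  # the learned preference
```

Even though robbing would be "faster" in the fiction, the story-shaped reward makes the agent converge on the polite behaviour, which is the intuition behind Quixote's approach.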

Read the full article on the Guardian