Is there a uniform set of moral laws, and if so, can we teach artificial intelligence those laws to keep it from harming us? This is the question explored in an original short film recently released by The Guardian.
In the film, the creators of an AI with general intelligence call in a moral philosopher to help them establish a set of moral guidelines for the AI to learn and follow—which proves to be no easy task.
Complex moral dilemmas often don’t have a clear-cut answer, and humans haven’t yet been able to translate ethics into a set of unambiguous rules. It’s questionable whether such a set of rules can even exist, as ethical problems often involve weighing factors against one another and seeing the situation from different angles.
So how are we going to teach the rules of ethics to artificial intelligence—and, by doing so, keep it from harming us?
To read more, see the full post at: This Is What Happens When We Debate Ethics in Front of Superintelligent AI - Singularity Hub