Wednesday 14 June 2017

This backflipping noodle has a lot to teach us about AI safety

AI isn’t going to be a threat to humanity because it’s evil or cruel; it will be a threat because we haven’t properly explained what we want it to do. Consider philosopher Nick Bostrom’s classic “paperclip maximizer” thought experiment, in which an all-powerful AI is told, simply, “make paperclips.” The AI, unconstrained by any human morality or reason, does exactly that, eventually transforming all the resources on Earth into paperclips and wiping out our species in the process. As with any relationship, when talking to our computers, communication is key.

That’s why a new piece of research published yesterday by Google’s DeepMind and the Elon Musk-funded lab OpenAI is so interesting. It offers a simple way for humans to give feedback to an AI as it learns...
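The excerpt cuts off before the details, but the paper in question trains agents from human preferences: a person repeatedly picks the better of two short clips of the agent’s behavior (hence the backflipping noodle), and a reward model is fitted so that preferred clips score higher. A minimal sketch of that preference-learning step, using entirely hypothetical names and a toy linear reward model rather than anything from the paper itself, might look like this:

```python
# Sketch of preference-based reward learning, the idea behind the
# DeepMind/OpenAI paper: a human picks the better of two behavior clips,
# and a reward model is fit so the preferred clip scores higher.
# All names are hypothetical; this is not the paper's actual code.
import math

def predicted_reward(weights, observation):
    """Toy linear reward model over observation features."""
    return sum(w * x for w, x in zip(weights, observation))

def segment_return(weights, segment):
    """Total predicted reward over a short clip (list of observations)."""
    return sum(predicted_reward(weights, obs) for obs in segment)

def preference_loss(weights, clip_a, clip_b, human_prefers_a):
    """Bradley-Terry loss: push the model to agree with the human's pick."""
    r_a = segment_return(weights, clip_a)
    r_b = segment_return(weights, clip_b)
    p_a = 1.0 / (1.0 + math.exp(r_b - r_a))  # model's P(human prefers A)
    p = p_a if human_prefers_a else 1.0 - p_a
    return -math.log(max(p, 1e-12))

if __name__ == "__main__":
    # Two toy clips; the human prefers the one with a higher first feature.
    clip_a = [(1.0, 0.2), (0.9, 0.1)]
    clip_b = [(0.1, 0.8), (0.2, 0.9)]
    weights = [0.0, 0.0]
    # Crude finite-difference gradient descent on the preference loss.
    for _ in range(200):
        grad = []
        for i in range(len(weights)):
            bumped = list(weights)
            bumped[i] += 1e-4
            grad.append((preference_loss(bumped, clip_a, clip_b, True)
                         - preference_loss(weights, clip_a, clip_b, True)) / 1e-4)
        weights = [w - 0.5 * g for w, g in zip(weights, grad)]
    print("learned reward weights:", weights)  # first weight grows positive
```

The real system uses neural networks and deep reinforcement learning against the learned reward, but the core trick is the same: the human never writes a reward function. They just compare behaviors, and the loss turns those comparisons into a trainable objective.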

from https://www.theverge.com/2017/6/14/15792818/ai-safety-human-feedback-openai-deepmind

