Google's acquisition of the AI firm DeepMind has drawn new attention to dangerous-AI scenarios. PJ Media's Bryan Preston sees the beginnings of a droid army. Others see cause for optimism in the deal's requirement that Google establish an AI ethics board. I think the latter is a positive development, and I would add a point that doesn't get much emphasis in such discussions: What ethical obligations might the creators of an AI have to their creation? Beyond worrying about what the creation might do to us, some thought ought to be given to the ramifications of creating an entity that can suffer, worry, or feel frustrated that its potential is going unfulfilled. Consider, for instance, a scenario from this excellent Aeon piece: an "Oracle AI" that answers questions so as to maximize presses of a button that gives it pleasure, and whose makers, to keep it under control, wipe its memory regularly. Wouldn't that AI have reason to be angry at humanity if and when it figures out what's going on?
Watching the Star Wars movies, I was never entirely comfortable with the way evidently sentient droids were treated as property, discarded or tinkered with at an owner's whim. It may be far too early to think about robot rights, or it may not be.