UPDATE: I don't know if any of the people involved in computer Go are also involved in machine ethics, but this piece is interesting in any case: "Why Asimov's Three Laws of Robotics Can't Protect Us."
UPDATE 3/29: A very interesting piece: "The Singularity Is Further Than It Appears," by Ramez Naam. Makes multiple worthwhile arguments to that effect, and here's one that deserves more notice than it tends to get (emphasis in original):
And, indeed, should Intel, or Google, or some other organization succeed in building a smarter-than-human AI, it won't immediately be smarter than the entire set of humans and computers that built it, particularly when you consider all the contributors to the hardware it runs on, the advances in photolithography techniques and metallurgy required to get there, and so on. Those efforts have taken tens of thousands of minds, if not hundreds of thousands. The first smarter-than-human AI won't come close to equaling them. And so, the first smarter-than-human mind won't take over the world. But it may find itself with good job offers to join one of those organizations.

Me: Decades ago, I read Steven Rose's book The Conscious Brain, which emphasized the idea that consciousness is in important ways a social phenomenon. This reflected Rose's affinity for Marxism, but that doesn't mean it's wrong. A lot of thinking about AI, including alarmism about it, fails to recognize that how smart an individual entity becomes isn't the whole story.