Tech industry self-regulates AI ethics in secret meetings
This morning I stumbled upon a New York Times article titled How Tech Giants Are Devising Real Ethics for Artificial Intelligence. The basic idea, along with my enormously enraged reaction to it, is perfectly captured in this one line:
… the basic intention is clear: to ensure that A.I. research is focused on benefiting people, not hurting them, according to four people involved in the creation of the industry partnership who are not authorized to speak about it publicly.
So we have no window into how these insiders – unnamed, but coming from enormously powerful platforms like Google, Amazon, Facebook, IBM, and Microsoft – think about benefit versus harm: who gets harmed, how you measure that, and so on.
That’s not good enough. This should be an open, public discussion.