For my latest Discourse column (“We Really Need To ‘Have a Conversation’ About AI … or Do We?”), I discuss two commonly heard lines in tech policy circles.
1) “We need to have a conversation about the future of AI and the risks that it poses.”
2) “We should get a bunch of smart people in a room and figure this out.”
I note that, if you’ve read enough essays, books or social media posts about artificial intelligence (AI) and robotics, among other emerging technologies, then chances are you’ve stumbled on variants of these two arguments many times over. They are almost impossible to disagree with in theory, but when you start to investigate what they actually mean in practice, they are revealed to be largely meaningless rhetorical flourishes that threaten to hold up meaningful progress on the AI front. I go on to argue in my essay that:
I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with: (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and (b) the fact that people making such statements apparently have not spent much time investigating the remarkable number of very serious conversations that have already taken place, or are ongoing, about AI issues.
In fact, it may well be that we currently have too many conversations going on about AI issues and that the bigger problem is instead one of better coordinating the important lessons and best practices we have already learned from those conversations.
I then unpack each of those lines and explain in more detail what is wrong with them.