We Need to Get All the Smart People in a Room & Have a Conversation

October 16, 2022

For my latest Discourse column (“We Really Need To ‘Have a Conversation’ About AI … or Do We?”), I discuss two commonly heard lines in tech policy circles.

1) “We need to have a conversation about the future of AI and the risks that it poses.”

2) “We should get a bunch of smart people in a room and figure this out.”

I note that, if you’ve read enough essays, books, or social media posts about artificial intelligence (AI) and robotics, among other emerging technologies, then chances are you’ve stumbled on variants of these two arguments many times over. They are almost impossible to disagree with in theory, but once you investigate what they actually mean in practice, they turn out to be largely meaningless rhetorical flourishes that threaten to hold up meaningful progress on the AI front. I go on to argue in my essay that:

I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with: (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and, (b) the fact that people who are making such statements apparently have not spent much time investigating the remarkable number of very serious conversations that have already taken place, or which are ongoing, about AI issues.

In fact, it very well could be the case that we have too many conversations going on currently about AI issues and that the bigger problem is instead one of better coordinating the important lessons and best practices that we have already learned from those conversations.

I then unpack each of those lines and explain in more detail what is wrong with them. One thing that always bugs me about the “we need to have a conversation” aphorism is that those uttering it absolutely refuse to be nailed down on the specifics, such as:

  1. What is the nature or goal of that conversation?
  2. Who is the “we” in this conversation?
  3. How is this conversation to be organized and managed?
  4. How do we know when the conversation is going on, or when it is sufficiently complete such that we can get on with things?
  5. And, most importantly, aren’t you implicitly suggesting that we should ban or limit the use of that technology until you (or the royal “we”) are somehow satisfied that the conversation has ended or yielded satisfactory answers?

The other commonly heard line, “We need to get a bunch of smart people in a room and figure this out,” can be equally infuriating, both because it lacks specifics (which people? what room? where and when? etc.) and because we have already had plenty of the Very Smartest People on these issues meeting in countless rooms across the globe for many years. In an earlier essay, I documented the astonishing growth of AI governance frameworks, ethical best practices and professional codes of conduct: “The amount of interest surrounding AI ethics and safety dwarfs all other fields and issues. I sincerely doubt that ever in human history has so much attention been devoted to any technology as early in its lifecycle as AI.”

I also note that, practically speaking, “the most important conversations society has about new technologies are those we have every day when we all interact with those new technologies and with one another. Wisdom is born from experiences, including activities and interactions involving risk and the possibility of mistakes. This is how progress happens.” And I conclude by noting how:

We won’t ever be able to “have a conversation” about a new technology that yields satisfactory answers for some critics precisely because the questions just multiply and evolve endlessly over time, and they can only be answered through ongoing societal interactions and problem-solving. But we shouldn’t stop life-enriching innovations from happening just because we don’t have all the answers beforehand.

Anyway, I invite you to head over to Discourse and read the entire essay. In the meantime, I propose we get all the smart people in a room and have a conversation about how these two lines came to dominate tech policy discussions before they end up doing real damage to human prosperity! It’s the ethical thing to do if you really care about the future.
