If consumers don’t have confidence that the AI tools they interact with respect their privacy, are free of bias and discrimination, and don’t create security problems, then all the wonderful possibilities will not materialize. Nowhere is this more true than in the realm of national security and law enforcement.
I’ll give you a great example. Facial recognition technology is an area where there have been some horrific, inappropriate uses: taking grainy footage from a convenience store and using it to identify a Black man who had never even been in that state, who is then arrested for a crime he did not commit. (Editor’s note: Prabhakar refers to this story.) Unjustified arrests based on gross misuse of facial recognition technology must stop.
By contrast, when I go through security at the airport, they take my photo and compare it to my ID to make sure I am who I say I am. It’s a very narrow, specific application that matches my image to my ID, and the sign tells me (and our colleagues at DHS confirm this is actually the case) that they will delete the image. That is an effective and responsible use of this kind of automated technology. Appropriate, respectful, responsible: that’s where we need to go.
Were you surprised to see the AI safety bill vetoed in California?
I wasn’t. I followed the debate and knew there were strong opinions on both sides. What the bill’s opponents said, and I think correctly, was that it was simply impractical: it expressed a desire for how to assess safety, but in reality we just don’t know how to do these things. Nobody knows. It’s not a secret, it’s a mystery.
To me, this is a reminder that while all we want is to know how safe, effective, and trustworthy a model is, we actually have very limited ability to answer those questions. These are deep research questions, and a great example of the kind of public R&D that now needs to be conducted at a much deeper level.
Let’s talk about talent. Much of the recent national security memorandum on AI focused on how to help the right talent come to the United States from abroad to work on AI. Do you think we are handling this the right way?
This is an extremely important question. It’s the quintessential American story: people have come here over the centuries to build this country, and that’s truer than ever in science and technology. But we live in a different world now. I came here as a child because my parents emigrated from India in the early 1960s, and at that time the opportunities to move to many other parts of the world were very limited.