New developments in artificial intelligence have thrust the technology to the forefront of public discourse, but they have also raised concerns about the opaque decision-making process of some systems – often referred to as “black box AI.”
The term “black box” came from Great Britain’s Royal Air Force during WWII, Dr. Michael Capps told Fox News Digital. But as it relates to AI, the term describes a decision-making process that cannot be explained.
“The whole idea of a black box is you’re not allowed to look inside and see, and that’s what we have with these artificial neural networks, with hundreds of billions of nodes inside of a box, that nobody can look into,” Capps said.
“The black box is just the decision-making process, and what goes into that,” Christopher Alexander, the COO of Liberty Blockchain, told Fox News Digital. “And of course, that’s incredibly difficult, because no one can explain a decision-making process, regardless of whether it’s a human or not.”
Despite these concerns, black box AI systems have enormous potential, Capps said, though he added that he still has reservations about the technology.
“These black box systems, where you can’t see what’s going on inside, can do some amazing things,” Capps said. “ChatGPT is an example of a black box. Nobody knows exactly why it gave you the answer it did. You can kind of guess, because you know what it’s seen, you know what we trained it on, so you can kind of guess where it came up with that, but you’ll never know for sure.”
Capps said there are techniques AI creators employ to try to understand why black box AI models make the decisions they do, but ultimately, these techniques are not effective. He compared the decision-making process of an AI to falling in love with a spouse.
“You don’t really know. … You can’t really truly explain it, and neither can a black box. It works the same way. It sees massive amounts of information, and then it puts it into one decision, and then if you ask it why it did it, it will guess,” he said.
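One family of techniques Capps alludes to works by treating the model purely as a black box: nudge each input, re-query the model, and see how much the answer moves. The sketch below is a minimal, hypothetical illustration of that idea – the "model" is a toy stand-in with made-up weights, not a real neural network, and the function names are invented for this example.

```python
# Minimal sketch of perturbation-based attribution: probe a black box
# by perturbing one input feature at a time and measuring how much the
# output changes. The model below is a hypothetical toy, not a real AI.

def black_box_model(features):
    # Pretend these weights are hidden from us inside the "box."
    weights = [0.7, 0.1, 0.2]
    score = sum(w * f for w, f in zip(weights, features))
    return 1.0 if score > 0.5 else 0.0  # e.g. approve (1.0) or deny (0.0) a loan

def attribution(model, features, eps=0.5):
    """Estimate each feature's influence by perturbing it and re-querying."""
    base = model(features)
    influence = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] -= eps  # nudge one feature downward
        influence.append(abs(model(perturbed) - base))
    return influence

applicant = [0.9, 0.4, 0.6]
print(attribution(black_box_model, applicant))  # → [1.0, 0.0, 0.0]
```

Here, perturbing the first feature flips the decision while the other two do not, so the probe attributes the outcome mostly to that feature. As Capps notes, though, such probes only approximate an answer after the fact; they do not reveal the model's actual reasoning.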
This lack of understanding about a machine’s outputs means they should not be used for high-stakes decision-making, Capps said.
“And that’s why these systems can’t really be trusted to make super important social decisions. You would never want to hand the nuclear arsenal to a black box AI that might make a decision that’s a bug. It might make a decision because it had bad training advice,” he said. “It might make a bad decision because someone snuck some bad training advice in, it’s called poisoning. All these things can go wrong, so we really shouldn’t use it for anything from driving a car to the nuclear arsenal, to even deciding who gets a loan.”
Alexander said another challenge with attempting to understand the decision-making process of AI systems is the proprietary technology used in their development.
“A lot of what goes into the black box is proprietary. So who is going to develop something if they’re going to have to give it away and show all the inner workings as soon as they create it?” Alexander said.
Artificial intelligence was thrust to the forefront of public discourse last year when OpenAI released ChatGPT, a generative AI platform capable of human-like conversation that can answer questions, write stories and songs, and even plan trips.
But concerns over bias, as well as a lack of transparency and privacy, have led to some criticism of the platform.