Speaking at an Information Technology Industry Council summit on Wednesday, Obernolte emphasized that sector-specific regulation can effectively guide the safe deployment of AI systems while allowing companies to continue researching and innovating.
“If you look at the risk management framework that NIST put out last year — which has been acknowledged as probably the most useful document for analyzing the potential risk of AI deployment that's been produced anywhere in the world — what the report makes clear is that the risks of deployment are highly contextual, so it matters very much what you're going to do with the AI when you evaluate what the risks are,” he said. “And that's incredibly important, because that means that something that's unacceptably risky in one context might be completely benign in another context.”
Without a federal framework, a bevy of bills have been introduced in state legislatures to tackle AI regulation. Obernolte said that too many states passing differing rules of the road for AI technologies will hinder the U.S. goal to further innovation in AI and machine learning.
“If we fail to take action in Congress, we are running the risk that all the states are going to get out ahead of us, as they have on digital bank accounts, and, in short order, we're going to have 50 different standards for what constitutes safe and trustworthy deployment of AI,” he said. “That's very …”