Someone to watch over AI and keep it honest

News – The general public does not need to know how Artificial Intelligence works in order to trust it. They need to know that someone with the necessary skills is examining AI, and has the authority to impose sanctions if it causes, or is likely to cause, harm.

Dr Bran Knowles, senior lecturer in data science at Lancaster University, said: “I’m certain that the public cannot determine the trustworthiness of individual AIs … but we don’t need them to do this. It’s not their responsibility to keep AI honest.”

Dr Knowles will present (8 March) a research paper, ‘The Sanction of Authority: Promoting Public Trust in AI’, at the ACM Conference on Fairness, Accountability and Transparency (ACM FAccT).

The paper is co-authored by John T. Richards, of IBM’s T.J. Watson Research Center in Yorktown Heights, New York.

The general public, the paper notes, is often distrustful of AI, a distrust that stems both from the way AI has been portrayed over the years and from a growing awareness that there is little meaningful oversight of it.

The authors argue that greater transparency and more accessible explanations of how AI systems work, often seen as a way to increase trust, do not address the public’s concerns.

A ‘regulatory ecosystem’, they say, is the only way AI can be made meaningfully accountable to the public and thereby earn its trust.

“The public does not routinely concern itself with the trustworthiness of food, aviation and pharmaceuticals, because it trusts that there is a system in place which regulates these things and punishes any breach of safety protocols,” said Dr Richards.

And, adds Dr Knowles: “Rather than asking the public to acquire the skills needed to make informed decisions about which AIs deserve their trust, the public needs the same guarantee that any AI it encounters will not do anything that could harm it.”

She stresses the critical role that AI documentation plays in enabling this trustworthy regulatory ecosystem. As an example, the paper discusses work by IBM on AI FactSheets, documentation designed to capture key facts about an AI system’s development and testing.

However, while such documentation can provide the information that internal auditors and external regulators need to assess compliance with emerging frameworks for trustworthy AI, Dr Knowles cautions against relying on it to directly foster public trust.

“If we do not recognize that the burden of overseeing the trustworthiness of AI must lie with highly skilled regulators, there is a good chance that the future of AI documentation becomes yet another terms-and-conditions-style consent mechanism – something that no one really reads or understands,” she says.

The paper calls for AI documentation to be properly understood as a means of empowering specialists to assess trustworthiness.

“AI has tangible consequences in our world that affect real people, and we need genuine accountability to ensure that the AI that permeates our world is helping to make that world better,” says Dr Knowles.

ACM FAccT is a computer science conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.

###