Summary
The White House this morning unveiled what it’s colloquially calling an “AI Bill of Rights,” which aims to establish tenets around the ways AI algorithms should be deployed as well as guardrails on their applications. The framework responds to well-documented harms: models used in hospitals to inform patient treatments have later been found to be discriminatory, and hiring tools designed to screen job candidates have been shown to predominantly reject women in favor of men owing to the data on which the systems were trained. Still, these steps fall short of the EU’s regulation under development, which prohibits or curtails certain categories of AI deemed to have harmful potential.
Show Notes
The White House this morning unveiled what it’s colloquially calling an “AI Bill of Rights,” which aims to establish tenets around the ways AI algorithms should be deployed as well as guardrails on their applications.
The AI Bill of Rights calls for AI systems to be proven safe and effective through testing and consultation with stakeholders, as well as continuous monitoring once the systems are in production.
It explicitly calls out algorithmic discrimination, saying that AI systems should be designed to protect both communities and individuals from biased decision-making.
While the White House seeks to “lead by example” by aligning federal agencies’ own actions and derivative policies with the framework, private corporations aren’t beholden to the AI Bill of Rights.
Still, experts like Oren Etzioni, a co-founder of the Allen Institute for AI, believe that the White House guidelines will have some influence.