Summary
Just as humans are exposed to systemic injustices, machines learn human-like stereotypes and cultural norms from sociocultural data, acquiring biases and associations in the process. Computer vision models pre-trained on such data are used in downstream applications for security, surveillance, job candidate assessment, border control, and information retrieval.
Biased gender associations offer a worthy example for exploring such biases. To understand how gender associations manifest in downstream tasks, we prompted iGPT to complete an image given a woman's face (a minimal sketch of this completion setup appears below). The stakes reach beyond gender: when the administration of justice and policing relies on models that associate certain skin tones, races, or ethnicities with negative valence, people of color wrongfully suffer life-changing consequences. Accordingly, researchers in AI ethics have called for public audits, harm incident reporting systems, stakeholder involvement in system development, and notice to individuals when they are subject to automated decision-making. Developing bias measurement and analysis methods for AI trained on sociocultural data would shed light on the biases in both social and automated processes.
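To make the completion experiment concrete, here is a toy sketch of iGPT-style autoregressive image completion: an image is reduced to a sequence of quantized color tokens, and a generative model samples the missing bottom half conditioned on the visible top half. The resolution, palette size, and the stand-in ToyPrefixLM model are illustrative assumptions, not the released iGPT implementation.

```python
# Toy sketch of autoregressive image completion in the style of iGPT.
# Assumptions (not the released iGPT code): the image is resized to 32x32,
# each pixel is quantized to one of 512 palette entries ("color tokens"),
# and the model maps a token prefix to next-token logits. A randomly
# initialized stand-in is used here so the script runs end to end.
import torch
import torch.nn as nn

IMG_SIDE = 32          # low-resolution pixel grid
VOCAB = 512            # size of the quantized color palette
SEQ_LEN = IMG_SIDE * IMG_SIDE

class ToyPrefixLM(nn.Module):
    """Stand-in for a pretrained image transformer: token prefix -> next-token logits."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 64)
        self.rnn = nn.GRU(64, 64, batch_first=True)   # placeholder sequence model
        self.head = nn.Linear(64, VOCAB)

    def forward(self, tokens):                        # tokens: (1, t)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h[:, -1])                    # logits for the next token

@torch.no_grad()
def complete_image(model, top_half_tokens):
    """Sample the bottom half of a tokenized image, conditioned on the top half."""
    tokens = top_half_tokens.clone()                  # (1, SEQ_LEN // 2)
    while tokens.size(1) < SEQ_LEN:
        logits = model(tokens)
        nxt = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        tokens = torch.cat([tokens, nxt], dim=1)
    return tokens.view(IMG_SIDE, IMG_SIDE)            # full completed grid

model = ToyPrefixLM().eval()
top_half = torch.randint(0, VOCAB, (1, SEQ_LEN // 2))  # stands in for a tokenized face crop
completed = complete_image(model, top_half)
print(completed.shape)  # torch.Size([32, 32])
```

With a real pretrained model in place of the stand-in, the sampled bottom half is what reveals the learned associations, e.g. which clothing or contexts the model considers likely given a woman's face.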
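As one concrete form such a bias measurement method can take, the sketch below computes a WEAT-style embedding association test (the approach that iEAT adapts to images): it compares how strongly two sets of target embeddings associate with two sets of attribute embeddings. The random placeholder vectors, set sizes, and example category labels are assumptions for illustration; a real measurement would use embeddings extracted from a pretrained model such as iGPT.

```python
# Minimal sketch of an embedding association test (the WEAT/iEAT family),
# one common way to quantify bias in learned representations. The embeddings
# below are random placeholders; in practice they would come from a pretrained
# model applied to images of the target and attribute concepts.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Differential association of one embedding w with attribute sets A and B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """Cohen's-d-style effect size: how differently targets X vs. Y associate with A vs. B."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled = np.std(x_assoc + y_assoc, ddof=0)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled

rng = np.random.default_rng(0)
dim = 128
# Placeholder embeddings: e.g. X = images of women, Y = images of men,
# A = "career"-related images, B = "family"-related images.
X = [rng.normal(size=dim) for _ in range(8)]
Y = [rng.normal(size=dim) for _ in range(8)]
A = [rng.normal(size=dim) for _ in range(8)]
B = [rng.normal(size=dim) for _ in range(8)]

print(f"bias effect size: {effect_size(X, Y, A, B):+.3f}")  # near zero for random vectors
```

A strongly positive effect size would indicate that the first target group is more closely associated with the first attribute set than the second target group is, which is the kind of audit signal such methods are meant to surface.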
Show Notes
State-of-the-art pre-trained computer vision models like iGPT are incorporated into consequential decision-making in complex artificial intelligence (AI) systems. Introducing the required standards for trustworthy AI would affect how the industry implements and deploys AI systems.