Governance, Risk, and Artificial Intelligence

  • Aaron Mannes, Culmen International LLC

Abstract

Artificial intelligence, whether embodied in robots or Internet of Things devices, or disembodied in intelligent agents or decision-support systems, can enrich the human experience. It will also fail and cause harms, including physical injury and financial loss as well as subtler harms such as instantiating human bias or undermining individual dignity. These failures could have a disproportionate impact because strange, new, and unpredictable dangers may lead to public discomfort and rejection of artificial intelligence. Two possible approaches to mitigating these risks are the hard power of regulating artificial intelligence, to ensure it is safe, and the soft power of risk communication, which engages the public and builds trust. These approaches are complementary, and both should be implemented as artificial intelligence becomes increasingly prevalent in daily life.

Published
2020-04-13
How to Cite
Mannes, A. (2020). Governance, Risk, and Artificial Intelligence. AI Magazine, 41(1), 61-69. https://doi.org/10.1609/aimag.v41i1.5200
Section
Special Topic Articles