Beyond I, Robot: Ethics, Artificial Intelligence, and the Digital Age (EventID=114125)
Published at : October 18, 2021
Connect with the House Financial Services Committee
Get the latest news: https://financialservices.house.gov/
Follow us on Facebook: https://www.facebook.com/HouseFinancialCmte
Follow us on Twitter: https://twitter.com/FSCDems
___________________________________
On Wednesday, October 13, 2021, at 12:00 p.m. (ET), Task Force on Artificial Intelligence Chairman Foster and Ranking Member Gonzalez will host a virtual hearing entitled, “Beyond I, Robot: Ethics, Artificial Intelligence, and the Digital Age.”
- - - - - - - -
Witnesses for this one-panel hearing will be:
• Meredith Broussard, Associate Professor, Arthur L. Carter Journalism Institute of New York University
• Meg King, Director, Science and Technology Innovation Program, The Wilson Center
• Miriam Vogel, President and CEO, EqualAI
• Jeffery Yong, Principal Advisor, Financial Stability Institute, Bank for International Settlements
• Aaron Cooper, Vice President for Global Policy, BSA – The Software Alliance
Overview
Isaac Asimov’s three laws of robotics, popularized in his short story collection “I, Robot,” outline the ethical guidelines intelligent machines are bound to follow – 1) a robot may not harm a human, or through inaction allow a human to come to harm; 2) a robot must obey human orders unless doing so would conflict with the first law; and 3) a robot must protect its own existence as long as this does not conflict with the first two laws. However, these laws may be a poor guide for our current age of Artificial Intelligence (AI) technologies. AI can broadly be thought of as computerized systems that work and react in ways commonly thought to require intelligence, such as learning, solving problems, and achieving goals under uncertain and varying conditions. The field encompasses a range of methodologies and application areas, including machine learning (ML), predictive modeling, and natural language processing (NLP). These technologies are often designed to optimize a particular objective function but may not account for unintended harms. In financial services, these systems often tackle complex problems in real-world situations and offer new tools, products, and services, from businesses to consumers, including credit underwriting, customer service, and cybersecurity. Using AI irresponsibly, or deploying systems that do not perform as intended, could result in financial market instability or unintended discrimination against protected groups. Consequently, there is growing concern over the extent to which AI’s effects are fully known or understood when a system is implemented.
AI offers a broad range of potential benefits, including faster data analysis and the ability to synthesize large datasets at a rate that could not possibly be achieved through human analysis. However, complex AI calculations can produce undesirable consequences as well. The Task Force on Artificial Intelligence has examined concerns regarding AI usage, including how human-centered AI can address systemic racism and how AI can affect jobs in financial services. Building on these prior hearings, this hearing will consider ethical frameworks to assess the potential benefits and harms that AI technologies could cause in society, including in highly consequential decision areas such as financial services and housing. Additionally, this hearing will examine how domestic and international frameworks on AI ethics are being considered, and the ethical implications of new technologies, such as predictive modeling.
Domestic and International Ethical AI Proposals from Government Entities
The use of AI has become widespread worldwide, and the resulting ethical concerns have spurred conversation among policymakers about the need to develop ethical AI frameworks that could provide guardrails to ensure that new AI technologies benefit humankind. AI’s ethical principles vary between frameworks, but may include fairness, accountability, transparency, justice, security, and privacy. Often, these principles are summarized as good AI governance. Still, how these principles are interpreted or achieved, such as whether to allow third-party assessments and audits of AI systems, is subject to debate. In many cases, AI development firms will employ a cost/benefit calculation in an attempt to fully understand the implications of a particular AI system before deployment. These analyses consider the extent to which the AI system may produce unintended or tangential consequences that could end up harming end-users in the future.
Ethics and good governance are widely viewed as essential elements of trustworthy AI systems. In the U.S., a variety of federal entities are working on standards and frameworks for AI, including considerations of ethics, bias, and explainability. For example, in March 2021, the Office of the Comptroller of the Currency, the Federal Reserve System, the Federal Deposit Insurance....
Hearing page: https://financialservices.house.gov/calendar/eventsingle.aspx?EventID=408495