University of California Presidential Working Group on Artificial Intelligence Standing Council (AI Council)

NEW, September 2024: Please visit the University of California's Artificial Intelligence website for up-to-date information on the UC AI Council, council roster, training, tools, risk assessment guidance, links to AI communities throughout UC, and more.

On October 6, 2021, President Michael V. Drake adopted the Presidential Working Group on Artificial Intelligence’s Responsible AI Principles and related recommendations to guide UC’s development and use of AI in its operations. The recommendations seek to:

  1. Institutionalize the UC Responsible AI Principles in procurement, development, implementation, and monitoring of AI-enabled technologies deployed in UC services;
  2. Establish campus-level councils and support coordination across UC that will further the principles and guidance developed by the Working Group;
  3. Develop an AI risk and impact assessment strategy; and
  4. Document AI-enabled technologies in a public database.

In May 2022, President Drake established the UC Presidential Working Group on Artificial Intelligence Standing Council (AI Council) to assist in the implementation of the “UC Responsible AI Principles”.

List of all UC Responsible AI Principles

INDEX

AI Council Co-chairs

  1. Alex Bui
  2. Alexander Bustamante

AI Council Members

Charges

  1. AI Council Charge Letter
  2. Subcommittee Charges

Co-chairs

Alex Bui, PhD, Co-Chair
Director, Medical & Imaging Informatics Group
Director, Medical Informatics Home Area
Professor, Departments of Radiological Sciences, Bioengineering & Bioinformatics
David Geffen Chair in Informatics

Alex Bui received his PhD in Computer Science in 2000 and subsequently joined the UCLA faculty. He is now the Director of the Medical & Imaging Informatics (MII) group. His research includes informatics and data science for biomedical research and healthcare in areas related to distributed information architectures and mHealth; the methodological development, application, and evaluation of artificial intelligence (AI) methods, including machine and reinforcement learning; and data visualization. His work bridges contemporary computational approaches with the opportunities arising from the breadth of biomedical observations and the electronic health record (EHR), tackling the associated translational challenges. Dr. Bui has a long history of leading extramurally funded research, including awards from multiple National Institutes of Health (NIH) institutes (NCI, NLM, NINDS, NIBIB, etc.). He was Co-Director of the NIH Big Data to Knowledge (BD2K) Centers Coordination Center and Application Lead for the NSF-funded Expeditions in Computing Center for Domain-Specific Computing (CDSC), which explored cutting-edge hardware/software techniques for accelerating algorithms used in healthcare. He led the NIH-funded Los Angeles PRISMS Center, a U54 focused on mHealth informatics. He is now Director of UCLA’s Bridge2AI Coordination Center, a landmark NIH initiative to advance the use of AI/ML methods. Dr. Bui is Program Director of multiple NIH TL1/T15/T32 training programs at UCLA in the areas of biomedical informatics and data science; Director of the Medical Informatics Home Area in the Graduate Program in Biosciences; Co-Director of the Center for SMART Health; and Senior Associate Director for Informatics for UCLA’s Clinical and Translational Science Institute (CTSI). He also co-chairs the University of California (UC) AI Council.

Alexander Bustamante, Co-Chair
Senior Vice President and Chief Compliance and Audit Officer
University of California, Office of the President
Alexander.Bustamante@ucop.edu

Alexander A. Bustamante is the Senior Vice President and Chief Compliance and Audit Officer for the University of California system. He leads the Office of Ethics, Compliance and Audit Services and oversees the University's corporate compliance, investigative, and audit programs. Most recently, Mr. Bustamante and his team have dedicated significant effort to current and emerging compliance issues related to research security and emerging technologies (Foreign Influence | UCOP). His office also routinely conducts cyber-risk audits across the system to strengthen UC’s critical infrastructure, protect federally funded research, and safeguard UC’s large data sets used in operations and research, including for machine learning and artificial intelligence (e.g., Center for Data-driven Insights and Innovations | UCOP). As co-chair of the UC Presidential Artificial Intelligence Council, he and his team created guidelines for the use of AI applications within the UC system (Artificial Intelligence | UCOP). Prior to coming to the University of California, Mr. Bustamante served as the Inspector General for the Los Angeles Police Department, where he was responsible for providing independent oversight of the Department. Mr. Bustamante also served as an Assistant United States Attorney for the Central District of California from 2002 to 2011, where he received the United States Attorney General's Award for Exceptional Service, the Department of Justice’s highest award, for handling a landmark case involving the federal government’s first use of civil rights statutes to combat racially motivated gang violence against African Americans. Mr. Bustamante received his Juris Doctor degree from the George Washington University Law School and his Bachelor’s degree in Rhetoric from the University of California, Berkeley.

Subcommittees

Knowledge, Skills, and Awareness Subcommittee

The AI Council Subcommittee on Knowledge, Skills, and Awareness (KSA) is tasked with implementing the AI training strategy and recommendations outlined in the Presidential Working Group’s Final Report, issued in October 2021, as well as other initiatives determined by the AI Council in response to the rapidly evolving AI landscape. The KSA Subcommittee is charged with implementing and sustaining appropriate education, engagement, and learning programs that further the identification, creation, and delivery of best practices, models of use, and standards for the development, deployment, and use of AI across UC, in support of the UC mission of teaching, research, and public service.

Transparency Subcommittee

The AI Council Subcommittee on Transparency develops approaches to promote transparency to the University community and to the public about the ways in which AI is being used, or may be used, within the University of California. Transparency in the use of AI will enable the University to better evaluate potential risks and opportunities, study University experiences and outcomes, and determine subsequent initiatives, such as the development of policy on responsible AI use that promotes efficiency, transparency, civil liberties, and autonomy, and leads to equitable, positive outcomes.

Risk Management Subcommittee

The AI Council Subcommittee on Risk Management is tasked with developing a framework for assessing and managing risks associated with AI-enabled technologies. The Subcommittee seeks to identify the risks associated with the procurement, development, and deployment of AI-enabled technologies, including compliance, data privacy, bias, security, and ethical risks; interpret UC’s risk appetite with respect to those risks; design a framework for assessing and monitoring those risks; and, time permitting, pilot the framework on a select set of AI-enabled technologies.
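
The charge above describes the assessment framework only in general terms. As a purely illustrative aid, the sketch below shows one way an individual assessment could be recorded, using the five risk categories named in the charge and a simple risk-appetite threshold; the rating scale, threshold, class names, and example technology are assumptions made for illustration, not the Subcommittee's actual framework.

    from dataclasses import dataclass, field
    from enum import IntEnum


    class RiskLevel(IntEnum):
        """Illustrative ordinal rating scale (assumed, not UC's official scale)."""
        LOW = 1
        MODERATE = 2
        HIGH = 3


    # Risk categories named in the Subcommittee's charge.
    RISK_CATEGORIES = ("compliance", "data_privacy", "bias", "security", "ethical")


    @dataclass
    class AIRiskAssessment:
        """Hypothetical record of a risk assessment for one AI-enabled technology."""
        technology: str
        ratings: dict[str, RiskLevel] = field(default_factory=dict)

        def rate(self, category: str, level: RiskLevel) -> None:
            # Only accept the categories enumerated in the charge.
            if category not in RISK_CATEGORIES:
                raise ValueError(f"Unknown risk category: {category}")
            self.ratings[category] = level

        def exceeds_appetite(self, appetite: RiskLevel = RiskLevel.MODERATE) -> list[str]:
            """Return the categories whose rating exceeds the stated risk appetite."""
            return [c for c, lvl in self.ratings.items() if lvl > appetite]


    if __name__ == "__main__":
        assessment = AIRiskAssessment("Example chatbot for student services")
        assessment.rate("data_privacy", RiskLevel.HIGH)
        assessment.rate("bias", RiskLevel.MODERATE)
        assessment.rate("security", RiskLevel.LOW)
        print("Categories needing mitigation or escalation:", assessment.exceeds_appetite())

In practice, the Subcommittee's framework would define its own categories, rating scales, and escalation paths; the sketch simply makes concrete the relationship between identified risk categories, a stated risk appetite, and the monitoring step described in the charge.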