Bill Text: CA SB892 | 2023-2024 | Regular Session | Amended


Bill Title: Public contracts: automated decision systems: AI risk management standards.

Spectrum: Partisan Bill (Democrat 1-0)

Status: (Introduced) 2024-04-29 - April 29 hearing: Placed on APPR suspense file.


Amended IN Senate April 10, 2024
Amended IN Senate April 01, 2024

CALIFORNIA LEGISLATURE — 2023–2024 REGULAR SESSION

Senate Bill No. 892


Introduced by Senator Padilla

January 03, 2024


An act to add Section 12100.1 to the Public Contract Code, relating to public contracts.


LEGISLATIVE COUNSEL'S DIGEST


SB 892, as amended, Padilla. Public contracts: automated decision systems: AI risk management standards.
Existing law requires all contracts for the acquisition of information technology goods and services related to information technology projects, as defined, to be made by or under the supervision of the Department of Technology. Existing law requires all other contracts for the acquisition of information technology goods or services to be made by or under the supervision of the Department of General Services. Under existing law, both the Department of Technology and the Department of General Services are authorized to delegate their authority to another agency, as specified.
This bill would require the Department of Technology to develop and adopt regulations to create an artificial intelligence (AI) risk management standard, consistent with specified publications regarding AI risk management, and in accordance with the rulemaking provisions of the Administrative Procedure Act. The bill would require the AI risk management standard to include, among other things, a detailed risk assessment procedure for procuring automated decision systems (ADS), as defined, that analyzes specified characteristics of the ADS, methods for appropriate risk controls, as provided, and adverse incident monitoring procedures. The bill would require the department to collaborate with specified organizations to develop the AI risk management standard.
This bill would, commencing six months after the date on which the regulations described in the paragraph above are approved and final, prohibit a state agency from entering into a contract for an ADS, or any service that utilizes an ADS, unless the contract includes a clause that, among other things, provides a completed risk assessment of the relevant ADS, requires adherence to appropriate risk controls, and provides procedures for adverse incident monitoring.
Vote: MAJORITY   Appropriation: NO   Fiscal Committee: YES   Local Program: NO  

The people of the State of California do enact as follows:


SECTION 1. Section 12100.1 is added to the Public Contract Code, to read:

12100.1. (a) For purposes of this section, the following definitions apply:
(1) “Artificial intelligence” or “AI” means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.
(2) (A) “Automated decision system” or “ADS” means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.
(B) “Automated decision system” does not mean a spam email filter, firewall, antivirus software, identity and access management tool, calculator, database, dataset, or other compilation of data.
(3) “Department” means the Department of Technology.
(4) “High-risk automated decision system” or “high-risk ADS” means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice.
(b) The department shall develop and adopt regulations to create an AI risk management standard.
(1) The AI risk management standard shall be consistent with all of the following publications:
(A) The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, published by the White House Office of Science and Technology Policy in October 2022.
(B) The Artificial Intelligence Risk Management Framework (AI RMF 1.0), released by the National Institute of Standards and Technology (NIST) in January 2023.
(C) The Risk Management Framework for the Procurement of Artificial Intelligence (RMF PAIS 1.0), authored by the AI Procurement Lab and the Center for Inclusive Change in 2024.
(2) The AI risk management standard shall include all of the following:
(A) A detailed risk assessment procedure for procuring ADS that analyzes all of the following:
(i) Organizational and supply chain governance associated with the ADS.
(ii) The purpose and use of the ADS.
(iii) Any known potential misuses or abuses of the ADS.
(iv) The legality, traceability, and provenance of the data the ADS uses and the legality of the output of the ADS.
(v) The robustness, accuracy, and reliability of the ADS.
(vi) The interpretability and explainability of the ADS.
(B) Methods for appropriate risk controls between the state agency and ADS vendor, including, but not limited to, reducing the risk through various mitigation strategies, eliminating the risk, or sharing the risk.
(C) Adverse incident monitoring procedures.
(D) Identification and classification of prohibited use cases and applications of ADS that the state shall not procure.
(E) An analysis of how the use of high-risk ADS can impact vulnerable individuals and communities.
(3) To develop the AI risk management standard, the department shall collaborate with organizations that represent state and local government employees and industry experts, including, but not limited to, public trust and safety experts, community-based organizations, civil society groups, academic researchers, and research institutions focused on responsible AI procurement, design, and deployment.
(4) The department shall adopt regulations pursuant to this subdivision in accordance with the provisions of Chapter 3.5 (commencing with Section 11340) of Part 1 of Division 3 of Title 2 of the Government Code.
(c) Commencing six months after the date on which the regulations described in subdivision (b) are approved and final, a state agency shall not enter into a contract for an automated decision system, or any service that utilizes an automated decision system, unless the contract includes a clause that does all of the following:
(1) Provides a completed risk assessment of the relevant ADS.
(2) Requires the state agency or the ADS vendor, or both, to adhere to appropriate risk controls.
(3) Provides procedures for adverse incident monitoring.
(4) Requires authorization from the state agency before deployment of ADS upgrades and enhancements.
(5) Requires the state agency or the ADS vendor, or both, to provide notice to individuals who would likely be affected by the decisions or outcomes of the ADS, and information about how to appeal or opt out of ADS decisions or outcomes.
