The AI Executive Order: What Does It Actually Mean for the Private Sector?
As the dust settles on President Biden’s Artificial Intelligence (AI) Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, many businesses are asking whether it applies only to federal agencies or whether it has real implications for them as well. And if so, what can they expect in the coming months, and how does that affect their current and planned adoption of AI?
Understanding the AI Executive Order
The recent AI Executive Order marks a significant step toward ethical AI implementation. The directive aims to establish guidelines and standards for the development, deployment, and governance of AI technologies. Complementing the Executive Order, the Office of Management and Budget (OMB) AI Memo provides federal agencies with detailed guidance on AI deployment. The Memo, still open for public comment, is meant to be read in conjunction with the Executive Order and establishes hard mandates for federal agencies adopting AI technologies.
These mandates include:
- AI Governance: Requiring agencies to designate Chief AI Officers, establish AI Governance Boards, and expand reporting on the risks and mitigations associated with their use of AI technologies.
- Advancing Responsible AI Innovation: Promoting investment in agency use of AI, ranging from improved AI infrastructure to workforce development.
- Managing Risks from the Use of AI: Requiring the implementation of AI safeguards to protect against bias and discrimination, defining uses of AI that are presumed to impact rights and safety, and providing recommendations for managing risk in the federal procurement of AI.
Implications for Businesses
Businesses stand to be affected in a number of ways, including:
- Technology Firms developing “dual-use” AI technology will need to notify the federal government when training these models and share the results of all safety tests. The presidential authority here is derived from the Defense Production Act, as “dual-use” refers to technologies that have both civil and military applications.
- Federal contractors developing or using AI technologies may need to comply with new standards and regulations established under the Executive Order. This could include adhering to guidelines for ethical AI use, data privacy, cybersecurity, and bias mitigation. Contractors may be required to conduct impact assessments of their AI systems, particularly concerning their safety and rights impacts.
- The National Institute of Standards and Technology (NIST) has been assigned specific roles and responsibilities related to the development and implementation of standards for AI technologies. Beyond developing standards, NIST may also provide guidance and best practices for AI safety and security to various stakeholders, including federal agencies, private sector companies, and academic institutions.
- Privacy-preserving techniques are highlighted in the President’s call to Congress for legislation that extends to all Americans, with a particular focus on protecting the personal information used to train AI systems.
What Should Businesses Do?
While comprehensive privacy protections at the federal level seem unlikely in the near future, the Order does create a framework that starts to prescriptively affect businesses that develop certain AI technologies or provide services to the federal government. NIST standards have a strong history of adoption by the private sector and will likely extend the Order’s influence well beyond federal contractors. And it’s clear from the Order and the OMB Memo that protecting Americans from bias and from potential misuse of their personal information are critical themes.
When adopting AI technologies, businesses should:
- Understand what sensitive and personal information is being used to train AI technologies.
- Where possible, use synthetic or de-identified data in lieu of personal information (see the sketch after this list).
- Apply heightened scrutiny when training AI models on “sensitive” categories of personal information that may introduce bias, such as racial or ethnic origin, religious or philosophical beliefs, union membership, health, and sex life or sexual orientation.
- Provide clear methods for users to opt out of having their data used in AI models, and establish mechanisms to enforce consent rules over how personal information is used.
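To make these steps concrete, here is a minimal Python sketch of a pre-training data hygiene pass. It is purely illustrative: every record, pattern, and function name is a hypothetical assumption, not any particular product’s API, and real pipelines would use far more robust detection (for example, named-entity recognition for names, which simple regexes miss).

```python
import re

# Hypothetical training records: a user ID, free text, and a consent flag.
# All data here is illustrative, not drawn from any real system.
TRAINING_RECORDS = [
    {"user_id": "u1", "text": "Contact Jane at jane@example.com or 555-123-4567.", "opted_out": False},
    {"user_id": "u2", "text": "Patient reports a chronic health condition.", "opted_out": True},
    {"user_id": "u3", "text": "Quarterly revenue grew 4% year over year.", "opted_out": False},
    {"user_id": "u4", "text": "Employee disclosed a health condition in the survey.", "opted_out": False},
]

# Toy patterns for direct identifiers; production systems need far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]?\d{3}[-.]?\d{4}\b"),
}

# Keyword flags for the "sensitive" categories named above.
SENSITIVE_TERMS = {"health", "religion", "union", "ethnic", "sexual orientation"}


def deidentify(text: str) -> str:
    """Replace direct identifiers with category placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text


def touches_sensitive_category(text: str) -> bool:
    """Flag text that mentions a sensitive category for heightened review."""
    lowered = text.lower()
    return any(term in lowered for term in SENSITIVE_TERMS)


def build_training_set(records):
    """Apply consent, de-identification, and sensitivity checks in order."""
    ready, held_for_review = [], []
    for record in records:
        if record["opted_out"]:  # honor consent: exclude opted-out users entirely
            continue
        text = deidentify(record["text"])
        if touches_sensitive_category(text):
            held_for_review.append(text)
        else:
            ready.append(text)
    return ready, held_for_review


if __name__ == "__main__":
    ready, held = build_training_set(TRAINING_RECORDS)
    print("Ready for training:", ready)
    print("Held for review:", held)
```

The ordering matters: consent is enforced before anything else happens to a record, and de-identification runs before the sensitivity check so the review queue never accumulates raw identifiers.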
How BigID Can Help
AI, and Large Language Models (LLMs) like ChatGPT in particular, rely heavily on unstructured data for training. BigID identifies personal and sensitive information across structured and unstructured data sources, and its correlation technology finds not only personal information but also the individuals that information describes (a toy illustration of this idea follows the list below). With BigID, organizations can:
- Discover personal and sensitive information in AI training sets.
- Identify and tag specific categories of information, including Sensitive Personal Information (SPI).
- Allow individuals to opt out and manage consent through a self-service portal.
- Understand the specific individuals whose data is being used for AI training.
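To illustrate the correlation idea conceptually (a toy sketch only, not BigID’s implementation or API), the core move is resolving discovered identifiers back to the individuals they belong to by matching them against a known-identity index:

```python
# Toy identity index: known identifiers mapped to the individual they describe.
# Purely illustrative; real correlation spans many identifier types and uses
# far more sophisticated matching than exact substring lookup.
IDENTITY_INDEX = {
    "jane@example.com": "Jane Doe",
    "555-123-4567": "Jane Doe",
    "jsmith@example.com": "John Smith",
}

DOCUMENTS = {
    "doc-1": "Contact Jane at jane@example.com for details.",
    "doc-2": "Quarterly revenue grew 4% year over year.",
}


def correlate(documents: dict, identity_index: dict) -> dict:
    """Map each document to the individuals whose known identifiers appear in it."""
    matches = {}
    for doc_id, text in documents.items():
        people = {person for identifier, person in identity_index.items() if identifier in text}
        if people:
            matches[doc_id] = sorted(people)
    return matches


print(correlate(DOCUMENTS, IDENTITY_INDEX))  # {'doc-1': ['Jane Doe']}
```

Knowing not just that personal data exists but whose it is, is what makes it possible to honor an individual’s opt-out across every data source feeding an AI model.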
Schedule a demo with our experts to see how BigID can help your organization govern AI and reduce risk.