The widespread use of AI (Artificial Intelligence), from supervised learning models that find patterns in data to generative tools like ChatGPT, has swept through the enterprise, and the resulting frenzy has culminated in direct calls for AI technologies to be regulated. Recognizing an increasingly assertive, transformative, and potentially risky technology, President Joe Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence marks a new milestone in America’s AI strategy.


What Is the AI Executive Order? A Glimpse

President Biden’s executive order, issued on October 30, 2023, caps several years of effort by the White House to put guardrails on AI and ensure its safe, secure adoption and development. Still, Biden has made it clear that this is only a first step toward governing AI development, implementation, and regulation in the US, as the order seeks to protect consumers, reduce AI risk, and keep pace with the current rate of innovation.

Here are some notable aspects of the AI executive order:

Protecting Privacy Rights & Civil Liberties

A significant component of the executive order is the protection of American citizens’ privacy rights and civil liberties. The EO aims to mitigate privacy concerns and risks by ensuring that data collection and processing are safe, secure, and conducted only for lawful purposes.

The EO is designed to protect American citizens by encouraging the continued enforcement of consumer protection laws and providing safeguards against bias, discrimination, fraud, and other potential harms from using, interacting with, and purchasing AI products.

The EO also emphasizes the importance of implementing AI policies that advance equity and civil rights. The Attorney General will work with federal agencies and departments to address potential civil rights violations and discrimination arising from AI.

AI Governance & Protection Standards

Section 4.1(a) of the EO directs the Secretary of Commerce, acting through the National Institute of Standards and Technology (NIST), to establish industry guidelines and best practices for safe and secure AI systems, and to enable AI developers to test these systems to find vulnerabilities.

The executive order calls on various federal agencies and departments, from education to healthcare to national security, to adhere to emerging technical standards and regulations for the proper use and oversight of AI technologies. The order includes additional guidance on the responsible use of AI to protect the American public.

Data Privacy & Security

According to Section 9(b), the Secretary of Commerce must develop guidelines for agencies to evaluate the effectiveness of “differential-privacy-guarantee” protections. In the EO’s usage, a differential-privacy guarantee means protections that allow information about a group to be shared while provably limiting the improper access, use, or disclosure of personal information about particular individuals. Those responsible for data privacy and security within an organization should plan to review the Secretary of Commerce’s guidelines, which are due to be finalized within a year of the EO.
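To make the concept concrete, here is a minimal sketch of the Laplace mechanism, one standard way to provide a differential-privacy guarantee for a simple count query. The EO does not prescribe any particular mechanism, and the function and parameter names below are illustrative only.

```python
import numpy as np

def dp_count(records, epsilon=1.0):
    """Return a differentially private count of records.

    A count query has sensitivity 1 (adding or removing one person's
    record changes the result by at most 1), so Laplace noise with
    scale 1/epsilon provably limits what the released number reveals
    about any single individual.
    """
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise

# Example: release how many records contain a sensitive attribute.
# Smaller epsilon means more noise and a stronger privacy guarantee.
print(dp_count(["r1", "r2", "r3", "r4", "r5"], epsilon=0.5))
```

The key property is that the guarantee is mathematical rather than procedural: it holds regardless of what other data an attacker already has, which is why the EO singles it out as a protection worth measuring.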

The Defense Production Act

Notably, the order invokes the Defense Production Act, which requires organizations to notify the government of any AI model that may pose a serious risk to public health, safety, or national security. Organizations must share safety test results and risk assessments with the government to demonstrate that their AI models meet technical thresholds and standards.

The Departments of Energy and Homeland Security will assess the range of risks that AI models could pose, including specific threats such as terrorism and the production of weapons of mass destruction (nuclear and biological). DHS will also establish an AI Safety and Security Board to advise on the responsible use of AI in critical infrastructure.


Deepfakes

Another central area of concern is AI’s growing ability to produce deepfakes: synthetic text, images, audio, and video that are hard to distinguish from human-created content. That concern has only grown over the past few years, as deepfakes are seen as having the potential to swing elections and defraud consumers.

Section 4.5(c) of the executive order calls on the Department of Commerce to reduce the dangers of deepfakes by developing and providing best practices for detecting AI-generated content. The EO strongly encourages watermarking and labeling AI-generated content such as photos and videos.
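The EO leaves the specific watermarking scheme for Commerce to define. As an illustration of the simplest form of content labeling (embedding provenance metadata in an image file), here is a short sketch; the file names and tag values are hypothetical, and real-world watermarking embeds signals in the pixel data itself so they survive metadata stripping.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_generated(in_path: str, out_path: str) -> None:
    """Attach provenance labels to a PNG's text metadata.

    Note: metadata labels are trivially removable; production systems
    pair them with robust, pixel-level watermarks and cryptographic
    provenance standards such as C2PA.
    """
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")            # provenance flag
    metadata.add_text("generator", "example-model-v1")   # illustrative value
    image.save(out_path, pnginfo=metadata)

label_ai_generated("generated.png", "generated_labeled.png")
```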

Support of the Workforce

Section 6(b) of the EO focuses primarily on supporting American workers, ensuring AI improves workers’ lives and does not infringe on their rights or reduce overall job satisfaction. The Secretary of Labor will provide employers with guidelines for mitigating AI’s harmful effects on employees.

Fair Market & Growth

While the EO centers on security and safety mandates, it also emphasizes the future: innovation and technological growth. The executive order aims to bolster America’s standing in AI on the global stage by fostering innovation and collaboration, and it includes provisions to facilitate AI development, including policies to attract foreign talent, so the U.S. can maintain its competitive edge.

The EO May Need Refinement

There is cautious optimism among the many lawmakers, civil rights organizations, industry groups, and others who have dug into the 111-page executive order. The consensus is that the order has limitations: it’s a positive step, but only the beginning.

Here are some areas that need improvement:

AI Fairness & Bias

There is an ongoing debate focused on AI fairness and bias. Many civil rights organizations feel that the order doesn’t go far enough to address real-world issues resulting from AI models – especially concerning penalties when AI systems impact or harm vulnerable and marginalized groups.

Training AI to Limit Harm

Many agree with the executive order’s values concerning privacy, security, trust, and safety. However, critics want more attention paid to how AI models are trained and developed, so that harm is minimized before any AI system is released rather than addressed after the fact. Many are calling on Congress to pass laws that build in protections prioritizing fairness up front. It’s an opportunity for Congress to step up and create solid standards for bias, risk, and consumer protection.

Innovation

On the other side of the coin, many industry leaders worry that the EO may stifle innovation across growing sectors. The reporting requirements for large AI models are at the center of the debate: many feel they over-police open AI systems and will deter new tech companies from entering the marketplace. Critics also point to the significant expansion of federal power over innovation and the future of AI development.

Limited Scope

The executive order is limited in how far it can go to preserve consumers’ data privacy, mainly because the U.S. still lacks a comprehensive federal data privacy law. Additionally, because the EO focuses on improving how the government itself administers and governs AI, much of it doesn’t directly apply to the private sector.

How BigID Can Help Organizations Comply with New and Emerging AI Regulations

The AI Executive Order is a significant step toward harnessing the potential of AI while addressing its specific challenges. It reflects the U.S. government’s commitment to driving innovation, economic growth, and national security while ensuring responsible and ethical AI development and deployment.

The future of AI in America, shaped by this executive order, holds great promise for society. Here at BigID, we understand that AI is meant to drive efficiency, increase innovation, and enable automation, but that starts with responsible adoption, ethical standards, and proper AI governance.

Generative AI and LLMs have introduced new risks around large volumes of unstructured data, leading to data leaks, breaches, and non-compliance. Organizations trying to innovate and accelerate AI adoption need to know whether the data their AI is trained on is safe to use and aligns with business policies and regulations.
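As a deliberately simplified sketch of what that screening involves (not BigID’s actual implementation), the snippet below flags common PII patterns in unstructured text before it enters a training set. Real discovery tools rely on trained classifiers and validation logic well beyond these illustrative regexes.

```python
import re

# Illustrative patterns only; production-grade discovery uses
# classifiers, context, and validation far beyond simple regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_document(text: str) -> dict:
    """Flag potential PII in unstructured text before training on it."""
    return {label: pat.findall(text) for label, pat in PII_PATTERNS.items()}

sample = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(scan_document(sample))
# {'email': ['jane.doe@example.com'], 'ssn': [], 'phone': ['555-867-5309']}
```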

BigID can find, catalog, and govern unstructured data for LLMs and conversational AI, which is the cornerstone of secure and responsible AI adoption. With BigID, organizations can better manage, protect, and govern AI with security protocols that reduce risk: enabling zero trust, mitigating insider risk, and securing unstructured data across the entire data landscape.

Schedule a demo with our experts to see how BigID can help your organization govern AI and reduce risk.