AI Governance is increasingly adopted by organizations that use Artificial Intelligence (AI) to make automated, repeatable decisions. In recent business applications, large volumes of data help answer repeatable questions such as: "Who gets approved for a bank loan?" at a mortgage company. Another application is an experimental hiring tool at Amazon, which leveraged AI to screen resumes and select candidates for interviews. But what are the inherent issues with using large data sets for these types of business applications? How can AI Governance reduce bias in model outputs?

For many data professionals, the chief concern around the use of AI is a lack of transparency, explainability, and ethics. Standards are developed to ensure that the data in a financial report is consistent with the numbers in a dashboard. Lineage explains where data is sourced from and how it is calculated as it moves across different systems in an organization. Ethics, lastly, means demonstrating that proper care is taken to use data only for the purpose defined at collection. An ethical code of conduct should also describe the expected behavior of the data professionals who handle their customers' data.

AI Governance is a recently coined term that many organizations use to explain how their existing data management programs fuel data-driven business decisions. The practice must take into consideration recent privacy regulations such as the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) in the United States. Privacy regulation is another dimension of AI Governance that requires a better understanding of the data organizations use.

An AI Governance framework benefits from a diversity of stakeholders across business, compliance, and legal functions who help identify and review data usage. Adding controls and processes to the framework, along with the inclusion of different types of users, helps identify gaps and reduce the bias that a homogeneous AI Governance group can create.
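To make the bias-review step concrete, here is a minimal sketch of one check a governance stakeholder group might run against loan-approval outputs: comparing approval rates across demographic groups using a disparate impact ratio. The function names, group labels, and the four-fifths (80%) threshold are illustrative assumptions, not prescriptions from any specific governance framework.

```python
# Hypothetical bias check on model outputs for a loan-approval use case.
# Each record is a (group, approved) pair; group labels are illustrative.

def approval_rates(records):
    """Return the approval rate for each demographic group."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(records, privileged, protected):
    """Ratio of the protected group's approval rate to the privileged
    group's rate. Ratios below 0.8 (the "four-fifths rule" often used
    in fairness reviews) are a common red flag for further review."""
    rates = approval_rates(records)
    return rates[protected] / rates[privileged]

# Toy decision log: group_a approved 3 of 4, group_b approved 1 of 4.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
ratio = disparate_impact(decisions, privileged="group_a", protected="group_b")
print(f"disparate impact ratio: {ratio:.2f}")  # well below 0.8, flag for review
```

A check like this does not fix bias by itself; its value in a governance process is that a diverse review group agrees in advance on the metric and threshold, then applies it consistently to every model release.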