
The Legal Dangers of Artificial Intelligence/Machine Learning: Understanding the Potential for Bias and Ensuring Responsible AI

By Rohit Chatterjee (Partner, Keystone Strategy) and Junsu Choi (Engagement Manager, Keystone Strategy)
June 8, 2022   /   5 Minute Read

Originally published on LinkedIn by Rohit Chatterjee (Partner) and Junsu Choi (Engagement Manager).

For most of us, technology has become so central to our daily lives that we don’t question its accuracy. We rely on our smartphone maps to show us how to navigate through unfamiliar territory. We depend on our spreadsheet programs to handle complex mathematical calculations.

Not surprisingly, many organizations expect artificial intelligence and machine learning to objectively predict outcomes and guide their decision-making when facing complex business and social issues.

Over the past several years, however, that assumption has started to come into question. Many machine learning systems are not robust enough to handle real-world complexities, including biases that can lead to inequitable results. While research on creating fair, accountable, and transparent systems is now a vibrant subfield in the machine learning community, there is work that must be done to connect state-of-the-art research trends to emerging regulations (for example, the EU AI Act) and compliance with those regulations.

Keystone has found in its practice that unfair outcomes can be driven by three common sources of bias in developing machine learning systems:

  • Bias encoded in data
  • Bias in learning
  • Bias in the action-feedback sequence

Each of these requires some explanation, so let's take them one at a time.

Bias encoded in data

Bias can be encoded directly in the data used to train a model. For example, a machine learning model trained to predict the likelihood that an inmate being released will commit a crime in the future (i.e., recidivism risk) typically uses arrest data, because ground-truth data on who actually commits crimes doesn't exist. However, research shows that arrest data reflects biases stemming from historical prejudices.

Another case of such bias involves word embeddings. Word embeddings, a core component of Natural Language Processing (NLP) models, are mathematical representations of words that allow computers to capture semantic relationships. For example, word embeddings trained on large amounts of text data yield the relationship king – man + woman ≈ queen, which demonstrates that embeddings can capture gender relationships. But what happens when the words are "doctor," "man," "woman," and "nurse"? Given the way biases are reflected in real-world text, it shouldn't be surprising that machine learning systems built on top of word embeddings reflect the same biases.
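The analogy arithmetic above can be reproduced with off-the-shelf tools. Below is a minimal sketch, assuming the gensim library and its downloadable pre-trained GloVe vectors; the exact neighbors returned depend on the corpus the embeddings were trained on, and occupation analogies like the doctor/nurse example are where learned stereotypes tend to surface.

```python
import gensim.downloader as api

# Download and load pre-trained GloVe word vectors (first call fetches the model).
vectors = api.load("glove-wiki-gigaword-100")

# The classic gender analogy: king - man + woman ≈ ?
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# The same vector arithmetic applied to occupations can surface stereotypes
# absorbed from the training text.
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```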

Bias in the learning process

The standard machine learning pipeline optimizes for accuracy across all populations in the training or validation dataset. The effect of this is that the model naturally weights the majority population more heavily than any minority population in the data, leading to a model that is more accurate for members of the majority population. This is especially problematic when different populations have different relationships with the label (i.e., the outcome) that the machine learning model is trying to predict.
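To make this concrete, here is a minimal sketch using synthetic data and scikit-learn (the data, the 90/10 group split, and the reversed relationship are assumptions chosen purely for illustration): a single model fit to maximize pooled accuracy ends up highly accurate on the majority group and far less accurate on the minority group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: 90% majority group, 10% minority group, with the
# feature-label relationship reversed between the two groups.
n_major, n_minor = 9000, 1000
X_major = rng.normal(size=(n_major, 2))
y_major = (X_major[:, 0] > 0).astype(int)
X_minor = rng.normal(size=(n_minor, 2))
y_minor = (X_minor[:, 0] < 0).astype(int)   # opposite relationship

X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])
group = np.array([0] * n_major + [1] * n_minor)

# A single model fit on the pooled data optimizes overall accuracy...
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# ...which is dominated by the majority group, so the minority group fares poorly.
print("overall accuracy :", accuracy_score(y, pred))
print("majority accuracy:", accuracy_score(y[group == 0], pred[group == 0]))
print("minority accuracy:", accuracy_score(y[group == 1], pred[group == 1]))
```

Group-aware evaluation like this, rather than a single aggregate accuracy number, is the first step toward detecting this kind of bias.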

Bias in the action-feedback sequence

Finally, many machine learning applications deployed in the real world involve a feedback loop. Feedback loops occur when the training data used to train a model depends on the outputs or actions of the model, which in turn depend on the actions of the users who engage with it. For example, users of a social media platform can only engage with the content that a machine learning model recommends to them; the model then uses that engagement to learn patterns and recommend new content based on what it has learned. Without the necessary interventions in place, recommender systems can rapidly zero in on narrow taste clusters and lead to issues such as echo chambers and filter bubbles.
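The dynamic can be illustrated with a stylized simulation (the numbers, the greedy policy, and the update rule below are assumptions chosen for illustration, not a model of any particular platform): because the model only receives feedback on items it chooses to show, its estimates for everything else are never corrected, and recommendations lock onto a narrow set of items.

```python
import numpy as np

rng = np.random.default_rng(1)

n_items = 20
true_appeal = rng.uniform(0.2, 0.8, size=n_items)  # users' actual click propensity

# The model's initial engagement estimates: a few arbitrary "seed" items are
# boosted (e.g., promoted at launch); everything else starts unknown (0).
est_rate = np.zeros(n_items)
est_rate[:5] = 0.5
impressions = np.zeros(n_items)
click_total = np.zeros(n_items)

for step in range(1000):
    shown = np.argsort(est_rate)[-5:]              # greedy: show the current top-5
    clicked = rng.random(5) < true_appeal[shown]   # users react only to what is shown
    impressions[shown] += 1
    click_total[shown] += clicked
    # Retrain on observed data only: items that are never shown keep an estimate of 0.
    est_rate[shown] = click_total[shown] / impressions[shown]

print("impressions per item :", impressions.astype(int))
print("true appeal (rounded):", true_appeal.round(2))
```

The starved items never accumulate engagement data, so the model has no way to correct its estimates for them; that is exactly the feedback loop described above.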

Regulatory considerations of AI/ML bias

The challenge in identifying and mitigating bias in machine learning systems is that bias is often nuanced and hard to define. Addressing it is crucial, however, especially as more and more machine learning systems are deployed into production at scale. These systems are being used in such critical applications as providing healthcare, determining access to credit, and informing sentencing decisions. While there has been significant progress in creating frameworks and principles for what is called "Responsible AI," companies must now start to operationalize these principles for their specific context and use case.

Keystone believes that there are four essential requirements to ensure fair and responsible machine learning systems:

  • Regulations are needed to guide machine learning deployments in high-risk applications. Regulation will be key to balancing innovation and risk in machine learning deployments. While machine learning will continue to disrupt industries, consumers are at risk if machine learning systems are not designed and deployed responsibly. These regulations, in turn, must incorporate input from the machine learning community and the broader public to ensure that policies are feasible to implement and designed to achieve their intended outcomes.
  • Governance and culture around Responsible AI will be crucial for compliance. Before turning to specific tools and processes to implement Responsible AI practices (e.g., tools to assess and mitigate fairness concerns and interpret machine learning models), firms need to think about their governance model and structure. Responsible AI initiatives need to be implemented throughout the organization and create accountability among those using AI. For example, Microsoft has spent significant time developing its governance model, employing a hub-and-spoke model to set a consistent bar for Responsible AI across the organization. At the same time, each engineering and sales team is empowered to drive initiatives and foster a culture of responsible innovation through Microsoft’s “Responsible AI Champs” community.[1]
  • Machine learning systems need to be explainable, transparent, and auditable. There needs to be an auditable trail of data artifacts, model configurations and parameters, and documentation of the context in which the model was developed. Most importantly, models need to be transparent and interpretable for high-risk applications.
  • Once deployed, machine learning systems need to be continuously monitored for bias. Responsible machine learning pipelines do not stop enforcing fairness once development is complete. Systems for post-deployment bias monitoring must also be in place and actively used; a minimal sketch of what such a check might look like follows this list.
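As an illustration of the last point, here is a minimal sketch of a post-deployment check in Python; the function name, the demographic-parity-style metric, the synthetic prediction window, and the 10-percentage-point alert threshold are all assumptions chosen for clarity, not a recommendation for any specific application.

```python
import numpy as np

def demographic_parity_gap(preds, groups, threshold=0.10):
    """Return positive-prediction rates per group, the gap between the highest
    and lowest rate, and whether the gap exceeds the alert threshold."""
    preds = np.asarray(preds)
    groups = np.asarray(groups)
    rates = {g: float(preds[groups == g].mean()) for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > threshold

# Example: one window of recent production predictions with group labels.
rng = np.random.default_rng(2)
groups = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])
preds = np.where(groups == "A",
                 rng.random(1000) < 0.45,   # group A receives positive outcomes ~45% of the time
                 rng.random(1000) < 0.30)   # group B receives positive outcomes ~30% of the time

rates, gap, alert = demographic_parity_gap(preds.astype(int), groups)
print(f"rates={rates}, gap={gap:.2f}, alert={alert}")
```

In practice, such checks would run on rolling windows of production traffic and feed into the governance structures and audit trails described in the points above.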

Implementing Responsible AI is crucial for all organizations that currently deploy machine learning systems with real-world implications or plan to do so soon. Should you need assistance in evaluating biases in machine learning systems or developing and operationalizing responsible machine learning systems, Keystone is happy to discuss these issues. Contact us through our website, www.keystone.ai.

[1] https://blogs.microsoft.com/on-the-issues/2021/01/19/microsoft-responsible-ai-program/
