Keystone Strategy

Webinar Recording: Responsible AI and the Legal Implications of Bias in Algorithms

April 19, 2022   /   10 Minute Read
AI-ML Webinar

The prevalence of AI in decision-making systems has grown exponentially over the past decade. While the capabilities of these systems have evolved at a rapid rate, it is now widely acknowledged that many of these systems are not robust in real-world deployments. This has led to, for example, biased and “unfair” algorithms being deployed in critical decision-making systems.

To dive further into these trends, this informative webinar, hosted on April 15, 2022, looked at a few areas, including:

  • Why, and how, algorithms can be biased
  • Why Responsible Machine Learning is becoming an imperative
  • Pending AI regulations, including the Algorithmic Accountability Act of 2022 and more

The discussion of how the deployment of AI/ML technologies is introducing legal issues around fairness and bias that can impact your clients was led by Keystone’s own Rohit Chatterjee, a partner at Keystone Strategy, co-leader of the firm’s Economic & Technology Advisory group, and founder of the firm’s KATS (Keystone Advanced Technology Services) team, along with Junsu Choi, an Engagement Manager at the firm and co-leader of our Information, Strategy, Risk, and Regulation (ISRR) practice, which focuses on Responsible Machine Learning and issues around fairness, accountability, transparency, and ethics.

You can find the recording below:

Do you have questions for our professionals? Reach out to Rohit Chatterjee (Partner) and Junsu Choi (Engagement Manager) to learn more. 

Please find the transcript below:

Emily: Let’s get started! Hi everyone — My name is Emily Leinbach and I’m the Director of Marketing here at Keystone. I’m happy to welcome everyone to today’s webinar, Responsible AI and the Legal Implications of Bias in Algorithms. We’re going to try to get started right on time here but first I wanted to go over a few housekeeping items. We have a lot of content to get through today with Junsu and Rohit, so we’re going to try to keep our presentation to 35 minutes so we can leave time for 5-10 minutes of Q&A. If you do have any questions, please add them to the Q&A box at the bottom of your Zoom screen.

Before I introduce our speakers, I wanted to share a bit about Keystone. Keystone Strategy is a leading strategy, economic, and technology consulting firm delivering transformative ideas from the tech industry to top law firms, government agencies, and global Fortune 1000 brands. Keystone leverages a triad of skills and its extensive expert network to provide end-to-end interdisciplinary expertise in technology, economics, and industry as they apply to large, complex, and important problems, whether in high-stakes litigation matters or in broader business strategy.

Without further ado, I’m thrilled to introduce today’s speakers. Junsu Choi is an Engagement Manager here at Keystone who co-leads the Information, Strategy, Risk, and Regulation (ISRR) practice, focused on Responsible Machine Learning and issues around fairness, accountability, transparency, and ethics. Rohit Chatterjee is a Partner here at Keystone where he co-leads the firm’s Economic & Technology Advisory group and founded our KATS (Keystone Advanced Technology Services) team, made up of data scientists and software engineers. And now, I’ll hand it over to our speakers to share more about Responsible AI today.

[SLIDE 4] Rohit: Thanks, Emily. Welcome everyone and thank you for joining. We have an action-packed deck for you today, so without wasting much time, let’s get into it. In terms of agenda, I will kick things off by motivating the need for responsible AI, taking you through some real-world examples of the highlights and lowlights of intelligent systems. This section might scare you a little bit, so to save us we have my colleague Junsu here, who will help us understand where and how to diagnose these systems and, finally, will leave us with some hope about upcoming regulations and actions that are attempting to address responsible AI for all of us.

[SLIDE 5] Rohit: To take a step back, I wanted to take a short moment to introduce this particular Keystone team that works on many different technology-related problems for businesses and law firms. Junsu and I are both members of the Keystone Advanced Technology Services team, the KATS team, which comprises computer scientists and engineers engaged in applying novel computer science techniques to solve a variety of interesting real-world problems. I have highlighted four such areas here. First, we work on questions related to software economics and value. Similar to financial assets, technology has become an important enough digital asset that companies are demanding the equivalent of income statements and balance sheets for these assets. By looking deeply, all the way into the source code, we help organizations build these statements so they can measure and grow technology and data efficiently across the organization. Second, we help people understand how a piece of technology, or the technology as a whole, was developed and how it works. We do this through deep-dive forensics and investigations into a piece of technology or a product, using techniques such as ethical hacking, reverse engineering, and man-in-the-middle attacks. Third, we help organizations with all things Machine Learning. Topics of fairness, accountability, and ethics are dealt with on a regular basis for our clients, and today’s topic is a result of some of that work. Finally, we also work on data security and risk, where we help our clients deal with issues involving algorithmic privacy and cybersecurity threats. To do all of this, we are lucky to have a great set of collaborators from academia and industry, and a world-class infrastructure to help process these large data sets in a secure environment. With that, let’s get into Responsible Machine Learning.

[SLIDE 6] Rohit: It won’t be surprising to you when I say that the capabilities of machine learning and deep learning models have been evolving at a rapid pace in the last few decades, especially after the comeback of deep learning systems in the 2010s. Things that a lot of people couldn’t even imagine ML systems doing, like looking at an image and coming up with a meaningful caption for it (for example, in this case, a person playing the guitar); or doing semantic segmentation, where every pixel of an object is classified and labeled, such as in the picture of the blue van; or playing games like the game of Go; or even DeepMind’s solution to the 50-year-old protein-folding problem, are things that these deep learning based systems have done exceedingly well in. So much so that one can easily argue it would have been hard for even ML researchers, sitting in 2005 or even 2010, to have predicted this level of performance. Now, I know many of you on the call may know the differences between AI, Machine Learning, and Deep Learning systems, so before you call me out on it, I wanted to caveat that for the rest of the presentation I will use the terms rather loosely and interchangeably to represent the class of intelligent systems we are all experiencing in the world today.

[SLIDE 7] Rohit: As with most good things, deep learning models come with their own set of problems. In October 2019, the journal Nature featured an article summarizing the work of a number of researchers who had shown that the potential for sabotaging AI is very real. Some researchers had shown that it was possible to fool an AI system into misreading a stop sign as a speed limit sign simply by carefully placing stickers on the stop sign. Other researchers had shown that it was possible to easily produce images that are completely unrecognizable to humans, yet a state-of-the-art deep learning model believed each to be a recognizable object with over 99% confidence, such as a penguin in a pattern of wavy lines. Some scientists have also shown that it is possible to “hack” an image of a panda to make it look like a gibbon to the model, by carefully inserting noisy pixels that people would not make a big deal of but that completely deceive the deep learning system. Ultimately, the conclusion from all these examples is that deep learning models are fundamentally brittle: brilliant at what they do until, taken into unfamiliar territory, they break in unpredictable ways.
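To make the panda-to-gibbon style of attack more concrete, here is a minimal sketch of the fast gradient sign method (FGSM), the classic technique behind that example. It assumes a PyTorch image classifier; the model, image tensor, label, and epsilon value below are illustrative placeholders, not material from the webinar.

```python
# Illustrative FGSM sketch: nudge an image in the direction that increases the
# model's loss, by an amount too small for a person to notice, and watch the
# predicted label flip. Assumes a torchvision classifier and a [1, 3, H, W]
# image tensor scaled to [0, 1].
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image: torch.Tensor, true_label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: a correctly classified panda image (ImageNet class 388)
# can come back with a different top-1 label after the perturbation, even
# though the two images look identical to a human.
# adv = fgsm_attack(panda_image, torch.tensor([388]))
# print(model(adv).argmax(dim=1))
```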

[SLIDE 8] Rohit: Not surprisingly, when these deep learning systems move from a controlled lab environment into the real world, we start seeing issues. In late 2020, a machine learning researcher, Sean McGregor, started a database inviting people to contribute to and proactively discover how recently deployed intelligent systems have produced unexpected outcomes in the real world, in the spirit that these outcomes will help many avoid making similar mistakes in their own development. Since its launch less than one and a half years ago, more than 1,000 such incidents have been reported by researchers from companies like Intel, Google, and Microsoft. A few recent incidents from Zillow, Apple, and Amazon that you may recall are called out here.

[SLIDE 9] Rohit: When we have incidents, the lawsuits are not too far behind. Here we have tried to curate a small subset of lawsuits where different aspects of responsible AI have been called into question. In HUD v. Facebook, the U.S. Department of Housing and Urban Development sued Facebook in March 2019, claiming that Facebook’s ad tools discriminated against protected groups on the basis of race, sex, national origin, religion, etc. A similar question of fairness is raised in Williams v. City of Detroit, where the plaintiff is suing the Detroit Police Department for allegedly using a biased facial recognition algorithm. In ACCC v. Trivago and Loomis v. Wisconsin (which we will spend more time on later), the transparency and interpretability of Trivago’s AI-based ranking algorithm and of the risk assessment software used by the Wisconsin Department of Corrections are called into question. The BIPA cases, like Vance et al. v. Microsoft or the one against Facebook, raise similar issues around the transparency and interpretability of AI-based face recognition systems and their use of biometric identifiers like the distance between the eyes and the nose. Finally, the Kumandan et al. v. Google lawsuit highlights yet another aspect of Responsible AI, privacy and security, where the plaintiffs argued that Google Assistant was allegedly recording conversations even without trigger words like “OK Google.” So where does all of this leave us? It brings us to a set of core principles behind a responsible AI system. Now, I won’t claim to have the perfect set of principles, but think of this as a good starting point as you think carefully about your own AI system.

 The six principles include:

  • Principle 1: Inclusiveness: AI should consider all human races and experiences and address potential barriers that could unintentionally exclude people.
  • Principle 2: Fairness: Key checks and balances are needed to make sure that the system’s decisions don’t discriminate against, or express bias toward, a group or individual on the basis of gender, race, sexual orientation, or religion (see the illustrative sketch after this list).
  • Principle 3: Privacy and Security: Data holders are obligated to protect the data in an AI system. Personal data needs to be secured and accessed in a way that doesn’t compromise an individual’s privacy.
  • Principle 4: Transparency: The model should be transparent enough to allow someone to understand the data and algorithms used to train it, what transformation logic was applied to the data, and the final model generated, and to offer insights on how the model was created.
  • Principle 5: Reliability & Safety: It’s important for a system to perform as it was originally designed and to respond safely to new situations.
  • Principle 6: Accountability: People who design and deploy the AI system need to be accountable for its actions and decisions.
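As one concrete example of the kind of check Principle 2 calls for, here is a minimal sketch that measures whether a model’s positive-outcome rate differs across groups (a demographic parity / disparate impact check). The data and the 0.8 rule-of-thumb threshold are illustrative assumptions, not material from the webinar.

```python
# Illustrative fairness check: compare the rate of favorable model decisions
# across groups and compute a disparate impact ratio. The data is hypothetical.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   1,   0],   # model decisions
})

# Favorable-outcome (approval) rate per group
rates = results.groupby("group")["approved"].mean()
print(rates)

# Disparate impact ratio: lowest group rate divided by highest group rate.
# A common rough rule of thumb flags ratios below 0.8 for further review.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
```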

For these six principles to be ultimately effective, it is vital that they be supported by a strong foundation of organization and culture and a legal and regulatory framework to draw from. With this message, I want to briefly leave you in the capable hands of my colleague Junsu, who will help us understand where and how to diagnose these systems for fairness-related issues and discuss some upcoming regulations and actions that are attempting to address responsible AI.

Emily: Thank you to everyone who joined us today for this session on Responsible AI and the Legal Implications of Bias in Algorithms. I know we have a mix of folks on the webinar, including litigators and professionals in the legal field, so we will be sure to send a follow-up where appropriate, and please feel free to email Junsu or Rohit if you have any specific questions. We’re more than happy to schedule some time to meet one-on-one for any case consults or questions you may have. Otherwise, feel free to follow us on social media and check out our website for more information at www.keystone.ai. We really appreciate your time today, and let us know if you have any feedback or further questions. Thank you!