Introduction – Algorithmic Liability and Malpractice


Algorithms Enable Lots of Impossible Things

The scale and reach of the internet make many once-unthinkable things possible. Yelp data can be used to more effectively predict where health code violations are likely to occur. Twitter data can be used to more effectively track the spread of infectious disease.

But how do such fragmented pieces of information, which might exist only as entries in an Excel spreadsheet, get turned into such incredible outcomes?

The answer is algorithms.

If data is the fuel of the impossible, algorithms are the engine. And if you have ever done any type of computer programming, you are probably familiar with just how much power is packed into the humble `If, Then` statement. Even if you are not familiar with computer programming at all, you can follow the logic an `If, Then` statement represents:


An example `If, Then` statement: if condition A is true, then some action, B, takes place; if condition A is false, then some other action, C, takes place. (Source: https://en.wikipedia.org/wiki/Conditional_(computer_programming))

Stripping the True and False labels out, we are left with the following:

if (condition A happens){
   Action = B;
}
else {
   Action = C;
}

When this simple statement is combined with the large sets of information generated by the devices we use all the time, we begin to see how such simple sequences power some of the commercial applications of different algorithms. Below are a few examples to help demonstrate:

Example 1: Oversimplified Page Rank Algorithm

if (lots of people search for pictures of a dog and click this photo){
   Action = rank this picture of a dog higher than other pictures of dogs;
}
else {
   // lots of people search for pictures of a dog and do not click this photo
   Action = rank this picture of a dog lower than other pictures of dogs;
}
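To make that slightly more concrete, here is a minimal Python sketch of the same idea: rank candidate photos by how often people click them when they appear in search results. The photo records, field names, and numbers below are invented for illustration; real ranking systems weigh many more signals than click-through rate.

```python
# Rank candidate dog photos by observed click-through rate.
# The photo records, field names, and numbers are hypothetical, for illustration only.
photos = [
    {"id": "dog_a.jpg", "impressions": 1000, "clicks": 300},
    {"id": "dog_b.jpg", "impressions": 1000, "clicks": 45},
    {"id": "dog_c.jpg", "impressions": 500, "clicks": 200},
]

def click_through_rate(photo):
    """Fraction of searches in which this photo was shown and then clicked."""
    if photo["impressions"] == 0:
        return 0.0
    return photo["clicks"] / photo["impressions"]

# Photos that lots of people click rise to the top; photos people skip sink.
ranked = sorted(photos, key=click_through_rate, reverse=True)
for photo in ranked:
    print(photo["id"], round(click_through_rate(photo), 2))
```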

Example 2: Oversimplified Recommendation Algorithm and Spotify

if (User 1 listens to songs P, Q, R, S, and T and User 2 listens to songs Q, R, S, T, and U){
   Action = Recommend song U to User 1;
}
else {
   // User 1 does not listen to any new songs
   Action = Recommend popular songs that many users like;
}
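Here is a similarly rough Python sketch of that overlap logic. The listening histories, the overlap threshold, and the fallback list are all made up; Spotify's actual recommendation system is far more sophisticated, but the core "users with similar tastes also listened to…" step looks roughly like this:

```python
# Recommend songs that a similar user has listened to but the target user has not.
# Listening histories, overlap threshold, and fallback list are invented for illustration.
listening_history = {
    "user_1": {"P", "Q", "R", "S", "T"},
    "user_2": {"Q", "R", "S", "T", "U"},
}
popular_songs = ["V", "W"]  # fallback when no overlap-based suggestion exists

def recommend(target, histories, fallback, min_overlap=3):
    target_songs = histories[target]
    suggestions = set()
    for user, songs in histories.items():
        if user == target:
            continue
        # If another user's taste overlaps enough with the target's,
        # suggest that user's songs the target has not heard yet.
        if len(target_songs & songs) >= min_overlap:
            suggestions |= songs - target_songs
    return sorted(suggestions) if suggestions else list(fallback)

print(recommend("user_1", listening_history, popular_songs))  # ['U']
```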

Example 3: Oversimplified Image Recognition Algorithm

if (composition of pixels in image 1 is similar to images labeled "firefighter"){
   Action = Return "This image likely has a firefighter in it";
}
else {
   // composition of pixels in image 1 is not similar to images labeled "firefighter"
   Action = Return "This image likely does not have a firefighter in it";
}
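And a toy Python sketch of the pixel-similarity idea, using a nearest-labeled-example comparison over tiny invented grayscale images. Modern image recognition systems learn their own features with neural networks rather than comparing raw pixels, so treat this purely as an illustration of the if/else logic above:

```python
# A toy "nearest labeled example" classifier over tiny grayscale images.
# The pixel values and labeled examples are invented; real systems learn features
# with neural networks instead of comparing raw pixels.
labeled_images = [
    ([0.9, 0.8, 0.7, 0.9], "firefighter"),
    ([0.1, 0.2, 0.1, 0.0], "not firefighter"),
]

def distance(pixels_a, pixels_b):
    """Sum of absolute pixel differences: smaller means more similar composition."""
    return sum(abs(a - b) for a, b in zip(pixels_a, pixels_b))

def classify(pixels, examples):
    nearest_label = min(examples, key=lambda example: distance(pixels, example[0]))[1]
    if nearest_label == "firefighter":
        return "This image likely has a firefighter in it"
    return "This image likely does not have a firefighter in it"

new_image = [0.85, 0.75, 0.8, 0.95]
print(classify(new_image, labeled_images))
```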

With Great Power Comes Great Responsibility

As shown, algorithms can be used in some incredibly helpful and positive ways. Algorithms can help you find more useful search results, discover your next favorite song, and automatically determine what is in a whole series of pictures. In these cases, algorithms save people time and unlock value in ways that are cheaper and more reliable than if a human were to replicate the same process manually.

Yet, for all the great progress made by algorithms, a whole host of concerns can be raised about the ways algorithms are created, implemented, used, and modified. Using the same page ranking algorithm that helps users find the next entertaining video to watch, YouTube unwittingly automated part of the rise of far-right extremists in Brazil. Similarly, the same search algorithm that helps you find what you are looking for on Google became a tool that promoted Holocaust denial websites over legitimate sources.

Why Does This Matter to Me?

You may be wondering why the ordering of search results matters. The answer is simple. Not paying attention to the unintended consequences of new technologies can result in harm to lots of people. The infrastructure underlying the searches you make on your favorite search engine, the apps you use on your phone, and the services you rely on every day at your job all have algorithmic components.

In an era where algorithms are increasingly relied upon to monitor events, inform decision-making, and execute tasks, they represent the newest domain that will inevitably expose organizations to new forms of liability and malpractice. In this and the following posts, I will refer to the type of algorithmic liability that arises from actively doing something wrong as active algorithmic liability. At the heart of this concept are several fundamental questions about the balance between the utility of these new tools and the unintended consequences that may result from their use.

How should this new version of algorithmic liability be approached? Given the distributed nature of software development, who should bear liability when an algorithm is created poorly? What about due diligence: what happens when an algorithm is implemented incorrectly or used for something it was never intended to do? What if the algorithm was modified in a way that produced harm? What if the algorithm was not modified at all when it should have been? These are the sorts of questions that will be asked whenever things go wrong and an algorithm is at the core of the process, and these are the sorts of questions you will be required to answer when these new kinds of claims are filed.

Naturally, many in the risk management industry are hesitant to use new and unknown technologies. However, when you make a mistake that an algorithm would have avoided, new questions about errors and omissions will be asked as you attempt to clean up the mess. The ubiquity of algorithms, the scale at which they can accomplish routine tasks, and the nature of business will give rise to a new form of malpractice: ignoring algorithms as a tool. In this and the following posts, I will refer to this type of algorithmic malpractice, arising from failing to act, as inactive algorithmic liability.

It is not hard to imagine some future client, partner, or judge asking: what if there is a duty to avoid malpractice by using algorithms? Might there be a fiduciary duty to use algorithms when the results cost less and are more transparent, measurable, and adaptable than human behavior? Could there be an ethical duty for insurers, lawyers, risk management professionals, and others to keep abreast of the changes, benefits, and risks associated with new technologies?

These are all questions that might have you worried. The future can be a scary place. However, borrowing a quote from one of my old law school professors, “The future either happens with you or it happens to you.” So congratulations. If you have read this far, you’ve taken the first step to ensure that it is the former instead of the latter.

Looking Forward

In the following sections, I will explore the development of algorithmic liability by tracing its roots all the way back to cyber liability. Then, I will provide a framework for evaluating the various types of risks associated with the use of algorithms, with a specific focus on the i) creation, ii) implementation, iii) use, and iv) modification of algorithms. Next, I will provide a practical overview of algorithmic malpractice and strategies for avoiding this new liability. And, finally, I will conclude with some of the limitations of this analysis and a look at the looming issues on the horizon.
