Lessons from Cyber Liability

…and what the development of cyber liability products can teach us about algorithmic liability

The notion of algorithmic liability carries some background worth unpacking. In the last post, we looked at the fundamentals of how algorithms work: combine a set of instructions with a set of information, and some outcome is produced. Because data is one of the first ingredients in the recipe for any flavor of algorithmic liability or malpractice, this post looks at the idea of data and the subsequent rise of cyber liability, using that recent technological development to draw insights that can help us deal with algorithmic liability a little bit better.
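To make that recipe concrete, here is a minimal sketch in Python. The function name, the risk-scoring rule, and the sample record are all hypothetical, chosen only to illustrate the instructions-plus-information structure, not to describe any real product.

```python
# A minimal sketch of the recipe above: an algorithm combines a set of
# instructions with a set of information to produce some outcome.

def apply_algorithm(instructions, information):
    """Apply instructions (a function) to information (data) to get an outcome."""
    return instructions(information)

# The "instructions" here are a toy risk-scoring rule and the "information"
# is a single hypothetical record; both are purely illustrative.
outcome = apply_algorithm(
    instructions=lambda record: "high risk" if record["prior_claims"] > 2 else "low risk",
    information={"prior_claims": 3},
)
print(outcome)  # -> high risk
```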

Two key points I want to highlight as you start this journey through history:

  1. The rise of cyber liability insurance demonstrates what the creation of a marketplace for algorithmic liability products might look like.
  2. Data, as the core driver of cyber liability products, can help us distinguish between data itself and the application of an algorithm to that data.

The Rise of Cyber Liability

Think back to the halcyon days of 1995. The internet was still something you could only access by turning a computer on, plugging in a modem, and waiting while your queries were modulated into audio signals, carried over the phone line, and decoded back into data on the other end. Such were the days when the concept of “cyber liability” was invented.

Steven Haase first socialized the idea of a product to insure against the unwanted exposure of data in 1995. During this period, data was not quite the lifeblood that it is today. The definitions of what might be included or excluded as cyber coverage were much squishier, but the core insight remained: there has to be some way to protect against this new exposure, which has the potential to result in huge losses for businesses.

Fast forward to today, and the marketplace for the products that protect this data, cyber liability products, is at a stage of much greater maturity. The basic sets of risk associated with data have been defined under the following categories: Network Security, Privacy Liability, Network Business Interruption, Media Liability, and Errors and Omissions. And when you think about it, this taxonomy makes sense. These are the basic ways that data can create an exposure: a system gets hacked, private information is revealed, access to data is lost, data is publicly posted in the wrong place, or data is not handled with proper attention.

However, while there is consensus around the substance of cyber liability products, little remains resolved in terms of their form. Some entities may be affirmatively protected with cyber-specific policy language. Others may be protected with “Silent Cyber” language that might, for example, include the protection of data as “property covered by this policy.” This highlights the need for stronger wordings and greater standardization in the ways that different stakeholders work together to ensure each is better off. What follows is a basic example of how to analyze, understand, and protect against new forms of risk.

Devotees of this fledgling publication will recognize that the STERB Index for emerging risks from the Cyber Wordings Guide follows this approximate pattern: frameworks for emergent forms of liability develop after the technology shows signs of widespread commercialization and specific problems begin to be recognized. First, general consensus is achieved about the substance of what should be covered. Then, later, industry standards develop around the forms by which policy language uniformly protects such information.

The timeline of cyber liability products is instructive because it can help us identify strategies to more proactively address the emergent problems associated with algorithmic liability. With cyber liability, the motivation 25 years ago was always to limit exposures associated with data being used in unintended or unexpected ways. From this motivation grew a general consensus around what cyber liability products do and do not cover. Yet the way these products are expressed in an insurance policy varies from one carrier to another, by line of business, and by year.

It is important to have a thoughtful discussion about the developmental and ontological issues that are associated with algorithmic liability early on because these categories will come to frame the ways that future policies are written. The sooner we have good categories, the sooner we have better coverage. The sooner we have better coverage, the sooner we have standardization. The sooner we have standardization, the sooner we have strong wordings.

A critical insight here is recognizing that the data produced daily by all of our apps and behaviors feeds the algorithms we use to interface with each other on our phones, at work, at home, and out in the rest of the world. Because algorithms do these things cheaply and reliably, their use will only continue to increase.

Equally important to recognize: when these algorithmic tools malfunction or produce otherwise undesirable outcomes for individuals, they will necessarily create new exposures, distinct from those created merely by data being lost, falling into the wrong hands, and so on.

Recognizing that algorithmic liability is an emergent form of risk coming to define the big data era, and noticing that a symbiotic relationship exists between algorithms and data, the next section explores some of the unique considerations of algorithms in order to find out what, exactly, algorithmic liability might cover.

The Rise of Algorithmic Liability

“A billion hours ago, modern Homo sapiens emerged. A billion minutes ago, Christianity began. A billion seconds ago, the IBM personal computer was released. A billion Google searches ago… was this morning.”

– Hal Varian, Google’s Chief Economist (2013)

Even looking back seven years to Hal Varian’s famous and awe-inspiring statement about the scale and depth of data usage, it is clear that we are in a vastly different environment from the one that existed when Steven Haase first hypothesized about cyber liability. Few could have predicted the rise of the smartphone, the proliferation of devices connected to the internet (the Internet of Things, or IoT), and the way entire industries would be reconfigured as a result of these changes.

If we took a fresh look at our environment, strung together as it is by actions, technologies, and data; if we looked at what these things say about the risks that people take; if we started from scratch with the way we insure a general safety net by finding people, pooling them together, managing risk, and protecting against unintended consequences, I would suggest we would have vastly different insurance products and services than we do now. What these products would look like, I’m not sure. I do have some guesses, though.

If we pull at this thread a little bit and try to extract some basic strategies for what such products might look like, we could guess that these imaginary new products would take advantage of the natural ability of those technologies to do the things that humans do not do as well.

This insight also helps us distinguish between cyber liability and algorithmic liability. While cyber coverage is focused on the protection of data, coverage for algorithmic malpractice would focus on the applications of that data.

The rest of this section explores the ways that algorithms differ from data and examines what those unique features can tell us about this new category of algorithmic liability.

How are algorithms different from data?

1. Algorithms use data

Now, let us do a quick recap of some of the ways in which algorithms are different from data. Cyber is data. Algorithms use data. Put simply, data is a noun and an algorithm is a verb.

This basic structure of inputs and outputs means there is a symbiotic relationship between data and algorithms. Internet searches, smartphones, sensors, and other devices all produce data. By clicking on this article, you probably generated some microscopic record about your identity, how long you viewed the page, and whether or not you scrolled all the way to the bottom. Data is all around, it is inescapable, and it can tell us more about who we are than the traditional metrics used to manage risk.
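As a toy illustration of that noun/verb distinction, consider the sketch below: the dictionary is data, the function that consumes it is an algorithm, and the algorithm’s output is itself new data. Every field name and number is hypothetical.

```python
# Data (a noun): the kind of microscopic record a page view might leave
# behind. All field names and values here are hypothetical.
page_view = {
    "visitor_id": "abc123",
    "seconds_on_page": 184,
    "scrolled_to_bottom": True,
}

# Algorithm (a verb): a function that consumes the record and produces a
# derived metric, which is itself new data, ready to feed the next algorithm.
def engagement_score(view: dict) -> float:
    minutes = min(view["seconds_on_page"] / 60, 5.0)
    return minutes + (1.0 if view["scrolled_to_bottom"] else 0.0)

print(engagement_score(page_view))  # about 4.07: new data derived from old data
```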

Similarly, algorithms are all around and inescapable. In fact, the use of algorithms for even mundane tasks has become so widespread and en vogue that Jonathan Zittrain, director of Harvard’s Berkman Klein Center for Internet and Society, stated the following:

“I think of machine learning kind of as asbestos… It turns out that it’s all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it’s already too hard to get it all out.”

The reason so many algorithms are being deployed so haphazardly now is that there is so much algorithm fuel out there (algorithm fuel is data). If we are not mindful about where we are allowing that fuel to drive us, we might end up in a place we do not want to be.

2. Algorithms are now their own fungible things

The ubiquity of algorithms and algorithm fuel (data) leads to my next point of distinction. The fact that you can purchase, implement, and modify algorithms highlights that algorithms are now their own fungible things. Thanks to the amount of data available, algorithms have gone from being a piece in some machine to being their very own machine. They have gone from being an indistinguishable part of the car to being the engine.

A quick search of Bing reveals that there are algorithms for sale on the internet (primarily for use in high-frequency trading scenarios). If I hop on the education platform Udemy, I can search for courses on algorithms and find 2,029 of them at my fingertips, ready to teach me how to make algorithms, data types and algorithms, algorithms for genetics, and even algorithms for 3D printing.

In the medical context, algorithms are increasingly used to help detect illnesses, identify treatments, and reduce future health impacts. In the tax context, algorithms are used to calculate tax liabilities. In the legal context, algorithms are used to find important documents. Algorithms aid in the detection of crimes through facial recognition software, they are used to set bail and evaluate whether someone might be a flight risk, and they help monitor statistics to address issues such as recidivism. In addition to demonstrating the vast potential of algorithms, these examples indicate something else: algorithms can be scary, and they carry plenty of downsides.

What Can The Unique Features of Algorithms Tell Us About Liability From Algorithms?

As with other machinery, we can take what we know about the differences between algorithms and data and begin to imagine what a framework for evaluating the new forms of liability that emanate from algorithms could look like.

The major classes of liability relating to data yielded five categories. These cover active protections against cyber liability, such as attacks on your network, business interruptions, media liability, and privacy liability, and inactive protections, such as errors and omissions. Each of these five categories protects against some stage of the basic life cycle of data: data is created, stored, and used. While it is being stored, someone might try to access your network or a disruption could interrupt your business; while it is being used, someone from within the network could leak information, or the information you are protecting might be sensitive enough that its release damages an individual; and at any stage, you might just mess up and accidentally release all of your data.
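The sketch below summarizes this framing at a glance. The five category names come from the discussion above; the assignment of each category to a lifecycle stage is my own reading of that paragraph, not an industry standard.

```python
# The five cyber coverage categories named earlier, mapped onto the stage of
# the data life cycle (created -> stored -> used) where each exposure arises.
# The category names come from the article; the stage mapping is illustrative.
CYBER_COVERAGE = {
    "Network Security":              "stored: someone tries to access your network",
    "Network Business Interruption": "stored: a disruption interrupts your business",
    "Privacy Liability":             "used: sensitive information damages an individual",
    "Media Liability":               "used: data is publicly posted in the wrong place",
    "Errors and Omissions":          "any stage: data is not handled with proper attention",
}

for category, exposure in CYBER_COVERAGE.items():
    print(f"{category:30} -> {exposure}")
```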

While it may be some time before the market for algorithmic liability products is mature, this lifecycle analysis of cyber suggests that we can develop a rudimentary understanding of algorithmic liability by identifying the lifecycle stages of an algorithm. In the following posts, I do my best to break algorithmic liability into active algorithmic liability, meaning protection from 1) the creation of algorithms, 2) the implementation of algorithms, 3) the use of algorithms, and 4) the modification of algorithms, and inactive algorithmic liability, meaning protection from errors, omissions, and other sorts of negligence.

Stay tuned!

