Keystone Strategy

Smashing the cookie jar? A bold ICO and CMA position on harmful digital design that may preface regulatory teeth-baring

By Stefan Hunt and Emily Chissell
August 18, 2023   /   5 Minute Read

Earlier this month, the ICO and the CMA published a surprisingly bold joint paper on how firms’ digital design (aka online choice architecture) can undermine people’s choices and control over their personal information. It’s a must-read for firms that personalise their services in some way, through advertising, promotions or anything else (which covers most firms operating online), as it clearly calls out what constitutes potentially bad behaviour.

These types of joint paper often risk being a bit ‘motherhood and apple pie’, but not this one. In fact, much of the design that firms commonly use to manage people’s personal data online (such as cookie consent or personalisation) risks running afoul of both data and consumer law. In the regulatory line of fire are five practices: ‘harmful nudges and sludge’, ‘confirmshaming’ (guilt tactics), ‘biased framing’, ‘bundled consent’ and misuse of ‘default settings’ (one of the most powerful and common practices), though the agencies highlight that this is not a comprehensive list.


What’s ok and not ok?

Regulators are taking this incredibly seriously. The examples in the paper show the specific types of design the agencies think are not ok. They cover practices that are widespread online: in the case of cookie consent on news websites, 95% of outlets use problematic practices. The paper therefore fires a clear warning shot at companies that use these practices: do something, or the agencies may be forced to act. Unlike in trickier areas, such as the interplay between privacy and competition, here the ICO and CMA are singing entirely in unison and want businesses to be in no doubt about that.

As the agencies go through the five types of online choice architecture practice, they highlight specific examples. Here are some of them:

– Harmful nudges and sludge, i.e., pushing people towards inadvertent or poor decisions. This section has two examples. The first is that, when consumers set up an account, firms typically offer a “one-click” option to allow various types of data sharing, but turning them all off requires changing each setting manually; there is no equivalent “no to all” option (pages 13-14). The second is a cookie banner that conveniently offers an “Accept all” option, but consumers who do not want that have to click into settings and refuse consent to multiple individual cookies one by one (page 15).
– Biased framing, e.g., overly positive framing. The example illustrates a choice about whether to share data (in this case, search history) with a company: although the two options, “Yes, share” and “No, don’t share”, are perfectly neutral, the downsides of sharing are not explained as fully as the upsides (page 19).
– Bundled consent. Here the example does not, on the face of it, use problematic design features, but a single consent permits the firm to do multiple different things (page 22). To our minds, though, there is a potential tension here with user convenience, since separate consents for each purpose add friction.

Whilst some of the design features highlighted are widely used, others are much less common and will be of less concern to most firms, for example confirmshaming. This is where consumers are pressured into doing something by making them feel guilty or embarrassed if they do not, e.g., having to click on “no thanks, I will take the risk” to refuse data sharing with an insurance company (based on a real example, from a well-known firm, some years ago). These more blatant tactics are relatively rare nowadays.

But many of the other practices we experience day to day, so much so that they can appear quite benign, at least at first blush. Put simply, many of us have got used to seeing this design, but familiarity alone does not make it reasonable in the eyes of the agencies.


Are the agencies likely to act?

In short, more action looks highly likely, whether from one agency, from both acting separately, or even from the two working in tandem (as with the Google Privacy Sandbox case).

The ICO clearly wants to see firms improving their design and properly empowering consumers, and it outlines specific practices that run counter to various articles of the GDPR. Behavioural economics is also an area in which the ICO has shown particular interest, with its economics team seeking to draw on “behavioural economics to understand how consumers form preferences and decisions about use of their personal data.” The ICO also takes the lead in the paper, which could be an indication of its strong intent.

The CMA has a firm stake in this too. It has a significant ongoing programme of work on online choice architecture, which has not only produced new guidance but also kicked off several enforcement cases, and the paper signals that more enforcement action is to come. The CMA also has a growing Behavioural Hub of scientists who can build evidence for cases. With new fining powers coming down the line under the DMCC Bill, it could have even bigger teeth than the ICO.

Will there be action against big tech too? Given the prevalence of these behaviours, it would be surprising if only smaller firms were targeted, as in the existing cases. The use of these practices by big tech is also likely to come under the microscope through the new Digital Markets Unit within the CMA (already in place but awaiting its new DMCC powers).


What should firms do, if anything?

Rather than wait for the agencies to come knocking, firms would be wise to get ahead of this. A “behavioural audit” of existing practices, for example, is a simple way to get a quick overview of where a firm stands and whether anything potentially worrying could be easily addressed.

The right tools here are the conceptual and empirical tools of behavioural welfare economics, which underpins the regulatory framework: A/B tests, online experiments, customer surveys, usability testing and user interviews. These can help analyse the impact of different design practices, work out what is best for consumers and provide the most informative evidence.
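To make the A/B-testing idea concrete, here is a minimal sketch of how a firm might compare consent rates under two cookie-banner designs, say one with only a prominent “Accept all” button and one where “Reject all” is given equal prominence. The figures and banner labels are purely illustrative assumptions, not data from the paper, and the statistics are a standard two-proportion z-test rather than any method the agencies prescribe.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference in proportions (pooled variance)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Illustrative, made-up results: banner A shows only a prominent "Accept all";
# banner B gives "Reject all" equal prominence.
rate_a, rate_b, z, p = two_proportion_ztest(4310, 5000, 2880, 5000)
print(f"Consent rate A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.1f}  p = {p:.3g}")
```

A large, statistically significant gap between the two designs is exactly the kind of empirical evidence that can show how strongly a given piece of choice architecture steers consumers, and whether the outcome matches what they would freely choose.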

As firms analyse their own practices, and especially as they gather empirical data, they are likely to find that what counts as a ‘reasonable’ practice is often not clear cut. Looked at more closely, much behaviour, such as the use of defaults, may be in tune with most consumers’ preferences and simply not problematic in certain circumstances. In practice, despite the bold stance taken, there are always shades of grey. That is why the next step is to understand how these practices play out in real life.