20 min read

Julie Dawson on the four Cs of online risk and the growth of the regulatory tech sector

Covering: interoperable age assurance, maintaining user privacy and the difference between facial detection and recognition
Julie Dawson, Chief Policy and Regulation Officer at Yoti

'Viewpoints' is a space on EiM for people working in and adjacent to content moderation and online safety to share their knowledge and experience on a specific and timely topic.

Support articles like this by becoming a member of Everything in Moderation for less than $2 a week.


Very few topics can split a room like age estimation and facial scanning.

Now, with the Online Safety Bill and other proposed legislative measures likely to contain child safety provisions, platforms are having to ask themselves: how reliable is age assurance technology and what are the pitfalls when it comes to privacy and bias?

Although I've covered the topic at a high level in the newsletter over recent years, I wanted to go deeper and find out more about how the companies behind facial scanning technologies think about their obligations and the risks involved.

As Yoti's Chief Policy and Regulation Officer, Julie Dawson seemed a good place to start. Since joining the company in September 2014 — the year it was founded — she has been responsible for developing Yoti's ethical policy framework, leading regulatory and government relations and managing the company's internal and external ethics boards. Prior to Yoti, she worked at Barclays, eBay and was COO of localgiving.com.

Yoti is an interesting case: the London-based company has partnered with high-profile tech firms over the last five years — Match Group (EiM #30) and Instagram (#180) to name just two — but came under fire last year after a reporter gamed its technology (#149). It has also trialled its technology in UK retailers to allow customers to buy alcohol without the need to show ID, something that privacy campaigners have criticised. All of which makes it even more important to understand.

This interview with Julie was kindly conducted by Angelina Hue, a Media and Communications MA student at The London School of Economics and Political Science, specialising in media policy and content regulation. Angelina's previous studies and work as a communication specialist brought a real rigour to the discussion and I'm grateful to her and Julie for the time they took to go deeper into an important topic on behalf of EiM and its subscribers.

It's worth noting that this interview took place back in June — TrustCon and holidays led to its publication being pushed back — and has been lightly edited for clarity.


Beginning with your work at Yoti, could you talk a little bit about your role in regulatory and policy?

There are sort of three key areas really. One is our ethical framework: looking at the governance of how we do things, looking at the human rights angles, consumer rights, last-mile tech, accessibility, transparency, and then obviously looking a lot at online harms, be that fraud or harms to the vulnerable online. We do that through having an internal group of people from all different teams who come together and discuss knotty issues, who might think: why are we doing that? How are we doing that? Why aren't we doing that? And then we also have an external group of people – the Yoti Guardians. Minutes [from this group] are published openly, so you as a member of the public could ask them a question. It's trying to invite scrutiny into what we're doing.

The second area is really looking at all the different accreditations, so that could be on the data responsibility side or on the cyber security side. It could also be blended areas, such as working with regulatory sandboxes in different parts of the world and going through trust framework accreditation. So we've gone through regulatory sandboxes with the FCA [Financial Conduct Authority], with the ICO [Information Commissioner’s Office] and with the Home Office. We've worked on the euConsent project looking at interoperable age assurance. We're going through trust frameworks for digital identity in Australia, in Canada, in the UK, and probably soon also in the EU. We've also gone through things such as HIPAA [Health Insurance Portability and Accountability Act, the health accreditation in the US]. On the security side, we've got things like a quality framework through ISO [International Organization for Standardization], ISO 27001, and then SOC 2, which is for cybersecurity.

And then the third and final area is really looking at government relations. In all the different countries around the world where we operate, there are multiple different regulators we can come under – data regulators, content regulators, sector regulators, and now, in addition, potentially AI regulators. So in each country where we're operating, we obviously have to engage, build that reputation, build that trust and understand where the regulations are going.

BECOME A MEMBER
Viewpoints are about sharing the wisdom of the smartest people working in online safety and content moderation so that you can stay ahead of the curve.

They will always be free to read thanks to the generous support of EiM members, who pay less than $2 a week to ensure insightful Q&As like this one and the weekly newsletter are accessible for everyone.

Join today as a monthly or yearly member and you'll also get regular analysis from me about how content moderation is changing the world — BW

What kinds of issues do you take to your internal ethics council and the Yoti Guardians?

Yeah, sure. So we're always taking issues, in different forms, to them. Sometimes it's work in progress, sometimes it's something that's been going on for a while and we're tweaking it or moving in a different direction. Sometimes it's brand new things: there's a request [from clients] for a new product or service, and we've done a consequences mapping – looking at the positive, negative, intended and unintended consequences – but we want to share that with them to see what else they might want to tease out. Sometimes it's topics that have gone through our internal ethics council and then we're sharing that with them; sometimes it's areas we know they've got a specific interest in. Let me give you a few examples.

So in recent times, we've had one [issue] looking at whether, and how, identity might intersect with voting. In the UK, there's a prospect that you will need identification for voting; this has happened recently at some local elections and could obviously be extended to national elections. So what is Yoti's stance on that?

And after some reflection, our output was that we didn't think there should be a fundamental tie between identity and the right to vote in the UK. We saw that in the UK we have this weird situation in that about 24% of adults do not own photo ID: if they don't drive and they don't travel, they have neither a driving licence nor a passport. So, if they don't have that type of document, what should it be? Especially since it's often lower-income people that are in this position. And we couldn't see that there were significant numbers of registered incidents of fraud that made it proportionate to say that you should tie it at the outset.

Then we looked at: if it should be brought in, should there be a consultation, and could it actually be helpful for some people to have an ID on their phone? We know there are around a million driving licences lost and stolen here in the UK, and around 400,000 passports; a lot of these go missing when people are out and about. If [something like that] is brought in, should one of the options be to show something on your phone, in case you've forgotten your document or don't want to be carrying that passport or driving licence around with you? So there are different perspectives there.

Another one was actually about the process of voting; there were some tenders we'd come across in the US, for certain states, that were looking at the process of citizen voting versus the process of elected representatives voting. We took this to our Guardians and they said they did not think we should get involved in citizen voting because, from experience in Latin America, one of our Guardians specifically said this is so controversial: there's always a winning side and always a losing side, and frequently the technology provider can be made the scapegoat by whichever side loses – they're pissed basically, they're really not happy, and you're just a really easy scapegoat. So no matter whether your technology worked as it was supposed to, there will always be a winner and a loser.

In contrast, they said that if you looked at elected representatives having a way to vote remotely – think of the House of Lords during COVID, for example – you might have people that are senior in age or have disabilities, mobility conditions, or pregnancy, temporary or permanent, etcetera. Then, knowing it's definitely me that's the elected representative from this area, and that I could use that to log on and prove that I'm the right person casting this vote, could be something that's proportionate.

So that's a couple of examples for you of where it's never quite black and white. We're trying to look at the circumstances and look at the guidance. And every time we bring them nuanced questions like that, they'll say: here are three extra people we think you should speak with.

There'll be some areas where we have to be sensitive that this could give away very specific details to competitors or to bad actors. So that is then something delicate that we have to look at: how do we describe what we're doing, but do so without giving bad actors ideas as to how they could either defraud more or create more harm?

How do you handle critics who are worried about privacy issues with, say, uploading a photo onto Yoti? I would imagine that in practice age estimation would involve storing that information for a temporary or longer period of time. Could you talk about how Yoti handles the concerns of people who may be worried about having an image or video taken of their face to verify their identity?

Absolutely. So let's take the one with the facial age estimation because that's the one in terms of volume that is probably the most significant. 

We've done almost 600 million facial age estimations, and I think the first thing we explain to people is that it's not facial recognition; there's no unique recognition of any one individual. FPF [the Future of Privacy Forum] did a really good chart where you can see the distinction: facial detection is simply establishing that this is a live face, and then what can I characterise or find out about that face, such as the age?

But then, what it's not is a unique persistent identifier – that is, this is the same face moving around a shop, this is the same face logging in on one device, or picking one face out of a crowd of people. All of those things – the unique persistent identifier, one-to-one recognition, and one-to-many recognition – are not what Yoti is about. All it is is detecting whether there is a live human face and what can be found out about it.

And the way we've trained the algorithm is also quite different, because it hasn't been trained to know that this is Angelina, this is Rachael, this is Julie. All it gets is a face with the month and year of birth. It's shown millions of those across many demographics and ages around the world, and we're very clear about how we get the data set. We got that data set from the Yoti app, from people aged 13 and above setting up the app, where there's a clear option to opt out at the point of setting up the app or at any time subsequently. So we know the data set.

When it's used by organisations and it sees a new face, the algorithm detects: is this a live face? And then, what can I find out about it? It's basically doing a pixel-level analysis, looking at ones and zeros, and then spitting out a result: I think this person is 13.5 years old, or 35.5 years old. Frequently all a platform needs to know is whether someone is over 18, under 18, or 13 to 17, so we can give just the result that's required, and that image is instantly deleted.
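To make that flow concrete, here is a minimal Python sketch of the check as described: detect a live face, estimate an age, return only the band the platform asked for, and discard the image straight away. The function names, bands and stub implementations are hypothetical stand-ins for illustration, not Yoti's actual API.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

def detect_live_face(image_bytes: bytes) -> bool:
    """Stand-in for a liveness/face detector; a real system would run a model here."""
    return len(image_bytes) > 0

def estimate_age(image_bytes: bytes) -> float:
    """Stand-in for the pixel-level age estimation model; returns a dummy value."""
    return 25.0

@dataclass
class CheckResult:
    live_face: bool
    band: Optional[str]  # e.g. "over_18", "13_to_17", or None if no band matched

def run_age_check(image_bytes: bytes, bands: List[Tuple[str, float, float]]) -> CheckResult:
    """Return only the requested age band; neither the image nor the raw estimate leaves the check."""
    try:
        if not detect_live_face(image_bytes):
            return CheckResult(live_face=False, band=None)
        age = estimate_age(image_bytes)
        for name, lower, upper in bands:
            if lower <= age < upper:
                return CheckResult(live_face=True, band=name)
        return CheckResult(live_face=True, band=None)
    finally:
        image_bytes = b""  # the image is discarded as soon as the check completes

# Example: a platform only needs to know which side of 18 someone falls on.
print(run_age_check(b"\x00" * 64, [("under_13", 0, 13), ("13_to_17", 13, 18), ("over_18", 18, 200)]))
```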

So we have several things that are independently audited: the fact that the image is instantly deleted, and our approach to bias, which we publish regularly – how good the algorithm is for males and females, lighter and darker skin tones, at every age from 6 to 70. So we're transparent with buyers about how good it is, we have it audited, we instantly delete the image, and it's really clear that it's not recognising anyone: it's not facial recognition, and there's no cross-checking against databases. So in terms of privacy, we're not learning anything from any of the checks.

And, crucially, people don't need to share any details like their name or age, and they don't need to download an app in this instance. You can't recognise them, you can't link their face with their browsing history, and people go through a consent process to actually use it. If they don't feel comfortable using it, we offer a whole range of different methods for organisations to prove age – that is just one of them – so they can offer other options side by side, and the data set has been collected in accordance with the GDPR.

Ok, interesting. With that in mind, can you walk me through, from the first step to the last, how Yoti is integrated into the check process and how the actual age estimation works in practice?

Yeah. So what we can do later is share with you something that shows you all the different methods, because we have about seven. We have the reusable Yoti app where, once someone has set it up and added their ID – like having their ID on their phone – they can just show an 'over 18' or an 'under 18'. So that's pretty much one click. We also have an approach on a one-off basis: you upload a document and just the age can be shared.

We have the facial age estimation, which we've gone through. We also have methods such as linking in with mobile phone companies, credit reference agencies, ownership of credit cards, and then in certain parts of the world, such as Scandinavia, there are e-IDs – you've probably heard of bank IDs in Scandinavia. Companies that integrate with us can integrate into a portal where all of those methods are available. Say they're operational in over 100 countries around the world; they can look at each country and decide what method to use. Brazil will start with this method, then offer these methods; India will start with this method and then offer these methods. Companies can do A/B testing and work out which methods they want in which areas.
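As an illustration of how that per-country configuration might look, here is a hedged sketch: a simple mapping from country to an ordered list of age assurance methods, which an integrating company could reorder or A/B test. The method names and country assignments are examples only, not Yoti's actual portal configuration.

```python
# Illustrative per-country configuration: each market gets an ordered list of
# age assurance methods to offer, with a default fallback. Method names and
# country assignments are hypothetical examples.
AGE_METHODS_BY_COUNTRY = {
    "BR": ["facial_age_estimation", "document_upload", "credit_card"],
    "IN": ["mobile_operator", "facial_age_estimation", "document_upload"],
    "SE": ["bank_eid", "facial_age_estimation"],  # Scandinavian e-ID offered first
    "default": ["facial_age_estimation", "document_upload"],
}

def methods_for(country_code: str) -> list:
    """Return the ordered list of methods to present to a user in a given country."""
    return AGE_METHODS_BY_COUNTRY.get(country_code, AGE_METHODS_BY_COUNTRY["default"])

print(methods_for("BR"))  # ['facial_age_estimation', 'document_upload', 'credit_card']
print(methods_for("FR"))  # falls back to the default ordering
```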

So taking us back to Instagram: their use case is someone changing their age from under 18 to over 18? Are they basically trying to [avoid a user going] into the adult area of Instagram and, similarly with Facebook Dating, to stop users purporting to be over 18?

They offer a couple of methods side by side for this: the one-off upload of a document, where just the 'over 18' is extracted, or the facial age estimation. They find that about 81% of people are picking the facial age estimation. They're basically just asking – and we can put the little link from Yoti World with the age scan in the chat so you can have a go – is someone over 18? The demo version we've got is a bit more of a vanity one where people can actually assess their age, whereas the one we're looking at with Facebook and Instagram looks at the individual and gives a yes or no on whether they're over 18.

Obviously, there's always a margin of error, so if a good is illegal for minors – such as pornography, or ads or content that may be illegal in a certain jurisdiction – the regulator might say we'll add a three-to-five-year buffer. So in order to access an over-18 service, you might only pass if you're estimated to be over 21, over 23, or over 25. In other instances, they might say that for [the purpose of] inclusion, we actually think it is proportionate that a 17-and-a-half-year-old could get into this area. So it depends on each country's views, and our system is very configurable, so they can just choose, for each country, whether they want a buffer and how many years it should be.
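A small sketch of that configurable buffer, assuming a base legal threshold of 18 and a per-jurisdiction offset applied to the estimated age; the jurisdiction names and buffer values below are invented for illustration, not Yoti's real settings.

```python
# Hypothetical per-jurisdiction buffers applied on top of the 18-year threshold
# before an estimated age is accepted. Values are illustrative only.
BUFFER_YEARS = {
    "strict_market": 5.0,      # must be estimated at 23 or older to pass an over-18 check
    "moderate_market": 3.0,    # must be estimated at 21 or older
    "inclusive_market": -0.5,  # a 17.5-year estimate is treated as proportionate
}

def passes_over_18(estimated_age: float, jurisdiction: str) -> bool:
    """Apply the jurisdiction's buffer on top of the base over-18 threshold."""
    return estimated_age >= 18.0 + BUFFER_YEARS.get(jurisdiction, 0.0)

print(passes_over_18(21.4, "strict_market"))     # False: below the 23-year bar
print(passes_over_18(17.6, "inclusive_market"))  # True: above the relaxed 17.5 bar
```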

How have you seen the business change as regulation has created a need for more of this kind of technology? What kinds of changes in Yoti's business have you seen as a result of the increasing number of regulations?

So we are effectively a reg-tech business: we're providing technology that supports regulation. We were founded in 2014, so we're just coming up to our tenth year, and we've absolutely already seen that [trend]. For example, take the gambling sector: there were already regulations around both age and knowing your customer – more [from a] financially oriented [perspective], with know-your-customer checks and anti-money laundering [processes]. The gambling sector was one of the first sectors to look at age assurance, and we have been tracking, around the world, as trust frameworks have come into place around digital identity, which are basically looking at the validity, or the parity of acceptance, of digital forms of documents with physical ones.

Because, unless you're a really well-trained document examiner at border-control level, with additional technology to help you, the fakes or counterfeits and the number of lost and stolen [documents] all mean that it's really hard. It's really hard to tell a good fake from a real one, to be able to look at documents from 190 countries and know that this really is the South Korean driving licence from this province, issued in 2019, with the right security features. So what we provide our staff with is obviously training – we recruit people that have a high level of visual acuity – and we also have technology. That's what has enabled us to continue to evolve and support regulations around identity.

We then really started to look at the inclusion angle. Yoti is a B Corporation, meaning that we look at profit, purpose and planet – the triple bottom line – and we started to think about the age-appropriate design codes before they had really even come onto the statute books. How can you be inclusive for people around the world that don't own a document or don't have access to one? That could be a minor, or someone with a controlling spouse, or someone that's been trafficked. It could be someone in the Global South that hasn't got any document with government-issued security features, or someone that might just have a birth certificate which hasn't got enough security features. There are over a billion people on the planet that fit into that category.

The UK's quite unusual in having this 24% of adults and about 33% of kids with no government-issued documents. Until you start travelling or learn to drive, you're unlikely to have a document, because we're not a country that says you must be able to show your papers in the street, which many countries around the world are. So that was why we started looking at age and at this inclusion angle. And over time we've looked at which other countries are looking at the parity of acceptance of digital identity and which countries are looking at age. We were part of the standards development for the first age standards, and following that we've looked at the next age standards coming through and the next countries, and now obviously also at AI approaches and at the ethics and the trust and safety angles of the different sectors we're working in. So the two big policy buckets really are fraud prevention and safeguarding, and the technologies we're developing support those.

Is Yoti submitting to the UK's white paper consultation on AI regulation, and what would that viewpoint be?

Yes, yes. We've actually found the Singapore approach very interesting, and we're obviously looking at the EU one, the US one, and the UK's. What's really difficult for a lot of organisations, particularly small ones, is having the capacity to respond to these as well as actually developing what you're developing and getting the oversight. We were quite early to the party, building in governance at the outset and having a policy function even when we were ten people strong – that's really quite rare for an organisation to invest in. But because we knew we were looking at this really crucial area of how people prove who they are and how old they are, both face-to-face and online, we've been looking at this governance [piece] and building the ethical framework, so that puts us in a bit of an unusual camp.

We're actually a member of techUK, the largest tech trade body, and we're continually looking at how other scaling organisations adapt to these new requirements. But absolutely, across the board, we see quite an alignment around taking a risk-based approach, looking at the principles of how you build trust, looking at questions of bias, looking at transparency and looking at standards. You might have seen the work of Luciano Floridi of the Oxford Internet Institute, who's done a really good comparison across all the different approaches.

So yes, we will submit, and we will take a view closer to the time because we're part of several trade bodies. Sometimes we find that it's good to submit in parallel, feed into [other submissions], or put in our own full one. It is quite a long consultation in the UK, and at the same time we're doing a lot – obviously, we're following the Online Safety Bill closely and we look at how the age and identity elements roll out in other countries around the world. So yeah, we will take a view closer to the time as to which elements we feed into.

How would you win an argument against a sceptic of age estimation technology?

So I think what we're seeing is that when people are offered the options, this is what they're choosing. Take the instance of OnlyFans, or what you've seen from Facebook Dating or Instagram; we have large volumes of evidence that consumers are actually choosing this.

We also spend a lot of time speaking with companies that need to meet regulations, and we spend a lot of time with the groups that are scrutinising those: privacy groups, children's groups. And then we also undertake user testing to look at and refine the experience; we worked ahead of the Age Appropriate Design Code in the ICO sandbox, creating materials that are really straightforward and simple to understand for 8-10 year olds. And by creating white papers and materials that don't try to hoodwink people, that are really straightforward, by taking the time to do these external audits, and by running roundtables with representatives from government and civil society, we're inviting in scrutiny on a regular basis. And we're always looking at what the next things are that we need to explain. For example, we've been looking at whether there is any disadvantage for people transitioning gender. We're starting to look at things around facial difference. So we're not standing still.

We work with Keele University to get feedback from data apprentices there on the actual white paper that we produce and its understandability. We did work with children peer-explaining this to other children. We've created materials with simpler terms and conditions. We worked in the ICO sandbox – there's an exit report from that – and we were the first organisation to go through a voluntary audit with the ICO. So we keep on doing this outreach. We also speak on lots of platforms and we engage; actually, today I'm with an academic group up in the North East of England looking at how this fits alongside [questions of] misinformation, disinformation, pseudonymity, anonymity and verification, specifically around the metaverse. We're part of SPRITE+, which links out to 300 different academics looking at security, privacy, resilience, identity and trust in the digital economy. So yeah, it's a job that's never quite finished, but we keep at it, and absolutely all you can do is be straightforward: publish your materials really clearly and invite that scrutiny.

We come out with white papers probably about once a year, sometimes twice. There's also a shorter exec summary, and you can see there that chart, which literally goes from age six to age 70. We go into quite a bit of detail and explain all the elements under the hood; you can read about the false positives, the false negatives, and the true positives. It goes into a lot of detail, but it is really quite a straightforward read. People frequently come back to us and say, “Wow, I could actually understand that. You know, I'm not used to seeing nuclear physics, and yet, you know, this was really clear.”

The last question we can touch on: how do you see the future of age estimation technology, both in the short term and the long term?

Obviously, there's always innovation in these areas, and I think one of the things that is really clear is the take-up. The standards around age assurance are themselves coming of age: there's an ISO standard and there's also work on interoperability through the euConsent project. More and more sectors are looking at this. You've got social media, dating, gaming, gambling, adult content, physical retail, physical gambling terminals, lottery terminals, and then you've got online e-commerce, plus the metaverse and blended worlds. That's a lot of sectors.

We're also getting into ed tech and proctoring; that is, is this really Angelina who is starting to take this exam, and is she still the person there 30 to 60 minutes later? So there are a lot of use cases. A third of the people online in Web 2.0 and 3.0 will be under 18, so going forwards, given all these regulations coming along, it's really important to know the four Cs of risk: content, contact, contract, and conduct.

There are lots of elements of risk depending on whether you're dealing with an under-18. Look at the Digital Services Act (DSA), the Age Appropriate Design Code in California and the UK, the similar copycat age-appropriate design codes coming through, or others such as eSafety in Australia: under-18s are a really important area. Can they enter into a contract? What sort of conduct – profanity, sexual innuendo – is permissible? Think of mixed gaming worlds. And what is the actual content – be that suicidal ideation, violence, anorexia, pornography, etcetera – that is deemed age-appropriate for a minor? There are a lot of different studies going on in that area, from all sorts of different angles, and similarly on gambling and addictive behaviours. There will continue to be innovation around age assurance, which is why we curate a range of methods [for companies]. We don't think there's just one silver bullet, and actually we think that consumers around the world need choice.

What is interesting is that we're seeing that the [facial age] estimations – in terms of their measurement – are actually easier to measure than some of the document-based methods, where there are more details under the hood that are harder to ascertain. For example, what are the numbers of lost and stolen documents? What are the numbers of fraudulently obtained genuines, counterfeits, etcetera? How good is the document authenticity checking? How good is the face matching? How good is the optical character recognition of documents? How good is a manual human reviewer? You have to stack all those different elements.
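To make the "stacking" point concrete, here is a toy worked example: the overall reliability of a document-based check is roughly the product of the reliabilities of its individual steps, whereas an estimation check has a single directly measured accuracy figure. Every number below is invented purely to show the arithmetic, not a measurement of any real product.

```python
# Toy illustration of stacking the steps of a document-based age check.
# All accuracy figures are invented for the sake of the arithmetic.
document_check_steps = {
    "document_is_genuine": 0.97,       # authenticity checking catches most fakes
    "face_matches_document": 0.98,     # face match between the holder and the document photo
    "dob_read_correctly": 0.99,        # optical character recognition of the date of birth
    "holder_is_rightful_owner": 0.96,  # not a lost, stolen or borrowed document
}

combined = 1.0
for step, accuracy in document_check_steps.items():
    combined *= accuracy

print(f"Combined document-check reliability: {combined:.3f}")  # roughly 0.90
print("An estimation check, by contrast, has one directly measured accuracy figure.")
```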

If it's a historic check done by, for example, a mobile phone company, you're having to look at how good that original check was and who now owns the account. With an estimation approach, it's much clearer that this is this individual, here and now, and it's a live face. So you're taking away some of those questions of ownership or authenticity.

What regulators are having to understand is: what are the elements in the steps underneath the different types of age check, and what inclusion considerations are important, given that a billion people on the planet either don't own or don't have access to a document for different reasons. That's why I think, for inclusion [purposes], the estimation approaches need to be brought into view, which takes some education. Across the board, what's needed is clarity about what's under the hood and that independent audit aspect.

Thank you for chatting with me about all this and for going into detail. I've learned a lot about how age estimation technology is moving along with regulation and I'm excited to follow future updates.

You're welcome, thanks very much for your time. 


Want to share learnings from your work or research with thousands of people working in online safety and content moderation?

Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.