Recognising faces

How many of the cats pictured above are real cats?

The correct answer is one (Luna on the right).

How many of the people pictured are real people?

The correct answer is none.

You too can generate faces of people who don’t exist, or all sorts of other things that don’t exist. Sometimes you can tell they aren’t real because there is something weird in the background, or something on the face that doesn’t fit. Mostly, however, you can’t tell (or at least I can’t).

The this-person-does-not-exist website went viral in 2019. I am seriously behind the times, as I only heard about it in a conversation with a group of researchers this week, when we were discussing Stable Diffusion and ChatGPT as new instances of Artificial Intelligence (AI). The software behind the image generation is called StyleGAN, where ‘GAN’ stands for ‘generative adversarial network’.

A GAN (a form of AI) has two parts – a generator network and a discriminator network. The generator creates new images from random input, while the discriminator is trained on real images together with the generator’s fakes, learning to tell the two apart. The generator, in turn, learns from the discriminator’s feedback, gradually getting better at producing convincing fakes. Any image the discriminator cannot identify as fake is regarded as a success.
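The two-network loop described above can be sketched in a few lines. This is a toy illustration, not StyleGAN: the “images” are single numbers drawn from a Gaussian, and all parameter names, learning rates and network shapes are my own assumptions. The adversarial structure, though, is the same: the discriminator learns to tell real from fake, and the generator learns to fool it.

```python
# Toy 1-D GAN sketch (illustrative only; real GANs use deep networks on images).
# Real "data" are samples from N(4, 1); the generator maps noise z ~ N(0, 1)
# through fake = a*z + b; the discriminator is logistic regression d(x) = sigmoid(w*x + c).
import numpy as np

rng = np.random.default_rng(0)

a, b = 1.0, 0.0          # generator parameters (assumed initial values)
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64     # illustrative hyperparameters

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)   # the "real images"
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b                     # the "generated images"

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator
    d_fake = sigmoid(w * fake + c)
    grad = (1 - d_fake) * w              # gradient of log d(fake) w.r.t. fake
    a += lr * np.mean(grad * z)
    b += lr * np.mean(grad)

print(f"generated mean ~ {b:.2f} (data mean is 4.0)")
```

As training alternates, the generator’s output distribution drifts toward the real data, the point at which the discriminator can no longer tell the two apart – exactly the “success” criterion in the paragraph above.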

People are already using faked images of humans on social media, including dating sites. A friend told us about a recent instance in which his friend (let’s call him John) met up in person with a date (let’s call her Mary) whom he had found online. Mary looked nothing like her online image and didn’t understand why John was unhappy about this. Already, people are not only using fake images (and stories), they don’t understand why other people might be upset by them.

One could think this is relatively minor: once you meet a person, you are going to figure out who they are in reality, as opposed to their (potentially faked) online persona. However, many people carry out long-term relationships of all sorts – personal or work – online, without ever meeting in person. Given the growing capabilities of image generators and text generators, one can start to imagine online text conversations in which some of the participants are computer generated. Actually, it’s already more real than that: Soul Machines is creating avatars that interact with you online and hold a conversation with appropriate facial expressions – have a look for yourself. These avatars aren’t yet completely plausible, but how long is that going to take?

Tied to the terrifying prospect that, very shortly, we won’t know whether we are interacting in digital spaces with humans or machines, is the potential for the same technology to track us … everywhere. Facial recognition technology (FRT) is the flip side of image generation technology. We’ve recently heard that NZ supermarkets have been using FRT for four years, supposedly as a crime prevention measure. A quick scan of the internet shows how prevalent the use of FRT is worldwide, as documented in an article by Comparitech.

Comparitech looked at the use of FRT in the 100 most populated countries. Relatively unsurprisingly, given media coverage, the greatest use of FRT is in China. Only seven countries were not using FRT: Burundi, Cuba, Haiti, Madagascar, South Sudan and Syria are not, most likely because of cost rather than ethics, while Belgium has banned its use. Luxembourg is the only other country to have banned FRT (but isn’t in the top 100 by population). 70% of governments and 70% of police forces are using FRT on a large scale, including the Australian police force (NZ wasn’t included as it isn’t in the top 100 by population).

Collection of biometric data for FRT is legal in New Zealand, provided you are told when and why it is taking place (collection of such data is covered by the Privacy Act). There need to be signs telling supermarket shoppers if their images are being recorded for crime prevention purposes. However, it is questionable how clearly the supermarkets are communicating their practices to customers (have you seen any of these signs?), and how long they are keeping images for.

As a result of these concerns, the Office of the Privacy Commissioner is carrying out a publicly notified consultation (submissions closed in September) to recommend any regulatory change regarding the use of FRT (its paper appears to suggest none is required). However, surely the use of FRT deserves far wider debate than a paper for public comment that most people have never heard of. The power of FRT is already considerable, and is growing ever more rapidly. It’s very handy when your phone recognises your face and unlocks, but the potential for being ‘recognised’ by AI as a criminal, then picked up by robotic police and carted away in a self-driving car, starts to seem possible. Is automated recognition of people, and consequent action, something we want embedded in our society? Surely this is a social issue of a scale that begs for societal debate? At present we aren’t being offered any.

Published by janecshearer

I'm a self-employed life enthusiast living in Gibbston, New Zealand
