Aviva Insurance Limited
PO Box 4, Surrey Street, Norwich, NR1 3NG
+44 (0)1603 622200
https://broker.aviva.co.uk/
  • About Aviva

    Aviva Insurance Limited is one of the UK’s leading insurance companies and part of the Aviva group, which has 34 million customers worldwide. Aviva has been in the insurance business for more than 300 years. In UK commercial lines, the insurance market remains challenging for brokers and customers due to ongoing economic conditions, so we are focusing on improving our processes to ensure we provide commercial customers with insurance cover at an acceptable price. Insurance brokers also recognised our customer service by voting us Insurance Times General Insurer of the Year in 2012, for the second year running. youTalk-insurance shares Aviva insurance news and video.

Are we teaching machines to discriminate?


Sexism, racism, bigotry – the thing about unconscious bias is you don’t know you’re doing it. But are you? 

“Is that a girl or a boy?”

Children ask this question a lot. About toys. About cartoon characters. Sometimes, embarrassingly, about people on the street.

As they grow, they ask less. They make assumptions based on what they learn about the world. But assumptions can be wrong. They can lay the foundation for discrimination and unconscious bias. 

Unconscious bias is when our background, culture and experiences lead us to make automatic judgements and assessments about people and situations.

What have children and algorithms got in common?

When Google introduced a new Gmail feature that auto-completes sentences, they blocked gender-based pronouns, like ‘him’ and ‘her’.

Why? Because when they typed “I am meeting an investor next week” Smart Compose, the algorithm behind their new feature, suggested the follow-up question: “Do you want to meet him?”

Smart Compose learns to write from data it’s exposed to, for example on webpages. In this case, data taught Smart Compose that investors are more likely to be men than women. Even if that’s true, it’s easy to see why Google wanted to avoid perpetuating it.
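To see how this can happen, here is a toy sketch in Python – illustrative only, and not how Smart Compose actually works. A simple suggester recommends whichever pronoun it has seen most often alongside a word; if its training text mentions investors with ‘he’ more often than ‘she’, that skew is baked straight into its suggestions.

```python
from collections import Counter

# Toy training text (made up for illustration) with a built-in skew:
# 'investor' appears with 'he' twice and 'she' once.
training_sentences = [
    "the investor said he would call back",
    "an investor confirmed he is attending",
    "the investor said she had more questions",
]

def suggest_pronoun(word, sentences):
    """Return the pronoun seen most often in sentences containing `word`."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t in ("he", "she"))
    return counts.most_common(1)[0][0] if counts else None

# The skew in the training text wins out.
print(suggest_pronoun("investor", training_sentences))  # -> he
```

Nothing in that code mentions gender at all; the bias comes entirely from the data it was given.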

That’s the thing with algorithms. Like children, they learn from the world around them. So what’s to stop them developing bias?  

Algorithmic bias is when algorithms reflect human biases – whether picked up from their human designers or learned from data through machine learning, including techniques known as ‘deep learning’.

Sexist, racist, bigoted! 

As adoption of algorithms increases, more examples of algorithmic bias are coming to light.

Software used to shortlist job candidates has been found to favour not just men over women, but ‘white-sounding’ names over ‘black-sounding’ names.

A risk-assessment algorithm used by a US court flagged black defendants at almost twice the rate of white defendants. Worse, it underestimated the risk of white defendants being repeat offenders.

Facial recognition software can correctly identify the gender of white males 99% of the time. But accuracy drops – as low as 35% in some cases – for women, people of colour and ethnic minorities.

"Google ‘professional hairstyles at work’ and you’ll find images of mainly white women. In contrast,  Google ‘unprofessional hairstyles at work’ and you’ll find lots of images of black women.

It’s unlikely that anyone deliberately created these results. But the way those algorithms are trained can potentially lead to unintended consequences."

Orlando Machado

How to avoid unintended consequences

Companies are learning the value of interrogating data for bias before feeding it to an algorithm. Developers can also set constraints that stop algorithms from teaching themselves to be biased.
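As an illustration of that first step, here is a minimal sketch assuming a hypothetical historical dataset of shortlisting decisions; the column names are assumptions for this example, not a real dataset. Comparing outcome rates across groups before training can flag data that a model would otherwise learn to imitate.

```python
import pandas as pd

# Hypothetical historical shortlisting decisions (illustrative values only).
history = pd.DataFrame({
    "gender":      ["m", "m", "m", "m", "f", "f", "f", "f"],
    "shortlisted": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Shortlisting rate per group in the historical decisions.
rates = history.groupby("gender")["shortlisted"].mean()
print(rates)  # m: 0.75, f: 0.25

# A large gap is a warning sign: a model trained on this data may simply
# learn to reproduce the past pattern rather than assess candidates fairly.
gap = rates.max() - rates.min()
if gap > 0.2:  # threshold chosen purely for illustration
    print(f"Warning: shortlisting rates differ by {gap:.0%} between groups")
```

A real review would look at far more than one column – including proxies for protected characteristics, such as postcode or school – but the principle is the same: examine what the data would teach before letting an algorithm learn from it.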

Trust in deep learning for social decision-making, such as deciding who to hire, is waning. Humans can’t always understand how deep learning arrives at its decisions, which makes it difficult to spot where bias occurs and stop it happening.

“At Aviva, we have lots of controls in place to stop major unintended consequences. The algorithms we build are often limited in what they offer, such as a pricing algorithm, which generates a price for a customer. It doesn’t do anything other than generate that price.

There’s a process in place to check that data is representative, and when the decisions generated by algorithms are so tightly controlled – as in pricing – there’s less to worry about.

We clearly still do need a process that governs our algorithms. There’s often no substitute for a diverse team from different backgrounds, as it’s extremely important to have a range of human perspectives to sanity check our processes.” 

Orlando Machado

Who’s at fault here? 

Millions of us unconsciously perpetuate deep-rooted assumptions every day. And what we write, what we share, what we click on – they all feed algorithmic bias.

“When algorithms are less constrained, the ethical issues can become complex. In the Google hairstyles example, search rankings are to an extent based on users’ clicking behaviour…and also the editorial decisions that have led to certain pieces of content being published. Should Google then even correct it? It might be reflective of other biases, with Google simply reporting what’s out there.”

Orlando Machado

If algorithmic bias is an echo of our own unconscious bias amplified by machines, it’s not an algorithmic problem, is it? It’s a human problem, and it’s age-old.

It was Zero Discrimination Day recently – a global opportunity to promote equality. Because we all have a part to play to end discrimination and tackle bias, human or algorithmic.
