Biased Algorithms: Can AI be Evil?

A person using smartphone while AI is scanning her face

While killer robots may exist only in science fiction, a growing number of people inside and outside the world of technology are concerned that AI can do harm through embedded and latent prejudices. AI systems have, for example, made embarrassing facial-recognition mistakes after being inadvertently trained on narrow input sets, and they may also embed bias in the hiring decisions they help make.

The most famous example occurred in 2015, when Google was forced to apologize after its new photo app labeled two black people as “gorillas.” It turned out the algorithm had been trained on a database of facial images that lacked diversity. Racist tags have also been a problem in Google Maps: for a while, searches for “nigger house” globally, and for “nigger king” in Washington DC, turned up results for the White House under former US president Barack Obama.

It is not just Google that has run into problems with biased algorithms. Flickr’s auto-tagging system came under scrutiny after it labeled images of black people with tags such as “ape” and “animal.” The system also tagged pictures of concentration camps with “sport” or “jungle gym.”

“More and more companies are using AI. The software can manage large amounts of data and react independently to inputs,” says Edna Kropp, digital engagement specialist at LivePerson, a provider of conversational commerce solutions, based in Berlin. People are only needed to train the AI, she notes, and this is exactly where the problem lies: even though artificial intelligence is an emotionless machine, it is only as unbiased as the data its trainers provide. In fact, she says, “The machines are usually programmed by white men.”

Portrait photo of Edna Kropp

People are only needed to train AI – and that’s the problem!
Edna Kropp, Digital engagement specialist, LivePerson

In response to this concern, the EqualAI (Equal Artificial Intelligence) initiative, founded by LivePerson CEO Robert LoCascio, has been gathering support. The organization is pursuing a four-pronged program: encouraging more women and people from a range of ethnicities to learn to code, enabling them to pursue degrees in technology, working with companies to eliminate bias in human- and AI-centric hiring and promotion, and identifying and eliminating bias embedded in new and existing AI systems.

Kropp says current estimates are that around 80 percent of software developers and programmers are male. “The data with which these people train artificial intelligence represent their world,” she says. For example, if a programmer has mainly white friends, he will show the AI photos of white people for facial recognition. As a result, the AI will have only this less diverse image material to fall back on and will recognize the faces of non-white people less successfully.
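To make the mechanism Kropp describes concrete, here is a minimal synthetic sketch in Python with scikit-learn. It is an illustration under invented assumptions, not a model of any real face-recognition system: two groups occupy different regions of feature space with different decision rules, and the training set is heavily skewed toward group A.

```python
# Minimal synthetic sketch: skewed training data produces unequal accuracy.
# All groups, rules, and numbers here are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def sample(group, n):
    """Toy stand-in for face embeddings plus a binary label per group."""
    if group == "A":
        X = rng.normal(loc=0.0, size=(n, 2))
        y = X[:, 0] > 0.0          # group A follows one decision rule
    else:
        X = rng.normal(loc=3.0, size=(n, 2))
        y = X[:, 1] > 3.0          # group B follows a different rule
    return X, y.astype(int)

# Skewed training set: 990 examples of group A, only 10 of group B.
Xa, ya = sample("A", 990)
Xb, yb = sample("B", 10)
model = RandomForestClassifier(random_state=0)
model.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced held-out data exposes the disparity the article describes.
for group in ("A", "B"):
    Xt, yt = sample(group, 2000)
    print(f"group {group}: accuracy = {model.score(Xt, yt):.2f}")
```

In a typical run, accuracy for group A sits well above 90 percent while group B trails markedly: the model has simply never seen enough of group B to learn its pattern.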

The problems do not stop with racial bias within the AI, notes Kropp: by mirroring the world in which the predominantly white, male programmers live, many other distortions can arise.

Biased Algorithms: AI Needs Better Data

“The solution to this problem is obvious: AI needs better data, more data, and, above all, more diverse data,” says Kropp. This will only happen when people from different social and cultural contexts program such machines. Unfortunately, too few people with these backgrounds have so far decided on a career in software development. “The EqualAI initiative is working to ensure that more women and people from minorities are trained in the technology,” says Kropp.
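As a hedged follow-up to the earlier synthetic sketch, retraining the same toy model on a balanced sample illustrates Kropp's point; the data and numbers remain entirely invented.

```python
# Follow-up to the previous sketch: the same toy setup, retrained on a
# balanced, more diverse sample. Again purely synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def sample(group, n):
    if group == "A":
        X = rng.normal(loc=0.0, size=(n, 2))
        y = X[:, 0] > 0.0
    else:
        X = rng.normal(loc=3.0, size=(n, 2))
        y = X[:, 1] > 3.0
    return X, y.astype(int)

# Balanced training set: 500 examples from each group.
Xa, ya = sample("A", 500)
Xb, yb = sample("B", 500)
model = RandomForestClassifier(random_state=0)
model.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

for group in ("A", "B"):
    Xt, yt = sample(group, 2000)
    print(f"group {group}: accuracy = {model.score(Xt, yt):.2f}")
```

With representative training data, both groups should reach comparably high accuracy in this toy setting.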

On a similar note, Stephane Rion, senior deep-learning scientist at Teradata in France, says that a key aspect of AI implementations within financial organizations is transparency: “More and more banks and financial institutions are focusing their efforts not only on developing the most performant predictive models to catch fraud or agree on a loan but also on understanding why a model made a specific decision.” In the area of deep learning, neural networks can have a large number of neurons and parameters that affect the final decision; being able to understand this is vitally important for a bank when it comes to meeting regulations or even running an audit, he explains.
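The article does not say which transparency techniques these institutions use, but one common illustrative approach is permutation importance: shuffle one input at a time and measure how much the model's accuracy drops. Below is a minimal sketch on an invented loan-approval dataset, using scikit-learn's permutation_importance; every feature name and rule here is hypothetical.

```python
# Hedged sketch of permutation importance on a made-up loan-approval model.
# Feature names and the approval rule are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
n = 5000
income = rng.lognormal(mean=10.0, sigma=0.5, size=n)
debt_ratio = rng.uniform(0.0, 1.0, size=n)
age = rng.integers(21, 70, size=n)
# Hypothetical ground truth: approval depends on income and debt ratio only.
approved = ((np.log(income) > 10.0) & (debt_ratio < 0.6)).astype(int)

X = np.column_stack([income, debt_ratio, age])
model = RandomForestClassifier(random_state=0).fit(X, approved)

# Shuffle each feature in turn and record the resulting drop in accuracy.
result = permutation_importance(model, X, approved, n_repeats=10,
                                random_state=0)
for name, score in zip(["income", "debt_ratio", "age"],
                       result.importances_mean):
    print(f"{name:>10}: importance = {score:.3f}")
```

A near-zero importance for age is evidence that the model ignores that input, which is the kind of concrete statement an auditor or regulator can act on.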
