It’s been an eventful few weeks news-wise, with many high-profile organisations admitting that their algorithms contain bias; even our government has opened the door a wee crack, acknowledging there might be an issue within their programmes. Here is a summary of what’s been in the news and the lessons we can learn from others:
Amazon ditched AI recruiting tool that favored men for technical jobs
Bias in algorithms can extend to self-enhancing algorithms too. By capturing and analysing Amazon’s historical employment patterns, their resume-review tool, designed to streamline recruitment, reinforced an existing organisational gender bias and selected male candidates in preference to female ones. Amazon found the software penalised the use of the word “woman” and scored masculine words favourably. “Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs.” Good on Amazon for being transparent about their biased algorithm.
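To make that mechanism concrete, here is a minimal toy sketch – not Amazon’s actual system, and the resumes, labels and model are entirely invented – of how a text classifier trained on historically skewed hiring decisions can learn to penalise a gendered word:

```python
# Toy illustration (not Amazon's system): a bag-of-words classifier
# trained on historically biased hiring labels learns to penalise a
# gendered word. All data here is invented for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny synthetic "resumes"; past decisions skew against "women's".
resumes = [
    "captain of chess club, python developer",          # hired
    "python developer, hackathon winner",               # hired
    "captain of women's chess club, python developer",  # rejected
    "women's coding society lead, hackathon winner",    # rejected
]
hired = [1, 1, 0, 0]  # biased historical labels

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the token "women" picks up a negative
# coefficient purely from the skewed labels, not from any real signal
# about candidate quality.
for token, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{token:12s} {weight:+.3f}")
```

In this toy setup the negative weight on “women” comes entirely from the biased labels – exactly the kind of self-reinforcing pattern described above.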
Microsoft tackles ‘horrifying’ Bing search results
I found this one quite confronting, and am mildly surprised at the lack of backlash in the public domain. Basically, Microsoft have admitted that their search engine Bing returned racially biased results, including ranking white supremacist sites above others in result sets. While Microsoft have acknowledged and rectified the behaviour of their search engine on this front, they have not disclosed how this happened or whether the engine contains other biases. The other surprising insight is that Microsoft have 24% of the search engine market, so this isn’t a marginal issue.
Journalist Chris Hoffman discovered Bing suggested racist topics when he looked up words such as “Jews”, “Muslims” and “black people”.
Bing also ranked widely debunked conspiracy theories among the top suggestions for other words.
The rise of ‘pseudo-AI’: how tech firms quietly use humans to do bots’ work
Not a new topic: this Guardian article indicates that “manual or mechanical turk” practices, where humans pose as a bot or AI function, are more common than we think. One commentator describes this as “It’s essentially prototyping the AI with human beings”. I must say I have some sympathy with this approach: prove the processes before automating them using machine learning. When a story like this broke in NZ last year – Zach the Medical AI – there was outrage at what was eventually exposed as a scam. In that instance humans were mimicking the AI function, yet the founders continued to insist they had invested significantly in technology infrastructure. The outrage was at the lack of transparency and the ethical issues associated with handling patient medical records in this way.
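Done transparently, “prototyping the AI with human beings” could look something like the hypothetical sketch below: a human operator answers each request, the caller is told so, and every exchange is logged as training data for a future model. All names and the logging format here are invented.

```python
# Hypothetical sketch of transparent "human behind the bot" prototyping:
# a human answers each request, the caller is told a human is involved,
# and every exchange is logged as a future training example.
import json
from datetime import datetime, timezone

class HumanBackedAssistant:
    """Routes every request to a human operator, and says so."""

    def __init__(self, log_path="training_log.jsonl"):
        self.log_path = log_path

    def answer(self, question: str) -> str:
        # Transparency: the caller knows a human is in the loop.
        print("Note: this response is drafted by a human operator.")
        response = input(f"[operator] {question}\n> ")
        self._log(question, response)
        return response

    def _log(self, question: str, response: str) -> None:
        # Each exchange becomes a labelled example for the eventual model.
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "question": question,
            "response": response,
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    assistant = HumanBackedAssistant()
    assistant.answer("What are your opening hours?")
```

The point of the pattern is that the humans are disclosed and the data handling is deliberate – the opposite of what caused the outrage in the Zach case.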
What the NZ Government has said
I am thrilled our government is taking algorithmic bias seriously and has applied resources to investigate. The team here at OptimalBI work with, develop and enhance algorithms deployed in Government agencies, so we are aware of a range of algorithms which face the challenges I have previously outlined in the blog – Bias A Capability Challenge. To be honest, we would have liked a stronger statement from the government; the specific example below is just too weak. A mandate to publish explanations in clear and simple terms should be in place for all government algorithms used in decision making, for transparency and trust reasons:
There is also scope to ensure that all of the information that is published explains, in clear and simple terms, how algorithms are informing decisions that affect people in significant ways.
Other worthy reading
I found this article, with its hypothesis that gender bias will become more pervasive, interesting – Why artificial intelligence will have very human frailties.
An interesting Q&A from a Data Scientist at Microsoft (ironic given above) – Algorithms and Bias Q and A with Cynthia Dwork.
Our own Melanie Butler wrote this piece last week – A thought or two on bias.
Finally, if you haven’t read it yet, here is the first blog post in this series – Bias a very wicked problem.
Hope this has broadened your thinking on the use of algorithms and bias. Vic