
Last month I was invited to speak to students at our local Victoria University who are studying the Future of Work course, co-taught by the Faculty of Humanities and Social Sciences and the Victoria Business School. Impressed by this innovative convergence of humanities, social sciences and business, I gave them an overview of our industry and how it has changed over my working lifetime, before moving on to my very wicked problem.

Luckily, talking about algorithms lends itself to plenty of film references, so I was able to provide context for the non-technical members of the audience – although only a quarter of the room had seen Prometheus, so my example of distribution to the medical industry didn’t land as well as expected. Below is an overview of the talk.

How do we solve the problem of algorithmic bias?

Algorithmic bias occurs when a computer system behaves in ways that reflect the implicit values of the humans involved in collecting, selecting, or using its data.

We are only just beginning to discover how extensive this bias may be. AI and machine learning technologies can analyse and process your data, predict outcomes, make decisions and operationalise processes in a cost-effective, accessible way. These platforms are increasingly available to operators who have no education or training in research methodologies or statistical science. The risk is that, knowingly or unknowingly, prevalent biases will be baked into the decisions made and actions taken.
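For the more technical readers, here is a small Python sketch of how that baking-in happens. The hiring scenario, the data and the column names are all invented for illustration – the point is simply that a model trained on biased historical decisions will happily reproduce the bias.

```python
# A minimal, hypothetical sketch: a classifier trained on biased historical
# hiring decisions learns to reproduce that bias. All data here is synthetic
# and the names ("group", "score", "hired") are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two groups with identical underlying ability scores.
group = rng.integers(0, 2, n)     # e.g. a protected attribute, 0 or 1
score = rng.normal(50, 10, n)     # genuine qualification signal

# Historical decisions: group 1 was held to a higher bar (the baked-in bias).
threshold = np.where(group == 1, 60, 50)
hired = (score > threshold).astype(int)

# Train on the biased history, including the group attribute as a feature.
X = np.column_stack([group, score])
model = LogisticRegression().fit(X, hired)

# The model now "recommends" group 0 far more often for the very same score.
test_score = np.full(1000, 55.0)
for g in (0, 1):
    X_test = np.column_stack([np.full(1000, g), test_score])
    rate = model.predict(X_test).mean()
    print(f"group {g}: predicted hire rate at score 55 = {rate:.2f}")
```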

US examples include teacher-ranking systems, criminal sentencing algorithms, and gender bias in pre-employment tests.

How do we solve this? How do we identify whether a programme or algorithm contains bias? How do we ensure invisible programmes don’t impact the way we live and work?

Are dystopian future novels on the right path?

It’s a misconception that AI is objective because it relies on mathematical computations; the construction of an AI system is an inherently human-driven process. It is unavoidable that such systems will contain bias. – WEForum

Dystopian novels and movies have painted a range of futures in which humans are segmented by status or wealth (what constitutes wealth changes), race or gender, or are marginalised by machines. In both scenarios, computers of various forms carry out the profiling, identification, segmentation and enforcement activities depicted.

Recently we read an article about a man who was fired by a computer, followed up by the supposition that “better AI could have saved him“. In 2016 an investigative journalist found that the sentencing recommendation software used by courts and judges in the USA flagged black offenders as likely to re-offend at twice the rate of white offenders – this article spells out some shocking examples, along with evidence that this bias overrode other factors, including past convictions.

When the data we feed the machines reflects the history of our own unequal society, we are, in effect, asking the program to learn our own biases. – Guardian

What can be done?

At a high level, the concepts I discussed with the students are listed below. They asked great questions about what society can do to avoid bias in our everyday lives, and I pushed them to think about what technology companies can do, including:

  • being attentive to context;
  • consciously understanding the purpose for which data is collected;
  • clarifying the questions we are asking of data, and auditing and explaining algorithms (see the sketch after this list);
  • testing assumptions and features for referencing behaviour;
  • ensuring teams who are developing technology are inclusive and diverse;
  • and incorporating equity, ethics and justice into the core values of systems.
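To show what auditing an algorithm can look like in practice, here is a minimal Python sketch of one common check: comparing selection rates across groups and the so-called 80% rule. The data, function names and threshold are illustrative assumptions, not a complete audit.

```python
# A minimal auditing sketch, assuming you already have a decision (1 or 0) and
# a protected attribute recorded for each person. The 0.8 rule-of-thumb and all
# names here are illustrative assumptions, not a full fairness audit.
import numpy as np

def selection_rates(decisions: np.ndarray, group: np.ndarray) -> dict:
    """Positive-outcome rate per group (e.g. share hired, share approved)."""
    return {g: decisions[group == g].mean() for g in np.unique(group)}

def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Lowest group rate divided by highest; values below ~0.8 warrant scrutiny."""
    rates = selection_rates(decisions, group)
    return min(rates.values()) / max(rates.values())

# Example run on synthetic decisions skewed against group 1:
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
decisions = (rng.random(1000) < np.where(group == 1, 0.3, 0.6)).astype(int)

print(selection_rates(decisions, group))
print(f"disparate impact ratio: {disparate_impact_ratio(decisions, group):.2f}")
```

A check like this is only a starting point, but it turns “is our algorithm biased?” into a question a team can answer with data rather than instinct.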

None of this touches on everyday operational activities, so I will follow up with a series on capability development and bias awareness for analysts and teams. Until then, this video provides a great overview of the topic – it is sourced from here. Vic.

 

Victoria spends much of her time focusing on Digital Inclusion, Digital Literacy and Digital Rights.
You can read her other blogs here.

 
