Unbiased Coding & Algorithms

Recognising potential bias in algorithms is an important step towards preventing it.

At a time when many of the systems that manage our lives depend on artificial intelligence and machine learning, any bias built into the algorithms behind them can skew the outcomes we rely on in every realm of life: healthcare, education, employment, banking, investments and more. Going forward, accountability in coding is thus becoming paramount in the development of any system that affects people.

How does bias creep into algorithms?

Bias occurs in algorithms when certain parts of the population (e.g. women, the unemployed, the disabled, people of colour, LGBT people and other minority groups) are negatively or unfairly impacted by decisions or outcomes produced by software tools whose algorithms don't take their needs or particular circumstances into account. This can happen without anyone having a preconceived malicious agenda, simply because these circumstances are considered "exceptions" or haven't been fully thought through when the typical rules of the particular system were defined.

In most cases, this so-called coding bias is the unintentional result of system development practices designed with the average or typical user, beneficiary or event in mind. Being conscious that unfairness can occur is a good first step towards actively avoiding it during the algorithm design process.
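To make this concrete, one common way to surface unintentional bias is to compare how often a tool produces a favourable outcome for different groups of users. The sketch below is purely illustrative: the group labels and decisions are made up, and the 0.8 threshold is borrowed from the "four-fifths rule" used in US employment law, not from any particular product.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate per demographic group.

    `decisions` is a list of (group, approved) pairs, where
    `approved` is True if the tool granted the favourable outcome.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's.

    Values below 0.8 are commonly treated as a red flag
    (the 'four-fifths rule').
    """
    rates = approval_rates(decisions)
    reference_rate = rates[reference_group]
    return {g: rate / reference_rate for g, rate in rates.items()}

# Hypothetical output of a decision-making tool:
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
print(disparate_impact_ratio(decisions, reference_group="group_a"))
# {'group_a': 1.0, 'group_b': 0.5}  -> group_b flagged for review
```

A check like this doesn't prove a system is fair, but a ratio well below 1.0 is a prompt to investigate whether the rules were written with only the "usual" user in mind.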

Real-life examples of coding bias

MIT grad student Joy Buolamwini was working with facial analysis software when she encountered an issue: the tool didn't detect her face, because its underlying algorithm hadn't been trained to identify a wide array of skin colours and facial structures (it assumed most users would be Caucasian). Buolamwini has since been determined to fight bias in machine learning, which she has dubbed "the coded gaze" and describes in further detail in her TED talk. Other examples of the coded gaze include flawed and misrepresentative systems for ranking school teachers and a gender-biased model for natural language processing (NLP), among others.

What is the developer community doing about this?

Although we are still in the early days of detecting algorithmic bias and dealing with its consequences, thought leaders are realising the global impact this trend could have on already disadvantaged populations if it isn't addressed in a responsible and timely fashion.

Earlier this year an initiative called AI Now was formed by researchers from Microsoft and Google to study and combat what researchers and practitioners alike are beginning to recognise as an important issue. The founders of AI Now say that, though hard to detect, bias may already be present in many of the products and services we use daily.

Preventing bias when developing apps

It's easy to leave the responsibility for algorithm building and code writing to the programmers, those mystical creatures who can rarely be spotted in daylight, and let them deal with any bias they might be unwittingly or unknowingly contributing to. However, it is the ethical responsibility of every organisation, product owner or development company to be aware of the potential for bias when creating software systems, whether it affects something as seemingly innocuous as recognising skin tone or as serious as predicting unemployment rates. Addressing this potentially devastating trend should start with awareness and with conversations involving all stakeholders.

Thus, business analysts, decision makers on the client side and project leads on the agency side all need to participate in the discussion of potential bias threats with those responsible for building the algorithms themselves. Ideally, the issue should be flagged at the very beginning of a project, pointing out potential pitfalls and agreeing an action plan for how they will be resolved down the road. To make bias consideration and avoidance a genuine part of software development, it needs to be a formal part of the planning and validation processes, addressed in project documentation and referenced during the maintenance phases of the product or service lifecycle.
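One way to give bias avoidance that formal place in the validation process is an automated parity check in the test suite, so every release is gated on comparable outcomes across groups. The sketch below is a minimal illustration rather than an actual PegusApps process: the toy model, the applicant data and the 10% tolerance are all assumptions chosen for the example.

```python
import unittest

TOLERANCE = 0.10  # assumed maximum acceptable gap in approval rates

def approval_rate(predict, applicants):
    """Fraction of applicants that a predict(applicant) -> bool model approves."""
    return sum(1 for a in applicants if predict(a)) / len(applicants)

class FairnessValidation(unittest.TestCase):
    """Release gate: approval rates must be comparable across groups."""

    def setUp(self):
        # Hypothetical stand-ins for the real model and validation data:
        # a rule that approves applicants with income >= 30000, and a
        # validation set pre-partitioned by demographic group.
        self.predict = lambda applicant: applicant["income"] >= 30000
        self.groups = {
            "group_a": [{"income": 45000}, {"income": 32000}, {"income": 28000}],
            "group_b": [{"income": 31000}, {"income": 29000}, {"income": 27000}],
        }

    def test_demographic_parity(self):
        rates = {g: approval_rate(self.predict, apps)
                 for g, apps in self.groups.items()}
        gap = max(rates.values()) - min(rates.values())
        self.assertLessEqual(gap, TOLERANCE,
                             f"Approval rates diverge across groups: {rates}")

if __name__ == "__main__":
    unittest.main()
```

With this toy data the test fails (approval rates of roughly 0.67 versus 0.33), which is exactly the point: the divergence blocks the release until someone investigates whether the rule is unfairly affecting one group.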

The development team at PegusApps is actively fighting algorithm bias, first by preventing it from occurring wherever possible, and in later phases by revisiting code to ensure it isn't unjustly affecting any group of users or subjects.

Copywriter: Ina Danova
