Some twenty-one years ago, a computer beat then Grandmaster and world chess champion Garry Kasparov, despite being widely regarded as the 'lesser player'. While that may have seemed a one-off at the time, much has happened since to suggest there is more to advancing technology. Computers now outsmart humans at a growing range of tasks, and there is a real risk that their judgments will harden into a kind of machine bigotry. As technological advancement accelerates, there is increasing concern about how far computer technology can go. Built around automated decision-making systems, computers are swiftly assuming control over very significant aspects of our lives.


Bank employees are increasingly forced into a lesser role, obeying computer commands rather than using common sense to make important decisions. This hands the machines the 'free will' to decide key aspects of our financial lives, such as who qualifies for a loan or insurance policy and how creditworthy an individual is. These decision-making machines are highly sophisticated, super-technical computer programs that run on complex algorithms in ways that humans can hardly understand.

Once programmed for a specific use in agencies and institutions such as insurance firms, hospitals, banks, and marketing companies, these systems draw on the data they have been fed to make crucial decisions, becoming self-appointed arbiters of human lives. They reach conclusions that amaze even experts in the technology field, and some of these 'decisions' show signs of irrational, bigoted verdicts that could affect people for many years. There is, in fact, real fear that an 'intelligence explosion' is looming.

While no one knows how far or for how long these intelligent machines will continue to develop, it is fairly certain that, as development continues, it will prove either the best or the worst thing ever to happen to the human race. It is therefore vital that the goal of growth in computing and artificial intelligence be to benefit humanity rather than to create undirected, uncertain artificial intelligence. It may take decades to get this right, and until then we can only imagine how much impact computer bigotry could have on organizations and individuals.

Another major cause for concern is employment. Individual qualifications may soon come under heavy scrutiny, especially where robots are thought to deliver more efficiently. Unless there is an effective strategy for distributing wealth evenly, this will leave many people poor and miserable. And even when the few who remain are considered for a job, the decision-making abilities of these systems may prove inadequate.

Although laws have been introduced across Europe to make firms that rely on automated systems more accountable, experts worry that this legislation will prove ineffective. More worrying still, British police forces are beginning to rely on advanced computer technology. In May 2017, Durham Police began using a system known as the Harm Assessment Risk Tool (HART) to assess the re-offending risk of individuals in custody: whether they are at low, medium, or high risk of breaking the law again.

Although the use of HART is currently restricted to providing 'orientation and advice' to subjects through a programme known as Checkpoint, American and British lawyers fear there is a risk of the system becoming biased along lines of race or class. HART bases its decisions on 34 pieces of information, ranging from age, gender, and criminal history to age at first offence and type of current offence; even so, critics point to reported cases of racial bias from a similar tool in Wisconsin, United States, where experts observed that black defendants were twice as likely to be flagged as 'high risk'. This meant they were more likely to remain in jail without a cogent reason.

Since these systems base their decisions on accumulated data, concerns will rage for years to come over the rationale behind their 'verdicts'. The systems also take offenders' postcodes into account, exposing poorer people to harsher treatment as 'high risk' individuals. Experts therefore have little confidence in predictive systems, which are hardly known for providing consistent judgments. Until they can judge more consistently, and prove more reliable than police officers, robotic systems must be scrutinized for any potential biases.
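To see how a postcode can quietly become a proxy for class or race, consider the following toy sketch. It is a deliberately simplified illustration, not HART's actual model: the feature weights, postcode values, and thresholds are all invented for the example. Because where people live correlates with income and ethnicity, a model that never sees those attributes directly can still penalize the groups they describe.

```python
# Hypothetical illustration only -- not the real HART algorithm.
# All weights, postcodes, and thresholds below are invented.

DEPRIVED_POSTCODES = {"DH1 9ZZ", "DH7 8XX"}  # invented example areas

def toy_risk_score(age, prior_offences, postcode):
    """Return a score in [0, 1]; higher means 'higher risk'."""
    score = 0.1                      # baseline
    score += 0.05 * prior_offences   # criminal history
    if age < 25:
        score += 0.1                 # youth weighting
    if postcode in DEPRIVED_POSTCODES:
        score += 0.2                 # the proxy: penalizes the address itself
    return min(score, 1.0)

def band(score):
    """Map a score to the low/medium/high bands the article describes."""
    return "high" if score >= 0.5 else "medium" if score >= 0.3 else "low"

# Two offenders with identical histories, different addresses:
a = toy_risk_score(age=30, prior_offences=2, postcode="DH1 9ZZ")
b = toy_risk_score(age=30, prior_offences=2, postcode="DH1 1AA")
print(band(a))  # medium -- same record, but a 'deprived' postcode
print(band(b))  # low
```

Two people with identical records land in different risk bands purely because of where they live, which is exactly the pattern critics warn about.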

In conclusion, whether robotic systems are used in hospitals for treatment, for financial analysis, or for crucial legal decisions, their proliferation and use must be checked for bias. The rate at which decision-making systems are spreading, and the trust organizations place in them, are a cause for concern today. Fears that such systems will assume an even bigger role in future are not to be dismissed. Proper checks and regulations should be in place to avoid uncertainty and inadequacy in big organizations; the concern is genuine and must be taken seriously if impending computer bigotry is to be avoided.

Author's Bio:

Nick Burbridge is the owner of Elive, a managed services provider based in Auckland, New Zealand. He has a passion for computer hardware and for travelling the world.