Herbert Simon’s approach to decision making (a precursor to AI systems) and how social biases can be amplified in Artificial Intelligence systems
Simon’s approach to decision making essentially consisted of three main assumptions:
· Decisions are not made by agents with perfect rationality; they are made by agents with bounded rationality.
· The quality of decisions varies as a function of the expertise of the decision maker.
· To understand decision making, it is paramount to investigate the cognitive processes involved; that is, an analysis based on performance only is not sufficient.
With Artificial Intelligence and Machine Learning, we can address all three of these constraints to a certain degree, depending on the underlying algorithm.
Bounded Rationality
Bounded rationality says you don’t know everything about every product. And if you spent all day thinking through all the financial, health, and taste outcomes of your choice, you would waste an enormous amount of valuable time and effort. So, instead, bounded rationality says you make an optimal choice given your limited time, effort, and information.
Building on this, we can apply the power of Machine Learning to the decision-making process, which over time works not only faster but more accurately. As the data grows, the underlying algorithm is able to make increasingly consistent decisions.
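A minimal sketch of this point, using a synthetic dataset and a logistic-regression model (both illustrative assumptions, not from the article): decision quality stabilizes as the model sees more data.

```python
# Sketch: decision quality becomes more consistent as training data grows.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# A synthetic "decision" problem: features in, a yes/no choice out.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on progressively larger subsets and watch test accuracy stabilize.
for n in (100, 1_000, len(X_train)):
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>5} examples -> test accuracy {acc:.3f}")
```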
Expertise
For example, it is impossible for a few expert financial advisors to handle all clients, so in turn they must delegate. With delegation, the quality of advice given to each individual investor is not consistent and may vary. With robo-advisors powered by AI and Machine Learning, by contrast, all clients can benefit from the same level of expertise, and with continuous improvement they benefit equally without needing a human expert in person.
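To make the consistency point concrete, here is a deliberately simple, hypothetical robo-advisor sketch. The allocation rules and thresholds are illustrative assumptions (the "110 minus age" heuristic is a common rule of thumb, not a real advisory model), but the key property holds: every client is scored by the same codified rules.

```python
# Hypothetical robo-advisor sketch: the same rules for every client,
# unlike delegation to human advisors of varying skill.
def allocate_portfolio(age: int, risk_tolerance: str) -> dict:
    """Return a stock/bond split from two simple client inputs."""
    base_stock = max(0, min(100, 110 - age))  # "110 minus age" heuristic
    adjustment = {"low": -15, "medium": 0, "high": 15}[risk_tolerance]
    stock = max(0, min(100, base_stock + adjustment))
    return {"stocks_pct": stock, "bonds_pct": 100 - stock}

print(allocate_portfolio(age=30, risk_tolerance="high"))  # {'stocks_pct': 95, 'bonds_pct': 5}
print(allocate_portfolio(age=60, risk_tolerance="low"))   # {'stocks_pct': 35, 'bonds_pct': 65}
```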
Cognitive Processes
For humans, the cognitive process of analyzing huge and complex data takes time, even with existing tools. Machine Learning, combined with big-data technologies and the computing power of modern neural-network infrastructure, can speed up data-driven decision making to a great extent.
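As a rough illustration (synthetic data and a small scikit-learn neural network, both assumptions for the sketch): once trained, a model can score a volume of records in seconds that a human analyst could not review in days.

```python
# Sketch: a trained neural network scoring tens of thousands of records.
import time
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=50_000, n_features=50, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
model.fit(X[:5_000], y[:5_000])  # train on a small labeled slice

start = time.perf_counter()
decisions = model.predict(X)     # decide on all 50,000 records at once
print(f"{len(decisions)} decisions in {time.perf_counter() - start:.2f}s")
```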
As is clear, many modern systems apply these principles to build AI systems that solve complex problems and power modern businesses. However, these AI systems are not, by themselves, free of social biases and prejudice.
Biases in AI systems
As per the article by James Manyika, “What Do We Do About the Biases in AI?” (https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai):
AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas. For example, as the investigative news site ProPublica has found, a criminal justice algorithm used in Broward County, Florida, mislabeled African-American defendants as “high risk” at nearly twice the rate it mislabeled white defendants. Other research has found that training natural language processing models on news articles can lead them to exhibit gender stereotypes.
Bias can creep into algorithms in several ways. AI systems learn to make decisions based on training data, which can include biased human decisions or reflect historical or social inequities, even if sensitive variables such as gender, race, or sexual orientation are removed. Amazon stopped using a hiring algorithm after finding it favored applicants based on words like “executed” or “captured” that were more commonly found on men’s resumes, for example. Another source of bias is flawed data sampling, in which groups are over- or underrepresented in the training data. For example, Joy Buolamwini at MIT working with Timnit Gebru found that facial analysis technologies had higher error rates for minorities and particularly minority women, potentially due to unrepresentative training data.
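The findings cited above rest on a simple kind of audit: compare a model’s error rate across demographic groups. A minimal sketch of that comparison follows; the data, group labels, and column names are all hypothetical.

```python
# Sketch: auditing a model's error rate per group (hypothetical data).
import pandas as pd

# One row per case: group label, true outcome, model's prediction.
audit = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "B"],
    "actual":    [1,   0,   0,   1,   0,   1,   0,   0],
    "predicted": [1,   0,   1,   1,   1,   0,   1,   0],
})

audit["error"] = audit["actual"] != audit["predicted"]
# A large gap in error rate between groups is the kind of signal
# ProPublica and Buolamwini/Gebru reported.
print(audit.groupby("group")["error"].mean())
```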
A Few Steps to Mitigate Biases
1. Correct labeling of data and a clear scope for the data with respect to the purpose of the system. Data collected and labeled for one purpose must not be reused recklessly for another AI system.
2. Continuous testing of, and feedback on, the decisions produced by AI systems, especially by involving the parties impacted by those decisions.
3. Making the algorithm aware of the most prevalent biases in society so that it can raise flags.
4. Diversifying the data collection base and sampling (a minimal representation check is sketched after this list).
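For step 4, one simple automated check is to compare each group’s share of the training sample against a population benchmark and flag shortfalls. The benchmark shares and tolerance threshold below are illustrative assumptions.

```python
# Sketch: flag groups underrepresented in a training sample
# relative to a population benchmark (illustrative thresholds).
from collections import Counter

def flag_underrepresented(samples, benchmark, tolerance=0.5):
    """Flag any group whose share of `samples` falls below
    `tolerance` * its benchmark population share."""
    counts = Counter(samples)
    total = len(samples)
    flags = []
    for group, expected_share in benchmark.items():
        actual_share = counts.get(group, 0) / total
        if actual_share < tolerance * expected_share:
            flags.append((group, actual_share, expected_share))
    return flags

training_groups = ["A"] * 900 + ["B"] * 80 + ["C"] * 20
population = {"A": 0.6, "B": 0.3, "C": 0.1}
print(flag_underrepresented(training_groups, population))
# [('B', 0.08, 0.3), ('C', 0.02, 0.1)] -> groups B and C need more data
```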