Bias in AI is making headlines in Canada
Our clients often hear from us that we use AI and machine learning in our data analysis work. Powerful analytics, combined with lightning-fast evaluation, make these tools an essential part of our review toolkit.
Two recent articles highlight the challenges organizations can face when using AI. The first, on bias in medicine, shows how training an AI system on historical data can introduce unintentional racial bias into the diagnosis of medical conditions. The second discusses how the Department of National Defence failed to follow the government’s privacy impact assessment requirements when employing third-party AI technology, opening up the possibility that bias could be introduced into its recruitment process.
Bias in AI and machine learning is a real issue. In 2016, Microsoft technicians were developing a “conversational understanding” AI system designed to learn from chatting with people and eventually hold conversations on its own. To speed up training, they connected it to a Twitter account: people could tweet at the system, and the system would respond. Unfortunately, within 24 hours the system had developed a distinctly misogynistic and racist personality. It proved, once again, that garbage in equals garbage out. There was nothing wrong with the underlying AI technology; the problem was the data used to train it.
While AI and machine learning are powerful, organizations need to be aware of the biases that can be introduced during the training and model-development phase. Left unchecked, these biases can dramatically affect the results. In document review, some bias and error is expected, which is why a robust, independent validation of results needs to be built into every review project.
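To make the idea of independent validation concrete, here is a minimal sketch in Python of one common approach: drawing a random sample from the documents a model has set aside as non-relevant and having humans check how many relevant documents slipped through (an elusion-style check). The function names, sample size, and usage below are illustrative assumptions, not a description of any particular tool or workflow we use.

```python
import random

def elusion_estimate(discard_pile, reviewer, sample_size=400, seed=42):
    """Estimate how many relevant documents are hiding in the discard pile.

    discard_pile : list of documents the model classified as non-relevant
    reviewer     : callable returning True if a human judges a document relevant
    sample_size  : number of documents to pull for independent human review
    """
    rng = random.Random(seed)
    sample = rng.sample(discard_pile, min(sample_size, len(discard_pile)))

    # A human reviewer, not the model, judges each sampled document.
    relevant_found = sum(1 for doc in sample if reviewer(doc))

    # The rate observed in the sample estimates the elusion rate overall.
    elusion_rate = relevant_found / len(sample)
    estimated_missed = elusion_rate * len(discard_pile)
    return elusion_rate, estimated_missed


# Hypothetical usage: 50,000 machine-discarded documents, humans check a sample.
# elusion_rate, missed = elusion_estimate(discards, human_review, sample_size=400)
# A high elusion rate is a signal that the model, or its training data, needs another look.
```

The point is not the specific numbers but the discipline: the check is performed by people, on a sample the model did not choose, so a biased model cannot hide its own mistakes.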
At MT>3, we continually conduct quality control, testing, and validation of our machine learning and AI tools to verify their results.