We look up to machine learning AI systems to give the best decisions in whatever situation they are put in. But do things actually happen this way? This dilemma of the different risks associated with a machine learning AI system is presented quite well by Boris Babic et al. in their HBR article "When Machine Learning Goes Off the Rails" (Jan-Feb 2021 issue).
What are the different risks associated with a machine learning AI system?
Machine learning systems are complex, yet at their core they simply rely on probabilistic models.
Concept drift - the relationship between inputs and outputs is not stable over time, so data collected during typical collection periods may not be applicable during other periods
Covariate shift - this happens when the data fed to the algorithm in production differs from the data it was trained on (see the sketch after this list)
Agency risk - the risk that stems from using the system in situations that are not under the user's control
Moral risk - the system's inability to resolve moral dilemmas such as racial discrimination, gender discrimination, etc.
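To make covariate shift concrete, here is a minimal Python sketch, using synthetic data rather than anything from the article, that compares the distribution of a single feature at training time against what the live system sees. A two-sample Kolmogorov-Smirnov test (one common choice among many) flags when the two no longer look alike.

```python
# A minimal, illustrative check for covariate shift: compare the
# distribution of one feature in the training data against the same
# feature in live traffic. All values here are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)

# Feature as seen during training (e.g., customer ages).
train_feature = rng.normal(loc=40, scale=10, size=5_000)

# The same feature in production; the mean is shifted here to
# simulate a population that has drifted away from the training set.
live_feature = rng.normal(loc=48, scale=10, size=1_000)

statistic, p_value = ks_2samp(train_feature, live_feature)

# A small p-value means the two samples are unlikely to come from the
# same distribution, so the model may be seeing data it was never
# trained on, and its decisions should be treated with caution.
if p_value < 0.01:
    print(f"Possible covariate shift (KS={statistic:.3f}, p={p_value:.4g})")
else:
    print(f"No significant shift detected (KS={statistic:.3f}, p={p_value:.4g})")
```

In practice such a check would run on every feature on a schedule, feeding alerts into the kind of continuous monitoring discussed below.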
One of the tough questions decision makers often encounter with AI systems is how long to allow the system to keep learning and when to lock it, because with each round of learning we find the system taking decisions different from its previous ones.
What should organisations do to manage these risks?
1. Look at AI systems as human systems and not machine systems
2. Get AI systems certified before using them
3. Monitor the system continuously to avoid any incorrect or biased decisions that can affect humans (a simple sketch follows this list)
4. Ask the right questions to unravel any inherent bias in the system
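As one concrete illustration of point 3, the sketch below shows what a very simple bias monitor might look like. The decision log, group names, and threshold are all hypothetical placeholders, not anything prescribed by the article: the idea is simply to compute decision rates per protected group from logged decisions and raise an alert when the gap grows too large.

```python
# A hypothetical, illustrative bias monitor: read logged decisions
# tagged with a protected attribute and alert when approval rates
# diverge between groups. The data and threshold are made up.
from collections import defaultdict

# Hypothetical decision log: (protected_group, approved) pairs.
decision_log = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
    ("group_a", True), ("group_b", False),
]

APPROVAL_GAP_THRESHOLD = 0.20  # maximum tolerated gap in approval rates

counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in decision_log:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: approvals / total for g, (approvals, total) in counts.items()}
gap = max(rates.values()) - min(rates.values())

print(f"Approval rates by group: {rates}")
if gap > APPROVAL_GAP_THRESHOLD:
    print(f"ALERT: approval-rate gap of {gap:.0%} exceeds the threshold")
```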
Mitigating the risks associated with AI adoption may be more critical than adopting the technology itself.
Since we can expect most systems in the future to be AI dependent, it is essential that we understand and manage the risks of AI adoption as much as, if not more than, we manage the implementation of the technology itself. We can then avoid some really embarrassing situations.
George