How to make opaque AI decisionmaking accountable | KurzweilAI

Machine-learning algorithms are increasingly used to make important decisions about our lives, such as credit approvals, medical diagnoses, and job applications, but exactly how they work usually remains a mystery. Now Carnegie Mellon University researchers may have devised an effective way to improve transparency and head off confusion or possible legal issues.

CMU’s new Quantitative Input Influence (QII) testing tools can generate “transparency reports” that provide the relative weight of each factor in the final decision…
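The article doesn't detail how QII computes those weights, but the general idea behind such influence measures is to intervene on one input at a time and see how much the model's decision changes. The sketch below is a toy illustration of that idea only, not the CMU tool: the `unary_influence` function, the random-forest model, and the feature names are all hypothetical, and the permutation-based intervention is a simplified stand-in for QII's randomized interventions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def unary_influence(model, X, feature_idx, n_samples=50, rng=None):
    """Toy influence score: how often does the model's decision flip, on
    average, when one feature is resampled independently of the others?
    (A simplified stand-in for a QII-style randomized intervention.)"""
    rng = rng or np.random.default_rng(0)
    baseline = model.predict(X)
    flip_rates = []
    for _ in range(n_samples):
        X_perturbed = X.copy()
        # Intervene: shuffle this feature's values, breaking its link to the outcome.
        X_perturbed[:, feature_idx] = rng.permutation(X[:, feature_idx])
        flip_rates.append(np.mean(model.predict(X_perturbed) != baseline))
    return float(np.mean(flip_rates))

# Hypothetical usage: rank inputs for a "transparency report" on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # feature 0 matters most here
model = RandomForestClassifier(random_state=0).fit(X, y)

report = {f"feature_{i}": unary_influence(model, X, i) for i in range(X.shape[1])}
for name, score in sorted(report.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

A report like this makes the relative weight of each factor explicit: features whose shuffling flips many decisions carry high influence, while features the model ignores score near zero.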

Source: How to make opaque AI decisionmaking accountable | KurzweilAI