Information, Volume 11, Issue 3 (March 2020) – 49 articles
Cover Story:
Like most technologies, machine learning presents both opportunities and risks. Since mitigating unknown risks is difficult, this text first describes two types of interpretable machine learning architectures, including a promising deep learning variant. To facilitate human appeal of inevitable wrong predictions and to aid in regulatory compliance, the interpretable model predictions are paired with individualized explanations. Finally, interpretable models are tested for discrimination using measures with legal precedent in the United States. General highlights from the burgeoning Python ecosystem for responsible machine learning are also presented. These tools can enable practitioners to capitalize on machine learning opportunities while accounting for some discrimination, privacy, and security risks.
- Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
- You may sign up for e-mail alerts to receive the table of contents of newly released issues.
- Papers are published in both HTML and PDF forms, but PDF is the official format. To view a paper in PDF format, click on its "PDF Full-text" link and open the file with the free Adobe Reader.