Abstract: While a computer's state can be quickly rewritten, brains are relatively inflexible in the short term. As a totalitarian regime bends facts to fit ideology, the brain's inflexible structure can force evidence to bend to fit existing theories, across individuals and groups (Greenwald, 1980). I'll illustrate how this relatively fixed architecture contributes to perceptual and cognitive biases that affect how we process, remember, and reason with visualized data.
Bio: Steve Franconeri is a Professor of Psychology at Northwestern, and Director of the Northwestern Cognitive Science Program. He studies Visuospatial Thinking, especially within Information Visualization and Education, as well as Visual Communication.
Abstract: People play a central role in the machine learning life cycle. Consequently, building machine learning systems that are reliable, trustworthy, and fair requires that relevant stakeholders—including developers, users, and the people affected by these systems—have at least a basic understanding of how they work. Yet what makes a system “intelligible” is difficult to pin down. Intelligibility is a fundamentally human-centered concept that lacks a one-size-fits-all solution. I will explore the importance of evaluating methods for achieving intelligibility in context with relevant stakeholders, ways of empirically testing whether intelligibility techniques achieve their goals, and why we should expand our concept of intelligibility beyond machine learning models to other aspects of machine learning systems, such as datasets and performance metrics.
Bio: Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City. Her research background is in machine learning and algorithmic economics. She is especially interested in the interaction between people and AI, and has often studied this interaction in the context of prediction markets and other crowdsourcing systems. In recent years, she has turned her attention to human-centered approaches to transparency, interpretability, and fairness in machine learning as a member of MSR's FATE group and co-chair of Microsoft’s Aether Working Group on Transparency.
Abstract: Artificial intelligence (AI) techniques provide great opportunities for improving healthcare research and clinical practice. With recent advancements and ever-increasing clinical data, researchers have demonstrated successful applications of AI techniques in predicting patients' diagnoses, unexpected readmissions, and mortality. However, adoption of these techniques in clinical settings has been limited because of their black-box nature. Despite growing research on explainable AI methods, healthcare professionals may still find the techniques difficult to understand and use without visual aids. Visual analytics can help clinical experts gain transparency and trust when applying AI techniques to healthcare data. This talk provides a brief overview of these problems and describes the potential role of visual analytics in healthcare research and clinical practice by discussing previous research.
Bio: Bum Chul Kwon is a research scientist at IBM Research, Cambridge, MA. His research goal is to enhance users’ abilities to gain insights into data through interactive visualization systems. He is also interested in making machine learning algorithms more transparent, solving real-world healthcare problems, and improving data visualization literacy. He received his Ph.D. in Data Visualization from Purdue University in 2013.