Beyond Curve Fitting: Causation, Counterfactuals, and Imagination-based AI
AAAI Spring Symposium, March 25-27, 2019, Stanford, CA
In recent years, Artificial Intelligence and Machine Learning
have received enormous attention from the general public, primarily
because of the successful application of deep neural networks in
computer vision, natural language processing, and game playing
(most notably through reinforcement learning). We see AI systems
recognizing faces with high accuracy, Alexa answering spoken English
questions, and AlphaZero beating Go grandmasters.
These are impressive achievements, almost unimaginable a few years ago.
Despite the progress, there is a growing segment of the
scientific community that questions whether these successes can be extrapolated to create general AI without a major retooling.
Prominent scholars voice
concerns that critical pieces of the AI puzzle are still
missing. For example, Judea Pearl, who championed probabilistic reasoning in AI and causal inference, recently said in
an interview: "To build truly intelligent machines, teach them cause
and effect" (link).
In a recent op-ed in the New York Times, cognitive scientist Gary
Marcus noted: “Causal relationships are where contemporary machine
learning techniques start to stumble” (link).
These and other critical views of the machine learning toolbox are not a matter of speculation or personal taste, but a product of mathematical analyses of the intrinsic limitations of data-centric systems that are not guided by explicit models of reality. Such systems may excel at learning highly complex functions connecting an input X to an output Y, but they are unable to reason about cause-and-effect relations or about changes in the environment, be they due to external actions or acts of imagination. Nor can they provide explanations for novel eventualities, or guarantee safety and fairness.

This symposium will focus on integrating aspects of causal inference with those of machine learning, recognizing that the capacity to reason about cause and effect is critical to achieving human-friendly AI. Despite its centrality in scientific inference and commonsense thinking, this capacity has been largely overlooked in ML, most likely because it requires a language of its own, beyond classical statistics and standard logics. Such languages are available today and promise to yield more explainable, robust, and generalizable intelligent systems.
Our aim is to bring together researchers to discuss the integration of causal, counterfactual, and imagination-based reasoning into data science, building a richer framework for research and a new horizon of applications in the coming decades. Our discussion will be inspired by the Ladder of Causation and the Structural Causal Model (SCM) architecture, which unifies existing approaches to causation and formalizes the capabilities and limitations of different types of causal expressions (link). This architecture provides a general framework for integrating current correlation-based data-mining methods (level 1) with causal or interventional analysis (level 2) and counterfactual or imagination-based reasoning (level 3). We welcome researchers from all relevant disciplines, including, but not limited to, computer science, cognitive science, economics, social sciences, medicine, health sciences, engineering, mathematics, statistics, and philosophy.
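To make the three levels of the hierarchy concrete, the following is a minimal sketch of a toy linear SCM in Python. The model, its variable names, and its coefficients are hypothetical, chosen only to illustrate how association, intervention, and counterfactual queries differ on the same system:

```python
import random

random.seed(0)

# Toy SCM (hypothetical, for illustration):
#   U  ~ N(0, 1)            unobserved confounder
#   X := U + N_x            treatment
#   Y := 2*X + 3*U + N_y    outcome

def sample(do_x=None):
    """Draw one world: exogenous noise first, then structural assignments."""
    u, nx, ny = (random.gauss(0, 1) for _ in range(3))
    x = u + nx if do_x is None else do_x   # do(X=x) cuts X's own mechanism
    y = 2 * x + 3 * u + ny
    return u, x, y

n = 100_000

# Level 1 (association): E[Y | X ~ 1], estimated from observational data.
obs = [sample() for _ in range(n)]
near1 = [y for _, x, y in obs if abs(x - 1) < 0.1]
assoc = sum(near1) / len(near1)    # confounded by U: comes out near 3.5, not 2

# Level 2 (intervention): E[Y | do(X = 1)], estimated by mutilating the model.
interv = sum(sample(do_x=1)[2] for _ in range(n)) / n   # approx. 2

# Level 3 (counterfactual): given one factual world (u, x, y), ask
# "what would Y have been had X been x + 1?"  Abduct the noise,
# modify the action, predict.
u, x, y = obs[0]
ny = y - 2 * x - 3 * u                 # recover the outcome's noise term
y_cf = 2 * (x + 1) + 3 * u + ny        # counterfactual outcome
# By linearity, y_cf - y equals exactly 2 in this toy model.
```

The confounder U makes the level-1 answer (about 3.5) diverge from the level-2 answer (about 2), while the level-3 query requires the full structural model, not just interventional distributions.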
We invite papers that describe (1) methods of answering causal questions with the help of ML machinery, or (2) methods of enhancing ML performance with the help of causal models (i.e., carriers of transparent causal assumptions). Authors are strongly encouraged to identify where on the causal hierarchy their contributions reside (i.e., associational, interventional, or counterfactual reasoning). Topics of interest include, but are not limited to, the following:
- Algorithms for causal inference and learning
- Causal analysis of biases in data science & fairness analysis
- Causal and counterfactual explanations
- Causal reinforcement learning, planning, and plan recognition
- Imagination and creativity
- Fundamental limits of learning and inference
- Applications and connections with the three-level causal hierarchy
Submissions (electronic submission, PDF) due: December 17, 2018
Notifications of acceptance: January 27, 2019
Final version of the papers (electronic submission, PDF) due: February 17, 2019