Beyond Curve Fitting: Causation, Counterfactuals, and Imagination-based AI

AAAI Spring Symposium, March 25-27, 2019, Stanford, CA

Motivation

In recent years, Artificial Intelligence and Machine Learning have received enormous attention from the general public, primarily because of the successful application of deep neural networks in computer vision, natural language processing, and game playing (most notably through reinforcement learning). We see AI recognizing faces with high accuracy, Alexa answering spoken questions in English, and AlphaZero beating Go grandmasters. These are impressive achievements, almost unimaginable a few years ago. Despite the progress, a growing segment of the scientific community questions whether these successes can be extrapolated to create general AI without a major retooling. Prominent scholars voice concerns that critical pieces of the AI puzzle are still missing. For example, Judea Pearl, who championed probabilistic reasoning in AI and causal inference, recently said in an interview: "To build truly intelligent machines, teach them cause and effect" (link). In a recent op-ed in the New York Times, cognitive scientist Gary Marcus noted: “Causal relationships are where contemporary machine learning techniques start to stumble” (link).

These and other critical views regarding different aspects of the machine learning toolbox are not a matter of speculation or personal taste, but a product of mathematical analyses concerning the intrinsic limitations of data-centric systems that are not guided by explicit models of reality. Such systems may excel at learning highly complex functions connecting an input X to an output Y, but are unable to reason about cause-and-effect relations or about changes in the environment, be they due to external actions or acts of imagination. Nor can they provide explanations for novel eventualities, or guarantee safety and fairness. This symposium will focus on integrating aspects of causal inference with those of machine learning, recognizing that the capacity to reason about cause and effect is critical to achieving human-friendly AI. Despite its centrality in scientific inference and commonsense reasoning, this capacity has been largely overlooked in ML, most likely because it requires a language of its own, beyond classical statistics and standard logics. Such languages are available today and promise to yield more explainable, robust, and generalizable intelligent systems.

Our aim is to bring together researchers to discuss the integration of causal, counterfactual, and imagination-based reasoning into data science, building a richer framework for research and a new horizon of applications in the coming decades. Our discussion will be inspired by the Ladder of Causation and the Structural Causal Model (SCM) architecture, which unifies existing approaches to causation and formalizes the capabilities and limitations of different types of causal expressions (link). This architecture provides a general framework for integrating current correlation-based data mining methods (level 1) with causal or interventional analysis (level 2) and counterfactual or imagination-based reasoning (level 3). We welcome researchers from all relevant disciplines, including, but not limited to, computer science, cognitive science, economics, social sciences, medicine, health sciences, engineering, mathematics, statistics, and philosophy.
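To make the three levels concrete, the following is a minimal, self-contained Python sketch of a toy structural causal model; the variables (a hidden confounder U, a treatment X, and an outcome Y) and their mechanisms are illustrative assumptions chosen only to show that the associational, interventional, and counterfactual readings of a query about X and Y can all give different answers. It is not part of the symposium materials.

# A minimal sketch of the three levels of the causal hierarchy on a toy
# structural causal model (SCM); variable names and mechanisms are
# illustrative assumptions only.
import random

random.seed(0)
N = 200_000

def sample(do_x=None):
    """One draw from the toy SCM: U -> X and (X, U) -> Y."""
    u = random.random() < 0.5                 # hidden common cause U
    n_x = random.random() < 0.1               # exogenous noise on X
    x = (u != n_x) if do_x is None else do_x  # do(X=x) overrides X's mechanism
    y = x or u                                # Y := X OR U
    return u, x, y

data = [sample() for _ in range(N)]

# Level 1 (association): P(Y=1 | X=0), read off observational data.
ys_given_x0 = [y for _, x, y in data if not x]
p_assoc = sum(ys_given_x0) / len(ys_given_x0)       # ~0.10: seeing X=0 suggests U=0

# Level 2 (intervention): P(Y=1 | do(X=0)), simulate the mutilated model.
ys_do_x0 = [sample(do_x=False)[2] for _ in range(N)]
p_interv = sum(ys_do_x0) / N                        # ~0.50: only U matters once X is set

# Level 3 (counterfactual): P(Y_{X=0}=1 | X=1, Y=1).
# Abduction: keep the exogenous background consistent with the evidence X=1, Y=1;
# Action: set X=0; Prediction: recompute Y with the same U.
cf = [(False or u) for u, x, y in data if x and y]
p_counterf = sum(cf) / len(cf)                      # ~0.90

print(f"P(Y=1 | X=0)            ~ {p_assoc:.2f}")
print(f"P(Y=1 | do(X=0))        ~ {p_interv:.2f}")
print(f"P(Y_(X=0)=1 | X=1, Y=1) ~ {p_counterf:.2f}")

Under this toy model the three quantities come out roughly 0.10, 0.50, and 0.90: the same pair of variables yields different answers at each level, which is exactly the distinction the hierarchy formalizes.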

Topics

We invite papers that describe: (1) methods for answering causal questions with the help of ML machinery, or (2) methods for enhancing ML performance with the help of causal models (i.e., carriers of transparent causal assumptions). Authors are strongly encouraged to identify in the paper where on the causal hierarchy their contributions reside (i.e., associational, interventional, or counterfactual reasoning). Topics of interest include, but are not limited to, the following:

  1. Algorithms for causal inference and learning
  2. Causal analysis of biases in data science & fairness analysis
  3. Causal and counterfactual explanations
  4. Causal reinforcement learning, planning, and plan recognition
  5. Imagination and creativity
  6. Fundamental limits of learning and inference
  7. Applications and connections with the 3-layer hierarchy

Invited Speakers

  • Yoshua Bengio (University of Montreal)
  • Mark Cullen (Stanford University)
  • Thomas Dietterich (Oregon State University)
  • Frederick Eberhardt (Caltech)
  • Mohamed Elhoseiny (KAUST)
  • Tobias Gerstenberg (Stanford University)
  • Maria Glymour (UCSF)
  • Paul Hünermund (Maastricht University)
  • Kosuke Imai (Harvard University)
  • John Ioannidis (Stanford University)
  • Murat Kocaoglu (IBM Research)
  • Judea Pearl (UCLA)

Schedule (tentative)

Location: History Building (#200), Room 02

Monday, March 25
  9:00am - 9:10am    Opening Remarks
  9:10am - 10:30am   Accepted paper talks (session 1/2)
  10:30am - 11:00am  Break
  11:00am - 12:30pm  Causality + Computer Vision & Imagination
                     Frederick Eberhardt (Caltech), Murat Kocaoglu (IBM Research),
                     Mohamed Elhoseiny (KAUST); Moderator / Discussant: Tadepalli
  12:30pm - 2:00pm   Lunch
  2:00pm - 3:50pm    Keynote: Judea Pearl (UCLA)
                     Title: The Foundations of Causal Inference, with Reflections on ML and AI
                     Location: Jordan Hall 420-40 (a different location than the other talks)
  3:50pm - 4:00pm    Break
  4:00pm - 5:30pm    Poster Session
  6:00pm - 7:00pm    Reception

Tuesday, March 26
  9:00am - 10:30am   Causality + Machine Learning & Artificial Intelligence
                     Tobias Gerstenberg (Stanford University), Thomas Dietterich (Oregon State University),
                     Yoshua Bengio (University of Montreal); Moderator / Discussant: Bareinboim
  10:30am - 11:00am  Break
  11:00am - 12:30pm  Causality + the Social Sciences & Economics
                     Kosuke Imai (Harvard University), Paul Hünermund (Maastricht University);
                     Moderator / Discussant: Bareinboim
  12:30pm - 2:00pm   Lunch
  2:00pm - 3:30pm    Causality + the Health Sciences
                     Maria Glymour (UCSF), Mark R. Cullen (Stanford University),
                     John Ioannidis (Stanford University); Moderator / Discussant: Bareinboim
  3:30pm - 4:00pm    Break
  4:00pm - 5:30pm    Poster Session
  6:00pm - 7:00pm    Plenary (joint with other symposia)

Wednesday, March 27
  9:00am - 10:30am   Accepted paper talks (session 2/2)
  10:30am - 11:00am  Break
  11:00am - 12:30pm  Q&A and Open Discussion
Logistics

Registration The WHY-19 symposium is part of the AAAI-19 Spring Symposium Series, and all logistics, including registration, are handled by AAAI. For details, see here.

Format The symposium will include invited talks, discussions, and presentations of some of the accepted papers.

Submissions We solicit both long (7 pages including references) and short (3 pages including references) papers on topics related to the above. Position papers, application papers, and challenge tasks will also be considered. Submissions should follow the AAAI conference format and should be anonymized. We accept submissions through AAAI's EasyChair (link); look for our symposium.

Important Dates Submissions (electronic submission, PDF) due: December 17, 2018
Notification of acceptance: February 4, 2019 (originally January 27, 2019)
Final version of the papers (electronic submission, PDF) due: March 15, 2019

Program Committee
  • Sander Beckers (Utrecht University)
  • Tom Claassen (Radboud University)
  • Alex Dimakis (UT Austin)
  • Frederick Eberhardt (Caltech)
  • Antti Hyttinen (HIIT / University of Helsinki)
  • Murat Kocaoglu (IBM Research)
  • Sanghack Lee (Purdue University)
  • Sara Magliacane (IBM Research)
  • Joris Mooij (University of Amsterdam)
  • Pedro Ortega (DeepMind)
  • Roland Ramsahai
  • Uri Shalit (Technion)
  • Karthikeyan Shanmugam (IBM Research)
  • Ricardo Silva (UCL)
  • Jin Tian (Iowa State)
  • Kun Zhang (CMU)
External reviewers (Purdue): Juan Correa, Amin Jaber, Yonghan Jung, Daniel Kumor, Junzhe Zhang.
Organizers
  • Elias Bareinboim (Purdue University), eb@purdue.edu
  • Adobe Research & UMass
  • Oregon State University
  • DeepMind & University of Alberta
  • Max Planck Institute
  • University of California, Los Angeles