Using AI to Decode Human Decision-Making
Insights from Professor Tom Griffiths' talk at Stanford Data Science Lecture Series

How do we model human decision-making? Can machine learning not only predict but also help explain why people make the choices they do? These were some of the key questions tackled by Professor Tom Griffiths in a recent Stanford Data Science Distinguished Lecture.

The Challenge: Understanding Human Behavior in a Data-Driven World

From financial decisions to self-driving cars, understanding human decision-making is crucial for building intelligent systems. However, traditional psychological theories have struggled to capture the full complexity of human choices, and machine learning models, despite their power, often lack the right inductive biases to make meaningful predictions about human behavior.

To bridge this gap, Professor Griffiths and his team at Princeton University developed three key methods that combine psychological theories with machine learning:

1. Theory-Based Pre-Training

One of the biggest challenges in training AI models to predict human behavior is the lack of sufficient data. Instead of relying solely on limited real-world data, Griffiths’ team pre-trains models using simulated data from psychological theories before fine-tuning them on real human decision data.

Application: This approach significantly improved predictions in a risky-choice task, where people decide between uncertain outcomes.
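The recipe can be sketched in a few lines. The snippet below is a toy illustration, not Griffiths' actual pipeline: a simplified prospect-theory value function serves as the data generator, a tiny logistic model stands in for the predictor, and the scarce "human" data is also simulated as a placeholder. All function names and parameter values are assumptions for the sketch.

```python
import math
import random

random.seed(0)

# --- Psychological theory used as a data generator (simplified prospect theory) ---
def subjective_value(amount, prob, alpha=0.88, gamma=0.61):
    """Power utility plus a Tversky-Kahneman probability weighting function."""
    w = prob**gamma / (prob**gamma + (1 - prob)**gamma) ** (1 / gamma)
    return w * amount**alpha

def theory_choice(g1, g2):
    """P(choose gamble 1) under a softmax over subjective values."""
    v1, v2 = subjective_value(*g1), subjective_value(*g2)
    return 1 / (1 + math.exp(-(v1 - v2)))

def random_gamble():
    # (amount, probability); amounts scaled to (0, 1) to keep training stable
    return (random.uniform(0.01, 1.0), random.uniform(0.05, 0.95))

def make_dataset(n, choice_fn):
    data = []
    for _ in range(n):
        g1, g2 = random_gamble(), random_gamble()
        label = 1 if random.random() < choice_fn(g1, g2) else 0
        data.append((g1 + g2, label))  # features: (a1, p1, a2, p2)
    return data

# --- Tiny logistic model trained by SGD ---
def train(data, weights=None, lr=0.01, epochs=5):
    w = list(weights) if weights else [0.0] * 5  # 4 feature weights + bias
    for _ in range(epochs):
        for x, y in data:
            z = w[4] + sum(wi * xi for wi, xi in zip(w, x))
            p = 1 / (1 + math.exp(-z))
            g = p - y  # gradient of log-loss w.r.t. the logit
            for i in range(4):
                w[i] -= lr * g * x[i]
            w[4] -= lr * g
    return w

# Step 1: pre-train on abundant simulated data generated from the theory.
pretrain = make_dataset(5000, theory_choice)
w_pre = train(pretrain)

# Step 2: fine-tune on scarce "human" data, starting from the pre-trained
# weights. (Here the human data is simulated too; real choice data would go in.)
human = make_dataset(200, theory_choice)
w_final = train(human, weights=w_pre, lr=0.001, epochs=2)
```

The point of the pattern is that the theory supplies unlimited cheap training data, so the model starts from a psychologically plausible region of weight space before it ever sees the small real dataset.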

2. Differentiable Theories for Decision-Making

Psychologists and economists have long studied decision-making models such as Expected Utility Theory and Prospect Theory. Instead of testing these models one at a time, Griffiths' team expressed them in a differentiable form, so that gradient-based optimization could search across model variants and discover the best-fitting theory.

Finding: While classic theories work well in some cases, human decision-making is more context-dependent than previously thought.
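As a toy illustration of what "differentiable" buys you, the sketch below fits the risk parameter of a power-utility Expected Utility model to synthetic choices by gradient descent. This is an assumed stand-in, not the team's actual setup: the model, the synthetic data, and the use of a numeric gradient (real systems would use an autodiff framework) are all simplifications.

```python
import math
import random

random.seed(1)

# A parametric theory: expected utility with power utility u(x) = x**alpha.
def choice_prob(alpha, g1, g2):
    """P(choose gamble 1) under expected utility with risk parameter alpha."""
    eu1 = g1[1] * g1[0] ** alpha
    eu2 = g2[1] * g2[0] ** alpha
    return 1 / (1 + math.exp(-(eu1 - eu2)))

# Synthetic "human" choices generated with a true alpha of 0.5 (risk-averse).
TRUE_ALPHA = 0.5
data = []
for _ in range(1000):
    g1 = (random.uniform(0.1, 1.0), random.uniform(0.1, 0.9))
    g2 = (random.uniform(0.1, 1.0), random.uniform(0.1, 0.9))
    y = 1 if random.random() < choice_prob(TRUE_ALPHA, g1, g2) else 0
    data.append((g1, g2, y))

def neg_log_likelihood(alpha):
    nll = 0.0
    for g1, g2, y in data:
        p = choice_prob(alpha, g1, g2)
        nll -= math.log(p if y else 1 - p)
    return nll

# Because the theory is differentiable in alpha, it can be fit by gradient
# descent (central-difference gradient here for brevity).
alpha, lr, eps = 1.0, 2.0, 1e-4
for _ in range(300):
    grad = (neg_log_likelihood(alpha + eps)
            - neg_log_likelihood(alpha - eps)) / (2 * eps)
    alpha -= lr * grad / len(data)
```

Because every candidate theory is just a differentiable function with parameters, the same optimization loop can fit and compare many theories automatically instead of hand-testing each one.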

3. Scientific Regret Minimization: Using AI to Improve Theories

Rather than evaluating psychological models directly against real data, Griffiths proposed comparing them to a high-performing black-box machine learning model trained on human decisions.

Why? This approach filters out noise in human data and highlights gaps in existing psychological models.

Example: Applied to the "moral machine" dilemma (should a self-driving car hit pedestrians or swerve?), this method uncovered surprising insights, such as how people weigh factors like legality, age, and even profession in moral decisions.
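A minimal sketch of the idea, built entirely from assumed stand-ins: a k-nearest-neighbour estimate plays the role of the flexible black-box model, a pure expected-value rule plays the role of the psychological theory, and the "certainty bonus" in the simulated data is a hypothetical effect the theory fails to capture. The regret score measures the theory's disagreement with the black box rather than with raw, noisy individual choices.

```python
import math
import random

random.seed(2)

# Simulated ground truth the simple theory misses: a hypothetical bonus for
# near-certain gambles, on top of expected value.
def true_choice_prob(x):
    a1, p1, a2, p2 = x
    v1 = p1 * a1 + (0.3 if p1 > 0.9 else 0.0)
    v2 = p2 * a2 + (0.3 if p2 > 0.9 else 0.0)
    return 1 / (1 + math.exp(-5 * (v1 - v2)))

def sample():
    return tuple(random.random() for _ in range(4))  # (a1, p1, a2, p2)

data = [(x, 1 if random.random() < true_choice_prob(x) else 0)
        for x in (sample() for _ in range(3000))]

# "Black box": k-nearest-neighbour estimate of the choice probability,
# standing in for a high-capacity model trained on human decisions.
def blackbox_prob(x, k=40):
    nearest = sorted(data, key=lambda d: sum((a - b) ** 2
                                             for a, b in zip(d[0], x)))
    return sum(y for _, y in nearest[:k]) / k

# Simple theory under test: pure expected value, no certainty bonus.
def theory_prob(x):
    a1, p1, a2, p2 = x
    return 1 / (1 + math.exp(-5 * (p1 * a1 - p2 * a2)))

# "Scientific regret": score the theory against the black box's (denoised)
# predictions instead of against raw labels, and flag where they disagree.
probes = [sample() for _ in range(100)]
gaps = [(abs(theory_prob(x) - blackbox_prob(x)), x) for x in probes]
regret = sum(g * g for g, _ in gaps) / len(gaps)
worst_gap, worst_x = max(gaps)  # the probe most in need of a better theory
```

Inspecting the largest-gap cases (here, they should cluster around near-certain gambles) is exactly how this method points theorists toward the effects their models are missing.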

Key Takeaways

Machine learning can help refine psychological theories, not just predict human choices.

Human decision-making is more nuanced than traditional theories assume.

AI models need better inductive biases, and psychological research provides a strong foundation.

Fine-tuning large language models with behavioral data improves alignment with human-like decision patterns.

As AI becomes more deeply embedded in our daily lives—from recommender systems to autonomous vehicles—understanding human behavior is more important than ever. By combining insights from cognitive science and machine learning, researchers like Griffiths are shaping the future of human-centered AI.
