Launching April 7: The #AIIndex2025 — Stanford HAI’s most comprehensive AI Index report yet. Packed with rigorously vetted data, this report equips leaders, policymakers, and researchers with the insights needed to navigate AI’s global impact. Sign up to receive the full report: https://lnkd.in/gFeDS7vr
Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Higher Education
Stanford, California · 110,456 followers
Advancing AI research, education, policy, and practice to improve humanity.
About us
At Stanford HAI, our vision for the future is led by our commitment to studying, guiding and developing human-centered AI technologies and applications. We believe AI should be collaborative, augmentative, and enhancing to human productivity and quality of life. Stanford HAI leverages the university’s strength across all disciplines, including: business, economics, genomics, law, literature, medicine, neuroscience, philosophy and more. These complement Stanford's tradition of leadership in AI, computer science, engineering and robotics. Our goal is for Stanford HAI to become an interdisciplinary, global hub for AI thinkers, learners, researchers, developers, builders and users from academia, government and industry, as well as leaders and policymakers who want to understand and leverage AI’s impact and potential.
- Website
- https://hai.stanford.edu
- Industry
- Higher Education
- Company size
- 11–50 employees
- Headquarters
- Stanford, California
- Type
- Nonprofit
- Founded
- 2018

Locations
- Primary
- Stanford, California 94305, US
Employees of Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Updates
-
“The key thesis underpinning all our work is that nothing can ever replace a teacher. AI should augment, not substitute, their expertise,” says Rizwaan Malik, a Knight-Hennessy Scholar studying education data science at the Stanford University Graduate School of Education. In our latest blog, Malik talks about how Stanford's education and computer science researchers are leveraging large language models to help teachers create lessons that meet every student's needs effectively. “Teachers spend so much time adapting curricula to their students’ needs, but no one is really asking — how can we support them in that process?” Read more: https://lnkd.in/gUQubxuG Knight-Hennessy Scholars
-
The Co-Leads and collaborators of the Joint California Policy Working Group on AI Frontier Models released a draft report to develop policy principles that can inform how California approaches the use, assessment, and governance of frontier AI. The working group is seeking public feedback until April 8. Learn more here: https://lnkd.in/gTGGKN_X
-
Just announced: Stanford University’s #RAISEHealth initiative has awarded seed grants to five projects focused on the ethical, responsible, and safe development of AI in biomedicine. The projects range from creating reliable datasets for training AI models to refining methods for assessing how patients with multiple health conditions are impacted by disease. “The projects awarded through the RAISE Health Seed Grant program exemplify our commitment to research that places humans at the center of AI innovation,” said HAI Co-Director James Landay. #RAISEHealth is a collaboration between Stanford University School of Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Learn more: https://lnkd.in/gMZW6r9b
-
We're excited to share our brand new website, with improved navigation, fresh content, and a sleek modern look! https://hai.stanford.edu Got some feedback? Let us know here: https://bit.ly/3RcOuRU
-
What are the broader implications of AI-driven robotics in society? Join us at this year's Stanford HAI Spring Conference for an interdisciplinary discussion on the evolving landscape of robotics. Learn more about the agenda here: https://lnkd.in/gzYZqMuJ
-
How do we currently evaluate AI? Today's evaluation practices rely on static benchmarks, but these methods aren't always efficient, reliable, or relevant to real-world situations. On Mar. 19, Stanford HAI Faculty Affiliate Sanmi Koyejo will present a path toward a measurement framework that bridges established principles and techniques with modern AI evaluation needs. The goal: To create a more scientific way of measuring AI. Curious about this topic? Learn more or register here: https://lnkd.in/guSX9c9M
-
Courthouse AI is "one of the most compelling but also underdeveloped areas of public sector AI," said Stanford Law School professor David Engstrom at a recent Stanford HAI seminar. Here, he describes how AI can support self-represented litigants in civil cases: https://lnkd.in/gUzTXaiC
-
Stanford Institute for Human-Centered Artificial Intelligence (HAI) reposted this
Reflections on the French AI Action Summit and what it augurs for AI in 2025.

The French AI Action Summit heralded a major shift in AI governance away from AI risk and safety to AI opportunity and national self-interest. Three powerful tributaries merged to create this new current. First came France's surge of AI ambition, determined to prove that Europe could do more than merely regulate California's tech giants. Next flowed the open-source movement, successfully steering discourse away from closed ecosystems and toward a vision of distributed innovation. Finally, the breakthrough of China's DeepSeek — achieving frontier-level performance at a fraction of the cost — swept in like a flash flood, intensifying the focus on geopolitical competition. Europe, watching this confluence of forces, was eager to ride the rising tide.

The summit's tone was captured perfectly in Vice President JD Vance's opening declaration: "I'm not here this morning to talk about AI safety, which was the title of the conference a year ago. I'm here to talk about AI opportunity." This shift reverberated beyond Paris — within days, the UK had rebranded its "AI Safety Institute" as the "AI Security Institute."

This demotion of safety concerns to an afterthought comes at a particularly troubling moment. As AI capabilities accelerate at a dizzying pace, industry leaders like Anthropic's Dario Amodei, OpenAI's Sam Altman, and DeepMind's Demis Hassabis warn that radically transformative AI looms on the near horizon. Concerns about AI consciousness and free will remain science fiction rather than science, yet dismissing safety concerns in pursuit of accelerated development betrays a dangerous myopia. The notion that safety advocacy merely serves as corporate protectionism collapses in the face of DeepSeek's success. Rather than choosing between innovation and safety, we need both.
In this context, the summit's bright spot — the announcement of ROOST (Repository of Robust Open Online Safety Tools) under Camille François's leadership — stands out precisely because it dared to bridge this artificial divide. That this common-sense initiative combining openness with safety appeared radical amid the summit's innovation-at-all-costs atmosphere speaks volumes.

The diplomatic failure that followed — with neither the US nor UK signing even the summit's watered-down final statement — only underscores our fraught technological and geopolitical moment. What could have been a crucial opportunity to advance global AI governance instead became a showcase of national technological ambitions, leaving the harder work of ensuring safe AI innovation for another day.

History teaches us that sometimes it takes a catastrophic flood to spur communities to build proper defenses — but nothing requires us to learn safety's lessons the hard way, if only we can find the collective will to act. https://lnkd.in/eesWchwq