As AI expands its reach, the promise of personalized learning is clear, but so are the potential harms. In today’s Stanford HAI seminar, HAI postdoctoral fellow Faye-Marie Vassel and AI researcher Evan Shieh discussed their research and its implications for AI in education. How harmful are AI’s biases to diverse student populations? Read more about this research: https://lnkd.in/giXahTFc
Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Higher Education
Stanford, California · 103,207 followers
Advancing AI research, education, policy, and practice to improve humanity.
About us
At Stanford HAI, our vision for the future is led by our commitment to studying, guiding, and developing human-centered AI technologies and applications. We believe AI should be collaborative and augmentative, enhancing human productivity and quality of life. Stanford HAI leverages the university’s strength across all disciplines, including business, economics, genomics, law, literature, medicine, neuroscience, philosophy, and more. These complement Stanford's tradition of leadership in AI, computer science, engineering, and robotics. Our goal is for Stanford HAI to become an interdisciplinary, global hub for AI thinkers, learners, researchers, developers, builders, and users from academia, government, and industry, as well as leaders and policymakers who want to understand and leverage AI’s impact and potential.
- Website
-
https://hai.stanford.edu
External link for Stanford Institute for Human-Centered Artificial Intelligence (HAI)
- Industry
- Higher Education
- Company size
- 11-50 employees
- Headquarters
- Stanford, California
- Type
- Nonprofit
- Founded
- 2018
Locations
-
Primary
Stanford, California 94305, US
Employees at Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Posts
-
New: The #RAISEHealth Symposium Summary Paper is now out! Featuring insights from 60+ experts and actionable recommendations on the responsible use of AI to transform biomedicine. Find out more: https://stan.md/4eIxYU1
-
Stanford Institute for Human-Centered Artificial Intelligence (HAI) reposted this
Our new paper in Nature shows that entorhinal grid cells adapt their representations to a new environment in one shot, and we derive a model that fuses landmarks & motion to *predict* the detailed grid representation *before* the mouse enters the new environment. A fun theory-experiment collaboration led by John Wen and Ben Sorscher w/ Emily A. Aery Jones, PhD and Lisa Giocomo. https://lnkd.in/gtUXRHQA
-
Developing new classroom curricula is a complex process, which includes running experiments to ensure they work for all learners. Could AI help improve the process? Stanford scholars explored this question in a new study supported by the Hoffman-Yee grant: https://lnkd.in/g8dv-_m7
AI+Education: How Large Language Models Could Speed Promising New Classroom Curricula
hai.stanford.edu
-
How will different policy proposals for governing open foundation models affect the innovation ecosystem? A new paper in Science by Stanford CRFM's Rishi Bommasani and colleagues explores the potential impact. https://lnkd.in/gcRjVirX
Considerations for governing open foundation models
science.org
-
Will AI make our society or break it? Watch the replay of this recent Stanford HAI seminar with Peter Norvig and Gary Marcus for new insights into the challenges of “Taming Silicon Valley.” https://lnkd.in/gWPVvBcg
Taming Silicon Valley: Peter Norvig in Conversation with Gary Marcus
hai.stanford.edu
-
Last chance to take part in this unique educational experience for the social sector!
The intersection of AI and social impact presents an exciting frontier. Social sector leaders are invited to participate in this program to learn how to design AI strategies with a human-centered approach. Only a few spots left – register today: https://lnkd.in/gmqaZjzh
-
Congratulations to David Baker, John Jumper, and HAI distinguished fellow Demis Hassabis for being awarded the 2024 #NobelPrize in Chemistry! From designing new proteins to predicting protein structure with AI, their work paves the way for significant breakthroughs in science. Back in April 2023, we hosted a conversation between HAI co-director Fei-Fei Li and Hassabis on using AI to accelerate scientific discovery. Watch here: https://lnkd.in/gtPB9HRp
BREAKING NEWS: The Royal Swedish Academy of Sciences has decided to award the 2024 Nobel Prize in Chemistry with one half to David Baker “for computational protein design” and the other half jointly to Demis Hassabis and John M. Jumper “for protein structure prediction.”

The Nobel Prize in Chemistry 2024 is about proteins, life’s ingenious chemical tools. David Baker has succeeded with the almost impossible feat of building entirely new kinds of proteins. Demis Hassabis and John Jumper have developed an AI model to solve a 50-year-old problem: predicting proteins’ complex structures. These discoveries hold enormous potential.

The diversity of life testifies to proteins’ amazing capacity as chemical tools. They control and drive all the chemical reactions that together are the basis of life. Proteins also function as hormones, signal substances, antibodies and the building blocks of different tissues.

Proteins generally consist of 20 different amino acids, which can be described as life’s building blocks. In 2003, David Baker succeeded in using these blocks to design a new protein that was unlike any other protein. Since then, his research group has produced one imaginative protein creation after another, including proteins that can be used as pharmaceuticals, vaccines, nanomaterials and tiny sensors.

The second discovery concerns the prediction of protein structures. In proteins, amino acids are linked together in long strings that fold up to make a three-dimensional structure, which is decisive for the protein’s function. Since the 1970s, researchers had tried to predict protein structures from amino acid sequences, but this was notoriously difficult. However, four years ago, there was a stunning breakthrough.

In 2020, Demis Hassabis and John Jumper presented an AI model called AlphaFold2. With its help, they have been able to predict the structure of virtually all the 200 million proteins that researchers have identified. Since their breakthrough, AlphaFold2 has been used by more than two million people from 190 countries. Among a myriad of scientific applications, researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic.

Life could not exist without proteins. That we can now predict protein structures and design our own proteins confers the greatest benefit to humankind.

Learn more
Press release: https://bit.ly/3TM8oVs
Popular information: https://bit.ly/3XYHZGp
Advanced information: https://bit.ly/4ewMBta
-
Congrats to Geoffrey Hinton, HAI founding fellow, and John J. Hopfield for being awarded the 2024 #NobelPrize in Physics! Their work in machine learning and neural networks has transformed the field of AI, paving the way for innovations shaping the future. And ICYMI, the University of Toronto hosted this fascinating discussion with Hinton and HAI co-director Fei-Fei Li to dive into the ethical considerations of AI development and its advancement. Hear from these seminal leaders in AI: https://lnkd.in/g5wcbQpJ
BREAKING NEWS: The Royal Swedish Academy of Sciences has decided to award the 2024 #NobelPrize in Physics to John J. Hopfield and Geoffrey E. Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”

This year’s two Nobel Prize laureates in physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.

When we talk about artificial intelligence, we often mean machine learning using artificial neural networks. This technology was originally inspired by the structure of the brain. In an artificial neural network, the brain’s neurons are represented by nodes that have different values. These nodes influence each other through connections that can be likened to synapses and which can be made stronger or weaker. The network is trained, for example by developing stronger connections between nodes with simultaneously high values.

This year’s laureates have conducted important work with artificial neural networks from the 1980s onward. John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin – a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network’s energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.

Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.

Learn more
Press release: https://bit.ly/4gCTwm9
Popular information: https://bit.ly/3Bnhr9d
Advanced information: https://bit.ly/3TKk1MM
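As a rough illustration (not part of the Academy's announcement), the energy-lowering recall the text describes can be sketched in a few lines of Python, assuming bipolar (±1) node values and the classic Hebbian training rule:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: strengthen connections between nodes that are
    simultaneously active across the stored patterns (no self-connections)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, state, sweeps=10):
    """Update nodes one at a time so the network's energy falls,
    converging toward the closest stored pattern."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one pattern, then recover it from a corrupted copy.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1  # flip one "pixel"
print(np.array_equal(recall(W, noisy), pattern))  # True: the stored image is recovered
```

The corrupted input sits at higher energy than the stored pattern, so each node flip that lowers the energy moves the state toward the memorized image, which is the "stepwise" recovery the passage describes.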