Issue #3 AI Updates
Forrest Alonso Haydon
Chief Project Officer @ MJV | Building Custom AI Agents for Everyone
AI Milestone (Political Misinformation Perception): We’re in for a challenging milestone, folks. Widespread AI-generated misinformation, even if it doesn’t directly persuade voters, could erode public trust in information sources. A recent poll by Northeastern University's new AI Literacy Lab found that 53% of Americans believe AI-spread misinformation will affect the outcome of the 2024 U.S. presidential election, even though there is no evidence yet to support this. That mistrust could lead to significant issues, namely something called the ‘liar’s dividend’, where people begin to dismiss true information as false because they believe most information sources are saturated with misinformation.
Historical Precedent (Industry Regulations): Earlier this week, I stumbled upon an interesting graphic from The New York Times about the pace of regulation and technology. While the government has historically been slow to pass regulations, the pace seems to be accelerating. Information-based technologies like radio, television, and internet content saw major regulation arrive much more quickly than other technologies, so we’re curious what YOU think will happen with technologies like GenAI. Let us know in the comments!
Policy Shift (China’s Legislation): In stark contrast to the graphic above, China has historically moved very quickly to enact legislation around new technologies. It is no different now: the country put out legislation on deepfakes just months after ChatGPT’s big break, and this time it has set its sights on broader legislation that it hopes to release this year. There are many open questions, since it is an all-encompassing “AI Law” rather than one targeting a specific subset like generative AI. The government-run Chinese Academy of Social Sciences released a “negative list” of areas and products AI companies should avoid unless they receive explicit government approval. Possibly the most interesting update is that third-party evaluations of AI models may become the norm in China. These third-party testers would work in conjunction with a potential national testing platform, strengthening the enforcement of regulations. Sources: Reuters and MIT Technology Review
AI Update (AI Employment Exposure): For this segment, we turn to the market makers and world shakers who gathered at Davos last week. As expected, AI was a hot topic. Many prominent leaders in the space attended and presented, with most policymakers making similar points about AI’s potential and the need for guardrails in its development. One interesting metric came from the IMF’s First Deputy Managing Director, Gita Gopinath, who presented research showing that about 40% of the world's workforce is exposed to AI. Her prognosis: roughly 20% would benefit and 20% would struggle. Much like the Pareto Principle (aka the 80/20 rule), there’s likely a natural distribution of how this new technology will impact life in the next 5-10 years.
Tech Trend (GPT Updates): OpenAI is releasing new features faster than AI developers and GPT builders can integrate them. Around two months after releasing Custom GPTs, OpenAI has launched beta functionality that lets users call their GPTs mid-chat using the @ symbol. The experience is similar to the way some iOS apps can be used within iMessage threads between iPhone users.
How this feature aligns with the ChatGPT roadmap is unclear, but all signs point to a consolidation of external tools into the normal ChatGPT experience. It brings to mind a question we asked in a previous newsletter: will GPT-5 make the Custom GPT store irrelevant?
Data Science News (Data Product Disparities): Insights from over 500 senior executives paint an interesting picture for the future of data and AI. In a survey conducted by Thoughtworks and MIT, researchers found that 80% of organizations are considering the use of data products and data product management. However, only about half of the respondents included AI capabilities and analytics in what they consider a data product. This highlights an important discrepancy in how the market defines future data products, and it signals that a significant number of companies view AI as a separate tool to be layered onto their data products rather than as an inherent offering from providers. It will be interesting to see whether the marketplace converges on one definition or the other, especially as titles such as Data Scientist and Chief Data Officer are in decline and more quantitatively savvy individuals are able to create or train models and algorithms themselves.
AI Ethics (The Pope’s AI Advisor): We may have just found the most unorthodox voice in the AI space. The Vatican’s expert on AI ethics is a friar from a medieval Franciscan order. Yes, Friar Paolo Benanti, who wears plain robes, shares his thoughts with both Pope Francis and top Silicon Valley engineers. With a background in engineering and a doctorate in moral theology, he has devoted much of his life to the study of bioethics and is now tackling what he views as potentially the greatest question of our time: “What is the difference between a man who exists and a machine that functions?” His message: 1) without inclusive data, the choices AI makes cannot be inclusive; 2) the best way to utilize AI is to design a good theory of governance; and 3) it is crucial to find the right level of AI use within a social context. Who knew a friar trained in 13th-century ethics could be so spot-on with the questions he’s asking?
Tech for Good (AI for Sustainability): While the world has had its eyes squarely fixed on the latest OpenAI news, other industries have been hard at work. Many AI innovations got their commercial start in heavy industry, in areas such as CCUS (carbon capture, utilization, and storage), advanced biofuels, clean hydrogen, and synthetic fuels, where they deliver efficiency improvements and overall cost reductions. A great example is research coming out of the Texas A&M AgriLife lab, which is using AI to improve yields for algae-based biofuels, a technology that has so far been held back by low yields and high harvesting costs. Additional use cases are popping up every day beyond biofuels: Carbon Re, for instance, is “pushing the boundaries of AI to accelerate the decarbonization of foundational materials such as cement”, aiming to reduce carbon emissions by gigatonnes per year (a gigatonne is a billion metric tons, roughly all the emissions produced by South America in 2022, per Our World in Data).
Well written, Forrest. I particularly enjoyed your find/insight in AI Ethics on the Pope's AI advisor. I think it is a romantic idea for such a large and traditional institution as the Catholic Church to remain at the forefront of tech in some way. I also find it reassuring: in a day when policy and organization seem to lag and money moves faster than morals, it is encouraging to see leadership and movement on critical world matters.