This week in impact investing: Alignment
At the end of each week, ImpactAlpha rounds up not only the news but the vibe, with a post at the top of Friday morning's daily Brief. We call it "the ditty." Here's the latest (h/t Henrik Jones):
Training day. It’s either comforting or alarming that just about every artificial intelligence company is spinning up teams and whole departments to tackle what is known as “the alignment problem.”
The technical challenge of encoding human values and principles into AI is hard enough. The normative challenge of knowing what to encode is even tougher.
To be sure, artificial general intelligence, or AGI, is a big step beyond generative AI chatbots like ChatGPT. But it’s not too early to move “alignment” up the global to-do list.
“Solving the AGI alignment problem could be so difficult that it will require all of humanity to work together,” three authors from OpenAI wrote last year. Good luck with that.
Their solution: train AI systems themselves to advance alignment. “They will work together with humans to ensure that their own successors are more aligned with humans.”
Non-coders can contribute by developing and documenting models on which AI can be trained. Impact verification firm BlueMark is benchmarking funds’ practices for managing impact, including through a leaderboard of top-performing firms, as Dennis Price reports. Neil Gregory, formerly of the International Finance Corp., shares tips on assessing risks to impact, while Snowball’s Jake Levy offers five ways to advance impact measurement and accountability.
Aunnie Patton Power and Riannah Burns argue for compensation structures that better align fund managers with the impact outcomes they profess to deliver. To help investors better leverage ESG data for impact, Kieger’s Panagiota Balfousia makes a plea for transparency and traceability in ratings and scores.
Perhaps most important are real-world implementations. As part of our ongoing coverage of the ownership economy, Roodgally Senatus reports on efforts to democratize fast-food franchising as a strategy for racial wealth-building.
Sebastian Welisiejko of the Global Steering Group for Impact Investment makes the case for infrastructure investments in informal settlements, aka slums.
As Amy Cortese reports, Microsoft this week contracted with Helion to receive fusion power in 2028, with penalties for late delivery.
“Innovative models for sustainable finance and positive social impact are often underrepresented in current models of business and investment decision-making,” ChatGPT responded when I asked how Agents of Impact can help align AI systems.
Mitigating these risks, it went on, “requires rigorous data collection, diverse and representative training data, regular audits for fairness and bias, integration of ethical considerations, and human oversight throughout the decision-making process.”
That is, it’s up to us, at least for now.
Online Spanish adjunct @NSU Abraham S. Fischler College of Education and School of Criminal Justice, Apple Teacher
1y
In education we are talking about the ethics of using AI, and I am teaching my high school students that they have a choice: use AI for good or to beat the system. Every day they will have to make that choice. My goal for next year is to teach them how to be creative and innovative thinkers by creating a space to use AI for good in my discipline.
Founder and General Partner, Buckhill Capital
1y
Thank you David Bank at ImpactAlpha for highlighting AI’s need for alignment with human values! Anthropic’s commitment to #alignmentmatters is a major reason why Buckhill Capital eagerly invested in Anthropic.