The Armilla Review #87

TOP STORY

First Draft of AI Code of Practice Released for Stakeholder Feedback

Independent experts have unveiled the first draft of the General-Purpose AI Code of Practice, marking a significant milestone in its development. Prepared by Chairs and Vice-Chairs of four thematic working groups, the draft serves as a foundation for further refinement and invites input from nearly 1,000 stakeholders who will discuss it in upcoming dedicated meetings. The Code aims to guide the trustworthy and safe development of general-purpose AI models, detailing transparency and copyright-related rules, and addressing systemic risks posed by advanced AI technologies. Stakeholders are encouraged to provide feedback through a dedicated platform, with the goal of shaping clear objectives, measures, and key performance indicators by April 2025. The drafting principles emphasize proportionality, flexibility, and consideration for the size of AI model providers, including simplified compliance options for SMEs and exemptions for open-source models. This collaborative effort seeks to balance clear requirements with adaptability to evolving technology, ensuring the Code effectively guides future AI development.

Source: European Commission


Stay ahead with the most important news from the AI industry, market, government, and academia. It's free to subscribe to the Armilla Review.


FEATURED

Five Essential AI Considerations for Financial Services Firms

In the rapidly evolving financial sector, integrating AI technologies offers significant opportunities for efficiency and enhanced decision-making but comes with unique challenges. Karthik Ramakrishnan, CEO of Armilla AI, outlines five key considerations: choosing the right AI strategy (build vs. buy), ensuring risk management and regulatory compliance, selecting appropriate AI projects for early success, addressing ethics and bias in AI decision-making, and managing AI skills alongside organizational change. He emphasizes that decisions in these areas are interconnected and should be part of a cohesive AI strategy. By adopting a holistic approach, financial institutions can effectively harness AI's potential while minimizing risks and building a robust foundation for AI adoption.

Source: Forbes


THE HEADLINES

DHS Unveils Framework for Safe AI Deployment in Critical Infrastructure

The U.S. Department of Homeland Security has released the "Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure," aiming to guide the secure deployment of AI across America's essential services. Developed by the Artificial Intelligence Safety and Security Board—a collaboration of industry leaders, academics, and public officials—the Framework offers recommendations for cloud providers, AI developers, infrastructure operators, and civil society. It addresses vulnerabilities such as AI-based attacks, targeting of AI systems, and design flaws, emphasizing the need for safety, transparency, and accountability. The Framework has received endorsements from various industry and government leaders, who highlight its role in fostering trust and cooperation in the responsible use of AI within critical infrastructure.

Source: U.S. Department of Homeland Security


AI's Potential and Pitfalls in Predicting Human Rights Violations

Artificial intelligence holds significant promise in predicting and preventing human rights violations through its capabilities in pattern recognition, predictive modeling, and real-time monitoring. By analyzing vast amounts of data, AI can identify early warning signs of abuses, enabling timely interventions. However, the use of AI in this sensitive area raises ethical concerns, including data bias, lack of transparency, and the risk of misuse by authoritarian regimes for surveillance and oppression. These challenges necessitate strict ethical guidelines and human oversight to ensure AI systems are fair, transparent, and not exploited for harmful purposes. Balancing the benefits and risks is crucial to making AI a powerful ally in safeguarding human rights.

Source: Open Global Rights


Scientist Suggests Musk Could Influence Trump Toward Tougher AI Regulations

Max Tegmark, a prominent AI researcher, believes that Elon Musk's influence on a Donald Trump administration could lead to the adoption of stricter safety standards for artificial intelligence. Pointing to Musk's support for an AI regulation bill in California, Tegmark suggests that Musk might persuade Trump to introduce measures preventing the development of artificial general intelligence (AGI), which both see as potentially dangerous if unregulated. This comes amid Trump's plans to repeal a Biden administration executive order on AI safety, highlighting a possible shift in AI policy. Tegmark notes that Musk's involvement could help Trump understand the risks of an unrestrained AI race and the need for control.

Source: The Guardian


Leaked Emails Reveal Early Tensions Among OpenAI Founders

A court case between Elon Musk and Sam Altman has led to the release of internal emails among OpenAI's early leadership, shedding light on the organization's formative challenges. The emails detail discussions about OpenAI's mission, governance, compensation, and concerns over competition with entities like DeepMind. Disagreements emerged over control and strategic direction, particularly regarding the transition from a non-profit to a for-profit model to secure more funding and talent. Tensions culminated in Musk stepping away from OpenAI's board. The correspondence offers a rare glimpse into the complexities of building an AI research organization committed to advancing technology safely and ethically.

Source: LessWrong


Study Finds Low-Precision Training Impacts AI Model Performance

A recent research study has introduced "precision-aware" scaling laws to examine how low-precision training and inference affect language model performance and cost. The researchers found that training models in lower precision reduces the effective parameter count, which can impact performance. Additionally, they discovered that degradation from post-training quantization increases with the amount of training data, potentially making additional pre-training data counterproductive when using low-precision models for inference. The study suggests that training larger models in lower precision may be compute-optimal and provides a unified framework to predict degradation from varying precisions. Validated across over 465 training runs, the findings offer insights into optimizing precision for AI model development and deployment.
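The degradation the study measures comes from representing weights with fewer bits. As a minimal, hypothetical illustration (not the paper's method), the sketch below applies symmetric round-to-nearest quantization to a toy weight tensor and shows how reconstruction error grows as bit width shrinks; all names and values here are assumptions for demonstration only.

```python
import numpy as np

def quantize_dequantize(w, bits):
    # Symmetric round-to-nearest quantization: map weights to integers
    # in [-(2^(bits-1) - 1), 2^(bits-1) - 1], then scale back to floats.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=10_000)  # toy "weight tensor"

for bits in (8, 4, 3):
    w_hat = quantize_dequantize(w, bits)
    mse = np.mean((w - w_hat) ** 2)
    print(f"{bits}-bit quantization MSE: {mse:.2e}")
```

The error rises sharply below 8 bits, which is the intuition behind the paper's finding that lower precision effectively shrinks a model's usable parameter count.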

Source: arXiv


Study Reveals ChatGPT Plus Doesn't Boost Doctors' Diagnostic Accuracy

A new study from the University of Virginia Health System has found that physicians using ChatGPT Plus did not significantly improve their diagnostic accuracy compared to those using traditional resources such as medical reference sites and internet searches. In the study, 50 doctors diagnosed complex cases; both groups achieved similar accuracy levels, though those using ChatGPT Plus worked slightly faster. Interestingly, when used independently, ChatGPT Plus outperformed both groups, indicating potential benefits if harnessed effectively. The researchers conclude that physicians may need more training on integrating AI tools into their practice, suggesting that AI should serve as an augmenting resource rather than a replacement for human expertise.

Source: University of Virginia Health System


PEOPLE & AI

In this episode of the "People & AI" podcast, we had a conversation with Darren Wilson, the Chief Product Officer at Soul Machines. We dive into the role of AI in enhancing human interactions, the potential of avatar technology, and the future of digital companions.

Here are three key takeaways from our conversation:

1. We discussed the ethical concerns surrounding AI, especially in creating digital avatars of deceased loved ones and therapy bots. Darren highlights the urgent need for industry-wide guardrails to ensure responsible AI usage, particularly in sensitive areas like medical assistance.

2. Darren showcases the incredible potential of avatar technology, emphasizing diverse use cases like language practice partners, personal coaching, and the entertainment industry.

3. Darren provides advice for product managers navigating the dynamic landscape of AI. He underscores the importance of being adaptable, focusing on customer value, and swiftly changing strategies that don't work due to accelerated industry cycles.

Tune in for an inspiring conversation on the future of AI and avatar technology!

Apple: https://podcasts.apple.com/us/podcast/avatars-digital-companions-and-ethical-considerations/id1612854047?i=1000659616406

Spotify: https://open.spotify.com/episode/63z1uFdvfMIu8fD4N1m3NV?si=IlvxmUwGScCe4tcUWpMatA
