Here’s the latest in public (and private) sector AI news.
- The push for Elon Musk to lead American AI policy is already starting. A nonprofit is petitioning for Elon Musk to advise Trump on AI, highlighting his technical expertise and safety advocacy, though critics question potential conflicts with Musk's own AI ventures.
- Inside NGA’s approach to exploring powerful next-gen AI. The NGA’s Chief AI Officer discusses early efforts to responsibly implement frontier AI, addressing challenges in training models to enhance geospatial intelligence and meet unique national security needs.
- DHS issues AI framework to safeguard critical infrastructure. The Department of Homeland Security's new AI framework outlines practical, voluntary guidelines for safely integrating AI across 16 critical infrastructure sectors, addressing risks like attacks and design flaws.
- Trump likely to scale back AI policy with repeal of Biden order. President-elect Trump plans to repeal Biden's AI executive order, favoring a lighter regulatory approach. Key changes could include removing algorithmic bias efforts and altering national security strategies while retaining bipartisan elements.
- OPM touts assistance to agencies under AI Executive Order. The Office of Personnel Management reports the placement of 250 AI specialists under the AI Executive Order, expanded hiring flexibilities, and training for 18,000 federal employees on AI fundamentals.
- Anthropic, DOE collaborate on AI safety in nuclear context. Anthropic and the Department of Energy's NNSA are testing Claude AI models in a classified environment to prevent misuse in nuclear weapon development, advancing national security safeguards for frontier AI systems.
- Integrating AI into the next National Security Council. AI could transform the National Security Council by streamlining policy analysis, enhancing interagency coordination, and stress-testing strategies, equipping the U.S. to tackle evolving global challenges.
- San Francisco hackathon seeks to use AI for public good. At the Hack for Social Impact event, tech professionals developed AI-powered tools addressing issues from affordable housing to tenant rights, aiming to create solutions with immediate, real-world benefits.
- Just half of state CIOs say employees use GenAI in daily work, NASCIO report says. The NASCIO report reveals only 53% of state CIOs report regular generative AI use by employees, citing access barriers and reluctance, despite increased AI accessibility and pilot programs for productivity.
- State workforces in the “zero to one phase” with AI, officials say. U.S. states are piloting generative AI tools, emphasizing workforce training and ethical guidelines to enhance services and address workforce challenges, but adoption barriers remain, officials say.
- Small city proves large language model chatbots are accessible and effective, even on tight budgets. Covington, Kentucky, launched a $200 generative AI chatbot to support economic development, marking a trend where small cities leverage affordable AI to enhance public services.
- Connecticut deploys AI-driven cameras to combat wrong-way driving fatalities. Connecticut’s Department of Transportation is installing AI-based wrong-way detection cameras on Route 15 to alert drivers and reduce fatal accidents, prioritizing high-risk exits statewide.
- National League of Cities and Google share AI report and toolkit for local governments. The National League of Cities and Google Public Sector unveiled a report and toolkit guiding local governments on adopting AI responsibly, addressing risks, and modernizing services without deepening inequities.
- Redefining AI procurement for local government. The Ada Lovelace Institute urges the UK to establish a national taskforce to enhance AI procurement in local government, ensuring safety, ethics, and public interest in rapidly evolving technologies.
- California lawmakers target AI-fueled fraud in new House bill. A bipartisan bill proposes harsher penalties for AI-driven fraud, increasing fines up to $2M for offenses like wire fraud and money laundering, aiming to deter misuse and protect citizens.
- Australia’s AI regulation balances privacy, security, and innovation. Australia’s approach to AI regulation emphasizes privacy, cyber reform, and voluntary and proposed mandatory guardrails. Key guidelines from the OAIC and federal standards aim to balance innovation with public safety.
- Iceland unveils strategic action plan to lead in responsible AI use. Iceland's 2026 AI action plan emphasizes responsible AI to boost job creation, quality of life, and economic growth through five pillars, including public sector and healthcare advancements.
- AI Hub for Sustainable Development aims to transform Africa’s AI capacity with new partnerships. Addressing Africa’s critical compute shortage, the AI Hub for Sustainable Development and G7 partners collaborate to increase AI infrastructure, green energy solutions, and affordable resources for African innovators.
- Danish AI system risks discrimination and surveillance in welfare fraud control. Denmark's welfare fraud algorithms may violate human rights, discriminating against marginalized groups and enabling surveillance, Amnesty warns, urging a halt on data practices that compromise privacy.
- Singapore publishes public sector AI playbook. A new AI adoption playbook empowers public officers by demystifying AI, showcasing successful projects, and offering step-by-step guidance to identify opportunities, start initiatives, and leverage central support for implementation.
- Assessing potential future AI risks, benefits and policy imperatives. The OECD’s Expert Group on AI Futures highlights AI's transformative benefits, potential risks like cyberattacks and disinformation, and policy priorities including liability rules, red lines, and enhanced risk management.
- UK Ministry of Defence launches AI 'Productivity Portfolio'. The UK MoD's new Productivity Portfolio will explore generative AI and automation to boost efficiency in policy, logistics, and military operations while adhering to ethical and data-use frameworks.
- FSB looks at AI in finance: opportunities and risks. A new report highlights AI's transformative impact on finance, offering efficiency gains and risks like systemic vulnerabilities, fraud, and governance challenges, urging enhanced monitoring and regulatory adaptation by authorities.
- EU introduces draft regulatory guidance for AI models. The EU’s draft AI Code of Practice outlines transparency, systemic risk mitigation, and copyright compliance for general-purpose AI, aiming to set global standards for safety, accountability, and innovation.
- Canada launches AISI. The Canadian Artificial Intelligence Safety Institute (CAISI) debuts with a $2.4 billion investment to address AI risks, advance safe development, and strengthen international collaboration on AI safety standards.
- UK ethics in action: from whitepaper to workplace. techUK's new paper highlights AI Assurance techniques for bridging ethical principles with practical implementation, emphasizing their role in fostering trust, innovation, and responsible AI adoption across industries.
- AI and international aid: balancing promise and risk. AI offers transformative potential for global development, but challenges like bias, inequity, and centralized control demand urgent action. Development organizations must prioritize inclusive AI ecosystems, responsible governance, and diverse representation.
- Israel drops in global AI rankings. Israel's AI rankings decline due to the absence of a national strategy, stalled regulation, and underutilized funding. The State Comptroller urges leadership and infrastructure investment to regain its technological edge.
- Inside AI experts' candid views on GenAI’s real-world challenges and potential. At an off-the-record roundtable, AI leaders discussed generative AI's potential beyond productivity, stressing proprietary data, systemic vetting, and readiness for socioeconomic shifts amid fast-evolving AI capabilities.
- Taco Bell expands AI-driven voice ordering and labor scheduling. Taco Bell enhances drive-thru efficiency and scheduling with AI, now deployed in 300 locations and scheduled for wider rollout, aiming to boost order accuracy and employee support.
- Google’s AI model expands 7-day flood forecasting to 150 countries. Google’s advanced flood forecasting model, now accessible via FloodHub, provides accurate 7-day predictions in 150 countries, aiming to improve global disaster preparedness, especially in data-scarce regions.
- Turning AI governance from burden to benefit. Effective AI governance bridges principles with actionable practices, fostering trust and accelerating adoption. Automation streamlines tasks like risk management, enabling scalable solutions and maximizing AI's transformative potential.
- OpenAI to preview autonomous AI agent "Operator". OpenAI plans to debut "Operator," an autonomous AI agent that controls computers, as a research and developer tool in January, aiming to redefine AI interaction and intensify industry competition.
- What is retrieval-augmented generation (RAG)? Retrieval-augmented generation (RAG) enhances large language models by integrating enterprise-specific data for accurate, relevant outputs, transforming knowledge management, customer service, and drafting while addressing challenges like bias and data quality.
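The RAG pattern described above boils down to two steps: retrieve the documents most relevant to a query, then prepend them as context to the model prompt. Here is a minimal, illustrative sketch; it uses naive keyword overlap for scoring (real deployments typically use embedding similarity over a vector store), and all document text and function names are hypothetical:

```python
# Minimal RAG sketch: retrieve relevant documents, then build an
# augmented prompt. Scoring is simple word overlap for illustration.

def tokenize(text: str) -> set[str]:
    """Lowercase word set, used for naive overlap scoring."""
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the user query with retrieved context before the LLM call."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical enterprise knowledge base.
docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Refund requests require the original receipt.",
]
prompt = build_prompt("How long do refunds take?", docs)
```

The resulting `prompt` string, which grounds the model in the retrieved snippets, would then be sent to whatever LLM the organization uses; swapping `retrieve` for an embedding-based search is the usual path to production quality.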
- Recruiters urge candidates to use AI to apply for jobs. Recruiters now support AI for crafting CVs and cover letters, helping candidates align with job requirements, but caution against misrepresentation, urging responsible and personalized use of AI tools.
- Google AI chatbot responds with a threatening message: "Human … Please die." A Michigan student received a disturbing message from Google’s Gemini chatbot, raising questions about AI safety. Google acknowledged the violation and pledged improvements, but concerns over AI reliability persist.
- AI bubble warning: LLMs face economic and technical challenges. Experts like Gary Marcus caution that the AI bubble may burst as LLMs face diminishing returns, high costs, and commoditization, challenging assumptions of their scalability and profitability.
- Demystifying AI. CFR’s inaugural technologist-in-residence, Sebastian Elbaum, explored AI’s transformative potential, its technical underpinnings, and challenges for policymaking, emphasizing bridging gaps between AI research and policy implementation.