- Connecticut Bill Asks State to Choose AI Tool for Schools: The proposed Connecticut legislation aims to identify a state-sanctioned AI tool to bolster educational outcomes. The bill highlights the potential of AI in personalizing learning while sparking debate over privacy and effectiveness.
- How Google Cloud AI and Assured Workloads Can Enhance Public Sector Security, Compliance, and Service Delivery at Scale: Google Cloud's AI and Assured Workloads promise to revolutionize public sector operations with enhanced security and compliance. The services look to streamline data management and bolster service delivery, marking a significant step in government tech modernization.
- How NIH’s National Library of Medicine is Testing AI to Match Patients to Clinical Trials: The NIH's National Library of Medicine is piloting AI to improve how patients are matched to clinical trials. This initiative could greatly improve patient outcomes by efficiently connecting individuals with relevant medical studies (a minimal eligibility-matching sketch appears after this list).
- AI Models Measurement: The New York Times examines the challenge of measuring AI model performance in a standardized way. Insightful commentary reflects on how moving beyond benchmarks can improve AI reliability and trustworthiness in practical applications.
- UK Drafting AI Regulations But Not Rushing to Introduce Them: The UK is cautiously drafting AI regulations, aiming to pair innovation with responsible policy. The deliberately paced approach seeks to balance technological growth with AI's societal impact.
- Lots of Talk About AI, But Are Agencies Spending Money on It?: Despite extensive discourse on AI's potential, there's skepticism about actual financial commitment from agencies. This piece questions whether discussions are translating into tangible investments in AI technology.
- How the US Government is Regulating AI: CNBC sheds light on the US government's measures to regulate AI, addressing the balance between fostering innovation and ensuring ethical use. The article underscores the complexities of legislating such a rapidly evolving technology.
- Reimagining search: How AI and Google Search turbocharges patent examinations at USPTO: In a groundbreaking partnership, the USPTO has integrated AI and Google Search to revamp patent search capabilities, enabling examiners to sift through vast data with unparalleled precision. This initiative showcases a significant leap towards modernizing the patent examination process, emphasizing AI's role in enhancing accuracy and efficiency while keeping the examiner in control. It's a bold stride not just for the USPTO, but as a beacon for digital transformation in federal agencies.
- NSA, Partners Release Guidance for Deploying Secure AI Systems: The National Security Agency, along with both U.S. and international allies, has introduced guidance for the secure deployment of AI systems. This initiative aims to help organizations implement AI technologies responsibly, with a focus on countering potential security threats. Highlighting the dual-edged nature of AI, the guidance serves as a vital tool for enhancing cybersecurity in the age of artificial intelligence, demonstrating NSA's commitment to navigating the complex landscape of AI security.
- TSA looks to AI to enhance x-ray screenings of travelers’ luggage: The TSA is stepping up its game by planning to integrate AI with x-ray screenings to detect prohibited items in carry-on luggage more efficiently. While current technology highlights explosives, the AI aims to assist officers by identifying a wider range of prohibited items, offering a "machine assist" for a more focused review. This effort not only showcases TSA's commitment to leveraging cutting-edge technology for security but also aligns with ensuring the safe development and application of AI in critical public domains.
- Education Sector in Constant State of Flux, Driven by AI: The education landscape is continuously evolving, heavily influenced by the advent of AI and online learning technologies. While platforms like edX and Khan Academy explore AI to personalize and enhance learning, the sector grapples with challenges such as ensuring technology serves meaningful educational purposes and addressing equity and access issues. Amidst these developments, the core question remains: How can technology, especially AI, be harnessed to solve fundamental educational problems without exacerbating existing inequities?
- NIST adds 5 new members to its AI Safety Institute: The U.S. AI Safety Institute at NIST welcomes five new experts to drive forward initiatives in AI safety and standards, aligning with President Biden's 2023 executive order. This team, comprising former leaders from OpenAI, USC, and Stanford University, will focus on national security, AI model testing, and international cooperation for developing AI that is both safe and trustworthy. Their collective expertise signals a strong commitment to positioning the U.S. at the forefront of responsible AI development and regulation.
- Gina Raimondo Unveils New US AI Safety Institute Leaders: Commerce Secretary Gina Raimondo introduces a new leadership team at the U.S. AI Safety Institute (AISI) within NIST, highlighting the government's commitment to developing and regulating AI technologies responsibly. The team, including experts from academia and research, will focus on evaluating AI models for national security and aligning U.S. AI standards with global practices. This move underscores the emphasis on AI safety and the collaborative effort between the public sector and international allies to address the challenges posed by AI technologies.
- House committee introduces 5 guardrails for internal AI use: The Committee on House Administration has unveiled five key guardrails for employing artificial intelligence within the U.S. House of Representatives, focusing on human oversight, comprehensive policies, rigorous testing, transparency, and workforce education. These guidelines, developed from discussions with AI and legislative experts, aim to navigate the responsible integration of AI in legislative operations, ensuring efficiency without compromising oversight. This initiative marks a proactive step towards blending AI technologies with the legislative process, emphasizing security, policy coherence, and the potential to learn from state and local government AI applications.
- IARPA Program Seeks Algorithms That Can Re-Identify, Geolocate People & Objects Across Disparate Recordings: IARPA's new Video LINCS program aims to create algorithms capable of re-identifying and geolocating people, vehicles, and objects within video footage from various sensors, enhancing intelligence analysts' ability to identify threats in extensive video data. Spanning 48 months and unfolding in three phases, the program focuses on advancing the analytical capabilities of intelligence operations. With MITRE, MIT Lincoln Laboratory, and NIST as partners, Video LINCS seeks to refine the process of associating and locating subjects across non-collaborative sensor data (a toy re-identification sketch appears after this list).
- 80% of AI decision makers are worried about data privacy and security: A recent study by Coleman Parkes Research, sponsored by SAS, reveals significant concerns among AI decision-makers regarding data privacy, security, and regulatory compliance. Despite the optimism surrounding generative AI's potential, challenges like integrating AI into existing systems, talent shortages, and predicting costs are major hurdles. Furthermore, most organizations lack a comprehensive governance framework for generative AI, highlighting the need for strategic planning, investment in technology that ensures governance and explainability, and focusing on high-value, human-centric use cases.
- How DOD and Google Public Sector partnered using AI to fight cancer: The Department of Defense, in collaboration with Google Public Sector and other partners, has pioneered the augmented reality microscope (ARM) to enhance cancer diagnosis. This AI-enhanced tool offers a significant leap in diagnosing cancer by digitizing tissue samples for more accurate analysis, bridging the gap caused by the declining number of healthcare specialists in the U.S. The development of ARM underscores the importance of public-private partnerships in advancing healthcare through innovative technologies, promising a new standard of care in pathology and a hopeful stride towards eradicating cancer.
- With 2023 tax season in the rearview, IRS commissioner eyes expansion of AI capabilities: Following a successful tax season marked by unprecedented customer response times, IRS Commissioner Danny Werfel is focusing on leveraging AI to further enhance taxpayer service in future seasons. Utilizing Inflation Reduction Act funds, the IRS aims to integrate AI for more efficient problem resolution and to bolster the tax system's integrity through precise auditing. This strategic approach not only aims to streamline taxpayer interactions but also targets a more equitable and effective enforcement process, especially benefiting vulnerable communities susceptible to fraud.
- Agency leaders discover the power of AI to scale and support citizen services: Public sector leaders across Michigan, California, and Wisconsin are harnessing the potential of AI and generative AI in collaboration with Google Public Sector to enhance the delivery and efficiency of citizen services. Innovations like Dearborn's multilingual AI chatbot and California's streamlined healthcare processes exemplify AI's role in modernizing government operations and improving accessibility. These efforts underscore the transformative impact of public-private partnerships in deploying AI solutions that meet evolving constituent needs while addressing staffing and budget challenges.
- Commerce adds five members to AI Safety Institute leadership: The Department of Commerce has expanded the AI Safety Institute's executive team at the National Institute of Standards and Technology with five new members, including professionals from academia, former OpenAI personnel, and current administration officials. Announced by Commerce Secretary Gina Raimondo, these appointments underscore the Institute's commitment to fostering the safe development and application of AI technologies. The AI Safety Institute is set to play a pivotal role in implementing safety guidelines, evaluations, and research to support President Joe Biden's executive order on AI, alongside developing international partnerships and a consortium for collaborative advancements in AI safety.
- Commerce requests information about AI, open data assets, data dissemination: The Department of Commerce is actively seeking insights on making its data assets AI-ready and developing standards for data dissemination, in light of the rapid evolution of generative and general AI technologies. The initiative aims to enhance the accuracy and integrity of data usage by AI systems, focusing on improving guidance, metadata, and licensing for AI ingestion and research analytics. This effort underscores Commerce's strategic mission to expand opportunities through data, ensuring its accessibility and understandability for AI applications while maintaining the data's semantic integrity.
- CMS’s financial office is using LLM pilot to combat loss of institutional knowledge: Facing challenges like an aging workforce and a constant workload, CMS's Office of Financial Management is leveraging artificial intelligence, specifically a large language model (LLM) pilot using Meta's Llama 2, to preserve and enhance institutional knowledge. This initiative aids in rapidly training new staff and providing reliable, context-specific information, thereby facilitating more accurate and expedient decision-making within CMS (a retrieval-augmented prompting sketch appears after this list).
- Columbia, S.C., May Use AI on Garbage Trucks to Enforce Codes: Columbia is considering deploying cameras equipped with AI on city garbage trucks to identify code violations, such as overgrown grass or excessive leaves in yards, by analyzing images taken along their routes. This program, developed in partnership with City Detect, aims to enhance code enforcement efficiency and city appearance. While the initiative has received mixed reactions from residents, with concerns ranging from increased government surveillance to privacy issues, it represents a high-tech approach to urban maintenance and regulation enforcement (a toy image-screening sketch appears after this list).
- TSA Chief Sees Potential for AI to Reduce Burdens on Security Screeners: The TSA is integrating AI to enhance luggage screening and reduce workload on security officers, aiming for operational efficiency and workforce flexibility.
- Treasury giving agencies a fighting chance to prevent fraud: The Treasury Department is taking significant strides to combat fraud, leveraging machine learning tools to scrutinize paper checks for anomalies, thus preventing potential fraud amounting to over $500 million. By enhancing real-time account verification and engaging in collaborative efforts with the private sector, the Treasury aims to shift agencies from a reactive to a proactive stance against fraud, underscoring the importance of utilizing innovative technologies and databases to ensure payment integrity (see the anomaly-detection sketch after this list).
- AI Report Shows ‘Startlingly Rapid’ Progress—And Ballooning Costs: The Artificial Intelligence Index Report 2024 highlights AI's rapid advancements, with AI now outperforming humans in complex tasks like reading and math. However, the escalating costs and environmental impacts of training these models are raising concerns. The report calls for new benchmarks to evaluate AI, noting that academic efforts are shifting towards analyzing models from the industry. With regulatory interest in AI skyrocketing, there's a push for responsible AI use amidst ethical concerns and the potential for a global divide in AI perception.
- Foreign adversaries using AI to push disinformation, crumble election process, US warns: China, Russia, and Iran are reportedly leveraging AI-driven propaganda to undermine confidence in U.S. elections, according to a joint report from the Cybersecurity and Infrastructure Security Agency, the Office of the Director of National Intelligence, and the FBI. These efforts, which include creating fake social media profiles and distributing disinformation, aim to deepen partisan divisions. The report highlights the sophistication of these campaigns, which use techniques like typosquatting and voice cloning, and underscores a call for increased awareness and collaboration to combat these threats.
- Leveraging RPA, AI and Automation in Government Processes: This article discusses how government agencies are turning to Robotic Process Automation (RPA) and Artificial Intelligence (AI) to address the cybersecurity staff shortage and streamline operations. The General Services Administration was among the pioneers, launching RPA bots to reduce workload. Beyond standard RPA, agencies are exploring AI-enhanced RPA and standalone AI applications for more complex tasks. While RPA excels in repetitive, rules-based tasks, AI introduces the capacity for cognitive automation, handling tasks that require decision-making and learning. The piece underscores the importance of careful planning and management oversight in implementing these technologies effectively (a short rules-versus-classifier sketch appears after this list).
- How Will the Advent of GenAI Impact State IT Workforces?: A NASCIO report suggests generative AI (GenAI) will enhance productivity and service delivery in state governments without displacing jobs. It highlights a need for workforce reskilling and addressing skills shortages for successful GenAI integration. States are actively developing policies and investing in training for GenAI adoption.
- TSA looks to AI to enhance x-ray screenings of travelers’ luggage: The TSA is developing AI technology to improve the detection of prohibited items in carry-on luggage, aiming to assist officers in identifying explosives, firearms, and knives more efficiently. This AI-enhanced system is part of TSA's broader security enhancements, including the use of facial recognition.
- Senate bill aims to bring more private sector participation to federal AI innovation: The Future of AI Innovation Act introduced by bipartisan senators focuses on bolstering U.S. leadership in AI through enhanced collaboration with the private sector. It proposes federal support for standard development, AI testbed competitions, and international alliances for AI standards, aiming for a consensus in AI development.
- Fight Over AI Regulation Continues in Connecticut: Connecticut's legislature is finalizing a bill focusing on AI, emphasizing consumer protection and job creation amidst industry pushback for its potential to hamper innovation. The bill aims to address issues like sextortion and deep fakes, proposing an online academy for AI training and emphasizing the need for AI literacy.
- Federal CIO calls on Congress to fund Technology Modernization Fund: Clare Martorana, Federal CIO, advocates for the Technology Modernization Fund (TMF) amidst efforts to retract funding. Highlighting the TMF's role in advancing government tech projects, Martorana discusses the initiative to accelerate tech deployment and the ongoing development of AI governance and infrastructure across federal agencies.
- Keeping public sector data private and compliant with AI | FedScoop: Amidst the increasing incorporation of AI in public sector operations, Google Cloud’s Gemini and Google Workspace are setting new standards for data privacy and security. Featuring client-side encryption and comprehensive security frameworks, these tools promise to keep data within user-defined trust boundaries, showcasing efforts to balance productivity enhancements with stringent privacy safeguards (a client-side encryption concept sketch appears after this list).
- DIA CIO Doug Cossa Shares Top 5 Priorities & Gen AI Opportunities - GovCon Wire: Defense Intelligence Agency CIO Doug Cossa outlines his top priorities, with a notable emphasis on leveraging AI to uncover unseen opportunities rather than simply streamlining existing processes. This vision for AI, particularly in intelligence and security domains, aligns with broader strategic efforts to refresh aging infrastructure and ensure cybersecurity, underscoring a forward-looking approach to technology adoption.
- State Department encouraging workers to use ChatGPT | FedScoop: The State Department's push for generative AI use among its workforce, including the deployment of an internal chatbot, reflects the Biden administration's broader strategy to embrace AI for various tasks. This initiative, aimed at enhancing efficiency in tasks such as document declassification and information synthesis, signifies a proactive stance on adopting AI technologies while navigating the complexities of data privacy and security.
- "New Mexico Laboratory Unveils Supercomputer to Advance AI": A New Mexico lab introduces a supercomputer to fire up AI research, aiming to crack complex problems from climate to cybersecurity.
- ?"UK Weighs AI Regulation: Is Mandatory Algorithm Sharing on the Horizon?": The UK contemplates groundbreaking AI regulations, potentially requiring companies to divulge their closely guarded algorithms.
- "AI Skills Gap: Bridging the Public-Private Divide": Public sector AI faces a skills canyon, sparking vital private-sector partnerships to future-proof government tech talent.