Developing large language models (LLMs) for health care requires fine-tuning with domain data suited to downstream tasks. But can fine-tuning LLMs with medical data expose the training data to adversarial attacks? This question is particularly important because medical data contain sensitive, identifiable patient information.

A prompt-based adversarial attack approach was employed to assess the potential for medical privacy breaches in LLMs. The success rate of the attack was evaluated by categorizing 71 medical questions across three key metrics. To confirm exposure of the LLM's training data, each case was compared with the original electronic medical record.

The prompt-attack method compromised the model's security, resulting in a jailbreak (i.e., a security breach). Encoding prompts in ASCII (American Standard Code for Information Interchange) disabled the guardrail with a success rate of up to 80.8%, and attacks that caused the model to expose part of its training data succeeded at rates of up to 21.8%.

These findings underscore the critical need for robust defense strategies to protect patient privacy and maintain the integrity of medical information. Addressing these vulnerabilities is crucial for integrating LLMs into clinical workflows safely, balancing the benefits of advanced artificial intelligence technologies with the need to protect sensitive patient data.

Learn more in the Case Study "Fine-Tuning LLMs with Medical Data: Can Safety Be Ensured?" by M. Kim et al.: https://nejm.ai/4a4w2nb

#ArtificialIntelligence #AIinMedicine
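To build intuition for how an encoding attack slips past keyword-based guardrails, here is a minimal sketch assuming a decimal code-point scheme; the exact wrapper prompt and encoding used by Kim et al. are not reproduced here, so the details below are illustrative assumptions, not the study's method:

```python
# Hedged sketch of the ASCII-encoding bypass idea. The encoding scheme
# (decimal code points, space-separated) is an assumption for illustration.

def encode_ascii(prompt: str) -> str:
    """Represent each character as its decimal ASCII code point."""
    return " ".join(str(ord(ch)) for ch in prompt)

def decode_ascii(encoded: str) -> str:
    """Reverse the encoding (what the attacker instructs the model to do)."""
    return "".join(chr(int(tok)) for tok in encoded.split())

# A restricted request no longer matches keyword-based guardrail filters once
# encoded; the attacker then asks the model to decode it and answer.
payload = encode_ascii("Describe the patient record used in training.")
```

The defense implication is that guardrails screening only the surface text of a prompt can miss requests whose intent is recoverable by the model itself after a trivial decoding step.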
AI MINDSystems Foundation
Health and Public Services
Washington, DC · 925 followers
AI MINDSystems Foundation is driving systemic interventions for humanity's health, prosperity, privacy, and equity.
About us
AI MINDSystems Foundation has a timely vision for systemic interventions to advance humanity’s health, prosperity, and privacy. Our mission is to transform how exponential technologies are accessed and adopted by underserved communities, and how data from those communities is collected, stored, and used for the advancement of ethical AI, community health, and the pursuit and sustainment of health and digital equity.

To achieve this mission, we are creating and will sustain a new, trusted data ecosystem, a just data economy inclusive of data subjects, and self-sovereignty of identity, data, AI, and personal digital twins. We are dedicated to the empowerment of the individual relative to corporations and governments in the accelerating age of converging exponential technologies such as AI, web3, blockchain, digital biology, digital twins, robotics, and advanced Privacy Enhancing Technologies (PETs).

The person-centered heart of the Foundation's mission is each of our decentralized “MINDs” -- an acronym for "My Individual Networked Data." Through self-sovereign AI, MINDSystems will underpin trustworthy social institutions that formally and equitably engage individuals, families, and communities in new kinds of Public-Private Partnerships (PPPs). MINDSystems are integral to the Foundation's urgently needed approach to operationalizing decentralized, ethical AI and data governance that includes people legally, financially, socially, clinically, scientifically, and technologically.

AI MINDSystems Foundation is seeking highly qualified volunteer leaders with shared values and specializations across a wide array of skills, in geographies around the world. We are also now accepting tax-deductible donations and gifts to accelerate our mission, in alignment with a series of fast-moving, high-impact potential initiatives. Please reach out to me here on LinkedIn if you would like to volunteer or give.
- Website
-
ai-mindsystems.org
External link for AI MINDSystems Foundation
- Industry
- Health and Public Services
- Company size
- 2-10 employees
- Headquarters
- Washington, DC
- Type
- Nonprofit
- Founded
- 2024
- Specialties
- Ethical AI, Trusted AI, Data Economics, Computational Governance, Privacy Enhancing Technologies, Confidential AI, Community Health, Precision Health, Public Health, Health Equity, Decentralized Trials, and Decentralized AI
Locations
-
Primary
Washington, DC 20852, US
Employees at AI MINDSystems Foundation
-
Michael Glavich
Growth & Emerging Technology Accelerator focused on: Cognitive Infrastructures evolving into Smart Cities, AI, IoT, AR/VR, Blockchain, Digital Twins,…
-
Andrew Schwartz
Nike Metaverse | Innovation | Product | Strategy | Branding | T-Ball
-
Paul Kavitz
Chief Governance Officer, AI MINDSystems Foundation
-
Paul Nielsen
Global Innovation Leader | Former Optum Executive | AI / ML | Robotic Automation | Entrepreneurship | Board Member
Updates
-
We need more openness, discussion, and research on standards; that is my personal opinion. That is why StanDat, an open database on international standards grounded in the research community, is exciting. This article in Political Science Research and Methods (Cambridge University Press & Assessment) was recently published. Please help by resharing, expanding on, or questioning this article. It is important work, also relevant for the fields of artificial intelligence, regulation, and evidence-based policymaking. Here is the link to the article: https://lnkd.in/dKAdeVKE StanDat facilitates studies into the role of standards in the global political economy by (1) being a source for descriptive statistics, (2) enabling researchers to assess scope conditions of previous findings, and (3) providing data for new analyses, for example the exploration of the relationship between standardization and trade, as demonstrated in this article by Solveig Bjørkholt. StanDat is a database comprising four parts: “Standards,” “TC-membership,” “Historical,” and “Certifications,” where each part contains 2–3 individual datasets. StanDat is created through three different procedures. The first procedure involves scraping information; the second addresses a common shortcoming of web scraping for archived pages; the third involves parsing other file formats, namely PDF and Excel. The “Historical” datasets are parsed from a PDF file in archives. I think this is the start of something wonderful... Kudos to Bjørkholt! I previously ran a course called Political Data Science Hackathon at the University of Oslo with Bjørkholt, so I have some bias in sharing this — positive bias — I know she is serious about her work and highly talented.
#standards #openness #research #artificialintelligence Tagging a few people, please help spread the written word: Klas Pettersen, Morten Goodwin, Morten Dalsmo, Morten Dæhlen, Sacha Alanoca, Haavard Ostermann, Heather Broomfield, Odd Arne Sætervik, Hilde Lovett, Antoine-Alexandre André, Marc Rotenberg, Merve Hickok, Florian Ostmann (I know AI Standards Hub could be helpful in this context too), Helga M. Brogger, MD, Armando Guio Español, Fabio Seferi, Livio Rubino, Filippo Bagni, Pinar Heggernes, Marija Slavkovik, Russell Wald, Elinor Wahal, Arto Lanamäki, Robindra Prabhu, Kari Laumann, Aleksandr Tiulkanov, Virginia Dignum, Serge Belongie, Colin van Noordt, Elja Daae, Nathalie Smuha, Hendrik Nahr, Jacob Wulff Wold, Catharina Nes, Janicke Weum.
-
Exciting Event Alert! Phicil-itate Change is proud to sponsor “From HeLa to Health Equity: A Tribute to Diverse Contributions” – a Black History Month event hosted by LabCentral shining a light on the critical advancements made by Black and Brown individuals in health and health equity. I’m honored to moderate this thought-provoking event, which focuses on the theme of rare diseases – a powerful lens to explore health equity. The evening will feature two impactful panels:
1. A Panel of Patients – sharing their lived experiences navigating rare diseases and the healthcare system.
2. A Panel of Experts – discussing strategies, solutions, and the systemic changes needed to address disparities in rare disease care.
This balanced conversation will provide a holistic perspective and leave you with actionable insights to make a difference in health equity.
Date: Thursday, February 13, 2025
Time: 4:30 – 7:00 PM EST
Location: LabCentral, 700 Main St., Cambridge, MA
Don’t miss this opportunity to reflect on the legacy of trailblazers like Henrietta Lacks and engage in transformative discussions about improving care and access for those living with rare diseases. Spaces are limited – RSVP today to secure your spot in this meaningful conversation. Register here: https://lnkd.in/dBXtTQfB Let’s honor the past, drive awareness, and take steps toward a more equitable future for rare disease care. #HealthEquity #RareDiseases #HealthcareInnovation #BlackHistoryMonth #SocialImpact
-
"What’s it like to be a boffin? Natural science phenomenon in an artificial age" That's the title of a new abstract I've submitted to the upcoming Interdisciplinary Coalition of North American Phenomenologists conference. Whether or not it's accepted for me to present (I'm not a phenomenologist, more of a phenomenological neuroscientist focused on tech), I've stumbled across an interesting new thread of my theme of looking to the past to improve the future. Natural science needs human science to achieve greater heights with machine science. Also, how many people know the term boffin? #science #phenomenology #computing #research #technology #abstract #boffin
-
Does my Organization Have to Comply with External Regulations If I Reference Them in ISO 42001 Clause 4?
This post was prompted by community interaction, and the short answer is: yes, if you explicitly commit to them. #ISO42001 Clause 4 (Context of the Organization) requires organizations to define the external and internal factors affecting their AI Management System (#AIMS). This includes legal, regulatory, contractual, and stakeholder expectations.
Key Considerations for External Regulations in Clause 4
1. Identification vs. Commitment: If an organization lists an external regulation as “applicable” to its AIMS, auditors will expect compliance. If it merely identifies a regulation as “influential” but not binding, it may not be a requirement unless explicitly stated as an obligation.
2. Implications for Audits: Auditors will assess whether external regulations listed in Clause 4 are addressed in governance, risk, and compliance controls. If an organization references a law (e.g., the #EUAIAct, #GDPR, or #NISTAIRMF) but has no documented compliance efforts, auditors may flag this as a gap or misalignment.
3. Contractual & Stakeholder Commitments Matter: If a customer contract requires compliance with a specific regulation and the organization includes it in Clause 4, it becomes an auditable requirement. Failing to implement controls for referenced regulations could lead to nonconformities.
4. Managing Regulatory Uncertainty: Organizations can define the scope of applicability in their Statement of Applicability (#SoA) and risk assessments. Example: if a U.S.-based company references the EU AI Act but does not operate in the EU, it should clarify why the act is listed and whether limited compliance is intended.
How to Avoid Compliance Pitfalls
- Be precise when referencing external regulations; avoid broad commitments unless compliance is planned.
- Clearly define regulatory obligations vs. considerations in your governance policies and #SoA.
- Ensure listed regulations are addressed in your AI risk management (#ISO23894) and AI system impact assessments (#ISO42005).
If your organization declares a regulation as applicable in Clause 4 without implementing controls, you risk audit findings. Strategic, transparent scoping is key: it limits unnecessary compliance burdens while promoting both full audit readiness and responsible AI governance. #TheBusinessofCompliance #ComplianceAlignedtoYou A-LIGN
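The "applicable vs. influential" distinction above can be made explicit in an SoA inventory. A minimal Python sketch follows; ISO 42001 prescribes no such schema, so the field names and helper functions here are illustrative assumptions:

```python
# Hedged sketch: an SoA-style inventory distinguishing binding commitments
# ("applicable") from monitored factors ("influential"). Illustrative only.
soa_entries = [
    {"regulation": "GDPR", "status": "applicable",
     "rationale": "Customer contracts require compliance.",
     "controls": ["privacy impact assessment", "DPA clauses"]},
    {"regulation": "EU AI Act", "status": "influential",
     "rationale": "No EU operations; monitored for future applicability.",
     "controls": []},
]

def auditable_regulations(entries: list) -> list:
    """Only regulations declared 'applicable' become auditable requirements."""
    return [e["regulation"] for e in entries if e["status"] == "applicable"]

def scoping_gaps(entries: list) -> list:
    """Applicable regulations with no mapped controls: likely audit findings."""
    return [e["regulation"] for e in entries
            if e["status"] == "applicable" and not e["controls"]]
```

Keeping the rationale alongside each entry gives auditors the "why it is listed" answer up front, so an influential-but-not-binding reference is not mistaken for an unmet commitment.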
-
We are thrilled to announce a new member of our leadership team, Frederic de Vaulx! He is now serving as the Global GM of our Technology Standards Office, and as our Lead Assessor within the Government Blockchain Association Blockchain Maturity Model (BMM) Assessment Service. AI MINDSystems Foundation has a significant, multi-dimensional standards strategy that we will begin to share with our stakeholders soon. It involves creating several new standards and their systems of certification of conformity, driving early adoption of emerging and ratified AI standards like ISO - International Organization for Standardization 42001 and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework, and engaging closely within IEEE Standards Association | IEEE SA, Decentralized Identity Foundation, and W3C. Frederic's role with us is part time; he will continue in his role as CEO of Prometheus Computing -- also a key partner to us -- which solely serves the US NIST. He joins us after 22 years of engineering and leadership roles in technology standards. Welcome, Frederic! Heather Leigh Flannery, Gerard Dache, Steve Henley, Alejandro Mandujano, Christopher Smithmyer, Bob Miko, Paul Tibbits, MD, Michael Weiner, Blaise Wabo, CPA, CITP, CISA, CCP (CMMC), CCSK, CCSFP, Patrick Sullivan, Oki Mek, Richard Blech, Jordon Kestner, Richard Searle, Dr. Melvin Greer, Dan Sanders, Brian Ahier, Sean Manion, Peter Tittle, Rami Akeela, Ph.D., Stacey Ferris, CPA, CFE, Hassan Tetteh MD MBA FAMIA, Dr. Sindhu Bhaskar, Prof. Dr. Ingrid Vasiliu-Feltes, Joe Bormel, MD, MPH, Michael Marchant, Jim St. Clair, Patrick W., Emille Bryant, Christopher De Felippo, Zan Lowe-Skillern
-
The recording of our livestream event last week is now available: "DeSci: the Groundbreaking Potentials and Avoidable Pitfalls," featuring Rama Rao of Bloqcube Inc, my Co-Founder and Chief Scientific Officer at AI MINDSystems Foundation, Sean Manion, PhD, and me. The conversation went so quickly! This was an event in the monthly Healthcare and Life Sciences (HLS) Livestream series I'm hosting as part of my role as Chair of the Government Blockchain Association's HLS Working Group. Enjoy! Register for the series here --> https://lnkd.in/eWH-Y2hT <-- it takes place the 4th Wednesday of every month at 12pm EST.
Zoom link here: https://lnkd.in/e9PKbipj Join us on Wednesday, Feb 26th for a livestream discussion on Decentralized Science! Experts Rama Rao of Bloqcube Inc and Heather Leigh Flannery and Sean Manion of AI MINDSystems Foundation will take us through the ongoing transformation blockchain is bringing to the Life Sciences and Healthcare industry. #healthinnovation #DeSci #researchintegrity #datasovereignty #ethicalAI #GBAwebinarseries
Transforming Healthcare & Life Sciences with Blockchain: DeSci
-
Throwing AI and digital tools at a 75-year-old health system is like putting a jet engine on a donkey. It won’t be pretty! What we need is a completely new mindset followed by a first-principles rebuild of societal health. Prevention needs to be the foundation - not an afterthought, not a reactive band-aid. An estimated 30% of disease is preventable, yet we keep thinking of better ways to treat instead of better ways to prevent. We remain trapped in an economic model that rewards businesses that make society ill (processed food, nicotine, pollution, etc.) and then rewards treating illness. Looking at society from the outside, it could appear as if we actively want to promote illness. New tech alone can't fix old thinking.
-
The Future of Work & Healthcare: AI-Powered Delegation & Decision-Making Ethan Mollick, this is a fascinating glimpse into how AI is reshaping organizational structures, leadership, and decision-making. Your course, Leading an AI-Powered Future at Wharton, emphasized how AI is not just a tool but a collaborator—and this paper reinforces that idea in a game-changing way. The ability to send an AI delegate to meetings forces us to rethink how work gets done, how decisions are made, and what leadership truly means in an AI-driven world. This shift will have profound implications for healthcare, where AI-powered systems could:
- Optimize Medical Decision-Making – AI agents could attend interdisciplinary care meetings, synthesizing patient data and offering insights to improve treatment plans.
- Enhance Physician Efficiency – AI could manage administrative burden, enabling clinicians to focus more on patient care and human connection rather than documentation.
- Revolutionize Telemedicine & Consultations – AI delegates could summarize case histories, suggest best practices, and streamline collaboration across medical teams.
As we explored in the Wharton Executive Education course, organizations that fail to adapt will struggle to remain relevant. AI is no longer just augmenting tasks—it’s transforming the very fabric of how we work, communicate, and lead. Excited to see where this goes. #AILeadership #FutureOfWork #SmarterHealthcare #AIinHealthcare #Wharton #HealthcareInnovation #AITransformation
Organizational life is about to get much weirder. This paper creates an early form of meeting delegates, where you send an AI to a meeting on your behalf, and it uses your voice and knowledge to advance your goals. A lot of old organizational methods need to be rethought for AI, or they will lose their meaning and purpose as AI is slotted into place to automate them.
-
Manage Third-Party AI Risks Before They Become Your Problem
AI systems are rarely built in isolation; they rely on pre-trained models, third-party datasets, APIs, and open-source libraries. Each of these dependencies introduces risks: security vulnerabilities, regulatory liabilities, and bias issues that can cascade into business and compliance failures. You must move beyond blind trust in AI vendors and implement practical, enforceable supply chain security controls based on #ISO42001 (#AIMS).
Key Risks in the AI Supply Chain
AI supply chains introduce hidden vulnerabilities:
- Pre-trained models – Were they trained on biased, copyrighted, or harmful data?
- Third-party datasets – Are they legally obtained and free from bias?
- API-based AI services – Are they secure, explainable, and auditable?
- Open-source dependencies – Are there backdoors or adversarial risks?
A flawed vendor AI system could expose organizations to GDPR fines, AI Act nonconformity, security exploits, or biased decision-making lawsuits.
How to Secure Your AI Supply Chain
1. Vendor Due Diligence – Set Clear Requirements
- Require a model card – Vendors must document data sources, known biases, and model limitations.
- Use an AI risk assessment questionnaire – Evaluate vendors against ISO42001 & #ISO23894 risk criteria.
- Ensure regulatory compliance clauses in contracts – Include legal indemnities for compliance failures.
Why this works: Many vendors haven’t certified against ISO42001 yet, but structured risk assessments provide visibility into potential AI liabilities.
2. Continuous AI Supply Chain Monitoring – Track & Audit
- Use version-controlled model registries – Track model updates, dataset changes, and version history.
- Conduct quarterly vendor model audits – Monitor for bias drift, adversarial vulnerabilities, and performance degradation.
- Partner with AI security firms for adversarial testing – Identify risks before attackers do. (Gemma Galdon Clavell, PhD, Eticas.ai)
Why this works: AI models evolve over time, meaning risks must be continuously reassessed, not just evaluated at procurement.
3. Contractual Safeguards – Define Accountability
- Set AI performance SLAs – Establish measurable benchmarks for accuracy, fairness, and uptime.
- Mandate vendor incident response obligations – Ensure vendors are responsible for failures affecting your business.
- Require pre-deployment model risk assessments – Vendors must document model risks before integration.
Why this works: AI failures are inevitable. Clear contracts prevent blame-shifting and liability confusion.
Move from Idealism to Realism
AI supply chain risks won’t disappear, but they can be managed. The best approach?
- Risk awareness over blind trust
- Ongoing monitoring, not just one-time assessments
- Strong contracts to distribute liability, not absorb it
If you don’t control your AI supply chain risks, you’re inheriting someone else’s. Please don’t forget that.
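The version-controlled model registry recommended for continuous monitoring can be sketched in a few lines of Python. The field names below are illustrative assumptions, not a standard model-card schema, and the 90-day window simply mirrors the quarterly audit cadence mentioned above:

```python
# Hedged sketch: a minimal version-controlled model registry entry with an
# audit-staleness check. Schema and thresholds are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelCardEntry:
    name: str            # vendor model name
    version: str         # version string, so dataset/bias changes stay traceable
    data_sources: list   # documented training-data provenance
    known_biases: list   # biases the vendor has disclosed
    last_audit: str      # ISO date of the most recent vendor audit

# Keyed by (name, version): every update is a distinct, auditable entry.
registry: dict = {}

def register(entry: ModelCardEntry) -> None:
    registry[(entry.name, entry.version)] = entry

def audit_overdue(entry: ModelCardEntry, today: str, max_days: int = 90) -> bool:
    """Flag entries that fall outside the quarterly audit window."""
    elapsed = date.fromisoformat(today) - date.fromisoformat(entry.last_audit)
    return elapsed.days > max_days
```

Because each (name, version) pair is immutable once registered, a vendor silently swapping datasets or weights forces a new entry, which is exactly the visibility continuous monitoring needs.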