StackAware

Computer and Network Security

Bartlett, NH · 1,480 followers

Harness AI. Manage risk.

About us

StackAware helps AI-powered companies measure and manage cybersecurity, privacy, and compliance risk.

Website
https://stackaware.com/
Industry
Computer and Network Security
Company size
2-10 employees
Headquarters
Bartlett, NH
Type
Privately held
Founded
2022
Specialties
Artificial Intelligence; Governance, Risk, and Compliance; Cybersecurity; and Risk Management

Updates

  • StackAware reposted this

    View Walter Haydock's profile

    I help AI-powered healthcare companies manage cyber, compliance, and privacy risk so they can innovate responsibly | ISO 42001, NIST AI RMF, HITRUST AI security, and EU AI Act expert | Harvard MBA | Marine veteran

    Everyone talks about AI "bias," but usually stops there.

    A 3-level framework for business leaders to manage risk:

    Level 1 - Reputation

    The most restrictive "standard," if there even is one. This addresses public (and customer) opinion, usually driven by activist groups and media reporting.

    Risks at this level can materialize in the form of:
    -> Image models showing non-whites as Nazi soldiers
    -> Legal health insurance denials causing bad press
    -> Chatbots saying offensive things (incited or not)

    These aren't crimes or contractual violations, but somewhere between 1 and 8 billion people have a problem with them.

    Because these requirements aren't written down, you'll need to read the societal "vibes." And also understand your risk appetite for angering people. Good luck offending no one while deploying AI.

    Deploy guardrails to stay below this appetite.

    Level 2 - Law

    The minimum acceptable level of compliance for any AI-powered business is the letter of the law. Examples include:
    -> Colorado SB-205 banning algorithmic discrimination
    -> NYC LL 144-21 forcing employment tool bias audits
    -> EU AI Act mandating data governance measures

    The good news here is that these laws require a concrete set of steps. Take them AND document what you have done.

    The bad news is that what any of these laws mean usually only becomes clear from enforcement action. Be conservative early on to avoid one, then learn from the misfortunes of others.

    Detailed documentation of your AI governance program will help if you have the bad luck to be the test case. This can take the form of your:

    Level 3 - AI Management System (AIMS)

    If you implement ISO/IEC 42001:2023, there are few hard requirements about what you must do in terms of bias. It's the most flexible level of the framework. But you must have a coherent SYSTEM for addressing it.

    Things that might be okay (or not) under your AIMS could be rules (dis)allowing AI systems to:
    -> Make it harder to apply for a job from certain states
    -> Accept blocking some legit credit transactions
    -> Charge personal training rates based on BMI*

    * Not in WA, MI, San Francisco, or other jurisdictions where body weight is a protected characteristic.

    You'll need to answer to your auditors here, as well as customers whose contractual restrictions you accept.

    And no. None of this is legal advice. I'm not a lawyer.

    TL;DR - managing bias risk is complex and applies at:

    Level 1 - Reputation and public opinion
    Level 2 - Law and regulation
    Level 3 - AIMS

    How are you managing bias-related business risk?

  • StackAware reposted this

    View Walter Haydock's profile


    3 signs from your customer messaging that you badly need an AI governance program:

    -> You claim to only process publicly available information when that's not true (my email isn't public).

    -> Your personal data retention statement is blatantly incorrect (they must store my email, which is personal data under GDPR, considering they sent me a message).

    -> You say "most of the AI data generation is synthetic" but probably aren't using synthetic data for training.

    BONUS: You appeal to the recipient's "kind, big heart."

  • StackAware reposted this

    View Walter Haydock's profile


    Does ISO 42001 stand alone or require ISO 27001 first?

    Short answer: 42001 stands alone.

    Long answer: If you already have an Information Security Management System (ISMS) in place, building your Artificial Intelligence Management System (AIMS) will be easier. But it's not required.

    And here are 3 key mistakes to avoid:

    1. Using different risk criteria for each standard

    ISO 42001 and 27001 both require identifying acceptable and unacceptable risks. It's tempting to create different standards for each. I wouldn't.

    Instead, use this as an opportunity to kick off your quantitative risk management program. Establish a risk appetite in $; measure risks against that (see the sketch after this post).

    You can also create qualitative risks for each standard that are also unacceptable, like:
    -> Knowingly violating laws, regulations, or contracts
    -> Deploying AI that reduces human lifespan (net)
    -> Training AI on pirated data

    2. Creating "AI-specific" policies and procedures

    AI risk doesn't live in a vacuum. It overlaps with cyber risk. So don't create separate approaches for:
    -> Measuring
    -> Assessing
    -> Treating
    risk.

    The only possible exception is the AI impact assessment procedure. This requires looking outside the organization much more than ISO 27001 does.

    3. Using generic risks instead of system-specific ones

    Based on the wording of ISO 27001, it's acceptable to analyze risks to the entire ISMS, like:
    -> Outages
    -> Insider threats
    -> Software vulnerabilities

    This generic approach doesn't actually help manage risk, but it might get you certified under 27001.

    42001 is different. It requires risk assessments for specific systems. So you can't just say "hallucination" is a risk. Tie it to a given AI system.

    TL;DR - ISO 27001 isn't required for ISO 42001 certification, but avoid these 3 mistakes:

    1. Creating different risk criteria for each
    2. Building AI-specific procedures
    3. Analyzing generic risks

    Are you building your AIMS on top of your ISMS?
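
    To make the shared, dollar-denominated risk criteria concrete, here is a minimal Python sketch of a combined ISMS/AIMS risk register; the risk names, probabilities, and appetite figure are all hypothetical:

        # One register, one appetite: every risk, cyber or AI, is expressed
        # as an annualized expected loss and compared against a single
        # dollar threshold. All figures below are hypothetical.
        RISK_APPETITE_USD = 250_000  # max acceptable expected annual loss

        risks = [
            # (description, annual probability, impact in USD)
            ("Customer data breach via web app", 0.05, 2_000_000),
            ("Chatbot gives harmful medical guidance", 0.10, 4_000_000),
            ("Model trained on pirated data", 0.02, 1_500_000),
        ]

        for description, probability, impact_usd in risks:
            expected_loss = probability * impact_usd
            verdict = "acceptable" if expected_loss <= RISK_APPETITE_USD else "treat"
            print(f"{description}: ${expected_loss:,.0f}/yr -> {verdict}")

    The point is structural: one set of risk criteria serves both management systems, rather than a separate qualitative scale per standard.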

  • StackAware reposted this

    View Walter Haydock's profile


    Do you make generative AI available to Californians? Do you comply with AB-2013?

    If not, you have less than a year to get there.

    The law applies:
    -> to GenAI systems released on/after January 1, 2022
    -> that are made publicly available to Californians
    -> beginning January 1, 2026

    AB-2013 defines GenAI as that which:

    “can generate derived synthetic content, such as text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence’s training data.”

    And no, the law doesn't:
    -> Exclude companies based outside the state
    -> Exempt open source
    -> Define "Californians"

    What AB-2013 does do is require developers to publicly post the following about GenAI system training data:
    -> sources/owners of data
    -> whether data sets are purchased/licensed
    -> whether synthetic data was used for training
    -> whether it was cleaned, processed, or modified
    -> how data sets relate to the purpose of the GenAI system
    -> dates each data set was first used in development
    -> number of data points (can be estimated if dynamic)
    -> time period of collection (noting ongoing collection)
    -> intellectual property protections for the training data
    -> whether it has personal (incl. aggregated) information

    The law does not apply to GenAI systems:
    -> used only for cybersecurity or physical safety
    -> solely used for aircraft operation in U.S. airspace
    -> only used for federal security, military, or defense

    And it basically requires three ISO 42001 controls:

    A.4.3 - data resources

    This requires noting data:
    -> provenance
    -> intended use
    -> preparation techniques

    A.7.3 - acquisition of data

    For this control, you must document:
    -> data rights
    -> quantity of data
    -> characteristics of data

    A.7.6 - data preparation

    This one focuses on techniques such as data:
    -> cleaning
    -> normalization
    -> labeling and encoding

    TL;DR - especially when combined with other AI governance legislation, AB-2013 makes ISO 42001 compliance basically mandatory.

    Need help getting certified? StackAware helps AI-powered companies in healthcare and B2B SaaS do just that.

    DM me "CALI" to discuss the details.
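
    Since the disclosure items above form a fixed list, they lend themselves to one structured record per data set. A minimal Python sketch (field names are my own shorthand, not statutory language):

        # One disclosure record per training data set, covering the
        # AB-2013 documentation points listed above. Field names are
        # illustrative shorthand, not statutory language.
        from dataclasses import dataclass

        @dataclass
        class TrainingDataDisclosure:
            sources_or_owners: list[str]
            purchased_or_licensed: bool
            synthetic_data_used: bool
            cleaned_processed_or_modified: bool
            relation_to_system_purpose: str
            first_used_in_development: str   # e.g. "2023-06"
            datapoint_count_estimate: int    # may be estimated if dynamic
            collection_period: str           # note any ongoing collection
            ip_protections: list[str]
            contains_personal_information: bool  # incl. aggregated info

        # Hypothetical example entry:
        example = TrainingDataDisclosure(
            sources_or_owners=["Example web crawl"],
            purchased_or_licensed=False,
            synthetic_data_used=False,
            cleaned_processed_or_modified=True,
            relation_to_system_purpose="General language modeling",
            first_used_in_development="2023-06",
            datapoint_count_estimate=1_000_000_000,
            collection_period="2019-present (ongoing)",
            ip_protections=["copyright notices preserved"],
            contains_personal_information=True,
        )

    Publishing one such record per data set would also cover much of the documentation expected by ISO 42001 controls A.4.3, A.7.3, and A.7.6 mentioned above.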

  • StackAware reposted this

    View Walter Haydock's profile


    AI doesn’t exist without data.

    So AI governance and security require:
    -> tracking
    -> classifying
    -> organizing
    it effectively.

    The HITRUST AI Security Certification gives concrete requirements here:

    1. Baseline Unique IDs (BUID) 07.07aAISecOrganizational.4-5

    For data sources used:
    -> to train, fine-tune, test, and validate AI models
    -> in retrieval-augmented generation (RAG)

    Organizations must:
    -> Maintain a catalog of trusted data sources
    -> Inventory data used, including at least:
    -- Provenance
    -- Sensitivity

    2. BUID 06.10hAISecSystem.7

    When building machine learning models, document linkage between versions of the:
    -> Pipeline configuration
    -> Training dataset
    -> Resulting model

    3. BUID 17.03bAISecOrganizational.3

    Evaluate/document the need to take additional measures with AI training data, like:
    -> adversarial training
    -> randomized smoothing
    to ensure AI models are resistant to evasion and poisoning.

    4. BUID 19.06cAISecOrganizational.1

    Evaluate/document compliance with constraints on data, such as:
    -> self-imposed data governance requirements
    -> copyrights/commercial interests
    -> applicable laws/regulations
    -> contractual obligations

    Bottom line: AI governance starts with your data.

    Need a free register to track these requirements? DM me "DATA" and I'll send it over.

    (No consultants or GRC product teams.)
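
    As a rough illustration of the catalog and linkage requirements above, here is a minimal Python sketch; the structure and field names are my own, not HITRUST's:

        # A data source inventory entry (provenance + sensitivity) and the
        # pipeline/dataset/model version linkage described above.
        # Structure and field names are illustrative, not HITRUST's.
        from dataclasses import dataclass

        @dataclass
        class DataSource:
            name: str
            use: str           # "training", "fine-tuning", "testing", "RAG", ...
            provenance: str    # where the data came from
            sensitivity: str   # e.g. "public", "internal", "PHI"
            trusted: bool      # appears in the trusted-source catalog?

        @dataclass
        class ModelLineage:
            pipeline_config_version: str
            training_dataset_version: str
            model_version: str

        # Hypothetical entries:
        source = DataSource(
            name="clinical-notes-2024",
            use="fine-tuning",
            provenance="EHR export, de-identified in-house",
            sensitivity="PHI (de-identified)",
            trusted=True,
        )
        lineage = ModelLineage(
            pipeline_config_version="pipeline-v1.4",
            training_dataset_version="clinical-notes-2024@r2",
            model_version="triage-model-2.0",
        )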

  • StackAware reposted this

    View Walter Haydock's profile


    "What business are you in?" A wise man (Stephen D.) asked me yesterday. "Trust," I answered. This came up while discussing mapping governance and security strategy to business objectives. Steve brought up the example of Kodak, once a titan in the film business, but now a shadow of its former self. Despite inventing the first digital camera, Kodak wanted to avoid cannibalizing its original, film-based business. "They thought their business was film, but it was really memories," he observed. And then he turned the question to me. I didn't hesitate. While StackAware started as a vulnerability management software tool and is now an AI governance services company, the medium isn't important. What matters is that we earn - and keep - client trust. As AI reshapes the world, our delivery model will evolve. But trust will always be core to everything we do. What business are you really in?

  • StackAware reposted this

    View Walter Haydock's profile


    As a Marine Corps recon officer, I was accountable for everything my platoon did or failed to do.

    If:
    -> Private X got drunk
    -> Corporal Y lost a laser optic
    -> Sergeant Z didn't prepare properly for a mission

    it was ultimately my responsibility.

    (And guess what - these things all happened.)

    A heavy burden, but one that makes sure those:
    -> Who are properly incentivized
    -> With the right level of authority, and
    -> With overall responsibility for mission accomplishment

    are the ones making decisions (as it should be).

    Is your security program operating the same way?

    Or are you punishing privates (individual contributors) instead of officers (executives)?

  • StackAware reposted this

    View Walter Haydock's profile


    AI risk = lost $.

    That's what savvy business leaders like Embold Health CFO Jurgen Degros, CTP (a StackAware client) understand.

    Check out this clip for a peek into how CFOs should think about the challenges - and opportunities - presented by AI.

  • StackAware reposted this

    View Noah G. Susskind's profile

    Head of AI & Cybersecurity - General Counsel @StackAware | JD CISSP CIPP | Helping companies get ISO 42001 certifications to manage AI, cyber, and privacy

    Experts predict Artificial General Intelligence will arrive within the next few years. They include people without a profit motive to overhype, people whose job is to be right about what’s around the corner.

    AGI means AI “capable of doing basically anything a human being could do behind a computer — but better.”

    It’ll shake and reshape humanity. I don’t pretend to have all the answers to questions about how to steer that ship. But I knew that joining Walter Haydock at StackAware, specifically to help organizations think about their AI governance, was a no-regrets move.

    I will never forget driving through a wintery forest to visit Walter, listening to the Ezra Klein podcast that Ethan Mollick references below:

    “I think we are on the cusp of an era in human history that is unlike any of the eras we have experienced before. And we’re not prepared in part because it’s not clear what it would mean to prepare. We don’t know what this will look like, what it will feel like. We don’t know how labor markets will respond. We don’t know which country is going to get there first. We don’t know what it will mean for war. We don’t know what it will mean for peace.”

    What I do know is that I am legitimately excited to try to figure it out with all of you.

    View Ethan Mollick's profile
    Ethan Mollick · LinkedIn Influencer

    “I believe now is the right time to start preparing for AGI”

    The same warnings are now appearing with increasing frequency from smart outside observers of the AI industry who do not benefit from hyping what it can do, like Kevin Roose (below) & Ezra Klein. I think ignoring the possibility they are right is a real mistake.

  • StackAware reposted this

    View Walter Haydock's profile


    Do you do business in Colorado? Using high-risk AI for healthcare or finance?

    Well, you've got less than a year to comply with SB-205!

    In 2024 Colorado passed Senate Bill 205, a comprehensive AI governance law. It requires a laundry list of things including:
    -> Building an AI risk management program
    -> Summarizing data governance controls
    -> Algorithmic discrimination disclosures

    If you are building this from scratch, it'll be a huge pain.

    The good news? An affirmative defense to certain regulatory actions under the law is being ISO 42001 compliant. And StackAware has already guided 2 companies to certification under this standard.

    So if you are:
    -> A security, compliance, or technology leader
    -> At an AI-powered company operating in CO
    -> In healthcare or finance

    DM me "ROCKIES" to chat about getting ISO 42001 ready in 90 days to avoid fines and build customer trust.
