StackAware

Computer and Network Security

Bartlett, NH · 1,355 followers

Harness AI. Manage risk.

About us

StackAware helps organizations manage AI-related cybersecurity, privacy, and compliance risk.

Website
https://stackaware.com/
Industry
Computer and Network Security
Company size
1 employee
Headquarters
Bartlett, NH
Type
Privately held
Founded
2022
Specialties
Artificial Intelligence, Governance, Risk, and Compliance, Cybersecurity, and Risk Management

Locations

StackAware employees

Posts

  • StackAware reposted this

    View Walter Haydock's profile

    I help AI-powered companies get ISO 42001 certified to manage cybersecurity, compliance, and privacy risk so they can innovate responsibly | NIST AI RMF and EU AI Act expert | Harvard MBA | Marine veteran

    Welcome to the club*, Amazon Web Services (AWS)!

    These AI services are now ISO 42001 certified:
    -> Q (Business)
    -> Transcribe
    -> Bedrock
    -> Textract

    As the 800 lb. gorilla in the cloud, with the broadest customer base, this was going to become mandatory sooner or later. Good to see them getting proactive.

    There are still some key AI-related controls I recommend my clients implement, though, like opting out of training for Transcribe and Textract as well as for these non-42001-certified services:
    -> Chime SDK voice analytics
    -> Connect Optimization
    -> Connect Contact Lens
    -> CodeGuru Profiler
    -> Q (non-Business)
    -> Entity Resolution
    -> CodeWhisperer
    -> Fraud Detector
    -> Security Lake
    -> Comprehend
    -> CloudWatch
    -> Rekognition
    -> QuickSight
    -> GuardDuty
    -> DataZone
    -> Translate
    -> Connect
    -> Polly
    -> Glue
    -> Lex

    * Non-humble brag that StackAware got ISO 42001 certified in October, also by Schellman.

    AWS achieves ISO/IEC 42001:2023 Artificial Intelligence Management System accredited certification | Amazon Web Services

    aws.amazon.com

  • StackAware reposted this

    View Walter Haydock's profile

    Who is staying on top of your AI risk?

    If the answer is "no one," StackAware can help. Our 30-day assessment, using the NIST AI Risk Management Framework (and any of your existing compliance standards), will:
    -> Highlight shadow AI use
    -> Surface vendor AI training
    -> Pinpoint policy/procedure gaps
    -> Give you actionable recommendations
    -> Get you hard data to make the case for governance

    Want to learn more? DM me "ASSESS" and we'll work out the details.

  • StackAware reposted this

    View Walter Haydock's profile

    A GRC leader at a $5B-revenue global fintech company asked me this about AI governance frameworks:

    "Do we start with the EU AI Act first or do we do all three [AI Act, ISO/IEC 42001, and NIST AI RMF] together?"

    Here's how I think of each:

    1. EU AI Act

    Adopted in 2024, the European Union (EU) AI Act forbids:
    -> Inference of non-obvious traits from biometrics
    -> Real-time biometric identification in public
    -> Criminal profiling not based on criminal behavior
    -> Purposefully manipulative or deceptive techniques
    -> Inferring emotions in school/workplace
    -> Blanket facial image collection
    -> Social scoring

    It heavily regulates AI systems:
    -> Intended to be used as safety components; and
    -> Underlying products already EU-regulated
    as well as those involved in:
    -> Criminal behavior risk assessment
    -> Education admissions/decisions
    -> Job recruitment/advertisement
    -> Exam cheating identification
    -> Public benefit decisions
    -> Emergency call routing
    -> Migration and asylum
    -> Election management
    -> Critical infrastructure
    -> Health/life insurance
    -> Law enforcement

    Fines can be up to 35,000,000 Euros or 7% of worldwide annual revenue, so ignoring the EU AI Act's requirements can be costly.

    It's mandatory for anyone qualifying (according to the AI Act) as a:
    -> Provider
    -> Deployer
    -> Importer
    -> Distributor
    -> Product Manufacturer
    -> Authorized Representative

    2. ISO/IEC 42001:2023

    Published by the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) in December 2023, ISO 42001 requires building an AI management system (AIMS) to measure and treat risks to:
    -> Safety
    -> Privacy
    -> Security
    -> Health and welfare
    -> Societal disruption
    -> Environmental impact

    An external auditor can certify this. Also, compliance with a "harmonised standard" of the EU AI Act, which ISO 42001 may become, gives you a presumption of conformity with some AI Act provisions.

    But ISO 42001 is not a silver bullet. A U.S.-based company offering facial recognition for public places could be ISO 42001 certified but banned from operating in the EU. In any case, it's one of the few ways a third party can bless your AI governance program.

    It's best for:
    -> AI-powered B2B startups
    -> Companies training on customer data
    -> Heavily regulated enterprises (healthcare/finance)

    3. NIST AI RMF

    The National Institute of Standards and Technology (NIST) Artificial Intelligence (AI) Risk Management Framework (RMF) launched in January 2023. ISO 42001 also names it as a reference document.

    The AI RMF has four functions:
    -> Map
    -> Measure
    -> Manage
    -> Govern

    These lay out best practices at a high level. But like all NIST standards, there is no way to be "certified." Still, because of NIST's credibility and the fact that it was the first major AI framework published, using the AI RMF is a good way for any company to build trust.

    BOTTOM LINE

    Stack AI frameworks to meet:
    -> Regulatory requirements
    -> Customer demands
    -> Risk profile

    How are you doing it?

  • StackAware reposted this

    View Walter Haydock's profile

    Are you managing AI risk throughout the entire lifecycle? Have you even defined it?

    If not, I put together a detailed guide with considerations for every step of the process, including:
    -> Inception
    -> Design and Development
    -> Verification and Validation
    -> Deployment
    -> Operation and Monitoring
    -> Continuous Validation and Re-evaluation
    -> Retirement

    Check it out on Deploy Securely:

    Risk management throughout the AI lifecycle

    blog.stackaware.com

  • StackAware reposted this

    View Walter Haydock's profile

    Need to quantify (AI) risk?

    Check out this video where I walk through StackAware's risk assessment, impact, and treatment register. It meets the requirements of standards like:
    -> ISO 42001
    -> ISO 27001
    -> SOC 2
    and many others.

    P.S. I'm giving away FREE editable versions to the first 5 people who DM me "REGISTER" by the end of the day.
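    The kind of risk register described above can be sketched as a simple likelihood × impact table. This is a hypothetical illustration of the general technique, not StackAware's actual template; the risk names, scales, and treatment labels are made up for the example:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Risk:
        """One row of a qualitative risk register, scored on 1-5 scales."""
        name: str
        likelihood: int  # 1 (rare) .. 5 (almost certain)
        impact: int      # 1 (negligible) .. 5 (severe)
        treatment: str   # e.g. "mitigate", "accept", "transfer", "avoid"

        @property
        def score(self) -> int:
            # Classic qualitative scoring: likelihood x impact
            return self.likelihood * self.impact

    # Example entries (illustrative only)
    register = [
        Risk("Shadow AI use by employees", likelihood=4, impact=3, treatment="mitigate"),
        Risk("Vendor trains models on our data", likelihood=3, impact=4, treatment="transfer"),
        Risk("LLM prompt injection in product", likelihood=2, impact=5, treatment="mitigate"),
    ]

    # Triage: highest-scoring risks first
    for risk in sorted(register, key=lambda r: r.score, reverse=True):
        print(f"{risk.score:>2}  {risk.name} ({risk.treatment})")
    ```

    A real register adds owners, treatment due dates, and residual-risk scores after treatment, but the likelihood × impact core is what audit standards look for.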

  • StackAware reposted this

    View Walter Haydock's profile

    Great speaking with Chris Hughes on the Resilient Cyber podcast about AI governance. We covered:
    -> Emerging frameworks (ISO 42001 and HITRUST AI)
    -> Regulatory best (and worst) practices
    -> StackAware's pivot and future

    Check out the discussion:

    Resilient Cyber w/ Walter Haydock - Implementing AI Governance

    resilientcyber.io

  • StackAware reposted this

    View Walter Haydock's profile

    When people talk about "compliance," they are likely talking about one of three things:

    1. Voluntary standards
    2. Certifiable frameworks
    3. Mandatory legal and regulatory requirements

    In this clip I break down the difference between the three. What do you think about this "framework for frameworks"?

  • StackAware reposted this

    View Walter Haydock's profile

    "It would be a business differentiator for us."

    How a health-tech security leader described ISO 42001.

    With sophisticated business customers focused on:
    -> Data privacy
    -> Responsible AI
    -> Regulatory compliance
    external auditing of your AI governance program goes from "nice to have" to "must do."

    ISO 42001 is the best option to get it done. And StackAware's AI Management System (AIMS) Accelerator gets you certification-ready in 90 days.

    Want to learn more? DM me "AIMS" and we can chat about details.

  • StackAware reposted this

    View Walter Haydock's profile

    3 reasons why you should keep kicking the can on AI security and governance:

    1. You are waiting until next quarter, year, etc.

    If you want engineering and product teams to deploy AI-powered products without worrying about security, compliance, or privacy, this makes sense.

    But what are you going to do after they have already:
    -> Integrated open source AI libraries into their code?
    -> Been using ChatGPT every day for two years?
    -> Connected workflows to SaaS-hosted LLMs?

    If you think you are going to be able to apply guardrails after all of this is already done, think again. Security teams often get left out of discussions about new product and feature development.

    So the time is now (frankly, yesterday) to start talking about how you are going to do these things while using AI:
    -> Meet regulatory obligations
    -> Classify and govern data
    -> Vet vendors

    2. You don't have dedicated budget

    Security exists to enable the business. So you may not have set-aside funds to build out AI governance.

    With that said, it might make sense to ask your:
    -> Engineering
    -> Product
    -> Finance
    colleagues whether your company has budget to:
    -> Lose deals to competitors because you can't explain your AI security posture to prospects (while also blasting out blog posts about how you use AI)?
    -> Address lost competitive advantage from employees training SaaS generative AI models on intellectual property?
    -> Jump through your a** before your next audit to document how you are meeting compliance requirements while using AI?

    Money is money. But sometimes spending it earlier - and in the right place - can save you a lot more later down the road.

    3. You are trying to hire the right person to manage it

    Having an employee run your AI governance program might make sense under certain conditions. For example, if you:
    -> Can monitor the constantly-changing landscape.
    -> Have or can get sufficient in-house expertise.
    -> Want to deal with all of these things.

    But while you may have complete control if employees build from scratch, also consider the opportunity costs:
    -> Heavy initial investment paying "ignorance debt."
    -> Negotiation with overlapping internal stakeholders.
    -> Not leveraging best practices developed elsewhere.

    For these reasons, it often makes sense to leverage a specialist.

    Do you think any of these are good reasons to kick the can on AI governance?

  • StackAware reposted this

    View Walter Haydock's profile

    ISO 42001 has concrete benefits, like giving you safe harbor under certain provisions of Colorado's SB 205. With that said, the biggest advantage is helping you build a SYSTEM to stay compliant with changing regulations.

    In this clip I talk about both of these advantages of certification.

    ---

    Need more AI governance tips? Go to my profile (Walter Haydock) and ring my bell!

Similar pages

View jobs