Eticas.ai

Software Development

New York, NY · 4,769 followers

AI Audit Solutions.

About us

Eticas.ai is a SaaS company specializing in AI auditing, bias monitoring, and certification, with over a decade of experience helping EU and US organizations optimize their AI processes, leading to better, safer, and fairer business decisions. Our flagship platform, ITACA, automates bias detection and monitoring in predictive AI models, ensuring compliance with key regulatory frameworks. We audit Generative AI, Predictive AI, and Biometric AI, addressing critical issues such as bias, hallucinations, misinformation, data provenance, and environmental impact. Our services include independent audits and B2B SaaS solutions, providing organizations with comprehensive oversight and tools to ensure their AI systems are equitable, transparent, and sustainable. Led by founder and CEO Dr. Gemma Galdon-Clavell, Eticas.ai collaborates with governments, institutions, and businesses to promote responsible AI practices and has contributed to major policy frameworks like the EU AI Act. Turn AI risks into trust, automatically. Learn more: https://eticas.ai/

Website
https://www.eticas.ai
Industry
Software Development
Company size
11-50 employees
Headquarters
New York, NY
Type
Privately held
Founded
2012
Specialties
Impact Assessment, Cybersecurity, Ambient-Assisted Living (AAL), Privacy Technologies, Economics of Surveillance, AI Ethics, Algorithmic Audits, SaaS, AI, and AI Governance

Locations

Eticas.ai employees

Posts


Recent developments, including the UK's decision to delay its AI Safety Bill and the proliferation of AI legislation at the U.S. state level, underscore that AI regulation is no longer speculative. It is an immediate concern for businesses operating in this space.

Here's a summary of the most recent regulatory shifts:

1. UK Delays AI Safety Bill Amid Alignment Efforts with the U.S.
The UK government has postponed the AI Safety Bill, which mandates that tech companies provide their AI models for regulatory testing. This delay is reportedly an effort to synchronize policies with the U.S. administration, which favors a less restrictive approach to AI regulation. Critics, including Labour's Chi Onwurah, urge the government to address AI-related concerns promptly to safeguard public interests.
Read more: https://lnkd.in/diVafAi2

2. U.S. Federal and State Dynamics in AI Governance
In the United States, the federal government has adopted a more relaxed stance on AI regulation. Concurrently, individual states are introducing their own policies to ensure the safe deployment of AI technologies. For instance, California is exploring new regulations to manage AI applications within the state.
Read more: https://lnkd.in/gM7twEF5

3. Industry Pushback on AI Copyright Proposals
Major tech companies, including Google and OpenAI, are advocating for the ability to train AI models on copyrighted content without compensating creators. This proposal has sparked significant backlash from publishers, artists, and other content creators, who argue that such measures could undermine the creative industries.
Read more: https://lnkd.in/e_74BagP

4. Calls for U.S.-China Cooperation in AI
Amid escalating technological competition, leaders from both the U.S. and China emphasize the need for collaboration in AI development. Stephen Orlins, president of the National Committee on U.S.-China Relations, highlighted that joint efforts could prevent redundant initiatives and promote global AI advancements.
Read more: https://lnkd.in/eXJFCRCJ

5. Advocacy for Transparent AI Use
State Representative Hubert Delany underscores the importance of regulating AI to ensure ethical usage. He supports legislation that mandates transparency, requiring organizations to inform individuals when AI is utilized in significant decision-making processes.
Read more: https://lnkd.in/em_RzVrP


Automated AI evaluations vs. one-time reviews: what's more effective?

Many companies run a one-time audit when deploying AI, then assume the job is done. But as AI systems evolve, so do the risks they pose. So, what's more effective: automated audits or one-and-done reviews?

One-time reviews:
– Useful for initial compliance checks or product launches
– Risk missing emerging issues like model drift, new biases, or updated regulations
– Don't account for real-world changes in data, usage, or context

Automated, ongoing audits:
– Continuously monitor for bias, misinformation, and security risks
– Catch issues early, before they escalate into real-world problems
– Help businesses stay aligned with evolving regulations (like the EU AI Act and NYC Local Law 144)
– Build trust by ensuring your AI remains fair, accurate, and reliable over time

AI systems don't stand still, and neither should your audits. Ongoing, automated evaluation is key to managing risk and ensuring your AI consistently delivers safe, compliant, and trustworthy outcomes.

Still relying on one-time checks? Let's talk about a smarter way to audit AI.

#AIaudit #AutomatedAudits #AISafety #ResponsibleAI #Compliance #FairAI #AIethics #TechForBusiness
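As an illustration of what "ongoing" can mean in practice, here is a minimal drift check comparing recent model scores against a deployment-time baseline. This is a generic sketch, not Eticas.ai's ITACA platform; the data and the 0.2 alert threshold are invented for the example (0.2 is a common rule of thumb for the Population Stability Index, but teams set their own cutoffs).

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a
    recent sample of model scores. Larger values mean the score
    distribution has shifted; PSI > 0.2 is a common drift flag."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index
        # Laplace smoothing so empty bins don't blow up the log
        return [(c + 1) / (len(sample) + bins) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Baseline scores vs. a recent window whose scores have drifted upward
baseline = [0.1, 0.2, 0.25, 0.3, 0.4, 0.45, 0.5, 0.6, 0.7, 0.8]
recent = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]

value = psi(baseline, recent)
print(f"PSI = {value:.2f}" + ("  <- drift alert" if value > 0.2 else ""))
```

A scheduled job running a check like this on each new window of production scores is the simplest version of the continuous monitoring described above; a real audit would track fairness metrics per group, not just the overall score distribution.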


5 signs your AI model needs an audit

AI is a powerful tool, but without proper oversight it can expose your business to serious risks. Wondering if your system needs a check-up? Here are 5 signs it's time to audit your AI:

1. You operate in high-stakes areas
If your AI is used in hiring, lending, healthcare, or law enforcement, even small errors can have major consequences. These systems demand regular evaluation to ensure fairness, accuracy, and accountability.

2. You're subject to regulations like NYC Local Law 144
In NYC, Local Law 144 requires bias audits for AI used in hiring. If your AI supports employment decisions, compliance isn't optional: an audit ensures you meet legal requirements and avoid penalties.

3. Outputs don't add up
Seeing unexpected or inconsistent results? If AI decisions are impacting people or profits and you're unsure why, it's time to dig deeper.

4. You can't explain the results
If you can't explain why your AI made a decision, neither can regulators or affected users. Lack of explainability puts trust and compliance at risk.

5. Security risks are growing
AI can be vulnerable to prompt injection attacks, data leaks, and manipulation. If security is on your radar, an audit can reveal hidden vulnerabilities.

AI offers incredible potential, but without continuous evaluation it can expose organizations to real risks, especially in sensitive sectors like hiring, finance, and healthcare. We offer an automated AI audit solution that helps businesses stay ahead of bias, misinformation, and security vulnerabilities. With regular monitoring, you can spot issues early and ensure your AI systems are fair, secure, and compliant.

Have questions? Let's connect: [email protected]
Explore more: https://eticas.ai/

#AIaudit #ResponsibleAI #LocalLaw144 #AISafety #AICompliance #FairAI #AIethics #TechForBusiness
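The Local Law 144 bias audits mentioned above center on impact ratios: each group's selection rate divided by the selection rate of the most-selected group. A minimal sketch of that calculation follows; the group names and counts are hypothetical, and the 0.8 review flag echoes the EEOC "four-fifths" rule of thumb rather than a threshold set by the law itself.

```python
def impact_ratios(selections):
    """Selection rate of each group divided by the rate of the
    most-selected group (the impact ratio reported in NYC Local
    Law 144 bias audits). selections maps group name to a
    (selected, total_applicants) pair."""
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical hiring-tool outcomes: (selected, applicants) per group
outcomes = {"group_a": (40, 100), "group_b": (22, 100)}

for group, ratio in impact_ratios(outcomes).items():
    flag = "  <- review" if ratio < 0.8 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```

In a real audit the same ratios are computed per protected category and per intersectional subgroup, from the tool's actual production data.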


Terms like AI agents and agentic AI are popping up everywhere, but what do they actually mean, and are they the same? Let's break it down.

AI agents. These are systems designed to autonomously perform tasks to achieve a specific goal. They can perceive input, make decisions, and act. Think of them as digital assistants that follow instructions and execute actions without constant human oversight.
Examples:
– A chatbot that handles customer queries
– An AI that schedules meetings or sends reminders
– A robot that navigates a warehouse

Agentic AI. This goes a step further. It refers to AI systems with a higher degree of autonomy and initiative. Agentic AI doesn't just complete tasks: it can set sub-goals, plan, adapt, and act proactively in dynamic environments.
Key traits:
– Goal-directed behavior
– Adaptability to new situations
– Initiative without constant input

In short: all agentic AI systems are AI agents, but not all AI agents are agentic. It's about the level of autonomy, adaptability, and initiative the system has.

As AI agents evolve to become more agentic, businesses need to think about oversight, accountability, and risk management. More autonomy means more potential, and more responsibility.

Is your organization exploring AI agents or agentic AI? Let's discuss the opportunities and challenges.

#AI #AIagents #AgenticAI #ResponsibleAI #FutureOfWork #AIethics #Automation #TechForBusiness
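The distinction can be made concrete with a toy contrast: a plain agent executes the one task it is handed, while a more agentic system decomposes a goal into sub-goals and adapts to the current state of its environment. Everything here (the task names, the hard-coded plan) is invented for illustration; real agent frameworks with tool use and learned planners are far richer.

```python
def simple_agent(task, environment):
    """A plain agent: looks up and executes the single instruction
    it was given. No planning, no adaptation."""
    return environment.get(task, "task not understood")

def agentic_agent(goal, environment):
    """A crudely 'agentic' agent: decomposes a goal into sub-goals,
    executes each in order, and skips steps the environment shows
    are already satisfied (a minimal form of adaptation)."""
    plan = {"ship report": ["gather data", "draft report", "send email"]}
    log = []
    for step in plan.get(goal, []):
        if environment.get(step) == "done":
            log.append(f"skip {step} (already done)")
        else:
            environment[step] = "done"
            log.append(f"do {step}")
    return log

env = {"gather data": "done", "send email": "pending"}
print(simple_agent("send email", env))    # acts on exactly one task
print(agentic_agent("ship report", env))  # plans sub-goals and adapts
```

Even in this toy, the oversight question is visible: the agentic version decides for itself which steps to run, which is exactly the behavior that calls for accountability and risk management as autonomy grows.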



Explainability vs. transparency in AI: what's the difference?

As AI takes on more decision-making roles in hiring, finance, healthcare, and beyond, two concepts are often mentioned but not always clearly understood: explainability and transparency. Here's why both matter and why they're not the same.

Transparency. This is about what's inside the system. It refers to access to information about how an AI system is built and how it operates:
– What data was used for training?
– What algorithms are at work?
– Who developed it, and under what guidelines?
Transparency helps us understand the process behind the AI, but it doesn't always tell us why a specific decision was made.

Explainability. This focuses on why the AI made a specific decision. Can the system provide a clear, understandable reason for its output?
– Why was one candidate selected over another?
– Why was a loan application rejected?
Explainability is essential for building trust, especially when AI impacts people's lives. It allows users and regulators to challenge decisions and ensures accountability.

In short: transparency means we can look under the hood; explainability means we understand the outcome.

For businesses, having both is crucial: transparency ensures compliance, while explainability builds trust with users and stakeholders.

How does your organization handle AI explainability and transparency? Let's discuss.

#AI #AIethics #ResponsibleAI #ExplainableAI #Transparency #AIaudit #AIaccountability #TechForGood
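A toy linear scorer makes the distinction tangible: publishing the weights and training data is transparency, while turning them into a per-decision answer to "why was this application rejected?" is explainability. The weights, feature names, and threshold below are invented for illustration; real credit models are far more complex, which is precisely why explainability gets hard.

```python
# Hypothetical linear credit scorer: score = sum of weight * feature.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
THRESHOLD = 0.6

def explain_decision(applicant):
    """Returns the decision plus each feature's contribution to the
    score, sorted so the biggest drivers come first. For a linear
    model, contributions are exact; for black-box models, methods
    like SHAP or LIME approximate the same idea."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "rejected"
    drivers = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, score, drivers

decision, score, drivers = explain_decision(
    {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
)
print(f"{decision} (score {score:.2f})")
for feature, contrib in drivers:
    print(f"  {feature}: {contrib:+.2f}")
```

Here the explanation is simply the sorted contribution list: the applicant was rejected mainly because the debt-ratio term pulled the score down, which is the kind of answer a regulator or affected user can actually challenge.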


Can AI accurately predict patient outcomes?

AI is transforming #healthcare, helping doctors analyze patient data, assess risks, and predict outcomes. But how reliable are these predictions? A recent study in Nature Communications Medicine found that AI models missed up to 66% of critical injuries that could lead to death.

What went wrong?
– AI models relied on historical data that didn't fully capture real-world patient variability.
– They failed to recognize high-risk conditions, leading to inaccurate risk assessments.
– Over-reliance on these models could misguide healthcare professionals and compromise patient safety.

What can we learn from this?
– AI in healthcare must be rigorously tested and continuously monitored to prevent harmful errors.
– Medical professionals need to understand how AI reaches its conclusions.
– Bias in training data must be addressed to ensure AI models work across diverse populations and conditions.

AI can revolutionize healthcare, but it can pose serious risks without proper oversight. That's why we've developed an automated AI evaluation solution that helps organizations avoid bias, misinformation, and security risks. Regular audits can catch issues before they lead to real-world consequences, especially in high-stakes fields like healthcare.

Read about this case: https://lnkd.in/e3sf4BWJ
Need expert guidance? Contact us at [email protected]
Learn more: https://eticas.ai/

#AIinHealthcare #AISafety #ResponsibleAI #AIethics #HealthTech #AIaudit


AI isn't magic, but it often feels that way. Behind every AI system, whether diagnosing diseases, detecting fraud, or powering autonomous cars, are algorithms that mimic human decision-making, sometimes in ways we don't fully understand. But what is AI? How does it work? And why has it become so essential?

In this article, Hugo Meza, Tech Lead at Eticas.ai, breaks it all down, tracing AI's evolution from early rule-based systems to today's powerful deep learning models. He explores how AI interprets data, makes predictions, and even influences the world around us.

Read more: https://lnkd.in/eQuNPNcZ

#ArtificialIntelligence #TechExplained #MachineLearning #AIethics #FutureOfTech


Do you trust AI?

AI is streamlining hiring, finance, customer interactions, and business decisions, but do you trust it to make fair and reliable choices? What's your take?

Curious about your AI system's reliability? Let's evaluate it.
Contact us at [email protected]
Learn more: https://eticas.ai/

