Luminos.AI

Software Development

Washington, DC · 299 followers

Luminos gives lawyers the tools they need to reduce AI liabilities and sign off on AI risks.

About us

Luminos.AI was custom-built to serve the needs of the largest and most innovative companies on the planet. Our software has been deployed to manage AI risks in everything from generative AI to facial recognition, credit, hiring decisions and more. We apply years of nuanced legal experience to make sure AI governance works—in practice and at scale.

Website
https://www.luminos.ai/
Industry
Software Development
Company size
2-10 employees
Headquarters
Washington, DC
Type
Privately held
Founded
2022

Locations

Employees at Luminos.AI

Updates

  • View Luminos.AI's organization page

    Generative AI and Copyright: Why Current Legal Frameworks Fall Short, and What Companies Should Do

    Existing copyright law in the U.S. and text-and-data mining (TDM) exceptions in the EU don't fully address the use of massive datasets to train GenAI models, which means that many companies feel like they are flying blind when it comes to clear legal rules for their GenAI training data. Neither the U.S. fair use doctrine nor the EU TDM exceptions were designed with foundation models in mind, leaving companies exposed to uncertain, untested legal risks. Professors Tim W. Dornis and Sebastian Stober provide a great overview of these issues in a new paper, Generative AI Training and Copyright Law (link below).

    Why this matters for companies: legal, privacy, and AI governance teams are already overwhelmed with GenAI approvals. Without knowing what data a model was trained on, or how that data interacts with evolving copyright and privacy law, approving AI tools is a minefield. Manual review is impossible at scale.

    The way forward:
    • Automated training data governance solutions to evaluate datasets before models are deployed.
    • Proactive AI governance processes that align with fast-evolving global regulations.

    And regulation is coming fast:
    • The EU AI Act mandates training data transparency for high-risk models.
    • The FTC has warned that AI trained on unlawful data may violate consumer protection laws.
    • The UK has called for clear data accountability in AI development.

    Bottom line: if you're building or buying GenAI tools, understanding and managing training data risks is no longer optional. Want to learn more about automated approaches to approving GenAI systems for risks? Reach out to [email protected]!

    #GenerativeAI #AIgovernance #Copyright #TDM #AIregulation #InHouseCounsel #AIethics

    Read the paper here: https://lnkd.in/eESuEy4G

  • View Luminos.AI's organization page

    Enterprises are betting big on AI, but getting models approved for legal risk has become their biggest challenge. And this will only get worse with laws like the EU AI Act, the Colorado and Utah AI Acts, and more. The AI approval process is slow, complex, and often ad hoc, delaying innovation.

    Our CEO Andrew Burt sat down with Ben Lorica 罗瑞卡 on The Data Exchange Podcast to unpack why legal hurdles are holding AI back, and how to fix it. At Luminos, we're streamlining AI adoption by automating AI approvals, so your models move from idea to production faster without getting stuck in an approval-process bottleneck.

    Listen here: https://lnkd.in/gYr7A2K8

    Want to see how Luminos can help? Email us at [email protected].

  • View Luminos.AI's organization page

    What's the biggest challenge posed by artificial intelligence? It's not social media, the spread of misinformation, fears of Skynet, or all the other harms that dominate the headlines. Instead, it's much simpler. The problem is that organizations adopting AI do not have the right *tools* to manage its risks.

    Data scientists and legal teams live in different worlds, use different tools, and have an extremely difficult time communicating. And this means that companies have a choice: adopt AI and overlook compliance, or emphasize compliance and jeopardize the speed at which they adopt AI.

    The way to solve this problem is to give these teams the tools they need to communicate, test, and manage AI risks efficiently and at scale. In short, to automate the way that AI is managed for risks. And this is exactly what we're focused on at Luminos.

    Read our latest on the disconnect between lawyers and data scientists below, and please reach out to us at [email protected] if you're interested in learning more!

    https://lnkd.in/eEUsAP8K

  • View Luminos.AI's organization page

    What do you need to know about AI in 2025? Ben Lorica 罗瑞卡 provides the key highlights over at Gradient Flow, describing the need for platforms that *align* testing and tools across AI risks. That's because in practice, legal and data science teams have too many different tools for workflows, risk triage, testing, documentation, and more, which is why it can take months to oversee and approve AI systems for risks. But all of that changes with AI alignment platforms like Luminos.AI.

    Read more about AI alignment platforms below, and reach out to [email protected] to learn more!

    https://lnkd.in/eRwb8CAZ

  • View Luminos.AI's organization page

    Congrats to the entire Luminos.Law and ZwillGen team! Compliance has become *the* biggest barrier to AI adoption for every company that is serious about AI. With our roots in the legal and privacy community, and Luminos.Law in particular, we know that it takes a creative, interdisciplinary team to allow enterprises to move fast and ensure AI compliance. The new AI Division at ZwillGen will be a central part of that approach!

    To learn more about our tools for automated AI oversight, reach out to us at [email protected]. We'd love to talk to you!

    https://lnkd.in/epvG7hFw

    I'm thrilled to announce that ZwillGen has acquired Luminos.Law!

    We founded Luminos.Law five years ago, and what a journey it has been. The law firm launched just before the pandemic, quickly losing our early clients amidst the chaos. We regained our footing just a few months later and took off. Since then, we've served some of the most innovative companies in the world, with clients across the Fortune 500 and Global 2000 in nearly every sector. We have red teamed frontier models and GenAI systems, and tested and debiased AI in healthcare, finance, insurance, retail, and more. We became the first law firm in the history of the National Institute of Standards and Technology (NIST) to receive a research grant, through which we conducted research in support of truly foundational government initiatives like the AI Risk Management Framework.

    Throughout this journey, I've routinely had to pinch myself that I get to do such impactful work. I was not only running a business; I felt like I was doing public service work aligned with my time in the government. For all the hype about the impact of AI, it cannot be a force for good unless we as a society can meaningfully address its very real risks.

    And that's why I'm so excited to kick off this new chapter, in which Luminos.Law has become the AI Division of ZwillGen, one of the top privacy law firms in the world. Our combined resources mean that we can help clients adopt and manage AI risk responsibly and thoroughly. The law firm is in amazing hands with Marc Zwillinger, Brenda Leong, Jey Kumarasamy, and the rest of the team.

    With this new chapter comes a new role for me. I will be supporting the new AI Division as an advisor, but my day-to-day focus will shift to Luminos.AI. At Luminos.AI, we are automating the way that AI risks are managed, so that legal, privacy, and risk teams can exert oversight over AI systems practically, without draining all their time and resources.

    And I am so excited about the work we've done over the last year at Luminos.AI, supporting our amazing early customers as we launched our beta program. As we open up access to our platform to new customers later this year, I'm excited for us to play a critical role in how AI risk is managed in the next era of its adoption.

    The truth is, and I've seen it over and over again, that lawyers and privacy officers simply do not have the resources or the time they need to manually review every AI system for risk. While new AI laws are coming out in the EU, Colorado, Utah, and elsewhere, the tidal wave of new regulations is just starting and is only going to make AI oversight harder. We're ready to meet that moment.

    Reach out to me if you'd like to learn more about the new AI Division at ZwillGen or what we're building at Luminos.AI, and stay tuned for more!

    https://lnkd.in/eY7gwXfD

  • View Luminos.AI's organization page

    Is this your AI governance solution: some combination of documents, spreadsheets, email, and ad-hoc meetings?

    The state of the art in GenAI is evolving rapidly, and so is the legal landscape you'll need to comply with. Your legal, technical, governance, and business teams need to collaborate closely to understand the risks, complete any needed testing and documentation, and get the necessary sign-offs. But if you're like most organizations we talk to, this AI governance process is a hodgepodge of disconnected tools, delayed responses, confusion, and frustration. The result? It often takes 3-6 months or more just to get deployment approval for a new AI tool, even after development is done!

    It doesn't have to be that way. Luminos.ai's Workflow Manager, part of our AI Alignment Platform, is designed specifically to bring these teams together in one collaborative environment. One where you can build on industry best practices, easily construct your own workflows and governance documents, and get input from all the internal and external stakeholders, rapidly and with full transparency.

    If you're interested in taking your AI governance approval process from 3-6 months down to 3-6 days, reach out to Luminos.ai at [email protected] or visit our website: https://www.luminos.ai/

  • Luminos.AI reposted this

    View Luminos.AI's organization page

    Misuse and harms from Generative AI are no longer theoretical problems: they are happening today. The article below explores recent research from DeepMind on the topic -- recommended reading for anyone interested in the space.

    If you are looking to strengthen your own Responsible AI initiative and mitigate these risks, our alignment platform can help! How? By combining easy collaboration between legal, data science, and business teams with rigorous testing and clear documentation.

    Interested in learning more? Reach out at [email protected] or visit our website: https://www.luminos.ai/

    #GenerativeAI #legal #datascience

    View the profile of Ben Lorica 罗瑞卡

    gradientflow.substack.com

    The recent Google DeepMind paper on real-world AI misuse underscores the critical need for effective safeguards and responsible practices. By categorizing strategies employed for malicious purposes, the study provides valuable insights for mitigating risks. This research highlights the importance of AI Alignment Platforms like Luminos.AI, which can incorporate these findings to develop comprehensive solutions for managing performance, legal, compliance, and reputational risks associated with AI technologies. https://lnkd.in/gkV5Sanm

  • Luminos.AI reposted this

    View the profile of Ben Lorica 罗瑞卡

    gradientflow.substack.com

    The AI Risk Repository consolidates insights from 43 frameworks, cataloging 700+ AI risks. It offers a unified tool for stakeholders to manage evolving AI risks through Causal and Domain taxonomies. This living database aims to bridge gaps in understanding and mitigating AI-related dangers. [from Massachusetts Institute of Technology & others; cc Luminos.AI Luminos.Law] https://lnkd.in/e2_fgXiy

  • Luminos.AI reposted this

    View the profile of Ben Lorica 罗瑞卡

    gradientflow.substack.com

    AI incidents are coming. Is your organization ready? New playbooks are needed for AI-specific issues: define thresholds, monitor closely, plan containment. Consider a unified AI alignment platform to manage risks across bias, privacy, and safety as you scale AI. #AI #cybersecurity #LLM #GenAI Andrew Burt Luminos.AI Luminos.Law https://lnkd.in/ggvksQ2m
