Thrilled to have our CEO Devshi Mehrotra spotlighted in Axios' Women in AI series! At JusticeText, we are proud to be using AI technology to strengthen public defense advocacy across the country. Read more about how we are making an impact in the criminal justice system.
JusticeText's Activity
Most relevant posts
-
The Daniel Morcombe Foundation is proud to be a part of the SaferAI for Children Coalition. We're pleased to share our collaborative discussion paper, "Artificial Intelligence and Child Protection: A Collaborative Approach to a Safer Future". As AI transforms our world, we're working to ensure that it’s harnessed to protect, not harm, children. From outlining the risks posed by AI-enabled tools in facilitating child sexual exploitation (CSE) to advocating for innovative AI-driven solutions, this paper navigates the complex terrain of protecting children in the digital age. We're honoured to contribute alongside @ICMEC Australia and our Coalition partners from law enforcement, not-for-profits, academics, and AI experts, to chart a path forward for Australia in this vital space. Together, we can create a safer digital future for all children. Read the full discussion paper here: https://ow.ly/kCHf50UfvPL #SaferAIforChildren
-
The recent case discussed in The Guardian centers on the unprecedented federal prosecution of someone creating AI-generated child sexual abuse material (CSAM). With advances in AI tools, creating such explicit deepfakes has become increasingly accessible, raising serious concerns among law enforcement and child advocacy groups. Federal and state laws typically require evidence of real harm to prosecute CSAM, but new legislation is emerging to address the exploitation inherent in synthetic imagery.

This case, which marks a first in U.S. federal law, sets a critical precedent, indicating that law enforcement is beginning to treat AI-generated CSAM as a criminal offense. The Justice Department and other groups aim to make it clear that creating or distributing AI-depicted abuse is prosecutable, even if no real child is involved. Lawmakers across various states are enacting laws to clarify and strengthen these efforts. For instance, California recently passed a bill targeting AI deepfakes involving children, allowing for prosecution without needing to prove a real child was involved.

Experts, including those from organizations like the National Center for Missing & Exploited Children, emphasize that these images could still be used to groom and traumatize children. There's also concern that such images might flood law enforcement databases, potentially diverting resources from identifying real victims and increasing the scale of psychological harm for minors when their likenesses are misused.

As AI image models become more powerful, tech companies like Google and Stability AI are collaborating with anti-child abuse organizations to curb misuse, though many argue that proactive safety measures could have been prioritized earlier. This case underscores the growing urgency for regulatory frameworks and proactive tech safeguards to keep pace with AI advancements and prevent abuse. It's a stark reminder that while AI offers transformative potential, it also requires equally robust ethical guidelines and enforcement measures to protect vulnerable groups. #AI #abuse #children
-
**AI & Policing: A 2024 Review**

In 2024, the landscape of policing transformed significantly with the integration of AI technologies. Here are some key highlights:

- **Surveillance Expansion**: Law enforcement agencies increasingly adopted AI-driven surveillance tools, leading to a staggering 20% rise in surveillance operations over the previous year.
- **Predictive Policing Concerns**: Algorithms used for predictive policing faced widespread criticism, as communities raised alarm about racial bias and the potential for wrongful targeting. Nearly **60% of surveyed communities** expressed concerns regarding transparency and civil liberties.
- **Legislative Movement**: In response, several states introduced bills aimed at regulating AI use in policing, with **27 key proposals** emerging across the nation.
- **Community Engagement**: AI implementation prompted new policies encouraging community engagement, with **over 70% of police departments** reporting improved public relations initiatives involving AI transparency.
- **Looking Ahead**: As AI continues to evolve, the conversation on ethical use and accountability in policing grows vital. 2024 signifies a pivotal year in harnessing technology while safeguarding civil rights!

Let us know your thoughts on the future of AI in law enforcement! #AIPolicing #DataPrivacy #TechForGood #SocialJustice
-
Meet JusticeText, the AI platform revolutionizing the U.S. criminal justice system by leveling the playing field for public defenders. With public defenders often overwhelmed and underfunded, JusticeText steps in to transcribe and analyze crucial video evidence quickly, saving countless hours. This innovative tool, created by Devshi Mehrotra and Leslie Jones-Dove, ensures fairer trials and better outcomes for defendants who can't afford private attorneys. JusticeText uses AI to sift through hours of footage, flagging key moments and providing critical insights. This technology is already transforming the defense process for hundreds of public defenders nationwide, helping secure dismissals and reduced charges. By balancing resources, JusticeText is redefining public safety and accountability in America. Discover how JusticeText is making a difference: https://lnkd.in/gWc9i7US. #JusticeText #AI #CriminalJustice #PublicSafety #LegalTech #Innovation #EqualJustice #TechForGood #PublicDefenders #FairTrials #ReduceCrime
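For readers curious what this kind of evidence-review workflow can look like in code, here is a minimal transcribe-and-flag sketch. It is illustrative only, not JusticeText's actual system: it assumes the open-source openai-whisper package, a placeholder video file name, and a hypothetical keyword list.

```python
# Minimal transcribe-and-flag sketch (illustrative only; not JusticeText's system).
# Assumes: pip install openai-whisper, ffmpeg available, and a local video file.
import whisper

KEYWORDS = {"miranda", "consent", "search", "weapon"}  # hypothetical terms of interest

model = whisper.load_model("base")             # small general-purpose speech model
result = model.transcribe("bodycam_clip.mp4")  # placeholder file path

# Whisper returns timestamped segments; flag any segment that mentions a keyword.
for seg in result["segments"]:
    if any(kw in seg["text"].lower() for kw in KEYWORDS):
        print(f"[{seg['start']:6.1f}s - {seg['end']:6.1f}s] {seg['text'].strip()}")
```

The pattern is the point: transcribe once, then search the timestamped text, so a reviewer can jump straight to the moments that matter instead of scrubbing through hours of footage.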
-
As a senior in college currently taking the #DCIMCapstone class, I've been diving deep into the intersection of technology and societal issues. AI and predictive policing, a topic covered in my Data Analysis class, is a rapidly growing area where artificial intelligence is used to predict criminal activity and allocate law enforcement resources more efficiently. This technology aims to make communities safer by analyzing data to identify potential crime hotspots. However, it also raises important concerns about fairness and justice.

One of the biggest challenges is bias. Predictive policing often uses past crime data, which might reflect patterns of over-policing in certain neighborhoods, especially minority or low-income areas. This means that the AI could unfairly target these communities, leading to even more policing in places that have already been heavily monitored. It's a serious issue because it can reinforce existing inequalities instead of solving the root causes of crime.

We need to bring this topic into the public discussion because AI is becoming a bigger part of our everyday lives, including in law enforcement. Predictive policing isn't just a technology issue; it's a human rights issue. We need to make sure that the tools we use for public safety are transparent, fair, and protect civil rights. It's crucial to ask questions now so that we don't let technology unintentionally cause harm by reinforcing past injustices. By encouraging this conversation, we can push for AI systems that are not only innovative but also ethical and beneficial for all communities.

I found an interesting article that looks at how AI is used in predictive policing, discussing its benefits for public safety and the important ethical issues related to civil rights and communities: https://lnkd.in/eibQBN9c.
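To make the feedback-loop concern concrete, here is a toy simulation with made-up numbers (purely illustrative, not based on any real department's data). Two neighborhoods share the same underlying incident rate, but one starts with more recorded incidents simply because it was patrolled more heavily; allocating future patrols in proportion to recorded incidents then keeps reproducing that initial disparity.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative numbers only).
import random

random.seed(0)

TRUE_RATE = 0.05                 # identical underlying incident rate per patrol in both areas
recorded = {"A": 120, "B": 60}   # A starts with more records only because it was patrolled more
TOTAL_PATROLS = 100

for year in range(1, 6):
    # "Predictive" allocation: patrols proportional to past recorded incidents.
    total = sum(recorded.values())
    patrols = {area: round(TOTAL_PATROLS * count / total) for area, count in recorded.items()}

    # More patrols produce more recorded incidents, even though the true rates are equal.
    for area, n in patrols.items():
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(n))

    print(f"Year {year}: patrols={patrols}, recorded={recorded}")
```

Despite identical underlying rates, neighborhood A keeps drawing roughly twice the patrols, because the allocation only ever sees the data its own deployments generate.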
-
What is your greatest hope for Artificial Intelligence and Child Justice? https://lnkd.in/d-bQRhUY

A few days ago, the i-ACCESS project brought together experts in technology, law, and child protection from across Europe and beyond. Over two days, participants tackled critical and forward-looking questions:

- How can Artificial Intelligence be used to advance children's access to justice, while protecting their safety and rights?
- What can be done to ensure children have meaningful opportunities to participate in shaping these solutions?
- How can we collaborate effectively to drive child-centred innovation?

The conference fostered dynamic and thought-provoking discussions on the intersection of child justice and digital innovation. Missed it? You can catch up by accessing recordings of select sessions.

Discover more about the potential of AI in child justice and join the movement for child-centred digital solutions! https://lnkd.in/d-bQRhUY

#EthicalAI #ChildJustice #ChildrenRights #DigitalRights #AIEthics
-
AI on the Beat: Empowering Police to Protect and Serve More Effectively Fresno Police are showcasing a powerful use case for AI: cutting down on paperwork so officers can spend more time protecting and serving their communities. According to a recent article from YourCentralValley, the department has implemented AI tools that have significantly boosted officer productivity, allowing them to focus more on proactive policing and community engagement. Imagine this: Instead of being stuck behind desks, officers can respond faster to emergencies, build stronger relationships with the communities they serve, and enhance public safety—all thanks to AI streamlining administrative tasks. This isn’t about replacing officers but enabling them to do their jobs better. By automating time-consuming tasks, AI empowers law enforcement to prioritize what matters most: keeping people safe. Could AI-driven efficiencies like this redefine the role of law enforcement in communities nationwide? Let’s discuss. #AIInAction #PublicSafety #AIForGood #PoliceTech #EfficiencyMeetsSafety
-
Pennsylvania becomes the latest state to criminalize non-consensual AI-generated sexual content, with Governor Shapiro signing the legislation into law on October 29th. The new law will:

- Make it a criminal offense to create or share "artificially generated sexual depictions" - content that appears to authentically depict someone in nudity or sexual conduct that never occurred in reality
- Set clear penalties: a first-degree misdemeanor if the victim is a minor, and a second-degree misdemeanor if the victim is an adult
- Update terminology and strengthen protections against "artificially generated child sexual abuse material"

It is truly a no-brainer that we need a federal policy on this! Last year, all 50 state attorneys general united in calling for federal action against AI-generated child sexual abuse material - a rare show of unanimity that demonstrates how serious this issue is.

The AI Policy Newsletter: https://lnkd.in/eS8bHrvG
The AI Policy Course: https://lnkd.in/e3rur4ff

#AIpolicy #ArtificialIntelligence #TechPolicy #AIGovernance