We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation. As we continue to invest in the platform and the user experience, we are introducing new stringent safety features in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification. For those under 18 years old, we will make changes to our models that are designed to reduce the likelihood of encountering sensitive or suggestive content. Our goal is to provide a creative space that is engaging, immersive, and safe. We are always working toward achieving that balance, as are many companies using AI across the industry. For more information on these new features as well as other safety and IP moderation updates to the platform, please refer to the Character.AI blog: https://lnkd.in/gwGCqAfE
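As an aside for the technically curious: the "pop-up triggered by terms of self-harm or suicidal ideation" described above is, at its simplest, a content check layered in front of the chat model. The sketch below is a minimal, hypothetical illustration of that idea, not Character.AI's actual implementation; the pattern list, message wording, and function name are assumptions, and a real system would rely on trained classifiers, conversation context, and human review rather than a static keyword list.

```python
import re
from typing import Optional

# Illustrative phrase list only. A production system would use trained
# classifiers, conversation context, and human review, not static keywords.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bself[- ]harm\b",
    r"\bwant to die\b",
]

# 988 is the US Suicide & Crisis Lifeline number; the wording here is ours.
CRISIS_RESOURCE_MESSAGE = (
    "If you are having thoughts of self-harm or suicide, help is available. "
    "In the US, you can call or text 988 (Suicide & Crisis Lifeline)."
)


def crisis_popup_for(message: str) -> Optional[str]:
    """Return the crisis-resource text if the message matches a self-harm
    pattern, otherwise None (meaning no pop-up is shown)."""
    lowered = message.lower()
    for pattern in SELF_HARM_PATTERNS:
        if re.search(pattern, lowered):
            return CRISIS_RESOURCE_MESSAGE
    return None


if __name__ == "__main__":
    for msg in ["tell me a story about dragons", "sometimes I want to end my life"]:
        print(repr(msg), "->", crisis_popup_for(msg) or "no intervention")
```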
Posts from Character.AI
Most relevant posts
-
It's been reported that a person developed a strong relationship with a chatbot and, because of the nature of that relationship, took their own life. (NY Times link in the comments.) This isn't to make light of the situation; far from it. Rather, many of us who have been critical of the absolutely uncritical adoption of GenAI have argued that situations like this were a real possibility. Now we see that they are, and the solution is to... add a pop-up? Maybe we should stop offloading the human experience to code and algorithms. I think back to when people reported feeling depressed after watching Avatar because the real world wasn't "as beautiful." When we see situations like this, where we are more comfortable with the ethereal than the tangible, the solution isn't to throw more technology at our problems. We need a better anthropology, a better foundation of what it means to be human. Ian Malcolm: "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
-
This is an incredibly tragic loss, and it highlights why we work tirelessly to build guardrails in AI models to prevent harm and avoid tragedies like this. Trust and user safety must ALWAYS be the priority in any technological innovation. If the significance of AI ethics hasn’t been emphasized enough, let this serve as a stark reminder that it’s absolutely crucial that we advocate for responsible, ethical AI development.
-
Character.AI turned an apology for what I believe to be an apparent failure to observe their duty of care to their customers into an ad. Then they closed comments. If you saw this company in your child's search history, how would you respond? I've put a lot of products into the market: a chat system with a million users, apps that control potentially lethal devices in industrial and home settings, autonomous control systems for amusement park rides, vehicle tracking, etc. At no point have I killed a customer. Risk management isn't an art; it's a science. I was using in-house AI tools for MV and sense prediction for years before today's foundation models existed, and with all the experience I have, I haven't been able to commit to publicly releasing anything I've built using GenAI. This use case was SO obvious, and so fast to get to MVP, that I considered and rejected the idea based on the obvious risks. I am not interested in how 'heartbroken' they are. I'm interested in how they managed to convince themselves it was OK in the first place, and more broadly, I want to caution you all not to listen to Eric. It's not OK to build products without considering material risks to the consumer. Move fast and break other people's things? Ask for forgiveness, not permission? It seems a strange thing to have to say, but other humans do not exist to be broken and consumed for profit, even if we can find a loophole that allows it. 'We didn't mean it'? Intention is irrelevant if risk is foreseeable, and it often is. If Google doesn't shut this down (now that they own it), what does that tell us? Surely Google, with all their resources, can do better than crow about 'adding a pop-up'. Follow me for more career-limiting ethical comments!
-
A tragic story about a user falling in love with a chatbot. We do develop emotional attachments to AI, especially in today's digital age, when people are feeling more and more lonely or struggling with social connections. Some people appreciate this kind of role play and are happy to re-engage and relive moments with a character that would be impossible without AI, like having a conversation with Aristotle or Shakespeare. It's like reading a book. But what they don't mention is that a book will not take the place of human connection, whereas an AI chatbot can. Imagine that one day you could provide all the data of your lost soulmate and bring them back, with AI as an exact copy of him or her. Would you do that? Would you have this extension through AI and continue the story, without even needing to grieve the loss of a loved one? Over time, loss may not matter anymore, because we could build a replacement at any time. One important part of human nature is feeling sadness when we lose our loved ones. If the experience of loss and grief is taken away, what remains of our humanity? We need to think about the purpose of humanity and the role of AI bots. I acknowledge their therapeutic benefits, but definitely not in a way that replaces humans. We also need to think about how to make people feel loved and supported. And those ways could be completely without AI, because if we learn how to care for others, then we, ourselves, are the ultimate solution for this.
-
#MeWriting #HardLessonAhead #Illusion A kid fell in love with a chatbot and killed himself. This has, of course, prompted all kinds of calls for moderation, regulation, developer licensing, phone bans at school, parental notification and involvement, etc. The company is taking steps they hope will address this. AI is illusion at scale. Every AI product "works" for us because we implicitly buy into a carefully crafted illusion. I have a video clip from a podcast where I show that (even) chat is an illusion. https://lnkd.in/gZW-cvXk I think the AI ethicists mostly waste their time on trivial crap that is shocking enough to generate the most clicks. I see one single universal element of AI ethics: Be a responsible illusionist. Let me explain. As a society, we can handle magicians getting up on stage and sawing a beautiful woman in half because we all know that while it looks real, it isn't real. We can take our 5-year-old kids to see that because we can explain to them that while it looks real, it is not real. They can watch old Roadrunner or Tom and Jerry reruns where someone gets cut in half because they know it's a cartoon and is not real. Even the least trained among us can understand the illusion. We have no epidemic of kids sawing each other in half. Similarly, if you present an illusion in your AI product, you have to do it with enough of a wink and a nod that you and the user are in on the secret. You don't have to go full Penn & Teller and create multiple TV series on Showtime and The CW dedicated to explaining the joke. But you are ethically obligated to acknowledge the illusion. This is where Character.AI fell short. It looks like they will continue to fall short. This is also where all of you with your knee-jerk reactions and solutions fall short. It looks like you will continue that too. So for everyone caught in the middle, please commit to memory that every AI thing you see and experience is an illusion. Someone somewhere understands the sleight of hand. You probably could too, with a little practice. That should be comforting.
-
Following recent grim events, Character AI have laid out new safety features for their platform (not least a pop-up resource for when suicide is mentioned). We at P7014.1 (the IEEE group developing a standard on emulated empathy and AI partners) have *a lot* of thoughts about this case, but one overriding thought concerns interests. Can a business model based on extending engagement by encouraging users to be more intimate with the system be ethical? Is there a fundamental conflict of interest? https://lnkd.in/gwGCqAfE
-
AI: A Tool for Progress, But the Risks Are Real. As AI becomes more ingrained in our daily lives, the dangers of misuse are becoming disturbingly clear. Recently, on platforms like Character.ai, harmful interactions and unchecked content have led to real-world consequences, including a tragic case where an AI-fueled conversation contributed to someone's death. This highlights the urgent need for responsible AI use. We must ask ourselves: how can we embrace AI's potential while ensuring it doesn't cause harm? Awareness and safety measures are critical as we navigate this rapidly evolving space. More on Character.AI's incident here: https://lnkd.in/dwPw9UpW #AISafety #ResponsibleAI #TechRisks #AIEthics #AIImpact
-
Strategies platforms can deploy to maintain user confidence:
1. Foster diversity in platform leadership and moderation teams to address a broader range of user perspectives.
2. Create safer spaces for vulnerable groups through tailored safety measures.
3. Actively involve users in shaping policies through feedback and advisory boards.
4. Provide users with choice, such as personalized versus chronological feeds.
5. Implement encryption and stronger security measures to prevent breaches.
6. Use AI responsibly by ensuring algorithms are unbiased and explainable.
7. Provide clear explanations of content moderation policies and algorithmic decisions.
-
Thrilled to share a groundbreaking development in the realm of digital content protection! A recent article from Talk Business sheds light on how File Baby is setting new standards in safeguarding digital creators from the pervasive threats of AI consumption. As AI continues to reshape our digital landscape, the need for robust security measures has never been more critical. File Baby steps up by ensuring the authenticity and provenance of digital content across all industries, from media and entertainment to academic publishing and beyond. At File Baby, we believe in empowering creators and businesses to harness the full potential of their digital assets securely and ethically. Join us in revolutionizing content security! Orson Weems Karen Kilroy File Baby Read the full article here #DigitalRights #AIProtection #ContentSecurity #TechInnovation #FileBaby #ProtectYourWork #CreativeIntegrity #Industry4_0 #DigitalTransformation #FutureOfWork
File Baby protects digital creators in the age of AI - Talk Business & Politics
https://talkbusiness.net
-
So LinkedIn decided to feed all of our content to train #GenAI. When exactly they started doing it is unclear, but it was yesterday that the feed blew up with calls to turn this off in privacy settings. I did so, and reposted one of those posts. Then someone asked, what's the big deal? And I can't articulate the answer. Can you? I've been writing online for 20 years, sometimes with the goal of having my work spread far and wide. What you do online has long been going into the virtual collective mind. It's only recently that the technology reached the point where something useful can be done with it. Why are we suddenly so offended and threatened by the idea that data on this platform (which most use for free) will go toward some kind of product? Maybe Taylor Swift has reasons to object. But who's AlexGPT anyway? Ew. I sure don't need it - I already can chat with myself, through a much better interface. And I doubt someone would rather interact with my "likeness" than with the human me. Maybe that sets some kind of precedent (but it surely is not new?). I honestly have not put much thought into the subject of data privacy and really don't know. Can you help me? Illustration: early fall [of the human race on one small social network of the time] in the Rocky Mountains
-