Ilya Sutskever just raised a billion USD to fund his start-up SSI, focused on safety. Clearly, people are worried about AI being in the wrong hands, and about misinformation, manipulation, and distortion of truth. This is especially concerning in political contexts, where the stakes are incredibly high.

As we approach the next U.S. elections, the role of AI-generated images and deepfake technology becomes even more relevant. With great innovation comes great responsibility. But how much responsibility should be carried by the enterprises? How much should the regulator cover? Where should it be regulated? Do the individuals writing the code carry responsibility? How high should the guardrails be? If all sides have equal access to the same tools, is it still a weapon?

I don't have answers yet, but would love to hear your thoughts.

Credit: via Jan Andresen

#AI #Deepfakes #Geopolitics #EthicsInAI #US2024 #ElectionIntegrity
Sick. It's already in people's hands, and people can fine-tune it however they want and produce whatever they want. It's no different from a drug overdose. I wonder how SSI can undo this?
Wow, this is a topic to fill books with. Sometimes people need rules and guardrails to prevent them from hurting themselves and others. These might prevent most harm, but those who want to exploit AI will do so without caring about the rules we have put in place as a society. I do not believe in self-regulation by companies, but I also do not believe in preventing innovation. Is it even a political question, or a philosophical one? Maybe it depends on the values of each group or society, and on whether we are willing to fall behind others in AI, which in this case could be very dangerous.
I'm reminded of the discussion we had last year around Christmas: how easy will it be to create deepfakes in the future, and could young people become victims of bullying at school as a result, similar to what is currently happening with revenge porn? One hypothesis we discussed was that it will soon be so easy for everybody to create high-quality deepfakes that they will be used in such an inflationary way that they lose their impact. In the end, much like the initial face-swap hype, they would be fascinating at first but quickly lose their appeal and become boring. Nevertheless, it should still be a criminal offense to spread deepfakes with the intention of harming someone. Technically, we will hardly be able to prevent the creation of such content; regulatory measures can at best make access more difficult by raising the technical hurdles.
As is often the case with disruptive technologies, it is difficult to find the right balance and to give appropriate answers to the key questions. Most likely, time will tell what truly works. Mistakes will be inevitable, and it is precisely from these that we, as a society, as entrepreneurs, and as politicians, must learn. No one has a crystal ball to make perfect decisions in advance. Fortunately, awareness is growing that AI can be used for more than just positive purposes, which offers the chance to find a balance and, in turn, opens up further opportunities. In my opinion, overregulation only serves to stifle innovation. We should remain bold when it comes to new technologies and allow ourselves the space to make mistakes.
I am interested in understanding the intent of the prompt maker behind this content, especially the choice to depict male world leaders in female bodies. Why not the reverse, depicting female leaders in male bodies? I am also curious why Elisabeth L'Orange chose this depiction to represent "geopolitical responsibility" and "deep fakes". What message is being highlighted or emphasised? One possible interpretation: by depicting male world leaders as "female-like", it embarrasses their power, which is represented by masculinity.
Hi Elisabeth L'Orange, I need the reference for the article behind this part: "Ilya Sutskever just raised a billion USD to fund his start-up SSI focused on safety. Clearly people are worried about AI being in the wrong hands and misinformation, manipulation, and distortion of truth."
I would like to read the prompts… as this is the result of the input(s). Or have I misunderstood how it works?
... and then there's the psychological aspect of how much therapy we will all need to deal with the long-term damage caused by seeing gross, hyper-sexualised versions of already disturbing public figures like Trump, Kim Jong-un, Putin & co. After seeing that footage, I'm looking for the delete/reverse button in my hypothalamus... what can make me unsee that?