Can someone please report @Maxwell Hsiang to LinkedIn? He has claimed that he is the founder of Cognit AI, which is not true. It seems like a fake profile or a scam account.
Activity from Cognit AI
-
Saw a call come in from HI. I didn't answer. No msg. The same # called again approx 10 seconds later. Didn't answer; no msg. Googled the # and found websites saying it was a possible scam, in which scammers ask "can you hear me?" so they can record you saying 'yes'. A few clients had previously told me about this scam and I brushed it off as media paranoia, but now it hits home. With AI, are we going to be afraid of even talking on the phone? Will we be afraid of AI duplicating people's voices, not just for one word but for unlimited words and phrases, then mimicking them in calls to targeted people's phone #s? Is this what the future holds? Will we lose our human interaction altogether? God help us all.
-
Lying is a feature of AI, not a flaw. Lying is particularly appreciated by marketers, spammers, advertisers, propagandists, grifters, con artists and scammers. Lying is a great tool for manipulation and getting people to buy things they don't need and believe things they don't benefit from. Lying is a net for fools and you can make a lot of money out of fools. When many people and orgs hear that AI is good at lying, they shout: "Yes! I'll have two of those, please!!"
-
Unfortunately, not everything coming out of the GenAI innovation wave is positive. This interesting read outlines some approaches to dealing with one of the downsides you may have experienced firsthand. While the topic isn't directly business-related, you might find it interesting if, like me, you've noticed a significant recent uptick in audio scams. In short, this article outlines a few methods you can use to protect yourself from certain types of AI scam calls. #GenAI
-
This is a fascinating and instructive case study of the dangers of AI, algorithms and our increasingly digital world. AI-programmed bots may never be able to ape human morality, yet we are increasingly relying on them to play the role of humans. Thus, people are seeking to meet their social needs through machines that are not able to make decisions about what is good for humans in a particular predicament. If machines merely reflect what we want to hear, they are likely to act as amplifiers of existing fears, doubts and prejudices, and with Big Tech platforms solely focused on monetisation, that is only likely to get worse. https://lnkd.in/ejyW6CXU
Mother says AI chatbot led her son to kill himself in lawsuit against its maker
theguardian.com
-
#AI detects mass collection of prompts, bans account permanently. #ImageGeneration Follow us on Discord: https://lnkd.in/gt823Zd3 Follow us on WhatsApp: https://wapia.in/wabeta Summary: Midjourney has banned all employees of Stability AI from its services due to suspected data scraping, in which bots were allegedly used to retrieve prompt and image pairs. The ban came after Midjourney detected large and unusual activity that forced a temporary service suspension. Stability AI's CEO denies the allegations, claiming their AI model does not require scraping. Both companies are conducting internal investigations, with Midjourney providing information to assist. The Verge has requested comments from both companies, but no response has been received yet. Hashtags: #chatGPT #AIimagegeneration #BOTdetection
#AI detects mass collection of prompts, bans account permanently. #ImageGeneration
https://webappia.com
-
AI + Scamming = A Problem. Imagine AI being used to scam you. In the clip, Andrew Rohm talks about how someone could potentially scrape all of the information on your public profile and create an almost-real version of you to use for scams. Worst part: grandma is going to have no clue that it's not you. Lynette Bruecker Arnhart, PhD, CBCP from Team Logic IT in Mosinee agreed with the statement and added that she's trying to find ways to use it safely. The problem with AI is that it's too good at its job, convincing you that it's real. Some of the scamming uses I've thought of for AI: → Relationship modeling (family member or otherwise) → Mass scam outreach → Personalized scam outreach What uses of AI should we be scared of? --- P.S. I'm a digital marketer. If you want to learn more about digital marketing, subscribe to my newsletter. Links in my bio :) or go to: https://lnkd.in/gchYvCp7
-
This might be the most negative post you see on LinkedIn today. But I genuinely want to ask: am I the only one who thinks an AI-oriented future is more concerning than exciting? Generative AI is within everyone's reach. Deepfakes are more accessible than ever. And not everyone is going to use them in an ethical manner. We already see AI-generated celebrity videos on reels, which look damn real until we cross-check the account. Remember that pic from the Ambani wedding where Salman and Aishwarya were posing together? Too real to believe it was edited. We are entering an era where every screenshot, image and video needs to be viewed through an extra lens of suspicion. No one can tell what's real and what's AI anymore. And those morphed clips won't remain confined to celebrities or politicians. It can happen to you and me. Imagine someone blackmailing an ordinary person using their morphed pics or videos! How mentally draining would it be to go through cybercrime complaints and the legal process? Worst case, these things can be fatal, costing a person's life. While AI continues to evolve, there is no evolution in the laws around these technologies. We currently don't have a rigid legal framework that clearly dictates how these AI-generated outputs can be utilized, sold, or distributed. There's a need for a whole separate category of AI laws and regulations, ensuring fair use by every individual. Privacy, in the digital world, is already a myth. And AI will be the final nail in the coffin. Now tell me honestly: have similar thoughts crossed your mind, or am I just being a dramatic overthinker like always? #storiesbyharshi
-
AI is a powerful tool that can be utilized in various ways, including potentially aiding criminals in their activities. While platforms like ChatGPT and DALL-E have safeguards in place to prevent misuse, there is still a risk of manipulation for illicit purposes. However, it is crucial to acknowledge that AI has numerous beneficial applications beyond criminal intent. As society becomes more familiar with AI technology, the initial fear and resistance will likely diminish, much like past concerns about new media formats displacing traditional modes. To download the document, open it in large view. The download button will appear in the top-right corner. #GenAI #AI #AIethics #TechnologicalAdvancements
-
It was another great workshop yesterday as I had the opportunity to dive into the world of AI and Your Practice with friends and colleagues! I was so pleased to have my friend Jacinta Gallant join us, as she recently shared her concerns around the use of #AI. Delighted that she found the time together to be helpful and that she enjoyed dipping her toes in - thanks for sharing, Jacinta! I am going to be turning the workshop into an ON DEMAND training in the next few weeks to make it more accessible and to allow practitioners to learn at their own pace, whether that be slow drip or firehose! If you want to be notified as soon as the training is available, send me a DM or drop a note in the comments! #Entrepreneur #entrepreneurship #practicebuilding #mediation #law #legaltech #ai #makemoneymediating #podcast #susanguthrie
I’m still worried about AI being used by haters to wreck our world, but after spending the afternoon with Susan Guthrie, learning about the many ways we can harness the benefits of AI, I took a big step forward. Rather than just resisting it (Leave me alone, Bots!), I am far better informed and a lot more open. Thanks, Susan. The webinar was excellent. Susan walked us through the upsides and downsides, the ways we can protect ourselves and our clients, privacy stuff, questions around who owns it once it’s in AI (?!) AND gave me the gift of dipping my toes in. I found it mind-blowing, and while I’m still worried about the haters, I am not scared to open ChatGPT or Gemini now. Full disclosure: I used a dummy email address to sign in for the free account. I’m still not sure I want AI to find me! And I am still annoyed by the AI bots on LinkedIn. #lifelonglearning
-
Botsplaining - when AI tells you it’s an AI and not able to do a simple task. Botshit - AI hallucinations that are obviously made up. Botpaganda - AI documenting its hallucination with BS links that don’t exist. Botlighting - when the AI tells you the fault for its crappy response lies in your prompt, not the AI. Botslapping - using AI to send spammy pitch-slaps. ~~~~~~ EDIT - adding a few from the great comments! Botman - bots that identify as male (Meryl Evans, CPACC (deaf)) (I had some fun testing this and posted a screenshot of the results) Botroll - using bots for automated commenting on social media (Iva Vlasimsky) Botshit crazy - how the bots make us feel (especially the customer service bots) (Dale W. Harrison) Botslipping - when AI gives you copy that’s far-fetched and not what you’re looking for (Fatima Khursheed) Botfishing - a bot pretending to be a human to set you up for a scam (Mike Blake) Botmageddon - when the bot depresses the red button without human consent (Adam P. Shiell) Botpulation - when bots manipulate us by spreading lies and fake news (that one is my invention, but I think we need a better word for it.) What can you add?