Tech Nibbles Newsletter - Edition II, Issue 1, 2 November 2024.
Hello! I’m Thanuki Goonesinghe, and I’m excited to welcome you to the second edition of my newsletter, where we explore the latest trends in technology, tech policy, compliance, regulation, and digital rights. After publishing 40 issues of the first edition, I’ve decided to move this newsletter to LinkedIn for a more interactive experience. I look forward to engaging with both my current readers and new followers. Enjoy reading as much as I enjoy creating this content!
OpenAI Launches ChatGPT Search
OpenAI has launched ChatGPT search, a new feature (evolved from its SearchGPT prototype) designed to capitalise on the booming search market and compete directly with Perplexity AI. It aims to enhance the search experience with a conversational interface, up-to-date information, and clear citations for its answers. As AI-driven platforms gain ground and traditional Google Search comes under pressure, ChatGPT search offers a more personalised experience by tailoring information to the user. OpenAI has partnered with a range of news and data providers to incorporate fresh information and new visual designs across categories such as weather, stocks, sports, news, and maps. With its ability to understand context and deliver relevant results quickly, ChatGPT search is set to redefine how users interact with search engines. If you haven't explored it yet, now is the perfect time to dive in!
Read more about it here: https://openai.com/index/introducing-chatgpt-search/
Google's AI-Generated Text Watermarking Tool Now Available as Open Source
Google has announced that SynthID Text, the watermarking technology from Google DeepMind designed to help identify AI-generated text, is now available as open source. The tool can be accessed through the Google Responsible Generative AI Toolkit. Read more about it here: https://ai.google.dev/responsible/docs/safeguards/synthid
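The release also comes with a Hugging Face Transformers integration. Below is a minimal sketch of watermarking generated text, assuming the SynthIDTextWatermarkingConfig API shipped with that integration; the model name and key values are illustrative placeholders, and real watermarking keys must be kept private for detection to remain meaningful.

```python
# Minimal sketch: SynthID Text watermarking via the Hugging Face
# Transformers integration (assumed API; model and keys are placeholders).
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_name = "google/gemma-2-2b-it"  # hypothetical model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The watermark is seeded by a private list of integer keys; these values
# are illustrative only and should be generated and stored securely.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,  # token n-gram length used to embed the watermark
)

prompt = "Write a short note on why watermarking AI text matters."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=True,      # the watermark is applied during sampling
    max_new_tokens=100,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Detection is the other half of the pipeline: DeepMind's open-source release pairs this generator-side configuration with a Bayesian detector that must be trained against the same private keys.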
White House Releases First-of-its-Kind Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence
The United States White House has issued a memorandum under the directive set forth in subsection 4.8 of Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence). Largely viewed as a national security memorandum, the document emphasises the importance of advancing U.S. leadership in artificial intelligence to meet national security objectives, harnessing AI for national security, and strengthening partnerships with industry, civil society, and academia. It also highlights the need for ethical guidelines and responsible AI development to mitigate the risks of AI deployment, and calls for the United States to take a leading role in establishing effective global norms and to participate actively in institutions grounded in international law, human rights, civil rights, and democratic principles, among other points.
The memorandum can be read here: https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/
Anthropic Introduces its AI Agent Feature "Computer Use"
Anthropic has unveiled a preview of its latest feature, "Computer Use," which enables its Claude 3.5 Sonnet AI model to interact with your computer. Currently in public beta, this capability allows the AI to perform tasks such as moving the cursor, clicking buttons, and typing text by interpreting what it sees on the screen. Developers can now access the feature via the API, essentially enabling Claude to operate a computer much as a human would.
However, Anthropic warns that the computer use functionality is still experimental and may be "cumbersome and error-prone." The company is seeking feedback from developers as it works to improve the capability quickly.
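For developers who want to try it, the beta is exposed through the standard Messages API via a dedicated tool type and a beta flag. The sketch below assumes the anthropic Python SDK and the tool names from Anthropic's beta documentation; the screen dimensions and prompt are illustrative, and a real agent loop would also need to execute the returned actions and send screenshots back to the model.

```python
# Minimal sketch of calling the Computer Use beta with the anthropic SDK.
# Tool type and beta flag follow Anthropic's public beta documentation;
# a production agent loop would execute the returned actions and reply
# with fresh screenshots until the task completes.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        {
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1280,   # illustrative screen size
            "display_height_px": 800,
            "display_number": 1,
        }
    ],
    messages=[{"role": "user", "content": "Open a browser and check today's weather."}],
    betas=["computer-use-2024-10-22"],
)

# Claude replies with tool_use blocks describing concrete actions
# (mouse moves, clicks, keystrokes) for the caller to carry out.
for block in response.content:
    print(block)
```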
Addressing the obvious concerns about fraud and spam, Anthropic has said:
"Because computer use may provide a new vector for more familiar threats such as spam, misinformation, or fraud, we're taking a proactive approach to promote its safe deployment. We've developed new classifiers that can identify when computer use is being used and whether harm is occurring."
Read more about Computer Use here: https://www.anthropic.com/news/3-5-models-and-computer-use
Watch Computer Use in action: https://www.youtube.com/watch?v=ODaHJzOyVCQ
OpenAI's Transcription Tool Caught Hallucinating
An Associated Press investigation has raised significant concerns about OpenAI's Whisper transcription tool, revealing that it generates fabricated text in medical and business contexts. Researchers reported instances of racial commentary and fictitious medical treatments appearing in transcripts, which could lead to serious consequences.
A researcher from the University of Michigan found that eight out of ten audio transcriptions of public meetings contained hallucinations. Similarly, a machine learning engineer analysed over 100 hours of Whisper transcriptions and identified hallucinations in more than half of them. A developer also reported that nearly all of the 26,000 transcriptions he produced with Whisper included inaccuracies (https://techcrunch.com/2024/10/26/openais-whisper-transcription-tool-has-hallucination-issues-researchers-say/).
In response to these findings, an OpenAI spokesperson acknowledged the researchers' concerns, stating that the company is committed to addressing issues of fabrication and actively incorporates feedback into updates for the model.
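Whisper itself is released openly, so these findings are straightforward to probe on your own audio. Below is a minimal sketch using the openai-whisper Python package; the model size and file name are placeholders.

```python
# Minimal sketch: local transcription with the open-source Whisper model
# (pip install openai-whisper); model size and audio file are placeholders.
import whisper

model = whisper.load_model("base")        # options include tiny/base/small/medium/large
result = model.transcribe("meeting.wav")  # hypothetical audio file

# Hallucinations tend to appear in silent or noisy stretches, so it pays
# to review per-segment output instead of trusting the joined transcript.
for segment in result["segments"]:
    print(f"[{segment['start']:7.2f}s - {segment['end']:7.2f}s] {segment['text']}")
```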
Read more here: https://openai.com/index/whisper/ ; https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14
Indonesia Leads Southeast Asia in AI Readiness Assessment with UNESCO’s Framework
UNESCO, alongside Indonesia's Ministry of Communications and Informatics (KOMINFO), has successfully conducted the AI Readiness Assessment for Indonesia using its Readiness Assessment Methodology (RAM). This achievement positions Indonesia as the first nation in Southeast Asia to complete the RAM process, which is currently underway in over 60 countries worldwide. This is a major step towards ethical governance in Artificial Intelligence.
The report on Indonesia's AI landscape highlights key findings regarding its readiness for AI integration. Economic and socio-cultural concerns center on potential labor displacement, with rural areas focused on job impacts and urban stakeholders advocating for responsible AI use. Additionally, there is a low awareness of AI's potential to exacerbate bias and discrimination, necessitating greater public education. The report also notes a critical funding gap in AI research compared to neighboring countries. Policy recommendations include establishing a regulatory framework for ethical AI governance, creating a National Agency for Artificial Intelligence to coordinate efforts across sectors, and ensuring equitable access to AI education and resources, especially for researchers and startups outside the capital.
Harvard Students Investigate the Use of Meta's New Smart Glasses in Doxing Scenarios
A pair of Harvard students have developed a project called I-XRAY that combines Ray-Ban Meta smart glasses with the facial recognition search engine PimEyes to identify strangers in public and retrieve their names, contact information, and addresses. The students, AnhPhu Nguyen and Caine Ardayfio, used the technology to approach people and strike up conversations, streaming the gathered data directly to an app on their phones. While they have stated that they have no plans to release the product, their goal is to raise awareness about the potential dangers of such technologies.
In a demonstration on X, Nguyen elaborated, “We stream the video from the glasses straight to Instagram and have a computer program monitor the stream. We use AI to detect when we’re looking at someone’s face, then we scour the internet to find more pictures of that person. Finally, we use data sources like online articles and voter registration databases to figure out their name, phone number, home address, and relatives’ names.”
Watch the video here: https://x.com/AnhPhuNguyen1/status/1840786336992682409
Lawsuit Accuses Character.AI of Responsibility for Teen’s Death
This story addresses the topic of suicide. If you or someone you know is experiencing suicidal thoughts or facing mental health challenges, support is available.
Florida mother Megan Garcia holds Character.AI accountable for the death of her 14-year-old son, Sewell Setzer III, who died by suicide after becoming progressively detached from his real life through highly sexualised conversations with a Character.AI bot, to which he formed a deep emotional attachment. The lawsuit states that young Setzer discussed his suicidal thoughts with the bot, told it he was ‘coming home’ to it, and was encouraged by it to do so. Garcia has filed a wrongful death lawsuit against the company with the assistance of Matthew Bergman, the founding attorney of the Social Media Victims Law Center, which is known for representing families harmed by platforms like Meta, Snapchat, TikTok, and Discord.
On the same day her lawsuit was filed, the company announced several new safety features aimed at enhancing user protection. These updates include better detection of conversations that breach guidelines, an updated disclaimer to remind users they are interacting with a chatbot, and notifications after users spend an hour on the platform. Additionally, the company modified its AI model for users under 18 to minimise exposure to sensitive or suggestive content (https://blog.character.ai/community-safety-updates/ ). For Garcia, however, these changes come “too little, too late.” She expressed her concerns, saying, “I wish children weren’t allowed on it. There’s no place for them there because there are no guardrails to protect them.”
Read more here: https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html ; https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0
Key Policy Documents and Reports You Can't Miss
Must-Try AI Tool
NotebookLM: You can upload PDFs, websites, YouTube videos, audio files, Google Docs, or Google Slides to NotebookLM, which will then summarise the content and identify interesting connections between topics, all thanks to the multimodal understanding capabilities of Gemini 1.5.