AI4Future: Top AI News (29th July-4th August)
AI4FUTURE generated this image from a detailed prompt, with no modifications, while testing the newly released FLUX.1 by Black Forest Labs


This week marks a new phase in the global AI market as the EU AI Act came into force on August 1st. The legislation represents the world's first attempt to regulate AI using a risk-based approach. The European AI Office has already started multilateral consultations with market stakeholders on the rules for general-purpose AI models. The EU AI Act itself is a high-level document, and further development of detailed recommendations and rules is anticipated.

The enactment of the EU AI Act means that legislative initiatives around the world will build on this foundational experience, adapting it to local specifics. For instance, this week the UK made several significant announcements on AI regulation. The new government announced the creation of an AI unit to develop an AI Action Plan and a strategic vision for future steps, including reviewing the AI Bill and initiating market consultations. It has also revised previously approved Conservative plans for AI industry funding, aiming to address billion-pound unfunded commitments.

An interesting update from the Argentine government about using AI for crime reduction highlights the close connection between AI deployment and issues of ethics and citizen rights, which underpin global AI regulation initiatives.

Regulators are also concerned about the growing concentration of big tech in the AI market, which could lead to monopolies, and are taking initial steps to address this issue. The news of the UK's Competition and Markets Authority (CMA) investigating Alphabet's connections with AI company Anthropic reflects this concern.

Meanwhile, companies across various industries are making progress in implementing AI technologies, investing millions to enhance their services and products, and improve consumer experiences.

Here’s a roundup of the week’s news.

J.P. Morgan has rolled out its own AI-based chatbot, LLM Suite, which performs analytical tasks.

Around 50,000 employees are using the bank’s bespoke model to enhance their productivity. The bank has started integrating generative AI, informing staff that its version of OpenAI’s ChatGPT can handle analytical and research functions, assist with writing, idea generation, and document summarisation. This implementation is one of the largest examples of deploying LLMs on Wall Street. Morgan Stanley is also working with OpenAI, having introduced its own AI assistant for wealth management advisors. J.P. Morgan plans to spend around $17 billion on technology this year and has already hired 2,000 experts in AI, machine learning, and data analytics.

Read more

The new UK government has announced ambitious plans to harness AI to drive economic growth

A new AI Opportunities Unit will be established to design an AI Opportunities Action Plan to accelerate the use of AI in improving people's lives by enhancing services and developing new products. Additionally, the government has outlined plans to focus regulation on general-purpose AI models (such as ChatGPT), rather than on AI as a whole, to avoid stifling innovation. The AI Bill is expected to receive its first reading this year, with market consultations to begin soon. Furthermore, the new government has frozen funding for some previously announced Conservative projects, amounting to £1.3 billion, citing the need to address billion-pound unfunded commitments.

Read more

Cohere Raises $500 Million to Outpace Competitors

Cohere, a generative AI startup founded by former Google researchers, has raised $500 million in new investment from companies including Cisco, AMD, and Fujitsu. The funding round also saw participation from Canadian pension fund PSP Investments and Export Development Canada (EDC), valuing the Toronto-based company at $5.5 billion, according to Bloomberg. This valuation is more than double the startup's worth in June 2023, when it raised $270 million from Inovia Capital and others, bringing the total funds raised to date to $970 million.

Read more

UK Competition Watchdog Investigates Alphabet's Ties with AI Company Anthropic

The UK Competition and Markets Authority (CMA) has confirmed an investigation into the partnership between Alphabet, Google's parent company, and AI firm Anthropic, due to concerns about its impact on competition in the AI market. Alphabet has invested over $2 billion in Anthropic across several funding rounds, a common practice in the tech sector. However, this approach has attracted the regulator's attention. Earlier this year, the CMA released a report highlighting risks to transparent competition in the AI sector, noting an "interconnected network" of over 90 partnerships and investments by major tech companies such as Alphabet, Amazon, Apple, and Microsoft. Amazon and Alphabet have made significant investments in Anthropic, while Microsoft is the largest investor in ChatGPT's developer, OpenAI. OpenAI is also working with Apple to integrate ChatGPT into future generations of iPhones.

Read more

The World Sees the First Version of Apple Intelligence

Apple has unveiled the first version of Apple Intelligence, a suite of AI features designed to enhance Siri, automatically generate emails and images, and organise notifications. The new software has been released in beta for iOS 18.1 developers and is also available for iPad and Mac. Developers can test it now, but access to the service that allows more complex requests to Apple's servers requires joining a waitlist. Investors hope that the deep integration of AI into Apple's operating system will trigger a major wave of upgrades in the coming years, especially since the system will only run on the iPhone 15 Pro, iPhone 15 Pro Max, and newer models.

Read more

The European AI Office has Begun Consultations with Market Stakeholders

These consultations relate to the recently enacted EU AI Act, a high-level framework that requires secondary (Level 2) regulation. The focus will be on "trustworthy" general-purpose AI models, such as ChatGPT. The consultation period will run until 10 September. According to the AI Office, the consultations give all interested parties an opportunity to share their views on the issues covered by the first Code of Practice, which will set out detailed rules for providers of general-purpose AI models. Academics, independent experts, industry representatives, public organisations, rights holders, and government bodies are invited to participate in the dialogue.

Read more

Argentina to Use AI for 'Predicting Future Crimes', but Experts Worry About Civil Rights

This week, President Javier Milei established an AI unit tasked with security which, according to lawmakers, will use "machine learning algorithms to analyse historical crime data with the aim of predicting future crimes." The initiative is expected to deploy facial recognition software to identify "wanted persons," monitor social media, and analyse real-time footage from surveillance cameras to detect suspicious activity. Amnesty International has expressed concern that the move could infringe on human rights.

Read more

A Third of Generative AI Projects Will Be Abandoned by the End of 2025

Gartner predicts that at least 30% of generative AI projects will be abandoned after the proof-of-concept stage by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs, or unclear business value. Developing a generative AI model requires investments of $5-20 million. Unfortunately, there is no one-size-fits-all approach to generative AI, and costs are not as predictable as with other technologies, according to Gartner. The firm's findings also suggest that justifying generative AI investment demands a higher tolerance for indirect, future returns than for immediate ROI.

Read more

ChatGPT Is Funnier Than Humans

The University of Southern California (USC) conducted a study in which participants were asked to vote on jokes, some created by humans and some by ChatGPT. The respondents did not know which jokes were which, but 75% of the votes favoured the AI-generated jokes. As study author Drew Gorenz (USC) explains: “ChatGPT cannot feel emotions, yet it tells new jokes better than the average person. This suggests that emotions are not necessary to tell a really good joke.” DailyMail.com ran a global test: they asked ChatGPT to create five jokes and selected five popular jokes from Reddit. The original article presents two lists and challenges readers to guess which are human-made and which are generated by AI. The answers are provided at the end of the article. Why not have a go at joking with ChatGPT and see what you come up with?

Read more

Microsoft allocates impressive budgets to AI, while profits remain a distant prospect

According to the Wall Street Journal, the tech giant disclosed that in the quarter ending in June it spent an astounding $19 billion in cash capital expenditures on property and equipment, equivalent to what it used to spend in an entire year just five years ago.

Unsurprisingly, the majority of this $19 billion was AI-related, with about half spent on building and leasing data centres. Big tech companies are actively investing to capitalise on the current hype around generative AI, and costs are escalating. However, when the returns on these investments will materialise remains uncertain.

Read more

Insightful papers I have observed this week:
