When ChatGPT does not give good answers ...
... then try an alternative!
Here is a concrete example: I wanted to use an AI assistant as a programming tool. Years ago, I tried to learn about iOS development. The main driver was an idea for a fairly simple Apple Watch app. I diligently studied Treehouse for a few weeks and thought I was on the right track - until I tried to adapt one of the seemingly simple code examples to my liking and just couldn't get it right.
In the end, I gave up because it wasn't that important to me.
The trigger for a new attempt was the release of the AI assistant Codestral by the French company Mistral. Without further ado, I installed it on my Mac via LM Studio and started.
I actually made good progress and was thrilled. But then I hit a dead end, because Codestral's knowledge was either out of date or its suggestions were too vague to help me.
I tried other assistants. These included GPT-4 Turbo, Google Gemini 1.5 Pro, and finally Anthropic's Claude 3 Opus.
A logic problem in the structure of my watch app seemed to trip up every assistant - until Claude 3 solved it effortlessly. Then I could continue.
I have had similar experiences with text. As I mentioned in a previous issue, I use the browser-based TypingMind to combine several assistants under one roof, which makes experimenting very easy. For example, I have found that Gemini 1.5 Flash sometimes writes better than Gemini 1.5 Pro - even though the "Flash" version is primarily designed for speed, while the "Pro" version is supposed to be the more powerful one.
In the end, it always depends on the individual case.
By the way, it is worth experimenting with the different variants within ChatGPT as well! The following are currently available: GPT-3.5, GPT-4 Turbo and the brand new GPT-4o.
The newest tool is not always the best, though. GPT-4o, for example, has been criticized for not following prompts as closely as GPT-4 and for sometimes giving unnecessarily long responses. Its strength, on the other hand, lies in handling input in a variety of media formats.
In short, it's worth experimenting with. What doesn't work in one place might work like magic in another.
P.S.: In the very first issue of the Smart Content Report I gave general tips for better results with ChatGPT. German only – sorry. Google Translate does a decent job.
T O O L S
Apple shows its approach to AI
After Google and Microsoft showed how they are integrating the new generation of AI into their products and services, it was Apple's turn. One thing quickly became clear: Apple wants its AI capabilities to be not only useful and easy to use, but also to protect users' privacy.
The "Apple Intelligence" feature set consists of three layers:
Features of Apple Intelligence include proofreading text, summarizing emails, and even a simple image generator. Siri will also be able to perform tasks across applications.
All of this will initially be available in US English only, starting in the fall. You will need a fairly recent Apple device: at least an iPhone 15 Pro, or an iPad or Mac with an M-series chip. Last but not least, the features will be added gradually. Apple is being very cautious here. Given Google's and Microsoft's missteps with their AI offerings (see below), this seems appropriate.
However, we cannot yet judge how well Apple Intelligence will work in practice. I would be surprised if everything went smoothly ...
Interestingly, all of these features will be available for free, including the ChatGPT integration. Personally, I would have expected Apple Intelligence to be part of the "Apple One" subscriptions. Instead, the company seems to be counting on the new AI features to boost hardware sales.
Other tools in brief
Kling, a new AI video generator from the Chinese company Kuaishou Technology, impresses with quite realistic videos in high resolution that can be created via text input. Although Kling is currently only available with a Chinese phone number, its open beta approach puts pressure on other vendors: OpenAI, for example, continues to keep its video AI Sora under wraps.
Perplexity has introduced a new feature called "Pages," which uses AI to automatically create reports on a topic. Pages creates a custom website for which it researches information and writes the text. However, the feature has been criticized for copying content from news sites such as Forbes, CNBC and Bloomberg without giving sufficient credit, Forbes writes.
The new streaming platform "Showrunner" from studio Fable allows users to create their own series episodes with the help of AI by generating scenes with short text instructions and then combining them into episodes. The goal is ambitious: Showrunner aims to become the Netflix for AI-generated content, allowing viewers to actively participate in the production.
The You.com search engine allows users to create their own personalized AI assistants using top language models such as GPT-4 or Llama 3. This allows users to tailor the AI to their individual needs.
Anthropic is extending its AI chatbot Claude with a feature for building custom solutions such as email assistants or shopping bots. With a little programming knowledge, Claude can apparently be connected to arbitrary programming interfaces (APIs) to create versatile AI assistants that can, for example, provide personalized product recommendations, answer customer queries, or analyze visual data. A minimal sketch of the idea follows below.
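Here is a minimal sketch of Claude's tool-use mechanism, assuming the anthropic Python SDK; the tool name and its schema are invented for illustration and would be backed by your own API in practice.

```python
# Hedged sketch: let Claude call a hypothetical product-recommendation API.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "get_product_recommendations",  # hypothetical backend endpoint
    "description": "Return product suggestions for a given customer.",
    "input_schema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "max_results": {"type": "integer"},
        },
        "required": ["customer_id"],
    },
}]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What should customer 42 buy next?"}],
)

# If Claude decides to use the tool, the response contains a tool_use block
# with the arguments; your code then calls the real API and returns the
# result to Claude in a follow-up message.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```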
Adobe's new AEP AI Assistant is designed to help organizations make better use of customer data and optimize their marketing efforts. It can answer questions about customer segments, provide real-time insights, and create personalized marketing materials on demand, including copy, design, and images.
Nvidia introduces NIM (Nvidia Inference Microservices), a new technology that supposedly enables developers to deliver AI applications in minutes instead of weeks. These microservices provide optimized models as containers that can be deployed in clouds, data centers, or on workstations. The goal is to enable organizations to build generative AI applications for co-piloting, chatbots, and more quickly and easily. More than 40 microservices support different AI models, including Meta Llama 3, Google Gemma, and Microsoft Phi-3.
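Since NIM containers expose an OpenAI-compatible HTTP endpoint, an existing OpenAI client can simply be pointed at a locally deployed microservice. The base URL, port, and model name below are assumptions for illustration, not verified values.

```python
# Hedged sketch: querying a locally running NIM container via its
# OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed address of the container
    api_key="not-needed-locally",         # local deployments skip auth
)

completion = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # model id as served by the container
    messages=[{"role": "user", "content": "Summarize NIM in one sentence."}],
)
print(completion.choices[0].message.content)
```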
Nvidia also introduced "Blackwell," a new hardware architecture for AI applications. It is designed to significantly increase the efficiency of data centers and accelerate the development of new AI solutions. New systems based on Blackwell will be offered by a number of vendors, including Asus, Gigabyte and Supermicro, and are expected to be suitable for both cloud applications and local data centers.
Intel introduced the "Lunar Lake" processor, a completely redesigned laptop chip for AI applications that eliminates the need for separate memory modules and instead integrates up to 32GB of LPDDR5X memory directly into the chip package. Thanks to numerous optimizations, Intel promises up to 14 percent more CPU performance at the same clock speed, 50 percent more graphics performance and up to 60 percent longer battery life compared to its predecessor, Meteor Lake. Intel is also introducing the new Xeon 6 processor, which is designed to modernize data centers and handle enterprise AI workloads.
The new $70 Raspberry Pi AI Kit allows users to implement AI applications for visual tasks with the microcomputer. The kit enables real-time object recognition, segmentation, and pose estimation with low power consumption, which could make the Raspberry Pi 5 suitable for many local AI applications.
Scale AI has published rankings for large language models for the first time, evaluating their performance in specific application areas such as coding, instruction following, mathematics, and multilingualism. OpenAI's GPT models ranked first in three of the four areas (coding, multilingualism, instruction following), while Anthropic's Claude 3 Opus took first place in the remaining one (math).
ElevenLabs, a speech synthesis AI startup, unveiled "Sound Effects," a new product that allows users to create audio samples simply by entering text. Developed in partnership with Shutterstock, the tool is designed to help creative professionals in fields as diverse as film, television, video games, and social media enhance their content with interesting and appropriate soundscapes without the need for costly recording or licensing.
Stability AI releases "Stable Audio Open," a new AI model for the free creation of sounds and pieces of music up to 47 seconds in length. However, due to the training material, it is limited to English descriptions and Western music styles.
Camb AI's Mars5 AI model enables realistic voice cloning in over 140 languages, combining voice cloning and text-to-speech in a single platform. The company claims that Mars5 is particularly good at capturing emotional nuances in speech, making it ideal for applications such as sports commentary and movies.
Writer's new "AI Studio" platform enables companies to develop their own AI applications using the "Writer Framework" and without programming knowledge. Writer is positioning itself as a one-stop shop for companies looking to harness the potential of generative AI in their day-to-day operations.
French AI startup Mistral is offering new options for customizing its generative models, including a new software development kit (SDK). This allows developers and enterprises to optimize the models for specific use cases.
Microsoft is discontinuing GPT Builder for consumers just three months after its launch. The company does not believe it is economically viable to continue developing the service. Users have until July 14 to back up their data, after which all GPTs and associated information will be deleted.
N E W S
Microsoft backtracks on controversial "Recall" feature
Microsoft has scaled back its plans for a new Windows feature called Recall. Originally, Recall was supposed to be automatically activated and record user activity in the background to make it usable for AI applications. After heavy criticism from privacy advocates and security researchers, Recall is now optional. New security measures have also been introduced that make it more difficult to access the stored data and require authentication. Despite these changes, experts warn of potential risks, as the data collected could still be accessible to attackers. See reports from Wired, The Verge, and VentureBeat.
Google responds to criticism of AI Overviews
Google admits to errors in its new AI Overviews, which the company says are based on misinterpretations of search queries and web sources. Google is working on improvements, but insists that the accuracy of AI Overviews is comparable to that of traditional search results. In my opinion, this casts a rather bad light on the search results ...
At the same time, Google seems to be delivering AI Overviews much less frequently, according to an analysis by BrightEdge. When they were still in the testing phase, they appeared in 84 percent of search queries, but now only in 15 percent.
More news in brief
Adobe has responded to criticism of its terms of service by assuring customers that their files will not be used to train AI, nor will their property rights be violated.
Meta apparently wants to use content from Instagram and Facebook to train its AI models. European users will be able to opt out of having their data used for AI training, but they will have to actively request it. Meta's wording in the email is also vague, suggesting that users must provide a reason for their objection. The UK's Information Commissioner's Office and the European Commission have now launched investigations into the legality of this practice.
French startup Mistral AI raises $640 million in a funding round, valuing the company at nearly $6 billion. Mistral intends to use the fresh capital to expand its resources for developing open source AI models and to keep pace with OpenAI and Anthropic in the global competition.
This year's Tribeca Festival will be the first to feature short films created entirely with OpenAI's upcoming video AI, Sora. Five filmmakers will present their Sora-generated works and discuss the new technology at the festival.
A new survey from McKinsey shows that 65% of companies are already using Generative AI on a regular basis, and the majority expect the technology to lead to significant changes in their industries. However, 44% of respondents have also experienced negative consequences from using Gen AI, such as inaccurate results or cybersecurity issues, which is why experts stress that responsible AI practices must be considered from the outset.
California wants to implement strict safety rules for artificial intelligence, including a "kill switch" and reporting requirements for developers. Critics warn of barriers to innovation, excessive bureaucracy, and negative impacts on open source models that could weaken the state's technology sector.
Researchers are making great strides in developing 1-bit LLMs, which can achieve performance similar to that of their full-precision counterparts while using significantly less memory and power. Because they need less processing power and energy, this development could open the door to more complex AI applications on everyday devices such as smartphones. A toy example of the underlying idea follows below.
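A toy sketch of the idea behind such models, in the spirit of the "BitNet b1.58" approach: weights are scaled by their mean absolute value and rounded to the three values -1, 0, and +1. This is an illustration only, not the full training method.

```python
# Toy ternary ("1.58-bit") weight quantization: log2(3) ≈ 1.58 bits per
# weight instead of 32-bit floats.
import numpy as np

def quantize_ternary(w: np.ndarray):
    scale = np.abs(w).mean() + 1e-8          # per-tensor scaling factor
    q = np.clip(np.round(w / scale), -1, 1)  # every weight becomes -1, 0 or +1
    return q, scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_ternary(w)
print(q)          # ternary weight matrix
print(q * scale)  # coarse approximation of the original weights
```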
G O O D   R E A D S
OpenAI insiders warn of dangerous corporate culture
In an open letter, current and former OpenAI employees warn of a "reckless" development in the race for supremacy in artificial intelligence. They call for sweeping changes in the AI industry, including more transparency and better protection for whistleblowers. The signatories criticize a culture of secrecy and profit at any cost at OpenAI. According to the letter, the company puts safety concerns aside in order to be the first to create Artificial General Intelligence (AGI).
Behind the scenes at Anthropic (Claude): safety as a priority
In an in-depth article, Time Magazine looks at AI company Anthropic and its efforts to make safety a top priority. Co-founder and CEO Dario Amodei made a conscious decision not to release the chatbot Claude early in order to avoid potential risks. Anthropic's mission is to determine empirically which risks actually exist by building and researching powerful AI systems. Through voluntary self-restraint and a "responsible scaling policy," the company hopes to spark an industry-wide race to safety.
The inglorious story of an AI-powered news portal
BNN Breaking, a news site with millions of readers, an international team of journalists, and a partnership with Microsoft, turned out to be a source of errors and misinformation. Former employees say the site relied heavily on AI-generated content, which was often published without sufficient verification. This led to complaints from people who were misidentified in articles, as well as news outlets whose content was taken without permission. The case highlights the growing threat that AI-generated "fake news" poses to legitimate journalism.
New sources of better AI training data
Large Language Models (LLMs) are no longer trained solely on data from the Internet. In the past, LLMs drew on the Internet's vast data pool, but this approach has reached its limits. To advance LLMs, companies like OpenAI are turning to new kinds of data: targeted annotation and filtering improve the quality of existing data, human feedback optimizes the behavior of models, and proprietary data such as chat histories and internal documents expands the scope of training. The biggest change, however, comes from two new approaches: synthetic data generated by LLMs themselves, and human-created data sets that specifically fill gaps left by Internet training. These make it possible to teach skills that previously could not be trained adequately. In this way, LLMs not only become "Internet simulators" but also learn to master more complex tasks that are underrepresented on the Internet. A small sketch of the synthetic-data idea follows below.
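As a hedged sketch of the synthetic-data approach: a strong model generates question/answer pairs that could later serve as fine-tuning material. The model name and prompt are illustrative assumptions, using the OpenAI Python SDK.

```python
# Generate simple synthetic Q/A training examples with an LLM.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

seed_topics = ["unit conversions", "regex basics", "SQL joins"]
pairs = []
for topic in seed_topics:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[{
            "role": "user",
            "content": f"Write one short question about {topic} "
                       "and a correct answer. Format: Q: ... A: ...",
        }],
    )
    pairs.append(response.choices[0].message.content)

# 'pairs' now holds synthetic Q/A examples that, after quality filtering,
# could fill gaps in Internet-scraped training data.
print(pairs[0])
```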
C U R I O U S   F I N D
Visual weather station
This craft project not only shows the current weather at a location, but also generates a matching illustration. It is based on the "Sol Mate" GPT and uses an e-paper display.
G L O S S A R Y
Adapter
Imagine you have a universal toolbox that contains many different tools, but is too large and cumbersome for certain tasks. To perform certain tasks efficiently, you can use small, specialized attachments called adapters. These adapters attach to the general-purpose tool and extend its function. For example, you can attach a screwdriver adapter to a drill to tighten screws, or a sanding head adapter to smooth surfaces. In artificial intelligence, adapters work in a similar way: they are small, specialized modules that dock onto a large, general AI model and optimize it for specific tasks. This allows large, complex models to be used efficiently for specific tasks without having to retrain the entire model.
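A concrete instance of this idea from machine learning practice are LoRA adapters, which attach small trainable matrices to a frozen base model. Below is a minimal sketch using Hugging Face's peft library; the model and hyperparameters are chosen for illustration, not as a recommendation.

```python
# Attach a LoRA adapter to a small frozen language model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small demo model

config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's attention projection layer
    lora_dropout=0.05,
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
# Only the tiny adapter (well under 1% of all parameters) is trained;
# the base model stays frozen and can be reused with other adapters.
```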