Is AI a product or a feature?


Currently, there are two main trends in the AI market: AI as a product and AI as a feature.

AI as a product mainly comes from startups like OpenAI, Anthropic, Ideogram, or Runway. You create an account and usually pay a monthly subscription fee to use these tools. Often there is free trial access with usage limits, fewer features, and other restrictions.

AI as a feature, on the other hand, mainly comes from established companies like Google, Apple, or Adobe. They integrate it into their existing products. Currently, this is often included at no extra cost. After all, you are already paying for the use of these products, for example, with a subscription, through the purchase price of the device, or with your data.

Additionally, there is a third variant that sits somewhere in between: AI as a service. As an analysis of the revenues of OpenAI and Anthropic has shown, this can already be significant today. The keyword here is API (Application Programming Interface): application developers can build on the models of OpenAI and others instead of training their own.

I ultimately see today’s AI models mostly as a feature. I believe that the big tech companies are currently right: people want to use AI where it is useful.

Example: AI as a writing coach. Of course, I can copy an email into the ChatGPT app, ask the assistant for feedback, and copy the result back. However, it is much easier if that is a function of my mail application and is available there with one click.

It works similarly with Adobe’s new AI features: The image and video model Firefly is directly integrated into the corresponding programs. If I were to use a service like Runway for this, I would first have to upload my clip there, edit it, download it again, and add it to my project – cumbersome!

This does not mean, however, that it will always remain this way. The more powerful the AI models become, the more reasons there are to view and use them as standalone products.


T O O L S

ChatGPT Search shows promise but falls short as Google alternative

ChatGPT Search, the AI-powered search engine developed by OpenAI, offers an intriguing glimpse into the future of web searching. However, after testing the product for a day, Maxwell Zeff found that it struggles with the short, navigational queries that make up the bulk of searches on Google, often providing unreliable or irrelevant results.

While ChatGPT Search excels at answering longer, research-oriented questions by scraping multiple websites and presenting concise answers, it falls short in replacing Google for everyday web navigation. Zeff suggests that this limitation may be due to ChatGPT’s reliance on Microsoft Bing and the inherent challenges large language models face when dealing with short prompts. Despite these shortcomings, both OpenAI and competitor Perplexity are working to improve their AI search products to better handle short queries and potentially rival Google in the future.

Box introduces new AI studio and enterprise application builder

Box has unveiled two major AI-focused products: Box AI Studio for creating custom AI agents and Box Apps for building no-code enterprise applications. CEO Aaron Levie announced these tools at the BoxWorks event as part of the company’s expansion from file sharing into intelligent content management, VentureBeat reports. Box AI Studio, built on partnerships with Anthropic, Google, and OpenAI, allows enterprises to develop AI agents with custom instructions for specific business scenarios. The new Box Apps platform enables organizations to create business applications without extensive development, utilizing structured data and metadata from Box content. Both features will be available in a new Enterprise Advanced subscription tier, marking what Levie calls the company’s biggest product enhancement to date.

Google releases AI-powered video creation app for work

Google has announced the general availability of Google Vids, a new AI-powered video creation app for work, to select Google Workspace editions. According to the company’s announcement, Vids is designed to help teams in customer service, learning and development, project ops, and marketing create engaging videos more easily. The app utilizes generative AI capabilities to suggest scripts, scenes, and media elements based on user prompts. It also features real-time collaboration, sharing options, and a user-friendly interface similar to other Google Workspace apps. Source: The Verge

More tools in brief



N E W S

OpenAI plans January launch of “Operator” AI agent

OpenAI is preparing to launch a new AI agent called “Operator” that can perform automated tasks like coding and travel booking on behalf of users, according to reporting by Shirin Ghaffary and Rachel Metz for Bloomberg. The company plans to release the tool in January 2025 as both a research preview and through their developer API, based on information from anonymous sources familiar with the matter. This development is part of a broader industry trend toward AI agents, with competitors Anthropic, Microsoft, and Google working on similar tools. The general-purpose software will primarily operate through web browsers to execute various tasks with minimal user supervision. OpenAI CEO Sam Altman previously indicated this direction on Reddit, suggesting that agents would represent the next major breakthrough in AI technology.

OpenAI defeats copyright lawsuit over AI training data, for now

The U.S. District Court for the Southern District of New York has dismissed a copyright lawsuit brought by online news outlets Raw Story Media and AlterNet Media against artificial intelligence company OpenAI. The plaintiffs alleged that OpenAI violated copyrights by using scraped news content in its training data without preserving copyright management information (CMI) as required under Section 1202(b) of the Digital Millennium Copyright Act (DMCA).

Judge Colleen McMahon granted OpenAI’s motion to dismiss the case, finding that the plaintiffs failed to demonstrate a concrete, actual injury resulting from OpenAI’s actions. The judge noted that the evolving nature of large language model interfaces and the synthesis of information by generative AI make it difficult to prove direct infringement of specific works.

The dismissal highlights the challenges courts face in applying traditional copyright law to generative AI and the uncertainties surrounding Section 1202(b) of the DMCA. While the ruling is a win for OpenAI, it also raises questions about how content creators can ensure proper credit and prevent unauthorized use of their work in AI training datasets. The case may be refiled, but significant obstacles remain for the plaintiffs to prove harm.

Sources: Reuters, VentureBeat

More news in brief



B A C K G R O U N D

Workforce survey shows slowing AI adoption and cooling excitement

A new global survey by Slack reveals that while 99% of executives plan to invest in AI next year, adoption rates among desk workers have stalled in some countries like France and the U.S. The Workforce Index also found that excitement around AI has dropped 6 percentage points globally in the past three months, driven by significant decreases in the U.S., France, Japan and the U.K.

According to Christina Janzer, head of Slack’s Workforce Lab, nearly half of desk workers would be uncomfortable admitting AI use to their manager, citing reasons like “feeling like using AI is cheating” and fear of appearing less competent or lazy. The survey also highlighted a disconnect between workers’ hopes for AI to enable more meaningful work and their expectations that it may actually lead to increased workloads. Additionally, 61% of desk workers have spent less than five hours learning how to use AI.

Source: Axios

AI expert warns of limits to current AI approaches

Gary Marcus, a prominent AI expert, argues that pure scaling of AI systems without fundamental architectural changes is reaching a point of diminishing returns. He cites recent comments from venture capitalist Marc Andreessen and editor Amir Efrati confirming that improvements in large language models (LLMs) are slowing down, despite increasing computational resources. Marcus warns that the current AI bubble, based on the assumption that LLMs will lead to artificial general intelligence (AGI), may burst as the economic realities become clear.

OpenAI and others exploring new strategies to overcome AI improvement slowdown

OpenAI is reportedly developing new strategies to deal with a slowdown in AI model improvements. According to The Information, OpenAI employees testing the company’s next flagship model, code-named Orion, found less improvement compared to the jump from GPT-3 to GPT-4, suggesting the rate of progress is diminishing. In response, OpenAI has formed a foundations team to investigate ways to continue enhancing models despite the dwindling supply of new training data.

New framework helps companies measure AI investment returns

A comprehensive new approach to measuring returns on generative AI investments has emerged, addressing widespread challenges in quantifying AI’s business impact. According to a KPMG survey cited by VentureBeat’s James Thomason, while 78% of C-suite leaders express confidence in generative AI’s ROI, most companies struggle to measure its actual value.

The article presents a detailed 12-step framework for evaluating AI initiatives, covering everything from strategic alignment to stakeholder communication. The framework, developed through expert consultations across multiple industries, balances traditional financial metrics with qualitative benefits like improved decision-making and customer experience.

A case study of fintech company Drip Capital demonstrates practical implementation, showing how the firm achieved 70% productivity increases through strategic AI deployment and careful measurement of both tangible and intangible returns.

More background in brief



G L O S S A R Y

LLM Router

An LLM Router (Large Language Model Router) is a system that automatically directs incoming queries to the most appropriate language model.

Similar to a traffic control system, the router determines which of the available AI models can solve a specific task most efficiently. This selection is based on various criteria such as the type of query, required expertise, costs, or processing speed.

For example, a simple text correction might be directed to a smaller, faster model, while a complex analysis would be routed to a more powerful but potentially slower model.

LLM Routers are particularly important in enterprise environments where multiple AI models operate in parallel and resources need to be used optimally. They help reduce costs and improve response quality by ensuring that each query is handled by the most suitable model.

An LLM Router can be thought of as an intelligent telephone operator who forwards incoming calls not randomly but purposefully to the appropriate experts.
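The routing idea described above can be sketched in a few lines of code. This is a minimal illustration, not a real product’s logic: the model names, cost figures, capability scores, and the keyword-based complexity heuristic are all assumptions made for the example.

```python
# Minimal LLM router sketch: pick the cheapest model that is
# capable enough for the estimated complexity of a query.
# All model names and numbers below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # hypothetical pricing
    capability: int            # rough quality score, higher = stronger

MODELS = [
    Model("small-fast-model", cost_per_1k_tokens=0.05, capability=1),
    Model("large-capable-model", cost_per_1k_tokens=0.60, capability=3),
]

# Keywords that hint at an analytical task (a crude, assumed heuristic).
COMPLEX_HINTS = ("analyze", "compare", "explain why", "implications")

def estimate_complexity(query: str) -> int:
    """Long queries or analytical keywords suggest a stronger model is needed."""
    if len(query.split()) > 50:
        return 3
    if any(hint in query.lower() for hint in COMPLEX_HINTS):
        return 3
    return 1

def route(query: str) -> Model:
    """Return the cheapest model whose capability meets the estimated need."""
    need = estimate_complexity(query)
    candidates = [m for m in MODELS if m.capability >= need]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

# A simple text correction goes to the small model; an analysis to the large one.
print(route("Fix the typos in this sentence.").name)
print(route("Analyze the causes of the merger's failure.").name)
```

Production routers replace the keyword heuristic with something more robust, such as a small classifier model, but the cost-versus-capability trade-off shown here is the core of the idea.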

