AI 2024 - Smarter but more Expensive
"AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning, and planning."
These are the findings of Stanford University's Institute for Human-Centered AI (HAI) in the just-released 2024 edition of its annual AI Index. After a year of fast and furious development in generative AI, the industry is at a crossroads. The technology is poised to deliver unprecedented productivity growth but may be hamstrung both by limits of the technology itself and by guardrails on its usage. AI is a powerful tool, but it is nowhere near the point at which it can seamlessly take on the bulk of work.
There's no question that AI has become smarter and more powerful over the past 12 months. At the same time, the costs of building and maintaining large language models (LLMs) have increased astronomically. In addition, the industry still lacks standards for responsible AI best practices.
LLMs Growing, but Crazy $$$!
The number of new large language models released worldwide in 2023 doubled over the previous year, the report states. "Two-thirds were open-source, but the highest-performing models came from industry players with closed systems." Gemini Ultra became the first LLM to reach human-level performance on the Massive Multitask Language Understanding (MMLU) benchmark, and performance on the benchmark has improved by 15 percentage points since last year. LLMs have also grown far more expensive, the HAI authors observe: "For example, OpenAI's GPT-4 used an estimated $78 million worth of compute to train, while Google's Gemini Ultra cost $191 million for compute," they estimate.
Gen AI Investment Up, Up & Up!
At the same time, generative AI investment skyrocketed over the past 12 months. "Funding for generative AI surged by a factor of eight from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds."
Transparency
Those working to design, build, and implement AI systems need to be more open about their methods, the report also suggests. "AI developers score low on transparency," the co-authors write, "especially regarding the disclosure of training data and methodologies. This lack of openness hinders efforts to further understand the robustness and safety of AI systems."
Responsible AI
Responsible AI remains an open and incomplete effort. "Robust and standardized evaluations for LLM responsibility are seriously lacking," the HAI authors report. There is "a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models."
IP & Copyright Issues
Another issue surfacing over the past 12 months is intellectual property and copyright violations, as generative AI synthesizes existing information from many sources. "Multiple researchers have shown that the generative outputs of popular LLMs may contain copyrighted material, such as excerpts from The New York Times or scenes from movies," the HAI researchers point out. "Whether such output constitutes copyright violations is becoming a central legal question."
McKinsey has also released its survey on Gen AI use, which shows adoption growing, especially in sales, marketing, and professional services. Analytical AI has delivered significant cost benefits in service operations. The most recognized and most frequently experienced risk of Gen AI use is "inaccuracy."