Insider's Edit: ChatGPT Can Now Analyze Uploaded Documents
Here are this week's top news stories on AI Business. To get all of the latest news and insights, subscribe to the email newsletter.
ChatGPT Can Now Summarize, Analyze Uploaded Documents
OpenAI has given ChatGPT a major upgrade: the chatbot can now work with PDFs and perform data analysis automatically.
The startup said subscribers to ChatGPT Plus, OpenAI’s $20-a-month subscription service, and ChatGPT Enterprise can upload PDFs, data files or "any document you want" for analysis. Users can then interact with ChatGPT and ask questions about the document.
For example, you can upload a research paper on machine learning and ask ChatGPT to summarize it in simple terms. Or input sales reports to spot potential trends. Or even use the vision feature to take a picture of an object and use that image to influence DALL-E 3 generations.
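For developers who want something comparable outside the chat interface, a rough equivalent can be stitched together with the OpenAI API. The sketch below is an illustration only, not the mechanism behind ChatGPT's new upload feature; the file name, model choice and prompt are assumptions.

```python
# Illustrative sketch only: ChatGPT's upload feature lives in the chat UI,
# but a similar "summarize this PDF" flow can be built with the API.
# Assumptions: a local file named paper.pdf, the gpt-4 model, and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Extract plain text from the PDF (simplified; ignores images and layout).
reader = PdfReader("paper.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Ask the model to summarize the extracted text in simple terms.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You summarize research papers in plain language."},
        {"role": "user", "content": f"Summarize this paper in simple terms:\n\n{text[:12000]}"},
    ],
)
print(response.choices[0].message.content)
```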
AI Safety Summit: 28 Nations and EU Sign the ‘Bletchley Declaration’
The U.K. kicked off its AI Safety Summit at a rural English country estate steeped in history, where heads of state, AI leaders and other experts from across the globe congregated to set an international framework for developing safe AI.
Mere hours after the event began, the U.K. government announced that attendees had signed the Bletchley Declaration on AI Safety, named after its venue Bletchley Park, the birthplace of modern computing and the site of the British code-breaking operation in World War II where computer science pioneer Alan Turing worked.
The agreement is a list of pledges to ensure AI is "designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible."
Also announced were an AI Safety Institute to be created by the U.S. government and millions of dollars in AI grant funding. King Charles III also delivered a speech.
In a Rare Outburst, Meta’s LeCun Blasts OpenAI, Turing Awardees
Meta Chief AI Scientist Yann LeCun is blasting his fellow AI luminaries for calling for regulation over fears that AI would kill all humanity, just as the U.K. hosted its first global AI summit this week.
“I have made lots of arguments that the doomsday scenarios you are so afraid of are preposterous,” he posted on X (formerly Twitter). “If powerful AI systems are driven by objectives (which include guardrails) they will be safe and controllable because (we) set those guardrails and objectives.”
LeCun noted that current Auto-Regressive LLMs are not driven by objectives, so “let's not extrapolate from their current weaknesses.”
Then he accused the leaders of the three hottest AI companies of colluding to keep AI models closed: OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei.
Next, LeCun turned his attention to his fellow Turing awardees and MIT professor Max Tegmark.
“You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D,” he continued, referring to Geoff Hinton and Yoshua Bengio, who are towering figures in AI.