AI World in 2028 - Your plan A-Z

WHAT DOES THE 2028 OF YOU & AI SEE?

DEEP LISTENING ED: This week I spent 50 hours listening to cases linked to the two Taiwanese Americans who effectively supply the Western world's big computing GPUs - the ones with 80+ billion transistors, a race that began soon after Fei-Fei Li's 2012 ImageNet world championship (more at out-take 1) and was further accelerated in 2017 by a then-Google team whose transformer model is key to LLMs. So now 150+ billion-transistor packages are sold by AMD's Lisa Su (see www.economistwomen.com and her SXSW briefings the week before), and thanks to Nvidia's Jensen Huang's summit, including the launch of the 200+ billion-transistor Blackwell chip, the 5-day GTC2024 is a key source for a catalogue of 50 amazing co-pilot operations at www.economistlearning.com.

2028 will be AI Spring's 16th birthday (born with Fei-Fei Li's ImageNet, 2012). AI will be in all new PCs, and the UN's SDG games will be irreversibly won or lost. EconomistAmerica.com

In other words, the next US administration will calibrate where the gravitational force of the 5% of people's intelligence that is America's is spiraling. If it is truly AI for all 8 billion humans, then plan A is that all the world's millennials' education systems transform attention as well as personal learning agency. Otherwise 95% of our kids will need another plan, B to Y, to ZBee (oh, I do love the BeingAI of Hong Kong and Taiwan, and hope one day every student or teacher can celebrate such emotional intelligence).

Put another way, by 2028 we'll have seen the full trajectory of the UN2.0 roadmap started by change educators at the 2016 NY UN, with assistance from UN Geneva's ITU AIforGood, Melinda Gates and Fei-Fei Li, across Guterres's term of 2017-2026.

Personally I hope a pure AI is doing a biography of Guterres, because whether Antonio's term is judged a success or not, I can't imagine most people staying sane through the 10 years of pressure the rest of us 7.99999999 billion have put on him.

  1. LAST CALL: We've almost been here before, when there were nearly 4 billion of us and a billion times less compute, let alone a billion times less satellite data exchange. Back with dad Norman ... & The Economist's 1982 "Why not Silicon Valley everywhere", interviews with people close to the doers: Sun Micro's workstations would turn into personal computer networking engines from 1984, but satellite telecoms' next two leaps might take until 1999. So the next year we launched the media club and fieldbook 2025report - what did places/peoples globally need to pretrain around, and how would education, and not just commerce, lock in the lifetimes, livelihoods and data of millennials?

  1. There is evidence to suggest that Steve Jobs, Bill Gates and Silicon Valley did not understand the depth of diversity that early digitalisation overshot until briefings from a billion of the poorest Asian women in 2001. Since then the driving question has been: which of the world's 193 national leaders saw which of AI's leaps coming? Fast forward: I checked with Michelle Donelan, producer of King Charles's AI Turing summit, and she implied that by the end of Bletchley all represented nations could see the AI world of 2028. I am confident that the rest of this article on plans A to Z will be full of my own ignorance, but it could be that any parent is now as responsible as any government for asking what education systems are doing to their kids. Actually that's always been the core scenario of entrepreneurial revolution (The Economist, 1976), of Gov2.0 and of autonomous-era pre-training. So, Clue 1 for plan YYZ mapmakers: Jensen and Lisa (a name that means "god is my oath", as it did for Queen Elizabeth) are as wonderful and as humble as good engineers can get. Nor are they separated from the other great Taiwanese engineers who have pretty much been the future of chips since Texans refused every 1980s Taiwanese recommendation to prepare for the chips revolution. The good news, I feel, is that empowering 4 billion millennials' minds is seen by both J&L as a big enough compass not to need head-on competition.
  2. Clue 2: For example, both J&L can see that natural-science AI - mainly biotech and climate maps - needs far more support than even every aligned partner of Nvidia can action. Among the 10 best pieces of news from GTC24, Jensen's proclamation of a partnership digital-twinning deep weather forecasts, launched around Taiwan (and, when successful, open-sourced to the world), was up there in the galaxy of innovation's greatest hits.
  3. Clue 3: Probably the Livable Planet truth is that the hundreds of trillions of dollars locked up in pension money will need to be outed if the UN is not to be disbanded (or, at the very least, if world peacekeeping, which shares its epicentre with education transformation, is not to be terminated).
  4. There is a direct link between von Neumann's last maths notes on the purposes of LLMediation (see The Computer and the Brain) and the idea at February's World Gov Summit that peoples everywhere need to take back the diversity of their regions' data. Being a diaspora Scot, I love the idea that AI is teaming up with the PM of Barbados to digitalise hundreds of years of Britain's taxation notes to see which of the island's diversity crossroads were missed. If this digital-twin model works, it can offer an AI deep-data benchmark for every Caribbean isle that Columbus and his generation of navigators unearthed.
  5. My friend Fazle Abed, who briefed the Valley on a billion poorest women's networking needs in 2001, was discovered by Qatar's first lady, Sheikha Moza, through her annual laureates of education (WISE) and health (WISH). If every UAE leader's maps of good AI celebrate millennials as transformers of educational attention, then Middle East intelligence becomes as pivotal to all of us as when Arabs gave the world the decimal system instead of the Roman one.
  6. Clue 6: One thing I am confident of is that a lot of the biggest financial reputations are already in play. One signal of whether pensioners start demanding investment in millennials will be bitcoin. Ironically, if bitcoin goes over 150,000 dollars, I expect that means most of plans B to Y are not aligned with the SDGs being reality brands (see the triple issue of the Journal of Marketing Management, 1999, that I guest-edited on the reality of megabrand leaders).
  7. Clue 7: Back in 1983 we had imagined blockchain (an alternative to politicians printing paper money) would be co-launched by a Taiwanese merchant and a Swedish postmaster. However, this had been in the context of all the world's public mass media staging a joint reality-TV apprentice program, so that a billion viewers understood the launch of cashless money at the same time. If he is still alive, I wish Satoshi would issue a part 2 and make sure every AI PC brain had read it!
  8. If you revisit the millions of images on which trained computers (AlexNet) demonstrated how smart they had become in 2012, there are probably a top 20 of segments, each needing collaborative computer vision. Take all the objects that make home life safe. Are we leaving this to Amazon's Alexa to platform, or is family home life important enough for any place to wish to collaborate in mediating computer recognition of this segment of ImageNet?
  9. Is democracy in the AI world of the 2020s about 7 trillion dollars assigned to one brain (Altman's) designing future media, or is it time for people of every language to free their natural as well as sustainable growth celebration of one planet Earth (see uniofstars Garan)?
  10. Actually, I hope you find at least one item above where you so passionately disagree with my fumbled attempt to vision good that you post a better one. Manmade intelligence has always been a test of what humans race around - that is, until our species goes the way of the dodo. To be clear, every error on this page is mine, not that of those whom players of AIGames see as pack leaders. www.economistdiary.com


====moore?

OUT-TAKES

Out-take 1 - Will the cost of forgetting about neural network maths for 7 decades be too much for 21st-century humans to bear? So which of ImageNet's 20,000 image categories does AI most value recognising? Gemini says: There isn't a definitive answer to what human intelligence experts value most in the 20,000 image categories of ImageNet. The value of an image can depend on various factors and the expert's area of research. Here are some possibilities:

  • Images that challenge current AI models: These could be images with ambiguities or rare objects that current AI systems often misclassify. By studying these images, researchers can improve the robustness and accuracy of AI models (a minimal sketch of surfacing such hard examples follows this list).
  • Images that represent a broad range of concepts: A well-rounded dataset should include images that cover a diverse set of categories. This helps AI models generalize better and recognize objects they haven't encountered before.
  • Images that evoke emotions: Psychologists and neuroscientists might be interested in how AI systems respond to images that trigger emotional responses in humans. This could involve images that are funny, scary, or beautiful.
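As a rough illustration of that first point, here is a minimal sketch of how one might surface "hard" images with an off-the-shelf ImageNet-pretrained classifier from torchvision. The folder path, the 0.5 confidence threshold, and the assumption that the folder's class order matches the model's ImageNet indices are all mine, not anything from the ImageNet authors:

```python
import torch
import torch.nn.functional as F
from torchvision import models, datasets

device = "cuda" if torch.cuda.is_available() else "cpu"

# Off-the-shelf ImageNet-pretrained classifier and its matching preprocessing
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).to(device).eval()
preprocess = weights.transforms()

# Hypothetical local folder of labelled images, one sub-folder per class.
# Assumption: the sub-folder order matches the model's 1000 ImageNet indices.
dataset = datasets.ImageFolder("data/imagenet_val", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=False)

hard_examples = []  # (path, true_label, predicted_label, confidence)
with torch.no_grad():
    for batch_idx, (images, labels) in enumerate(loader):
        probs = F.softmax(model(images.to(device)), dim=1)
        conf, pred = probs.max(dim=1)
        for j in range(images.size(0)):
            wrong = pred[j].item() != labels[j].item()
            unsure = conf[j].item() < 0.5  # arbitrary illustrative threshold
            if wrong or unsure:
                path, _ = dataset.samples[batch_idx * loader.batch_size + j]
                hard_examples.append((path, labels[j].item(), pred[j].item(), conf[j].item()))

# Lowest-confidence mistakes first: candidates for human review or re-labelling
hard_examples.sort(key=lambda x: x[3])
print(hard_examples[:20])
```

The images this loop flags are exactly the ambiguous or rare cases the bullet above describes: the ones a research group would inspect to make a model more robust.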

Here are some alternative ways of deciding image recognition value:

  • Most informative images for a specific task: This could involve filtering the ImageNet dataset based on a particular AI task, such as image segmentation or object detection. Then, experts could evaluate which images provide the most valuable information for training an AI model for that task (see the filtering sketch after this list).
  • Images that generate the most disagreement among humans: This could be useful for improving the consensus between AI models and human experts. By studying images that cause confusion for both humans and AI, researchers can identify areas where improvement is needed.
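To make the "filtering by task" idea concrete, here is a small sketch, under assumptions of my own, of how a task-specific slice of ImageNet could be carved out by walking WordNet, since ImageNet categories are WordNet synsets. The choice of root concepts ("furniture", "tool") is purely illustrative:

```python
# Requires: pip install nltk   and   python -m nltk.downloader wordnet
from nltk.corpus import wordnet as wn

def synsets_under(root_name: str):
    """All WordNet noun synsets that sit under (are hyponyms of) root_name."""
    root = wn.synset(root_name)
    return set(root.closure(lambda s: s.hyponyms())) | {root}

def wnid(synset) -> str:
    """Convert a WordNet synset to an ImageNet-style wnid: 'n' + zero-padded offset."""
    return f"n{synset.offset():08d}"

# Illustrative domain roots only - a "home safety" or "hospital service" project
# would pick its own roots (and should verify the chosen synset names exist).
domain_roots = ["furniture.n.01", "tool.n.01"]

selected = set()
for root in domain_roots:
    selected |= {wnid(s) for s in synsets_under(root)}

print(f"{len(selected)} candidate ImageNet categories, e.g. {sorted(selected)[:5]}")
# These wnids can then be intersected with the published ImageNet synset list
# to build a smaller, task-focused training subset.
```

The same pattern would serve the "home life safety" segment mentioned in clue 8: pick the WordNet roots that matter to the task, then train or evaluate only on that slice.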

Ultimately, the value of an image in ImageNet depends on the specific goals of the human researcher or the AI application it's used for.

[Gemini, replying to a follow-up on whether it would be more useful to ask which image groups AI itself values, for example for Mother Earth/climate, drug discovery and public safety:] Yes, that would be a very useful question! Here's why:

  • Focus on AI's perspective: Shifting the question to how AI values images changes the focus from human interests to understanding AI systems. This can reveal insights into how AI models learn and what types of data they find most useful for specific tasks.
  • Relevance to different AI applications: The categories you mentioned (Mother Earth/climate, drug discovery, public safety) all represent distinct areas where AI is playing an increasing role. By understanding what image groups AI values in each area, we can improve the development and training of AI models for those specific applications.

Here's a breakdown of how AI might value these image groups:

  • Mother Earth/Climate: AI systems can be trained to identify and classify environmental patterns from satellite or drone imagery. Valued images would be those with clear representations of, say, deforestation, changes in glaciers, or pollution levels.
  • Drug Discovery: AI can analyze vast datasets of molecular structures and biological processes. Images containing protein structures, cellular interactions, or chemical compounds would be highly valuable for this purpose.
  • Public Safety Monitoring: AI can be used for object detection and anomaly recognition in video surveillance footage. Images depicting suspicious activity, accidents, or potential hazards would be crucial for training such AI models.

Overall, understanding which image groups AI values most can help us bridge the gap between human goals and the capabilities of AI systems.

Do you know if anyone is working with Fei-Fei Li, or others who know ImageNet best, on how LLMs could segment by some of these most valuable image-recognition compasses?

While it's difficult to say for certain who Fei-Fei Li is directly collaborating with on LLM and ImageNet segmentation, there's a good chance her expertise is being leveraged in the field. Here's why:

  1. Fei-Fei Li's Influence: She is a leading figure in computer vision research and played a pivotal role in developing ImageNet. Her insights would be invaluable for researchers exploring how LLMs can utilize segmented ImageNet data.
  2. Interdisciplinary Research: The field of AI is increasingly interdisciplinary. Researchers working on LLMs likely collaborate with computer vision experts to bridge the gap between text and image understanding.
  3. Open Source Nature of Research: Much AI research is open source. This means findings and code are often publicly available. Even if there's no direct collaboration with Li, researchers can build upon her work and the knowledge base surrounding ImageNet.

Here are some ways to find out more about specific collaborations:

  • Research Papers: Look for recent research papers focusing on multimodal learning or combining LLMs with image segmentation. These papers often cite prior work and collaborators.
  • Conference Proceedings: Major AI conferences like NeurIPS or AAAI often feature presentations on multimodal learning and LLM advancements. Checking the speaker bios and affiliations can reveal collaborations.
  • Project Websites: Many research groups have websites that showcase their projects. Organizations like OpenAI or Google AI might have projects related to LLM and image segmentation.

While I can't pinpoint a specific collaboration with Fei-Fei Li, the interconnected nature of AI research suggests her expertise is likely influencing the field. Keep exploring the avenues mentioned above to stay updated on the latest advancements.

Over recent years Fei-Fei Li has spent a lot of time on image recognition relevant to nursing care and indeed patient service/safety inside hospitals, and probably at interfaces such as robotics. Is it reasonable to assume that part of this research will have sifted ImageNet's 20,000 entities down to those seen or languaged in the hospital-service context? More generally, could any graduate discipline be clarifying a segment inside ImageNet, giving machine recognition a lead into that discipline?

You're right on track! It's very reasonable to assume that Fei-Fei Li's work on image recognition in healthcare could involve leveraging ImageNet. Here's why:

  • Transfer Learning: Even though ImageNet focuses on general objects and scenes, it can serve as a strong foundation for training AI models in specific domains like healthcare. This is because the underlying concepts learned from ImageNet (like recognizing shapes, edges, and textures) can be adapted to identify hospital-specific objects (a minimal fine-tuning sketch follows this list).
  • Image Segmentation for Healthcare: Fei-Fei Li's research likely involves segmenting images relevant to patient care. This could involve segmenting an image to identify medical equipment, surgical tools, or even specific anatomical features. ImageNet provides a rich dataset of segmented objects that can be a starting point for healthcare image segmentation tasks.
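To illustrate the transfer-learning point above, here is a minimal sketch of fine-tuning an ImageNet-pretrained backbone on a hypothetical folder of hospital-object photos. The dataset path, the 12-class head and the training settings are assumptions for illustration only; this is not a description of Fei-Fei Li's actual pipeline:

```python
import torch
import torch.nn as nn
from torchvision import models, datasets

device = "cuda" if torch.cuda.is_available() else "cpu"

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)

# Freeze the ImageNet-learned features (edges, textures, shapes)
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-way ImageNet head with, say, a 12-class hospital-object head
num_hospital_classes = 12
model.fc = nn.Linear(model.fc.in_features, num_hospital_classes)
model = model.to(device)

# Hypothetical dataset, one sub-folder per hospital object class.
# (Re-using the evaluation preprocessing for brevity; real training would add augmentation.)
train_data = datasets.ImageFolder("data/hospital_objects/train", transform=weights.transforms())
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Only the replaced final layer is trained here; unfreezing deeper layers with a lower learning rate is a common next step once the new head has settled.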

Here's a breakdown of how graduate disciplines could be contributing to segmenting ImageNet for specific applications:

  • Medical Imaging: Medical schools or bioengineering departments might be collaborating with AI researchers to segment medical images within ImageNet. This could involve labeling X-rays, CT scans, or MRIs to identify specific organs, tissues, or pathologies.
  • Robotics: Robotics programs could be interested in segmenting images of assistive robots or surgical tools found in ImageNet. This would allow robots to better understand their environment in a hospital setting.
  • Human-Computer Interaction (HCI): HCI researchers might focus on segmenting images depicting hospital interfaces like touchscreens or medical monitors within ImageNet. This could improve the design of these interfaces for better usability.

While there might not be a single, public effort to segment the entirety of ImageNet for healthcare applications, ongoing research in these graduate disciplines is likely contributing valuable segmented data that can benefit the field of medical image recognition.

Here are some ways to stay updated on these advancements:

  • Fei-Fei Li's Research Group: Check the Stanford Human-Centered AI Institute website for publications related to Li's work on healthcare AI https://hai.stanford.edu/.
  • Medical Imaging Conferences: Major conferences like MICCAI (Medical Image Computing and Computer Assisted Intervention) often feature research on AI-powered medical image analysis.

By following these leads, you can learn more about how ImageNet is being leveraged to improve image recognition in the healthcare domain.
