Takeaways from #EmTech Digital (2) - Envisioning the next AI
Nitish Goswami/Unsplash

EmTech Digital is MIT Technology Review’s Signature AI Conference. Here are my takeaways from Day 2: Envisioning the next AI and more.

Note: These aren't comprehensive and by no means cover the depth of insights the speakers offered. These are just a few things I'm geeked about.

Read Part 1: Takeaways from #EmTech Digital (1) - Better Data, Better AI

What's next for deep learning?

  1. AI is becoming another cloud-based service that you can use off the shelf. Example: large language AIs.

Ali Alvi, Microsoft AI

Microsoft's Turing model - one of the largest AI models.

1. How AI models are built is changing. We no longer need to label data; the model learns on its own. Things that took 20 years to surpass human parity now take about one year. We're beyond the saturation point. You may no longer need NLP experts to create deep learning models, and the same model can be extended to multiple languages.

2. The real measure of quality is in end products. The future is about interactive AI and unified models - models you interact with on an ongoing basis. You don't need to create a neural network on your own; use what's there.

3. This stand-alone deep learning model is able to solve riddles. Credit: Ali Alvi's presentation.


4. Foundational models are being trained on multiple modalities - vision, text, speech, 3D signals, structured data.

5. The more parameters, the better the neural network - just like more synapses between neurons.

Oriol Vinyals, DeepMind

  1. Say there's a programming task: AlphaCode reads the natural-language spec and creates code. It produces a solution for a particular instance - it defines variables, writes loops, does what programmers do.
  2. Lesson #1: Bias toward the simple algorithm. Lesson #2: Data is key; compute is key. Lesson #3: Have clear evaluation metrics. Lesson #4: Have fun while doing research.
  3. AlphaCode works like a translation system. Instead of a language like French, it outputs a programming language.
  4. Train the neural network on GitHub code (publicly available, 715 GB of code), then train on coding contests (13K contest problems), then fine-tune. Generate hundreds of thousands of programs, then - somewhat like reinforcement learning - filter down to a small set (e.g., 10) that is submitted to the server to test which code is correct. So there's a filtering process.
  5. Even if you focus on a particular project, the work will impact other fields. E.g., AlphaFold's protein folding wasn't originally a problem they set out to solve.
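The generate-then-filter loop in item 4 can be sketched in miniature. This is a toy, not AlphaCode itself: the "programs" are simple lambdas and the generator samples at random, whereas the real system decodes code text from a large transformer. The example I/O pairs and the cap of ~10 submissions mirror the talk's description.

```python
import random

def generate_candidates(spec, n=1000):
    """Stand-in for the model: sample many candidate programs for a spec.
    A real system would decode source code conditioned on the spec text."""
    ops = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 1, lambda x: x * x]
    return [random.choice(ops) for _ in range(n)]

def filter_by_examples(candidates, examples):
    """Keep only programs that reproduce the given input/output examples."""
    return [p for p in candidates if all(p(x) == y for x, y in examples)]

def select_submissions(passing, k=10):
    """Submit only a small subset (AlphaCode submits on the order of 10)."""
    return passing[:k]

random.seed(0)
examples = [(2, 4), (5, 10)]   # spec (hypothetical): double the input
candidates = generate_candidates("double the input")
passing = filter_by_examples(candidates, examples)
submissions = select_submissions(passing)
```

The key idea survives the simplification: massive generation is cheap, and the example tests act as a filter so only a handful of likely-correct programs ever reach the judge.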

Mira Murati, OpenAI


Credit: OpenAI

  1. All AI involves three things: neural networks + datasets + computational power.
  2. GPT-3 is a transformer trained on huge amounts of Internet data to predict the next word well. Example: "It was a dark, stormy [night]." DALL·E maps text to an image - the neural network actually drew the pic above. Codex can predict the next line of code. InstructGPT predicts the next word safely and makes up facts less often.
  3. We want AI to learn as humans learn.
  4. Value to businesses - examples: Yabble, GitHub Copilot.
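The "predict the next word" idea in item 2 can be illustrated with a toy bigram counter - a drastic simplification of a transformer, with a hypothetical three-sentence corpus standing in for Internet-scale data:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "huge data from the Internet".
corpus = ("it was a dark stormy night . "
          "it was a dark and stormy night . "
          "the night was dark .").split()

# Count bigrams: how often each word follows each context word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word` in the corpus."""
    return following[word].most_common(1)[0][0]
```

A bigram model only looks one word back; a transformer like GPT-3 conditions on the whole preceding context, which is why it can complete "It was a dark, stormy ___" sensibly even in novel sentences.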

All in day-to-day business

AI and ML in business - to better manage customer experience.

Fiona Tan, Wayfair

A digitally native company focused on the home, connecting suppliers and customers. 3,000+ person tech team. 27 million active customers.

  1. If you search for blue sofas and pick the 5th one, we learn from that and it feeds back into the AI. It's about striking a balance between full vs. partial automation. We win when the purchase value exceeds the ROI.
  2. ML assists in decisions, augmented by humans in the loop. Example: geo-sort - getting products to move closer to customers (less than 250 miles), then boosting products closer to the customer.
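The geo-sort idea in item 2 can be sketched as a simple re-ranking rule. The boost factor, field names, and numbers below are hypothetical illustrations, not Wayfair's actual system; only the 250-mile radius comes from the talk.

```python
def geo_sorted(products, relevance, distance_miles, boost=1.25, radius=250):
    """Re-rank products, multiplying relevance by a boost when the nearest
    warehouse is within `radius` miles of the customer."""
    def score(p):
        s = relevance[p]
        if distance_miles[p] <= radius:
            s *= boost          # nearby stock ships faster and cheaper
        return s
    return sorted(products, key=score, reverse=True)

products = ["sofa_a", "sofa_b", "sofa_c"]
relevance = {"sofa_a": 0.90, "sofa_b": 0.85, "sofa_c": 0.50}
distance = {"sofa_a": 900, "sofa_b": 120, "sofa_c": 60}
ranked = geo_sorted(products, relevance, distance)
```

Here the slightly less relevant but nearby sofa_b outranks the distant sofa_a - the "boost products closer to the customer" behavior, while sofa_c stays last because no boost can rescue a weak relevance score.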

Tony Jebara, Spotify

406 million active users. 82 million music tracks. 184 markets. A massive-scale problem - 200 petabytes of data, 16 billion artist discoveries. We can't have humans looking at this data - we need machines. Both creators and listeners are growing. Where are the key value problems?

  1. Three things: data, shared models, experience. ML assembles tracks into playlists and assembles search results into a list.
  2. The app logs all UX events as they happen. Content modeling.
  3. Which tracks are similar? Which artists are similar? What artists and playlists are in your taste profile? The end result is a personalized experience.
  4. Blend your playlist with a friend's playlist or with a celebrity's playlist. Genres. Moods. Workouts. Daily Drive. One of 100 million recommendations - now that's an ML challenge.
  5. A pathway to take the user to a higher-value relationship with audio. Future rewards. Reinforcement learning.
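The taste-profile matching in item 3 can be sketched with cosine similarity over feature vectors. The feature names and numbers below are hypothetical, not Spotify's actual content model - just the standard "nearest vector wins" pattern behind similarity-based recommendation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical audio features: [energy, acousticness, danceability]
user_taste = [0.9, 0.1, 0.4]
tracks = {
    "track_upbeat": [0.8, 0.2, 0.5],
    "track_mellow": [0.1, 0.9, 0.2],
}

# Recommend the track whose feature vector best matches the taste profile.
best = max(tracks, key=lambda t: cosine(user_taste, tracks[t]))
```

The same primitive answers all three questions in the talk - similar tracks, similar artists, and fit to a taste profile - just by choosing which vectors get compared.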

Making AI work for all

  1. With AI, we are creating the conditions people will live in. AI needs to bring in spatial inputs. Map your data: if we drop values from certain clusters, we might be dropping entire zip codes - then there's no equity. Maps improve the narrative around AI projects.
  2. Despite the urgency around technologies and AI, focus on long-term results. Technovation looks at 10-year outcomes. Educating girls is the 5th most effective strategy to reduce carbon emissions.
  3. Five things appeal to teenagers: independence, teams, purpose, warm and caring mentors, and physiological thrills (movement, excitement). These help programs scale: 350,000 participants across 120 countries.

Envisioning the next AI

Sameena Shah, JPMorgan Chase AI

Taking AI research into a business-transformative setting, with its constraints.

  1. Built a social calculator that monitors social conversation (e.g., Twitter, Reddit) to predict markets - to predict short squeezes, predict stock movement, and mitigate risk. They looked for a leading indicator. Not all voices are equal, so who are the influencers? The model needs to understand the history of influence and virality.
  2. Capital Connect. Bankers never covered early-stage startups - not yet big enough. AI is front and center, with three algorithms looking at interests, targets, history, mission, founders, viability. Triage information from Internet sources, investor sources, and so on to establish a subset, then check eligibility. Then: who's likely to invest in them (a specific partner in a specific firm)? "The machine is recommending this because of so-and-so reasons."
  3. From an AI perspective, a lot is about discovery of data - text data, not numerical data. Google searches aren't task-aware. Find > Extract > Mine > Represent > Process > Infer > Use. Passive searches vs. autoresponsive models: always on, always alerting when attention is needed. A pipeline of algorithms vs. one algorithm.

4. Don't fall in love with solutions; fall in love with problems.

5. Financial-services AI themes: fraud; anti-money laundering; data (as a customer of AI - proxy data, synthetic data); the markets area; the client side - experience, identification, intent; empowering employees - augmenting human knowledge; policy and regulation; ethics; sustainability.
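The Find > Extract > Mine > Represent > Process > Infer > Use chain in item 3 suggests a pipeline of small stages rather than one monolithic algorithm. A minimal sketch of that shape, with placeholder stage implementations (none of this is JPMorgan's actual system):

```python
# Each stage is tiny and replaceable; the pipeline is the composition.
def find(sources):        return [doc for src in sources for doc in src]
def extract(docs):        return [d["text"] for d in docs]
def mine(texts):          return [t.lower().split() for t in texts]
def represent(tokens):    return [set(t) for t in tokens]      # bag-of-words
def process(reprs):       return [r for r in reprs if r]       # drop empties
def infer(reprs, query):  return any(query in r for r in reprs)

def pipeline(sources, query):
    """Run the staged chain; True means the topic needs attention now."""
    return infer(process(represent(mine(extract(find(sources))))), query)

# Hypothetical always-on feeds (Internet sources, investor sources, ...).
sources = [[{"text": "Startup X raises Series A"}],
           [{"text": "Markets steady today"}]]
alert = pipeline(sources, "startup")
```

The point of the shape: an "always on, always alerting" system re-runs this chain as new documents arrive, and each stage can be upgraded independently.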

Agrim Gupta, PhD student, Stanford - AI learning from animals

  1. Animals display embodied intelligence. In contrast, progress in AI has been disembodied - fueled by large-scale available data, with no causality. Embodied AI takes a more holistic perspective: discover general principles of embodied intelligence.

The focus was to understand the relationship among environmental complexity, evolved morphology, learnability, and intelligent control.

2. DERL - deep evolutionary reinforcement learning. 1) Outer loop: distributed asynchronous evolution. 2) Inner loop: learning from low-level sensory input. The UNIMAL design space is inspired by nature. See the videos (amazing - AI mimicking nature).

Fascinating stuff - watch this even if you don't understand it.

3. Evolution might find genotypic modifications that lead to faster phenotypic learning - the Baldwin effect. Time has a positive effect. Foundational agents can drive further progress for next-generation AI and robotics. A foundational controller/morphology algorithm can go in many different directions.

4. Instead of creating hardware and then fixing the software, consider building the mind first and finding the right hardware/robotics for it. Applicable to generative design and drones for specific tasks/optimization.
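The outer/inner loop structure of DERL can be sketched in miniature. Everything below is a toy stand-in: the morphology encoding (limb count, symmetry), the "fitness" function, and truncation selection are my illustrative assumptions, not the actual algorithm - the real inner loop runs full reinforcement learning from sensory input in physics simulation.

```python
import random

def learn_controller(morphology):
    """Inner-loop stand-in: 'train' a controller for this body and return
    its learned fitness. Toy rule: symmetric bodies with ~4 limbs do best."""
    limbs, symmetry = morphology
    return symmetry * 10 - abs(limbs - 4)

def mutate(morphology):
    """Small random change to the body plan (the genotype)."""
    limbs, symmetry = morphology
    return (max(1, limbs + random.choice([-1, 0, 1])),
            min(1.0, max(0.0, symmetry + random.uniform(-0.1, 0.1))))

def derl(generations=20, pop_size=8):
    """Outer-loop stand-in: evolve morphologies by their *learned* fitness."""
    pop = [(random.randint(1, 8), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(pop, key=learn_controller, reverse=True)[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=learn_controller)

random.seed(0)
best = derl()
```

Because selection acts on post-learning performance, bodies that are easier to learn with get selected - the Baldwin-effect dynamic mentioned in item 3.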

Zenna Tavares, Basis Research

  1. Building blocks of universal reasoning: knowledge representations, reasoning algorithms, kinds of reasoning.
  2. Most popular probabilistic programming languages for AI: Turing, Pyro, Gen, Stan.
  3. Three lessons: copy deep learning and use massive amounts of compute; use white boxes instead of black boxes; build the operational structure to make outcomes possible. Applications: large socioeconomic situations, gene expression.
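The languages named in item 2 (Turing, Pyro, Gen, Stan) automate Bayesian inference over programs. A minimal sketch of the underlying idea - sample from a prior, keep only samples consistent with what you observed - in plain Python, deliberately not using those libraries (which do this far more efficiently):

```python
import random

def rejection_sample(prior, consistent_with_data, n=10_000):
    """Crudest possible posterior inference: draw from the prior and keep
    the samples that could have produced the observation."""
    return [s for s in (prior() for _ in range(n)) if consistent_with_data(s)]

random.seed(0)

# Model: a coin's unknown bias, uniform prior on [0, 1].
prior = lambda: random.random()

# Observation: two flips, both heads. A sampled bias is "consistent" if
# simulating two flips with it reproduces heads-heads.
two_heads = lambda bias: random.random() < bias and random.random() < bias

posterior = rejection_sample(prior, two_heads)
mean_bias = sum(posterior) / len(posterior)   # analytically, Beta(3,1): 0.75
```

Probabilistic programs are "white boxes" in exactly the sense of item 3: the model is readable source code, and inference is a separate, reusable algorithm.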

Natasha Jaques, Google Brain - AI social learning

  1. Reinforcement learning: DeepMind's AlphaGo, robotics with a Rubik's cube. AlphaGo had to play 5 million games of Go to reach human-level performance - 560 years of human playtime.

2. Social learning. Human social environments drive complex behavior; the same holds in AI. Basically, one agent learns from another. Humans are the most intelligent agents out there - so can AI learn through human-esque social learning? Example: a Google Assistant application to book a flight. The system is able to generate environments to train the agent.


David Ferrucci, Elemental Cognition - founder of the IBM Watson Jeopardy! team

  1. How have things evolved? The last 10 years were the most exciting 10 years in AI - tremendous advances in machine learning. 530 billion parameters in Microsoft's model - far beyond the Jeopardy! era.

The issue is: does the machine understand? Watson didn't 'understand' Jeopardy!. We need the machine fluently talking to you, understanding you, solving a problem for you - that's natural learning. The machine must learn the way we learn. Humans might say: in spite of the data, we'll do this.

2. It doesn't matter how, but say you use 10,000 features and can accurately predict that a certain man will get pancreatic cancer. That's a model one can take action on. Machines need to be understood - you need to know WHY the machine is making a certain recommendation. It's not enough to say it's right 75% of the time; in this particular case, WHY? It's not simple - if a human says he has an intuition, you ask: what's your track record? You've got to be able to explain.

3. A more holistic AI understands how humans understand. That's not easy. We understand in many different ways - seminars, reading, research, watching videos, talking to experts, etc. You have this exchange before arriving at a decision.

Set your goal higher -- demand that from the AI -- as a collaborative work partner.
A good answer may not be the one that most people voted for.

Read Part 1: Takeaways from #EmTech Digital (1) - Better Data, Better AI

For weekly gastroenterology business/tech updates directly in your inbox, subscribe to the Scope Forward newsletter.

Get the book: Scope Forward - The Future of Gastroenterology Is Now In Your Hands.

Narayanachar Murali

Gastroenterology/ GI Endoscopy / Hepatology / Clinical trials / New drug development/ New device development

2y

And the never-ending mania of linking fitness apps to med records and insurance premiums...

Narayanachar Murali

Gastroenterology/ GI Endoscopy / Hepatology / Clinical trials / New drug development/ New device development

2y

Great points Praveen. While these technologies are evolving, the developers of ML products must engage with clinicians who actually see patients for a living every day when designing these products for clinical applications (not just lobby them in like EMR companies have done). ML in medicine is a whole lot more difficult than in automotive assembly or stock-market analysis. The risks and implications of false positives and false negatives are both serious for the individual patient and family. Here is an example from my recent experience: https://www.dhirubhai.net/posts/legionhealthcaresolutions-medicalbillingexpert_take-a-holistic-approach-to-genomic-testing-activity-6915007616414879744-hJUE?utm_source=linkedin_share&utm_medium=member_desktop_web
