AI Summit '18: What you missed.

In a hurry? Find the TL;DR at the bottom.

I had the pleasure of attending one of the largest conferences centered on artificial intelligence and machine learning this past week. In classic Bay Area fashion, AI Summit selected one of the most gorgeous conference venues I’ve ever set foot in – the Palace of Fine Arts Theater. If the venue was the icing on the cake, then the dedicated stream on quantum computing was the cherry on top.


Upon entering the massive conference center, it was hard not to set eyes upon the largest booth in the exhibition zone. IBM Watson, with its giant logo glowing shades of deep blue and magenta on a charcoal background, made its presence known. Only a few feet away, Microsoft stood tall as the next-largest exhibitor on the floor. The second immediate observation: I was most certainly at an AI conference. I mean, I don’t think I’ve ever seen the two-letter string appear this often in my field of view. And this is by no means a bad thing.


Venue aside, there was a great lineup of keynotes and presentations from market leaders and practitioners alike. Looking at the agenda, I could tell the organizers were trying to cater to the whole gamut: C-suite executives, product managers, data scientists, software engineers, and everyone in between. I did, however, feel that this was both a boon and a weakness. While it may have attracted the largest possible audience, at times the content danced only at the thousand-foot view rather than swinging down to the ten-foot view. The biggest benefit, though, was that it facilitated interesting conversations between executives and practitioners that typically would not have happened.

Okay, that’s neat. What about the meat?

This is not to say the conference lacked substance. On the contrary, many very talented individuals and companies described their success stories of implementing AI in real-world applications.

There were some cutting-edge use cases, such as leveraging machine learning to assist astronomers with detecting anomalies in our vast universe. This is a classic example of machine learning – specifically deep learning – taking advantage of big data. Previously, these astronomers had to perform the time-draining task of inspecting each image, one by one, searching for anything out of the ordinary. Their computer vision model can now detect these anomalies – and even classify the type of anomaly – in a fraction of a second, freeing the astronomers to spend their time on the more valuable task of analysis. Some may be surprised to learn that deep learning techniques have already surpassed human capabilities in several areas; most of the deep learning sessions illuminated this fact for everyone to appreciate. Rovers on neighboring planets were a hot topic as well.
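
The anomaly-detection idea is straightforward to sketch. The toy example below is my own illustration, not the astronomers’ actual pipeline – real systems use deep CNNs on raw telescope imagery – but the principle is the same: learn what “normal” looks like, then flag anything far outside it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for telescope image patches (hypothetical data): "normal"
# patches are low-level noise; anomalous ones contain an injected transient.
normal = rng.normal(loc=0.1, scale=0.02, size=(200, 64))
anomalous = normal[:5].copy()
anomalous[:, 10:14] += 1.0  # inject a bright spot into a few patches

# Baseline model: a mean "normal" template plus a distance threshold
# learned from the spread of the normal patches.
template = normal.mean(axis=0)
train_dist = np.linalg.norm(normal - template, axis=1)
threshold = train_dist.mean() + 4 * train_dist.std()

def is_anomaly(patch):
    """Flag a patch whose distance from the normal template is extreme."""
    return np.linalg.norm(patch - template) > threshold

flags = [is_anomaly(p) for p in anomalous]
```

Each patch is scored in microseconds, which is exactly why this kind of automation frees astronomers from eyeballing images one at a time.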


Another classic ML use case is fraud detection. Capital One took it one step further and presented the results of incorporating NLP into their existing fraud detection process. The speaker explained that fraud is a roughly $30B problem and a major expense for financial firms like Capital One. He went on to describe how their fraud detection system texts the customer when suspicious activity is detected. However, they found this worked only slightly more than half the time, and most of the failures were a result of the binary response system: customers could only respond “YES” or “NO” to the question of whether they were responsible for the transaction. Unfortunately, human nature produces typos and other responses that the system simply ignored. Leveraging NLP, they built a more robust system in which customers could even amend their statement shortly after their initial response was sent, e.g., “Wait! I just talked to George, he did make the purchase actually! Silly George.” This trained model significantly improved their fraud prevention accuracy.
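
To make the binary-response failure concrete, here is a deliberately crude sketch of the kind of free-text intent classification NLP enables. This is a hypothetical lexicon-based toy of my own – Capital One’s actual model was not disclosed and is certainly far more sophisticated:

```python
import re

# Hypothetical intent lexicons for replies to a fraud-alert text.
AFFIRM = {"yes", "yep", "yeah", "correct", "mine"}
DENY = {"no", "nope", "nah", "not", "didnt", "fraud"}

def classify_reply(text: str) -> str:
    """Classify a free-text customer reply instead of demanding YES/NO."""
    # Strip apostrophes so "didn't" tokenizes as "didnt", then keep letter runs.
    tokens = set(re.findall(r"[a-z]+", text.lower().replace("'", "")))
    affirm_hits = len(tokens & AFFIRM)
    deny_hits = len(tokens & DENY)
    if affirm_hits > deny_hits:
        return "customer_made_purchase"
    if deny_hits > affirm_hits:
        return "likely_fraud"
    return "route_to_agent"  # ambiguous replies go to a human, not the void

result = classify_reply("Yeah, that charge was mine")
```

The point is the fallback: typos and off-script replies get routed somewhere useful instead of being silently dropped, which is where the accuracy gains came from.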

And as if the roster of topics and companies weren’t diverse enough already, the organizers persuaded an ex-particle-physicist turned AI researcher at Ubisoft to join the fray. His talk centered on the (mostly rhetorical) question of whether AI will disrupt the video game industry. Because video games are already entirely digital, they open many opportunities to leverage machine learning. Content creation of 3D models and other art assets can be automated. Rigging and animating assets – training humanoid figures to walk more realistically or sit in a chair more fluidly, for example – can now be done at a fraction of the cost and time. In-game AI bots can communicate with players in eerily realistic voices, saving developers time by leveraging text-to-speech for most dialogue instead of capturing hours and hours of voice lines. Of course, this isn’t just for AAA games such as Battlefield; it’s also being leveraged to assist those with disabilities and impairments. On top of this, the game engines used to develop these titles make a great playground for training machine learning models for real-world tasks – for example, using a racing-game simulation to train a model to drive, which can then be deployed in real life.

Naturally, my background in nanotechnology has conjured a fascination with quantum computing. I spent several hours in the quantum computing stream; one talk was so popular that it consumed all available seats, leaving me and several other curious attendees congregating around the room’s entrance to eavesdrop. Many leaders in the industry presented their findings from the past year or so – names such as Google, IBM, Microsoft, D-Wave, and Lockheed Martin populated the stage. Topics such as significant improvements in job scheduling and planning (e.g., air traffic, or planning a Mars rover’s day) and Boltzmann sampling (as it pertains to deep learning) engaged the audience. The Summit even brought in a panel of distinguished individuals to discuss the current and future state of quantum computing and how businesses might prepare. Overall, it was well worth attending, and I’m grateful they included this topic at the conference.

TL;DR

Here are the few short points that I took home from the conference:

  • Every company struggles – to some degree – with data. This would also explain the strong presence of startups focused on helping users aggregate, cleanse, and prepare datasets for use with machine learning.
  • The quality of the data is just as important as – if not more important than – the ML algorithm. For example, consider adding features that enrich the dataset in order to mitigate bias; e.g., the term “modesty” carries different connotations in different parts of the world, which can lead to sentiment bias.
  • While the recent focus has been on big data for training DNNs, there’s a movement toward improving machine learning techniques on smaller datasets. This will improve efficiency and open doors for applications where the dataset’s sample size has previously prohibited the use of ML. Plus, I doubt anyone really enjoys labeling thousands of objects.
  • Machine learning is not quite as difficult to leverage as most would think, but it’s also not quite as capable as most were led to believe. I’m not saying it isn’t difficult, nor am I saying it isn’t amazingly useful; it just seems hype has crept in and manipulated perceptions. Many tools, libraries, and resources now exist to help practitioners tackle problems with machine learning. Yet thinking the solution to world peace is just a DNN away is going to lead to disappointment.
  • Successful implementation of AI requires traditional project planning. Speakers presented numerous accounts of ML projects failing due to not adequately structuring the problem and goals. Don’t expect much when you’re just training models for the sake of training.
  • The improvements in deep learning have produced a surge in AI inventions and investments. However, expecting to arrive at AGI by riding solely on deep learning’s back will leave us waiting an eternity. I expect that in the next several years we’ll see another major advancement in AI research, which will further accelerate the adoption rate as new applications are discovered and implemented.
  • There are several target dates for “quantum supremacy” being tossed around, with the most popular falling between 2022 and 2025. Those appear to be aggressive dates; researchers suggested it could be closer to 2030.
  • Another common thread between the quantum computing presentations was the emphasis on a hybrid approach with classical computing. Quantum processors likely won’t surpass traditional CPUs in clock frequency or in traditional computational tasks; quantum computing’s mantra is to work smarter, not harder. The largest benefits will be realized when we’re able to assign sub-tasks to each based upon the type of workload.

Overall, AI Summit '18 in San Francisco was a success. There are a few nit-picky things that can be, and likely will be, improved upon for next year. For more information on this series of conferences, you can visit their site.
