Allstate's Vision for Autonomous and Digital Safety Aligns with the Mission of the Stanford University Center for AI Safety


It was fun and an honor to join some of the world's leading thinkers on Artificial Intelligence at the Stanford University Center for AI Safety. I also enjoyed being part of the speaking panel in the afternoon. Thank you, Erika Strandberg (Executive Director, Center for AI Safety), for a great event! The topics we covered were so rich and nascent that, not surprisingly, the time just flew by. The list of people who were either on stage with me or presented during the day-long event reads like a short who's who of AI!

As I reflected on some of the great points made during the discussion on stage and by other speakers throughout the day, I wanted to spend some time over the weekend synthesizing my own thoughts on how some of these hard problems around AI safety should be framed, not just within the context of our own team's initiatives but also across the many other industries where AI is being rolled out with a high sense of urgency.

So here it is:

Some background first: almost everyone knows Allstate as a Fortune 100 company with a formidable, decades-long track record of innovating and rolling out great risk-assessment (automotive, home) and consumer digital-safety (InfoArmor) products.

But few people know that the company is well on its way to transforming itself into a data company. We at Allstate are now aggressively investing in, and taking on, bigger and harder data-driven challenges to serve our customers across the spectrum. The two biggest domains for our team are Autonomous Vehicle Safety and Digital/Data Privacy.

Digital Safety and Data Privacy of Our Customers: How do we help our customers secure, and more importantly quantify the value of, their own data? We care deeply about securing our customers' digital world and acting on their behalf to extract that value. Our data footprints are mushrooming across ecosystems, platforms, and operating systems, and we currently have no good way to understand or make sense of what happens once the data is "out there". AI-driven interactions are a major reason these consumer data trails are exploding every second, even when we are not actively engaging with our devices and systems.

Our ambition is to serve as data custodians for our customers, helping them navigate their digital universes before those universes morph into an unwieldy, tangled mess that conjures images of ticking time bombs strewn all over.

Autonomous Vehicle Safety: We have all been part of, or witnessed, the AV technology that goes into building important, sometimes glamorous prototypes and demos (I am guilty of contributing to a few over the years). These prototypes and early systems showcase powerful autonomous vehicle features at forums such as CES. But when it comes to rolling out these advanced Level 3/Level 4 vehicles at scale, some of the hardest technical questions are yet to be solved.

For instance, are we building high-fidelity, "risk-aware" digital maps at scale? Are we picking the right AI-based, real-time data pipelines, computing architectures, and platforms? Two relevant hot areas of focus for engineering teams in the AV space right now are annotating training data and vehicle perception. In terms of complexity, the choice between computer-vision-based and sensor-based implementations is an NP-complete problem space in itself. Most of us insiders can feel that the hyper-excitement has died down, for now. Now we need to gear up to climb the mountain of taking these shiny new assets from 2/3-sigma dependability to 5/6-sigma platforms.
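
To make the "risk-aware map" idea a bit more concrete, here is a minimal Python sketch. Everything in it (the RoadSegment fields, the blending weights, the segment names) is hypothetical and invented for illustration; it is one way a map layer could carry a risk signal that a planner aggregates along a route, not an Allstate design:

```python
from dataclasses import dataclass

@dataclass
class RoadSegment:
    """One edge in a hypothetical risk-aware HD-map layer."""
    segment_id: str
    length_m: float
    base_risk: float           # historical signal (e.g., claims), 0..1
    dynamic_risk: float = 0.0  # live feed: weather, traffic, construction

    def risk_score(self) -> float:
        # Illustrative blend; a real system would calibrate these
        # weights against actual loss data rather than hard-code them.
        return min(1.0, 0.7 * self.base_risk + 0.3 * self.dynamic_risk)

def route_risk(segments: list[RoadSegment]) -> float:
    """Length-weighted average risk along a candidate route."""
    total = sum(s.length_m for s in segments)
    return sum(s.risk_score() * s.length_m for s in segments) / total

if __name__ == "__main__":
    route = [
        RoadSegment("I-94:mile-12", 1600.0, base_risk=0.32, dynamic_risk=0.5),
        RoadSegment("I-94:mile-13", 1600.0, base_risk=0.18),
    ]
    print(f"route risk: {route_risk(route):.2f}")
```

The hard part at scale is not this arithmetic, of course; it is keeping the historical and live risk signals fresh and trustworthy across millions of segments.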

Under-researched areas of AI: Looking across the whole AI landscape, there are also many under-debated, under-invested areas. Are we making AI more transparent, more accountable, and ultimately fair for all participants in these ecosystems (not just in the driving context)? Who will be held accountable when things go wrong, or, as they often do, fall into grey areas? The open (maybe even dirty) secret that many people outside the "deep end of the computer science ocean" don't know is that most artificial neural network and DNN architectures are not explainable or "auditable" in the classic sense. So, in my humble opinion, these systems are at best accountable for real-world problems that are like 100-meter sprints, but not for problems that resemble marathons.
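
To ground the "not auditable" claim a little, here is a toy sketch in pure NumPy, with random weights standing in for a trained model, of roughly the best generic post-hoc tooling we have today: an input-gradient "saliency". It only tells you which input features the output is locally sensitive to; it gives an auditor no trace from requirement to decision:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer network with random weights, standing in for a
# trained model; real DNNs have millions of such parameters.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

def input_gradient(x):
    """Gradient of the scalar output w.r.t. the input features.

    This "saliency" says which features the output is locally
    sensitive to -- not *why* the model decided anything.
    """
    _, h = forward(x)
    dh = (1.0 - h**2) * W2[:, 0]  # output weight times tanh derivative
    return dh @ W1.T

x = rng.normal(size=4)
print("output:", forward(x)[0])
print("feature saliency:", input_gradient(x))
```

Scale this up to millions of parameters and a fused sensor stack, and the audit problem only gets harder.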

Those who, like me, have spent time toiling in the trenches to engineer secure, production-grade systems in the consumer, enterprise, and automotive domains will empathize when I say: it is the stuff of nightmares to hold complex AI-driven systems to standards such as bidirectional traceability, to get them ASPICE-certified or certified against other domain-specific standards, or to have them withstand regulatory scrutiny. AI has enormous potential, but it also has a long way to go.

The AI-based technology stack (even in enterprise architectures) is so complex, opaque, and intertwined with distributed components that it is not easy to assign ownership in the traditional sense. For that reason alone (and there are many others), regulators and legal minds have grounds to advocate against rolling out high-stakes, model-driven dynamic products in the majority of industries.

All of us who are driving AI adoption for so many exciting applications are also ultimately responsible for, and should be advocating for, more investment in the following areas: learning and control for AI safety, and verification, automatic diagnosis, and repair of systems with AI components.
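
One concrete pattern from the verification side of that list is runtime assurance: wrap the learned component in a simple envelope that a formal tool (or a human reviewer) can actually reason about, so the unverifiable network never has the last word on a safety-critical output. A toy sketch, with every bound and name invented purely for illustration:

```python
def safe_steering(model_cmd_deg: float, speed_mps: float,
                  max_rate_deg: float = 5.0) -> float:
    """Clamp an AI policy's steering command to a speed-dependent bound.

    Illustrative only: the bound shrinks as speed grows, and because
    the guard is a few lines of arithmetic, it can be verified even
    when the model that produced model_cmd_deg cannot be.
    """
    bound = max_rate_deg * min(1.0, 10.0 / max(speed_mps, 1.0))
    return max(-bound, min(bound, model_cmd_deg))

# The network might ask for a 9-degree correction at highway speed;
# the envelope only lets a bounded fraction of that through.
print(safe_steering(9.0, speed_mps=30.0))  # -> ~1.67
```

The envelope is deliberately a few lines of arithmetic: that is what makes it traceable and certifiable even when the model behind it is not.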

It has been a fun ride so far, but more exciting times are around the corner! We look forward to collaborating with Prof. Mykel, Dorsa, Clark, and others at Stanford, along with our industry peers, in the years ahead!

-Sunil Chintakindi
