An Interview with Kathleen Walch and Ron Schmelzer of Cognilytica - An Analyst and Advisory Firm Focused Solely on Artificial Intelligence – Part 1
There's a lot of hype surrounding artificial intelligence in healthcare. But what do we even mean when we talk about AI? Health care is a very highly regulated space, but how can we even begin to regulate something that is continuously evolving? And what will the future hold? We know that predictions about the future have been notoriously wrong. Kathleen Walch and Ron Schmelzer are managing partners at Cognilytica, an analyst firm focused solely on artificial intelligence.
Joseph Anderson, MD: Kathleen and Ron, you two occupy a unique niche. You do analysis, research, publishing, and even podcasting exclusively on AI - across a variety of industries. Could you tell us a little bit about your practice, who your clients are, and what their concerns are?
Kathleen Walch: Ron and I are both managing partners and principal analysts at Cognilytica, which is an AI-focused research and advisory firm. About half of our business is public sector. We’re based in the DC region, so we work with many different agencies - mostly civilian agencies: the USPS, Department of Energy, Department of Education, Internal Revenue Service, and Department of the Treasury. The other half of our business is with the private sector. We focus on many different industries, ranging from retail banking, insurance, and finance to automotive and health care. We cover AI very broadly across these disciplines.
Ron Schmelzer: As an analyst firm, the primary thing that we do is research on the markets. We cover about six thousand vendors across the forty-some-odd reports that we do in a year. It keeps growing and growing. We’re five or six reports into it so far this year. We spend a lot of time trying to understand what our customers are doing with their actual implementation of AI. We look at best practices. We also run a training program called CPMAI, which is a methodology for doing AI and machine learning projects. It lasts three days. We also have a virtual version that we do over the course of a few weeks.
We actually don't do implementation. What differentiates us from a traditional consulting firm is that we won’t go in there and build an AI system. We won't necessarily go in there and spend long engagement cycles on particular problems.
We do, however, spend time with our customers, helping them answer their questions, providing guidance, and sometimes augmenting their advisory and research teams. As Kathleen mentioned, we do a lot of additional content creation with the AI Today podcast. We have over 130 episodes, with tens of thousands of listeners and downloads across iTunes, Google Play, Spotify, and Stitcher. We also write for Forbes and TechTarget as contributing writers. We produce infographics. We speak at events. We run the AI in Government meetup and the AI Demo Showcase.
KW: We also have a monthly meetup, like Ron said, in DC, called the AI in Government meetup, because we found that a lot of different agencies in government were not talking to each other about what they were doing. We thought it was important to have a platform where people from all different levels - senior-level CDOs and CIOs, all the way down to low-level implementers - come and present at AI in Government. And then, as Ron mentioned, we're also doing an AI Demo Showcase. That's a monthly event where we have vendors come to showcase their technologies. That's also in DC for now, but we're hoping to expand it soon.
JA: I find this incredibly fascinating. You're able to work across such a broad array of industries. We’re a little more focused here in healthcare, and specifically, diagnostics. What struck me as you were describing your practice is not only your work in the public sector but in the private sector as well. And then, within government, there are all the various agencies you work with. Can you describe the different needs of public versus private, and then even within different sectors of the government?
KW: That's a great question. A lot of people think that the government lags with technology. In some cases, that's true. Adoption may be five to ten years behind, but what we found with artificial intelligence is that agencies are keeping on par with their counterparts in the private sector. There are some companies, of course, that are very forward thinking with artificial intelligence and are leading the way in general. But for the most part, government and companies that are just starting to adopt artificial intelligence seem to be on pace with each other. One is not extremely far ahead of the other. In general, I think that the needs are the same. It's just that sometimes the execution is a little bit different - meaning that the government may need to deal with data privacy regulations and laws in ways that private companies don't. But, in general, we're seeing artificial intelligence being used for a wide range of things. A lot of them are very mundane - document classification, text extraction, natural language processing, chatbots - nothing that's too cutting edge, necessarily. But they're all very useful, and they're saving a lot of resources, whether that's man-hours or money, and the applications can be reused and spread throughout many different agencies.
RS: And likewise, private industry - especially banking, insurance, and finance - is always adopting technology at the leading edge, because these are primarily technology companies. Think about what the actual assets of most banks are: they're sitting in a database, not in physical currency, which hasn't really been changing hands for years. Stock trading is like that too. So these technology firms are trying to gain competitive advantage with things like artificial intelligence and machine learning, which is fundamentally about extracting value from data. Many other industries are like that too - health care, as you know, along with manufacturing, retail, pharmaceuticals, and energy. Every industry is being faced with this transformative technology that is artificial intelligence.
JA: That’s an interesting point. No matter what space you're in, you can make the argument that, in some way, you're also a technology company.
How did you two go about developing this unique expertise?
KW: We've been interested in this space and artificial intelligence for quite some time. Prior to starting Cognilytica, Ron and I both ran Tech Breakfast, which was a monthly demo showcase for companies to come and demo their technology. We started to see that a lot of companies were moving more towards artificial intelligence. They had AI in their applications, or they were looking towards going there. It started specifically around voice applications. We said, “Oh, this is interesting. I think that there's a need in the market that we can fill.” We had an interest in it, so we said, “Let's go and move forward with that.” We've been in this space for a few years now and continue to learn and grow every day.
RS: Prior to that, I actually had another analyst firm called ZapThink, which was focused on enterprise architecture and service-oriented architecture - the big movement in the early part of the 2000s, which has since become mainstream. Everyone is now familiar with microservices and, of course, containerized infrastructure. That firm grew pretty large and was acquired by another company, Dovel Technologies. Before that, I was at MIT and had a start-up company called ChannelWave, which was focused on e-commerce - especially B2B e-commerce. My undergraduate academic advisor was Rodney Brooks. When I came to MIT in the mid-90s, I was personally very interested in AI, but it was still - and remained until very recently - the domain of researchers. A lot of thought was being put into it at the time, though neural networks were actually on the downswing. I was very much interested in AI at that point, but didn’t put my career into it until much, much later.
JA: Out in California here, we think Silicon Valley is the epicenter of all things. But as I'm doing these interviews, I’m meeting more and more people from the Boston area and realizing what a hotbed and hub of activity it has become in technology, healthcare, and a variety of other industries.
Often, we are enamored by, in awe of, or confused by technology. AI is definitely a very hot area. What do you think are some of the misconceptions that people may have? And what are we just flat wrong about?
KW: I think a few years ago people were talking about “artificial general intelligence” much more than they are now. We actually had Nick Thompson, the Wired Editor-in-Chief, on our podcast recently. He said that he was noticing that as well. People were talking about, “What could you do if you had a machine that's able to act and think and be just like a human?” But as we're starting to see more and more applications in real life, those conversations are taking a back seat - not going away, but taking a back seat - to, “What can I do with narrow applications, and what can I do now with the technology that we have?” rather than, “What could I do five, ten, or a hundred years from now?” I think the misconception that “artificial general intelligence” will be here sooner rather than later is starting to go away.
RS: I think the other big thing is that there's still a lot of misunderstanding about AI. When two different people talk about AI, they may not necessarily even be talking about the same thing. That's why we've started spending more time on what we call the Seven Patterns of AI, which are the things that people are actually trying to do with AI systems.
These are: 1) conversational systems, like chatbots and voice assistants; 2) recognition systems - things like image recognition and all the other forms of unstructured data recognition; 3) patterns and anomalies; 4) predictive analytics, which is machines trying to help us make better decisions and spot trends; 5) the autonomous pattern, which we may think of as autonomous vehicles, but there are lots of situations where machines are independently doing things without humans in the loop, whether it's software automation, hardware automation, or anything else; 6) hyper-personalization, which is the ability of systems to build a profile of an individual and find or build things for that person - which is the whole idea behind personalized medicine; and finally, 7) goal-driven systems, which you may think of as machines that can play games, but which are basically machines that can find the optimal scenario for something.
Those, together, are the seven patterns. So, when two people are talking about AI, I ask, “Well, which of these were you specifically talking about?” and that does seem to help the conversation along.
JA: I think that is very helpful, because the term “intelligence” is somewhat nebulous. It does call to mind something like Kathleen was describing as “general intelligence,” but I think breaking it down may offer a more realistic picture of what specific tasks we’re able to perform today using AI.
You’ve recently published on regulatory aspects of AI. People in health care are very much interested in this. You might even say that healthcare is a very highly regulated industry, perhaps more so than others. My big question is: AI is roughly defined as something that evolves and can incorporate previous learnings into future decisions - how can something that's continuously evolving even begin to be regulated?
KW: That's a great question. We recently published our Worldwide AI Laws and Regulations report. We wanted to see what countries were doing right now. Had they thought about laws? Did they have any in place, and if so, what were they around? So, we looked at nine general areas. What we found is that most countries are taking a “wait and see” approach to see exactly how this technology will be adopted. Right now, I think it's a little too early to build very robust laws and regulations around AI, especially for fields that are very heavily regulated. So, for now, what we always suggest is, “Keep the human in the loop.” In the United States there's only one AI system that's allowed to autonomously diagnose disease - for diabetic retinopathy. That's it. Everything else uses an augmented intelligence approach, where the human always makes the final call.
AI systems for radiology images are being talked about a lot lately, where computer vision systems are able to spot anomalies in images. They’re not actually able to fully diagnose the patient. The human physician needs to go in, act as the second set of eyes, and actually give the diagnosis. So that's where we are right now. I'm not saying don't use the technology; we're saying, “Use it with a human in the loop, and let's see how it evolves.”
RS: There are basically a couple of ways of thinking about the legal framework. There are what we call “permissive laws,” needed because the laws that exist right now don’t necessarily apply. This is the case with autonomous vehicles. The current laws of the road are not built for a car with nobody behind the wheel, or for the insurance issues that raises. We need to actually build laws to allow the use of driverless cars in our existing environment. Then you have “prohibitive laws,” which serve to prevent things that we don't want to happen. For example, there are a lot of laws emerging around banning the use of facial recognition software, because of data privacy concerns and to prevent the sharing and storage of this data. And then you have rules around lethal autonomous weapons. You might think, “Well, that's very RoboCop or Terminator-esque.” Looking into the future, though, people are already saying, “Wait a second, let's put the laws in now, because it's kind of hard to put the laws in later.”
There is this general philosophy that people are trying to build a regulatory framework around algorithmic decision making, which is what a lot of this comes down to, and sometimes it has nothing to do with AI. If you're going to build an algorithm that will take some of the responsibility for a decision away from the human, what are the laws that we need to have in place to enable that, to prevent bad things from happening, or to provide transparency? It’s not necessarily AI-specific.
JA: This is an incredibly fascinating area. Right now, people might be a little skittish; they’re wondering if they’re going to be replaced by a robot or an AI system. They’re wondering if AI is going to make all the decisions in the future. Are we ultimately headed towards that type of scenario, where decisions are made by AI and human beings are replaced?
Or, even if the human being is allowed to make the final decision while relying heavily on AI support, could this weaken our capabilities as we become more dependent on that support?
KW: We always say that technology is not a job killer, but it is a job-category killer. With any transformative technology, there will be jobs that disappear. But there will also be jobs that are created. If you look at the 1960s, there were rooms of secretaries and file cabinets, and those were replaced by computers. So it will be with some other jobs as well - truck drivers, for example. But we'll have jobs that we never thought of before. Back in the 1960s we didn't have social media marketers, and now a lot of people have that as their job title. The work will shift, and jobs will be created.
As far as humans using AI as an augmented intelligence tool, I think that will continue to happen. People have argued that with new technology, we've lost sight of things. We can no longer navigate by the stars, for example. I don't know that I'd be able to survive in the woods by myself. I don't know how to hunt and gather and do all those things, because I don't need to anymore. But have we as a society become less smart because of that, or less self-sufficient? Have we not survived? Of course we've survived. We’ve thrived. Diseases have gone away in some cases and lessened in others. We don't die as early as we used to. We don't have infant mortality rates anywhere near as high as they were three hundred years ago. Some things get better when we use technology.
Continued in Part 2…