Episode 6: Exploring the Intersection of Healthcare, AI, and Cybersecurity
We challenge the business of healthcare with bold ideas for change.

“It comes down to transparency on how the AI system was built. How does it work and how did you get there? And what do you do when it breaks?” - Nick Merker, AVP of Privacy, Cybersecurity, and AI at Eli Lilly

Welcome to the Boombostic Health Podcast newsletter!

Get weekly insights and candid discussions with experts who are shaping the future of healthcare. Explore trends, technologies, and challenges in an accessible way for all. Click subscribe for our newsletter. Join the conversation by listening to our latest podcast episode here.

Welcome to Boombostic Health, where we challenge the business of healthcare and explore bold ideas that drive meaningful change. Each week, we bring you candid conversations with top experts, innovators, and leaders exploring the latest trends and technologies shaping the future of healthcare. Join us and be part of the conversation in transforming healthcare.

Episode Spotlight: Navigating the Future of Healthcare: AI, Cybersecurity, and Regulation

Join us on Boombostic Health as we explore the transformative power of AI in healthcare with Nick Merker , AVP of Privacy, Cybersecurity, and AI at Eli Lilly. Discover how transparency and accountability are reshaping AI systems, ensuring they serve as trust mechanisms in life-altering decisions. Dive into the complexities of the regulatory landscape, focusing on the EU AI Act's global influence and its parallels with the GDPR.

We also tackle the pressing issue of cybersecurity, examining the significant impact of incidents like the Change Healthcare breach. Learn practical strategies for managing continuity plans and safeguarding patient data in an era of increasing cyber threats. Don't miss this insightful conversation that challenges the status quo and drives meaningful change in healthcare.

BRADLEY: Welcome to another episode of Boombostic Health. I'm thrilled with our guest today, Nick Merker. Nick's a friend and also a major leader in the healthcare industry who hasn't always been in healthcare. He actually started his career in computer science and ultimately transitioned into the legal profession where he has a breadth of experience in multiple industries. And then more recently has moved to Lilly, the medicine company headquartered in Indianapolis as an AVP in the legal team focused on privacy, cybersecurity, and artificial intelligence, all incredibly huge jobs in and of themselves. So I suspect Nick probably doesn't sleep given how much he has to cover every day. And I really appreciate you squeezing in the time here to join us on Boombostic Health.

NICK: Thanks so much for having me. I'm really excited to be here. Thank you, Brad.

BRADLEY: Awesome, Nick. Well, maybe before you dive in with your own introduction, I'll frame what the key topics are that we're going to cover today. So the first is the regulatory landscape in healthcare. Anybody who wants to build a healthcare business needs to be aware of what's happening from a regulatory perspective, but now more than ever, because you can't go a day without hearing some new story about artificial intelligence and the role that it's playing, but also the challenges of how the regulatory landscape is going to treat this new capability and there's also a new administration. You might have heard about it in Washington. And it looks like there'll be some major changes from an FDA perspective and the regulations, of course, will be impacted by that leadership.

We're also going to cover cybersecurity. Wow, that alone, you know, something we could spend a couple of days talking about. So we'll get into some key considerations, though. And you know, Nick, you're now in this healthcare space. I always like to know, how did you get into healthcare? In your own words, maybe just give us a little bit more depth of your background and how you've gotten to where you are in your career.

Nick’s Journey: From Patents to Privacy

NICK: Yeah, sure. So I'm a computer scientist at heart, or a computer geek at heart, however you want to phrase it. My career started at the age of 14 when I sought out and volunteered to work at a hometown internet service provider that had about 5,000 dial-up customers. This was back in the Red Hat 6.1 days, you know, very, very old school time. I took that and got really interested in computer science, kept building on my skills, and went to the University of Illinois, got a computer science degree. I was working at the National Center for Supercomputing Applications at the University of Illinois.

After I graduated, I went up to Cars.com, where I was the head of cybersecurity for a time. During that whole time, I was going to law school at night. I graduated and actually started out as a patent attorney here in Indianapolis. I was working for a law firm here, drafting patent applications in the email marketing space. If you're familiar with Indianapolis, there are some major email marketing companies; we're kind of a hub.

BRADLEY: That's kind of a digital marketing epicenter. Yeah, that's where the Salesforce Marketing Cloud was born, right? Through ExactTarget. Is that right?

NICK: Exactly right. Exactly right. And kind of the transition there was that this major Supreme Court decision happened, called Alice, and it became very difficult to get patents on email marketing, those types of issues. So I quickly realized, as I drafted my eighth patent application for dental drills, and that's not a joke, actual dental drills...

BRADLEY: How many ways can you do a dental drill?

NICK: Yeah, I had to get into something else. Another associate at our law firm and I started the cybersecurity and privacy practice. It was really at the infancy of it all: working small cybersecurity incidents, trying to figure out what compliance even was. There were a few laws around at the time, but very few privacy laws. I worked in that capacity, became a partner, and was there for about 12 years. Just recently, I came over to Eli Lilly and Company, as you gave that great introduction. And the thing that brought me here is just a passion for healthcare and the great work that Lilly is doing.

I feel like everyone I talk to here has some personal story that kind of brought them here. For me, it was my grandmother. When I was a kid, she suffered from, and eventually passed away from, Alzheimer's. That was really kind of a defining moment in my life. I was very, very young. It was the first death in my family, and it just really impacted me and has always stuck with me. And I have some stories about that. I actually went through 23andMe to get my own genetic information because they said they could test for a late-onset Alzheimer's genetic risk. I've completely changed my diet to avoid Alzheimer's. I'm obsessed with it because it impacted me so much. So that's what brought me to Lilly and the great work that we're doing here, and I get to be a little part of that.

BRADLEY: That's awesome. What was your hometown?

NICK: Quincy, Illinois. It's a little river town. You probably have not heard of it. We're known for high school basketball, but it's really close to Hannibal, Missouri, where Mark Twain did a lot of his writing.

BRADLEY: Oh, interesting. And then University of Illinois. That was the birthplace of Mosaic, right? Which became Netscape. Is that right?

NICK: Yes.

BRADLEY: OK. That had to have been a really interesting place to study because, heck, when you have that kind of breakthrough come out of an outstanding program like computer science at Illinois, that's legendary. Did people talk about that when you were there? Was that kind of part of the lore?

NICK: Yeah, kind of. But I don't know, we were all in it, trying not to fail out. We had other things on our minds. I do have a really quick story about the computer science building at U of I. They built this new building called the Siebel Center. It was the first building on campus where all the doors used electronic key cards. They were one-way RFID cards, and what a PhD student did is he bought a scanner off of eBay and just went around scanning, getting really close to professors and custodians, and he made a master key for the building and then wrote a paper about it, about the cybersecurity risks of electronic key cards. It was such a fun time. I really miss it.

BRADLEY: Was that Siebel for Tom Siebel?

NICK: Yeah. I believe he was a major donor to that.

BRADLEY: Yeah, that's very cool. Also, I remember Siebel was the original CRM master before Salesforce came and kind of changed that world. Well, in any event, that's a cool background. I actually learned a few things about you I did not know previously. In full disclosure, you were a fantastic IP attorney when you used to be outside counsel and we worked together back then, which was really cool. So thanks for all your help. We actually got some of those patents issued. And, you know, it was cool.

You've obviously made a huge leap in terms of the impact you can make. Just think, someday you can tell your grandkids it all started with patents about dental drills. But I will say, the things that Lilly is doing related to Alzheimer's and, you know, diabetes, doing things to get to the root of how you can help people, it's just really exciting and a fantastic success story. And I'm really excited for you to be part of that.

So why don't we dive into our first topic here related to this regulatory landscape, AI, the new administration. I would love to hear from where you sit, what you see as these major trends and issues. And just to also be clear, our audience, it's really people who are looking at how to do a better job in the business of health care by bringing innovation that has some kind of economic engine to support that innovation. You know, and in all of that, you have to align with the regulatory framework or things don't go well. You know, in health care, it's not like starting up an email marketing company where the stakes are, you know, if somebody doesn't get an email or lands in spam or something, who really cares? But now you're talking about health care. You know, this is literally dealing with life or death, longevity, the way that people live their lives every day. I'll turn it over to you and would love to hear what the major trends are and what we should be thinking about as we innovate in health care.

Global Influences of the EU AI Act

NICK: Yeah. So the number one topic on everyone's mind is this revolution of Gen AI and a regulatory landscape that is not quite the Wild West, but feels like we're just past the Wild West. What I mean by that is this Gen AI revolution happened. Machine learning has been around for a very long time, and AI at its core has been around for a long time, but Gen AI really got lawmakers' attention. You've seen lawmakers and standards development folks rushing to try to get in front of this revolution. The first foundational law that has been on the books is out of the European Union, called the EU AI Act. I think this is going to be the bedrock for laws across the world, and I'll get into the specifics of what it does in a second. But the reason I think it's going to be kind of the model for the rest of the world is because we've seen this story before with privacy. In 2018, the General Data Protection Regulation became the law of the land in the EU.

We've seen that regulation copied here in California, in Brazil, in Israel, and even parts of China's new law, the PIPL, kind of copy it. So we've seen this play out with privacy, and I think the same thing is going to happen with the EU AI Act. In particular, the EU AI Act is all about a risk-based approach to Gen AI. If you are a provider of general AI systems, you have certain requirements attached to you. If you're a provider of, for our conversation, high-risk AI systems, ones that have a chance to impact the fundamental rights of an individual, those have other requirements, and health is one of the fundamental rights they list out in the EU AI Act that could put you in that high-risk bucket. If you are producing or deploying a high-risk AI system in the EU, you have to, in some cases, certify with an EU body. We don't know exactly what that means yet because it's all still forming. You have transparency requirements. You have quality requirements. You have all of this documentation that you're going to have to generate so you can prove, if a regulator comes in, how you built your AI with that law in mind.

All of this is really at the early stages, because the text of the EU AI Act was just released and voted on this year. The text is what it is, but the actual implementing guidelines may take months, in some cases maybe even years, to figure out. So we still don't know exactly what that law requires, but whatever it is, I think it's going to be a blueprint for the rest of the world.

BRADLEY: That reminds me of the EU MDR, which is focused on medical devices. That has also followed this pattern, where it originated in the EU. It has to do with what you have to report on with respect to devices to ensure that they're complying with the regulations and can stay on the market, basically. And it gets to the level of saying we need real-world evidence that allows us to ensure that people aren't being harmed by devices, because of situations like metal-on-metal hips or leaking breast implants that were deployed pretty pervasively before it was determined that, hey, there's a problematic pattern here. It has taken longer than originally expected, but nonetheless, it's marching forward. Basically, the developed nations that have these regulatory bodies are now saying, what can we follow? And the EU has taken that step of saying, this is how we would expect that reporting to occur, going well beyond what was historically expected.

Now, in the case of a medical device, generally, and this is changing too with technology, but generally you've got something that, let's use an obvious one, like an artificial hip, which isn't life or death, but is nonetheless something that has a big impact on your quality of life. You design it, you manufacture it, you test it, you monitor it; it doesn't change a lot. There's no AI in there, so it's easier to regulate. With AI, especially Gen AI, it's by definition designed to learn and be altered. What is your perspective on how you could possibly have a regulation or a regulatory framework that contemplates the temporal nature of any Gen AI model at any given time?

Transparency and Accountability in AI Systems

NICK: Yeah, that's a great question. I think it comes down to transparency on how the AI system was built. If you look at the EU AI Act, or you look at the NIST AI Risk Management Framework, which is a standard that is forming to help you think about risk with AI, if you look at these bedrocks of where people are going, to me, if I had to sum it up in one sentence, it would be: be transparent about how this thing works.

Because there probably will be issues with AI systems, bias issues, inaccuracy issues, and it's all about how the company or the provider of that AI system responds and corrects the issue. One of the things in the EU AI Act, for example, is that you have to have an incident handling process: if your high-risk AI system causes some harm or gives some inaccuracy and that's reported in, what are you going to do about it? You have to think about that at the front end. So to me, I think it's all about transparency.

You're also actually seeing this in the United States. The US doesn't really have much in the world of AI law yet, but there's a New York City requirement for automated employment decision-making tools. That requirement is all about transparency again, and really about trying to avoid bias. If you're going to use those tools, you have to do an annual report, which I think has to be published, on how you have designed and tested that tool to avoid bias based on race, sex, et cetera. So to me, it all comes down to transparency. How does it work and how did you get there? And what do you do when it breaks?

The Role of Technology in Monitoring AI

BRADLEY: Yeah, that's interesting. That actually also mirrors the device situation, where it's all about how you identify, as quickly as possible, where you may have had some type of, people call it hallucinations, but generally these models are only as good as the data that's fed to them. If the data that's fed in is off track, you're not going to get the right result.

This is just a random sidebar, but one of the interesting things about Gen AI to me is that it's not always right, but it always sounds sure. The way that it constructs the feedback and is able to do follow-on responses feels very human. So your intuition tells you, hey, this is formulating answers in a way that mirrors the way a human brain works. In reality, it's not that advanced and sophisticated. I believe very much in this human-reinforced type of AI use, where you have experts who are able to curate, inform, and give feedback, this human feedback kind of model. That's the kind of thing we do in our data companies: it's not trying to make the technology do something independent of a professional, an expert.

How, in the regulatory framework, do you anticipate, or are you seeing, and by the way, I will absolutely read up on this EU AI Act; I had heard some rumblings about it, but didn't realize it had become as prominent as you're describing. But how do you put this in place to monitor it? Do you think there'll be technologies that monitor these other technologies, so you have some type of audit that's built in based on some other tech? Is this what a lot of these folks who are departing OpenAI and other companies with this "we're gonna make AI safe" kind of mission are working on? Do they relate to each other, as far as you know?

NICK: I think for sure we will have technology monitoring technology. I'll play it back to what we've seen in the cybersecurity space. We've had, for years, data loss prevention tools that use machine learning to try to catch things before they go out the door and to find errant behavior. So we've had technology monitoring technology forever. I do think there will be AI systems designed with the sole purpose of monitoring AI for accuracy and reporting in real time, or possibly blocking output that doesn't conform to requirements. Maybe that's already being done by some companies, I don't know, but it seems like a natural evolution based on what we've seen before.

BRADLEY: Yeah, and another aspect of that is what kind of liability a pharmaceutical company or a med tech company or even a healthcare provider might be subject to if they are leveraging these technologies. There's an example I recall from our previous episode, where our legal consultant, Emily, was talking about a situation where physicians were identifying certain breast cancers and starting treatment in a way that could actually have unintended consequences resulting in different types of health issues for patients, and how that has suddenly opened up this whole Pandora's box of, should I even use this?

You know, and obviously healthcare is an industry where people can be somewhat resistant to change, especially folks who have been trained under the Hippocratic Oath and sometimes feel like there needs to be a certain level of rigor that maybe they don't believe is there. But I think everybody agrees that AI is here, it's happening, and it does provide value in the right applications. Do you have any thoughts in general, from a macro perspective, on liability as it relates to these new AI technologies, where you apply them, and how you do it in the proper way so that it's powering things up and not exposing new liability?

AI and Liability: Navigating Legal Challenges

NICK: Yeah, so I will give you an example outside of the healthcare space of where I think this might be going. Let's take chatbots in particular. A lot of companies are putting customer service chatbots on their websites, ways you can interact with a company that are completely Gen AI automated. With those, most of the time you will see very clear disclaimers that, hey, you shouldn't rely on the output of this chatbot; you should double-check the answer, or we'll give you a link to how it found the answer so you can go look at it yourself. So that's been the approach that folks have taken. And in California, there are some requirements about transparency when using chatbots as well.

We've seen in Canada, though, there was a Canadian airline that had a chatbot on its website you could interact with, and a customer went to that chatbot and started asking about a discount based off of some condition that person had. It might have been related to a death in the family or a pregnancy, I forget the exact facts, but essentially the person asked the chatbot if this situation fell into the airline's policy to get a discount or a reimbursement on their flight. The chatbot said yes, but in actuality, it did not fall within the confines of that policy. This actually was decided in Canada, where the company was held liable for the incorrect response of its chatbot. Even though it had all of the disclaimers and such on there and they were doing the right thing, they were held liable for that incorrect response.

I think we will see litigation, or something along those lines, based off of publicly facing AI systems that provide inaccuracies that people rely on and are then harmed by. How that will play out in United States courts, based off of disclaimers and terms of service and all that, I don't know; I haven't seen any of those cases yet, but I think that's the likely first type of case we would see.

BRADLEY: Okay, that's very interesting. So what do you think about the change in the administration and how that'll affect these regulatory issues?

The Impact of Administrative Changes on AI Regulation

NICK: Yeah, I think the change in the administration is going to be wildly interesting too, because President Trump has said his stance on AI is that he wants our AI to be the best in the world, and that's really the position he's taken. That sounds like a pretty business-favorable position on its face, but I guess we'll see, based on cabinet appointments and how the FTC is shaken up and those types of things, what will happen.

Regardless of what the federal government does, you're hearing rumblings from folks who believe that state governments will step up to the plate and fill the void of laws that may not be on the federal books. We've seen that in the privacy space already. Just in the last 24 months, we've had, what, 14 states that have enacted comprehensive privacy laws because there's no overarching federal privacy law on the books. And so I think you'll see the same thing happen with AI. We're already seeing that, like I mentioned, in California and New York. I think you'll just see more and more of those, with state governments stepping up and coming up with laws because the federal government has not.

The last comment: you mentioned Chairwoman Lina Khan and a new appointment there. We've seen Senator Cruz send a letter very recently asking the FTC to stop any, as he called them, partisan or divisive actions. And then you also saw similar rumblings from House Republicans.

They're asking the FTC to kind of halt what they're doing; who knows if they'll actually halt or not. But I do think a top priority of the administration will be appointing a new FTC commissioner, and that person would likely become chairperson. So we'll see who that person is, how that shakes out, and how that changes the FTC enforcement strategy.

BRADLEY: So there's just a lot of moving parts.

NICK: Oh yeah, there are a lot of moving parts. And if anyone was coming onto your podcast and saying, I know exactly what's gonna happen, I think they would be foolish, because we're just so early days. I don't really know where this is going to go, except that a lot of things will be different a year from now.

BRADLEY: You know, with the internet and even the emergence of the web and then cloud, it seems like the general posture has been toward minimal regulation: to let a lot of different things proliferate, to let people innovate, to let the natural order of events sort things out a bit. Something about AI feels different than that. It feels like we're in this new frontier that has exponential possibilities, but puts at the fingertips of anybody, whether they have good intentions or bad intentions, some pretty unbelievably powerful capabilities. Maybe a couple of comments on what you think is different now than what we've seen in the past, and whether you think that we'll be allowed as entrepreneurs and innovators to have the opportunity, without running afoul of regulations or the law, to start advancing the missions of organizations using AI without being too encumbered. There is a point of destruction from an innovation perspective if you clamp down too hard and regulate too much. Any thoughts on that?

NICK: Yeah, I guess I have a kind of high-level thought, and I'll compare it to lessons learned with social media. I was in college when Facebook launched; I think I was a sophomore when I joined Facebook, and I'm 41 now, so that was 20 years ago. In these 20 years, we've seen the social media space change so much, so quickly, that I don't think it would be possible for lawmakers to stay up to speed on creating and implementing laws, because once they get a hard law on the books, the technology has probably completely changed. Or if you go through the whole rulemaking process and then you get done, well, now the technology has changed. I think it's going to be an even quicker change with Gen AI. So you're going to have lawmakers, standards development folks, and such trying to play catch-up because the technology is moving so quickly.

So I don't have a comment on whether it was intentional or not that there haven't been laws clamping down on innovation. You're probably correct that there's no intention to do that, but I also think it's a product of the technology moving so fast that it's hard to put laws on the books quickly enough to address it. That's why I think it's so important to go back to that transparency requirement: aligning to a good industry standard like the NIST AI Risk Management Framework, using something like the EU AI Act, which is a really good bedrock of a foundational law, treating an AI system with those guideposts in mind, and just doing the right thing responsibly with AI is probably what companies should do, instead of waiting to see how the regulatory landscape plays out. That's at least where my thought is personally.

AI's Exponential Growth: Transforming Productivity and Regulation

BRADLEY: Yeah, no, it's interesting. Well, just to close that topic out: people are familiar with Moore's Law, which says that the processing power of a microprocessor will double every 12 to 18 months and the cost will be cut in half. That has played out over time. There's a similar power law happening with AI, but it's dramatically, explosively faster. In the last few years, the ChatGPT and OpenAI capabilities have increased by, I think I read, nine orders of magnitude. It's basically exploding so fast that we will have models that can perform job functions as well as humans.

So that's going to give humans an opportunity to have more productivity, if you think about what the economy is: the number of people in the country times the average productivity. We're constantly in pursuit of how we expand that, how we have this abundant future in a world of change. And I personally lean more toward that abundance view versus the doomsday view.

I recently rewatched 2001: A Space Odyssey. I do have a t-shirt that says, "What do you think you're doing, Dave?" So I do fall into that ultra-nerdy camp that finds that fiction interesting, especially as it's come to be more real these days. But it is the case that it's moving so fast that you're not going to be able to regulate your way out of leveraging the technology. I think you're right; it's going to be interesting to envision these, I'm sure, regulated, approved technologies that are actually out there monitoring the technology. That's the only way, because if you don't have that, humans can't keep up. We can't possibly keep up with the trillions of data points that are now informing these models.

I think the other thing I would say, as we wrap up this topic and move to cybersecurity next, is that the reality of AI is that it's only as good as the data that's fed to it. And ultimately, it's not difficult to access or create AI models. I mean, Hugging Face has, I don't know how many now, but a massive base of AI models you can access. The hard part is having the data to train those models: data that's enriched, normalized, dependable, trustworthy. To me, that's why some of these companies like Reddit and others are really taking off right now, because they have content that can be used to train these models. How that world will evolve could be a whole other podcast conversation for another time, but I think the next big providers of picks and axes in this revolution are the people, companies, and services that can supply data with high certitude, which can then be used to train models so that they are accurate, and then scale that with human reinforcement to make them better and better.

I also think that AI ultimately goes deep vertical. I think the ChatGPT construct is very much a novelty, in the sense that the stakes aren't that high when you ask it a question. Like the other day, I was in a Suburban trying to figure out how to make the center console go back, and I couldn't remember where the switch was. So I asked ChatGPT, and it insisted there was a lever somewhere that I had to pull to release some things, and it was dead wrong. But man, it was very convincing in the way that it described it. Thankfully, I figured it out just in time so I didn't look clueless.

But it's interesting. You can ask ChatGPT a lot of questions, and it can be totally wrong while sounding like it knows what it's talking about. No big deal there, but when you start applying these technologies to deep vertical issues, especially healthcare, which is where I work, I think you're going to see human reinforcement, human feedback, built in, with the humans being experts. I use this saying that it makes everybody bionic. I jokingly use the reference of that old Six Million Dollar Man show. These days it's the six-billion-dollar person, whatever; the numbers have gotten bigger.

I think ultimately, from a productivity perspective and a quality of life perspective, when these technologies are applied the right way, when they have the right data informing them and the right reinforcement from people who do have human intelligence, you get a big power-up. Super helpful to hear the background on the EU AI Act, how that's becoming a standard that others are following, and that it will likely proliferate across the world. It sounds like it already is. We'll have to see how it all plays out. Any other comments on the regulatory landscape and AI before we move into the cybersecurity topic?

AI and Human Collaboration: Enhancing Productivity

NICK: One, you know, I'm a technologist at heart. I think with any new technology comes opportunities. I'm not of the camp that AI is going to replace people's jobs. I know there's some fear mongering about that. I'm really of the mind that AI is going to create new jobs. I mean, when the car was invented, the people who were steering horses had to find something else, but we needed car mechanics, and in the early days we actually needed people to stand in front of cars and warn people that a car was coming.

I think any technology change is going to come with opportunities for new jobs. We'll just have to shift how we think to embrace AI and use it to make us the best producers we can be.

BRADLEY: Yeah, I agree with you. I think it's a way to make people much more effective at their jobs and also make it so we can cover a lot more ground and be more productive. Frankly, in many respects, it's similar to when automation first became a thing. People said, wow, factories are now going to do our jobs with these simple robots that drive the screws and move the steel. As it turned out, it gave people a chance to reskill and focus their energies on that higher order: design better cars, produce them more efficiently, make them more affordable. I agree with you.

There will always be a place for humans, and it will be one where we get to use more and more of our intellectual horsepower to create the next generation of innovation. I think these tools and technologies, and the access to information we have now, give anybody a chance to participate. You obviously have to have some foundational level of education, but it's a whole different world. You can learn anything you want with a few keystrokes, and you can participate in making the world much, much better. I think it's a really exciting time, and I'm looking forward to seeing how things play out.

NICK: I agree.

The Verdict: Legal and Regulatory Insights

BRADLEY: Welcome to The Verdict with Emily Johnson, where we're following up on our podcast episode with Nick Merker, AVP of Privacy, Cybersecurity and Artificial Intelligence with Lilly, a medicine company from Indianapolis. While talking to Nick, we spoke about regulatory issues and AI, but then we transitioned to cybersecurity, and he offered some insight that was pretty general.

The Unique Case of Change Healthcare

I thought with Emily, we could dig in a little more on both what happened and why it was so exceptionally unique with respect to Change Healthcare, because we talked with Nick about the fact that you should always have some type of plan for continuity in the case of a massive breach or issue. But there was something different about the Change Healthcare issue that made continuity anything but easy. Could you talk a bit about that?

EMILY: Yes. So the Change Healthcare incident was a ransomware attack. It was actually a double extortion attack: there was one ransomware attack in which Change's data was compromised and held for ransom, and the original ransom was paid. Then another threat actor group came in, claimed to also have the data, and also demanded payment. It was this mess of a double extortion, which led to a longer investigation timeline.

Ransomware attacks, as you know, are not unique. What was unique about Change is that Change plays a particular role in the health care industry. This breach impacted, I think the last statistic I saw was something like 91 percent of health care providers in the country, because of the services they provide. They are basically a billing company. So when Change was breached, health care providers lacked the ability to submit claims for reimbursement. If you don't submit claims for reimbursement, it impacts your revenue, which impacts your ability to continue to provide services.

Continuity Challenges in Healthcare

Why Change is different from any other ransomware attack goes to this concept of continuity. You should always have a backup plan. You should always have procedures in place to address a security incident, whether you cause it or a vendor causes it. You should have some sort of plan, like having two engines.

BRADLEY: It's like if you're flying in a jet, it's good to know that if one fails, the other one's going to work.

EMILY: Correct. Anybody in health care knows lifecycle management is extremely expensive. Nobody has two billing companies; it just doesn't make sense. You're not going to engage somebody to perform billing services on a rainy day because your first billing company got breached. So when this happened, it drastically and significantly impacted the ability of providers to get paid.

So we saw health care providers, particularly those in high-cost practices like oncology, who had to personally subsidize the cost of their practice because they weren't getting revenue from their payers. Typically, reimbursement from a payer once you submit claims takes anywhere from 30 to 90 days, sometimes longer. But if you're not submitting claims, you're not getting paid, and who knows when that's going to end; you still have patients you have to treat. Looking at oncology providers, where the drugs are really expensive, I had clients who were taking second mortgages out on their houses and borrowing money from friends and family.

BRADLEY: So hold up. When we talk about healthcare, a lot of times people don't realize it's a whole bunch of small businesses. So all these small businesses had their cash flow just stop because Change Healthcare was compromised; you couldn't generate a bill, and therefore you couldn't generate the payment, and there's already a delay in when you get paid because of the way insurance works. How did anybody continue to function? You're saying they took out second mortgages.

EMILY: A lot of folks took out second mortgages, took out personal loans, borrowed money from friends. Some practices that were more cash positive...

BRADLEY: Did the business... I'm sorry to interrupt, but did the government step in, or what happened?

EMILY: So interestingly, Change is owned by Optum. Optum stepped in and said there is this process, and I haven't vetted it, where you can submit claims and the amount you believe is owed to you, and they will pay it. As a healthcare attorney, I know nobody's just going to pay you money based on you saying they owe it to you; they're going to come back and audit you. But at least that was the advice the last time I looked into it, which is fascinating. The government didn't really do anything other than say, hey, we should take a look at the fact that Change had this power and had the ability to disrupt healthcare the way that it did.

Legal and Operational Implications of the Change Healthcare Breach

BRADLEY: It kind of reminds me of the scene in Dumb and Dumber when he has the suitcase full of receipts. He's like, hey, that's an IOU. That's for a Lamborghini. You're going to want to hold on to that one. Exactly. So basically Optum had the scale to step up and help provide cash flow to these practices that were compromised because Change was compromised.

What has transpired since? I haven't tracked it since all of the headlines, other than I know that healthcare providers are getting paid again.

EMILY: They're getting paid again. There's a tremendous amount of class action litigation associated with this, from both patients and providers. Patients are suing because their information was compromised; you see that all the time in cybersecurity incidents. It used to be that the threshold was somewhere around 50,000 affected individuals for a class action to get filed. We actually just saw one last week where 750 people were impacted and a class action was filed.

So there is no breach now that's too small for a class action to be filed, which is fascinating. The plaintiff's litigators are pretty aggressive. But there are also lawsuits by the healthcare providers against Change for basically negligence or breach of their duties, because Change had an obligation to maintain the security of the data entrusted to it. One of the things Nick spoke about was continuity and having processes in place. I don't represent Change, so I don't know what their processes were, but the allegations are that there should have been some additional backup controls to safeguard the data.

BRADLEY: So when we talk about continuity, a lot of times you're thinking about it in the context of who the customer of Change would be. But in this case, we're saying Change should have had a level of continuity in place so that if this happened, they would be able to continue to operate.

EMILY: Correct.

BRADLEY: Maybe you would be down for a day, but then by tomorrow, you'd be able to operate again.

Strategies for Managing Vendor Breaches

EMILY: Right. In certain circumstances, the billing data was lost. And to your point about healthcare providers being small, oftentimes once they submit the billing data to the billing company, they don't maintain good records, or any records, because they assume the billing company is doing that. So once they sent that off, they really didn't have a record of it, other than going back to their medical records and recreating it.

BRADLEY: So it's like send it and forget it, because that's the customary way. Change, though, it's almost like Tampa getting hit by four hurricanes in a month, one every week: a hopefully once-in-multiple-lifetimes shutdown. There are other services that are critical but where it's more feasible to have redundancy. So say I'm a provider. What's an example of a service where you could have the right kind of redundancy and continuity plan in place? What I'm really asking is, how would you advise somebody to handle this kind of continuity, and what's a specific area where it would apply? Because, like you're saying, with revenue cycle, having a whole separate revenue cycle vendor isn't really feasible.

EMILY: Right. When we're talking about the context of a vendor breach like the Change situation, there are courier services, there are reference labs. Let's say a reference lab has a breach. You send out a specimen to the lab, and the lab has a breach: is there a backup source you can use, one you can easily transition to? If LabCorp has a breach, can you send it to Quest? Do you have a relationship with Quest where you're able to immediately funnel specimens that way?

BRADLEY: And that means you would have gone through the validation and setup, ensuring you could send orders and reliably get results in the best interest of the patient, and not be shut down because of the breach.
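The primary/backup routing that Emily and Bradley describe can be sketched in code. This is a hypothetical illustration only: the vendor names, the `LabClient` class, and its interface are all made up for this sketch, not something from the episode or any real lab API.

```python
# Sketch of pre-validated vendor failover for lab orders (illustrative only).

class VendorUnavailable(Exception):
    """Raised when a vendor cannot accept orders, e.g. during a breach."""

class LabClient:
    """Stand-in for an integrated, pre-validated lab vendor connection."""
    def __init__(self, name, available=True):
        self.name = name
        self.available = available  # flipped to False during an incident

    def submit(self, order):
        if not self.available:
            raise VendorUnavailable(self.name)
        return f"{self.name} accepted order {order}"

def submit_with_failover(order, vendors):
    """Try each pre-validated vendor in priority order until one accepts."""
    for vendor in vendors:
        try:
            return vendor.submit(order)
        except VendorUnavailable:
            continue  # this vendor is down; fall through to the next one
    raise RuntimeError("no lab vendor available; invoke incident response plan")

primary = LabClient("PrimaryLab", available=False)  # simulated breach
backup = LabClient("BackupLab")
print(submit_with_failover("SPEC-001", [primary, backup]))
# prints "BackupLab accepted order SPEC-001"
```

The code itself is trivial; the operational point from the conversation is that every entry in the vendor list must be a relationship that was validated before the incident, since a backup you have never tested is not continuity.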

EMILY: And those are easy examples; with LabCorp and Quest, those things are going to be pretty readily available, but maybe a courier arrangement is a little harder. It's about having those processes in place. As somebody who does a lot of privacy work, we do a fair amount of tabletop exercises with our clients to walk through these worst-case scenarios. It's a ridiculous fact pattern: the FBI comes, you've been the victim of a ransomware attack, they're going to raid everything, information's getting posted to Facebook, people are writing articles about this. What are you doing? Is there a breach? Who are you calling? So you test all those processes.

I think one of the interesting breaches, not in healthcare, was the Sony breach years ago, which impacted the ability of Sony executives to communicate via email and phone. They were literally having pages run across the Sony production lots to deliver messages because they couldn't trust any communication channel. That's sort of a worst-case scenario, but it's something to think about when you're in that situation. When you get that blue screen of death, how are you reacting? What are your resources? If it's a phone tree that mobilizes your incident response plan, where are that phone number and that list of people so you can start making those calls?

BRADLEY: So the point is there are lots of different layers to continuity, and how do you address it if that communication channel is shut down? Is there some kind of bat signal you can put up in the sky?

EMILY: Right, it's a fascinating exercise. The number of times I've gone to sophisticated health systems and they're like, we have all these plans, we're totally fine. Then you start asking questions and they're like, we don't even know the number for the IT help desk.

BRADLEY: Right, right. Yeah, that's true. Well, I appreciate your perspective. It was a fascinating conversation with Nick, and super helpful for you to dig a little deeper into some of these issues. Ultimately, in the interest of advancing healthcare, we want providers to be in a good position to deliver care, get paid, and keep the lights on. Obviously, it would be great if we lived in a world without these risks because there weren't bad actors, but there are. To the extent we can all learn from the Change Healthcare incident that, at every level, we need to be vigilant about continuity plans and redundancy, it's something that will make healthcare much better.

So thanks so much for being on, and I appreciate you addressing this topic. Thanks so much for joining us for the first in this two-part series with Nick Merker, where we covered the regulatory landscape and artificial intelligence in particular. He touched on the EU AI Act and the way it's affecting all nations at this point. We also touched on cybersecurity, both the high-profile issue with Change Healthcare, which really brought the provider world in the United States to its knees, and some of the practical questions around how you can really handle continuity plans.

Get Involved:

Are you passionate about transforming healthcare? Join the conversation by listening to our latest podcast episode here. Share your thoughts and insights in the comments below or reach out to us directly. Let's work together to drive meaningful change in healthcare!

Stay tuned for our next edition, where we'll feature more expert insights and innovative ideas shaping the future of healthcare. Don't forget to subscribe to Boombostic Health for regular updates!

