Artificial Intelligence Explained
By Paul Grundy | August 2, 2021

“Artificial intelligence” is a buzzword saturated with hope, excitement, and visions of sci-fi blockbuster movies, but it isn’t the same thing as “machine learning.”

Machine learning is slightly different from deep learning, and neither matches up with cognitive computing or semantic analysis.

As the healthcare industry moves quickly and irreversibly into the era of big data analytics, it is vital for organizations looking to purchase advanced health IT tools to keep the swirling vocabulary straight so that they understand precisely what they’re getting and how they can – and can’t – use it to improve the quality of patient care.

Artificial intelligence is the branch of computer science associated with studying and developing the technologies that would allow a computer to pass (or surpass) the Turing test.

So, when a clinical decision support tool says it “uses artificial intelligence” to power its analytics, consumers should be aware that “using principles of computer science associated with AI development” is not the same thing as offering a fully independent and rational diagnosis-bot.

Machine learning

Machine learning and artificial intelligence are often used interchangeably, but conflating the two is incorrect. Machine learning is one small part of the study of artificial intelligence. It refers to a specific sub-section of computer science related to constructing algorithms that can make accurate predictions about future outcomes.

Machine learning accomplishes this through pattern recognition, rule-based logic, and reinforcement techniques that help algorithms understand how to strengthen “good” outcomes and eliminate “bad” ones.

Machine learning can be supervised or unsupervised. In supervised learning, algorithms are presented with “training data” containing examples paired with their desired conclusions. In healthcare, this may include images of pathology slides that contain cancerous cells and others that do not.

The computer is trained to recognize the features that indicate cancerous tissue so that it can distinguish between healthy and unhealthy images in the future.

When the computer correctly flags a cancerous image, that positive result is reinforced by the trainer. The data is then fed back into the model, eventually leading to increasingly precise identification of progressively more complex samples.
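
To make the supervised workflow concrete, here is a minimal sketch using scikit-learn. The feature values and labels are hypothetical stand-ins for measurements extracted from slide images and the expert-provided “desired conclusions.”

    # Minimal supervised-learning sketch (hypothetical pathology-slide features).
    # Each row of X stands in for features extracted from one slide image;
    # y records whether an expert labeled that slide cancerous (1) or healthy (0).
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X = [[0.82, 0.11], [0.79, 0.15], [0.20, 0.88], [0.25, 0.90],
         [0.85, 0.09], [0.18, 0.92], [0.80, 0.12], [0.22, 0.86]]
    y = [1, 1, 0, 0, 1, 0, 1, 0]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    model = RandomForestClassifier(random_state=0)
    model.fit(X_train, y_train)         # the "training" step described above
    print(model.predict(X_test))        # predictions for slides the model has not seen
    print(model.score(X_test, y_test))  # fraction of those predictions that were correct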

Unsupervised learning does not typically leverage labeled training data. Instead, the algorithm is tasked with identifying patterns in data sets on its own by defining signals and potential abnormalities based on the frequency or clustering of certain data.

Unsupervised learning may have applications in the security realm, where humans do not know exactly what form unauthorized access will take. If the computer understands what routine and authorized access typically look like, it may be able to quickly identify a breach that does not meet its standard parameters.
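
The sketch below illustrates that idea with scikit-learn’s IsolationForest on a small, entirely hypothetical set of access-log records; no labels are supplied, and the algorithm flags the record that deviates from the learned norm.

    # Unsupervised anomaly detection sketch (hypothetical access-log data, no labels).
    from sklearn.ensemble import IsolationForest

    # Each row: [hour of access, megabytes transferred] for one session
    sessions = [[9, 12], [10, 15], [11, 10], [14, 13], [15, 11],
                [9, 14], [10, 12], [3, 480]]  # the last session is unusual

    detector = IsolationForest(contamination=0.1, random_state=0)
    detector.fit(sessions)

    print(detector.predict(sessions))  # -1 marks a likely anomaly, 1 marks routine access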

Deep learning

Deep learning is a subset of machine learning that deals with artificial neural networks (ANNs), which are algorithms structured to mimic biological brains with neurons and synapses.

ANNs are often constructed in layers, each performing a slightly different function that contributes to the end result. Deep learning is the study of how these layers interact and the practice of applying these principles to data.
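
The toy forward pass below, written with plain NumPy, shows what “constructed in layers” means in practice: each layer applies weighted connections (the “synapses”) and a nonlinearity (the “neurons”), and its output feeds the next layer. The weights here are random placeholders rather than a trained model.

    # Toy two-layer network: illustrates layered structure only, not a trained model.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0, x)

    x = rng.random(4)        # a hypothetical 4-feature input
    W1 = rng.random((8, 4))  # layer 1: 4 inputs feeding 8 hidden neurons
    W2 = rng.random((1, 8))  # layer 2: 8 hidden neurons feeding 1 output

    hidden = relu(W1 @ x)                       # first layer transforms the raw input
    output = 1 / (1 + np.exp(-(W2 @ hidden)))   # second layer yields a score between 0 and 1
    print(output)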

One definition places deep learning “in the intersections among the research areas of neural networks, artificial intelligence, graphical modeling, optimization, pattern recognition, and signal processing.”

Just like in the broader field of machine learning, deep learning algorithms can be supervised, unsupervised, or somewhere in between. Natural language processing, speech and audio processing, and translation services have particularly benefitted from this multi-layer approach to processing information.

Cognitive computing

Cognitive computing is often used interchangeably with machine learning and artificial intelligence in common marketing jargon. In 2014, the Cognitive Computing Consortium convened a group of stakeholders including Microsoft, Google, SAS, IBM, and Oracle to develop a working definition of cognitive computing across multiple industries:

The cognitive computing system offers a synthesis of information sources and influences, contexts, and insights to respond to the fluid nature of users’ understanding of their problems. For this process, systems often need to weigh conflicting evidence and suggest an answer that is “best” rather than “right.” They provide machine-aided serendipity by wading through massive collections of diverse information to find patterns and then apply those patterns to respond to the moment’s needs. Their output may be prescriptive, suggestive, instructive, or simply entertaining.

Cognitive computing systems must be able to learn and adapt as inputs change, interact organically with users, “remember” previous interactions to help define problems, and understand contextual elements to deliver the best possible answer based on available information, the Consortium added.

This view of cognitive computing suggests a tool that lies somewhere below the benchmark for artificial intelligence.

Cognitive computing systems do not necessarily aspire to imitate intelligent human behavior but instead to supplement human decision-making power by identifying potentially valuable insights with a high degree of certainty.

Clinical decision support naturally comes to mind when considering this definition.

Natural language processing

Natural language processing (NLP) forms the foundation for many cognitive computing exercises. The ingestion of source material, such as medical literature, clinical notes, or audio dictation records, requires a computer to understand what is being written, spoken, or otherwise communicated.

Speech recognition tools are already in widespread use among healthcare providers frustrated by the burdens of EHR data entry, and text-based NLP programs are starting to find applications in the clinical realm, as well.

NLP often starts with optical character recognition (OCR) technology that can turn static text, such as a PDF image of a lab report or a scan of a handwritten clinical note, into computable data.

Once the data is in a workable format, the algorithm parses the meaning of each element to complete a task such as translating into a different language, querying a database, summarizing information, or supplying a response to a conversation partner.
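
A hedged sketch of that OCR-then-parse pipeline is shown below. It assumes the pytesseract and spaCy packages (with spaCy’s small English model) are installed, and “lab_report.png” is a hypothetical scanned document, not a file referenced by this article.

    # OCR-then-parse sketch: image to raw text to parsed entities (all inputs hypothetical).
    import pytesseract
    import spacy
    from PIL import Image

    text = pytesseract.image_to_string(Image.open("lab_report.png"))  # OCR step

    nlp = spacy.load("en_core_web_sm")
    doc = nlp(text)            # parse the recovered text

    for ent in doc.ents:       # a simple "extract meaning" step: list named entities
        print(ent.text, ent.label_)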

Natural language processing can be enhanced by applying deep learning techniques to understand concepts with multiple or unclear meanings, which are common in everyday speech and writing.

In the healthcare field, where acronyms and abbreviations are ubiquitous, accurately parsing this “incomplete” data can be highly challenging. Data integrity and governance concerns, along with the sheer volume of unstructured data, can also raise issues when attempting to employ NLP to extract meaning from big data.
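
Production systems typically disambiguate abbreviations with deep learning over the surrounding context; the toy sketch below captures only the core idea, scoring hypothetical candidate expansions by how many of their typical context words appear in the sentence.

    # Toy abbreviation disambiguation (hypothetical dictionary; real systems use deep learning).
    EXPANSIONS = {
        "ra": {"rheumatoid arthritis": {"joint", "swelling", "methotrexate"},
               "right atrium": {"cardiac", "echo", "valve"}},
    }

    def expand(abbrev, sentence):
        words = set(sentence.lower().split())
        candidates = EXPANSIONS.get(abbrev.lower(), {})
        # choose the expansion whose context words overlap most with the sentence
        return max(candidates, key=lambda c: len(candidates[c] & words), default=abbrev)

    print(expand("RA", "patient reports joint swelling consistent with a flare"))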

Semantic computing

Semantic computing is the study of how different data elements relate to one another and of using those relationships to draw conclusions about the meaning, content, and structure of data sets. It is a key component of natural language processing that draws on elements of both computer science and linguistics.

“Semantic computing is a technology to compose information content (including software) based on meaning and vocabulary shared by people and computers and thereby to design and operate information systems (i.e., artificial computing systems),” wrote Lei Wang and Shiwen Yu from Peking University.

The researchers noted that the Google Translate service relies heavily on semantic computing to distinguish between similar meanings of words, especially between languages that may use one word or symbol for multiple concepts.

In 2009, the Institute for Semantic Computing used the following definition:

[Semantic computing] brings together those disciplines concerned with connecting humans’ (often vaguely formulated) intentions with computational content. This connection can go both ways: retrieving, using, and manipulating existing content according to the user’s goals (‘do what the user means’); and creating, rearranging, and managing content that matches the author’s intentions (‘do what the author means’).

In healthcare, however, the term is currently often used in relation to the concept of data lakes, or large and relatively unstructured collections of data sets that can be mixed and matched to generate new insights.

Semantic computing, or graph computing, allows healthcare organizations to ingest data once in its native format and then define schemas for the relationships between those data sets on the fly.

Instead of locking an organization’s data into an architecture that can only answer one question, semantic data lakes can mix and match data repeatedly, uncovering new associations between seemingly unrelated information.

Natural language interfaces that leverage NLP techniques to query semantic databases are becoming a popular way to interact with these freeform, malleable data sets.

For population health management, medical research, and patient safety, this capability is invaluable. In the era of value-based care, organizations need to understand complex and subtle relationships between concepts such as the unemployment rate in a given region, the average insurance deductible, and the rate at which community members are visiting emergency departments to receive uncompensated care.
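
As a minimal sketch of how such relationships might be stored and queried in a semantic graph, the example below uses the rdflib package; every URI, fact, and figure in it is a hypothetical placeholder rather than real data.

    # Semantic graph sketch: ingest facts as triples, then define relationships at query time.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/health/")
    g = Graph()

    # Ingest facts in their native triple form, without committing to a fixed schema
    g.add((EX.patient42, RDF.type, EX.Patient))
    g.add((EX.patient42, EX.livesIn, EX.regionA))
    g.add((EX.regionA, EX.unemploymentRate, Literal(0.11)))
    g.add((EX.patient42, EX.visited, EX.emergencyDept))

    # Later, ask a new question by describing the relationship of interest in SPARQL
    results = g.query("""
        PREFIX ex: <http://example.org/health/>
        SELECT ?patient ?rate WHERE {
            ?patient ex:visited ex:emergencyDept ;
                     ex:livesIn ?region .
            ?region ex:unemploymentRate ?rate .
        }""")
    for row in results:
        print(row.patient, row.rate)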

As a buzzword, semantic computing has been very quickly overtaken by machine learning, deep learning, and artificial intelligence. But all these methodologies attempt to solve similar problems in more or less similar ways.

Vendors of health IT offerings that rely on advanced analytics are hoping to equip providers with greatly enhanced decision-making capabilities that augment their ability to deliver the best possible patient care.

While the field is still in the relatively early stages of its development, healthcare providers can look forward to a broad selection of big data tools that allow access to previously untapped insights about quality, outcomes, spending, and other key metrics for success.

How can artificial intelligence allow us to achieve precision medicine at scale?

An estimated 80-90% of “analytics” work on clinical data is actually spent on data cleanup and “data munging.” This makes clinical data inaccessible to many (such as clinicians and researchers) and painful even for advanced data experts. Most of the precious clinical data reside solely in text reports, including physician notes, radiology reports, and pathology reports. Diagnostic images in the form of radiology studies or scanned pathology slides can also contain a treasure trove of critical clinical information. However, making sense of raw text and images is a challenge that requires mountains of data, top clinical experts, and machine learning engineers all working together.

Even after multiple layers of extraction and analytics, some data points remain unreliable due to inconsistency in how they are defined and captured.

The first task for AI and machine learning in healthcare is to extract the most important clinical features from unstructured text and image sources. Traditionally, humans have manually extracted this information from clinical notes and populated databases by hand. These paired structured-unstructured data sources can be used to train machine learning models to accomplish the same task quickly and reliably. However, it is critical to note that consistent clinical definitions and data standards must still be established to allow the building of large data repositories comprising data collected via multiple means of extraction.
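
The sketch below illustrates the paired structured-unstructured idea with scikit-learn: notes that a human abstractor has already summarized into a structured field (here, a hypothetical smoking-status field) become training data for a model that learns to fill in that field automatically. All notes and labels are invented for illustration.

    # Training an extractor from paired notes and human-abstracted labels (hypothetical data).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    notes = [
        "Patient denies tobacco use.",
        "Smokes one pack per day for 20 years.",
        "Former smoker, quit 5 years ago.",
        "No history of smoking reported.",
    ]
    labels = ["never", "current", "former", "never"]  # values a human abstractor already entered

    extractor = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    extractor.fit(notes, labels)

    print(extractor.predict(["Patient continues to smoke half a pack daily."]))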

The next task for AI in healthcare is to surface the most critical elements of the clinical data to clinicians. Understanding which data points are most important requires a collaborative effort between physicians, informaticists, data scientists, and engineers.

Once the first two tasks have been realized, it becomes possible for machine learning models to (roughly) replicate physicians’ decision processes in interpreting the fully extracted and summarized data. Eventually, machine learning models may even be able to identify the best treatment for a given patient, even if that treatment is not part of any standard of care currently used by physicians.

Reverse Engineering AI (exciting new development)

As artificial intelligence systems become widespread, the chances grow that their glitches will have dangerous consequences; one reported example is a TSA glitch that identified a plastic toy turtle as a rifle. But researchers are now developing tools to sniff out potential flaws among the billions of virtual “brain cells” that make up such systems, an approach known as reverse engineering.

Many image-recognition programs, car autopilots, and other forms of AI use artificial neural networks in which components dubbed “neurons” are fed data and cooperate to solve a problem — such as spotting obstacles in the road. The network “learns” by repeatedly adjusting the connections between its neurons and trying the problem again. Over time, the system determines which neural connection patterns are best at computing solutions. It then adopts these as defaults, mimicking how the human brain learns.
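
A single artificial neuron trained with the classic perceptron rule is enough to show that learn-by-adjustment loop; the sketch below (hypothetical inputs, plain NumPy) nudges the connection weights after every wrong answer until the toy problem is solved.

    # Toy perceptron: "learning" by repeatedly adjusting connection weights after errors.
    import numpy as np

    inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    targets = np.array([0, 0, 0, 1])     # learn a simple AND-like rule

    weights = np.zeros(2)
    bias = 0.0

    for _ in range(20):                  # repeated passes over the same problem
        for x, target in zip(inputs, targets):
            prediction = int(weights @ x + bias > 0)
            error = target - prediction
            weights += 0.1 * error * x   # strengthen or weaken the connections
            bias += 0.1 * error

    print([int(weights @ x + bias > 0) for x in inputs])  # now matches the targets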

A key challenge of this technology is that developers often do not know how networks arrive at their decisions. This can make it hard to figure out what went wrong when a mistake is made.

In this context, reverse engineering refers to a program designed to debug AI systems by working backward through their learning processes. It tests a neural network with a wide range of confusing real-world inputs and tells the network when its responses are wrong so that it can correct itself. For example, it could determine whether a healthcare AI system drew the wrong conclusion from an X-ray image it was fed. The debugging tool also monitors which neurons in a network are active and tests each one individually. Previous AI debugging tools could not tell whether every neuron had been checked for errors.
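
The article does not name the specific tool, but the simplified sketch below shows what “monitoring which neurons are active” can look like in practice: it records which neurons in a toy layer fire on at least one test input, so neurons that were never exercised can be identified. Everything in it is a hypothetical illustration, not the debugger described above.

    # Toy neuron-coverage check: which hidden neurons were never exercised by the test inputs?
    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.standard_normal((6, 3))   # a toy hidden layer with 6 neurons

    def hidden_activations(x):
        return np.maximum(0, W1 @ x)   # ReLU: a neuron is "active" when its output is > 0

    test_inputs = rng.standard_normal((50, 3))  # varied, potentially confusing test inputs
    activated = np.zeros(6, dtype=bool)

    for x in test_inputs:
        activated |= hidden_activations(x) > 0  # record which neurons fired on any input

    print("neurons never activated by the test suite:", np.where(~activated)[0])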

In tests on 15 state-of-the-art neural networks, including some focused on medical imaging and oncology, reverse engineering discovered thousands of bugs missed by earlier technology. It boosted the AI systems’ overall accuracy by 1 to 3 percent on average, bringing some systems up to 99.37 percent. Reverse engineering techniques could help developers build “more accurate and more reliable” neural networks.


You can read more in “Building the AI-Powered Organization,” an article by Tim Fountaine, Brian McCarthy, and Tamim Saleh in Harvard Business Review (July–August 2019).
