Understanding the “I”: Artificial Intelligence and its Root in Human Neurology
Image of a full Petroleum System created from an intelligently designed neural network. (C) 2018 Actus Veritas Geoscience, LLC.

Scotty Salamoff

Geophysical Advisor | Data Scientist

Actus Veritas Geoscience


NOTE: The terms “AI”, “neural network”, and related terms are used interchangeably and generically below to refer to any application of machine learning.

If you want to successfully understand and use "AI", you must first understand the "I" part. This isn’t an ugly jab at anyone’s intelligence - it’s simply a phrase that the designers, users, and interpreters of machine learning algorithms and outputs should keep in the back of their minds. Why is the biological definition of neurology so important to artificial intelligence? You’ve likely figured it out by now, but you’re equally likely not to have given it much thought in the past.

Let’s start with what we’ll call “passive neurological activity” – that is to say, the synaptic chains responsible for tasks such as respiration, blood circulation, and completing the thousands of mental calculations required to successfully park your car (this doesn’t include “muscle memory”, which is more akin to iterative training of a neural network on the same dataset). Obviously, the brain is in charge of regulating all these things, much like the basic framework or macros embedded into a neural network are in charge of regulating its “maintenance” operations as it runs. Synaptic chains, as the name implies, are not single organs or neurons, just as the fundamental logic-based design of a neural network isn’t a single line of code but rather multiple lines with many, many layered functions programmed within each. Understanding that these “background functions” are equivalent to the biologic functions and neural integration that keep our metaphorical engines humming is the first step to approaching any type of AI, at any point in its programmed cycle (from development to interpretation and all the steps in between). The intelligence you create needs a working, functioning “body” in which to reside.

Now let’s talk about what goes into that metaphorical “body” we’ve created to house our Artificial Intelligence. On one side of the AI scale, filtering and cherry-picking the input information you intend to use (by using PCA or any number of data filters) will almost always result in the introduction of some type of bias. On the other side, filling the AI bucket with random snippets of code and thousands of unnecessary variables (attributes) would see its biological equivalent in a human who has suffered massive head trauma and resulting memory loss. The random snippets of information (memory fragments) that may or may not relate to each other, coupled with the overwhelmingly massive influx of new data (friends or family attempting to help “fill in the memory gaps for you”), would cause some form of mental breakdown in most people. It’s too much, too disorganized, too quickly. Finding the healthy balance between the two sides of the AI scale is the key to building your digital intelligence and to letting it know what it’s designed to do. Does it need supervision? Should it look in the same spot all the time, or look at random points? Does it look at ALL of the data or just a statistical representation? How is this statistical pool created? Does the AI receive any input from non-seismic sources, such as petrophysical data from well logs or static models? This is the interesting (read: fun) part of creating intelligence – giving it purpose. You’re literally creating an intelligent entity, and then granting it meaning and purpose. It really doesn’t get much cooler than that.
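
As a concrete (and entirely hypothetical) illustration of the filtering side of that scale, here is a minimal PCA sketch in plain NumPy: reducing six synthetic “attributes” to their top two principal components retains only part of the total variance, and everything outside those components is silently discarded - exactly the kind of bias cherry-picked inputs can introduce. The data and attribute count are invented for illustration:

```python
import numpy as np

# Hypothetical matrix of seismic attributes: 200 traces x 6 attributes.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))
X[:, 1] = 0.9 * X[:, 0] + rng.normal(scale=0.1, size=200)  # one correlated pair

# Centre the data, then run PCA via eigendecomposition of the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigh returns ascending order
order = np.argsort(eigvals)[::-1]               # sort descending
explained = eigvals[order] / eigvals.sum()      # fraction of variance per component

# Keeping only the top 2 of 6 components throws the rest of the variance away:
kept = explained[:2].sum()
print(f"variance retained by 2 of 6 components: {kept:.1%}")
```

Whatever signal lives in the discarded components never reaches the network, which is precisely why aggressive filtering is a bias decision, not a neutral one.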

Much as the average human is driven to strive for achievement and meaning in life, so goes artificial intelligence. Without a name, a place, and a job to do, a neural network is nothing more than a random assemblage of 1s and 0s floating through cyberspace. So how does one give “meaning” to an AI program? The answer is simple, because it’s the same way you became who you are – you give an AI meaning by designing and influencing its environment. The same principles that govern environmental effects on the cognitive development of a human apply to the cognitive development of artificial intelligence. Put a perfectly healthy program in a bucket full of garbage, and it will begin to behave accordingly. Place the same program in an environment designed to foster its growth and facilitate its purpose, and it will also behave accordingly. One observation that consistently stands out, surprisingly enough, is that many new adopters of AI technology in our industry seem to forget these programs learn – they aren’t static like a gridding algorithm, and they quite literally develop and grow in a cognitive sense the longer they run (there is of course a point at which neural networks will “plateau” in their learning, but the goal of the designer should be to push this bar up as high as possible). In this respect, artificial intelligence differs from biologic intelligence. The neurology of the human brain allows one to continue learning new concepts, often at the expense of older, less called-upon knowledge - a fancy way of saying that “we never stop learning”. Current physical limitations of hardware put a ceiling on the level of intelligence a neural network can reach, but the number is very high and growing every day. As one friend and mentor of mine used to say, “stop one step away from sentience”.
Give the program what it needs to succeed in its task, give it information to identify new patterns, and don’t gum up the works with unnecessary data. You don’t need 1,200 attributes to come up with a Base of Salt interpretation. You know it, I know it, and physics knows it. A neural network, however, doesn’t; it will attempt to make use of anything you put into it, regardless of its applicability or its relationship to the other data or the question being asked.
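
To see that a model really will try to use anything you feed it, consider this hypothetical sketch: a least-squares fit (the simplest possible learner) given one physically meaningful attribute and one pure-noise attribute still assigns a nonzero weight to the noise. The attribute names, scales, and the “Base of Salt proxy” target are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
depth = rng.uniform(1000, 5000, n)        # a physically meaningful attribute
noise_attr = rng.normal(size=n)           # an irrelevant "garbage" attribute

# Hypothetical target that depends only on depth, plus measurement noise.
target = 2.0 * depth + rng.normal(scale=50, size=n)

# A least-squares fit happily weights both inputs, relevant or not.
A = np.column_stack([depth, noise_attr, np.ones(n)])
w, *_ = np.linalg.lstsq(A, target, rcond=None)
print("weight on depth:               ", w[0])  # close to the true 2.0
print("weight on irrelevant attribute:", w[1])  # small, but not exactly zero
```

Scale this up to 1,200 attributes and those small spurious weights accumulate into exactly the kind of “background” bias a careful designer is trying to avoid.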

So, we’ve discussed some parallels between an AI framework in a digital brain and the essential automatic processes controlled by synaptic chains in biologic brains. These processes form the support, the pillars of your program - without these basic functions, there wouldn’t be a place to create the intelligence part, just as without automatic breathing your body would cease to function and your brain activity would quickly follow. We also discussed how important environmental variables are in creating a successful artificial intelligence program: an environment that provides nourishment and a steady, moderated stream of external input, and that gives the AI “meaning” through the careful, informed, and deliberate selection of those environmental factors (once again, I’m referring to seismic attributes). Now let’s discuss what the output is and how to read it.

“Garbage in, garbage out”. If you’ve never heard the term, tattoo it on your wrist. If you have heard it, a refresher never hurt anyone. Logically speaking, this should be the most obvious part of the design-implementation-interpretation cycle of a neural network. Realistically however, the mindset that rules the day is often more on the “fill the bucket” side of the spectrum. What precisely is meant by “garbage in, garbage out”? Put simply, you can’t cram a bunch of unrelated parts into a car and call it an engine. It’s not an engine, it’s a bunch of garbage. The same holds true for AI outputs - were a bunch of unrelated components just randomly crammed into a space and set to run wild, or is there purpose, logic, and reason behind where everything is and how it connects? In this way a graphic of a neural network should strongly resemble that of a logic-based synaptic process, much like the image below:

As one can see from this specific example, there are input components, learning components, analytical components, predictive components, filtering components, and visualization components. Each component of the neural network needs to be in the correct place at the correct time in order for the program to run accurately. There’s really nothing special about this part…it’s rooted firmly in logic, it’s easy to understand because it’s really a series of smaller steps that make the whole, and generating a useful output requires knowing what input data would be relevant to the desired goal. More isn’t always better: deep neural nets hold no real benefit over shallow neural nets in the Oil and Gas industry - in fact, recent work seems to indicate the opposite is true - and no one is creating a Terminator in the E&P world (I hope). Leave that sci-fi stuff to Boston Dynamics. The fundamentals of what we do as geologists, geophysicists, and petroleum geoscientists cannot be forgotten in the rush to integrate “engineered intelligence” into workflows that work pretty well on their own.
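
As a rough sketch of those components working together - an input stage, a shallow learning layer, and a predictive stage - here is a minimal single-hidden-layer network trained by plain gradient descent on toy data. This is an illustrative toy under invented data and sizes, not the network pictured or any production workflow:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- input component: toy attribute matrix and a known target relation ---
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.5, -2.0, 0.5]))[:, None]

# --- learning component: one hidden layer with tanh activation ---
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(500):
    # forward pass: the analytical/predictive components
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # backward pass: gradients of the mean-squared error
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float((err**2).mean())
print(f"final training MSE: {mse:.4f}")
```

Even at this toy scale, every component has a defined place and order - scramble them and the “engine” stops being an engine.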

I leave you with the following pieces of advice for further contemplation:

1) Procure a copy of Connectome by Sebastian Seung (or comparable) and read it. Try to understand what you're reading through the lens of human psychology, not as a geoscientist. The connections between biologic and artificial intelligence will become clear as you read.

2) Commit to Logic, marry yourself to it, and never betray it. If something doesn't make sense, it doesn't make sense. Don't force a square peg into a round hole.

3) Read about AI applications in the Medical and Financial industries. You'd be surprised at the overlap between what the E&P industry is trying to do and what the medical and financial sectors have been doing successfully for decades. This prevents you from reinventing the wheel.

4) Don't reinvent the wheel. There are certain things we already know how to do very well, and haphazard integration with an unstable AI design will result in skewed, biased, illogical, or just plain nonsensical outputs.

5) Remember that AI is not divinely created, nor was the first AI program written by another AI program. This isn't a chicken-egg situation. Behind all the various processes we loosely define under the umbrella of "AI" sits a human being. That human is the creator, the "divine power" behind a neural network.

6) Understand the "I" before tackling the "AI". Not only will you be able to design more efficient networks that produce more reliable results, you will also gain insight into how (or more accurately, why) people think, form memories, identify and interpret patterns, and recall information. If you've been paying attention, these are some of the fundamental tasks of a neural network.

Finally, I implore everyone currently researching this field to pace themselves. This might be the next big thing in Oil and Gas, but we're 20 years behind other industries. Look to them for insight and ideas, as they've had enough time for their technology to be proven and to withstand scientific scrutiny. This is perhaps the easiest way around the "prove it works before we use it" death spiral - refer to the fact that these principles, and in many cases the very same code being used, were developed in the '80s and have been steadily improved upon since. Today we have an opportunity to bring new technology into our industry the right way - let's seize it by learning exactly what we're creating and how it can be integrated into our industry practices.

Nicolas Martin

Seismic Quantitative Geoscientist, ML/DL/Geothermal Integrator, Geophysical Advisor & Remote Professional Trainer

6 years ago

Great article Scotty. I agree with you...there is a tendency today in E&P to think that machine learning and AI are just new tech from the 21st century. But the truth, from my point of view, is that both are extensions of the old techniques, optimized to deal with big data and to extract hidden data-driven patterns (even big data is not a new concept in E&P, because ever since 3D surveys became standard, seismic data has been big data). Also, I agree that creating a huge number of attributes to feed the process does not necessarily create a better result. It has been shown before that totally unrelated attributes can create some kind of connection as a result of "background" bias. I prefer to use attributes that are logically related and have at least an initial deterministic relation among them and the variable to be estimated (I call it the neural spine), and let the data mining work on them to extract useful hidden patterns. Thanks.
