Differences in the impacts of AI and ML upon science and engineering

#science #engineering #culture #society #attitudes #approaches #AI #ML


Both scientists and engineers love to use the knowledge and fruits that have thus far been delivered by developments in science and engineering.

However, there remain some basic differences in the overall approaches of scientists and engineers to the subjects of science and engineering.

Scientists like to use the fruits of engineering as tools with which to pursue the practice of science.

Engineers like to use the fruits of science as tools with which to pursue the practice of engineering.

Scientists are trained to add to fields of scientific knowledge (and sometimes to create new fields) by asking and answering questions, with the express intent of furthering human understanding of how things tend to be logically and causally related, in ways that are communicable.

Engineers are trained to use existing fields of scientific knowledge, applying these to pressing human problems, with the express intent of identifying solutions that are workable and implementable.

By and large, therefore, scientists derive their greatest pleasure, joy, and satisfaction from understanding things, while engineers derive theirs from identifying and implementing solutions and answers, without necessarily (or always) caring about explanations.

This dichotomy, it seems to me, affects how scientists and engineers are likely to view AI and ML, or to adopt them, during the times that we are currently passing through.

Of course, both scientists and engineers have known about the basic concepts (as well as algorithms and statistical precepts) that underlie AI and ML, not just for a few months, or years, but for several decades now.

As someone who was trained in both disciplines [along with the hundreds of others who graduate every year after studying both disciplines over five years at the Birla Institute of Technology and Science (BITS), Pilani, an Indian academic and research institution that has encouraged and allowed students to pursue dual degrees in science and engineering for over four decades now], I can clearly remember first hearing about neural networks and artificial intelligence in the late nineteen-eighties.

By that time, I had already decided to work in science (and to use the additional training in engineering only as an aid to the science that I do). As a scientist, I can remember thinking the following (paraphrased) when I first heard about neural networks: “Oh, what’s the use of it all, if the neural network cannot explain what it has learned, in a manner that can be communicated and transmitted to future generations? What if all it happens to do is recognize patterns, based on some unexplainable learning, itself based on the weighted influencing of notional neurons by other notional neurons, over multiple layers of networked neurons?”
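
For readers unfamiliar with what that “weighted influencing over multiple layers” means, here is a minimal sketch in Python: a tiny two-layer feed-forward network with random, untrained weights, included purely as an illustration of why the numbers inside such a network resist human-readable explanation.

```python
import numpy as np

# A minimal two-layer feed-forward network, just to illustrate the
# "weighted influencing of notional neurons over multiple layers"
# described above. The weights are random placeholders, not a trained model.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 4 input features -> 3 hidden neurons -> 1 output neuron
W1 = rng.normal(size=(4, 3))   # input-to-hidden weights
b1 = rng.normal(size=3)        # hidden biases
W2 = rng.normal(size=(3, 1))   # hidden-to-output weights
b2 = rng.normal(size=1)        # output bias

def forward(x):
    hidden = sigmoid(x @ W1 + b1)       # each hidden neuron is a weighted blend of inputs
    output = sigmoid(hidden @ W2 + b2)  # the output is a weighted blend of hidden neurons
    return output

x = np.array([0.2, -1.0, 0.5, 0.3])
print(forward(x))  # a number comes out, but W1/W2 offer no human-readable explanation
```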

I also remember thinking then, and also later, that practicing engineers would love neural networks (NNs) and support vector machines (SVMs), once developments in hardware, computer memory, and parallel processing had advanced sufficiently to let artificial intelligence and machine learning transform from pipe dreams into living realities (note: of course, the terms ‘machine learning’ and ‘support vector machine’ were not yet in use in the eighties and nineties; to the best of my recall, all the talk then was only of ‘neural networks’ and ‘artificial intelligence’).

Even then, there were some computer science professionals I knew, who practiced bioinformatics and waxed eloquent about how neural networks would one day solve the ‘protein folding problem’, once processing power, processing speeds, and machine memory had all advanced sufficiently.

Well, that time is upon us.

Google DeepMind’s AlphaFold, and other programs like it, lay claim to using AI/ML to predict the three-dimensional structure of every protein molecule whose sequence is known to man.

To give friends and acquaintances some idea of what the ‘protein folding problem’ is, I have half a mind to one day create and upload a one-hour video here, just to explain the problem, in lay terms, to those who are interested. Watch this space. It is indeed a really fascinating problem, and it is also extremely fascinating that AI/ML now lays claim to having solved it.

Fascinating? Yes, absolutely!

Satisfying? No, absolutely not!

As long as DeepMind’s AlphaFold remains unable to explain how it predicts the structures of proteins, I (and many others whom I know) shall remain largely unsatisfied, even if I choose to use its outputs. I shall remain unconvinced about the extent to which it is reliable until enough examples have accumulated to show that AlphaFold, and programs like it, really work.

Interestingly, I do have some ideas about what the machine might have learned, i.e., about the patterns it must have identified (subconsciously; and I use this word with care), and about the approach it must be taking as a consequence. I shall speculate about this in the video I plan to make, when I talk about how protein sequences may have been designed by nature to fold during synthesis, upon ribosomes, in a difficult-to-demonstrate phenomenon known as co-translational folding. I think that AlphaFold might have learned to identify contiguous sections of chains that fold together as individual domains, and then to take these domains up in sequential order, beginning from the N-terminus of the protein chain and proceeding towards the C-terminus, predicting the folding and assembly of these domains in the order in which they occur in the chain. However, I cannot think of any method by which to ever verify whether what I think AlphaFold may have learned is what it actually did learn.
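
To make that speculation concrete, here is a minimal sketch of the N-to-C, domain-by-domain strategy I am imagining. Every function in it (segment_into_domains, fold_domain, dock_domain) is a hypothetical placeholder of my own invention; this is emphatically not how AlphaFold is documented to work, precisely because no such account of its learned strategy exists.

```python
# A purely speculative sketch of the hypothesized N-to-C, domain-by-domain
# folding heuristic. All functions are hypothetical stand-ins, invented for
# illustration; they are NOT any actual AlphaFold API or algorithm.

def segment_into_domains(sequence):
    """Pretend segmentation: split the chain into fixed-size 'domains'.
    A real method would have to learn domain boundaries from data."""
    size = 50
    return [sequence[i:i + size] for i in range(0, len(sequence), size)]

def fold_domain(domain):
    """Placeholder: a real predictor would return 3D coordinates here."""
    return f"<structure of {len(domain)}-residue domain>"

def dock_domain(assembly, folded_domain):
    """Placeholder: append the new domain to the growing structure,
    mimicking co-translational, N-to-C assembly on the ribosome."""
    assembly.append(folded_domain)
    return assembly

def predict_structure(sequence):
    assembly = []
    # Proceed strictly from the N-terminus towards the C-terminus,
    # folding and docking each domain in the order it occurs in the chain.
    for domain in segment_into_domains(sequence):
        assembly = dock_domain(assembly, fold_domain(domain))
    return assembly

print(predict_structure("M" * 120))  # three mock 'domains', assembled in order
```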

To a scientifically trained mind (and even to a mind that is completely open to, and accepting of, the fact that there is much that science does not yet understand about how consciousness creates experiences), this can be more than a little frustrating.

So, in summary, I can see why engineers are so gung-ho, and excited, about AI/ML, given that what they mostly care about is being able to address existing problems by finding and implementing solutions, without caring too much about how a solution was achieved, as long as it appears to be workable. Certainly, AI and ML do offer implementable solutions to many problems, even if these end up being used or misused, ethically or unethically, like every other human invention that has been exploited by samaritans and charlatans alike.

To that extent, AI and ML must satisfy most engineers, who (with exceptions, obviously) care mostly about whether something works and constitutes a solution.

Thus, as long as AI and ML are based on training sets of data that do not feed algorithms garbage or misinformation, most engineers would love to see AI and ML implemented everywhere: in predicting the upswings and downturns that markets will take, replacing actuaries with machines; in using inputs from chemical sensors to taste and assess things like tea or wine, replacing human tasters with machines (a toy sketch of this idea follows below); or in informing the spot decisions that will one day need to be taken by robots and robotic machines that replace human actions and tasks, especially grueling, tedious, or dangerous tasks, or mandatory tasks that incapacitated humans are unable to perform. And that won’t be such a bad thing, as long as these systems are used with care and not always taken at face value; exactly as we do with humans, and perhaps somewhat less trustingly so, at least to begin with.
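
As an illustration of the sensor-based tasting idea, here is a minimal sketch that trains an off-the-shelf classifier on synthetic “sensor readings”. The four channels, the quality rule, and all the numbers are invented stand-ins; a real electronic-tongue system would use measured data and far more careful validation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for chemical-sensor readings from tea samples:
# 4 hypothetical channels (e.g., acidity, tannin, aroma, sweetness signals).
rng = np.random.default_rng(42)
n_samples = 200
X = rng.normal(size=(n_samples, 4))
# Invented rule for the demo: a sample is 'acceptable' when the
# first two channels sum to a positive value.
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 1 = acceptable, 0 = rejected

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# Like the neural networks discussed earlier, the fitted model yields
# predictions without any narrative explanation of *why* a sample passes.
```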

However, I doubt that AI and ML will really satisfy practitioners of science, at least in the near future, because such practitioners derive their greatest joy from understanding things, and because AI and ML offer no real opportunity for understanding to take place.

Therefore, for the near future at least, practitioners of science are likely to view (and continue to view) AI with the same suspicion that they reserve for every report that claims to address an issue without actually explaining how the issue was addressed.

Science wants solutions to be not merely verifiable, but also understandable and communicable. This is both one of science’s greatest strengths and one of its greatest weaknesses, depending upon the context in which it is applied. As science progresses, there is no doubt that yesterday’s mysteries have become today’s facts without necessarily having been fully understood (some say that it takes about forty years for something to be either accepted or questioned), but whether AI and ML will one day come to be grudgingly accepted by the scientists of the future, despite not being fully understood, remains to be seen. Meanwhile, the above differences in the attitudes of scientists and engineers towards AI and ML are likely to remain, at least for some years, or decades, yet.
