Automated Learning for Computers Useful for the Marine Vertical

The Need for Relevance

In our second newsletter, published in September, we discussed some of the experts’ questions: “Is the chatbot that has taken the world by storm a vastly improved Large Language Model, or is it AI? And what is AI?” We saw that the experts are meticulous in demystifying ChatGPT and careful in their definitions of AI. To set the scene for this month’s sequel, we will quote from Eric Siegel’s “The AI Hype Cycle Is Distracting Companies”:

“Here’s the problem: Most people conceive of ML as “AI.” …Calling ML tools “AI” oversells what most ML business deployments actually do. In fact, you couldn’t overpromise more than you do when you call something “AI.” The moniker invokes the notion of artificial general intelligence (AGI), software capable of any intellectual task humans can do. This exacerbates a significant problem with ML projects: They often lack a keen focus on their value — exactly how ML will render business processes more effective. As a result, most ML projects fail to deliver value.”

So, our purpose in this article is to discuss learning for computers that adds value to the business enterprise. Our focus is the maritime enterprise as it exists within the reality of the maritime industry. Last month we purposefully left some questions unanswered, questions to which we now propose answers in the context of learning for computers that imparts benefits and value. The questions are: Can we have machine learning that goes beyond commands, constraints and statistical correlations, to ensure search results fit the user’s situation? Can it provide relevance? Can investors, in our case marine investors, expect timely information that warns them of risks in relation to their goals?

Human intelligence versus machine learning

Noam Chomsky, speaking from the vantage point of the science of linguistics and philosophy of knowledge, has this to say about “machine learning programs like ChatGPT”:

“However useful these programs may be in some narrow domains... we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language.”

Plans based on experience

The fundamental pillar of intelligence is to make good plans based on experience. Although most of us rarely think about it deliberately, we lead our lives by making plans. For example, hours before we prepare a meal, we take frozen food out to thaw in time. Few would disagree that this plan is good because it is born of experience. A few minutes of self-examination will tell us that we make plans in all situations. Experience, be it our own or that of our peers and elders, helps us make better plans. It makes sense, then, to think about how we build up this capability, and to examine whether it is a particularly human capability by asking if computers can do the same.

Can computers make plans?

Can computers make plans, and can they do so in any situation? Before answering, let’s first look at what plans involve. Among other things, plans involve understanding, expectation and reasoning. This, computers cannot do without an immense amount and variety of learning, because even plans that are simple for us are demonstrably hugely complex for a computer.

Take, for example, finding one’s way in the dark. As humans we use recollection of past experience. We also rely on tactile memory, because we have memories of the feel of things we may run into. Whether we are finding our way to the kitchen or the bathroom, or inching our way along a ship’s corridor, we have a fair grasp of the expected proximity of one object to another. Essentially, we plan our way in the dark from diverse spatial experiences and by knowing quite a bit about empirical physics.

Likewise, a machine, if it is going to provide relevance, must acquire spatial and enterprise experience through various situation models that include cause-and-effect logic: logic that derives from every generalized discipline needed and is then applied to each domain. The model can be as broad or as simple as the scope it serves requires, but it needs to be well organized into what affects what, and in what context, and rich in contextual variations.

Supervised learning on cause and effect

Therefore, a computing machine that has already been introduced to the parts of the generalized disciplines it needs to know is then ready for supervised learning on cause and effect: that is, learning to identify that a certain activity affects another activity inscribed within a plan (or a goal).

For example, the breakdown of a machinery function can be critical to the bollard pull efficiency of a vessel, which, if insufficient, affects the vessel’s compliance with charterers and could cause the vessel to go off-hire. In this example we are intimating that an underlying maritime enterprise model supports the computer’s learning. The computer has thus learnt the cause-and-effect relationships between an equipment component, the activities it participates in and the business goals that are at risk if the component fails to function as expected. And in the cause-and-effect propagation of the example we provided, the computer also demonstrates enough understanding of physics to trigger the propagation.
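
To make this concrete, here is a minimal sketch in Python of such a cause-and-effect model. It is an illustration only, not Ulysses Systems’ software or any real enterprise model: the node names and explanations are our own simplified assumptions.

```python
# A minimal sketch of a cause-and-effect model for the bollard pull example.
# Node names and explanations are illustrative assumptions only.
from collections import defaultdict

class CauseEffectModel:
    def __init__(self):
        # edges[cause] -> list of (effect, explanation) pairs
        self.edges = defaultdict(list)

    def add_effect(self, cause, effect, explanation):
        self.edges[cause].append((effect, explanation))

    def propagate(self, event):
        """Walk the graph from an initial event and report everything put at risk."""
        seen, frontier = set(), [event]
        while frontier:
            cause = frontier.pop()
            for effect, why in self.edges[cause]:
                if effect not in seen:
                    seen.add(effect)
                    print(f"{cause} -> {effect}: {why}")
                    frontier.append(effect)

model = CauseEffectModel()
model.add_effect("main engine breakdown", "reduced bollard pull",
                 "available thrust drops below the rated pull")
model.add_effect("reduced bollard pull", "charter-party non-compliance",
                 "the vessel no longer delivers the pull guaranteed to charterers")
model.add_effect("charter-party non-compliance", "off-hire risk",
                 "charterers may place the vessel off-hire")

model.propagate("main engine breakdown")
```

Running the sketch prints each step of the propagation, from the equipment failure through the physical attribute to the business goal at risk.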

Consequently, if information is to be of any value to an investor in a maritime enterprise, it is vital that supervised machine learning includes the physical attributes affecting the domain activities relevant to enterprise goals.

Learning by reflection, by simulation or by doing

Between learning by reflection, by simulation or by doing, which is best? Humans learn mostly from doing and from reflection, and the results of the two are very different. In fact, learning by doing is far richer than learning by reflection, and it provides much of the wide coverage in learning that humans draw upon to achieve knowledge, especially knowledge of universal opportunities or constraints. For example, we learn practically that we can fit a flexible material like a blanket into a relatively confined space like a drawer, but not a rigid material like a piece of wood the shape and size of a blanket.

And, as with learning through reflection, learning by simulation is also less rich than learning by doing.

What does this mean for a computer?

What it means is that even the best supervised computer program will not learn by doing, not to any great extent. It also means that much of the contextual influence around a goal will not be learnt either. So, for example, the computer will not know the concept of being tired or scared. Yet crew members being tired or scared during a sensitive cargo loading operation is a contributing risk factor. And obviously a computer cannot build its own model of being tired or scared, whereas a ship manager who is an ex-seaman can build a model of performing an operation in varying conditions and contexts. Of course, the computer does not tire, and fatigue is a human problem; but it follows that the computer cannot easily help people when they are tired or scared, as another person can.

Consequently, the computer, depending on the problems it needs to help with, must learn a certain number of the opportunities and constraints that humans experience and know. Then, when a machine reaches a certain level of knowledge, it can be fed process models that abstract.

Abstracted processes

Abstracted processes are those that can be used in different enterprise models. There are several reasons why processes abstract; let us mention two. Either there are processes that generalize and can constitute a whole that abstracts, or there is a model that is the kernel of a physical process that abstracts. So, when an abstracted model is introduced into a particular enterprise model, it diversifies the processes that are comparable. Accordingly, with the right supervised learning, a machine can compare what has changed in the existing model after it has been compared to an abstraction, without losing the ability to manage activities and their attributes regarding sequence or cause and effect. However, only people, experts in fact, can supervise the machine’s propensity to learn, through their personal and common experience and situational awareness of the world around the activities.
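
As an illustration of such a comparison, here is a minimal sketch assuming toy process representations. The step names and attributes are invented for the example; no real abstraction library or enterprise model is implied.

```python
# A minimal sketch of comparing a concrete enterprise process against an
# abstracted process, step by step, and reporting where it specializes.
# All step names and attributes are illustrative assumptions.

ABSTRACT_LIFT = [
    ("assess load", {"attribute": "weight"}),
    ("rig lifting gear", {"attribute": "safe working load"}),
    ("lift", {"attribute": "clearance"}),
    ("lower and secure", {"attribute": "resting position"}),
]

CONCRETE_PROVISION_CRANE = [
    ("assess load", {"attribute": "weight"}),
    ("rig provision crane", {"attribute": "safe working load"}),
    ("lift", {"attribute": "clearance"}),
    ("lower and secure", {"attribute": "lashing"}),
]

def diff_against_abstraction(abstract, concrete):
    """Walk both processes in sequence, preserving order, and report where
    the concrete process departs from the abstraction."""
    for (a_step, a_attrs), (c_step, c_attrs) in zip(abstract, concrete):
        if a_step != c_step or a_attrs != c_attrs:
            print(f"specialized: {a_step!r} {a_attrs} -> {c_step!r} {c_attrs}")
        else:
            print(f"unchanged:   {a_step!r} {a_attrs}")

diff_against_abstraction(ABSTRACT_LIFT, CONCRETE_PROVISION_CRANE)
```

The comparison walks both processes in sequence, so what has changed is reported without losing the order of the activities.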

Adaptations of generalizations

However, a machine can find candidate adaptations of generalizations on its own and match them.

Matching processes through linguistics

To do so, however, it needs to match processes using linguistics. And the matching must follow processes through many contextually persisted cause-and-effect or sequential propagations. This is actually a huge departure from current methods of computing through commands.

Machines, so far, rely on Entity Relationships (ER). This reliance means they cannot draw similarities: to recognize an attribute as being the same in two distinct systems, the machine must see the exact same wording in both systems.

A system that matches processes using linguistics and compares the way the processes propagate, by contrast, has a convincingly superior ability to discern similarity.
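
The contrast can be sketched in a few lines. The synonym table below is an illustrative assumption; a real system would draw on a maritime lexicon or learned word representations rather than a hand-written dictionary.

```python
# A minimal sketch contrasting exact-wording (ER-style) matching with
# linguistically informed matching. The synonym sets are illustrative
# assumptions only.

SYNONYMS = [
    {"hoist", "lift", "raise"},
    {"discharge", "unload"},
]

def er_match(a: str, b: str) -> bool:
    # Entity-Relationship style: identical wording only.
    return a == b

def linguistic_match(a: str, b: str) -> bool:
    # Linguistic style: two labels may be different words for the same action.
    if a == b:
        return True
    return any(a in group and b in group for group in SYNONYMS)

print(er_match("hoist", "lift"))          # False: the wording differs
print(linguistic_match("hoist", "lift"))  # True: recognized as the same action
```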

The model must be rich

Finally, to insert an abstracted model for matching to another model, the hosting model must be rich enough. As mentioned earlier, the model needs to be well organized into what affects what, and rich in contextual variations. It must be accommodating:

  • in the area that needs improving
  • in places where abstractions or similar models could apply

This means the hosting model must be able to compare and accept, or compare and reject. So, some repetition with minor changes is needed for the machine to find acceptable, similar candidate propagations to host.

Machines that provide relevance

How else do machines provide us with relevance by similarity? Fundamentally, by including different propagations with few differing attributes within the expectation of similarity. But if the kernel of a process like lifting a heavy object has a common abstraction and involves universal constraints, such as those inherent in lifting, the machine must be taught this too. Subsequently, the system may have to ask for explanations and, from these, formulate new universal constraints in the form of propagations that we would colloquially call theorems. Because when we need information to solve a physical commonality like lifting, the machine helping us must find relevant information regarding the physical and mechanical constraints, where these affect an enterprise goal.
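
Both ideas can be sketched minimally, under our own simplifying assumptions: an attribute-set view of propagations, and a toy force rule standing in for a lifting “theorem”.

```python
# A minimal sketch of similarity between propagations and of one universal
# lifting constraint. The attribute sets, the threshold and the force rule
# are illustrative assumptions only.

def propagation_similarity(attrs_a: set, attrs_b: set, max_diff: int = 2) -> bool:
    """Treat two propagations as similar when only a few attributes differ."""
    return len(attrs_a ^ attrs_b) <= max_diff

def lifting_theorem(lift_force_n: float, weight_n: float) -> bool:
    """Universal constraint: a lift succeeds only if the applied force
    exceeds the object's weight."""
    return lift_force_n > weight_n

crane_lift = {"load", "lifting gear", "clearance", "crane"}
davit_lift = {"load", "lifting gear", "clearance", "davit"}

print(propagation_similarity(crane_lift, davit_lift))        # True: only the appliance differs
print(lifting_theorem(lift_force_n=12_000, weight_n=9_800))  # True: force exceeds weight
```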

Theorems as part of automated learning

So, let us consider at which point machine learning engines need to be fed with theorems.

If the concern is to pull a boat out over a beach, how would the search engine know where to go to retrieve its internet-garnered information and provide a solution? For example, what is the explicit link between the following (see the sketch after this list):

  • dragging a heavy object
  • formulas about friction
  • solutions to the goal of overcoming friction?
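
One hedged answer is an index keyed by domain and goal, as in the minimal sketch below. The entries are illustrative assumptions on our part, not how ChatGPT or any search engine actually stores knowledge.

```python
# A minimal sketch of a knowledge base indexed by (domain, goal), linking the
# goal of dragging a heavy object to the friction formula and to candidate
# solutions. All entries are illustrative assumptions.

KNOWLEDGE = {
    ("marine", "pull boat over beach"): {
        "physics": "friction force F = mu * N (mu: friction coefficient, N: normal force)",
        "solutions": [
            "place rollers under the hull to lower mu",
            "use a winch to apply a steady pulling force",
            "wet the sand to reduce the friction coefficient",
        ],
    },
}

def retrieve(domain: str, goal: str):
    """Look up knowledge relevant to a goal within a domain."""
    return KNOWLEDGE.get((domain, goal), "no indexed knowledge for this goal")

print(retrieve("marine", "pull boat over beach"))
```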

More so, after reading the entire internet, as ChatGPT can do, how would it know where each piece of information would need to exist?

Said differently, what would help it index all the information it collects, so that it can solve one of the millions of common problems that people encounter and need help with? Would it index by goals? And whose goals? And for which domain? Because computers, like humans, consolidate their learning from successes and failures. And, more than likely, to guard against retrieval errors, humans have set millions of commands, constraints and separations of concerns by which ML engines should also be storing successes and failures.

However, this type of organized capture, storage and retrieval of information is not going to be useful in helping people find content relevant to their problems.

Relevance comes first in importance

So, for companies well entrenched on the internet to lead in machine learning solutions, providing relevance ought to come first in importance.

The question, however, is how a machine such as ChatGPT can pursue relevance in the results it returns. Because the system’s impressive ability to read through reams of text that exist on the internet does not instantiate relevance. And the system itself does not know whether the written text is about solving a problem the way people know. To quote Roger Schank (1946-2023), AI theorist, cognitive scientist, and pioneer in education:

“Human understanders know what they are trying to understand. Computers not so much.”

The goals and the mechanism to fit relevant information into plans

People can set their goals, and they have the cognitive mechanism to fit relevant information into plans, although describing how is beyond the scope of this article. Computers, on the other hand, do not have these capabilities.

The salient point, then, is that the computer cannot know where to look to find the information that serves a plan. Knowledge of the world as organized by ChatGPT and similar systems is used by people to qualify expectations, verify explanations, advance their understanding, instantiate events, and so on, because people already have a goal model in mind upon which to try out the imparted information.

Our opinion

Our opinion is that learning for a computer must be limited to one domain or sub-domain at a time.

We are certain that computer learning needs the support of a model that simulates the real-world workings of a Vertical.

Automated learning must include the propagations of activities that help achieve business goals.

The activity model must be capable of retrieving dynamically sourced information and propagating status and data relevant to a goal.

Finally, investors must have the freedom to target new areas and enrich the model in an absolutely traceable way.

Transparency and agency on the part of the person soliciting information

The above capabilities presuppose a degree of transparency and agency on the part of the person soliciting information that is unimaginable at present with current large language models (LLMs) and their algorithms, on which search engines rely.

Who defines these algorithms and what goal do they serve?

There is one thing we can say as relatively grateful, but also frustratingly passive, users of chatbots: namely, that we are the end users of non-transparent algorithms, which are the domain of the Information Technology status quo. And ChatGPT is certainly about algorithms controlled from Silicon Valley.

In other words, Marine Investors have no access to transparent algorithms that serve their domain, and they have no say in the learning material that feeds a chatbot. And perhaps Bill Gates’ euphoria about ChatGPT is a new attempt by Silicon Valley to win back the advantage it started with over the Vertical approach to applications and modelling.

Software with situational awareness must be dedicated to one vertical at a time

Let us finish with a far from trivial warning from Noam Chomsky:

“Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.”

Closing words

We sincerely hope you found value in reading “Automated Learning for Computers Useful for the Marine Vertical”. We focused on explaining the prerequisites for a machine to learn relevance. Information retrieval that is relevant to planning and achieving enterprise goals is the way to measure its success.

In a similar vein, we feel you will definitely find the article "A way of looking at the future of AI" by Dimitris Lyras insightful.

And if you enjoyed the newsletter, please help us by subscribing and sharing. Our goal with our newsletters is to enter into discussion with our readers on hot technology topics in the context of the maritime industry’s pain points. This is why our current discussion is dedicated to the need for dynamic information retrieval relevant to problem-solving, for ship managers, investors and crews onboard ships.

Ulysses Systems is a maritime software specialist. Its award-winning Task Assistant software enables office and seagoing personnel to work intuitively and efficiently, with minimal training and just-in-time information. Managers should expect a fast return on total software lifecycle cost thanks to mature process optimization, the bridging of information gaps and refined integration technologies. Ulysses Systems is currently pioneering the fast development of new annexes to its existing software, including monitoring underlying systems for cybersecurity compliance.

References:

Eric Siegel, “The AI Hype Cycle Is Distracting Companies”, Harvard Business Review: https://hbr.org/2023/06/the-ai-hype-cycle-is-distracting-companies

Noam Chomsky, Ian Roberts and Jeffrey Watumull, “The False Promise of ChatGPT”: https://english.aawsat.com/home/article/4208906/noam-chomsky-ian-roberts-and-jeffrey-watumull/noam-chomsky-false-promise

Kirk Wedge

Head of Shipping Solutions at SEDNA

1y

Good article. Meanwhile, I'm still waiting for the miracles of Blockchain to materialise :-) ChatGPT is saving me a lot of time every day, but it requires a certain level of knowledge to be useful. In particular, to detect when it is lying, which it does quite confidently.

Per Starup Sennicksen

Logician | Logistician | Humanostician

1y

Excellent write-up. AI is all about logic and data. If either one or both are not shrewd, then you are screwed.

Ray Bareiss

Executive VP and Principal AI Scientist at Socratic Arts

1y

My response to Noam Chomsky's quote, “However useful these programs may be in some narrow domains... we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language.”, is: SO WHAT? Why does that matter if they produce useful results, and do generative AI systems have to be human-like to be intelligent? Although it might be a slight tangent, I suggest reading Peter Norvig's thought-provoking article: https://www.noemamag.com/artificial-general-intelligence-is-already-here/
