Systems Theory from a Cognitive and Physicalist Perspective.

Note that this article has now been superseded by an updated version at https://rationalunderstanding.quora.com/Systems-Theory-from-a-Cognitive-and-Physicalist-Perspective-Updated

Abstract.

This paper discusses systems theory from a cognitive and physicalist perspective. The cognitive perspective holds that we are our minds and cannot escape the constraints imposed by their biology and evolutionary history. Nevertheless, human cognition is a reasonably accurate representation of reality. Physicalism holds that space-time comprises the whole of reality and that everything, including abstract concepts and information, exists within it.

From this perspective, I describe some of the main concepts in systems theory. They include: the importance of structure in forming meaningful systems; the nature of relationships, causality, and physical laws; and the significance of recursion, hierarchy, holism, and emergence. I also discuss cognitive factors including: our mental limitations; the nature of information and language; and our search for knowledge in a world of complexity and apparent disorder.

The paper concludes with the implications of this perspective for General System Theory and Social Systems Theory and suggests further work to advance these disciplines.

A downloadable pdf is available at https://rational-understanding.com/my-books/

1. Introduction.

This paper discusses systems theory from a cognitive and physicalist perspective.

The cognitive perspective holds that, no matter how much we may wish it were otherwise, we are our minds and cannot escape the constraints imposed by their biology and their evolutionary history. We cannot escape our humanity and must understand its nature if we are to understand the world we inhabit. The German-American psychologist, Ulric Neisser, sometimes referred to as the father of cognitive psychology, defined cognition as "those processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used."[1]

Physicalism is a form of philosophical realism. Realism maintains that things exist beyond the mind. However, physicalism takes this a step further, and holds that everything, including abstract concepts and information, exists in space-time. For example, justice comprises all just acts, and all just acts are events that occupy space-time. Information is matter or energy with an organised structure, and matter and energy do, of course, occupy space-time. Examples include letters on a page, electrical pulses on the internet, and neural connections in the brain.

Cognitive physicalism maintains that space-time and the entities it contains combine to form reality. It also holds that, with exceptions, our direct experience of reality is a reasonably accurate representation of it. Human cognition has evolved naturally through random mutation and natural selection. If it did not represent reality reasonably accurately, then it is unlikely that our species would have survived and become as successful as it is.

The exceptions mentioned above are, firstly, inherited biases which enable us to respond quickly to threats and opportunities without conscious thought. These are survival characteristics but do not always result in a successful response. They can also lead to errors. For example, such biases are a cause of our belief in the supernatural, the transcendent, the metaphysical, and realities inaccessible to the senses. Secondly, by “direct experience”, I mean averages in the functioning of our senses and in our interpretation of their inputs around which most of us cluster. There are, of course, individuals who deviate significantly from these averages and whose perception of reality may, therefore, be incorrect. Finally, information not acquired directly from physical reality, but obtained via third parties, can be false.

2. Description of some main concepts in systems theory.

a) Space-time.

The concept of space-time was first proposed by the German mathematician, Hermann Minkowski, in 1908.[2] It is a single continuum comprising three dimensions of space and one dimension of time. Within space-time there is a complex flux of matter and energy, parts of which constantly interact with one another and change state, much like a river in flow.

b) Physical entities.

A physical entity or, more simply, an entity is anything that occupies a region or regions of space-time. Entities can, for example, be physical objects, relationships, events, or circumstances. They can be represented diagrammatically using a simplified space-time diagram, such as the one below, in which the three spatial dimensions are condensed into one.

Figure 1. A space-time diagram showing a single entity. The entity is shown as coming into existence, travelling through space, and finally, ceasing to exist.

c) Meaningful entities.

A boundary or boundaries can be drawn around an entity separating what the entity is from what it is not. This boundary is subjective and defined by the human observer. However, we do not draw our boundaries randomly. Human beings have very strong pattern-recognition skills, that is, an ability to recognise structure, organisation, or order in an entity.[3] This is almost certainly an evolved trait, as it is shared by many other animals. When we perceive structure, our cognitive processes cause us to draw a boundary that contains and maximises that structure. A bus, for example, is perceived as a single entity rather than two or more. We do not split structure in that way unless there is good reason to do so. Nor do we include the air around the bus and the road beneath it, because they can change and are not part of the structure that we perceive.

There are infinite ways in which an entity can be disordered. The likelihood of us experiencing the same disorder more than once is therefore very small. However, the number of ways in which an entity can be ordered is finite, and so, recurrences are more likely. We also have an evolved ability to recognise such recurrences. To continue the river analogy, we recognise vortices not only because they have structure but also because they recur relatively frequently. This is a survival trait that enables us to predict the behaviour of entities from experience.

In summary, therefore, entities that are meaningful to us are those in which we recognise structure and that recur. We symbolise meaningful entities by, for example, naming them, or creating a mental image of them.

Several entities structured in the same way form a collection. The features that members of a collection hold in common are known as characteristics. Because we recognise characteristics as recurring, we also give them a name or create a mental image of them. Characteristics are often abstract. Nevertheless, they are anchored to space-time by the physical entities that possess them.

Figure 2. What do you notice about this image?

d) Static and dynamic structure.

Structure refers to patterns in the way that the parts of an entity are arranged. These give the entity the features that we recognise. However, meaningful entities can have a static structure, unchanging in time, or a dynamic one.

Static structure relies on stability. Stability, in turn, relies on the arrangement of an entity’s parts being in static or dynamic balance with the forces acting upon them.

Figure 3. Stability and instability.

A gravitational force acts on this pencil and the pencil’s orientation relative to that force determines its stability.

Examples of entities with a static structure are crystals and buildings. However, static structure is only static relative to the human observer. In practice, everything decays with time but, in many cases, this is too slow for us to notice. Some atoms, for example, persist for billions of years. Nevertheless, they were originally assembled from sub-atomic particles and may ultimately return to them. So, static structures are states of organisation that persist from a human perspective.

Entities with static structure are more likely to recur, more likely to be recognised, and more likely to be meaningful to us than ones that are less complex but apparently randomly structured.

Entities that do not have a static structure are, by definition, in a state of change. Unless there is dynamic structure to that change, we are unable to recognise recurrences. If an entity has dynamic structure, then the change taking place within it is not random, as would be the case with a decaying building. Rather, it is ordered, occurring for example in cycles. For example, a statue of a horse always occupies the same region of space irrespective of time, and so, has static structure. On the other hand, a living horse is dynamic, taking different shapes and occupying different regions of space at different times. However, the shapes that it takes when, for example, galloping occur in cycles.

Figure 4. Examples of dynamic structure.

The recurrence of entities with dynamic structure is more difficult to recognise. This is because the mental resources needed to remember a dynamic structure are much greater than those for a static one. The longer the cycle, the more the resources needed, and the less likely we are to recognise recurrences.

e) Relationships.

Usually, to depict a relationship we use an arrow between the two related entities. However, this image can be misleading. A relationship is not something separate and distinct from other physical entities. Rather, it comprises two related entities in conjunction for a period of time. When they are related, the two entities are in a different state than when they are not. Thus, a relationship is also a physical entity, albeit one comprising two parts. The nature of the relationship is the nature of the conjunction of those parts.

This implies that all entities, even physical objects, can be regarded as relationships, albeit reflexive ones.

Figure 5. The physical nature of relationships.

f) Causality.

Causality is almost ubiquitous and underlies the flow of matter and energy in the universe. However, because it is almost ubiquitous, we expect it to be entirely so. It is no surprise, therefore, that Einstein doubted the existence of the entanglement of particles, famously referring to it as “spooky action at a distance”.

Entities which recur are of a type and form collections. Causality relates entities of one type, the cause, to entities of another type, the effect. Thus, a collection of causes of a particular type is related to a collection of effects of a particular type by a collection of causal relationships. Because these relationships are, in fact, the two entities as they exist in conjunction for a period of time, the relationships are also of a type. We see that these relationships recur, and they are, therefore, meaningful to us.

One entity of a type is known as an instance. Thus, we have an instance of a cause related to an instance of an effect by an instance of a causal relationship. Which instances are related to one another is determined by the geometry of space-time. For there to be an instance of a causal relationship, an instance of a cause must begin before an instance of an effect, and the two must share a region of space-time. An instance of a cause must be an entire entity. However, an instance of an effect can be an entity for so long as it exists, or a change in its state. This change in state may be its beginning, its end, or the alteration of a characteristic. The characteristic altered may be a variable one that can be quantified, such as the entity’s mass, and so is amenable to mathematical representation. Alternatively, it may be a characteristic which cannot be quantified, such as the entity’s existence. The latter is more amenable to linguistic or logical representation.
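
These geometric preconditions can be made concrete in a short sketch. The Python fragment below condenses space to a single dimension, as in Figure 1; the Entity class and may_be_cause_of function are hypothetical names of my own, not established terminology.

```python
# A minimal sketch of the geometric preconditions for causality:
# the cause must begin before the effect, and the two must share
# a region of space-time.
from dataclasses import dataclass

@dataclass
class Entity:
    t_start: float   # when the entity comes into existence
    t_end: float     # when it ceases to exist
    x_min: float     # spatial extent (one condensed dimension)
    x_max: float

def may_be_cause_of(cause: Entity, effect: Entity) -> bool:
    begins_first = cause.t_start < effect.t_start
    share_time = cause.t_start < effect.t_end and effect.t_start < cause.t_end
    share_space = cause.x_min < effect.x_max and effect.x_min < cause.x_max
    return begins_first and share_time and share_space

blow = Entity(t_start=0.0, t_end=0.1, x_min=0.0, x_max=1.0)       # a hammer blow
movement = Entity(t_start=0.05, t_end=0.2, x_min=0.5, x_max=1.5)  # the nail moving
print(may_be_cause_of(blow, movement))  # True
```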

In summary, a causal relationship is recognised when instances of an entity of one type and those of another regularly occur in proximity. This physical proximity is necessary because an instance of a cause must pass something to its effect. In other words, a cause must have outputs which the effect takes as inputs. For example, there must be somewhere for the effect to take place. So, the instance of the cause may provide this space, e.g., a factory in which to assemble cars. However, in most cases the instance of the cause passes matter, raw energy, or information to its effect. Information, rather than being abstract and incorporeal, is organisation imposed on the matter and energy that conveys it. Matter, as Einstein pointed out, is organised energy. So, what the cause passes to the effect can be described more simply as raw or organised energy. Causality is, therefore, the general energy flux in the universe.

Relationships can be static, that is, unchanging with time, or dynamic. Physical objects, for example, can be treated as static, reflexive relationships. Causality, on the other hand, is dynamic. This is because there is a time delay between a cause and its effect. Something is passed from the cause to its effect and changes take place in the latter. Both require time.

Causes are described as being necessary or sufficient for their effect. For an effect to take place it requires certain inputs. If a cause is sufficient for an effect, then it provides all the necessary inputs. Thus, an effect always occurs in the presence of a sufficient cause. However, the same effect may result from any one of several different sufficient causes. On the other hand, if a cause is necessary for an effect, then it is the only source of some of the inputs needed by the effect. Thus, an effect cannot occur in the absence of a necessary cause.

Figure 6. Sufficient cause. This is an example of an effect being caused to begin. The effect always occurs in the presence of a sufficient cause. However, it may also occur in the presence of other sufficient causes.
Figure 7. Necessary cause. This is an example of an effect being caused to begin. The effect cannot occur in the absence of a necessary cause. However, the effect does not always occur in its presence.

It is usually the case that an effect needs several causes to provide all its inputs. Thus, causality is often more complex than a single cause leading to a single effect. The American epidemiologist, Kenneth Rothman [4], noted that several recognised and named causes may be necessary for the effect. However, it is only their unnamed and unrecognised conjunction that is sufficient for it to occur. He referred to this as the “sufficient component cause model”. For example, factory space, assembly instructions, parts, electricity, people, and machinery are all needed to manufacture cars, but only together are they sufficient.
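
Rothman’s model can be sketched by treating each sufficient cause as a set of necessary component causes: the effect occurs only when some complete set is present. The function name and data layout below are my own assumptions, not Rothman’s notation.

```python
# A minimal sketch of the sufficient component cause model, using
# the car-factory example from the text.
def effect_occurs(present_causes: set[str], sufficient_sets: list[set[str]]) -> bool:
    return any(s <= present_causes for s in sufficient_sets)

car_manufacture = [{"factory space", "instructions", "parts",
                    "electricity", "people", "machinery"}]

print(effect_occurs({"parts", "people"}, car_manufacture))  # False: necessary but not sufficient
print(effect_occurs({"factory space", "instructions", "parts",
                     "electricity", "people", "machinery"}, car_manufacture))  # True
```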

Figure 8. Several necessary causes combining to form a sufficient cause. Note that, whilst the necessary causes may be meaningful, the sufficient cause may not.

g) Causal patterns or structures.

Like other entities, causality can be structured and meaningful, or apparently random. Again, what is meaningful relies on our ability to recognise structure and its recurrence. Causal patterns are formed of several, often many, causal relationships. Unfortunately, the recognition of these patterns is far more difficult than the recognition of a single relationship. We have, therefore, recognised relatively few to date.

The Harvard Graduate School of Education lists the following [5].

  • Linear Causality. This is causality at its simplest, comprising a single sufficient cause, and a single effect.
  • Domino Causality. A chain of linear causality leading to the sequential unfolding of events over time.
  • Cyclic Causality. A chain of causality in which the types of entity alternate, for example, chickens and eggs.
  • Spiralling Causality. This is also known as a feedback loop or, more accurately, a feedback spiral. It is a circular chain of causality in which changes in the state of one entity can be a consequence of changes that the entity has previously wrought in another, either directly or via a causal chain. The classic example is a microphone placed in front of a loudspeaker: the resulting sound is a consequence of positive feedback. Negative feedback is also possible, in which a variable feature of an entity is reduced to, and maintained at, zero. Finally, regulating feedback holds the variable characteristic of an entity at a particular value; for example, a governor regulates the speed of a steam engine (a minimal simulation appears below).
  • Relational Causality. The relationship between two entities acts as a cause. For example, if one entity has a mass greater than the other, then the effect occurs but not otherwise.
  • Mutual Causality. In mutual causality two entities affect one another, for example a flea causes an effect in a dog and vice versa.

To this list, I would add the following.

  • Cascading Causality. In this structure, the components in a causal chain also affect an entity outside of the chain, steadily amplifying or reducing a variable characteristic. For example, the familiar human practice of “digging oneself into a hole”. Like feedback, cascading causality can be positive or negative.
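
Regulating feedback, described in the list above, can be illustrated with a minimal simulation. The sketch below is loosely based on the steam-engine governor example; the function name, gain, and all parameter values are illustrative assumptions, not drawn from the text.

```python
# A minimal sketch of regulating (negative) feedback: information about
# the output is fed back and used to adjust the input until the target
# value is held. All values are arbitrary.
def regulate(speed: float, target: float, gain: float = 0.5, steps: int = 10) -> float:
    for _ in range(steps):
        error = target - speed   # feedback: compare output with target
        speed += gain * error    # the governor adjusts the input
    return speed

print(round(regulate(speed=120.0, target=100.0), 2))  # approaches 100.0
# Positive feedback would instead amplify the deviation, e.g. speed += gain * speed.
```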

h) Physical laws & scientific theories.

A physical law or scientific theory is a statement of a causal relationship in which entities of one type, the cause, always result in changes to entities of another type, the effect. Often, variations in a characteristic of the cause result in variations in a characteristic of the effect. If so, then the law or theory can be expressed mathematically. However, this is not always the case and mathematics cannot always be applied.

Physical laws and scientific theories are a subset of all relationships, and the same principles apply to them.

i) Recursion.

Space-time is a continuum. Every part of space-time comprises yet smaller parts. Every part also shares smaller parts with other parts. So, because every entity occupies a region of space-time, every entity can be broken down, or disaggregated, into parts. Those parts are shared with other entities. This recursion begins with the universe in its entirety and continues downwards in scale to the sub-atomic level.

The reverse is also true. Every entity can form a part of several greater entities, and several entities can be aggregated to form a greater entity. This begins at the sub-atomic level and continues upwards in scale to the entire universe.

Because relationships, including causality, are two entities in conjunction, they are recursive in the same way. Every relationship comprises several lesser ones and is a part of greater ones.

In practice, however, we disaggregate entities into meaningful parts, known as components. For example, we would disaggregate a wall into its bricks, and not into random sections of wall or parts of bricks. Thus, we impose discreteness on the continuum in order to comprehend it.
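
The disaggregation of an entity into meaningful components can be sketched as a simple recursive walk. The nested-tuple representation below, and the wall-and-bricks data, are illustrative assumptions.

```python
# A minimal sketch of recursive disaggregation: every entity comprises
# meaningful components, which are themselves entities.
wall = ("wall", [("brick", [("clay grain", [])]),
                 ("brick", [("clay grain", [])])])

def disaggregate(entity, depth=0):
    name, components = entity
    print("  " * depth + name)       # show this level of the recursion
    for part in components:          # each part is itself an entity
        disaggregate(part, depth + 1)

disaggregate(wall)
# wall
#   brick
#     clay grain
#   brick
#     clay grain
```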

Figure 9. Diagrammatic representation of a continuum. Every entity or triangle comprises parts or circles. The parts or circles intersect to form other entities or triangles, and so on ad infinitum. Note that the parts are also entities and the entities also parts. They are shown in threes and in these shapes for ease of explanation. In practice however, they can be of any number or shape.

j) Lower limit to recursion.

However, there may be a lower limit to recursion. The lowest level of recursion known to us at present comprises the fundamental sub-atomic particles and the four fundamental forces of physics. The latter are the strong nuclear force, the weak nuclear force, the electromagnetic force, and gravity. For the present, at least, these particles appear not to comprise lesser particles, and the forces not to comprise lesser forces.

Apart from gravity, the fundamental forces have been shown to involve transfers of energy using “exchange particles”. In the strong nuclear force, the exchange particles are gluons; in the weak nuclear force, they are the W and Z bosons; and in the electromagnetic force, they are virtual photons. The latter are temporary fluctuations in energy at a point in space. These transfers occur over a period determined by the speed of light and the transfer distance. The fourth fundamental force, gravity, is believed by some physicists also to involve an exchange of particles, i.e., gravitons. However, gravitons are hypothetical, have never been detected, and the equipment needed to do so is beyond our ability to manufacture at present. Thus, the nuclear forces and the electromagnetic force are the foundation of causality.

k) Systems.

A system comprises a collection of meaningful entities that interact with one another. These are known as its processes. These processes do not necessarily result in the emergence of properties, and do not necessarily form a recurring and recognisable structure. Thus, a system is not necessarily meaningful. Systems lie anywhere on the scale of complexity from a single sub-atomic particle to the entire universe.

Systems include the necessary inputs for their processes and these inputs comprise space, matter, energy, or information. The processes also deliver outputs of the same kind. Furthermore, systems interact with one another and the outputs of one become the inputs of another. Thus, a system is more than a static physical object. It is another way of looking at causality, as demonstrated by the diagram below. [6]

Figure 10. Systems can be regarded as causes and effects. The output from one system is the input to another and this is equivalent to a transfer of space, matter, energy or information between a cause and its effect.

Like causality, a system can have several necessary inputs that only together are sufficient for its processes to function. It can also have more than one output. Thus, there can be complex causal interactions between systems.
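
As a rough illustration of systems acting as causes and effects (Figure 10), the sketch below chains two toy systems so that the output of one becomes the input of the other. The System class and both processes are hypothetical examples, not drawn from the text.

```python
# A minimal sketch of systems as causes and effects: the output of one
# system is the input of another.
class System:
    def __init__(self, name, process):
        self.name = name
        self.process = process        # transforms inputs into outputs

    def run(self, inputs):
        return self.process(inputs)

generator = System("generator", lambda fuel: fuel * 10)  # fuel -> energy
lamp = System("lamp", lambda energy: energy * 0.05)      # energy -> light

energy = generator.run(3)   # output of the first system...
light = lamp.run(energy)    # ...becomes the input of the second
print(energy, light)        # 30 1.5
```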

Systems are entities and have components in the same way as any other entity. These components are also systems and so too are their sub-components. Thus, systems are recursive.

l) Complexity.

Every physical entity can be regarded as lying on a scale of complexity, from a single sub-atomic particle or interaction to the entire universe. The term “complexity” does not imply that the entity is either ordered or disordered. Rather, it merely refers to the number of sub-atomic particles and interactions that it comprises.

Figure 11. The complexity of entities. Each white dot represents a sub-atomic particle. The complexity of an entity lies on a scale from a single sub-atomic particle to all sub-atomic particles in the universe.

Ultimately, every physical entity, its properties, and its relationships with other entities are the consequence of a complex of sub-atomic particles and their interactions. The more complex an entity, the greater the number of particles and interactions. The same is true of relationships. The complexity of a relationship is the sum of the complexity of its two components.

Figure 12. The Complexity of Relationships. Each white dot represents a sub-atomic particle. Relationships include causal ones and physical laws. Because the relationship comprises the two related entities in conjunction, its complexity grows with the complexity of the related entities.

m) Granularity.

Granularity is a measure of the extent to which a physical entity is broken down, conceptually, into parts. As the granularity of an entity increases its number of parts also increases and their complexity decreases. Least granularity comprises just two parts; greatest granularity typically comprises all the sub-atomic particles of the entity. This is approximately 7x10^28 for a human being.

Figure 13. Granularity.

n) Entropy.

Entropy is a measure of disorganisation in an entity. The concept was first introduced, in 1865, by the German physicist, Rudolf Clausius [7]. Later, the Austrian physicist, Ludwig Boltzmann, described it as a measure of the number of ways in which particles can be arranged in an entity consistent with its large-scale general condition. It has been suggested that, because entropy is a measure of disorder and because information is related to order, information is the reciprocal of entropy. This is not strictly correct. Nevertheless, entropy plays a significant role in the universe.

The second law of thermodynamics was developed in the 1850s based on the work of Rankine, Clausius, and Lord Kelvin. This law applies to closed systems, into which energy cannot enter and from which it cannot escape. The law states that, in a closed system, as energy is transformed from one state to another, some is wasted as heat. Importantly, however, the second law also states that there is a natural tendency for any isolated system to degenerate from a more ordered, low-entropy state to a more disordered, high-entropy one.
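
In standard notation, Boltzmann’s description corresponds to the first equation below, where W is the number of microscopic arrangements consistent with the entity’s large-scale condition and k_B is Boltzmann’s constant; the second expresses the tendency stated by the second law for an isolated system:

```latex
S = k_B \ln W, \qquad \Delta S \ge 0 \quad \text{(isolated system)}
```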

Overall, entropy is thought to be increasing, therefore, and the universe becoming ever more disorganised. Thus, we cannot expect everything to be a structured and recognisable entity. Locally, however, entropy can decrease, and organisation increase. Life is one example of this, but local decreases in entropy are not its sole preserve.

In 1944, the Austrian physicist, Erwin Schrödinger, raised an apparent paradox in his book “What is Life?” [8]. This was the tendency for living systems to become more organised as time progresses. This appears to contradict the second law of thermodynamics. There is no real paradox, however, because living beings are not closed systems. Rather, they use free energy from the sun. In striving to maintain their integrity they increase entropy in their surroundings, and, in total, net decay still occurs. Nevertheless, this anti-entropic behaviour is a distinctive feature of life.

o) The emergence of properties and physical laws.

As the complexity of entities increases, properties that their components do not have can emerge. The static or dynamic stability of structures has a significant part to play in this. An emergent property may, for example, be the consequence of a stabilising feedback loop between the entity’s components. It appears, therefore, that a certain level of complexity is necessary before a stable structure becomes possible.

Emergence and holism are closely related concepts. When a new property emerges, the whole is indeed more than the sum of its parts.

So, stable structure appears to emerge in discrete steps. For example, a human cell has a structure that we recognise as recurring elsewhere. However, as complexity increases, cells join apparently randomly. It is only when we reach the level of an organ, e.g., the heart, that stable structure appears again. Thus, as we ascend the scale of complexity, we have order, then disorder, then order again, and so on.

Figure 14. Increasingly complex entities. Green circles indicate meaningful entities. Red circles indicate meaningless ones. Starting from the right, A is a single meaningful component. B is a simple system with a small number of meaningful components but no recognisable structure. C is a complex system with many meaningful components but no recognisable structure. D is a recognisable entity with many meaningful components and a recognisable structure. D can, therefore, replace A, and growth in complexity can continue with D as the meaningful component.

However, stable structures do not all emerge at the same level of complexity. Rather they occur within a range of complexities. Molecules, for example, can vary in complexity from a simple hydrogen molecule to DNA, and the number of sub-atomic particles in each differs substantially.

This is reflected in granularity. Not all meaningful components that make up an entity emerge at the same level of complexity. As we rise through the levels of complexity in an entity, meaningful components emerge and then disappear, beginning with the least complex and ending with the most complex. Thus, although a stable structure emerges at a particular level of complexity the meaningful components that form it do not. Granularity must, therefore, comprise a range of complexities rather than just one level if it is to comprise meaningful components. Fortunately, such ranges are relatively narrow in comparison with the total range of complexity.

As a granularity range ascends the scale of complexity, the entity will comprise a collection of meaningful components, then not, then larger meaningful components, then not, and so on in cycles. For example, a human being can be regarded as comprising atoms, then molecules, then cells, and then organs.

This pattern is also reflected in human cognition and language. We only give names to things that are structured and that we recognise as recurring, for example, "cells" and "organs". We do not give names to entities of intermediate complexity, just descriptions such as "a clump of cells".

The same is true of physical laws and scientific theories. A relationship comprises the two related entities in conjunction for a period. Thus, stable structures that emerge with increasing complexity are related in a way that is impossible for their components. In the field of biochemistry, for example, the shape of molecules has a significant role in catalysing reactions and the manufacture of proteins. Such relationships can, of course, be causal. In turn, causal relationships can be laws of nature or scientific theories. Thus, new laws of nature or scientific theories will also emerge as the complexity of the related entities increases.

p) The hierarchy of disciplines.

As entities increase in complexity, new disciplines arise when the entities gain stable structure. These entities then form the meaningful components of the discipline. The relationships between meaningful entities emerging for each discipline can be physical laws or scientific theories. As complexity increases, entities comprise ever more meaningful components and the relationships between them. The discipline continues to describe these relationships until a level of complexity is reached at which stable structures emerge once more. A new discipline is then founded.

The components of each discipline are stable structures of relationships between meaningful sub-components. In turn, those meaningful sub-components are formed in the same way, but at a lower level of complexity. For example, organs derive their stable structure from relationships between cells. Those cells, in turn, derive their stable structure from relationships between molecules, and so on. However, a discipline considers only the components and not their sub-components or sub-sub-components.

Because they are dependent on a minimum level of complexity, the laws and theories that emerge for a discipline cannot apply to disciplines at lower levels. This leads to a hierarchy of disciplines, each with its own elementary entities, relationships, and theories, each dependent on the speciality below, and each lying on a path of increasing complexity.

The path of increasing complexity shown in the diagram below is relevant to human social systems. Each level of emergence results in a new discipline.

Social Science includes psychology, social psychology, sociology, economics, and political science.

Figure 15. The emergence of disciplines with increasing complexity.

q) Disorder and the limits to our ability to comprehend complexity.

It is in our nature to understand entities and systems in terms of their meaningful components and the interactions between them. Between levels of emergence, systems comprise interactions between multiple meaningful components that have emerged at lower levels of complexity, for example, clusters of cells. When there are relatively few meaningful components, we can comprehend the way that they interact, and this is referred to as a simple system. However, when there are many, we are unable to comprehend the way they interact, and the system is referred to as complex.

Our reason for wishing to understand a system is to predict its behaviour, to grasp opportunities, to avoid threats, and thus satisfy our needs. However, two factors conspire against this. Firstly, in very complex systems, chaos theory applies. The smallest error in any parameter can quickly become magnified by the number of interactions. Thus, the outcomes with and without the error diverge, becoming increasingly dissimilar with time. Secondly, random events can affect outcomes. For example, the radioactive decay of atoms and the appearance of “virtual particles” are thought to be entirely random. These events can have a similar effect to that of a small error. This means that we can only model complex systems a very short distance into the future before errors become significant.
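
This sensitivity to small errors can be illustrated with the logistic map, a standard toy example of chaotic behaviour (my choice of illustration, not one drawn from the text). Two starting states differing by one part in a million soon produce wholly different outcomes.

```python
# A minimal illustration of sensitive dependence on initial conditions.
def logistic(x, r=4.0):       # r = 4.0 puts the map in its chaotic regime
    return r * x * (1 - x)

a, b = 0.300000, 0.300001     # two nearly identical starting states
for _ in range(25):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # the initial error of 1e-6 has grown by many orders of magnitude
```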

There is also a limit to the number of interacting components that we can understand. As complexity, and thus, this number increases, this threshold is eventually exceeded. The interactions then appear disorderly, random, or chaotic, and thus, unpredictable. No pattern can be detected, and complex systems are, therefore, not meaningful to us.

Figure 16. Simple and complex systems.

r) Simplification.

When faced with a complex entity or system, our natural inclination is to simplify the interactions between its components, bring them back within the limits of our mental capacity, and, thus, reinstate a degree of predictability. Typically, simplification involves placing entities into broader categories using fewer shared characteristics. For example, we might use the category “forms of transport” rather than the category “cars”. We may also select variable characteristics to which a value can be given, such as weight, and create categories based on value ranges, for example “light”, “medium” and “heavy”. In this way, the number of entity types is reduced, and so too are the number of relationships between them.
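
The banding of a variable characteristic described above can be sketched in a few lines. The band boundaries and example weights below are arbitrary illustrations.

```python
# A minimal sketch of simplification by banding a variable
# characteristic into "light", "medium", and "heavy".
def weight_band(kg: float) -> str:
    if kg < 10:
        return "light"
    elif kg < 100:
        return "medium"
    return "heavy"

vehicles = {"bicycle": 12, "car": 1500, "scooter": 9}
print({name: weight_band(kg) for name, kg in vehicles.items()})
# {'bicycle': 'medium', 'car': 'heavy', 'scooter': 'light'}
```

Note that the banding discards information: a 12 kg bicycle and a 90 kg engine would fall into the same category. This is the kind of error that each round of simplification introduces.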

However, error-free simplification is only possible at a level of complexity where new stable structures emerge. If complexity becomes too great for us before that level is reached, then any simplification will introduce error. Thus, there will be a gap between two levels of emergence in which complex systems cannot be accurately understood. Furthermore, simplification only helps us to deal with increasing complexity until our threshold of comprehension is reached once more. If this occurs at a level lower than the next level of emergence, then further simplification will be needed. With each simplification comes error. So, if we wish to avoid an unacceptably large accumulation of error, then we cannot rely on pure theory, and must carry out experimental observation. For example, in attempting to understand society, simplification can lead to ethnic type-casting. Becoming acquainted with people from other ethnic communities counters this.

Figure 17. Progressive simplification.

We simplify systems if our threshold of comprehension is reached before the next level of emergence. With each simplification the threshold of comprehension can be reached once more, and we must simplify yet further. With each simplification, information is lost.

s) Divergence of paths of increasing complexity.

Increasing complexity does not follow a single path but many. One may, for example, be from a sub-atomic particle to the cosmos, and another from a sub-atomic particle to an ecosystem. These paths diverge, as shown in the diagram below, to form a tree-like structure.

Paths also combine, and branches of the tree can merge. For example, ecology is a discipline that depends not only on the life sciences but also physical sciences such as geology. Ultimately, all paths merge when the level of complexity reaches the universe in its entirety. At this point, all physical laws and scientific theories interact to form the universe as a whole.

Different natural laws and scientific theories emerge on each path. Those which emerge for life will, for example, differ from those which emerge for astrophysics.

Each discipline on a branch is dependent on the disciplines below. That is, the natural laws or scientific theories at a lower level in the branch also apply at the higher levels. However, the reverse is not true. Laws and theories that have emerged at a higher level do not apply at lower levels.

Figure 18. Diverging paths of increasing complexity. The small brown triangles represent the different disciplines, for example physics or psychology. The coloured figures represent the meaningful components used by each discipline.

t) The nature of information.

We tend to regard information as something intangible, as being in some way separate and distinct from matter and energy. But this is not so. Information is not merely conveyed by matter and energy; it is integral to it in the form of order and structure. Thus, for example, information is held in the shape of letters written on a piece of paper, in the modulation of sound waves, radio, or electrical signals, in the way that neurons are connected in the brain, in patterns of magnetisation on a hard disk, and so on. Thus, information has a physical presence in the same way as all other entities in the universe.

Information exists at source, i.e., within the entity that it describes, or in replicated form. The information held by an entity is the structure formed by its meaningful component parts and the relationships between them. For the entity to be recognised and meaningful, this structure must recur elsewhere. However, it cannot recur within the entity. The structure of a cat exists only once within the cat. It is not duplicated there, or we would have two cats. The entity’s components do, of course, have their own information content or descriptions. These do recur within the entity, but they are descriptions of the components rather than of the entity itself. Thus, they are not included in the information held by the entity.

In summary, the information held by an entity is the structure inherent in the least granularity that displays that structure just once.

Figure 19. The same entity with increasing granularity. Green circles are meaningful entities. Red circles are not. A represents the entity. B is the entity at low granularity. A recognisable structure imposed on the entity results in a small number of meaningless parts. C is the entity at medium granularity. An intermediate number of parts form an unrecognisable structure. D is the entity at a higher granularity. Several recognisable components form a recognisable structure. D is therefore the information held by the entity at source. If the components of D are replaced by A this process continues.

Imagine a typical event, for example a hammer striking a nail. Events comprise one entity interacting with another. The world is full of things which strike one another, and so, just two components, the hammer and the nail, together with the relationship between them, are sufficient to meaningfully describe the event. This then becomes the information inherent in the event. However, if it is broken down further into, say, random pieces of iron, some of which are part of the hammer and some of which are part of the nail, then no recognisable pattern of relationships exists between them. This is because the way in which these relationships are organised exists nowhere else. These components are therefore disordered and provide no information about the event.

The event can be broken down yet further into atoms. These atoms do interact in an ordered way to form the molecules of the hammer and nail. However, rather than providing information about the event, they provide information about molecules. Whilst it is true that this information is repeated elsewhere, in an asteroid for example, it is also true that it is repeated many times within both the hammer and the nail. Thus, it does not constitute information about the event.

To give a further example, stars recur in their trillions. A single star shares information at its level with all others, and so, is a recognisable entity. On the other hand, a collection of stars can take many forms, none of which are likely to recur. At this level no information is shared with other collections of stars, and so, it is not a recognisable entity. It has a description but no name. At the next level up, a galaxy does frequently recur and shares information at its level with many other galaxies. So, galaxies are recognisable entities.

Note that entropy is not the reciprocal of information at source. Entropy is understood in physics to be disorder at every level of an entity from the molecular or atomic level upwards. It is, therefore, the reciprocal of information at all levels, i.e., of the total information in an entity and in all its parts. It is not the reciprocal of information at just one particular level, as used in human reasoning.

Also note that, between the levels of complexity at which stable structures emerge, an entity must have an unstable static structure that, by definition, is in a state of change.

u) The properties of information.

The general properties of information are as follows.

  • Because information is order inherent in matter and energy, an item of information occupies a region of space-time.
  • Information is recursive. Any item of information comprises lesser parts and is a part of greater entities. Some of these parts and greater entities are meaningful, whilst others are not. Only the former are information.
  • The least or atomic component of information is the word. This is a symbol representing a meaningful entity.
  • The molecular component of information is the proposition. A proposition comprises two entities and the relationship between them.
  • An important feature of information is that it can be replicated, whilst matter and energy cannot. Structure in one place can be copied to another. The term “replication” is used because information is established in the latter place, whilst also being retained in the former.
  • Information at source comprises meaningful components in a meaningful structure. It is inherent in the least granularity that contains it just once. Information at lower levels of granularity is excluded. For example, we may name an individual person or film them, and replicate that information, but we do not replicate their cellular structure or thoughts. The matter or energy onto which the name or film is replicated may comprise an entirely different sub-structure, e.g., the cellulose of paper, the neural connections of a brain, or the magnetic particles in a hard disk.
  • Information can be transmitted from place to place causally. This can be via a medium such as a book, or via a chain of micro-causality such as that in electrical cables. In living beings, information is transmitted via DNA or RNA molecules. These media are also known as channels.
  • Information at source is, by definition, always true. However, replicated information can be true or false.
  • Information is translatable. Structure in one medium can represent, rather than replicate, a different structure in another. Notably, patterns in the physical universe are encoded as patterns in the mind and in language. My book, “The Mathematics of Language and Thought” [9] gives a detailed explanation of how physical entities and relationships translate into natural language, symbolic logic, and mathematics.
  • Although information is not the reciprocal of entropy, entropy and information are related. The second law of thermodynamics states that, in a closed system, entropy increases with time. Thus, in a closed system, any structure held by matter and energy, for example information, must decrease with time. This includes information at source or in replicated form. So, information naturally decays unless it exists in an open system and is maintained. Meaning is lost through errors of transmission. Individuals and societies forget.
  • According to the American mathematician, Claude Shannon, and the physicist, Warren Weaver [10], decay in transmission is caused by noise in the channel. Noise is anything which can alter information during its transmission. However, this theory neglects the many other ways in which human communication can fail.
  • The problem of noise and other factors interfering with communication can be minimised by redundancy. Redundancies can comprise repetition of the same component of information or duplication of the channels through which it is transmitted. They can also comprise recursion, i.e., the same component of information repeated at different scales. Information can also contain irrelevances, i.e., meaningless components which have no influence on the information content of the entity. Thus, when redundancies exist, information can often be condensed without any loss of meaning.

Figure 20. Repetition and recursion.

  • The principle of darkness [11] states that no system can be known completely by anything less complex. However, this principle assumes that the information content of a system must be replicated in the one that “knows” it, and thus, that the latter must be sufficiently complex to hold it. However, information can often be condensed without loss of meaning. Thus, the information transferred to an entity does not necessarily specify its structure in detail. However, it must provide the rules by which the structure within the entity becomes established. Thus, a modified principle of darkness would state that no system can be known completely by anything insufficiently complex to hold its information in a condensed form. Failing that, the information must be simplified and will, therefore, contain errors. Importantly, therefore, the transfer of information can provide a basis for establishment of the relationships needed for a stable structure.
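
One simple, concrete instance of condensing information without loss of meaning is run-length encoding, which exploits repetition, the first kind of redundancy listed above. It is offered here only as a stand-in for the more general idea.

```python
# A minimal sketch: repeated components are condensed to (symbol, count)
# pairs and can be expanded again with nothing lost.
from itertools import groupby

def condense(message: str) -> list[tuple[str, int]]:
    return [(char, len(list(run))) for char, run in groupby(message)]

def expand(encoded: list[tuple[str, int]]) -> str:
    return "".join(char * count for char, count in encoded)

msg = "aaaabbbcca"
packed = condense(msg)
print(packed)                  # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
print(expand(packed) == msg)   # True: no meaning has been lost
```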

v) Language.

Natural spoken language has evolved alongside human cognition. This is evidenced by the fact that there is no central language processing part of the brain. Rather, language processing is distributed throughout it. Any processing centre is concerned only with motor functions, i.e., with turning language into speech. [12]

The purpose of language is, of course, communication. We are a social species, and the aim of communication is, as far as reasonably practicable, to unite our minds and co-ordinate our behaviour. This too is a survival characteristic. To achieve this, language must resonate with how we think and how we understand the world that we inhabit. It must reflect the structure of human cognition. Natural languages contain “universals”. These are features common to every language, and the most notable is the proposition. A proposition comprises two entities and the relationship between them. A simple natural language proposition, for example, comprises a subject (entity 1), an object (entity 2), and a verb (relationship): “The apple (entity 1) is (relationship) green (entity 2)”. Here “green” is a simplification of the phrase “a green thing”. Propositions are fundamental to the way that we reason. They reflect our understanding of the universe, which comprises physical things and the relationships between them. Thus, language is, in turn, a reasonable reflection of the world that we inhabit.
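
Because the proposition has such a regular shape, it can be modelled directly as a small data structure. The Proposition class below is a hypothetical illustration, not an established formalism.

```python
# A minimal sketch of the proposition: two entities and the
# relationship between them.
from dataclasses import dataclass

@dataclass
class Proposition:
    entity1: str       # subject
    relationship: str  # verb
    entity2: str       # object

p = Proposition(entity1="the apple", relationship="is", entity2="green")
print(f"{p.entity1} {p.relationship} {p.entity2}")  # the apple is green
```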

Logic is a formalisation of natural language in which the rules of reasoning are stated clearly, leaving no room for confusion or doubt. Thus, logic demonstrates that reasoning is inherent in natural language and provides further evidence of the close relationship between language and human cognition.

Mathematics, on the other hand, although it is a formal language, is a subset of natural language. It contains the same universals, for example, 5 (entity 1) < (relationship) 6 (entity 2). However, it applies only to characteristics that are variable in nature and can be quantified, for example, the weight of an object.

w) Possible Limits to Human Knowledge.

Figure 21. Knowledge at increasing levels of complexity. The triangles represent disciplines, for example physics. The coloured dots represent the fundamental entities in each discipline, for example molecules in chemistry. The search is in the direction of increasingly complex fundamental entities.

There are difficulties in observing reality at very high levels of complexity. In general, the more ordered the elementary components and relationships within an entity, the greater the likelihood of it recurring and being recognised. However, it is also true that the more complex an entity is, the greater its size, and the greater its size, the less likely that we will be able to perceive it. Furthermore, it is less likely that an entity’s structure will recur within a timeframe that allows us to recognise its recurrence. Thus, there may be an upper threshold to complexity beyond which we are unable to perceive recurrence, and thus, recognise and name entities. This includes natural laws and scientific theories.

Figure 22. The effect of size on human perception. We can only perceive a part of large things that are close to us. Physical distance is necessary if we are to perceive the whole.

In practice, the starting point in our search for understanding was reality at the human scale, i.e., the world in which we live and its direct impacts upon us. From here, the search has not only been in the upward direction towards ever greater complexity, but also in the downward direction towards ever less complexity.

There are also practical limits to our perception of the very small and, thus, to our understanding of it. Nevertheless, both processes are ongoing, and the more we understand what underlies the sub-atomic world, the more this increases the complexity above it.

Figure 23. The Search for knowledge in practice.

x) Life.

A fundamental feature of living things is their response to information. Causes can transfer information, and, in life, this can result in an effect. People and other living organisms react to information either by processing it or exhibiting behaviour because of it. Rocks, sheets of metal, and planets do not. This response to information is the basic emergent property that defines life.

In non-living things, stability depends on geometry and forces being in balance. This does, of course, raise the questions: “What is a stable living structure?” and “What part does information have to play?”. The answer lies in self-maintenance. Living things are inherently unstable and constantly changing. However, information enables them to carry out self-maintenance, and this creates a form of stability. Imagine, for example, a tightrope walker. When information from his senses tells him that he is beginning to lose balance, he will adjust his pole to correct it. This is a regulating feedback process that depends on information and that is constantly in play throughout the tightrope walk. Similar feedback processes proliferate in a living being. Together they constitute self-maintenance, the anti-entropic feature of life.

From self-maintenance develops autopoiesis, the organism’s definition of its own boundaries, or “what is me”. People define the boundaries of a non-living thing by optimising its information content. However, the boundaries of living organisms are defined by what must be maintained. Cells, for example, have evolved membranes on their boundaries to contain and protect themselves.

In higher living organisms, self-maintenance can also include responses to information about changes in the environment. These are, of course, a survival trait and become ever more sophisticated as we ascend the evolutionary tree. Thus, a second boundary forms. Not only do we maintain “what is me” but also “what is mine”.

Social species such as humanity go on to develop boundaries around “what is us” and “what is ours”, and around “what is them” and “what is theirs”.

Finally, self-maintenance is now becoming a feature of machines and other systems designed by human beings.

3. Discussion

a) General System Theory.

General System Theory is probably best defined by a quote from one of its founders, the Austrian biologist Ludwig von Bertalanffy: "...there exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relations or ‘forces’ between them. It seems legitimate to ask for a theory, not of systems of a more or less special kind, but of universal principles applying to systems in general." [13].

A scientific theory normally begins life as a precisely stated but often speculative hypothesis. We then design experiments to support or refute it. However, what von Bertalanffy proposed is a hypothesis about a theory. Unfortunately, the only way to test his hypothesis empirically is to produce the theory. Failing to produce it does not disprove the hypothesis. In other words, what von Bertalanffy proposed cannot be disproven.

As things stand at present, a General System Theory either does not yet exist, or if it does, no-one recognises the fact.

In his paper “An Outline of General System Theory (1950)” [14], von Bertalanffy offers several isomorphisms as evidence for a General System Theory. The term isomorphism means the same structure in different places. In this context, it means the same structure in different laws in different disciplines. The examples von Bertalanffy cites are:

  • The law of exponential growth or decay, i.e., the rate of growth or decay of a parameter is proportional to the value of the parameter.
  • The logistic law, i.e., the increase in a parameter, although initially exponential, is limited by some restricting conditions.
  • The parabolic law. This describes competition within a system, each element taking its share according to its capacity, as expressed by a specific constant. The parabolic law underlies Pareto’s law of the distribution of income within a nation, i.e., that roughly 20% of the population receive 80% of the income.
  • The principle of least action, i.e., that true motion is the optimum out of all possible motions.
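
For reference, the first two of these isomorphisms have familiar textbook forms, where x is the parameter, k a constant of growth or decay, and K the limit imposed by the restricting conditions:

```latex
\frac{dx}{dt} = kx \quad \text{(exponential growth or decay)}, \qquad
\frac{dx}{dt} = kx\left(1 - \frac{x}{K}\right) \quad \text{(logistic law)}
```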

However, these principles are very fundamental, and underlie even physics. So, one would expect them to apply in every discipline where physics has a part to play.

Von Bertalanffy offered one further isomorphism which cannot be expressed mathematically. This is:

  • “…the formation of a whole animal out of a divided sea-urchin or newt germ, the re-establishment of normal function in the central nervous system after removal or injury to some of its parts, and gestalt perception in psychology.”

However, this unnamed principle applies only to life, and has probably emerged with life.

It is in our nature to seek order in the world around us, and we have evolved a powerful ability to quickly recognise recurring structure. However, like all evolved traits, this ability can sometimes lead us astray. A similar evolved trait, known as the “hyperactive agency detection device”, was proposed by the American psychologist Justin Barrett [15]. This trait enables us to quickly recognise stalking predators without conscious thought. Unfortunately, because it is hyperactive, it also causes us to believe that there is agency in things which, in fact, have none. It is, therefore, thought to be a significant cause of our religious nature. The same seems to be possible when it comes to our powerful ability to recognise structure. We may unconsciously sense similarities of structure when in fact there are none. We may also have an unconscious expectation that there is just one structure underlying all others. In other words, we may have an unconscious bias which causes us to seek a “theory of everything”.

There are, however, reasons to doubt that a "theory of everything" is possible. There may be no single set of laws applicable to all systems. Rather, there may be multiple, wholly independent laws that interact to create the universe we know. Human social systems, for example, would then have their own set of laws, some shared with less complex levels and some particular to the social field.

Because a physical law or scientific theory that requires a minimum level of complexity before it emerges cannot also apply in a field below that level, the only similarities between sub-atomic systems and living systems, for example, will be those that rely solely on the laws of sub-atomic systems.

Furthermore, there are different paths of increasing complexity, with new laws emerging on each path. There is no reason to believe that the laws emerging on one path will share common features with the laws emerging on another. The only commonality will be features that depend on laws that emerged before the two paths diverged. Our inability to unify the four fundamental forces of physics adds weight to this argument.

Further weight is added by Kurt Gödel's incompleteness theorems, which imply that no finite, or even effectively listable, set of axioms suffices for the whole of mathematics. Axioms are fundamental statements that are accepted without proof, and one axiom cannot be derived from another. Mathematics is a formal language and a subset of natural language, so natural language must also have unbounded rules. Natural language, in turn, is a reflection of human cognition, so there are also unbounded rules of human cognition. Finally, human cognition is a reflection of physical reality, and so there may be infinitely many laws in physical reality. This is not, of course, proof of the absence of a theory of everything. Rather, it is evidence to suggest that there is no single law from which all others can be derived.
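
For reference, the first incompleteness theorem can be stated as follows; this is a standard formulation, not anything specific to this paper:

  For any consistent, effectively axiomatised theory $T$ that includes basic arithmetic, there is a sentence $G_T$ such that $T \nvdash G_T$ and $T \nvdash \lnot G_T$.

Adding $G_T$ as a new axiom merely yields a stronger theory with its own undecidable sentence, which is why no finite, or even computably listable, stock of axioms can ever be complete.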

A possible way of testing for a theory of everything would be to map isomorphisms onto a diagram of the disciplines and the dependencies between them. This would be a tree-like structure with particle physics at its trunk and more complex disciplines, such as ecology, on its branches, some of which may merge in places. One would expect isomorphisms as one ascends, because of the dependencies between disciplines. However, if natural laws or theories emerge or vanish as one ascends, then this is evidence that there is no theory of everything. A toy version of this test is sketched below.
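
As an illustration of how such a test might be prototyped, the following Python sketch builds a small dependency graph of disciplines and reports where law-structures emerge or vanish as one ascends. It is hypothetical throughout: the disciplines, the edges, and the law labels are invented for the example, and a real test would need an agreed catalogue of both.

    # Hypothetical sketch: map isomorphisms onto a dependency diagram of
    # disciplines. The graph and the law labels are invented for illustration.

    disciplines = {
        "particle physics": [],                 # the trunk: no dependencies
        "chemistry": ["particle physics"],
        "biology": ["chemistry"],
        "ecology": ["biology"],                 # one branch...
        "economics": ["biology"],               # ...and a diverging one
    }

    # Law-structures (isomorphisms) claimed to hold in each discipline.
    laws = {
        "particle physics": {"least action", "exponential law"},
        "chemistry": {"least action", "exponential law"},
        "biology": {"exponential law", "logistic law"},
        "ecology": {"exponential law", "logistic law"},
        "economics": {"exponential law", "parabolic law"},
    }

    def ancestors(d):
        """All disciplines that d depends on, directly or indirectly."""
        seen, stack = set(), list(disciplines[d])
        while stack:
            parent = stack.pop()
            if parent not in seen:
                seen.add(parent)
                stack.extend(disciplines[parent])
        return seen

    # A law present in a discipline but absent from all its ancestors has
    # emerged on that branch; a law present below but absent here has
    # vanished. Either finding counts against one law underlying all others.
    for d in disciplines:
        inherited = set()
        for a in ancestors(d):
            inherited |= laws[a]
        emerged, vanished = laws[d] - inherited, inherited - laws[d]
        if emerged or vanished:
            print(f"{d}: emerged {sorted(emerged)}, vanished {sorted(vanished)}")

Run as given, the sketch reports, for instance, that the logistic law "emerges" at biology, which is exactly the kind of finding that would tell against a single underlying law.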

So, a General System Theory appears to be possible, but only one based on the fundamental principles that underlie physics. Because new laws emerge in more complex disciplines, its usefulness will be limited, and it cannot be regarded as a "theory of everything".

b) Social Systems Theory

In the absence of a useful General System Theory, I advocate that effort be focused on our greatest concern: society and its difficulties. That effort should, I suggest, include the following areas.

  • Causality. Patterns of causality are powerful tools for understanding social trends and events. Potentially, they also offer a way of managing those trends and events by intervening in the causal process. However, because causal patterns are dynamic and unfold over time, they are more difficult to recognise than static ones. Tools are, therefore, needed to assist us.
  • Information. Information and our ability to act on it are a feature of life in general, and of humanity in particular. Human needs, emotions, culture, values, norms, and beliefs are all information. They are held in the minds of individual people, in the way the brain is structured; in a larger organisation, they are held in the minds of its members; and they are also held in any other places where information is stored, such as computers and paper files. The relative priority of our needs and the levels of our emotions and satisfaction are variable characteristics held in the same way. Thus, much of social systems theory is concerned with the flow of information.
  • Language. Language is the most powerful tool we have for representing reality. Mathematics is a specialised subset of language, covering only variable characteristics that can be measured. I recommend, therefore, that language in its full richness be used as a basis for modelling society.
  • Organisations. An organisation can be defined as any group of people who come together and act with a common purpose. Organisations, in this very general sense, include families, clubs, businesses, nations, groups of nations, and humanity as a whole. Such organisations are inherently unstable and would not persist without self-maintenance: if a parameter deviates from an optimum, an agent can gain this information and act to correct it (see the feedback sketch after this list). Self-maintenance stabilises organisations, giving them not only dynamic structure, such as cycles, but also structured changes to those cycles. In the physical world, stability is a balance of the forces acting on an object and the arrangement of its parts in space-time. In an organisation, the equivalent of those forces is the state of the organisation's needs, and the equivalent of arrangement in space-time is its patterns of behaviour. In a stable organisation, the pattern of behaviour satisfies all needs.
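
The self-maintenance mechanism in the last item is, in essence, a negative feedback loop. The sketch below illustrates it with a single parameter and a proportional correction; the parameter, optimum, gain, and disturbances are hypothetical values chosen for the example.

    # Minimal sketch of self-maintenance as a negative feedback loop.
    # The parameter, optimum, gain, and disturbances are hypothetical.

    import random

    optimum = 100.0      # the value the organisation tries to hold
    parameter = 100.0    # e.g. staffing level, cash reserve, output rate
    gain = 0.5           # how strongly the agent corrects each deviation

    random.seed(1)
    for step in range(10):
        parameter += random.uniform(-10.0, 10.0)   # external disturbance
        deviation = parameter - optimum            # agent senses the deviation...
        parameter -= gain * deviation              # ...and acts to reduce it
        print(f"step {step}: parameter = {parameter:.1f}")

With no correction (gain = 0), the parameter drifts without limit; with correction, it stays near the optimum. This is the dynamic stability described above.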

In the social sciences, the search is for valid causal rules, or theories, yet to be identified, which emerge at high levels of complexity. As with other entities, however, their recognition depends on their recurrence. Unfortunately, the greater the level of complexity, the less frequently these recurrences will be observed; and the larger the scale at which they operate, the less likely it is that we will recognise them.

To add to these difficulties, human behaviour is driven by emotion, knowledge, and reasoning skills. If we were to recognise a new causal rule, this would constitute new knowledge and thus alter our behaviour, which might, in turn, invalidate the theory. For example, if it becomes known that an event, x, always causes war, then, whenever x is encountered, effort will be put into avoiding its consequence. So, rather than stating that "x always causes war", the theory should in fact state that "x, together with ignorance that x causes war and the absence of any other inhibitors, always causes war".
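
In symbols, writing $K(p)$ for "it is known that $p$" and $I$ for the presence of some other inhibitor (notation chosen for this example), the amended rule reads:

  $\big(x \,\land\, \lnot K(x \rightarrow \text{war}) \,\land\, \lnot I\big) \;\rightarrow\; \text{war}$

That is, $x$ leads to war only when the causal rule itself is unknown and nothing else intervenes.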

The decreasing likelihood of our recognising causal rules or theories as complexity increases could well mean that there is a maximum level of complexity beyond which human recognition cannot occur. If so, then beyond that point causal rules are unrecognisable, and so unaffected by human agency. However, whilst they remain fixed, we are unable to recognise and take advantage of them.

If, for example, they could be identified using advanced artificial intelligence, or by a General System Meta-theory, then the newly discovered causal rules would no longer be fixed, and the threshold would move upwards. This increased knowledge would, of course, also alter our culture.

Given the difficulties described above, the reader may question whether new social knowledge is worth pursuing. Personally, I believe that it is, but that an ethical framework is needed to control its use. New knowledge can change human behaviour for the better, for example by avoiding war. However, it can also change it for the worse: an elite may keep the knowledge to itself and use it to manipulate others, as may already be the case in politics and advertising. To avoid this, it is important that there be open access to any new knowledge in the social sciences, that it be used ethically, and that these requirements be policed.

4. References

1. Neisser, U. (1967). Cognitive Psychology. Englewood Cliffs: Prentice-Hall. ISBN 978-0131396678.

2. Minkowski, H. (1908). Space and Time. Lecture given at the 80th Meeting of Natural Scientists in Cologne on September 21, 1908. (English translation by Lewertoff, F. and Petkov, V. (2012). Space and Time: Minkowski's Papers on Relativity. Montreal: Minkowski Institute Press. ISBN 978-0-9879871-2-9.)

3. Eysenck, M. W. and Keane, M. T. (2003). Cognitive Psychology: A Student's Handbook (4th ed.). Hove: Taylor & Francis. ISBN 978-0863775512.

4. Rothman, K. J. (1976). Causes. American Journal of Epidemiology, 104(6), 587–592. https://doi.org/10.1093/oxfordjournals.aje.a112335

5. Grotzer, T. (2010). Causal Patterns in Science: A Professional Development Resource. Harvard Graduate School of Education. https://www.causalpatterns.org/causal/causal_types.php

6. Korn, J. (2022). Science and Design of Problem Solving Systems. Leicester: Troubador.

7. Clausius, R. (1867). The Mechanical Theory of Heat – with its Applications to the Steam Engine and to Physical Properties of Bodies. London: John van Voorst.

8. Schrödinger, E. (1992). What is Life? The Physical Aspect of the Living Cell; with Mind and Matter; & Autobiographical Sketches. Cambridge: Cambridge University Press.

9. Challoner, J. A. (2021). The Mathematics of Language and Thought. Open access at https://rational-understanding.com/my-books/

10. Shannon, C. E. and Weaver, W. (1949). The Mathematical Theory of Communication. Urbana: The University of Illinois Press.

11. Cilliers, P. (1998). Complexity and Postmodernism: Understanding Complex Systems. London: Routledge.

12. Evans, V. (2014). The evidence is in: there is no language instinct. Aeon Essays. https://aeon.co/essays/the-evidence-is-in-there-is-no-language-instinct

13. von Bertalanffy, L. (1968). General System Theory: Foundations, Development, Applications. New York: Braziller.

14. von Bertalanffy, L. (1950). An outline of general system theory. British Journal for the Philosophy of Science, 1, 134–165.

15. Barrett, J. L. (2004). Why Would Anyone Believe in God? Walnut Creek, CA: AltaMira Press.
