From Arithmetic to Quantum Mechanics

Why do we calculate multiplications first in expressions?


A simple question that goes a long way

A few days ago, my 12-year-old son, famous among those who know him for his tricky questions and intellectual challenges, asked me what at first seemed a trivial question: “Dad, why do we calculate multiplications and divisions first, and then additions and subtractions in expressions?”

A simple question, but a complex answer, I thought. At first, I considered answering by referring to the mathematical rules taught in school, mentioning the famous order of operations (PEMDAS: Parentheses, Exponents, Multiplication, Division, Addition, Subtraction). But he already knows this well, and during our conversation, it became clear that he was aware that the precedence of multiplication and division over addition and subtraction derives from the properties of operations, such as the distributive property. So, I realized later, the question wasn’t just the expression of a child’s curiosity but rather the push of a deeper curiosity, which would be more accurately described as scientific and philosophical in essence.

So, I started reflecting, and the thought of answering this question brought to light a story that begins at the dawn of mathematics, crosses several disciplines, and arrives at the foundations of modern physics, in quantum mechanics. And this is the story I’d like to briefly share here, starting from this seemingly simple question.

Where does the rule of precedence of operations come from?

When we evaluate an expression like 2 + 3 × 4, the fact that multiplication comes before addition is not just a modern convention but a necessity that mathematicians developed to solve complex problems. Imagine a world where there was no clear rule: if everyone could interpret 2 + 3 × 4 in their own way, we’d get different results each time! This confusion didn’t sit well with those who wanted to build a solid mathematical foundation, like the great mathematicians of the past.

A bit of history

When we ask why rules like operator precedence exist, it’s interesting to understand that these weren’t always as we know them today. It took centuries of mathematical evolution to arrive at the “conventions” we now take for granted.

Ancient Greece: Logic before notation

In ancient Greece, mathematics was closely linked to geometry and logic, as we can see in the works of Euclid. In his most famous work, The Elements, Euclid didn’t use symbols like + or × for addition and multiplication, nor did he have a modern concept of notation for mathematical operations. However, Greek mathematicians reasoned rigorously to solve complex problems. For example, Euclid demonstrated the properties of geometric figures through logical inferences without ever relying on symbolic formulas or operator precedence rules as we know them today. Mathematical operations were expressed discursively and required a more complex language, as they lacked standard symbols for addition, multiplication, or division. Nonetheless, we could say that their organization of thoughts and demonstrations implied a primitive form of "precedence" in logical operations: the steps were ordered strictly to ensure correctness.

The Greek approach to mathematics focused more on geometric demonstrations than numerical calculations, and for this reason, the question of how to order operations wasn’t yet a central theme. However, their mathematical inferences laid the groundwork for the logical thinking that would influence future generations.

Middle Ages: Algebra takes shape in the Islamic world

The next crucial step in the development of the mathematical rules we know today took place during the Golden Age of Islam (around the 8th to 13th centuries), when scholars like Al-Khwarizmi (perhaps his name rings a bell for those who design and implement algorithms) began developing algebra. The term “algebra” itself comes from the title of one of his works, Al-Kitab al-Mukhtasar fi Hisab al-Jabr wa'l-Muqabala, which can be translated as “The Compendium on Calculation by Completion and Balancing.” I’ve only skimmed through it, so don’t quiz me on it!

Al-Khwarizmi was one of the first to formalize algebraic operations to solve linear and quadratic equations. However, even at this stage, there was no clear symbolic notation or standardized rules of operator precedence. Problems were described in words, and operations were performed according to context. For example, if you had to multiply or add, the process was explained textually, without the use of mathematical symbols like + or ×.

Despite this, Al-Khwarizmi helped lay the foundations for symbolic algebra, which would influence future mathematical traditions. Islamic scholars understood the potential of this systematic approach to calculation, paving the way for the development of more precise rules for handling mathematical operations clearly and systematically.

Renaissance: The rise of symbolic notation

It was during the Renaissance in Europe that mathematics became more symbolic, and clearer rules began to emerge for manipulating algebraic expressions. Here, two key figures come into play: François Viète and René Descartes.

François Viète (1540–1603), one of the founders of modern symbolic algebra, was among the first to use symbols to represent variables in equations. Viète introduced a more systematic algebraic language, using letters to indicate known and unknown quantities, an essential step for the future formalization of rules.

René Descartes (1596–1650) took it a step further in his work La Géométrie, where he formalized the use of algebraic variables and introduced a more rigorous way to perform geometric and algebraic calculations. Descartes was also the first to use notations like a² to indicate the power of a number, and he popularized the use of symbolic notation, including symbols like +, −, and × for addition, subtraction, and multiplication. With these innovations, the need for a standard order of operations became increasingly evident.

During this period, the rules of operator precedence began to clearly emerge. It was essential to have a shared convention that established that multiplication should be performed before addition unless other constraints (what we would call parentheses) were present. These rules prevented ambiguity in calculations, especially when dealing with more complex expressions.

I remember writing a paper on these topics as an assignment during a math course at university.

From practice to rule: the need for clarity

The birth of symbolic notation made it necessary to standardize how operations were performed. Without precise rules, mathematical expressions would have remained ambiguous. Throughout the 17th and 18th centuries, the conventions we know today —such as multiplication taking precedence over addition— became established, leading to a coherent and shared system of calculation.

So, when my son asked me why we calculate multiplications first in expressions, I took the opportunity to show him that a rule we take for granted and accept almost dogmatically is actually the result of centuries of mathematical evolution, where each generation of mathematicians contributed to clarifying and structuring our way of thinking about numbers and operations. What appears to be a simple practical rule has a historical and conceptual depth that can take us far.

And indeed, I didn’t stop there.

Why does multiplication come before addition?

Multiplication is, in a sense, a "stronger" operation than addition. Think about how many times you can add a number to get the same result as a multiplication. If you want to add 3 four times, you get 12: 3 + 3 + 3 + 3 = 12, which is the same result as 3 × 4 = 12. So, multiplication represents a more compact and powerful way of performing repeated addition. Giving multiplication priority simplifies many operations. Kids like my son learn this in elementary school.

But this is just the less interesting part of the story. From a theoretical perspective, the definition of operator precedence has been consolidated through the study of mathematical logic and abstract algebra.

For example, there are fundamental mathematical properties that influence how we group operators. We’re talking about the associative and commutative properties. Both addition and multiplication are commutative (the order of the operands doesn’t change the result) and associative (you can group the operations without changing the result). In other words, for expressions like a + b + c or a × b × c, the order in which you perform the operations doesn’t matter—you’ll always get the same result.

But what happens when we mix different operations, like addition and multiplication? This is where things get complicated, and it’s precisely to avoid ambiguity that we need a hierarchy of execution. Without a clear rule, expressions like a + b × c could lead to different results depending on the order of operations.
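A quick sketch in Python makes the ambiguity tangible; the variable names here are just illustrative:

```python
# Two possible readings of a + b * c, with illustrative values.
a, b, c = 2, 3, 4

multiplication_first = a + (b * c)   # the conventional reading
left_to_right = (a + b) * c          # naive strict left-to-right evaluation

print(multiplication_first)  # 14
print(left_to_right)         # 20
```

Without an agreed hierarchy, both readings are equally defensible, and the same expression would denote two different numbers.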

The importance of abstract algebra

This brings us to algebraic structures like groups, rings, and fields, more advanced concepts in abstract algebra, where addition and multiplication are defined in specific ways. For example, the theory of integers is defined as what’s called a commutative ring. A ring is an algebraic structure that satisfies certain properties for two operations, which in the case of integers are addition and multiplication. The ring Z has the following properties:

  • Closure: The addition and multiplication of two integers result in another integer.
  • Associativity: Both addition and multiplication are associative.
  • Identity Element: There are identity elements for both addition and multiplication.
  • Inverse Element: Every integer has an additive inverse. (Multiplicative inverses are generally absent in Z—only 1 and −1 have them—which is precisely why the integers form a ring rather than a field.)
  • Commutativity: Both addition and multiplication are commutative.
  • Distributivity: Multiplication is distributive over addition.

These are, in simpler terms, the things all children learn between elementary and middle school. In this context, multiplication naturally takes precedence over addition to ensure consistency with the distributive law, that is:

a(b + c) = ab + ac

This is a fundamental principle, not only in the arithmetic we learn as children but also in advanced algebra. Without this rule, calculations would become ambiguous, and mathematics, which relies on clarity and consistency, wouldn’t be as reliable. So, as you can see, the properties are defined in the abstract. You understand why multiplication comes before addition, but this fact is the result of centuries of mathematical thought and practice.
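To make the distributive law concrete, here is a minimal Python spot-check over random integer triples; the function name `distributes` is my own, purely illustrative:

```python
import random

def distributes(a: int, b: int, c: int) -> bool:
    """Check the distributive law a*(b + c) == a*b + a*c for one triple."""
    return a * (b + c) == a * b + a * c

random.seed(0)
samples = [tuple(random.randint(-100, 100) for _ in range(3)) for _ in range(1000)]
print(all(distributes(a, b, c) for a, b, c in samples))  # True
```

A random check is of course no proof, but it illustrates how the abstract ring axiom plays out on the concrete integers.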

So, even though the rules of operator precedence seem simple, their evolution is a much more complex story. Thanks to this evolution, we can perform calculations efficiently, without ambiguity, and with a shared logic.

However, as I understood it, my son’s question was actually a different one: “Do things have to be this way? Can we define different rules for operators?”

Rules aren’t absolute truths, but useful tools

I don’t want to delve too deeply into philosophy here, even though the philosophy of mathematics offers plenty of insights to explore these topics. But it’s worth noting that philosophers like Ludwig Wittgenstein show us that mathematical rules aren’t eternal truths but rather tools —some would say conventions (though I disagree on that point)— that we’ve constructed and adopted because they work. There’s no “natural law” that says multiplication must take precedence over addition; it’s just a way to ensure that we all get the same results when doing calculations. In the end, mathematics is a language that has emerged to communicate and solve problems.

In this context, mathematical rules, including operator precedence, only gain meaning through their use within shared practices, like the language games Wittgenstein speaks of. Their justification lies in their practical utility and their role in organizing our mathematical understanding, not in an independent mathematical reality. The mathematical rules in use are thus part of a form of life: a set of shared practices that give meaning to human actions. I know this only in part overlaps with what I think about mathematics, but it helps frame the topic.

Returning to my son’s earlier question about whether things have to be this way, the answer might be: no, they don’t have to be. These rules give meaning to a specific set of practices useful to humans, and changing them would strip those practices of meaning, thereby undermining the form of life based on them. However, defining different rules is always possible, and beyond being an expression of human creativity, it could be motivated by countless factors —including the observation of new phenomena— or could give rise to new practices that might prove useful to humanity.

Mathematics, long confined to the realm of cold rationality and rigidity, is, in fact, anything but rigid. Its beauty lies in its scope, abstraction, and flexibility—in a word, its freedom.

That’s the beauty of considering rules as something we humans create to solve problems and build worlds. At this point, one might be tempted to ask: okay, but what problems can this way of thinking help us solve, and what worlds can we build? And indeed, my son asked me that, in his own way.

From Arithmetic to Algebras: A leap towards abstraction

So, I had to continue. The rules for operator precedence can be seen in a different —and non-dogmatic— light when we move from arithmetic to the more abstract structures of algebra, where the definition of the properties of the operators themselves comes into play (e.g., associativity, commutativity, and distributivity). These properties establish, as we’ve seen before, how the operations work in practice. It’s no longer just about specific sets of numbers, but about structures of sets, variables, and abstract, more general operations.

Operations in algebra can even lose some of the properties that we consider fundamental in arithmetic. But they can gain others. For example, we can build non-commutative algebras, where the order of operations makes a difference. It’s a bit like saying that 3 × 4 might not be equal to 4 × 3, which may seem strange at first, but it makes deep sense when we explore certain processes and phenomena, many of which are part of our everyday experience.

And this brings us to the part that made my eyes light up when I answered my son.

Some examples of applications

The fact that we create rules to organize our experience isn’t just a fantasy. There are many notable practical applications of these abstract structures where operators follow different rules than those of arithmetic.

Computability Theory and Automata Theory

Computability theory, of which automata theory is essentially a part, brings us into a world where mathematics meets logic and computer science. In this field, operations aren’t just the classic ones we use in arithmetic expressions; they transform into instructions that determine the behavior of a machine or system.

Imagine an automaton as a simple robot that follows instructions —in reality, it’s an abstract machine whose description is captured by an algebraic structure. The operators in this context are the “actions” the robot performs, like “move forward” or “turn right”. Here, operator precedence becomes crucial. If an automaton has to choose between two actions—maybe walking or turning—which one should it perform first? In automata theory, we can define rules that establish the priority among different instructions depending on the state the automaton is in, and these rules don’t always follow the conventional logic we know from arithmetic. The operators can, in fact, be constructed based on the specific needs of a given algorithm or computational system.
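The idea of state-dependent action priority can be sketched as a toy automaton. Everything here—the states, the actions, and the `TRANSITIONS` table—is invented for illustration, not drawn from any standard formalism or library:

```python
# A toy automaton whose action priority depends on its current state.
# For each state: actions listed in priority order (highest first),
# each mapped to the state it leads to.
TRANSITIONS = {
    "start":  [("walk", "moving"), ("turn", "start")],
    "moving": [("turn", "start"), ("walk", "moving")],
}

def step(state):
    """Perform the highest-priority action available in `state`."""
    action, next_state = TRANSITIONS[state][0]
    return action, next_state

state = "start"
trace = []
for _ in range(4):
    action, state = step(state)
    trace.append(action)

print(trace)  # ['walk', 'turn', 'walk', 'turn']
```

The point is that the "precedence" among actions is something we define per state, to suit the system being modeled—not a universal rule inherited from arithmetic.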

More generally, this idea expands further in computability theory. Computability theory studies what can be computed and how. Even here, the definition of operators with unconventional precedence allows for the creation of abstract machines that can perform any kind of computation, from the simplest to the most complex. This allows us to rigorously define any algorithm and express the functions that can be computed by a machine, like a computer, for example.

Formal Language Theory: The hidden rules behind programs

This digression brings us straight to formal language theory, closely tied to computability theory and the heart of every programming language. Many of us have used or still use languages like Python, Java, or C++. My son is taking his first steps in this world and is learning Python. So, imagine his curiosity: how do computers run programs like video games? Do computers just execute programs, or can they also think? A flurry of questions that I had to dodge, opening Pandora’s box. And then you think, what does the order of operations have to do with everyday life…?

However, when you write a line of code, there’s an entire system behind the scenes that determines how and in what order the operations are executed. And here, too, the definition of operator precedence rules comes into play.

Just like in mathematics, in programming languages, you need to establish a clear order to avoid misunderstandings. Take, for example, the following expression in Python:

result = 2 + 3 * 4

If the language didn’t follow the rule of giving precedence to multiplication, it could easily interpret the expression as (2 + 3) × 4 = 20 instead of the correct 2 + (3 × 4) = 14. The rules of precedence ensure that the code does what we expect it to do.
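We can confirm this, and the effect of parentheses, directly in the interpreter:

```python
result = 2 + 3 * 4     # multiplication binds tighter: 2 + 12
print(result)          # 14

print((2 + 3) * 4)     # 20: parentheses override the default precedence
print(2 ** 3 ** 2)     # 512: exponentiation even associates right-to-left in Python
```

The last line shows that precedence rules also cover associativity: Python’s `**` groups from the right, so `2 ** 3 ** 2` means `2 ** (3 ** 2)`, not `(2 ** 3) ** 2`.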

In formal language theory, these processes are rigorously defined. Every programming language has a syntax that follows precise rules. These rules determine how to combine symbols and operators to form valid expressions and, most importantly, in what order to execute the operations. It’s as if the language follows its own "operating manual," where every operation has a well-defined level of priority —not just arithmetic ones, but logical and other operations as well.

These rules are crucial for avoiding ambiguity and confusion. Imagine if every programming language used a different logic for operator precedence: a program written in one language could yield completely different results if executed in another!
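We can actually peek at this machinery in Python itself: the standard-library `ast` module exposes the parse tree, and precedence is visible in the tree’s shape—the tighter-binding operation sits deeper:

```python
import ast

# In "2 + 3 * 4" the multiplication node is nested inside the addition node:
# the grammar itself encodes operator precedence in the parse tree's shape.
tree = ast.parse("2 + 3 * 4", mode="eval")
top = tree.body

print(type(top.op).__name__)        # Add: addition is the outermost operation
print(type(top.right.op).__name__)  # Mult: multiplication sits deeper in the tree
```

This is exactly what formal language theory means by removing ambiguity: the grammar admits only one parse tree for the expression, so every conforming interpreter computes the same result.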

Of course, this applies both to standard operations and to the definition and implementation of new operations. It’s wonderful to see that this knowledge and technology offer us tools to tackle a variety of scenarios with freedom and creativity.

Algebras with multiple basic operators: Lie Algebras

Then there are algebraic structures that use multiple operators, and in these cases, it may be necessary to define priorities between the operators themselves. An interesting example is found in Lie algebras, where operations between elements (called commutators) do not follow traditional commutativity.

Essentially, a Lie algebra is an algebraic structure that describes a set of elements with a binary operation called a commutator or Lie bracket. This operation is similar to multiplication but is non-commutative and satisfies certain particular properties, making it useful for describing continuous symmetries and transformations in physics and mathematics. I know —I almost lost him at this point, but I spared him the details that I’ll share here:

A Lie algebra is formally defined as a pair (G, [·,·]), where:

  • G is a set (a vector space) whose elements can be vectors, matrices, or operators.
  • [·,·] is a binary operation called a commutator or Lie bracket, which takes two elements of G and returns another element of G.

This operation satisfies two fundamental properties:

  • Antisymmetry: [x, y] = −[y, x]. This means that the commutator of two elements flips sign when the elements are swapped. The operation is non-commutative, meaning that, in general, [x, y] ≠ [y, x].
  • Jacobi Identity: [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0. This property ensures a consistent structure for the commutator and allows us to systematically describe relationships between continuous transformations.
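Both properties can be checked numerically for the matrix commutator [A, B] = AB − BA, the standard example of a Lie bracket. Here is a minimal sketch using plain-Python 2×2 integer matrices; all helper names are my own:

```python
import random

# 2x2 matrices as nested lists; the commutator [A, B] = AB - BA gives
# square matrices the structure of a Lie algebra.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def neg(A):
    return [[-A[i][j] for j in range(2)] for i in range(2)]

def bracket(A, B):
    """Lie bracket (commutator): [A, B] = AB - BA."""
    return sub(mul(A, B), mul(B, A))

random.seed(1)
def rand():
    return [[random.randint(-5, 5) for _ in range(2)] for _ in range(2)]

X, Y, Z = rand(), rand(), rand()

# Antisymmetry: [X, Y] == -[Y, X]
print(bracket(X, Y) == neg(bracket(Y, X)))  # True

# Jacobi identity: [X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] == 0
jacobi = add(add(bracket(X, bracket(Y, Z)),
                 bracket(Y, bracket(Z, X))),
             bracket(Z, bracket(X, Y)))
print(jacobi == [[0, 0], [0, 0]])  # True
```

Both identities hold exactly for any matrices, which is why "matrices under the commutator" is the go-to concrete model of a Lie algebra.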

When order matters

Traditional commutativity states that the order of elements in an operation doesn’t affect the result. For example, in ordinary multiplication of real numbers, a × b = b × a. However, in Lie algebras, the commutator operation is non-commutative. This means that, as mentioned earlier, the order of the elements affects the result, i.e., [x, y] ≠ [y, x].

The commutator is an operation that measures how much two transformations or operations differ when applied in different orders. This is crucial in physics because many transformations (like rotations or symmetries) are not independent of the order in which they are performed.

Lie algebras have applications in many areas of physics, especially in fields involving symmetries and groups, such as quantum mechanics, field theory, and general relativity. Their ability to describe continuous symmetries makes them particularly useful for studying complex physical systems, like fundamental interactions and the structure of spacetime.

Example of non-commutativity: Angular momentum

Before diving into non-commutativity in quantum mechanics, it’s helpful to take a step back and clarify what angular momentum is. In classical physics, it’s quite intuitive: imagine a spinning top. Angular momentum L is a quantity that describes how fast and in which direction it’s rotating. It’s calculated with a simple formula:

L = r × p

Where:

  • r is the position of the top relative to the center of rotation,
  • p is the linear momentum (which depends on mass and velocity).

This angular momentum is important because if there are no external forces, it remains conserved. It’s like saying that if no one pushes the top, it will keep spinning in exactly the same way forever. The conservation of angular momentum is a fundamental law, and we also see it on a larger scale, such as in the motion of planets around the sun.

Angular momentum in quantum mechanics: where things change

But in quantum mechanics, things get much more interesting. We’re not just talking about bodies rotating in space but also subatomic particles like electrons and their spin —an intrinsic property that doesn’t have a direct classical counterpart. It’s as if particles had an "internal rotation," even though they’re not spinning through space like a planet.

Angular momentum in quantum mechanics is described by operators (rather than simple vectors) called Jx, Jy, and Jz, which represent the components of angular momentum along the three axes of space. These operators have a very peculiar feature: they don’t follow classical rules. While in classical physics, you can measure all components of angular momentum without any issues, in quantum mechanics, there’s a fundamental limitation. And this is where non-commutativity comes into play.

In summary

These operators Jx, Jy, and Jz just so happen to follow the rules of the Lie algebra we described earlier, and because of this, they are non-commutative. But what does this mean? It means that the order of operations changes the result. If you apply a rotation along the x-axis and then along the y-axis, you’ll get a different result compared to doing it the other way around. Mathematically, the commutators between these operators are:

[Jx, Jy] = iħJz,   [Jy, Jz] = iħJx,   [Jz, Jx] = iħJy

This equation may seem complicated, but the key point is that the order in which you perform rotations matters! It’s not like multiplying 2 × 3, which is the same as 3 × 2; here, changing the order changes the result. If you apply Jx and then Jy, you get one thing; if you apply Jy and then Jx, you get something else.
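For readers who want to see this concretely: in the spin-1/2 representation (and setting ħ = 1), the angular momentum operators are Jk = σk/2, where σk are the Pauli matrices, and they satisfy [Jx, Jy] = iJz along with its cyclic permutations. A quick numerical check:

```python
import numpy as np

# Spin-1/2 angular momentum operators (hbar = 1): J_k = sigma_k / 2,
# built from the Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Jx, Jy, Jz = sx / 2, sy / 2, sz / 2

def commutator(A, B):
    """[A, B] = AB - BA."""
    return A @ B - B @ A

# The defining relations of the angular momentum Lie algebra:
print(np.allclose(commutator(Jx, Jy), 1j * Jz))  # True
print(np.allclose(commutator(Jy, Jz), 1j * Jx))  # True
print(np.allclose(commutator(Jz, Jx), 1j * Jy))  # True
```

Note that none of the commutators is zero: no pair of components commutes, which is the algebraic shadow of the fact that no two components of angular momentum can be measured simultaneously with arbitrary precision.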

Why is non-commutativity so important?

Non-commutativity isn’t just a mathematical oddity. It has a profound physical significance. In quantum mechanics, this property reflects fundamental evidence: we cannot measure all components of angular momentum precisely at the same time. If you try to measure Jx, your measurement disturbs Jy and Jz, and vice versa. This is directly linked to the famous Heisenberg Uncertainty Principle, which states that we cannot simultaneously know both the position and momentum of a particle with precision.

Thus, the non-commutativity of operators in quantum mechanics is one of the reasons why the quantum world seems so different from the classical world. It reflects the fact that particles aren’t just little balls that we can observe easily; they’re complex, elusive entities whose behavior changes depending on how we observe them and in what order we perform certain operations.

From an innocent question to a deep connection with the universe

When my son asked me, “Why do we calculate multiplications first?”, he didn’t know that he had just touched on one of the deepest connections between mathematics, physics, and computer science. The answer I gave him started with an explanation of an arithmetic rule but went far beyond… perhaps even too far. After all, I did write a LinkedIn post about it!

However, the way we calculate expressions reflects a long evolution of mathematical thought and the emergence of abstract languages that have allowed us to better understand the physical world around and inside us. The very rules we start learning in school are the first step towards understanding complex concepts that somehow describe what we consider the fundamental laws of the universe.

So, the next time someone asks you why we calculate multiplications before additions in expressions, remember that the answer doesn’t stop at a simple practical rule or mnemonic trick. Behind every “convention” is a rich history spanning centuries, starting from humanity’s first attempts to organize calculations, passing through deep philosophical reflections on the nature of mathematical rules, and arriving at the most exciting discoveries of science.

When we teach these rules to our children or ask similar questions ourselves, we’re really venturing into a world of interconnected ideas. This adventure pushes us to cultivate curiosity and the critical thinking needed to ask why things are the way they are.

Critical thinking, openness to the world, and the rejection of dogmatism are the real outcomes we can hope to achieve even through these seemingly simple questions. And with them comes an attitude, even an ethical one, towards solving the problems that affect us all, both small and large, on this planet every day.

And this approach is exactly what I strive for at OXHY, where finding apparently remote connections, like those underlying macroscopic applications of Quantum Mechanics, Energy Harvesting, and Engineering, is key to developing disruptive and impactful technologies. By embracing complexity and building systems rooted in deep mathematical and scientific understanding, we can push the boundaries of innovation, creating solutions that not only meet today’s challenges but shape the future in groundbreaking ways.

Alessandro L.

Information engineer

1 month ago

Thank you for this very interesting article! I have the feeling that Geometric Algebra, where the geometric product is not commutative, is very effective in expressing concepts in a more compact and clearer way. Thanks to GA I think I have finally understood complex numbers and I think it's because commutativity is a more artificial concept compared to the rest of our mathematical models and that it was limiting us in describing reality.
