Hyperoperations Implementation in Python, Part 1.
AJRobbins at English Wikipedia: https://en.wikipedia.org/wiki/Tetration#/media/File:TetrationComplexColor.png


A Note about Audience

This article doesn't really have a specific audience. It's probably not going to be interesting to someone interested in general "programming" topics. It also leans toward what I would call mathy, but for any real/trained mathematician, it is probably so basic that it won't be interesting at all. It also follows my typical informal, overly verbose style. If you are not a patient reader, you should save your time and either just read the code or skip the whole thing altogether. If you do happen to find anything useful or interesting in this at all, please let me know. I'm interested in knowing the background of folks who find this of any interest.

The Code

https://github.com/casten/number/blob/main/Number.py

Begin

I had some thoughts while falling asleep late one night about the nature of basic arithmetic and common mathematical operations such as addition, multiplication, division, etc. I've always had a fascination with the mechanics of how these operations we learn in elementary school can be calculated by a variety of analog, and later digital, machines. There are some great videos by Chris Staecker that cover many of these. I was thinking about division by zero, which high school math generally describes as either undefined or, if pressed with an interpretation by limits, as negative or positive infinity depending on the direction you are coming from. I recalled that some mechanical calculators get stuck in an infinite loop when you divide by zero. Generally this is because these particular ones (maybe most?) implement division by subtracting the divisor from the dividend until some condition is met. When they subtract 0 from part of the dividend, it remains unchanged and the halting condition is never met. So the machine just keeps turning and turning until it is manually stopped.

The whole idea that division by zero is disallowed, undefined, etc. seems very asymmetric to me. I understand some things are just like that in math. Nevertheless, we can add zero, subtract zero, multiply by zero, and divide zero by another number, but as soon as we divide by zero, that's not OK.

I decided it shouldn't be too hard to write a program that actually implemented a number and some basic arithmetic operations. I'm no mathematician, with no higher-level math experience beyond college Calculus, some Differential Equations, and Linear Algebra. And maybe a little Discrete Math. But the adding machines did it, so why couldn't I?

The plan was to implement everything with a couple of basic Python language primitives. A number would be a list of arbitrary things, or units. It didn't matter what they were, but there would be one for every unit in the number. The number 0 would have zero of these units, 1 would have one, 2 would have two of them, etc. This is what is called a unary representation. I'd store these things in a Python list, and this list would keep track of the number.
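Here is a tiny illustration of the representation on its own, before any of the class machinery below (the names are just for illustration):

# Unary representation: one placeholder element per unit.
zero = []                # 0 -> no units
one = ['x']              # 1 -> one unit
three = ['x', 'x', 'x']  # 3 -> three units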

Next, we want to be able to compare two numbers. Originally I just used the len() function on the list to get the numeric representation. But I felt like that was cheating, so now comparisons are done by walking along one list and chopping the head off the comparison list. If the first list runs out and the second has also, they are equal. If, while walking the first, the second runs out, the first is bigger. If the first runs out but there is more in the second, it is smaller. Again, we could convert to base 10 and compare in Python directly, but we want to avoid using conventional numbers and operators.

So we have the unary representation and a comparison operation; the last thing we initially need is a way to interact with the number. A number is no good if you can't do any work or calculations with it. So the fundamental operation we'll use is increment (often called successor). It will be implemented as an operation where you call inc() and what you get back is a number that is one unit larger. This will be the foundation for addition and the other operations.

As a side note, since we are representing numbers as these lists of units, I decided I wanted to implement this in what can be considered a "functional" style. This is not to say I am going to program using functional-style operators in Python. By functional, I mean that the operations, when applied to a number, return the result as a completely new number. This makes each number immutable, and the series of actions for doing a calculation becomes a sequence of new results that depend on, but are separate in representation from, the original ones. There's no strong reason for this choice, but I think it helps reinforce the hierarchical and re-applicational(?) structure of what is to follow. Like the other implementation choices, this one is extremely inefficient memory-wise. The whole thing is extremely inefficient with the unary representation. But efficiency is not the goal here.

Finally, to tighten up things operationally, I wrapped the number object in a Python class. This allows operations to be on the number itself rather than a group of compatible functions. It's just a style thing.

Construction

@property
def state(self):
    return self._state

def __init__(self, copy_state=None):
    if None is copy_state:
        self._state = []  # Unary zero
    else:
        self._state = copy_state.copy()

@staticmethod
def create_zero():
    #  I kind of hate to add this, but it might make creating a 0
    #  clearer for new readers
    return number()        

The code above shows the member variable that holds the unary representation. It's exposed as a read-only property so callers can't reassign the state, which is how I enforce immutability. I didn't really like how things looked with Python's Final annotation or final decorator.

__init__ allows the construction of a "zero" number or making a new number with the same value as one passed in.

The create_zero() function is a completely unnecessary helper. I added it so that when new numbers are made, it is clear they are zeros.

inc()

def inc(self):
    # Make a new number that is the previous list + a new element
    # (conscious effort to not use the + operator here even if it
    #  is just a cosmetic distinction)
    return number([*self.state, 'x'])        

Here's inc(). It takes the list of state and returns a new one with an 'x' slapped on the end. That 'x' is the unit, but it could be anything.
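A quick usage sketch (the variable names are just for illustration, and printing state is only for display):

zero = number.create_zero()
one = zero.inc()
two = one.inc()
print(two.state)   # ['x', 'x']
print(zero.state)  # [] -- the original number is untouched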

compare()


def compare(self, b):
    for _ in self.state:
        try:
            b.state[0]          # does b still have a head element?
            b = b.dec()
        except IndexError:
            return "greater"    # b ran out first
    try:
        b.state[0]
        return "less"           # we ran out and b still has units left
    except IndexError:
        return "equal"          # both ran out together

Here is compare(), which compares one number with another. Once again, we could call Python's len() on the list, but that's using numbers, so we'll use some brute force here. We'll walk through the units of our number and, as we go, chop the head off the one we are comparing against. If we try to remove more items than the comparee has, it will raise an exception and we'll know the comparer is greater. If we walk all of the comparer's units and there are any left in the comparee, the comparer is less. Otherwise, if the comparee ran out at the same time, the sizes are equal.

And you may notice we are using number.dec(). We're not going to do much else with it now, but you might imagine it will eventually be useful for subtraction. For now, here is the definition.

def dec(self):
    try:
        self.state[0]   # is there at least one unit to remove?
        return number(self.state[1:])
    except IndexError:
        raise ValueError("Oops.  You tried to decrement a zero.  Just natural numbers for now.")

First we check if there is an element. If the list is empty (zero), we raise an exception; for now we'll only support natural numbers. If there is at least one element, we return a new number that is a copy of the original with the first element removed.
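A quick check with small values, assuming the class is defined as above (the names are just for illustration):

two = number.create_zero().inc().inc()
three = two.inc()
print(two.compare(three))        # "less"
print(three.compare(two))        # "greater"
print(three.dec().compare(two))  # "equal"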

What's next? inc(inc)!!!

Now that we have a number with a representation, an inc(), and some other tools, we can implement addition!

def add(self, b):
    # 0th iteration is identity
    result = number(self.state)
    for _ in b.state:
        result = result.inc()
    return result        

Here's addition. We implement it by walking the elements of the addend (here named 'b') and incrementing the augend (self) once for each. And of course we make and return a copy rather than modify the augend directly.

We've made addition by walking and incrementing. There are no tables, concatenation, or chunking. One could imagine implementations that use those sorts of operations to be more efficient, but here we are going for the simplest and most dependency-free version.

You'll notice the comment about the 0th iteration. This actually gets interesting when we compare it against higher-order operations like multiplication and power. If the addend is empty/zero, no walking happens and we return the original value. So zero is the identity value, which is right for standard addition: X + 0 = X.
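For example, a small sketch (len() is used here only to display the results, not by the implementation):

two = number.create_zero().inc().inc()
three = two.inc()
five = two.add(three)
print(len(five.state))                             # 5
print(two.add(number.create_zero()).compare(two))  # "equal" -- X + 0 = X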

Mul and Pow

Multiplication and power are much like addition. But just as add() is built on walking inc() operations, mul() walks and does add(), and pow() does likewise with mul(). This building of more complicated operations on top of the previous one is called hyperoperation. Here are the implementations:

def mul(self, b):
    # 0th iteration is 0
    result = number.create_zero()
    for _ in b.state:
        result = result.add(self)
    return result

def pow(self, b):
    # 0th iteration is 1
    result = number.create_zero().inc()
    for _ in b.state:
        result = result.mul(self)
    return result        

You'll notice the only difference between them is the initial state before the walking. For multiplication, the 0th iteration returns 0. X * 0 = 0. And for power, it is 1. X^0 = 1.
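A quick check of both (again, len() only to display results):

two = number.create_zero().inc().inc()
three = two.inc()
print(len(two.mul(three).state))  # 6
print(len(two.pow(three).state))  # 8
zero = number.create_zero()
print(len(two.mul(zero).state))   # 0 -- X * 0 = 0
print(len(two.pow(zero).state))   # 1 -- X ^ 0 = 1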

Where Does It End?

Exponentiation gets big fast. But there is no reason why the hyperoperations need to stop at the third level above the successor function. For basic algebraic expressions, we use forms like "y = Ax^2 + Bx + C" where the calculation is made up of combinations of exponents, factors, and constants. If we go one step higher than exponents, we get something you don't generally learn in high school algebra. The next hyperoperator past power/exponent is called tetration. It is basically the repeated application of the power function and results in powers raised to powers raised to powers, etc. So the tetration of 2 to 3 is the same as 2 to the power of (2 to the power of 2), or 2^(2^2), or 2^4, which equals 16. There are various notations for this, none of which look great given this basic LinkedIn article's markup limits. A typical one is M↑↑N (Knuth's up-arrow notation), but here I'll just use a double caret: M^^N.
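To make the right-to-left grouping concrete, here is a plain-integer sketch (no unary class involved) that builds the tower the way tetration is defined:

# Compute base^^height: start from 1 and repeatedly take base ** result,
# which groups the tower from the top down, i.e. 2^(2^2) for 2^^3.
base, height = 2, 3
result = 1
for _ in range(height):
    result = base ** result
print(result)  # 16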

I'll note that the utility of this function is fairly limited in practical terms. The basic implementation in this system looks like:

def tetr(self, b):
    # Stupid piecewise definition
    try:
        b.state[0]
    except IndexError:
        # 0th iteration is 1
        return number.create_zero().inc()
    result = number(self.state)
    once = True
    # For the non-zero tetration case, you loop b - 1 times.
    for _ in b.state:
        if once:  # Skip the first so do one less
            once = False
            continue
        # The tower groups right-to-left, so apply self ^ result,
        # not result ^ self.
        result = self.pow(result)
    return result
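Before going into the definition, a quick check against the 2^^3 example above (len() only for display):

two = number.create_zero().inc().inc()
three = two.inc()
print(len(two.tetr(three).state))  # 16, i.e. 2^(2^2)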

Again, we follow the implementation choice of indexing an element to test for a non-empty list. len() would work, but this is how I chose to do it, and it does hurt readability. The definition of tetration is piecewise, depending on the passed value: N^^0 = 1 with no iteration; otherwise power is applied (n-1) times. Tetration is a new idea to me and I'll admit why it needs to be defined this way is unclear to me. But I suspect the 0th iteration needs to be 1 to satisfy some higher-level algebraic requirements, much like the other operations. For example, without the special initial cases for add, mul, and pow, the functions would produce behavior that would not meet the algebraic properties we find useful for our typical operators. In the next article I'll cover how I attempted to create an alternate set of simpler hyperoperators. But as a teaser, the resulting operations broke properties like commutativity. Finally, it's not obvious what writing typical algebraic equations with the tetration operator should or would look like. The general polynomial form has terms for each hyperoperator combination. There is probably some generalized math (matrix?) for constructing polynomial-like forms for equations with hyperoperators of arbitrary degree. For a tetration-compatible polynomial, maybe it looks something like:

y = A*(x^N)*(x^^M) + B*x^O + Cx + D

The first term, A*(x^N)*(x^^M), is a guess based on the form of lower-order polynomials like Ax^N + Bx + C. Let's define a generic hyperoperator notation H, where H(J, K, O) is the hyperoperation of the quantities J and K using the Oth-order hyperoperator. Some examples:

y = Ax^m + Bx + C

can be written as

part 1.: Ax^m = H(A, H(x,m,3), 2)

part 2.: Bx = H(B,x,2)

part 3.: C

so

part 1. + part 2. = H(H(A, H(x,m,3), 2), H(B,x,2), 1)

part 1. + part 2. + part 3. = H(H(H(A, H(x,m,3), 2), H(B,x,2), 1), C, 1)

Obviously Ax^m + Bx + C is a lot more concise than the representation above. Not only is adding additional hyperoperations not obvious (to me at least), but without some simplification mechanism or rules, going from that form to a more normal-looking polynomial is not going to be easy.
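To make the nesting a little more concrete, here is a sketch that maps the H(J, K, O) notation onto the class. H() and from_int() are hypothetical helpers invented just for this example, and from_int() cheats by using a regular Python int to build test values:

# Order 1 = add, 2 = mul, 3 = pow, 4 = tetr.
def H(j, k, order):
    return {1: j.add, 2: j.mul, 3: j.pow, 4: j.tetr}[order](k)

def from_int(i):
    # Build a unary number from a plain int (demo convenience only).
    n = number.create_zero()
    for _ in range(i):
        n = n.inc()
    return n

A, B, C = from_int(2), from_int(3), from_int(1)
x, m = from_int(2), from_int(3)
# part 1 + part 2 + part 3 from above, i.e. A*x^m + B*x + C
y = H(H(H(A, H(x, m, 3), 2), H(B, x, 2), 1), C, 1)
print(len(y.state))  # 23 = 2*2^3 + 3*2 + 1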

But we can see the layering of hyperoperations. Perhaps with Tetration it might be:

y = Zx^^n + Ax^m + Bx + C

But I can't help feeling that the Zx^^n part might need an x^m factor, since x^^n grows so quickly that a simple additional factor F for (A+F)x^m is not going to provide complete coverage of the range between Zx^^n and Ax^m.

Ok, maybe that was an interesting diversion. But I hit the limit of what I can usefully think about. It is sort of neat though to think about the relationship between functional application and how we represent plain old algebra for the same operations and representations.

Factoring The Code


The previously shown definitions of the hyperoperators have some redundant code and structure. One could refactor the code to share the walking and just swap in the initial condition and the per-step operation for each operator. The code might look something like:

defs = {
   # For each op: [initial value built from self, step applied once per unit of b]
   "inc": [lambda a: number([*a.state, 'x']), lambda r, a: r],
   "add": [lambda a: a, lambda r, a: r.inc()],
   "mul": [lambda a: number.create_zero(), lambda r, a: r.add(a)],
   "pow": [lambda a: number.create_zero().inc(), lambda r, a: r.mul(a)],
   "tetr": [lambda a: number.create_zero().inc(), lambda r, a: a.pow(r)],
}
def hyperoperate(self, op, b=None):
    # A default of number.create_zero() would be evaluated before the class
    # exists, so default to None and build the zero here.
    if b is None:
        b = number.create_zero()
    result = defs[op][0](self)
    for _ in b.state:
        result = defs[op][1](result, self)
    return result

While that is pretty unreadable, I think it is still provocative. Calls look like:

one = number.create_zero().hyperoperate("inc")
two = one.hyperoperate("add", one)
four = two.hyperoperate("mul", two)
sixteen = four.hyperoperate("pow", two)
three = two.hyperoperate("inc")
sixteen_b = two.hyperoperate("tetr", three)  # 2^^3 = 2^(2^2) = 16

While working on an iteration of the code that looked a bit like this, I started to wonder what the behavior would be of hyperoperators that didn't use the standard pre-loop initializers for arithmetic operations with zero. But more on that in the next article.

See the next article here:
