A Distributed Programming Language is Possible
Brian Greenforest
Chiplets, LLM, CV, SDR, Massively Parallel Distributed Compute, Nonlinearity, Causality, Consciousness
A distributed programming language is a surprisingly simple idea. Imagine you have an int main() {} function. That function calls other functions. In a typical program, those functions live in local files; in a fast, compiled program, those files don't even exist at execution time, because everything has been turned into binary machine instructions.
Now think about a dynamic programming language, like Python or JavaScript. There, the files that the "main()" function calls can remain physical files, in the same shape as we commit them to a repository.
In ordinary local programming, all the sub-functions called from "main()" reside in local files, and it's easy to reason about them or change them. You can always locate the source code, modify it, and see what happens, debugging the code locally on your machine.
Now, just one leap is required to make it distributed: demand that sub-functions can be run on multiple threads, on many cores, in hybrid environments (GPUs, other interpreters, FPGAs), or on remote machines linked over a network.
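The leap can be sketched in a few lines: the same sub-function that runs as an ordinary local call can be handed to a thread pool unchanged. (Remote dispatch would follow the same shape with an RPC transport in place of the pool; that part is not shown here.)

```python
# A minimal sketch: one sub-function, two execution environments.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# Ordinary local call.
local_result = square(7)

# The same function, dispatched to worker threads instead.
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded_results = list(pool.map(square, range(5)))

print(local_result, threaded_results)
```

The point is that square itself never changes; only the environment that runs it does.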
I think that was the initial idea behind "Remote Procedure Call (RPC)". But the way we deployed it at scale ended up being the firewalled, client-server "secure" architecture.
And that is the reason we got the "ownership" tree wrong in the first place in this microservices/service-oriented architecture.
If you're working in a "staging" environment, you can access the root of the entire code tree from your machine: your machine holds the "main()" function. The web server is then just a class instance, modifiable in a local source file. All connected clients are instances of another source file, which is also local. And because you own the root, when you change the source code of the client class, it gets uploaded to the server first and then applied to the running clients, implementing a "hot reload" process.
But from your perspective as a developer, you are just editing a sub-function that is actively running.
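A crude version of that hot-reload step fits in a sketch. The class names (Server, Client) and the hot_reload helper are hypothetical illustrations, not a real API; the core trick shown is that Python lets you reassign a live instance's __class__.

```python
# Hedged sketch: a root program that owns server and client instances
# as ordinary objects, and swaps in edited code while they run.

class Client:
    def greet(self):
        return "hello v1"

class Server:
    def __init__(self):
        self.clients = []

def hot_reload(instances, new_class):
    # Reassign the class of live instances in place -- the crude
    # core of a hot-reload mechanism.
    for obj in instances:
        obj.__class__ = new_class

server = Server()
server.clients.append(Client())

class ClientV2(Client):           # stands in for the edited source file
    def greet(self):
        return "hello v2"

hot_reload(server.clients, ClientV2)
print(server.clients[0].greet())  # the running instance now uses the edit
```

A real system would ship the new source over the network first; the in-place class swap is the part that makes the client feel like "a sub-function you are editing while it runs".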
To make all this work, it was necessary to make "functions" a stateful mix between objects from OOP and functions from FP.
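In Python, that OOP/FP mix is naturally expressed as a callable object: it holds state like an object, but is invoked like a function. A minimal sketch (the Counter name is just an illustration):

```python
# A "stateful function": instance state from OOP, call syntax from FP.
class Counter:
    def __init__(self):
        self.count = 0           # state that survives between calls

    def __call__(self, step=1):  # makes the instance callable
        self.count += step
        return self.count

tick = Counter()
tick()
tick()
print(tick.count)
```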
So instances can be part of the "grand program structure", and you can commit the existing set of connected clients to GitHub while your entire distributed web application is running--right from your dev machine.
Of course, a careful balance between what counts as "runtime data" and what counts as an "inherent part of the source code" is important. You don't want to commit the entire database to a git repo. The same issue exists in digital circuit design, where, on power-up, all memory cells are typically initialized with random values. A balance also becomes important for the concept of prototype inheritance, because in general it's impossible to copy an existing "live" object: streams of input data make its state never "complete".
This very issue is what made constructor functions important in software, and reset wires important in hardware.
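The runtime-data/source split above can be sketched directly. The class and field names here (ClientInstance, stream_buffer, committable) are hypothetical: the idea is that a live instance knows which part of itself belongs in the repository and which part is ephemeral input.

```python
# Hedged sketch: separating the committable "source" part of a live
# instance from its runtime data.
import json

class ClientInstance:
    def __init__(self, name):
        self.name = name           # structural: belongs in the repo
        self.stream_buffer = []    # runtime data: never committed

    def committable(self):
        # Serialize only the structural part for the repository;
        # the constructor plays the role of the hardware reset wire,
        # rebuilding the runtime state from scratch on restore.
        return json.dumps({"name": self.name})

c = ClientInstance("dashboard")
c.stream_buffer.extend([1, 2, 3])  # live input arriving mid-run
print(c.committable())
```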