Semantic Functions 2.0
Functions that can yell at one another!
Nearly two years ago, I wrote a piece called Natural Language Programming: A Semantic Assembly. The goal was to show how one can define a function semantically, rather than by using the standard syntax most programming languages demand. Fast forward to today, and we’re swimming in robust tools and more consistent language models that inspire me to revisit and expand upon that blog post.
The Math and Compiler Heritage: We started with math’s f : D → S: a clean, and perhaps the most abstract, mapping of inputs to outputs via a piece of logic encoded in f. Programming languages like C and Python took that idea but replaced all that lovely simplicity with curly braces, semicolons, or whitespace-based syntax. If you think of information workers (like myself, or maybe you), they are also functions! They receive inputs (e.g., emails, new docs, chats), process them, and generate outputs. The processing is complex, sure, but fundamentally it’s still a function: the processor (which is me, or the f) has memory, knowledge, and reasoning that it applies to the given inputs to generate outputs. So, a complex f with memory and knowledge and what not, but still a “logic”.
One thing to note: The mathematical definition of a function is just a description of a logic—it doesn’t actually execute anything by itself. Same with a function in a programming language: the logic is there, but you need someone or something (a compiler, an interpreter, or a real human in a day job) to put that logic into action. The compiler’s job in programming languages is to translate your semicolons and brackets (aka, syntax) into 0s and 1s that the machine can actually run. Over time, we humans have tried to make code look more like English—and compilers have been rolling their eyes ever since.
Enter (Large) Language Models (LLMs)—GPTs, Geminis, Mistrals, or whatever is on-device. They take in textual input, you prompt them with some rules, and out comes a response—be it a short answer or a questionable joke. In other words, they’re basically functions too: inputs, outputs, and embedded “logic.” The biggest difference is that you can describe your goals in much more flexible language. Of course, that comes with the risk of the model misunderstanding you—natural language is as powerful as it is vague.
Where once we had a regular compiler, now we have a smart interpreter that attempts to understand your intentions, rather than simply transforming source code to machine instructions. In other words, you might say the programming language is now, well, just language.
The SemanticFunction Decorator: Below is a minimal, concrete example in Python. It highlights how to delegate function logic to an LLM simply by writing a docstring and passing arguments:
import functools
import openai

class SemanticFunction:
    def __init__(self, llm="gpt-4o", temperature=0.7, max_tokens=1000):
        self.llm = llm
        self.temperature = temperature
        self.max_tokens = max_tokens
        # The modern OpenAI client (openai>=1.0) reads OPENAI_API_KEY from the environment
        self.client = openai.OpenAI()

    def __call__(self, func):
        @functools.wraps(func)  # preserve the wrapped function's name and docstring
        def wrapper(*args, **kwargs):
            # Build a prompt from the docstring & arguments
            prompt = f"{func.__doc__}\n"
            prompt += f"Arguments: {args}, {kwargs}\n"
            # Call the LLM API
            response = self.client.chat.completions.create(
                model=self.llm,
                messages=[{"role": "user", "content": prompt}],
                temperature=self.temperature,
                max_tokens=self.max_tokens,
            )
            return response.choices[0].message.content
        return wrapper
Now, a simple use case: write a function with ordinary Python syntax and decorate it with the decorator we just created:
@SemanticFunction()
def summarize(text: str, expertise: str):
    """
    Summarize the given text based on the specified expertise.
    """
    pass
And now call the function just like any other:
result = summarize("Some lengthy text...", expertise="medicine")
print(result)
Here, your function's “logic” is basically English. The docstring plus arguments turn into a prompt, and the LLM interprets them however it can. If it decides to give you a poem instead of a medical summary, well, you might need to refine your prompt (or pick a specialized model).
Imagine
You can easily go beyond this: Of course, you don’t have to use the biggest model for every single function. Smaller models can handle simpler tasks (like text cleaning, classification, or generating boilerplate) at lower cost or offline. You can also easily request structured output—for instance, using Pydantic to define a schema for the LLM to follow. That way, if the LLM’s output doesn’t match your schema, you can raise an error or try again, as sketched below.
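Here is a minimal sketch of that structured-output idea (assuming Pydantic v2; the Summary schema and the summarize_structured function are made-up illustrations layered on the decorator above, not a fixed API):

from pydantic import BaseModel, ValidationError

# A hypothetical schema we want the LLM to follow
class Summary(BaseModel):
    title: str
    key_points: list[str]

@SemanticFunction(llm="gpt-4o", temperature=0.0)
def summarize_structured(text: str):
    """
    Summarize the given text. Respond ONLY with JSON matching:
    {"title": "...", "key_points": ["...", "..."]}
    """
    pass

def summarize_validated(text: str, retries: int = 3) -> Summary:
    # Re-prompt until the output parses against the schema, or give up
    for _ in range(retries):
        raw = summarize_structured(text)
        try:
            return Summary.model_validate_json(raw)
        except ValidationError:
            continue
    raise ValueError("LLM never produced valid JSON")

In practice you’d also want to strip code fences the model sometimes wraps around its JSON, but the retry loop is the core of the pattern: validation turns a fuzzy generator into something your deterministic code can trust.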
You can even have multiple tasks—text or image or voice generation—pass their outputs around to other semantic functions. Basically, get the functions to “talk” to one another in English, or even in voice, and you can listen to them yelling at one another (and maybe interject)! Imagine an entire Python system composed of classes whose methods do nothing except feed docstrings and arguments to a whimsical or monstrous ensemble of LLMs. Each class can act like an agent, combining old-fashioned deterministic code (tools/APIs) with semantic methods. Suddenly, you’ve got a system where some tasks are handled by safe, predictable code, and others by foundation models that may or may not decide to color outside the lines. If it breaks, well…that’s your problem. That’s where the smart interpreter truly shines: it can decide when to call your reliable “traditional” tools and when to tap into the creative or generative side of a large model.
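To make the “functions talking to functions” idea concrete, here is a hedged sketch on top of the same decorator; the model names and the clean_text/summarize_for tasks are illustrative, not prescriptive. A cheaper model scrubs the input, and a bigger one summarizes the result:

@SemanticFunction(llm="gpt-4o-mini", temperature=0.0)
def clean_text(text: str):
    """
    Remove boilerplate, signatures, and formatting noise from the text.
    Return only the cleaned text.
    """
    pass

@SemanticFunction(llm="gpt-4o")
def summarize_for(text: str, expertise: str):
    """
    Summarize the given text for a reader with the specified expertise.
    """
    pass

# One semantic function's output becomes another's input: English as the wire format
report = summarize_for(clean_text("Some messy email thread..."), expertise="medicine")

Because every intermediate value is plain text, you can log it, inspect it, or route it through ordinary deterministic code between the semantic hops.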
Now, if you want to call your semantic functions or classes “agents”, that’s up to you.
Caveats, Because Reality is Harsh
Conclusion: The Future is Semantic, But Manage Your Expectations
So there you have it: a more robust, multi-model, and somewhat structured vision for building entire apps out of “semantic” pieces. In the grand scheme of things, we’re effectively letting LLMs interpret our docstrings and arguments to produce some (hopefully) predictable output. Whether you use big, powerful models, smaller specialized ones, a combination of both, or even voice signals to hear what your functions are talking about in real time, is up to you.
The idea of a purely "semantic" program is exciting—just write everything in natural language and let the machine figure it out. But keep in mind: with great power comes a lot of debugging, random failures, and potentially higher costs. Proceed with caution, but have fun with it—and maybe keep a fallback plan handy when the LLM decides your code "really needs more unicorn references."
So, prototyping is going to get much easier, as many expect.