Semantic Functions 2.0

Functions that can yell at one another!

Nearly two years ago, I wrote a piece called Natural Language Programming: A Semantic Assembly. The goal was to show how one can define a function semantically, rather than by using the standard syntax most programming languages demand. Fast forward to today, and we’re swimming in robust tools and more consistent language models that inspire me to revisit and expand upon that blog post.

The Math and Compiler Heritage: We started with math's f: D → S, a clean, and perhaps most abstract, mapping of inputs to outputs via a piece of logic encoded in f. Programming languages like C and Python took that idea but replaced all that lovely simplicity with curly braces, semicolons, or whitespace-based syntax. If you think of information workers (like myself, or maybe you), they are also functions! They receive inputs (e.g., emails, new docs, chats), process them, and generate outputs. It's complex processing, sure, but fundamentally still a "function with memory and stuff." The processor in this case (which is me, or the f) has memory, knowledge, and reasoning that it applies to the given inputs to generate outputs. So, a complex f with memory and knowledge and whatnot, but still a "logic".
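To ground the analogy, here is a toy f: D → S in Python, where D is strings and S is integers; the function itself is just an illustrative example, with the entire "logic" spelled out in syntax:

def word_count(text: str) -> int:
    # A fully deterministic "logic": same input, same output, every time.
    return len(text.split())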

One thing to note: The mathematical definition of a function is just a description of a logic—it doesn’t actually execute anything by itself. Same with a function in a programming language: the logic is there, but you need someone or something (a compiler, an interpreter, or a real human in a day job) to put that logic into action. The compiler’s job in programming languages is to translate your semicolons and brackets (aka, syntax) into 0s and 1s that the machine can actually run. Over time, we humans have tried to make code look more like English—and compilers have been rolling their eyes ever since.

Enter (Large) Language Models (LLMs)—GPTs, Geminis, Mistrals, or whatever is on-device. They take in textual input, you prompt them with some rules, and out comes a response—be it a short answer or a questionable joke. In other words, they’re basically functions too: inputs, outputs, and embedded “logic.” The biggest difference is that you can describe your goals in much more flexible language. Of course, that comes with the risk of the model misunderstanding you—natural language is as powerful as it is vague.

Where once we had a regular compiler, now we have a smart interpreter that attempts to understand your intentions, rather than simply transforming source code to machine instructions. In other words, you might say the programming language is now, well, just language.

The SemanticFunction Decorator: Below is a minimal, concrete example in Python. It highlights how to delegate function logic to an LLM simply by writing a docstring and passing arguments:

from functools import wraps

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

class SemanticFunction:
    def __init__(self, llm="gpt-4o", temperature=0.7, max_tokens=1000):
        self.llm = llm
        self.temperature = temperature
        self.max_tokens = max_tokens

    def __call__(self, func):
        @wraps(func)  # preserve the wrapped function's name and docstring
        def wrapper(*args, **kwargs):
            # Build a prompt from the docstring & arguments
            prompt = f"{func.__doc__}\n"
            prompt += f"Arguments: {args}, {kwargs}\n"

            # Call the LLM API (openai>=1.0 client interface)
            response = client.chat.completions.create(
                model=self.llm,
                messages=[{"role": "user", "content": prompt}],
                temperature=self.temperature,
                max_tokens=self.max_tokens,
            )
            return response.choices[0].message.content
        return wrapper

Now, a simple use case would be to write a function with ordinary Python syntax and decorate it with the decorator we just created:

@SemanticFunction()
def summarize(text: str, expertise: str):
    """
    Summarize the given text based on the specified expertise.
    """
    pass        

And now call the function just like any other:

result = summarize("Some lengthy text...", expertise="medicine")
print(result)        

Here, your function's “logic” is basically English. The docstring plus arguments turn into a prompt, and the LLM interprets them however it can. If it decides to give you a poem instead of a medical summary, well, you might need to refine your prompt (or pick a specialized model).
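To make that concrete, here is roughly the prompt the wrapper above assembles for the summarize call (modulo the docstring's leading whitespace):

Summarize the given text based on the specified expertise.
Arguments: ('Some lengthy text...',), {'expertise': 'medicine'}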

Imagine

You can easily go beyond this: Of course, you don't have to use the biggest model for every single function. Smaller models can handle simpler tasks (like text cleaning, classification, or generating boilerplate) at lower cost or offline. You can also easily request structured output, for instance by using Pydantic to define a schema for the LLM to follow. That way, if the LLM's output doesn't match your schema, you can raise an error or try again.
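As a minimal sketch of that idea, assuming the SemanticFunction decorator above (the Summary schema, the retry count, and the low temperature are all illustrative choices):

from pydantic import BaseModel, ValidationError

class Summary(BaseModel):
    summary: str
    confidence: float  # between 0 and 1, as requested in the docstring

@SemanticFunction(temperature=0.2)
def summarize_structured(text: str):
    """
    Summarize the given text. Respond ONLY with JSON matching
    {"summary": "<string>", "confidence": <float between 0 and 1>}.
    """
    pass

def summarize_validated(text: str, retries: int = 3) -> Summary:
    # Retry until the LLM's output parses against the schema, then give up.
    for _ in range(retries):
        raw = summarize_structured(text)
        try:
            return Summary.model_validate_json(raw)
        except ValidationError:
            continue  # the model colored outside the lines; ask again
    raise RuntimeError("LLM output never matched the Summary schema")

A production version might also strip markdown fences from the raw output before validating, since models love wrapping JSON in code blocks.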

You can even have multiple tasks (text, image, or voice generation) pass their outputs around to other semantic functions. Basically, get the functions to "talk" to one another in English, or even in voice, and you can listen to them yelling at one another (and maybe interject)! Imagine an entire Python system composed of classes whose methods do nothing except feed docstrings and arguments to a whimsical or monstrous ensemble of LLMs. Each class can act like an agent, combining old-fashioned deterministic code (tools/APIs) with semantic functions. Suddenly, you've got a system where some tasks are handled by safe, predictable code, and others by foundation models that may or may not decide to color outside the lines. If it breaks, well… that's your problem. That's where the smart interpreter truly shines: it can decide when to call your reliable "traditional" tools and when to tap into the creative or generative side of a large model.
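Here is a hedged sketch of that kind of hand-off, built on the decorator above; both functions and the smaller model name are illustrative:

@SemanticFunction(llm="gpt-4o-mini")  # a smaller model for a simpler task
def clean_text(text: str):
    """
    Remove signatures, quoted threads, and formatting noise from the text.
    Return only the cleaned text.
    """
    pass

@SemanticFunction()
def draft_reply(text: str, tone: str):
    """
    Draft an email reply to the given text in the specified tone.
    """
    pass

# One function's English output becomes the next one's English input.
reply = draft_reply(clean_text("FWD: RE: re: hi, see below..."), tone="friendly")
print(reply)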

Now, if you want to call your semantic functions or classes "agents", that is up to you.

Caveats, Because Reality is Harsh

  1. Monitoring & Logging: Logging what happens inside a typical function is easy; logging what an LLM (or, to be honest, any "intelligent" entity, including a human) does with your description can be an epic drama of 10,000 tokens that someone should read (maybe that someone can be another LLM). See the sketch after this list for one way to capture it.
  2. Consistency & Reliability: LLM-based code might produce random or context-dependent results, so if you wanted a function that returns "42" every time, sorry, but you're probably out of luck. You are working with a smart kid that is sometimes dumb, hard to control, and difficult to fully trust, but man, when it works! Prompt engineering, temperature settings, and fallback strategies might rein in the chaos, but only so much.
  3. Cost, at least for now: Generating code or text with LLMs isn’t free. If every function in your app is semantic, you might find your credit card statement looking suspiciously like your rent.
  4. Security & Privacy: Don’t forget you’re piping your data into an external service. If your business logic is top-secret, maybe think twice about sending it to an LLM that’s storing who knows what.
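On caveat 1, a minimal sketch of what that logging could look like, assuming the SemanticFunction decorator above (the subclass and logger name are illustrative):

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("semantic_functions")

class LoggedSemanticFunction(SemanticFunction):
    def __call__(self, func):
        inner = super().__call__(func)  # the LLM-backed wrapper from above

        def wrapper(*args, **kwargs):
            # Record what went in and how much came out; the full transcript
            # is the "epic drama" someone (or another LLM) may need to read.
            logger.info("calling %s with args=%r kwargs=%r", func.__name__, args, kwargs)
            result = inner(*args, **kwargs)
            logger.info("%s returned %d characters", func.__name__, len(result))
            return result
        return wrapper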

Conclusion: The Future is Semantic, But Manage Your Expectations

So there you have it: a more robust, multi-model, and somewhat structured vision for building entire apps out of "semantic" pieces. In the grand scheme of things, we're effectively letting LLMs interpret our docstrings and arguments to produce some (hopefully) predictable output. Whether you use big, powerful models, smaller specialized ones, a combination of both, or even voice signals to hear what your functions are talking about in real time is up to you.

The idea of a purely "semantic" program is exciting—just write everything in natural language and let the machine figure it out. But keep in mind: with great power comes a lot of debugging, random failures, and potentially higher costs. Proceed with caution, but have fun with it—and maybe keep a fallback plan handy when the LLM decides your code "really needs more unicorn references."

So, prototyping is going to get much easier, as many expect:

Mark Zuckerberg says AI might claim software engineering jobs at Meta in 2025 | Windows Central

Andrew Ng Post | LinkedIn


About the Author (see also Applied AI News)

