Future Forward - 84th Edition - Last Week in AI - Prompting is Programming?

Welcome to the 84th Edition of Future Forward - the Emerging Tech & AI Newsletter!

This newsletter aims to help you stay up-to-date on the latest trends in emerging technologies and AI. Subscribe to the newsletter today and never miss a beat!

Subscribe to the newsletter here.

Each edition covers the top AI news from last week plus an AI-related topic: primers, tutorials, or how AI is being used.

Here's what you can expect in this issue of the Emerging Tech & AI Newsletter:

  • A summary of the top AI news from the past week.
  • Prompting is Programming?


AI News from Last Week

The field of AI is experiencing rapid and continuous progress in various areas. Some of the notable advancements and trends from the last week include:

Big Tech in AI:


Big Tech in AI. Cover Image by Author. Logos are the copyright of their respective companies.

  1. Gemini 2.0 Flash model turns out to be very good at removing watermarks in images.
  2. Google Research and Muon Space launched the first AI-powered FireSat satellite.
  3. Google Taps MediaTek for Cheaper AI Chips.
  4. Apple shuffled AI leadership amid Siri crisis.
  5. Google NotebookLM added new Interactive Mind Map feature.
  6. Meta plans to roll out AI assistant across 41 European countries.
  7. NVIDIA Launched Family of Open Reasoning AI Models.
  8. Google released research paper on effective inference time search.
  9. Microsoft teamed up with AI start-up, inait, to simulate brain reasoning.
  10. Nvidia’s new GPU lineup includes Blackwell Ultra in late 2025, Vera Rubin in 2026, and Feynman in 2028.
  11. Nvidia revealed Isaac GR00T N1, the first open humanoid robot foundation model.
  12. Google Revealed TxGemma, a new collection of Gemma-based open AI models.
  13. Google Gemini added Canvas, a new interactive space for refining your documents and code.
  14. Amazon.com (AMZN) Undercuts NVIDIA with AI Chip Discounts.


Funding & VC Landscape:

  1. xAI acquired Hotshot.
  2. Perplexity is set to raise nearly $1B at an $18B valuation.
  3. BuildOps raised $127M in Series C funding.
  4. evroc grabbed €50M to build Europe’s hyperscale cloud and critical AI infrastructure.
  5. Warsaw-based Global Work grabs $1.25M.
  6. Brainomix secured £14M to expand AI imaging technology.
  7. Nerdio raised a $500M equity round.
  8. Nvidia Continued Torrid AI Startup Investment Pace.
  9. Former Cruise CEO Vogt's robotics startup valued at $2 billion in new funding.
  10. Browser Use raised $17M.


Other AI news:

  1. Baidu released ultra cheap models - ERNIE 4.5 & X1!
  2. Harvard and MIT researchers released TxAgent, an AI agent for therapeutic reasoning across a universe of tools.
  3. Patronus AI released first MLLM-as-a-Judge.
  4. Figure announced BotQ, a manufacturing facility that can roll out 12,000 humanoids a year.
  5. Sesame open sourced their CSM-1b model.
  6. Vogent announced self learning AI agents.
  7. Y Combinator CEO Garry Tan says for about a quarter of the current YC startups, 95% of the code was written by AI.
  8. Roblox open sourced Roblox Cube: their Core Generative AI System for 3D and 4D.
  9. Zoom’s AI Companion went agentic.
  10. Mistral released Small 3.1.
  11. Open source release: ReCamMaster to re-capture in-the-wild videos with novel camera trajectories.
  12. MagicLab’s AI-powered humanoid runs outdoors.
  13. Claude added realtime web search.
  14. OpenAI launched its next-gen API-based audio models.
  15. METR released research on AI's Ability to Complete Long Tasks.
  16. LG released EXAONE Deep models.
  17. xAI released image generation models.
  18. Adobe AI Platform Unites Creativity and Marketing to Define the New Era of Customer Experience Orchestration.
  19. Hunyuan released 3D 2.0 MV and 3D 2.0 Mini models.
  20. Graphite launched Diamond, an agentic AI-powered code review companion.
  21. Stability AI introduced Stable Virtual Camera.


Liked the news summary? Subscribe to the newsletter to keep getting updates every week.


Prompting is Programming

Introduction: Just as a programmer meticulously crafts lines of code to achieve a desired outcome, a prompt engineer designs language-based instructions with the same level of precision and intent. This isn't just semantics; it's a fundamental shift in how we interact with computational systems. So, should we consider prompting the new programming?

Prompting, when done effectively, is a process of breaking down a problem into its constituent parts and translating that logic into a language-based instruction. This process mirrors the fundamental problem-solving approach of traditional programming, making prompting a logical extension of the programmer's toolkit.


Prompting is Programming. Cover Image by Author

The Logic of Language:

  1. Analogy - The core of programming lies in expressing logic, and prompting is no different. Just as a programmer uses variables to store and manipulate data, a prompt engineer utilizes placeholders and contextual information to guide the language model. Consider the prompt: 'Summarize the following article: [article text]'. Here, '[article text]' acts as a variable, holding the input data. Similarly, prompts can be structured like functions, taking inputs and producing outputs based on defined parameters. The command 'Translate the following English sentence to French: [sentence]' is analogous to a function call, where 'translate' is the function and '[sentence]' is the input. Furthermore, conditional logic, a cornerstone of programming, finds its equivalent in prompts using 'if...then' statements. For instance, 'If the customer rating is above 4 stars, then generate a positive review summary.' These linguistic structures demonstrate that prompting, like coding, hinges on the precise articulation of logical relationships.
  2. Precision - The language of programming demands precision, and so does effective prompting. Vague or ambiguous instructions yield unpredictable results, mirroring the errors that arise from poorly written code. Consider the difference between 'Tell me about cats' and 'Provide a detailed description of the physical characteristics, common behaviors, and dietary needs of domestic cats.' The former is open to interpretation, while the latter provides specific parameters, resulting in a more focused and informative response. This emphasis on clarity is fundamental to both programming and prompting. A well-crafted prompt, like well-written code, is unambiguous, ensuring that the system understands and executes the intended task. The logic resides in the precise delineation of input, process, and output, all expressed through language.
  3. Breakdown of a Prompt - To understand the logic of language in prompting, let's dissect a simple example: 'Write a short story about a robot who learns to feel emotions.' This prompt, though seemingly simple, contains several logical components. First, it defines the task—writing a story. Second, it specifies the subject—a robot. Third, it introduces a constraint—the robot must learn emotions. These elements mirror the variables, functions, and conditional statements of traditional code. The prompt effectively outlines the parameters of the desired output, instructing the language model to generate a narrative within specific boundaries. This breakdown illustrates that even natural language prompts are inherently structured, relying on logical relationships to guide the model's behavior.
  4. Implicit Logic - Often, the logic of a prompt is implicit, relying on the language model's understanding of context and common sense. Even a seemingly simple request like 'Summarize this email' relies on the model's ability to identify key information, understand the email's purpose, and distill it into a concise summary. This implicit logic mirrors the way programmers leverage libraries and frameworks, relying on pre-existing functionalities to achieve complex tasks. The model's understanding of language and context acts as a kind of built-in library, enabling it to interpret and execute instructions based on implicit logical relationships. This reliance on both explicit and implicit logic underscores the fundamental connection between prompting and programming.
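The variables, functions, and conditionals described above can be made concrete in code. The sketch below is purely illustrative (no real model is called); the template strings reuse the prompts from the examples, and Python's named format placeholders play the role of the prompt's "variables":

```python
# A prompt template behaves like a function: the placeholders are its parameters.
TRANSLATE_TEMPLATE = "Translate the following English sentence to French: {sentence}"
SUMMARIZE_TEMPLATE = "Summarize the following article: {article_text}"

def build_prompt(template: str, **kwargs: str) -> str:
    """Fill a prompt template, mirroring a function call with named arguments."""
    return template.format(**kwargs)

# "Calling" the prompt-function with an input value:
prompt = build_prompt(TRANSLATE_TEMPLATE, sentence="Hello, world.")
print(prompt)

# Conditional logic decides which prompt to issue, just like an if/then in code:
customer_rating = 4.5
if customer_rating > 4:
    prompt = "Generate a positive review summary."
print(prompt)
```

The point is not the trivial string formatting but the mental model: a prompt with placeholders is a reusable, parameterized unit, exactly like a function.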

Control Flow Through Prompt Engineering:

Just as programmers use loops and functions to control the flow of execution, prompt engineers employ chaining and few-shot learning to direct the language model's behavior. Prompt chaining involves feeding the output of one prompt as the input to another, creating multi-step processes. For instance, you could first ask the model to 'Extract all customer reviews from this document.' Then, you can feed those extracted reviews into a second prompt: 'Analyze the sentiment of each review and provide an overall summary.' This sequential execution mirrors the way functions are called and their outputs are passed as arguments in traditional programming.
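The two-step review-analysis chain described above can be sketched as follows. `call_model` is a hypothetical stand-in for a real LLM API call (no such function exists in any library here), so the chain is runnable as a structural illustration:

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a tagged echo so the
    # chaining structure itself can be run and inspected.
    return f"<model output for: {prompt[:40]}...>"

def review_summary_chain(document: str) -> str:
    # Step 1: extract the reviews from the raw document.
    reviews = call_model(
        f"Extract all customer reviews from this document:\n{document}"
    )
    # Step 2: feed step 1's output into the sentiment-analysis prompt,
    # just as a function's return value is passed as an argument.
    return call_model(
        f"Analyze the sentiment of each review and provide an overall summary:\n{reviews}"
    )

print(review_summary_chain("Great product! ... Terrible shipping."))
```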

Few-shot prompting, where multiple examples are provided, can be viewed as a form of iteration. Imagine providing the model with several examples of question-answer pairs and then asking it to answer a new question. The model iterates through the examples, learning the pattern, and applies that knowledge to the new input. This process is analogous to training a model with data, where the examples serve as training data points. The ability to control the flow of information and execution through prompt chaining and few-shot learning is a clear indication that prompting is a form of programming.
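Assembling a few-shot prompt makes the "iteration over examples" analogy literal: the code loops over the example pairs to build the context the model will learn the pattern from. A minimal sketch (the Q/A layout is one common convention, not a fixed standard):

```python
# Few-shot prompting: example pairs are concatenated ahead of the new query,
# so the model can infer the pattern before answering.
examples = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]

def build_few_shot_prompt(pairs, new_question: str) -> str:
    parts = []
    for question, answer in pairs:  # iterate over the demonstration pairs
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {new_question}\nA:")  # the model completes this line
    return "\n\n".join(parts)

print(build_few_shot_prompt(examples, "What is the capital of Italy?"))
```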

Debugging and Optimization:

Just as software developers meticulously debug code to identify and rectify errors, prompt engineers engage in an iterative refinement process to optimize the performance of their prompts. The initial output of a language model, like the first run of a program, is rarely perfect. Unexpected or undesirable results are the 'bugs' in the prompt, requiring careful analysis and adjustment.

Rigorous testing and validation are essential components of both programming and prompt engineering. Just as programmers test their code with various inputs to ensure robustness and handle edge cases, prompt engineers must evaluate their prompts with diverse inputs to assess their performance. For instance, a prompt designed to translate text into different languages needs to be tested with a wide range of linguistic styles and complexities. Similarly, a prompt used for generating creative content should be evaluated for its ability to produce consistent and high-quality outputs. The development of metrics and evaluation frameworks for prompt performance is also crucial. These metrics, which may include accuracy, relevance, and coherence, are akin to unit tests in software development, providing a quantitative measure of prompt effectiveness. By systematically testing and validating their prompts, prompt engineers can ensure that their language-based instructions are reliable and produce the desired results.
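The unit-test analogy can be sketched as a tiny evaluation harness: each test case pairs a prompt with a predicate on the model's output, and the pass rate serves as a crude quality metric. `run_prompt` is a hypothetical stand-in that returns canned answers so the harness itself is runnable; in practice it would wrap a real model call:

```python
def run_prompt(prompt: str) -> str:
    # Placeholder model: returns canned answers so the harness can execute.
    canned = {"Translate 'cat' to French.": "chat"}
    return canned.get(prompt, "")

# Each case is (prompt, check): the check plays the role of an assertion
# in a unit test, grading the model's output.
test_cases = [
    ("Translate 'cat' to French.", lambda out: "chat" in out.lower()),
]

def evaluate(cases) -> float:
    """Return the fraction of cases whose output passes its check."""
    passed = sum(1 for prompt, check in cases if check(run_prompt(prompt)))
    return passed / len(cases)

print(f"pass rate: {evaluate(test_cases):.0%}")
```

Real evaluation frameworks add richer metrics (relevance, coherence, cost), but the structure is the same: inputs in, outputs checked, a score out.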

Optimization in prompt engineering, like performance tuning in programming, involves refining prompts to achieve specific goals, such as improved accuracy, faster execution, or reduced resource consumption. Techniques like prompt compression, where prompts are condensed without sacrificing clarity, can reduce the processing time and cost associated with language model inference. Similarly, techniques that improve the clarity and structure of prompts can lead to more accurate and consistent outputs. Additionally, prompt engineers might employ techniques similar to refactoring in programming, where prompts are restructured to improve maintainability and readability. Just as programmers optimize code for efficiency and clarity, prompt engineers strive to create prompts that are both effective and efficient.
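As a toy illustration of prompt compression, the sketch below strips filler words and redundant whitespace to shorten a prompt without changing its intent. The filler list is an arbitrary assumption for the example; real compression techniques are more sophisticated (and often model-aware), but the goal of cutting tokens while preserving meaning is the same:

```python
import re

# Hypothetical filler words that rarely change a prompt's meaning.
FILLERS = {"please", "kindly", "very", "really", "basically"}

def compress_prompt(prompt: str) -> str:
    """Naively compress a prompt by dropping fillers and extra whitespace."""
    kept = [w for w in prompt.split() if w.lower().strip(",.") not in FILLERS]
    return re.sub(r"\s+", " ", " ".join(kept)).strip()

long_prompt = (
    "Please kindly summarize this very long article, "
    "basically in three bullet points."
)
print(compress_prompt(long_prompt))
```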

The Future of Prompting:

  1. Emerging Tools and Frameworks: The future of prompting is not just about writing better sentences; it's about building sophisticated tools and frameworks that streamline the entire process. We're witnessing the rise of visual prompt builders, allowing users to construct complex prompts through drag-and-drop interfaces, much like visual programming environments. Prompt engineering platforms are emerging, providing collaborative workspaces, version control, and automated testing capabilities. These tools will allow for increased modularity, where prompts can be built from reusable components, much like libraries in traditional coding. Furthermore, we'll see the development of specialized languages and APIs designed specifically for interacting with language models, enabling more programmatic and structured prompt creation. This evolution will further solidify the connection between prompting and software development.
  2. Democratization of Programming and Beyond: Prompting has the potential to democratize programming by lowering the barrier to entry. Imagine a world where domain experts, like doctors or teachers, can build powerful applications without needing to learn complex coding languages. They can simply describe their needs in natural language, and the language model will generate the desired functionality. This accessibility will empower individuals to create custom solutions, fostering innovation and problem-solving across various fields. Beyond programming, prompting will transform how we interact with technology. We'll see more intuitive and conversational interfaces, where users can seamlessly communicate with machines using natural language. This will lead to more personalized and adaptive experiences, where technology anticipates and responds to our individual needs. The line between user and programmer will blur, with everyone becoming a 'prompt engineer' in their own way.
  3. The Interaction with AI Agents and Autonomous Systems: As AI agents and autonomous systems become more prevalent, prompting will play a crucial role in directing their behavior. Imagine instructing a self-driving car to 'Find the fastest route to the airport, avoiding traffic and toll roads.' Or commanding a robotic assistant to 'Prepare a presentation based on the latest market trends.' Prompting will enable us to communicate complex instructions and goals to AI agents, allowing them to perform tasks autonomously and intelligently. The ability to express intent through natural language will be essential for building trust and collaboration between humans and AI systems.

Ultimately, prompting is a powerful tool, a new way to build and interact with complex systems. It allows us to express our intent with remarkable precision, leveraging the inherent logic of language to achieve desired outcomes. Whether it's crafting intricate workflows, debugging complex outputs, or optimizing for specific results, the core principles of programming are evident. Prompting is not just a conversation; it's a carefully constructed instruction, a form of code that speaks the language of humans. As this field continues to develop, its practical applications will only expand, solidifying its role as an essential skill in the age of artificial intelligence.


Disclosure: The content in "Prompting is Programming" section was written with the help of Gemini. Please write to us in case of any gaps.

Thanks for reading. See you next week!

Let's explore the future of technology together!

Your Turn:

Did you like the content? Share with your network. Subscribe now to get updates directly in your mailbox.

