When prompt engineering first emerged as a mainstream workflow for data and machine learning professionals, it seemed to generate two common (and somewhat opposing) views.
In the wake of ChatGPT’s splashy arrival, some commentators declared it an essential skill that would soon spread across entire product and ML teams; six-figure job postings for prompt engineers soon followed. At the same time, skeptics argued that it was little more than a stopgap to fill in the gaps in LLMs’ current abilities, and that as models’ performance improved, the need for specialized prompting knowledge would dissipate.
Almost two years later, both camps seem to have made valid points. Prompt engineering is still very much with us; it continues to evolve as a practice, with a growing number of tools and techniques that support practitioners’ interactions with powerful models. It’s also clear, however, that as the ecosystem matures, optimizing prompts might become not so much a specialized skill as a mode of thinking and problem-solving integrated into a wide spectrum of professional activities.
To help you gauge the current state of prompt engineering, catch up with the latest approaches, and look into the field’s future, we’ve gathered some of our strongest recent articles on the topic. Enjoy your reading!
- Introduction to Domain Adaptation - Motivation, Options, Tradeoffs. For anyone taking their first steps working hands-on with LLMs, Aris Tsakpinis’s three-part series is a great place to start exploring the different approaches for making these massive, unwieldy, and occasionally unpredictable models produce dependable results. The first part, in particular, does a great job introducing prompt engineering: why it’s needed, how it works, and what tradeoffs it forces us to consider.
- I Took a Certification in AI. Here’s What It Taught Me About Prompt Engineering. “Prompt engineering is a simple concept. It’s just a way of asking the LLM to complete a task by providing it with instructions.” Writing from the perspective of a seasoned software developer who wants to stay up-to-date with the latest industry trends, Kory Becker walks us through the experience of branching out into the sometimes-counterintuitive ways humans and models interact. (For a minimal illustration of instruction-style prompting, see the first sketch after this list.)
- Automating Prompt Engineering with DSPy and Haystack. Many ML professionals who’ve already tinkered with prompting quickly realize that there’s a lot of room for streamlining and optimization when it comes to prompt design and execution. Maria Rosario Mestre recently shared a clear, step-by-step tutorial, focused on the open-source DSPy framework, for anyone who’d like to automate major chunks of this workflow. (A bare-bones DSPy sketch appears after this list.)
- Understanding Techniques for Solving GenAI Challenges. We tend to focus on the nitty-gritty implementation aspects of prompt engineering, but just like other LLM-optimization techniques, it also raises a whole set of questions for product and business stakeholders. Tula Masterman’s new article is a handy overview that does a great job offering “guidance on when to consider different approaches and how to combine them for the best outcomes.”
- Streamline Your Prompts to Decrease LLM Costs and Latency. Once you’ve established a functional prompt-engineering system, you can start focusing on ways to make it more efficient and resource-conscious. For actionable advice on moving in that direction, don’t miss Jan Majewski’s five tips for optimizing token usage in your prompts without sacrificing accuracy. (The token-counting sketch after this list shows one way to measure your starting point.)
- From Prompt Engineering to Agent Engineering. For an incisive reflection on where the field might be headed in the near future, we hope you check out Giuseppe Scalamogna’s high-level analysis: “it seems necessary to begin transitioning from prompt engineering to something broader, a.k.a. agent engineering, and establishing the appropriate frameworks, methodologies, and mental models to design them effectively.”
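To make the definition quoted in Kory Becker’s piece concrete, here’s a minimal sketch of instruction-style prompting with the OpenAI Python client. It isn’t taken from her article; the model name, task, and review text are placeholders we chose for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# An instruction-style prompt: we tell the model exactly what to do,
# what input to work on, and what form the answer should take.
prompt = (
    "Summarize the following review in one sentence, "
    "then label its sentiment as positive, negative, or neutral.\n\n"
    "Review: The battery lasts two days, but the screen scratches easily."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```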
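Likewise, here’s a bare-bones illustration of the declarative style that Maria Rosario Mestre’s DSPy tutorial builds on: you declare what the model should do via a signature, and the framework generates (and can later optimize) the underlying prompt. This is a hedged sketch, not code from her tutorial, and DSPy’s API details may vary across versions:

```python
import dspy

# Point DSPy at an LLM backend (model name is a placeholder).
lm = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=lm)

class AnswerQuestion(dspy.Signature):
    """Answer the question in one short sentence."""
    question = dspy.InputField()
    answer = dspy.OutputField()

# dspy.Predict compiles the signature into a prompt behind the scenes;
# DSPy's optimizers can later tune that prompt against a metric.
qa = dspy.Predict(AnswerQuestion)
print(qa(question="Why measure prompt token counts?").answer)
```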
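Finally, on Jan Majewski’s efficiency theme: before trimming a prompt, it helps to measure it. The sketch below uses the tiktoken library to compare the token counts of a verbose prompt and a tighter rewrite; both prompts are invented for illustration, not drawn from his article:

```python
import tiktoken

# Use the tokenizer that matches the target model.
enc = tiktoken.encoding_for_model("gpt-4")

verbose = (
    "I would like you to please take the text that I am going to provide "
    "below and produce for me a concise summary of its main points."
)
tight = "Summarize the main points of the text below."

# Input tokens drive both cost and latency, so a shorter prompt
# pays off on both axes as long as accuracy holds up.
for name, prompt in [("verbose", verbose), ("tight", tight)]:
    print(f"{name}: {len(enc.encode(prompt))} tokens")
```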
Ready to branch out into some other topics this week? Here are several standout articles well worth your time:
- Payal Patel invites us to explore the open-source LIDA library, which brings together the power of LLMs and the ability to generate data visualizations, and offers hands-on guidance on how to get started. (A short LIDA sketch follows this list.)
- What role can LLMs play in the ever-growing self-driving car ecosystem? Shu Ishida shares key insights from their recent research on LLMs’ potential to “write the code for driving itself.”
- In her debut TDS article, Nicole Ren walks us through a promising generative-AI project she collaborated on, designed to streamline report-writing for Singapore’s public-sector workers.
- Don’t miss Sachin Date’s latest stats-focused deep dive, where he turns to the partial autocorrelation function (PACF) and focuses on its use in configuring autoregressive (AR) models for time-series datasets. (A minimal PACF sketch appears after this list.)
- If you’re a data science educator or hackathon organizer, you should catch up with the thorough guide by Luis Fernando Perez Armas, Ph.D., to building a private, Kaggle-like platform where learners can share, compare, and discuss their work.
- Rebounding from job loss is about much more than learning new skills or going through the motions of submitting your resume. Amy Ma shares several helpful insights and lessons from the year she spent between jobs.
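If Payal Patel’s LIDA walkthrough caught your eye, this is roughly what the library’s basic workflow looks like, sketched from its public README; the dataset path is a placeholder, and the API may differ across versions:

```python
from lida import Manager, llm

# Manager wires an LLM text generator into LIDA's pipeline
# (assumes an OPENAI_API_KEY is set in the environment).
lida = Manager(text_gen=llm("openai"))

# Summarize a dataset, propose visualization goals, then render a chart.
summary = lida.summarize("data/cars.csv")  # placeholder dataset path
goals = lida.goals(summary, n=2)
charts = lida.visualize(summary=summary, goal=goals[0])
```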
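And as a companion to Sachin Date’s deep dive: the PACF of an AR(p) process cuts off after lag p, which is what makes it useful for choosing the AR order. Here’s a minimal sketch with statsmodels on a synthetic AR(2) series; it’s our own toy example, not code from the article:

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

# Simulate an AR(2) series: x_t = 0.6*x_{t-1} - 0.3*x_{t-2} + noise.
rng = np.random.default_rng(42)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

# The PACF of an AR(p) process is near zero beyond lag p, so the last
# lag with a clearly non-zero coefficient suggests the order to fit.
coeffs = pacf(x, nlags=10)
for lag, value in enumerate(coeffs):
    print(f"lag {lag:2d}: {value:+.3f}")
```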
Thank you for supporting the work of our authors! We love publishing articles from new authors, so if you’ve recently written an interesting project walkthrough, tutorial, or theoretical reflection on any of our core topics, don’t hesitate to share it with us.