LL3 - LLMs as More Than Just Chatbots

I’ve noticed a strange pattern.

People will try tools like ChatGPT or Gemini for a few minutes, get a “meh” response, and instantly conclude, “See? These tools just aren't ready.”

Yet they don't apply that same standard to any other tool or skill they have. They don't expect the code they write to compile on the first try. They constantly fat-finger as they type and rely on autocorrect to fix it for them. They ask me to explain the same thing I explained last week, and the week before that.

Somehow, when it comes to using generative AI, they expect to get the exact result on the first try, without learning any of the mechanics that drive the output.

The reality? LLMs can handle a ton of workflow tasks if you feed them the right context and iterate.

Even something that starts out 60% correct can rapidly get to 90% or better with a few prompt refinements. It's like avoiding an escalator or those moving walkways at the airport because they can't get you the entire way to your destination.

Just get on. You don't have to overthink this.


LLMs: Beyond Chat, Into Your Workflow

One major misconception is that LLMs are just chatbots, like they're glorified Q&A machines. This is effectively what Microsoft has offered with their general Office 365 Copilot experience (8,000 characters is a context window for ants).

The Q&A mindset drastically limits what they can do.

Think of an LLM less as “just chat” and more as an input → most likely output machine.

You supply the input—often unstructured data, messy notes, or entire code snippets—and the model outputs structured summaries, suggestions, code snippets, CSVs, POVs, etc.

It’s like pointing a spotlight. If you keep the “cone of light” broad, you’ll get general brainstorming or overview-level help.

If you narrow it, zooming in on a specific question, you can get detailed, precise feedback.

The key is realizing that you’re not restricted to “Which city is the capital of France?”-style questions.

Instead, you can feed LLMs entire Slack threads, code segments, voice transcripts, or PRDs, and ask for deeper analyses. You can get the lay of the land before diving deep into the details.
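To make the input → output framing concrete, here's a minimal sketch of how I think about the spotlight. Nothing here is a real API; `build_prompt` is a hypothetical helper that wraps raw material with an instruction and a target format, and you'd paste the result into whatever tool you actually use.

```python
def build_prompt(instruction, context, output_format):
    """Wrap raw material with a focused instruction and a target format.

    Narrowing the instruction is the 'spotlight' move: the tighter the
    ask, the more precise the output.
    """
    return (
        f"{instruction}\n\n"
        f"Respond as: {output_format}\n\n"
        f"--- INPUT START ---\n{context}\n--- INPUT END ---"
    )

# Broad cone of light: get the lay of the land.
overview = build_prompt(
    "Give me a high-level summary of the themes in this material.",
    "(entire Slack thread, transcript, or PRD pasted here)",
    "3-5 bullet points",
)

# Narrow cone of light: one precise question against the same material.
detail = build_prompt(
    "Which open questions here would block the next release?",
    "(the same raw material)",
    "a numbered list, with owners where identifiable",
)
```

The only thing that changes between the broad and narrow calls is the instruction; the raw material stays the same.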


Treating LLMs Like an Incredibly Smart, New Team Member

I’ve come to see LLMs as a versatile extra teammate—one that’s always available and thrives on context. If you had a new coworker, you wouldn’t walk up to them on their first day and immediately ask them to draft the implementation plan for a strategy that took you six months to formulate.

You’d give them a rundown of the product, the stakeholders, key resources, and a few areas to experiment with. Maybe you come back after a bit, ask what they're thinking, answer their questions, and slowly expose them to more as you confirm they understand what they're seeing.

It's the same with LLMs.

The more you clarify the problem, the better the output.

While AI tools like ChatGPT and Gemini certainly CAN zero-shot complex problems with a single prompt, more often I treat them like an incredibly smart, fast, and eager colleague.

I pose the idea and initial thoughts, get their first reaction, then iterate from there.


Real-World Ways I Use LLMs as Workflow Tools

Using LLMs as actual chatbots is something I rarely do these days. Typically I'm deploying them to assist various meta-processes: to visualize, explore possibilities, validate and confirm assumptions, and pull signal out of the noisy context I'm swimming in.

Here are ways that I commonly use these tools:

Capturing Random Thoughts

When I’m half-awake and rambling into a voice memo, I’ll paste the transcript into an LLM. Suddenly, it transforms a scattered brain-dump into a structured outline. I’m not just “chatting” with AI; I’m leveraging it to turn raw input into a polished draft.

Summarizing Slack Threads

In a 40-message Slack chain, it’s easy to lose context. I feed the entire conversation into the LLM, tell it who I am and what little context I do have, and it returns key decisions, next steps, and any open questions. It’s much faster to correct a couple of AI oversights than to re-read everything myself.
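A sketch of what that looks like in practice. This assumes messages come as (author, text) pairs, perhaps from an export or a copy-paste you clean up; the function only builds the prompt, and the model call is whatever chat interface or API you already use.

```python
def summarize_slack_thread(messages, who_am_i, known_context):
    """Format a Slack thread into a targeted summarization prompt.

    `messages` is a list of (author, text) tuples. The ask is narrow on
    purpose: decisions, next steps, open questions, not just "summarize".
    """
    thread = "\n".join(f"{author}: {text}" for author, text in messages)
    return (
        f"I am {who_am_i}. Context I already have: {known_context}\n\n"
        "From the Slack thread below, extract:\n"
        "1. Key decisions made\n"
        "2. Next steps, with owners if stated\n"
        "3. Open questions\n\n"
        f"--- THREAD ---\n{thread}"
    )

# Toy example; real threads get pasted in wholesale.
prompt = summarize_slack_thread(
    [("alice", "Can we ship Friday?"), ("bob", "Yes, pending QA sign-off")],
    "the PM for the billing team",
    "I know QA started Monday",
)
```

Telling the model who you are and what you already know keeps the output from re-explaining things you don't need.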

Brainstorming Approaches

If I need ideas for a new feature, like how to cleanly enforce a new data validation rule, I’ll ask for multiple approaches, from a basic MVP to an elegant solution that considers our capacity constraints and current user feedback. Often, one “out there” suggestion triggers a creative angle I wouldn’t have considered otherwise.

Writing Bi-Weekly Release Reports

I gather general context about my functional area, messages and notes from Slack, Jira tickets and Confluence docs, and voice notes covering anything from the last two weeks I want people to know about. I throw all of it into Gemini along with a report template and get a coherent draft in minutes. Years ago, this type of report would realistically have taken me days to put together in between meetings and prioritizing my core work.
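One way to sketch that assembly step, assuming each source is just a labeled blob of text. The labels, template headings, and ticket IDs below are made up for illustration; the point is that the model, not you, does the organizing.

```python
def build_release_report_prompt(template, sources):
    """Combine a report template with labeled dumps of raw context.

    `sources` maps a label ("Slack notes", "Jira tickets", ...) to the
    raw text dumped from that system.
    """
    dumped = "\n\n".join(
        f"### {label}\n{text}" for label, text in sources.items()
    )
    return (
        "Draft a bi-weekly release report following this template:\n"
        f"{template}\n\n"
        "Source material (messy; summarize and organize, don't invent):\n"
        f"{dumped}"
    )

# Hypothetical labels and contents, just to show the shape.
report_prompt = build_release_report_prompt(
    template="## Shipped\n## In Progress\n## Risks",
    sources={
        "Slack notes": "Deploy went out Tuesday with no rollbacks.",
        "Jira tickets": "PROJ-12 closed; PROJ-15 slipped to next sprint.",
        "Voice notes": "Mention the new validation work starting soon.",
    },
)
```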

Cross-Examining PRDs

I record myself talking through a messy Product Requirements Document and feed that transcript to the LLM, asking “What’s missing? How can I make this easier to understand? Can you factor in that our data model is extremely heterogeneous and hard to access via self-service?" This gets me 80% of the way there with 20% of the effort. Generally, the remaining 20% needs to be hashed out in collaboration with others anyway.

These examples show how LLMs function as workflow companions, automating the grunt work of structuring, summarizing, and analyzing large amounts of data.

It’s more than a chatbot conversation; it’s input in → curated output out.

Why I’m Baking LLMs Into Our Operations

Beyond personal productivity, I’m doing my best to integrate LLMs into our internal product operations. It's been challenging to sort through the red tape that comes with driving adoption of new tools, but I have a clear vision for how impactful this tech can be for operations product technology.

In an enterprise or internal tooling scenario, raw scale isn’t the only yardstick—outcome and speed to insight matter even more.

If a single LLM-driven step can shave off days or weeks of manual work, that’s a huge ROI. These use cases aren't even that sophisticated. Wherever you work, I guarantee a shocking number of critical processes are held together by little more than an Excel spreadsheet and way too much trust in one or two people. When I was at Intel, I worked on a modernization project around the bill of materials that drove our chip fabrication facilities. What technology stack kept this crucial asset list up to date and protected?

Locally saved Excel spreadsheets.

All around you are opportunities to hand simple problems to the context window: problems that eat way too much time and introduce variance and cognitive load that is hard to account for.

I promise you, unless you're an MBB consultant or an investment banker building Excel models that require cell-by-cell perfection and function calls that effectively make you an L5 software engineer, you would be better off throwing that data into Gemini 2.0 or Claude 3.5 Sonnet and asking the AI to help you navigate and edit it.


Closing Thought: It Doesn't Have to Be Perfect to Be Better

Don’t wait for LLMs to be perfect.

They’re already capable of speeding up tedious tasks and freeing you to focus on more creative, higher-value work. Sure, the first pass might be only 60% of what you need. But in a world where iteration is normal, that 60% is a starting block, not a verdict.

My advice? Recognize that LLMs are more than chatbots. They’re powerful input → output machines that can spotlight any segment of your problem, from broad overviews to pinpoint detail.

Step on the moving walkway, adjust as you go, and watch how much faster you reach your destination.

Want to Chat or Compare Notes? How are you working LLMs into your daily workflows? I’d love to hear what you're doing and any specific practices that have made a difference for you.
