Developing with ChatGPT

It seems like everyone, including their dog, is embracing the power of GPT/LLM for various applications in their workflows and software products. And for good reason, as it's an incredibly powerful tool. I've also been exploring ways to harness GPT's capabilities to automate tasks that were once considered nearly impossible. It's akin to acquiring a new skill power-up in a video game or tapping into some form of sorcery.

However, there are some key tricks that can make wielding this magic more effective, and I'd like to share my experience and insights so far.


Using GPT for Content Generation

One of the most common use cases for GPT is content generation. I've applied it in the real estate industry to produce landing page content, social media posts, and blog articles. With the right prompts and a bit of post-processing, you can generate high-quality content. Here are some tips to consider:

Address Speed Issues: While GPT appears fast when you see it typing on the screen, waiting for the API to generate the complete response can feel like an eternity from a user experience perspective. To mitigate this:

- Trigger the generation event before the user reaches the point where it's needed. Consider using cronjobs to pre-generate content, initiate background processes earlier in the user flow, or employ a queue system to parallelize generation tasks.

- Ensure that the page displaying the output is aware that processing might still be ongoing in the background. Provide users with animations or alternative actions to engage with while they wait.

- Implement caching to avoid regenerating content for the same requests, as it can be slow and costly.
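The caching point can be sketched with a minimal in-memory cache keyed on the prompt. This is an illustrative sketch, not production code: `generateContent` is a hypothetical stand-in for whatever function wraps your actual API call.

```javascript
// Minimal in-memory cache keyed on the prompt. On a hit we skip the
// slow, costly API round trip entirely.
const cache = new Map();

async function cachedGenerate(prompt, generateContent) {
  if (cache.has(prompt)) return cache.get(prompt); // cache hit: instant
  const result = await generateContent(prompt);    // cache miss: pay once
  cache.set(prompt, result);
  return result;
}
```

In a real app you'd likely swap the `Map` for Redis or a database table so the cache survives restarts and is shared across workers.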


Account for Variability: GPT doesn't always produce the desired output. Sometimes, you'll receive responses that fall short of your requirements, regardless of how detailed your prompts are. The solution is to:

- Create test cases to define acceptable character length ranges, expected keywords, anchor points, or other criteria.

- Incorporate checks in your GPT generation function to verify that the response meets your criteria, and request a new one if it doesn't. Keep in mind that GPT can be slow, as discussed under speed issues above.
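The check-and-retry loop above can be sketched like this. The criteria fields (`minLength`, `requiredKeywords`, etc.) are illustrative assumptions — define whatever matters for your content — and `generate` again stands in for your API call.

```javascript
// Validate a generated response against simple criteria: an acceptable
// character-length range plus required keywords.
function meetsCriteria(text, criteria) {
  if (text.length < criteria.minLength || text.length > criteria.maxLength) {
    return false; // too short or too long
  }
  return criteria.requiredKeywords.every((kw) =>
    text.toLowerCase().includes(kw.toLowerCase())
  );
}

// Retry generation until a response passes, or give up after a few
// attempts (each attempt costs time and money, so cap it).
async function generateWithRetry(prompt, generate, criteria, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const text = await generate(prompt);
    if (meetsCriteria(text, criteria)) return text;
  }
  throw new Error(`No acceptable response after ${maxAttempts} attempts`);
}
```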


Multi-Prompting: For more consistent output, consider using multi-prompting. Generate something with GPT and then ask it to refine it based on specific requirements. While this approach may be slower and slightly costlier, it can improve the chances of obtaining a satisfactory result.
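A minimal multi-prompting sketch: draft first, then ask the model to refine the draft against explicit requirements. `callModel` is a hypothetical wrapper around your chat-completion request, and the prompt wording is just an example.

```javascript
// Two-pass generation: first a rough draft, then a refinement pass that
// rewrites the draft to satisfy explicit requirements.
async function draftThenRefine(topic, requirements, callModel) {
  const draft = await callModel(
    `Write a short blog paragraph about: ${topic}`
  );
  const refined = await callModel(
    `Rewrite the following text so that it ${requirements}. Text: ${draft}`
  );
  return refined;
}
```

Two calls instead of one, so it's slower and pricier, but the second pass catches a lot of first-pass misses.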



Translating Conversational Input into Structured Queries with NLP/GPT

One of the most exciting applications of GPT is converting conversational queries into structured database queries. For instance, transforming a search query like "house with at least 3 bedrooms, north-facing, under $900,000 with a fireplace and a view in Langford or Victoria" into MongoDB/SQL queries. Here are some key points to consider:

1. Provide Clear Context: Ensure that you provide concise and accurate information about your database structure, field formats, and how fields are used within a search. This helps GPT understand the context in which it's working.

2. Handle Variability: Just like in content generation, be prepared for occasional deviations from your desired output. Create solid test cases and enforce the use of specific fields when necessary.

3. GPT Prompts Are Not Code: Understand that GPT responses may not always be consistent. Run tests to identify the range of variants that GPT may produce, and be ready to handle these variations.
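Points 1–3 can be combined into a sketch like the following: give the model your schema as context, ask for strict JSON, then validate that the parsed filter only touches fields you actually have. The field names (`bedrooms`, `city`, etc.) are illustrative assumptions, not a real schema.

```javascript
// Fields the model is allowed to use — this doubles as context for the
// prompt and as the whitelist for validation.
const ALLOWED_FIELDS = ["bedrooms", "price", "facing", "features", "city"];

function buildQueryPrompt(userSearch) {
  return (
    "Convert this real-estate search into a MongoDB filter as strict JSON. " +
    `Allowed fields: ${ALLOWED_FIELDS.join(", ")}. ` +
    `Search: "${userSearch}"`
  );
}

// GPT prompts are not code: the reply may be malformed or invent fields,
// so parse defensively and reject anything outside the schema.
function parseAndValidateFilter(modelReply) {
  const filter = JSON.parse(modelReply); // throws on non-JSON replies
  const unknown = Object.keys(filter).filter((f) => !ALLOWED_FIELDS.includes(f));
  if (unknown.length) {
    throw new Error("Unexpected fields: " + unknown.join(", "));
  }
  return filter;
}
```

When validation throws, you can fall back to a retry (as in the content-generation section) rather than running an unchecked query against your database.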



Extracting Contextual Answers from Large Documents with NLP/GPT

One of the most challenging tasks I've tackled with GPT is extracting specific answers from extensive textual documents. For example, finding information like "Can I have a golden retriever?" within complex strata documents. Here's an approach that has worked for me:

1. Consider the Token Limit: GPT has limitations in terms of token count. It won't accommodate very large documents or queries. To address this, break down the content intelligently:

- Use headings to identify relevant sections and narrow down the content.

- You can use GPT itself for this, asking something like:

"I'm going to give you an array of headings. These are from a Strata document for a condo building. I need to find out information within this document to answer the question: "${UserPrompt}". Provide me with a list of headings for sections that may be relevant. Sort them by likelihood of relevance. Respond in JSON format. Headings as a JSON array: ${Headings}"

Between conventional code filtering the headings and GPT's ability to understand their context, this narrows down the sections you have to chunk and process further.
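Applying the model's reply might look like this sketch: expect back a JSON array of headings ranked by relevance, and keep only the document sections whose headings made the list. The section shape (`{heading, body}`) is an assumption about how you've parsed the document.

```javascript
// Given parsed document sections and the model's ranked-headings reply
// (a JSON array, most relevant first), keep only the matching sections.
function selectRelevantSections(sections, rankedHeadingsJson, topN = 5) {
  const ranked = JSON.parse(rankedHeadingsJson).slice(0, topN);
  return ranked
    .map((h) => sections.find((s) => s.heading === h))
    .filter(Boolean); // drop headings the model invented or we can't match
}
```

Note the `filter(Boolean)`: as with queries, the model may hallucinate headings that don't exist, so match against your real section list rather than trusting the reply.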


- Keep chunking. Assuming you've narrowed the content down as much as you can and it's still massive, you'll want to batch-process it. Whenever possible use a whole heading/subheading section if it fits; otherwise, look for paragraphs or line breaks so you avoid splitting the content in the middle of a subject.
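A sketch of that chunking rule: split on blank lines (paragraph boundaries) and pack paragraphs into chunks under a size budget, so no chunk breaks mid-subject. Character counts are a crude proxy here; in practice you'd budget in tokens with a tokenizer.

```javascript
// Split text on paragraph boundaries and pack whole paragraphs into
// chunks that stay under maxChars, never splitting mid-paragraph.
function chunkByParagraphs(text, maxChars) {
  const paragraphs = text.split(/\n\s*\n/);
  const chunks = [];
  let current = "";
  for (const p of paragraphs) {
    // +2 accounts for the blank line we re-insert between paragraphs
    if (current && current.length + p.length + 2 > maxChars) {
      chunks.push(current); // current chunk is full — start a new one
      current = p;
    } else {
      current = current ? current + "\n\n" + p : p;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```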

- Decide whether to process sequentially or in parallel, depending on the number of chunks and the remaining uncertainty. If GPT narrowed the headings down to just a few, and the content of the most likely headings fits within a prompt or two, a sequential approach will probably land the answer in the first API request or two. If you still have many chunks and plenty of uncertainty, use a queue with parallel workers to speed up finding the answer.

- Use a simple prompt like: 'The following is an excerpt from a strata doc about a property. I am looking for the answer to "${UserPrompt}", if this content provides the answer, please provide it. If not, please reply with "no". ${Corpus}'
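The parallel path can be sketched as below: fire off the excerpt prompt for every chunk concurrently and keep the first reply that isn't "no". `askChunk` is a hypothetical stand-in for an API call using the prompt above; a real queue system would also cap concurrency rather than launching everything at once.

```javascript
// Query all chunks concurrently; any reply other than "no" is treated
// as a candidate answer. Returns null if no chunk had the answer.
async function findAnswerAcrossChunks(chunks, askChunk) {
  const replies = await Promise.all(chunks.map((c) => askChunk(c)));
  return replies.find((r) => r.trim().toLowerCase() !== "no") ?? null;
}
```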


2. Utilize Caching: Implement caching mechanisms for heading-prompt relationships and previously retrieved answers. This can help improve efficiency.


3. Continuous Improvement: Continuously refine your code as you process more samples. Focus on preprocessing to narrow down the content before extensive chunking. Adapt your code to recognize patterns in the documents, especially if they follow specific templates.




Market Research and Data Extrapolation with GPT

A very cool use of GPT is to feed a ton of "potentially relevant" data into it, then hammer it with quick questions and ask it to generate summaries and reports. It can infer insightful details you may have missed and perform some very complex data transformations far faster than you could with a pivot table, some formulas, and a bit of VBScript in Excel. A few tips when doing this, though:

All the principles discussed earlier, such as speed, testing, multi-prompting, chunking, and consistency, apply here too.

Provide Clear Context: Always explain why you are providing the information, your ultimate goals, and the desired output formats.

Utilize Conversation Threads: Leverage the conversation thread structure to refer back to earlier prompts or incorporate previous responses into new queries. Just remember the context window limitation.
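Keeping a conversation thread boils down to maintaining a messages array across requests. This sketch appends each prompt/response pair and trims the oldest turns when the thread grows, preserving the system message; the turn cap is a crude stand-in for real token-based context-window accounting.

```javascript
// Append a user/assistant turn to the thread, then trim the oldest
// turns (keeping the system message at index 0) if the thread is long.
function appendTurn(messages, userPrompt, assistantReply, maxMessages = 20) {
  messages.push({ role: "user", content: userPrompt });
  messages.push({ role: "assistant", content: assistantReply });
  while (messages.length > maxMessages) {
    messages.splice(1, 2); // drop the oldest user/assistant pair
  }
  return messages;
}
```

Sending the whole array with each request is what lets a later question like "now break that down by city" refer back to earlier data.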


There are countless more possibilities with GPT, and other models like Claude Instant or self-hosted options like LlamaGPT. I'm excited to see what innovative applications people are developing with AI.


Feel free to share your own exciting projects and ideas in the comments!

Shiv Kumawat

Tech Entrepreneur & Visionary | CEO, Eoxys IT Solution | Co-Founder, OX hire -Hiring And Jobs

1mo

Eric, thanks for sharing!

Renee M Gagnon

Early risk-taking leader in Cannabis & Psychedelics | Software / Hardware Innovator | MMPR Lic #005 2014 | 2x seed to IPO (TSX.V / NASDAQ) | High Times Top 50 Women | AI/ML/HPC Architect | Advisor | Fresh Eyes

11mo

Excellent article. It's a force multiplier indeed. I think folks are seeking "oracle of Delphi" stuff. As someone with Asperger's, I've found it understands me and can translate what I mean far better than anyone I've worked with. It's helped me sort my brain into documents and content other folks seem to benefit from. I don't have to reformat my thoughts/thinking conceptually as much as with fleshy folk. Also just noodling: zero ego/condescension or feeling, just noodling. It's so simply nicer. Even at the conceptual phase of a project, the least happy person in the room can nitpick why and what's the point and find zillions more unknowns to express resistance; just having a meeting to spitball is time-wasting. I'm finding that pre-sorting the huge balls of goo lurking in my brain into a less funky format has been its biggest gain so far. With the plugins and the addition of DALL·E it's a time saver for me. Exponential. If you rely on its output or logic or calculations or data, you're risking a bunch. If you use it with stuff you can validate six ways from Sunday, it's almost as huge as the moment we got dialup. "It" can almost "understand us," which is important. That's the hook I think that's important.

Brian Doyle

Founder & Representative Director of Ayodo Foundation and Yodo Inc. Accomplished leader in venture funding and management of innovative financial services used by more than 200 million users.

11mo

Cool. If you are accomplished at coding and AI, we should talk.
