Construction is a scavenger hunt
This article is written for the technically inclined general contractor interested in how AI is leveraged in modern cloud software, and specifically in software for construction projects.
AI search
Over the past few years it seems every software company is clamoring to define its AI strategy and bolt AI onto the side of its product. Impressive-looking demos abound, but these demos tend to have limited use in real-world scenarios. AI is amazing, but it’s not magic. So where do you start if you want to make AI truly useful to a customer?
Search is the foundation
The secret to making AI useful in business software is to begin with search. After all, one of the main things that differentiates AI in your product from something like ChatGPT is access to your private business data. So, the better you are able to find and retrieve the data most pertinent to your task, the more useful AI can be for completing that task.
For example, on a construction project, a general contractor might create hundreds or even thousands of RFIs. There are standard formats for RFIs, but the data for filling out an RFI comes from the GC’s specific project. This data can come from the construction drawings, a soils report, emails, photos, and a thousand other places/documents. We often hear GCs refer to the process of collecting all of this data as “the scavenger hunt.”
No large language model on its own can truly help a GC generate an RFI unless it has the ability to sift through the project data and whittle it down to the relevant source material. This is why search is so crucial for AI in business software, especially in construction, which is notorious for mountains of semi-structured data and where missing the needle in the haystack can cost the business millions of dollars.
How AI search works
In the old days, if a GC had a question they needed answered, they had no choice but to walk over to the cupboard, pull out whichever sets of drawings they thought might contain pertinent information, and spread them out on a big table. They would proceed to sift through these sets, noting the specific pages and sections most likely to contain the necessary information. Finally, they would review these pages, citing specific details in order to formulate an answer.
Surprisingly, this is how we approach answering a question with AI, only instead of the process taking hours, it takes less than a second! In the world of AI, this is called a RAG (Retrieval-Augmented Generation) pipeline, and it shows how search and AI work together with your data to answer a question or take some action.
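To make the shape of the pipeline concrete, here is a minimal Python sketch of the three stages described below. The helper names (semantic_search, rerank, generate_answer) and the project_index argument are hypothetical placeholders for illustration, not our actual code; the sections that follow walk through each stage.

```python
# A minimal sketch of a RAG pipeline: retrieve, rerank, then generate.
# All helper functions here are hypothetical placeholders.

def answer_question(question: str, project_index) -> str:
    # Stage 1: pull a broad set of candidate passages from the project data.
    candidates = semantic_search(question, project_index, top_k=50)

    # Stage 2: rerank the candidates so the most relevant rise to the top.
    best_passages = rerank(question, candidates, keep=5)

    # Stage 3: hand the question plus the best passages to an LLM for the final answer.
    return generate_answer(question, best_passages)
```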
The first stage of the scavenger hunt is called semantic search. This is where the AI goes to the “cupboard” and pulls out a large number of relevant sources. Semantic search enables a customer to track down data based on meaning, not just keywords. So, for example, you can search for the phrase “moisture protection” and retrieve information regarding “weather proofing.”
How does semantic search work? While this is the topic of a different article, the short story is that it’s actually based on AI again. LLMs are used to translate the customer’s database into sets of numbers known as embeddings. These embeddings numerically encode the meaning of the data, and this encoding allows us to find the bits of information that are closest (mathematically speaking) in meaning to some question. I know I said that AI isn’t magic, but embeddings get pretty close!
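As a rough illustration, here is a small sketch using the open-source sentence-transformers library (an assumption for illustration; any embedding model behaves similarly). A question about "moisture protection" lands much closer, mathematically, to a passage about weather proofing than to one about concrete strength.

```python
# Sketch: embed a query and a few passages, then rank passages by cosine similarity.
# Assumes the sentence-transformers package; the model name is a common public default.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "moisture protection at the foundation"
passages = [
    "Section 07 10 00 covers weather proofing and dampproofing membranes.",
    "The concrete mix design requires a 28-day strength of 4000 psi.",
]

query_vec = model.encode(query, convert_to_tensor=True)
passage_vecs = model.encode(passages, convert_to_tensor=True)

# Cosine similarity: higher means closer in meaning.
scores = util.cos_sim(query_vec, passage_vecs)[0].tolist()
for passage, score in sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True):
    print(f"{score:.3f}  {passage}")
```

In practice, the weather-proofing passage should score noticeably higher even though it never uses the words "moisture protection."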
The rerank stage comes next. This stage uses a different kind of AI model called a reranker to take all of the material we pulled out of the cupboard and sift through it with a fine-tooth comb in order to bump the most relevant material to the top of our research list. This is conceptually similar to the GC sifting through all of the papers spread out on the table to find the specific pieces of information that will be used to respond.
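Conceptually, a reranker scores each question-and-passage pair directly instead of comparing precomputed embeddings, which is slower but more precise. A minimal sketch, again assuming sentence-transformers and a publicly available cross-encoder model (not necessarily what runs in production):

```python
# Sketch: rerank candidate passages with a cross-encoder model.
# Assumes the sentence-transformers package and a public MS MARCO cross-encoder.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

question = "What moisture protection is required at the foundation walls?"
candidates = [
    "Apply two coats of bituminous dampproofing to all below-grade foundation walls.",
    "Foundation walls shall be formed with 3/4-inch plywood and snap ties.",
    "Provide a drainage mat over the waterproofing membrane before backfill.",
]

# The cross-encoder scores each (question, passage) pair; higher means more relevant.
scores = reranker.predict([(question, passage) for passage in candidates])
ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)

top_passages = [passage for passage, _ in ranked[:2]]
```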
Finally, the reranked information is handed over to the LLM along with the original question or prompt so a final response can be generated. This is akin to the general contractor composing the RFI, or whatever else they are trying to accomplish.
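That final step is largely prompt assembly. A minimal sketch, assuming the OpenAI Python SDK purely as a stand-in for whichever LLM is actually used (the model name and prompt wording are illustrative):

```python
# Sketch: feed the reranked passages to an LLM along with the original question.
# Assumes the OpenAI Python SDK (>= 1.0) purely for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_answer(question: str, passages: list[str]) -> str:
    context = "\n\n".join(f"[Source {i + 1}] {p}" for i, p in enumerate(passages))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided project sources. Cite sources by number."},
            {"role": "user",
             "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```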
This represents a high-level overview of how search (and specifically Retrieval-Augmented Generation) can be used to support AI in a B2B SaaS product, but the details and techniques go much deeper, in the same way that admiring the exterior of a building doesn’t reveal the massive number of details that went into constructing it. We believe that amazing AI begins with amazing search, and we put a lot of effort into building out our search foundation in the early days.
In future posts, I will talk about some of the techniques we employ at Constructable to enrich our customers’ data with context so that searches and responses become even more accurate, how we keep that context up to date as customer data changes and evolves, how we enable semantic photo search, and how we leverage object detection to pull apart construction plans and link things together.
We are Constructable
At Constructable we have built the world’s first AI-first, all-in-one project management system. GCs have an incredibly hard job, and we believe that software can and should make your life easier, not more complex. While we are passionate about AI and its increasing potential, we believe the best product is the one you don’t have to think about.
If you are interested in seeing what Constructable can accomplish for you, fill out our contact form, or just email me directly at [email protected]—I’d love to meet you.