I have been developing the Jigsaw approach to solve AI problems assisted by LLMs, especially for non-developers or developers who are not familiar with AI. We (Anjali Jain and I) have also been developing this approach in our courses at the University of Oxford.
A recent paper (The Effects of Generative AI on High Skilled Work: Evidence from Three Field Experiments with Software Developers) concludes that the use of generative AI code-suggestion tools increases software developer productivity by 26.08% (SE: 10.3%).
However, we address a related problem: enabling non-developers, and developers unfamiliar with AI, to build AI solutions with LLM assistance. In this post, I explain the overall set of steps we undertake.
First, here are some notes based on my experience:
- AI here means machine learning and deep learning, but we also include generative AI (RAG and GraphRAG)
- This is an evolving approach. I have learnt a lot both from my students at Oxford and from Erdos Research
- The ‘jigsaw approach’ is based on what ‘complete’ looks like, just like the picture on the box of a jigsaw puzzle. There are essentially four things to aspire to: model evaluation metrics, acceptance tests, code templates and collaboration metrics (if working in a group for the design phase). The collaboration metrics are based on using the lean canvas, which I shall elaborate on in a subsequent post
- We continue to work with Magnus (a firefighter from Iceland) and Dr Chougule (an NHS doctor) from our first Oxford course, who also inspired me to develop these ideas further
- Here, low code means assisted by an LLM. You still need to understand how code works
- We consider a single workflow for generative and non-generative AI. Broadly, the RAG framework helps to load a series of documents; from these you can create either an agent (traditional RAG), a data structure or a graph. From the data structure, we can use traditional (non-generative) algorithms like regression and classification. Thus, this single flow unites the generative and non-generative modes (a minimal sketch of the non-generative branch follows these notes)
- This is, by definition, an iterative process. We store each iteration and build in the mindset of improving models. Development proceeds in cycles
- We deploy using GPT, Streamlit or just a notebook. We do not cover MLOps in the low-code approach (a minimal Streamlit sketch follows these notes)
- The process is collaborative, either as pair programming or in groups for a hackathon, which we have tried successfully before
- In the future, multimodal AI will have a bigger role in the development of AI. I was fascinated by ViperGPT when I first saw it; multimodal AI is the future of development, and we have not seen its full impact yet
- Similarly, agentic workflows will be a key part of developing future applications
- The model evaluation metric is the north star. The metric ties to all the steps of the jigsaw methodology
- We start with a business metric and tie it to an AI metric (a classification, regression, RAG evaluation or GraphRAG evaluation metric)
- We see the methodology integrated with partners, tools (like Cursor), clients and institutions
- LLMs are (largely) concerned with unstructured data. In fact, we can see the RAG process as moving from unstructured data to structured data and insights.
- By considering test-driven development (ATDD, BDD and TDD) upfront, we bring domain experts into the picture earlier. But by using iterations, we do not get bogged down by the challenges of test-driven development (see ‘Why hasn't TDD taken over the world?’). An acceptance-test sketch follows the numbered steps below
- Similarly, we emphasise the creation of synthetic data upfront. It is relatively easy to create synthetic data using LLMs (a short sketch follows these notes)
- One of the unique challenges in learning AI is the known unknowns problem, i.e. given a problem, you need to know multiple ways to solve it. We take this into consideration
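As a rough illustration of the single workflow mentioned in the notes above, here is a minimal sketch of the non-generative branch: documents have already been distilled (for example by an LLM or a RAG extraction step, not shown) into a tabular data structure, and a classical model is trained on it. The file name churn.csv, the column names and the use of scikit-learn are illustrative assumptions, not part of the methodology itself.

```python
# Sketch: documents -> (LLM/RAG extraction, not shown) -> tabular data structure -> classical model.
# Assumes churn.csv exists with numeric feature columns and a binary "churned" label.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("churn.csv")                 # the structured data distilled from documents
X, y = df.drop(columns=["churned"]), df["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("F1 score:", f1_score(y_test, model.predict(X_test)))
```

The generative branch would instead index the same documents for retrieval (traditional RAG) or build a graph over them (GraphRAG); the point of the single workflow is that both branches start from the same loaded documents.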
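On deployment, a Streamlit script is usually the quickest low-code option. The sketch below wraps a placeholder scoring function; the input fields and the predict_churn stub are illustrative assumptions, to be replaced by whatever model was trained earlier in the workflow.

```python
# app.py -- a minimal Streamlit front end for a trained model (run with: streamlit run app.py)
import streamlit as st

def predict_churn(tenure_months: float, monthly_spend: float) -> float:
    # Placeholder stub: in practice this would call the model trained earlier in the workflow.
    return min(1.0, 0.01 * tenure_months + 0.005 * monthly_spend)

st.title("Churn prediction demo")
tenure = st.number_input("Tenure (months)", min_value=0.0, value=12.0)
spend = st.number_input("Monthly spend", min_value=0.0, value=50.0)

if st.button("Predict"):
    st.write(f"Estimated churn probability: {predict_churn(tenure, spend):.2f}")
```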
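And on synthetic data: the simplest route is to prompt an LLM for rows in a fixed schema. The sketch below uses the OpenAI Python client (v1.x); the model name, the schema and the prompt wording are illustrative assumptions.

```python
# Sketch: generating synthetic tabular rows with an LLM (OpenAI Python client >= 1.0 assumed).
import csv
import io
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Generate 20 synthetic customer records as CSV with header "
    "tenure_months,monthly_spend,churned (churned is 0 or 1). "
    "Return only the CSV, no commentary."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

rows = list(csv.DictReader(io.StringIO(response.choices[0].message.content)))
print(f"Generated {len(rows)} synthetic rows")
```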
For a scenario (typically working in pairs):
- Requirement analysis based on metrics
- User stories based on metrics
- Test criteria based on metrics
1. Define the Problem and Objectives
2. Determine Model Evaluation Metrics
3. Establish Test Criteria
4. Data Collection and Preprocessing (collect data or create synthetic data), including unstructured data for RAG
5. Baseline Model Development
6. Iterative Improvement of the Model
   - Feature Selection and Engineering
   - Model Selection and Tuning
7. Model Evaluation and Validation against the AI metric, the business metric and the tests
8. Deployment and Monitoring
9. Consider other user stories
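To make steps 3, 5 and 7 (and the acceptance-test idea from the notes) concrete, here is a minimal sketch: the evaluation metric is written as an acceptance test before the model exists, and the baseline is then checked against it on each iteration. The 0.75 F1 threshold, the churn.csv file and the use of pytest are illustrative assumptions.

```python
# test_acceptance.py -- acceptance tests written upfront (step 3), run against the baseline (steps 5 and 7).
# Run with: pytest test_acceptance.py
# Assumes churn.csv exists with numeric features and a binary "churned" label.
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

F1_THRESHOLD = 0.75  # the AI metric agreed with the domain expert, tied to the business metric

def train_and_score(model):
    df = pd.read_csv("churn.csv")
    X, y = df.drop(columns=["churned"]), df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    model.fit(X_train, y_train)
    return f1_score(y_test, model.predict(X_test), zero_division=0)

def test_baseline_beats_trivial_model():
    # Step 5: the baseline should at least beat a trivial majority-class predictor.
    assert train_and_score(LogisticRegression(max_iter=1000)) > train_and_score(
        DummyClassifier(strategy="most_frequent")
    )

def test_model_meets_acceptance_threshold():
    # Step 7: validate against the agreed metric; early iterations may fail until step 6 improves the model.
    assert train_and_score(LogisticRegression(max_iter=1000)) >= F1_THRESHOLD
```

Because the tests encode the metric rather than the implementation, they stay stable across iterations and give domain experts something concrete to agree to early.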
I will share more about collaborative metrics (design phase) in a subsequent post.