A query language for language models
While interacting with ChatGPT, you may have noticed occasional delays in response time. This may be due to the advanced mathematical and probabilistic computations that the model performs in order to generate its responses. In situations where the model's available computational resources are limited, it may take longer to process and generate an appropriate response.
The experience of encountering high latency when accessing structured data due to suboptimal query performance is a familiar problem for many users of SQL. However, as the Structured Query Language (SQL) and associated technologies have evolved, various optimization and tuning techniques have been developed to improve the efficiency of data retrieval. These include the use of indexes and query rewriting, which can automatically optimize and improve the performance of SQL queries, resulting in faster and more efficient data access.
While we have SQL for querying and interacting with the relational model of data, we currently lack a similar language for querying language models. To use these models effectively and retrieve answers in a cost-efficient and timely manner, we need a specialized query language, similar to SQL. The current method of using prompts to interact with language models can be challenging, particularly when trying to craft optimal prompts. Instead, a higher-level abstraction, call it Language Model Query Language (LQL), would be beneficial. Such an abstraction would be easy to understand and implement, and would democratize and industrialize the use of language models by reducing the need for specialized prompt-engineering skills.
To demonstrate the complexity of prompts, let me give a few examples.
Example #1: In visualise.ai (which probably uses Stable Diffusion underneath), I tried the following two prompts:
"A man without beard" and "A clean-shaven man". See the output it produced.
So it looks like image models ignore negative tokens in prompts.
Example #2: See how it interpreted the two prompts below. In the first case, it thought I wanted two Tom Cruises, one with natural lighting and the other with a blue background.
These are not the only examples; to get what you need, you have to be very specific and know a certain prompt vocabulary. It is a tedious and relatively niche activity.
Imagine the convenience of being able to interact with a language model using a query such as:
SELECT "A caricature of Tom Cruise"
FROM DALL-E WHERE TEMPERATURE = 1
AND
ADDITIONAL_TOKENS IN ('natural lighting', 'blue background')
This type of query language would make it much easier to access the information and outputs desired from the language model, by specifying the desired parameters such as temperature and additional tokens in a structured and intuitive way, similar to how we use SQL to query relational databases.
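To make the idea concrete, here is a minimal Python sketch of how such a query might be parsed into structured parameters before being handed to a model backend. LQL does not exist, so the grammar, the `parse_lql` function, and the parameter names are all hypothetical:

```python
import re

def parse_lql(query: str) -> dict:
    """Parse a tiny subset of the hypothetical LQL into structured parameters."""
    # Collapse whitespace so the regexes can work on a single line.
    q = " ".join(query.split())

    prompt = re.search(r'SELECT\s+"([^"]+)"', q, re.IGNORECASE)
    model = re.search(r"FROM\s+([\w-]+)", q, re.IGNORECASE)
    temperature = re.search(r"TEMPERATURE\s*=\s*([\d.]+)", q, re.IGNORECASE)
    tokens = re.search(r"ADDITIONAL_TOKENS\s+IN\s*\(([^)]*)\)", q, re.IGNORECASE)

    return {
        "prompt": prompt.group(1).strip() if prompt else None,
        "model": model.group(1) if model else None,
        "temperature": float(temperature.group(1)) if temperature else None,
        "additional_tokens": [t.strip().strip("'") for t in tokens.group(1).split(",")]
        if tokens
        else [],
    }

query = """
SELECT "A caricature of Tom Cruise"
FROM DALL-E WHERE TEMPERATURE = 1
AND ADDITIONAL_TOKENS IN ('natural lighting', 'blue background')
"""
params = parse_lql(query)
# params["model"] is "DALL-E", params["temperature"] is 1.0, and the
# additional tokens come back as a clean Python list.
```

An engine could then map this dictionary onto whatever parameters the underlying model actually exposes, keeping the prompt-engineering details hidden behind the declarative query.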
A query such as the one described allows for greater optimization in the retrieval of information from the language model. By specifying the parameters in a structured way, it enables the optimizer to perform token pruning more effectively and to reduce the number of forward and backward traversals required to generate the next token with the highest probability. This aims to minimize the use of GPU resources and the amount of token scanning necessary to complete the query. Additionally, by providing an explain facility similar to SQL's EXPLAIN, the optimizer could demonstrate how it retrieved the tokens used to complete the sentence or generate an image, addressing the issue of explainability.
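As a rough illustration of what an EXPLAIN-style facility could report, here is a toy Python sketch. The `QueryPlan` class, the `run_with_explain` function, and the plan steps are invented for illustration and do not correspond to any real engine:

```python
from dataclasses import dataclass, field

@dataclass
class QueryPlan:
    """Hypothetical EXPLAIN-style trace for an LQL query (illustrative only)."""
    steps: list = field(default_factory=list)

    def record(self, step: str) -> None:
        self.steps.append(step)

    def explain(self) -> str:
        # Render the recorded steps as a numbered plan, like SQL's EXPLAIN output.
        return "\n".join(f"{i + 1}. {s}" for i, s in enumerate(self.steps))

def run_with_explain(params: dict) -> QueryPlan:
    """Record, step by step, how the (imaginary) engine would serve the query."""
    plan = QueryPlan()
    plan.record(f"route query to model {params['model']}")
    plan.record(f"set sampling temperature to {params['temperature']}")
    for tok in params["additional_tokens"]:
        plan.record(f"bias decoding toward token group '{tok}'")
    plan.record(f"decode prompt: {params['prompt']!r}")
    return plan

plan = run_with_explain({
    "model": "DALL-E",
    "temperature": 1.0,
    "additional_tokens": ["natural lighting", "blue background"],
    "prompt": "A caricature of Tom Cruise",
})
print(plan.explain())
```

A real optimizer would of course record decisions about token pruning and resource use rather than these placeholder strings, but the interface idea is the same: every query can return its plan alongside its result.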
As the use of language models continues to grow in popularity, it is likely that we will see advancements in the methods used to query and interact with them. The development of a specialized query language, similar to SQL, would be a significant step in democratizing and industrializing the use of these models. It would make it easier for a wider range of users to access and utilize the vast amount of information and capabilities that these models offer, resulting in a more efficient and effective use of their potential.