We aim to minimize the amount of unnecessary work our SQL engine performs, but we're starting to run out of optimizations that benefit all queries equally. Increasingly, we can make one shape of query faster only at the expense of a different kind. This blog post by Max Hoffman looks at how we balance these performance trade-offs. https://lnkd.in/e5CD4kqX
DoltHub's feed
Most relevant posts
-
New prototype: instant-feedback SQL editing, aka "query-as-you-type". It uses DuckDB's parser, duckdb-wasm for local-first caching, and MotherDuck as the backend to enable keystroke-fast result-set previews. Also works on any CTE or subquery! Still early, but it should enable arbitrary scale while retaining a near-real-time feel when writing ad hoc queries.
-
Quick Tips for Optimizing Slow SQL Queries

1. Use Indexing
   i. Index columns used in WHERE, JOIN, and ORDER BY clauses.
   ii. Avoid over-indexing.
2. Limit Data Retrieval
   i. SELECT specific columns (not SELECT *).
   ii. Use LIMIT to restrict rows.
3. Optimize Joins
   i. Prefer INNER JOIN over OUTER JOIN.
   ii. Join on indexed columns.
4. Avoid N+1 Queries
   i. Use eager loading or batch queries.
5. Optimize WHERE Clauses
   i. Use specific conditions (e.g., =, BETWEEN).
   ii. Avoid functions on indexed columns.
6. Enable Query Caching
   i. Cache repeated queries.
   ii. Monitor cache performance.
7. Analyze Execution Plans
   i. Use EXPLAIN to identify bottlenecks.
8. Partition Large Tables
   i. Partition by date, region, etc.
   ii. Optimize for partition pruning.

Implement these strategies to speed up slow queries and improve performance.
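A minimal sketch of tips 1, 2, and 7 using SQLite via Python's stdlib; the table and column names here are illustrative assumptions, not from the post.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
con.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                [(f"cust{i}", i * 1.5) for i in range(1000)])

# Tip 1: index the column used in the WHERE clause.
con.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# Tips 2i and 2ii: select only the columns you need and cap the row count.
rows = con.execute(
    "SELECT id, total FROM orders WHERE customer = ? LIMIT 10", ("cust42",)
).fetchall()

# Tip 7: inspect the plan; with the index in place, SQLite reports an
# index search instead of a full table scan.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM orders WHERE customer = ?", ("cust42",)
).fetchall()
print(plan)  # the detail column mentions idx_orders_customer
```

Dropping the `CREATE INDEX` line and re-running shows the plan flip to a full scan, which is exactly the bottleneck tip 7 is meant to surface.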
-
The most common trap people fall into... unbounded query complexity.

Letting applications run wild with complex queries can cost you big time. Imagine a car speeding down a highway without brakes. Each query adds to the load, pushing your system to its limits. The smart move is to set boundaries, control the speed, and ensure stability. It's not about restricting capabilities; it's about maintaining a balance between performance and cost. Frameworks like Mercurius for @GraphQL can help manage this balance effectively.

→ Here's how to handle unbounded query complexity:
- Limit query complexity: set caps on query depth and the number of fields requested.
- Use cursor-based pagination: break down large data requests into manageable chunks.
- Avoid dynamic JOINs in SQL: static queries prevent costly transformations and full table scans.
- Implement caching: use caching plugins to improve query performance. (PS. DM me to check our out-of-the-box caching solution on Platformatic.)
- Regularly review queries: analyze and optimize queries to ensure they are efficient and secure.

What did I miss?
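The cursor-based pagination point can be sketched with SQLite: instead of OFFSET (which scans and discards rows), seek past the last id seen. Table name and page size are illustrative assumptions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO items (name) VALUES (?)",
                [(f"item{i}",) for i in range(10)])

PAGE_SIZE = 4

def fetch_page(after_id=0):
    # Keyset pagination: the WHERE clause seeks directly to the cursor
    # position, so cost stays flat even deep into the result set.
    return con.execute(
        "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, PAGE_SIZE),
    ).fetchall()

page1 = fetch_page()
page2 = fetch_page(after_id=page1[-1][0])  # cursor = last id of previous page
print(page1)  # ids 1-4
print(page2)  # ids 5-8
```

The client passes the last id it saw as the cursor for the next request, which is the same shape GraphQL cursor connections use.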
-
Creating the table structure before inserting data into a temporary table can optimize performance in Microsoft SQL Server. By using INSERT INTO instead of SELECT INTO, you can enhance efficiency. #MicrosoftSQLServer #TempTable #PerformanceTuning #PAGELATCH_EX
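The post is about SQL Server, where SELECT INTO creates the table on the fly; this sketch only shows the shape of the two-step alternative, using SQLite (whose temp-table syntax differs from T-SQL) and made-up table names.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE source (id INTEGER, val TEXT)")
con.executemany("INSERT INTO source VALUES (?, ?)", [(1, "a"), (2, "b")])

# Step 1: declare the temp table's structure explicitly up front.
con.execute("CREATE TEMP TABLE tmp_work (id INTEGER, val TEXT)")

# Step 2: populate it with INSERT INTO ... SELECT, the pattern the post
# recommends over letting SELECT INTO create the table implicitly.
con.execute("INSERT INTO tmp_work SELECT id, val FROM source")

rows = con.execute("SELECT * FROM tmp_work ORDER BY id").fetchall()
print(rows)  # [(1, 'a'), (2, 'b')]
```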
-
In my post this week, I will explain why accurate cardinality estimation is essential for creating efficient execution plans and share how I troubleshoot issues caused by inaccurate estimates. Using the StackOverflow2010 database as an example, I will demonstrate how correcting a query's cardinality estimates led to a 2x performance improvement. https://lnkd.in/gefbvyUF
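The post targets SQL Server and the StackOverflow2010 database, but the underlying idea generalizes: planners rely on collected statistics to estimate row counts. A small sketch with SQLite's ANALYZE, which gathers the statistics its planner consumes; table and index names are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, score INTEGER)")
con.executemany("INSERT INTO posts (score) VALUES (?)",
                [(i % 100,) for i in range(1000)])
con.execute("CREATE INDEX idx_posts_score ON posts (score)")

# Collect statistics; without them the planner falls back on built-in
# defaults, which is one way cardinality estimates drift from reality.
con.execute("ANALYZE")

# sqlite_stat1 records the row counts and rows-per-index-entry figures
# the planner uses for its estimates.
stats = con.execute(
    "SELECT tbl, idx, stat FROM sqlite_stat1 WHERE idx = 'idx_posts_score'"
).fetchall()
print(stats)  # e.g. [('posts', 'idx_posts_score', '1000 10')]
```

Here "1000 10" means 1000 rows with roughly 10 rows per distinct score, which is exactly the kind of figure that, when stale or missing, produces the bad estimates the post troubleshoots.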
-
Want to run a lazy Polars query but only get a few rows back (like a LIMIT in SQL)? Formerly we used fetch, but this is now folded into the standard API as .head(N).collect(). It's also a nice example of a query optimization being applied when using .head in lazy mode.
-
DuckDB: Query Processing Is King... With in-process, open source DuckDB, you can create an in-memory database that does not persist data, or you can use a local file. Here's how to get started. By David Eastman.
-
NEW POST! Find unparameterized queries with the Query Store. In today's article, our colleague Sergio Roig Sluijsmans tells us how we can effectively detect and optimize ad-hoc workloads with the Query Store. Find out more, with examples, on our blog! https://lnkd.in/eUBP5iYB #SQL #SQLSERVER #DATABASE #QueryStore
-
Here's a simple but very helpful tip for flexible queries: use parameters in your query.

Having a value hard-coded in your query is fine if you're running a quick ad-hoc query. But for a query that will be used more than once, or that has the same value repeated in multiple places, it's much better to use parameters. Parameters let you fill in the value once and have it apply to every place in the query that needs it.

For example, if you're running a query with multiple CTEs, each with tables partitioned by date, you have two options: either input the relevant date each time, or use parameters and input the value once. This prevents mistakes, and also makes it easier to run the query for a different date without changing it in every location. When you run the query, a pop-up will appear where you can input the value.

Parameters look different in different IDEs, but the idea is the same. When using RubyMine, I used :parameter_name. With DataGrip, I'm using $(parameter_name).

I'm currently rerunning a query a number of times, and I need the date value applied in 3 different places and a name value applied in 4 places. Using parameters means I don't make mistakes by forgetting to change one of the values, and I only need to input them once.

#Parameters #SQL
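The same idea can be sketched with Python's sqlite3, whose named-parameter syntax happens to match the :name style mentioned for RubyMine (DataGrip's $(name) style differs); the table and values are made up for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (day TEXT, kind TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [("2024-01-01", "a"), ("2024-01-02", "b"), ("2024-01-02", "c")])

# The same :day parameter appears in both halves of the query, but its
# value is supplied exactly once, in the params dict.
query = """
    SELECT kind FROM events WHERE day = :day AND kind = 'b'
    UNION ALL
    SELECT kind FROM events WHERE day = :day AND kind = 'c'
"""
rows = con.execute(query, {"day": "2024-01-02"}).fetchall()
print(rows)  # [('b',), ('c',)]
```

Rerunning for a different date means changing one dict value instead of hunting down every occurrence, which is exactly the mistake the tip guards against.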
-