SQL Server Fine-Tune & Optimization Guide

Table of Contents

  1. Indexes
  2. Query Optimization
  3. Statistics
  4. Hardware & Configuration
  5. High Availability and Disaster Recovery (HADR)
  6. Replication
  7. Partitioning
  8. Tempdb Optimization
  9. Regular Maintenance
  10. Different Types of Database Backups
  11. Use of Latest Features
  12. Resource Governance


1. Review and Reassess Indexes:

Conduct a thorough review of existing indexes. Analyze their usage, effectiveness, and impact on query performance.
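One hedged starting point for such a review is the index usage DMV (counters reset on instance restart, so make sure the server has been running long enough to be representative; table and index names come from whatever exists in your database):

```sql
-- Sketch: indexes with many writes but few or no reads since the last restart
-- are candidates for review (not for automatic removal).
SELECT OBJECT_NAME(s.object_id)                     AS TableName,
       i.name                                       AS IndexName,
       s.user_seeks + s.user_scans + s.user_lookups AS Reads,
       s.user_updates                               AS Writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND i.index_id > 0   -- skip heaps
ORDER BY Writes DESC, Reads ASC;
```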


Positive Impact of Indexes on Query Performance:

- Faster data retrieval; efficient sorting and grouping (ORDER BY, GROUP BY, DISTINCT clauses).

- Optimized join performance: indexes speed up JOIN operations by reducing the time it takes to find matching rows in the joined tables.

- Improved query plan selection: the optimizer can choose index seeks over table scans, which are generally faster and consume fewer resources.

- Consider index types: use the appropriate type of index (clustered, non-clustered, columnstore, etc.).


Negative Impact of Indexes on Performance:

- Insert, update, and delete overhead: every index must be maintained when data changes.

- Space requirements: especially for large tables or when there are multiple indexes on a single table.

- Index fragmentation: over time, indexes can become fragmented, leading to inefficient data retrieval. Regular maintenance (index rebuilds or reorganization) is necessary to keep indexes performing optimally.

- Inappropriate indexes: the wrong type of index, or an index not aligned with the query patterns, can hurt performance rather than help it.
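The fragmentation point above can be checked directly with a DMV. A minimal sketch (the 5%/30% thresholds are common rules of thumb, not fixed rules):

```sql
-- Find fragmented indexes in the current database and suggest a maintenance action.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name                     AS IndexName,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
           ELSE 'OK'
       END AS SuggestedAction
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0   -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then, for a specific index (names are illustrative):
-- ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;     -- heavier, removes fragmentation
-- ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;  -- lighter-weight, always online
```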


Power of New Features:

SQL Server 2019 introduces new indexing enhancements. Consider leveraging features like Accelerated Database Recovery (ADR), which can improve index operations' efficiency and availability.

ADR is a feature introduced in SQL Server 2019 aimed at improving database availability and speeding up the recovery process, especially after long-running transactions or crashes.

i. Transaction log redesign: ADR fundamentally changes the way SQL Server manages recovery-related logging, reducing log usage during routine operations and improving performance.

ii. Faster recovery: ADR significantly reduces the time required for database recovery after a crash or restart, especially when dealing with long-running transactions. Instead of fully rolling transactions back or forward, ADR tracks changes in a version store, which leads to faster recovery times.

iii. Long-running transactions: traditional recovery could take a long time with long-running transactions because they had to be rolled back or rolled forward entirely; ADR stores changes in a way that lets the system recover quickly regardless of transaction length.

iv. Improved availability: with ADR, databases can come online more quickly after an unexpected shutdown or crash, reducing downtime and improving overall availability.

To leverage ADR in SQL Server 2019, enable it per database (it is not tied to the compatibility level, although level 150 is the SQL Server 2019 default):

ALTER DATABASE <DBMGK> SET ACCELERATED_DATABASE_RECOVERY = ON;


Advantages of Columnstore Indexes:

A columnstore index stores each column of data separately. It is highly efficient for read-heavy queries that scan large datasets and perform aggregations, which are common in analytical and reporting workloads.

CREATE NONCLUSTERED COLUMNSTORE INDEX IX_ColumnStoreIndex

ON TableName (Column1, Column2, ...);


i. Use cases for columnstore indexes:

a. Data warehousing and analytics: large tables where queries often involve full table scans or aggregations over large datasets.

b. Real-time operational analytics: SQL Server supports combining columnstore and rowstore indexes on the same table.

ii. Improved query performance: columnstore indexes can significantly improve query performance on large datasets; they are particularly effective for OLAP (Online Analytical Processing) workloads.

iii. Data compression: data in columnstore indexes is highly compressed, reducing the storage footprint.

iv. Batch processing: columnstore indexes process data in batches, which is more efficient for large-scale queries (data warehousing scenarios).

v. Parallel processing: multiple CPU cores can be used to process different columns simultaneously.

vi. Types of columnstore indexes:

a. Clustered columnstore indexes: the entire table is stored as a columnstore, replacing the traditional row-based storage.

b. Non-clustered columnstore indexes: a secondary index that allows the same data to be stored in both row-wise and column-wise formats.
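For completeness alongside the non-clustered syntax shown earlier, a minimal sketch of the clustered variant (table and index names are illustrative):

```sql
-- A clustered columnstore index replaces the rowstore storage of the entire table;
-- no column list is given because every column is stored in columnar format.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;
```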


Weaknesses of Columnstore Indexes:

i. Write operations: while reads are fast, write operations (INSERT, UPDATE, DELETE) can be slower than with traditional rowstore indexes, especially for clustered columnstore indexes.

ii. Memory usage: columnstore indexes can be memory-intensive, particularly when the index is large.

iii. Maintenance: like traditional indexes, columnstore indexes require maintenance, although the nature of the maintenance tasks can differ.


Dynamic Management Views (DMVs):

DMVs are a set of system views and functions that support performance tuning, capacity planning, and overall health monitoring of our SQL Server instances.


Database-Related DMVs (sys.dm_db_*):

These DMVs provide information about database-level objects, including file space usage, partition statistics, and index physical statistics, and help in analyzing space usage.

- sys.dm_db_index_physical_stats – Returns size and fragmentation information for the data and indexes of the specified table or view.

- sys.dm_db_partition_stats – Provides page and row-count information for every partition in a table or index.


Execution-Related DMVs (sys.dm_exec_*):

These offer insights into currently executing SQL, session details, query plans, query statistics, and connection statistics. Use them for analyzing currently running queries, examining query plans, troubleshooting performance issues, and understanding resource usage.

- sys.dm_exec_query_stats – Returns aggregate performance statistics for cached query plans.

- sys.dm_exec_requests – Provides information about each request that is executing within SQL Server.

- sys.dm_exec_sessions – Displays active user connections and internal tasks.
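These execution DMVs combine naturally. A sketch of a "top queries by CPU" query, using CROSS APPLY to sys.dm_exec_sql_text to resolve the statement text from each cached plan:

```sql
-- Top 10 cached queries by total CPU time (worker time is reported in microseconds)
SELECT TOP (10)
       qs.total_worker_time / 1000 AS TotalCpuMs,
       qs.execution_count,
       SUBSTRING(st.text,
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset END
                   - qs.statement_start_offset) / 2) + 1) AS QueryText
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```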


OS-Related DMVs (sys.dm_os_*):

These expose information about SQL Server processes, memory, threads, system resources, and locks.

- sys.dm_os_wait_stats – Returns information about all the waits encountered by threads that executed.

- sys.dm_os_memory_clerks – Shows memory usage by SQL Server memory clerks.

- sys.dm_os_ring_buffers – Stores temporary diagnostic details such as errors, performance metrics, and system activity.
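A common sketch on top of sys.dm_os_wait_stats filters out benign idle waits before ranking (only a few well-known idle wait types are excluded here for brevity; production scripts typically exclude many more):

```sql
-- Top waits since the instance started (or since the stats were last cleared)
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,   -- time spent waiting for CPU after the resource was ready
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'BROKER_TASK_STOP', 'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```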


Transaction-Related DMVs (sys.dm_tran_*):

These give details about transactions, locks, and related resources.

- sys.dm_tran_active_transactions – Provides information about transactions that are active in the instance of SQL Server.

- sys.dm_tran_locks – Provides information about currently active lock manager resources.
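For example, sys.dm_tran_locks can be filtered to the requests that are actually blocked (a sketch):

```sql
-- Lock requests currently waiting, i.e. sessions blocked on a lock
SELECT tl.request_session_id,
       tl.resource_type,
       tl.resource_database_id,
       tl.request_mode,      -- e.g. S, X, IX
       tl.request_status     -- GRANT, WAIT, or CONVERT
FROM sys.dm_tran_locks AS tl
WHERE tl.request_status = 'WAIT'
ORDER BY tl.request_session_id;
```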


I/O-Related DMVs (sys.dm_io_*):

These provide information about I/O operations, virtual file statistics, and pending I/O requests, helping to identify I/O bottlenecks and monitor tempdb usage.

- sys.dm_io_virtual_file_stats – Returns I/O statistics for data and log files.

- sys.dm_io_pending_io_requests – Provides information about pending I/O requests in SQL Server.
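Per-file latency can be derived from sys.dm_io_virtual_file_stats (a sketch; NULLIF guards against division by zero on files with no activity):

```sql
-- Average read/write latency per database file
SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       mf.name                  AS FileName,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS AvgReadLatencyMs,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS AvgWriteLatencyMs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```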


Execution Related DMVs: These focus on providing insights into queries being executed, their execution plans, resource usage, and query statistics.

i. sys.dm_exec_query_stats

ii. sys.dm_exec_requests

iii. sys.dm_exec_query_plan


Resource Utilization DMVs: These DMVs offer information about resource consumption such as CPU, memory, and I/O usage by queries or sessions.

i. sys.dm_os_performance_counters

ii. sys.dm_os_wait_stats

iii. sys.dm_os_sys_memory

iv. sys.dm_os_waiting_tasks – useful for assessing overall server health and performance.


Index and Storage DMVs: These provide details about indexes, their usage, fragmentation, and statistics, helping in optimizing query performance.

i. sys.dm_db_index_usage_stats

ii. sys.dm_db_index_physical_stats

iii. sys.dm_db_index_operational_stats


Server Health and Configuration DMVs: These DMVs give an overview of server health, configuration settings, and system-related information.

i. sys.dm_os_sys_info

ii. sys.dm_server_services

iii. sys.dm_os_sys_memory


Query Execution Statistics DMVs: Focused on detailed statistics related to query execution and its components like plans, cached query plans, and query text.

i. sys.dm_exec_query_stats

ii. sys.dm_exec_cached_plans

iii. sys.dm_exec_sql_text


Top 10 useful Dynamic Management Views (DMVs) in SQL Server for performance-related analysis:

- sys.dm_exec_query_stats: Provides statistics about query execution; helps identify which queries consume the most resources.

- sys.dm_os_wait_stats: Shows what processes are waiting for resources, aiding in identifying performance bottlenecks.

- sys.dm_exec_requests: Shows details about the currently executing requests, such as query text and resource usage.

- sys.dm_db_index_usage_stats: Shows how indexes are used, helping to determine which indexes are crucial for performance.

- sys.dm_exec_query_plan: Provides the execution plan for a specific query, showing how SQL Server executes it.

- sys.dm_os_performance_counters: Exposes server health metrics such as waits, locks, and other performance counters.

- sys.dm_exec_sessions: Offers details about active sessions, assisting in identifying resource-consuming sessions or connections.

- sys.dm_db_missing_index_details: Returns missing-index details, helping to optimize query performance by suggesting potential new indexes.

- sys.dm_os_sys_info: Provides system-wide information, like the number of processors, helping to understand the underlying hardware resources.

- sys.dm_exec_cached_plans: Offers details about query plans that are cached in memory, assisting in analyzing the efficiency of cached plans and identifying optimization opportunities.
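The missing-index DMVs from the list above can be joined into a rough "suggested indexes" report (a sketch; treat the suggestions as hints to evaluate against your workload, not commands to run blindly):

```sql
-- Missing-index suggestions ranked by seeks weighted by estimated impact
SELECT TOP (10)
       mid.statement AS TableName,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact   -- estimated % improvement if the index existed
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```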


2. Query Optimization

Review and optimize our queries. Use tools like SQL Server Profiler, Execution Plans, or Query Store to identify slow queries and optimize them by restructuring, rewriting, or adding hints to improve their performance.

- SQL Server Profiler: a GUI over SQL Trace for monitoring an instance of the Database Engine or Analysis Services. Here are five essential trace templates commonly used for performance fine-tuning:

i. Tuning

ii. Deadlocks

iii. Errors and Warnings

iv. Profiler Standard

v. Stored Procedures


i. Tuning

Objective: Focuses on identifying slow-running queries and potential areas for optimization.

Events: RPC:Completed, SQL:BatchCompleted, SP:StmtCompleted, etc.

Columns: Duration, CPU, Reads, Writes, TextData, and more to capture query-related information.

Filters: Duration > X milliseconds, CPU > X, Reads > X, etc.

Example: Set a filter to capture only events where the duration is greater than X milliseconds (e.g., 5000 for procedures that run over 5 seconds).


ii. Deadlocks

Objective: Captures information related to deadlock events, helping identify conflicting transactions.

Events: Deadlock Graph, Deadlock Chain, Lock:Deadlock, etc.

Columns: Deadlock Graph, TextData, HostName, ApplicationName, etc.

Filters: No specific filters but focus on capturing deadlock-related events.


iii. Errors and Warnings

Objective: Monitors for various error conditions and warnings to identify problematic areas.

Events: ErrorLog, UserErrorMessage, ErrorReported, Warning, etc.

Columns: ErrorDescription, ErrorState, Severity, etc.

Filters: ErrorState > X, Severity > X, specific error numbers if necessary.


iv. Profiler Standard

Objective: A general-purpose template capturing a wide range of events for basic performance analysis.

Events: A wide range of events including SP, RPC, SQL:Batch, and others.

Columns: Basic columns covering Duration, CPU, Reads, Writes, TextData, etc.

Filters: Duration > X, CPU > X, Reads > X, etc.


v. Stored Procedures

Objective: Focuses specifically on stored procedure execution, useful when optimizing procedure performance.

Events: SP:Completed, SP:StmtCompleted, SP:Starting, etc.

Columns: Duration, CPU, Reads, Writes, TextData, and specific SP-related columns.

Filters: Duration > X, CPU > X, Reads > X, specific stored procedure names.


Setting Up the Trace:

1. Open SQL Server Profiler and create a New Trace.

2. Choose a template (e.g., "TSQL_SPs").

3. Select events: SP:StmtStarting and SP:StmtCompleted capture each statement in a stored procedure; SP:Starting and SP:Completed capture the execution of stored procedures as a whole.

4. Set filters:
   - Duration: e.g., 5000 for procedures that run over 5 seconds.
   - CPU: e.g., 1000 for procedures using more than 1 second of CPU time.
   - Reads: e.g., 10000 for procedures performing more than 10,000 reads.
   - Object Name: e.g., 'usp_UserSProcedure'.


Start the trace. It will now monitor and capture data based on our defined filters.


- Query Store is a feature introduced in SQL Server 2016 that automatically captures a history of queries, plans, and runtime statistics, and retains these for our review.

  • Enable Query Store: This is done per database. In SSMS, right-click on our database -> Properties -> Query Store -> set "Operation Mode" to Read Write.
  • Configure Settings: Adjust settings like Data Flush Interval, Statistics Collection Interval, Max Size, etc., according to our needs.
  • Monitor Performance: After enabling, Query Store will start collecting data. You can then use SSMS to review query performance.
  • Analyze Data in SSMS: Go to our database in SSMS; under the "Query Store" folder you'll find reports like Top Resource Consuming Queries, Regressed Queries, etc.
  • Force Plan or Compare Plans: You can manually force a query to use a specific plan if you identify a more efficient one, or compare the performance of different query plans.
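The same steps can be done in T-SQL. A sketch (the database name is a placeholder, and the query_id/plan_id values below are examples — real values come from the Query Store catalog views):

```sql
-- Enable Query Store on a database and set it to read-write mode
ALTER DATABASE <DBName> SET QUERY_STORE = ON;
ALTER DATABASE <DBName> SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- Force a specific plan for a query; look up the IDs in
-- sys.query_store_query and sys.query_store_plan first
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```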


Note:

  • Analyze which stored procedures are resource-intensive and might need optimization. You can look into the TextData column to understand what part of the stored procedure is causing the issue, and plan for optimization strategies like indexing, query rewriting, or schema design changes.
  • Running SQL Server Profiler, especially with detailed events and low thresholds, can impact the performance of the server. It's recommended to use it judiciously, particularly on production systems.


3. Statistics

Statistics help the query optimizer create efficient execution plans. SQL Server automatically creates statistics when we create an index on a table or indexed view. Statistics assist in estimating the number of rows in a table or index, leading to better join algorithms and index selection. Proper statistics ensure the right indexes are used, reducing unnecessary scans or seeks.


Updating statistics helps the query optimizer choose better execution plans.

Manual Update:

-- Update statistics on a table
UPDATE STATISTICS TableName;

-- Update statistics on a specific index
UPDATE STATISTICS TableName IndexName;


Automatic Update:

ALTER DATABASE <DBName> SET AUTO_UPDATE_STATISTICS ON;


DBCC SHOW_STATISTICS is a command in SQL Server used to display the current statistics for an index or a table in the database.

DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyIndex')

Sampling can reduce the overhead of statistics updates on large tables:

UPDATE STATISTICS <LargeTable> WITH SAMPLE 50 PERCENT;


Create filtered statistics for specific subsets of data to improve query plans for particular conditions.

CREATE STATISTICS StatsName ON TableName(ColumnName) WHERE Condition;


Note:

- Remember, while updating statistics can significantly improve query performance, it's essential to balance the frequency of updates to avoid unnecessary overhead.

- Frequent updates can cause performance overhead due to the computation required for statistics generation. In certain cases, automatic updates might not trigger at the right time, leading to outdated statistics and suboptimal plans.
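To decide when an update is actually needed, sys.dm_db_stats_properties reports how stale each statistic is (a sketch for the current database; this function is available in modern SQL Server versions):

```sql
-- Statistics ranked by rows modified since their last update
SELECT OBJECT_NAME(s.object_id) AS TableName,
       s.name                   AS StatName,
       sp.last_updated,
       sp.rows,
       sp.modification_counter   -- rows modified since the last statistics update
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;
```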


4. Hardware and Configuration

CPU: SQL Server can benefit from multiple cores and higher clock speeds. More cores enable parallel query execution and allow the server to handle multiple user requests efficiently.

In SQL Server 2019, parallelism can be controlled and influenced using query hints or server-level settings.

Syntax:

This query includes the MAXDOP (Maximum Degree of Parallelism) hint, which limits the number of processors used when executing the query. In this case, MAXDOP 4 sets the threshold for parallelism to 4 processors for this specific query.

SELECT * FROM <TableName> OPTION (MAXDOP 4);


Adjusting this setting impacts parallelism across the entire server for query processing. It's often recommended to allow SQL Server to manage parallelism dynamically unless specific tuning is required for certain queries or workloads. For server-wide settings, you can configure the maximum degree of parallelism using the following command:

EXEC sp_configure 'show advanced options', 1; -- 'max degree of parallelism' is an advanced option
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4; -- Setting the maximum degree of parallelism to 4
RECONFIGURE;


Memory (RAM): Suitable RAM ensures that SQL Server can cache data and execution plans, reducing the need for disk I/O. It also supports larger buffer pools for improved performance.

Best Practices:

Memory Allocation: Configure SQL Server's 'max server memory' setting to prevent it from consuming all available memory, leaving some for the OS and other services.

sp_configure 'show advanced options', 1;

GO

RECONFIGURE;

GO

sp_configure 'max server memory (MB)', 8192; -- Set to an appropriate value

GO

RECONFIGURE;

GO


Storage: Optimal disk configuration (RAID arrays, SSDs, etc.) can significantly impact performance. SSDs offer faster I/O compared to traditional HDDs, reducing latency.


Here's an example using SQL Server 2019 to create a database with filegroups on different storage types:

Step 1:

-- Create a database

CREATE DATABASE StorageOptimization;


-- Create filegroups for SSD and HDD
-- (plain filegroups are used; MEMORY_OPTIMIZED_DATA filegroups are for
-- In-Memory OLTP, not for placing regular data files on SSD)

ALTER DATABASE StorageOptimization

ADD FILEGROUP SSDFileGroup;


ALTER DATABASE StorageOptimization

ADD FILEGROUP HDDFileGroup;


-- Define files for SSD and HDD filegroups

ALTER DATABASE StorageOptimization

ADD FILE

(
    NAME = 'SSDFile',
    FILENAME = 'D:\SSDFolder\StorageOptimization_SSD.ndf', -- Path to the SSD storage
    SIZE = 500MB,
    MAXSIZE = UNLIMITED,
    FILEGROWTH = 100MB
)
TO FILEGROUP SSDFileGroup;


ALTER DATABASE StorageOptimization

ADD FILE

(
    NAME = 'HDDFile',
    FILENAME = 'E:\HDDFolder\StorageOptimization_HDD.ndf', -- Path to the HDD storage
    SIZE = 500MB,
    MAXSIZE = UNLIMITED,
    FILEGROWTH = 100MB
)
TO FILEGROUP HDDFileGroup;


Step 2:

Creating Tables and Allocating to Filegroups:


-- Use the database

USE StorageOptimization;


-- Create a table on SSD filegroup

CREATE TABLE dbo.FrequentlyAccessedData (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    DataColumn NVARCHAR(100)
) ON SSDFileGroup;


-- Create an index on SSD filegroup

CREATE INDEX IX_FrequentlyAccessedData ON dbo.FrequentlyAccessedData(DataColumn) ON SSDFileGroup;


-- Create another table on HDD filegroup

CREATE TABLE dbo.LessFrequentlyAccessedData (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    DataColumn NVARCHAR(100)
) ON HDDFileGroup;


-- Create an index on HDD filegroup

CREATE INDEX IX_LessFrequentlyAccessedData ON dbo.LessFrequentlyAccessedData(DataColumn) ON HDDFileGroup;


Step 3:

Simulating Data Access: Insert some sample data into both tables to simulate frequent and less frequent access scenarios:

-- Insert data into frequently accessed table

INSERT INTO dbo.FrequentlyAccessedData (DataColumn)

VALUES ('Frequently Accessed Data 1'), ('Frequently Accessed Data 2'), ...;


-- Insert data into less frequently accessed table

INSERT INTO dbo.LessFrequentlyAccessedData (DataColumn)

VALUES ('Less Frequently Accessed Data 1'), ('Less Frequently Accessed Data 2'), ...;


Note:

SSDs use NAND flash memory, which has high memory-cell density and fast read speeds, delivering a higher number of I/O operations per second (IOPS) and improved database performance. (Since SQL Server 2014, Buffer Pool Extension can also use an SSD to extend the buffer pool.)

With no moving parts, SSDs offer very fast data access, significantly improving system speed and reliability. An SSD is made up of millions of memory cells arranged in a grid-like pattern; each cell can store multiple bits of data.

For performance-critical applications, SSD-based storage options are generally preferred due to their higher I/O speed and lower latency.


5. High Availability and Disaster Recovery (HADR)

Redundancy and High Availability: Implementing features like failover clustering or Always On Availability Groups improves fault tolerance and availability.


Failover clustering: In a failover cluster, when one server experiences a failure, another server within the cluster automatically assumes the role and responsibilities of the failed server. This process, known as automatic failover, ensures continuity of services without manual intervention, reducing downtime and maintaining system availability.


Advantages:

Automatic Failover: If one node fails, the resources are automatically moved to another node in the cluster.

High Availability: Ensures minimal downtime, as services are available even if a node goes down.

Simplified Management: Centralized management of resources and services.


-- Note: A SQL Server Failover Cluster Instance itself is configured through Windows
-- Server Failover Clustering and SQL Server setup, not through T-SQL alone. The
-- sketch below creates an availability group that relies on an external cluster
-- manager (CLUSTER_TYPE = EXTERNAL, e.g., Pacemaker on Linux).

USE master;

GO

CREATE LOGIN [CLUSTER_NAME\SQLSvc] FROM WINDOWS; -- Windows logins do not take a password

GO

-- A complete CREATE AVAILABILITY GROUP statement also requires REPLICA ON definitions
CREATE AVAILABILITY GROUP [MyAG]

    WITH (CLUSTER_TYPE = EXTERNAL);

GO


Always On Availability Groups

Advantages:

Multiple Replicas: Allows for multiple readable secondary replicas for load balancing and reporting purposes.

Flexible Failover: Can be set to manual or automatic failover based on specific criteria.

Improved Read Scalability: Offloading read operations to secondary replicas enhances performance.


-- Creating an Always On Availability Group

USE master;

GO

-- Note: a complete statement also requires database and REPLICA ON definitions;
-- only the WITH options are shown here.
CREATE AVAILABILITY GROUP MyAG

    WITH

    (

        AUTOMATED_BACKUP_PREFERENCE = PRIMARY,

        FAILURE_CONDITION_LEVEL = 3,

        HEALTH_CHECK_TIMEOUT = 60000

    );

GO


Failover Clustering Documentation:

Description: Failover clustering provides high availability by grouping multiple servers together to ensure that if one server fails, another server in the cluster can take over its workload.

Steps to Implement:

Create a login for the cluster.

Set up the availability group specifying the cluster type.

Advantages: Automatic failover, enhanced fault tolerance, centralized management.


Always On Availability Groups Documentation:

Description: Always On Availability Groups offer high availability, disaster recovery, and read scalability by replicating databases across multiple SQL Server instances.

Steps to Implement:

Create an availability group with preferences and failure conditions.

Advantages: Multiple readable replicas, flexible failover options, improved performance.


Log Shipping

Log shipping in SQL Server: High-availability and disaster recovery solution.

Automatic backup and copy of transaction log backups.

Primary database -> Secondary database on a different server.

Creates a warm standby server for quick recovery.

Enables fast online transition if the primary server fails.


Log shipping in SQL Server involves three main steps:

i. Backing up the transaction log of the primary database.

ii. Copying the transaction log file to the secondary server.

iii. Restoring the transaction log backup on the secondary database in standby mode.


Here are the main types of jobs typically associated with log shipping:

i. Backup Job (Primary Server)

ii. Copy Job (Primary to Secondary)

iii. Restore Job (Secondary Server)

iv. Alert Job and Cleanup Job (Optional)


Configure Log Shipping on Primary Server

-- Step 1: Enable log shipping on the primary database

USE master;

GO


-- Ensure the database is in full or bulk-logged recovery mode

ALTER DATABASE <MGKDB> SET RECOVERY FULL;

GO


-- Configure primary server

EXEC sp_add_log_shipping_primary_database
    @database = '<MGKDB>',
    @backup_directory = 'C:\LogShipping\Backups',
    @backup_share = '<_backup_share>', -- UNC path to the backup share
    @backup_job_name = 'LSBackup_<MGKDB>';

GO


-- Step 2: Configure secondary server

USE master;

GO

-- Create a database with the same name as the primary database

CREATE DATABASE <MGK_SecDB>;

GO


-- Configure secondary server

EXEC sp_add_log_shipping_secondary_database
    @secondary_database = '<MGK_SecDB>',
    @primary_server = '<PrimaryServer>',
    @primary_database = '<MGKDB>',
    @restore_delay = 0, -- Optional delay in minutes
    @restore_mode = 1; -- 1 = standby mode (0 = restore with NORECOVERY)

GO


Step 3: Set Up Jobs

-- Step 3: Set up backup, copy, and restore jobs on primary and secondary servers

-- Backup job on primary server

-- sp_add_log_shipping_primary_secondary registers the secondary with the primary
EXEC sp_add_log_shipping_primary_secondary
    @primary_database = '<MGKDB>',
    @secondary_server = '<SecondaryServer>',
    @secondary_database = '<MGK_SecDB>',
    @overwrite = 1;

GO


-- Copy and restore jobs on the secondary server
EXEC sp_add_log_shipping_secondary_primary
    @primary_server = '<PrimaryServer>',
    @primary_database = '<MGKDB>',
    @backup_source_directory = '<_backup_share>',
    @backup_destination_directory = 'C:\LogShipping\Copy',
    @copy_job_name = 'LSCopy_<MGK_SecDB>',
    @restore_job_name = 'LSRestore_<MGK_SecDB>',
    @overwrite = 1;

GO


Note:

- Make sure the secondary server is in standby mode, allowing transaction log restores without recovery.

- Monitor the log shipping jobs and regularly test the failover process to ensure the secondary server is ready when needed.

- Test log shipping configurations in a non-production environment before implementing them in production.
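Monitoring can also be scripted against the log shipping tables in msdb (a sketch; these tables are only populated once log shipping is configured):

```sql
-- On the secondary: when were logs last copied and restored?
SELECT secondary_server,
       secondary_database,
       last_copied_file,
       last_copied_date,
       last_restored_file,
       last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;
```

A growing gap between last_copied_date and last_restored_date (or between either and the current time) is an early warning that a job has stalled.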


6. Replication

Replication is not primarily a High Availability and Disaster Recovery (HADR) feature, but it can complement one. Replication can be used to distribute data to different locations, consolidate data from different sources, or provide additional readable copies of a database.


Transactional Replication:

Transactional replication is a method of replication in SQL Server that enables the continuous synchronization of data between databases. It works by replicating individual data modifications (inserts, updates, deletes) as they occur in near real-time from a publisher database to one or more subscriber databases.

Replication in SQL Server provides flexibility in data distribution and synchronization among databases, enabling various scenarios like data warehousing, reporting, and distributed applications.


Key Components:

Publisher: The database where data changes are captured and propagated.

Distributor: Acts as an intermediary, storing replicated data and delivering it to subscribers.

Subscriber: Receives and applies replicated changes from the publisher.


Process:

Publication: Define a publication on the publisher database, specifying the articles (tables, views, stored procedures) to be replicated.

Subscription: Set up subscriptions on subscriber databases, specifying which publication(s) to subscribe to and how changes should be delivered (push or pull).

Snapshot Generation: Initial snapshot of the published data is created and applied to the subscribers to establish a starting point.

Continuous Replication: Transactional changes (INSERTs, UPDATEs, DELETEs) made at the publisher are captured in the transaction log, then replicated and applied to the subscriber databases.


Create a transactional publication

USE master;

GO

EXEC sp_replicationdboption @dbname = N'MGK_DB', @optname = N'publish', @value = N'true';

GO

USE MGK_DB;

GO

EXEC sp_addpublication @publication = N'TranPub', @status = N'active';

GO


Note:

- Near real-time synchronization: offers real-time or near real-time updates to subscribers.

- Selective replication: allows replication of specific tables or stored procedures.

- Scalability: permits replication to multiple subscribers, supporting read scaling.

- Minimal latency: provides low latency in replicating changes to subscribers.


Network: For distributed environments or client-server architectures, a robust network infrastructure ensures smooth data transfer between clients and the SQL Server.


7. Partitioning

Partitioning in SQL Server 2019 offers several advantages, primarily aiding in better manageability, improved performance, and scalability. Here are some key advantages:

Manageability: Partitioning allows breaking down large tables and indexes into smaller, more manageable chunks (partitions). This makes maintenance operations, such as backup and restore, index rebuilds, and data archiving, more efficient and targeted.

Performance: By partitioning data, SQL Server can perform queries more efficiently, as it can access only the relevant partitions rather than scanning the entire table or index. This can lead to faster query execution times, especially when dealing with large datasets.

Scalability: Partitioning can improve performance and scalability by allowing you to spread data across multiple filegroups or storage media. This enables better utilization of hardware resources and can enhance parallelism in query execution.


Partition a table named Sales based on a column OrderDate. We'll create multiple partitions to store data based on ranges of OrderDate.

-- Create a partition function to define partition ranges
-- (four boundary values create five partitions, matching the five filegroups below)

CREATE PARTITION FUNCTION OrderDateRangePF (DATETIME)

AS RANGE LEFT FOR VALUES ('2019-01-01', '2020-01-01', '2021-01-01', '2022-01-01');


-- Create a partition scheme to map partitions to filegroups

CREATE PARTITION SCHEME OrderDateRangePS

AS PARTITION OrderDateRangePF

TO ( [PRIMARY], [FG2019], [FG2020], [FG2021], [FG2022]);


-- Create the Sales table using the partition scheme

CREATE TABLE Sales (
    SalesID INT,
    OrderDate DATETIME,
    -- Other columns...
) ON OrderDateRangePS(OrderDate);


Note:

- CREATE PARTITION FUNCTION defines the ranges for partitioning based on the OrderDate column.

- CREATE PARTITION SCHEME maps these ranges to different filegroups (PRIMARY, [FG2019], [FG2020], etc.).

- CREATE TABLE uses the partition scheme to allocate data across these filegroups based on OrderDate.
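Once the table is populated, the $PARTITION function shows how rows are distributed across the partitions (a sketch against the Sales table defined above):

```sql
-- Row count per partition of the Sales table
SELECT $PARTITION.OrderDateRangePF(OrderDate) AS PartitionNumber,
       COUNT(*)                               AS RowsInPartition
FROM Sales
GROUP BY $PARTITION.OrderDateRangePF(OrderDate)
ORDER BY PartitionNumber;
```

Heavily skewed counts suggest the boundary values should be revisited.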


8. Tempdb Optimization

Optimizing tempdb, the system database in SQL Server used for temporary data storage, is crucial for enhancing overall database performance.

- Check the current tempdb file configuration

USE tempdb;

GO

-- Verify the file configuration for tempdb

SELECT name, type_desc, physical_name, size * 8.0 / 1024 AS FileSizeInMB

FROM sys.database_files;


- Check the current tempdb configuration

SELECT name, physical_name, size, max_size, growth, is_percent_growth

FROM sys.master_files

WHERE database_id = DB_ID('tempdb');

GO


Proper Sizing:

- Add multiple tempdb data files (example with four files):

USE master;

GO


- Example: configure tempdb with four data files total (adjust based on CPU cores, up to eight).
- Add or modify tempdb data files:

ALTER DATABASE tempdb
ADD FILE
(
    NAME = N'tempdev2',
    FILENAME = N'E:\SQLData\tempdb_mdf2.ndf', -- Change path as needed
    SIZE = 512MB,
    FILEGROWTH = 128MB
) TO FILEGROUP [PRIMARY];

ALTER DATABASE tempdb
ADD FILE
(
    NAME = N'tempdev3',
    FILENAME = N'E:\SQLData\tempdb_mdf3.ndf', -- Change path as needed
    SIZE = 512MB,
    FILEGROWTH = 128MB
) TO FILEGROUP [PRIMARY];

ALTER DATABASE tempdb
ADD FILE
(
    NAME = N'tempdev4',
    FILENAME = N'E:\SQLData\tempdb_mdf4.ndf', -- Change path as needed
    SIZE = 512MB,
    FILEGROWTH = 128MB
) TO FILEGROUP [PRIMARY];

-- Configure the tempdb log file
ALTER DATABASE tempdb
MODIFY FILE
(
    NAME = templog,
    SIZE = 1024MB,
    FILEGROWTH = 256MB
);


-- Verify changes

SELECT name, physical_name, size, max_size, growth, is_percent_growth

FROM sys.master_files WHERE database_id = DB_ID('tempdb');

GO


Recovery Model:

tempdb always uses the SIMPLE recovery model, which minimizes log space usage; this cannot be changed, so the statement below is effectively a no-op and is shown only for completeness.

ALTER DATABASE tempdb SET RECOVERY SIMPLE;


Monitor for Contention:

Use DMVs to identify tempdb allocation contention (PAGELATCH waits on PFS, GAM, and SGAM pages).

SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
  AND resource_description LIKE '2:%'; -- database_id 2 = tempdb


Optimizing Queries:

For small row sets, consider table variables instead of temp tables; they are not purely in-memory (both are backed by tempdb), but they cause fewer recompilations and less metadata contention.

DECLARE @tableVariable TABLE ( ... );


Temporary objects are dropped automatically when the session that created them ends, so scripted cleanup of tempdb is rarely needed. In your own code, drop temp tables explicitly as soon as you are finished with them:

-- Drop a temp table when it is no longer needed (SQL Server 2016 and later)
DROP TABLE IF EXISTS #WorkTable;


Trace Flags 1117 and 1118:

These reduce contention when allocating new pages in tempdb. Their behavior is the default in SQL Server 2016 (13.x) and later, so the flags are unnecessary there; on older versions, set them as startup parameters (-T) so they survive restarts.

DBCC TRACEON (1117, -1); -- Grow all files in a filegroup equally

DBCC TRACEON (1118, -1); -- Allocate full extents only

Note:

Analyze the workload and size tempdb appropriately: start with an initial size based on estimated usage and adjust as needed. Use multiple data files to mitigate allocation contention (typically one per CPU core, up to a maximum of eight), and place tempdb on separate, fast storage to minimize contention with user databases. Prefer session-scoped (#local) temp tables over global (##) ones to reduce cross-session contention.

Optimizing tempdb improves overall SQL Server performance by reducing contention, minimizing I/O, and speeding up query execution. Test any changes in a controlled environment before applying them in production, and watch for long-running transactions in tempdb, which can cause bottlenecks.
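As a quick sanity check on the file-count heuristic above, the number of tempdb data files can be compared with the number of logical CPUs (a sketch using standard catalog views):

SELECT
    (SELECT COUNT(*) FROM tempdb.sys.database_files WHERE type_desc = 'ROWS') AS TempdbDataFiles,
    (SELECT cpu_count FROM sys.dm_os_sys_info) AS LogicalCPUs;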


9. Regular Maintenance

Regular maintenance in SQL Server, whatever the version, is crucial for optimal performance and data integrity. The script below creates a SQL Server Agent job that performs cleanup tasks, removing old data and old history, on a schedule.


USE [msdb];
GO

-- Create a SQL Server Agent job for the cleanup and maintenance tasks
EXEC msdb.dbo.sp_add_job
    @job_name = N'CleanupAndMaintenanceJob',
    @description = N'Removes old application data and old msdb history on a schedule';

-- Step 1: remove old data (the table name and retention period are examples; adjust to your schema)
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'CleanupAndMaintenanceJob',
    @step_name = N'Remove old data',
    @subsystem = N'TSQL',
    @database_name = N'MGKDB',
    @command = N'DELETE FROM dbo.AuditLog WHERE CreatedDate < DATEADD(MONTH, -6, GETDATE());',
    @on_success_action = 3; -- go to the next step

-- Step 2: remove old backup and job history from msdb
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'CleanupAndMaintenanceJob',
    @step_name = N'Remove old history',
    @subsystem = N'TSQL',
    @database_name = N'msdb',
    @command = N'DECLARE @cutoff DATETIME = DATEADD(MONTH, -3, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;';

-- Schedule the job to run daily at 02:00
EXEC msdb.dbo.sp_add_schedule
    @schedule_name = N'DailyCleanupSchedule',
    @freq_type = 4,        -- daily
    @freq_interval = 1,
    @active_start_time = 020000;

EXEC msdb.dbo.sp_attach_schedule
    @job_name = N'CleanupAndMaintenanceJob',
    @schedule_name = N'DailyCleanupSchedule';

-- Run the job on the local server
EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'CleanupAndMaintenanceJob';

This script creates an Agent job named 'CleanupAndMaintenanceJob' with two T-SQL steps: one deletes old rows from an application table (the dbo.AuditLog table and the six-month retention period are placeholders), and one purges old backup and job history from msdb using the documented sp_delete_backuphistory and sp_purge_jobhistory procedures.

Adjust the schedule, retention periods, and step commands based on the specific cleanup needs and server configuration.


10. Different Types of Database Backups

i. Full Backup:

A full backup captures the entire database:

BACKUP DATABASE DBMGK TO DISK = 'C:\DBMGKFolder\FullBackup.bak' WITH INIT, STATS=10;


WITH INIT: Initializes the backup media and overwrites any existing backup sets.

STATS=10: Specifies to display a progress report every 10 percent completion.
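After taking a backup it is good practice to verify it. RESTORE VERIFYONLY (shown here against the example backup file) checks that the backup set is complete and readable, though it does not guarantee a successful restore:

RESTORE VERIFYONLY FROM DISK = 'C:\DBMGKFolder\FullBackup.bak';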


ii. Differential Backup:

Captures changes since the last full backup:

BACKUP DATABASE DBMGK TO DISK = 'C:\DBMGKFolder\DifferentialBackup.bak' WITH DIFFERENTIAL, INIT, STATS=5;

WITH DIFFERENTIAL: Indicates a differential backup.

INIT and STATS: As in the full backup, initialize the backup media and report progress.

iii. Transaction Log Backup:

Backs up the transaction log; frequent log backups reduce the risk of data loss.

BACKUP LOG DBMGK TO DISK = 'C:\DBMGKFolder\LogBackup.trn' WITH INIT, STATS=1;

WITH INIT: Initializes the backup media.

STATS=1: Displays a progress report at every 1 percent of completion.

iv. File/Filegroup Backup:

Allows backup of specific files or filegroups:

BACKUP DATABASE DBMGK FILE = 'LogicalFileName' TO DISK = 'C:\DBMGKFolder\FileBackup.bak' WITH INIT, STATS=5;

FILE = 'LogicalFileName': Replace 'LogicalFileName' with the logical name of the file to back up (use FILEGROUP = '...' to back up a whole filegroup).

v. Copy-Only Backup:

Creates a backup without disrupting the regular backup sequence:

BACKUP DATABASE DBMGK TO DISK = 'C:\DBMGKFolder\CopyOnlyBackup.bak' WITH COPY_ONLY, INIT, STATS=10;

WITH COPY_ONLY: Marks this as a copy-only backup that does not affect the usual backup chain.

vi. Partial Backup:

Backs up only the primary filegroup and read/write filegroups:

BACKUP DATABASE DBMGK READ_WRITE_FILEGROUPS TO DISK = 'C:\DBMGKFolder\PartialBackup.bak' WITH INIT, STATS=1;

READ_WRITE_FILEGROUPS: Backs up the primary filegroup and all read/write filegroups, skipping read-only filegroups.

vii. Tail-Log Backup:

A tail-log backup captures log records that have not yet been backed up; it is typically taken just before restoring a database, or after damage using NO_TRUNCATE or CONTINUE_AFTER_ERROR.

BACKUP LOG DBMGK TO DISK = 'C:\DBMGKFolder\TailLogBackup.trn' WITH NORECOVERY;

WITH NORECOVERY: Leaves the database in the RESTORING state after the backup, so a restore sequence can follow immediately.
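A typical restore sequence combines the backup types above (a sketch assuming the example file names; every step except the last uses NORECOVERY so further backups can still be applied):

RESTORE DATABASE DBMGK FROM DISK = 'C:\DBMGKFolder\FullBackup.bak' WITH NORECOVERY;
RESTORE DATABASE DBMGK FROM DISK = 'C:\DBMGKFolder\DifferentialBackup.bak' WITH NORECOVERY;
RESTORE LOG DBMGK FROM DISK = 'C:\DBMGKFolder\LogBackup.trn' WITH RECOVERY;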

viii. Copy-Only Transaction Log Backup:

A copy-only transaction log backup does not disrupt the regular log backup sequence.

BACKUP LOG DBMGK TO DISK = 'C:\DBMGKFolder\CopyOnlyLogBackup.trn' WITH COPY_ONLY, NO_TRUNCATE;

WITH COPY_ONLY: Marks this as a copy-only backup.

NO_TRUNCATE: Backs up the log without truncating it, even if the database is damaged.

ix. Differential Database Backup with Compression:

Performs a differential backup with compression for reduced storage usage.

BACKUP DATABASE DBMGK TO DISK = 'C:\DBMGKFolder\DifferentialBackup_Compressed.bak'
WITH DIFFERENTIAL, COMPRESSION, STATS=5;

COMPRESSION: Compresses the backup, saving disk space at the cost of some CPU.

x. Backup Encryption:

Encrypts the backup for enhanced security.

BACKUP DATABASE DatabaseName TO DISK = 'C:\BackupFolder\EncryptedBackup.bak'
WITH INIT, STATS=10, ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCertificate);

ENCRYPTION: Specifies the encryption algorithm and the server certificate used; the certificate must already exist in the master database.
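The server certificate referenced above is a prerequisite. A minimal sketch of creating it (the password and subject are placeholders):

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere!1';
CREATE CERTIFICATE BackupCertificate WITH SUBJECT = 'Backup encryption certificate';

Back up the certificate and its private key as well; without them an encrypted backup cannot be restored on another server.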

Adjust the paths, filenames, encryption parameters, and any other settings according to the database environment and security policies. These scripts offer additional functionalities like encryption, tail-log backups, and more specialized backup types to cater to various backup strategies and requirements.


There are three recovery models: Simple, Full, and Bulk-Logged.


i. Simple Recovery Model:

- The most straightforward recovery model.
- The transaction log is truncated automatically, freeing up space.
- Recovery is only possible to the point of the last full or differential backup.
- Suitable for non-critical databases where some data loss is acceptable.

ii. Full Recovery Model:

- Provides a complete log chain, allowing point-in-time recovery.
- Transaction log records are retained until the log is backed up.
- Allows restoring to a specific point in time using transaction log backups.
- Ideal for critical databases where minimal data loss and point-in-time recovery are essential.

iii. Bulk-Logged Recovery Model:

- Similar to the Full Recovery Model but optimized for bulk operations.
- Minimally logs bulk operations (e.g., BULK INSERT), reducing log space usage.
- Allows point-in-time recovery, except within log intervals that contain minimally logged bulk operations.
- Suitable for databases with periodic bulk loads where point-in-time recovery during those loads is less critical.

Change to Simple Recovery Model

ALTER DATABASE MGKDB SET RECOVERY SIMPLE;

Change to Full Recovery Model

ALTER DATABASE MGKDB SET RECOVERY FULL;

Change to Bulk-Logged Recovery Model

ALTER DATABASE MGKDB SET RECOVERY BULK_LOGGED;
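To confirm the current recovery model of every database, a simple catalog query suffices:

SELECT name, recovery_model_desc FROM sys.databases;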


11. Use of Latest Features


i. In-Memory OLTP:

This feature allows you to move certain tables and stored procedures into memory for faster access and reduced contention.

Syntax:

CREATE TABLE MGK_InMemory (
    SaleID INT NOT NULL PRIMARY KEY NONCLUSTERED,
    ProductID INT NOT NULL,
    SaleDate DATE,
    INDEX IX_ProductID NONCLUSTERED (ProductID)
) WITH (MEMORY_OPTIMIZED = ON);

- WITH (MEMORY_OPTIMIZED = ON): the table resides entirely in memory, allowing faster data access and reduced contention.
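Note that a memory-optimized table can only be created after the database has a memory-optimized filegroup. A minimal sketch of that prerequisite (the filegroup name and path are illustrative):

ALTER DATABASE MGKDB ADD FILEGROUP MGK_MemFG CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MGKDB ADD FILE (NAME = N'MGK_MemData', FILENAME = N'E:\SQLData\MGK_MemData') TO FILEGROUP MGK_MemFG;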


Natively compiled stored procedures complement memory-optimized tables; the feature set is powerful for high-performance scenarios but requires careful planning.

Syntax:

CREATE PROCEDURE usp_MGK
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
    WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- T-SQL statements here
END

- BEGIN ATOMIC starts the atomic block that is required in natively compiled stored procedures.
- SCHEMABINDING ensures that objects referenced by the procedure cannot be changed in a way that would break it.
- EXECUTE AS OWNER runs the procedure with the privileges of its owner.
- NATIVE_COMPILATION compiles the procedure into machine code, making it faster when working with memory-optimized tables.


ii. Temporal Tables:

Introduced in SQL Server 2016, temporal tables provide a built-in mechanism for maintaining historical data without manual coding or separate versioning tables:

-- Step 1: Create a temporal table

CREATE TABLE MGKTab
(
    EmployeeID INT PRIMARY KEY,
    Name NVARCHAR(100),
    Position NVARCHAR(100),
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START, -- System-managed row start time
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END,     -- System-managed row end time
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)        -- Define the period for system time
)
WITH (SYSTEM_VERSIONING = ON); -- Enable system versioning


-- Step 2: Insert initial data

INSERT INTO MGKTab (EmployeeID, Name, Position) VALUES

(1, 'GK', 'Developer'),

(2, 'MGK', 'Analyst');


-- Step 3: Update a record to see versioning in action

UPDATE MGKTab SET Position = 'Senior Developer' WHERE EmployeeID = 1;


-- Step 4: Query current data

SELECT * FROM MGKTab;


-- Step 5: Query historical data

SELECT * FROM MGKTab FOR SYSTEM_TIME ALL;

- FOR SYSTEM_TIME ALL retrieves every version of each row, current and historical.


iii. Automatic Tuning:

The "Automatic Tuning" feature in SQL Server automatically adjusts query performance by utilizing machine learning algorithms. It can create and drop indexes, fix query plans, and optimize database performance without manual intervention. It's like having a self-tuning database that adapts to workload changes.


-- Enable Automatic Tuning for a specific database

ALTER DATABASE MGKDB

SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);


- FORCE_LAST_GOOD_PLAN = ON: tells SQL Server to force the last known good query plan when a plan-change performance regression is detected.
- SQL Server then automatically reverts to the previously known good execution plan.
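Recommendations produced by automatic tuning can be inspected, and acted on manually, through a DMV:

-- View current automatic tuning recommendations
SELECT name, reason, score, state FROM sys.dm_db_tuning_recommendations;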


iv. Adaptive Query Processing (AQP):

AQP: This feature allows SQL Server to adapt query plans based on runtime statistics, potentially improving performance in various scenarios.

- Adaptive query execution: the execution plan can change based on what is observed at runtime.
- Runtime statistics collection: continuous monitoring supplies the statistics used to adapt query plans.
- Feedback-driven optimization: the engine learns from previous executions (for example, memory grant feedback) and adjusts accordingly.
- Dynamic re-optimization: some operators, such as adaptive joins, let the strategy change during execution.
- Adaptation to workload changes: plans and resource use adjust as data volumes and workloads shift.
- Machine learning techniques: some AQP implementations use learned models to predict good execution plans.
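Several of these behaviors can be toggled per database with database-scoped configuration options (the option names below are the documented ones; availability depends on version, roughly SQL Server 2017 for interleaved execution and SQL Server 2019 for the batch mode options):

ALTER DATABASE SCOPED CONFIGURATION SET INTERLEAVED_EXECUTION_TVF = ON;
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_MEMORY_GRANT_FEEDBACK = ON;
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ADAPTIVE_JOINS = ON;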


v. Accelerated Database Recovery (ADR):

ADR (introduced in SQL Server 2019) reduces the time needed to recover a database after a crash or interruption by changing how the transaction log is managed (a persisted version store plus aggressive log truncation). With ADR, recovery time is largely independent of the size of the longest-running transaction.

Once the server is back up and running (after server crash), we would check the status of the database and transactions:

-- Check the database status

SELECT database_id, name, state_desc FROM sys.databases WHERE name = 'MGKDB';


-- Check for any active transactions

SELECT transaction_id, name, transaction_begin_time FROM sys.dm_tran_active_transactions;


Notes:

With ADR, SQL Server leverages a new recovery mechanism that significantly reduces recovery time. ADR is enabled by default in Azure SQL Database; on SQL Server 2019 and later it must be enabled per database.

It's a powerful feature designed to streamline the recovery process and minimize downtime in the event of crashes or interruptions.
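Enabling and verifying ADR on a database (using the MGKDB example database from above; enabling is best done during a quiet period):

ALTER DATABASE MGKDB SET ACCELERATED_DATABASE_RECOVERY = ON;

-- Verify the setting
SELECT name, is_accelerated_database_recovery_on FROM sys.databases WHERE name = 'MGKDB';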


vi. Intelligent Query Processing (IQP):

Features such as batch mode on rowstore and table variable deferred compilation aim to optimize query performance without application changes; deferred compilation in particular improves queries that use table variables by compiling them once actual row counts are known.


-- Enable batch mode on rowstore
-- This is available under database compatibility level 150 (SQL Server 2019 and later)
-- and is controlled per database rather than through sp_configure:
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;
GO


-- Sample query to demonstrate batch mode processing
SELECT Product, AVG(Quantity) AS AverageQuantity FROM SalesData GROUP BY Product
OPTION (USE HINT('ALLOW_BATCH_MODE'));
GO


-- Demonstration of Table Variable Deferred Compilation

-- This feature optimizes the performance by delaying the compilation of a query

-- that uses table variables until the actual cardinality is known.


-- Declare a table variable

DECLARE @SaleData TABLE (Product NVARCHAR(100), Quantity INT);


-- Insert data into the table variable

INSERT INTO @SaleData (Product, Quantity) VALUES ('Product A', 10), ('Product B', 15), ('Product C', 20);


-- Query using the table variable

SELECT Product, SUM(Quantity) as TotalQuantity FROM @SaleData GROUP BY Product;

GO


Notes:

Enabling Batch Mode on Rowstore: This ensures that batch mode execution, which is efficient for large-scale analytical queries, is available for rowstore tables.

Batch Mode Processing Sample Query: Executes a query that demonstrates batch mode processing by using the ENABLE_BATCH_MODE query hint.

Table Variable Deferred Compilation: A table variable is declared and populated with data. The subsequent query benefits from deferred compilation, where the execution plan is optimized based on the actual table variable's cardinality.


vii. Monitoring and Tuning

Monitoring and tuning offer significant benefits for database health and performance.

i. Optimize Performance: identify and fix slow queries for optimal database speed.

-- Find the top 10 queries by total elapsed time
SELECT TOP 10 * FROM sys.dm_exec_query_stats ORDER BY total_elapsed_time DESC;

ii. Manage Resources: monitor CPU, memory, and I/O to ensure efficient resource utilization.

-- Check current performance counters
SELECT * FROM sys.dm_os_performance_counters;

iii. Improve Query Efficiency: detect and optimize inefficient queries and indexes.

-- Identify indexes that are never used for seeks
SELECT * FROM sys.dm_db_index_usage_stats WHERE user_seeks = 0;

iv. Prevent Issues: catch problems before they impact the system.

-- Monitor transactions running longer than 30 minutes
SELECT * FROM sys.dm_tran_active_transactions WHERE transaction_begin_time < DATEADD(MINUTE, -30, GETDATE());

v. Scale Effectively: adjust capacity to growing data and user load.

-- Analyze table sizes for capacity planning
SELECT OBJECT_NAME(object_id) AS TableName, SUM(used_page_count) * 8 / 1024 AS UsedMB
FROM sys.dm_db_partition_stats
GROUP BY object_id
ORDER BY UsedMB DESC;

vi. Secure Data: spot unusual access patterns or security risks.

-- Review active user sessions
SELECT * FROM sys.dm_exec_sessions WHERE is_user_process = 1;

vii. Forecast Needs: analyze growth trends to plan future capacity.

-- Examine database file sizes (size is in 8 KB pages)
SELECT name, size * 8 / 1024 AS SizeMB FROM sys.master_files;

viii. Minimize Downtime: use proactive maintenance to keep the system running smoothly.

-- Check database integrity
DBCC CHECKDB;

ix. Reduce Costs: remove overhead that wastes hardware and operating resources.

-- Identify indexes that are written to but never read (removal candidates)
SELECT * FROM sys.dm_db_index_usage_stats
WHERE user_seeks + user_scans + user_lookups = 0 AND user_updates > 0;

x. Ensure Compliance: monitor query execution times against SLAs and regulations.

-- Report on query execution statistics for SLA compliance
SELECT * FROM sys.dm_exec_query_stats;

Notes:

Each of these points highlights a critical aspect of SQL Server monitoring and tuning, demonstrating how SQL queries can be used to support these efforts.


12. Resource Governance

Resource governance involves controlling and allocating resources such as CPU, memory, and disk I/O to ensure fair and efficient use of resources among different workloads.

-- Enable Resource Governor if not already enabled
-- (Resource Governor is enabled by issuing RECONFIGURE; there is no ENABLE keyword)
USE master;
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;


-- Create a resource pool for low-priority queries

-- This pool will have limited resources compared to the default pool

CREATE RESOURCE POOL LowPriorityPool

WITH (MIN_CPU_PERCENT = 5, MAX_CPU_PERCENT = 20, MIN_MEMORY_PERCENT = 5, MAX_MEMORY_PERCENT = 20);


-- Create a workload group for low-priority queries

CREATE WORKLOAD GROUP LowPriorityGroup

USING LowPriorityPool;


-- Create a classifier function to route connections to the appropriate workload group
-- In this example, connections from the 'LowPriorityApp' login go to LowPriorityGroup
GO
CREATE FUNCTION dbo.ClassifyWorkload()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @GroupName AS sysname;
    IF SUSER_NAME() = 'LowPriorityApp'
        SET @GroupName = 'LowPriorityGroup';
    ELSE
        SET @GroupName = 'default';
    RETURN @GroupName;
END;
GO

-- Register the classifier function (it must exist before it is registered)
-- and apply the new configuration
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.ClassifyWorkload);
ALTER RESOURCE GOVERNOR RECONFIGURE;


-- Monitor and adjust the resource settings based on performance

-- Use the following query to monitor the resource usage by each pool and group

SELECT * FROM sys.dm_resource_governor_resource_pools;

SELECT * FROM sys.dm_resource_governor_workload_groups;


-- Adjust the resource settings based on your monitoring results

-- You can alter the resource pool or workload group settings as needed

ALTER RESOURCE POOL LowPriorityPool

WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 30);


-- If necessary, disable Resource Governor

-- This can be useful during maintenance or troubleshooting

-- ALTER RESOURCE GOVERNOR DISABLE;


Notes:

Using Resource Governor provides a robust framework for managing resources, optimizing performance, and maintaining a stable and responsive SQL Server environment.
