Take advantage of performance optimizations in the .NET ecosystem.
ELCA Vietnam
ELCA is one of the biggest independent Swiss IT companies with 2'200 experts. We make it work.
As a developer, you have probably faced the challenge of building high-performance applications or optimizing existing ones to reduce bottlenecks. This article highlights a few best practices for delivering quality applications to our customers.
How to measure performance issues
The .NET community has a great choice of performance tools, and the profiling and diagnostic tools built into Visual Studio are a good place to start investigating performance issues.
Let’s explore the different types of .NET profilers.
Standard .NET Profilers
Traditional .NET profilers track process memory usage, time spent per line of code and frequency of method calls. The most popular commercial tools are:
· Redgate ANTS Performance Profiler
· JetBrains dotTrace (including command-line tools, helpful for customer environments)
ORM Profilers
ORMs can offer huge productivity gains, but they can also generate ridiculously bad queries without you even realizing it. Some tools therefore allow you to inspect all queries generated by ORMs. The most popular tools are:
· Hibernating Rhinos NHibernate Profiler and Entity Framework Profiler
· LLBLGen ORM Profiler
Application performance management (APM)
APM platforms aggregate performance details across all transactions, apps, servers and database accesses, so you can easily understand application performance across a large distributed system. They usually collect enough detail to quickly identify and resolve the most common application problems. The following platforms are worldwide leaders and can be integrated with the OpenTelemetry framework (see the sketch after the list):
· Azure Application Insights
· Dynatrace .NET profiler
· New Relic for .NET
· Elastic APM
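As a hedged sketch of how such an integration typically starts (the service name is a placeholder, and the package set follows the OpenTelemetry .NET documentation; each APM vendor also documents its own exporter or agent), an ASP.NET Core app can emit traces like this:
// Program.cs sketch: requires the OpenTelemetry.Extensions.Hosting,
// OpenTelemetry.Instrumentation.AspNetCore, OpenTelemetry.Instrumentation.Http
// and OpenTelemetry.Exporter.OpenTelemetryProtocol NuGet packages
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .ConfigureResource(r => r.AddService("my-service"))   // name shown in the APM backend
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()   // trace incoming HTTP requests
        .AddHttpClientInstrumentation()   // trace outgoing HTTP calls
        .AddOtlpExporter());              // export spans over OTLP to the chosen APM

var app = builder.Build();
app.Run();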
Other useful tools can help you find the root causes during your investigation. BenchmarkDotNet is useful for designing custom benchmarks in code and measuring execution times; it helps you choose the best optimization by comparing the execution times of different implementations, as the sketch below shows. Wireshark is the most popular network protocol analyzer and can capture and display real-time details of network traffic; it is particularly useful for troubleshooting network issues and ensuring network security. In many cases, though, browser profilers such as Chrome DevTools are sufficient.
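For example, a minimal BenchmarkDotNet sketch comparing two implementations (the class, methods and data here are purely illustrative):
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class StringJoinBenchmarks
{
    private readonly string[] parts =
        Enumerable.Range(0, 1000).Select(i => i.ToString()).ToArray();

    [Benchmark(Baseline = true)]
    public string Concat() => parts.Aggregate(string.Empty, (acc, s) => acc + s);

    [Benchmark]
    public string Join() => string.Join(string.Empty, parts);
}

public static class Program
{
    // Prints a report with mean execution times and the ratio to the baseline
    public static void Main() => BenchmarkRunner.Run<StringJoinBenchmarks>();
}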
How to use Entity Framework effectively
ORMs such as EF Core considerably simplify application development and improve maintainability, but they can sometimes be opaque, hiding performance-critical internal details such as the executed SQL. This section attempts to provide an overview of how to achieve good performance with EF Core.
Retrieve only the data you need
When dealing with massive volumes of data, you should strive to retrieve only the required records for the specific query. When fetching data, you should use projections to pick just the required fields and avoid retrieving unnecessary fields. Pagination is another technique for reducing the amount of data extracted from the database.
// Fetch a single page of projects, projecting only the two fields we need
var p = dbCtx.Projects
    .Select(x => new { x.Name, x.ProjectImputs })
    .Skip(0)   // page offset (Skip must come before Take for paging)
    .Take(10)  // page size
    .ToList();
Load related entities eagerly when possible
With lazy loading, related entities are loaded into memory only when they are accessed. The benefit is that data isn’t loaded unless it is needed. However, lazy loading can be costly in terms of performance because multiple database queries may be required to load the data. To solve this problem for specific scenarios, you can use eager loading in EF Core. Eager loading fetches your entities and related entities in a single query, reducing the number of round trips to the database.
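A minimal sketch, reusing the Project entity and its ProjectImputs collection from the earlier snippet:
// Include() (from Microsoft.EntityFrameworkCore) pulls the related rows in the
// same query instead of issuing one extra query per project on first access
var projects = dbCtx.Projects
    .Include(p => p.ProjectImputs)
    .ToList();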
Use IQueryable instead of IEnumerable
When querying data using Entity Framework, it’s important to use IQueryable instead of IEnumerable. IQueryable allows Entity Framework to optimize the query by generating SQL statements that are executed on the database server, while IEnumerable retrieves all data from the database and performs filtering and sorting operations in memory.
// q1 is an IQueryable<Project>: the filter and sort below are translated
// into SQL and executed by the database server
var q1 = db.Projects;
var e1 = q1.Where(x => x.StatusDate > DateTime.Now).OrderBy(x => x.Name);

// q2 is a List<Project>: ToList() has already pulled every row into memory,
// so the filter and sort below run on the client over the full table
var q2 = db.Projects.ToList();
var e2 = q2.Where(x => x.StatusDate > DateTime.Now).OrderBy(x => x.Name);
Disable change tracking for read-only queries
The default behavior of EF Core is to track objects retrieved from the database. Tracking is required when you want to update an entity with new data, but it is a costly operation when you’re dealing with large data sets. Hence, you can improve performance by disabling tracking when you won’t be modifying the entities.
// AsNoTracking() skips change-tracker bookkeeping for this read-only query
var p = dbCtx.Projects.AsNoTracking().FirstOrDefault(x => x.Id == 10);
Use batch updates for large numbers of entities
The default behavior of EF Core is to send individual update statements to the database when a batch of updates has to be executed. Starting with EF Core 7.0, you can use the ExecuteUpdate and ExecuteDelete methods to perform batch updates and eliminate multiple database hits.
// EF Core 7.0+: a single UPDATE statement instead of loading and saving each
// entity; note that without a Where clause this updates every project
var rows = dbCtx.Projects.ExecuteUpdate(s => s.SetProperty(x => x.Status, ProjectStatus.Closed));
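ExecuteDelete works the same way for deletions; a minimal sketch reusing the ProjectStatus enum from the snippet above:
// A single DELETE statement removes all matching rows in one round trip
var deleted = dbCtx.Projects
    .Where(p => p.Status == ProjectStatus.Closed)
    .ExecuteDelete();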
How to master the lifecycle of Blazor components
Blazor apps are built from Razor components, which render into an in-memory representation of the browser's DOM called a render tree, used to update the UI in a flexible and efficient way. Understanding the component lifecycle is crucial for effectively managing the state and behavior of a component, as well as for optimizing performance.
A Blazor component can go through five stages, exposed as lifecycle methods: SetParametersAsync, OnInitialized/OnInitializedAsync, OnParametersSet/OnParametersSetAsync, OnAfterRender/OnAfterRenderAsync and Dispose/DisposeAsync.
The StateHasChanged method notifies the component that its state has changed and queues a re-render. Be careful not to call it unnecessarily; that is a common mistake that imposes needless rendering costs.
The ShouldRender method determines whether a component should be re-rendered, and you can use it to suppress UI refreshing. When it returns false, the updated state of the object stays in memory, but the UI is not refreshed and does not show the new value. The sketch below illustrates both methods.
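A minimal sketch in a code-behind class (the component, field and method names are purely illustrative):
using Microsoft.AspNetCore.Components;

// Hypothetical counter component: the UI is refreshed only on every tenth
// increment; in between, 'count' changes in memory but the rendered value
// stays stale.
public class CounterComponent : ComponentBase
{
    private int count;
    private bool shouldRender = true;

    // Called by Blazor before each potential re-render (e.g. after a UI event
    // handler implicitly triggers StateHasChanged); returning false skips it
    protected override bool ShouldRender() => shouldRender;

    // Wired to a button click in the accompanying .razor markup (not shown)
    private void Increment()
    {
        count++;
        shouldRender = count % 10 == 0;
    }
}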
How to download a large number of files
Many developers have faced the challenge of implementing an ASP.NET Core endpoint that returns a large amount of data. Let's take the common use case of a single export containing several files.
Don't fall into the classic trap of loading each file on the server with File.ReadAllBytes(), building a huge ZIP file in memory and returning it to the client. You will only end up fighting problems such as out-of-memory exceptions and concurrent file access on the server.
The magic keyword is streaming, which minimizes memory consumption. The above use case can be solved by constructing a ZIP file on the fly and implementing a callback that writes each part of the stream.
public class ZipResult : ActionResult
{
    // ... (remaining members elided; ZipOutputStream and ZipEntry come from
    // the ICSharpCode.SharpZipLib.Zip namespace)
    private readonly IEnumerable<(string Name, Action<Stream> WriteTo)> data;

    public ZipResult(/* ... */ IEnumerable<(string Name, Action<Stream> WriteTo)> callbacks)
    {
        // ...
        this.data = callbacks;
    }

    public override Task ExecuteResultAsync(ActionContext context)
    {
        // ... (e.g. setting the Content-Type and Content-Disposition headers)
        SaveToStream(context.HttpContext.Response.Body);
        return Task.CompletedTask;
    }

    public void SaveToStream(Stream stream)
    {
        // Each entry is written directly to the response stream, so the
        // archive is never buffered in server memory
        using var zip = new ZipOutputStream(stream);
        foreach (var (name, writeTo) in data)
        {
            zip.PutNextEntry(new ZipEntry(ZipEntry.CleanName(name)));
            writeTo(zip);
        }
    }
}
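A hypothetical controller action wiring it up (WriteProjectsCsv and WriteSummaryCsv are assumed helpers that each write one file's content to the given stream):
[HttpGet("export")]
public ActionResult Export()
{
    return new ZipResult(new (string Name, Action<Stream> WriteTo)[]
    {
        ("projects.csv", stream => WriteProjectsCsv(stream)),
        ("summary.csv", stream => WriteSummaryCsv(stream)),
    });
}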
Find more information on Stephen Cleary's blog.
Another best practice is to explicitly limit the amount of exported data for security reasons (e.g. a maximum number of rows or total megabytes).
Are you interested in this topic?
Don’t hesitate to read the excellent Medium article Lessons learned from optimizing performance in multi-layered .NET projects by Riina Pakarinen, Senior Architect at ELCA.
Gérald Reusser for the .NETCC
Senior Architect
.NET Engineering Del BL