Stress test for your "Big Data" database in the cloud (Azure Cosmos DB)
Stephan Bail
Empowering enterprises with rock-solid Azure solutions - built to scale and built to last.
Will your Azure Cosmos DB withstand a peak of many simultaneous requests?
With Azure Cosmos DB, large amounts of data can be stored easily and efficiently in the cloud.
But it is difficult to find the best "Request Units" (RU) setting for a Cosmos DB.
Are 400 RU/s sufficient, or does the Cosmos DB have to be set to 5,000 RU/s or more?
Perform a stress test with large amounts of data
To get an idea of how much load your Azure Cosmos DB can handle at the same time, I created a stress test application for my customer and published it as open source. Using a CSV file containing 1.5 million records, the application performs save operations in a preset number of 20 threads (concurrent operations).
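The fan-out over 20 threads described above can be sketched as follows. This is a minimal Python stand-in for the project's .NET Core code; `save_record` is a hypothetical placeholder for the actual Cosmos DB write operation:

```python
from concurrent.futures import ThreadPoolExecutor

def save_record(record):
    # Hypothetical placeholder for the real Cosmos DB save operation
    # (the actual project performs this via the .NET Core SDK).
    return record["id"]

def stress_test(records, workers=20):
    # Fan the save operations out over a fixed number of threads,
    # mirroring the project's preset of 20 concurrent operations.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(save_record, records))

# A small in-memory sample standing in for the 1.5 million CSV records.
records = [{"id": str(i)} for i in range(100)]
results = stress_test(records)
```

Each thread repeatedly takes a record and issues a save; the number of workers is the knob that determines how much concurrent pressure the database sees.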
Would you also like to perform a stress test on your Azure Cosmos DB?
It is a good idea to download and install the Azure Cosmos DB emulator from Microsoft on your local development machine. Then clone the Git repository:
The project directory contains a ZIP file with a CSV file of 1.5 million data records. Extract the CSV file into the directory where your .NET Core console application runs. You can also use your own CSV file at any time.
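Loading the extracted CSV into a list of records might look like this (a Python sketch; the column names in the sample are illustrative, since the layout of the shipped file isn't documented here):

```python
import csv
import io

def load_records(csv_file):
    # Parse the CSV into a list of dicts, one per data record,
    # using the file's header row as the field names.
    return list(csv.DictReader(csv_file))

# Small inline sample standing in for the 1.5 million-record file;
# in the real run you would open the extracted CSV file instead.
sample = io.StringIO("id,name\n1,Alice\n2,Bob\n")
records = load_records(sample)
```

With your own CSV file, only the header row (and thus the field names) changes; the loading code stays the same.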
Now connect to your Azure Cosmos DB in the source code:
If you are using the local Cosmos DB emulator, you can find the settings (like the Primary Key) here:
https://localhost:8081/_explorer/index.html
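For the local emulator, the endpoint and primary key are fixed, publicly documented values, so the connection settings amount to a small config fragment like this (shown as Python constants; the real project sets the equivalent values in its .NET Core source code):

```python
# Default connection settings of the local Azure Cosmos DB emulator.
# The primary key below is the emulator's fixed, publicly documented
# well-known key, not a secret.
ENDPOINT = "https://localhost:8081"
PRIMARY_KEY = (
    "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
)
```

When you point the stress test at a real Azure Cosmos DB account instead, replace these with the endpoint URI and primary key from the "Keys" blade in the Azure portal.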
Once everything is configured, run the application.
Feel free to leave your thoughts on this project.