Performance Test Results for BigchainDB: 1 million transactions in 26 minutes.

And we’re off to the races!

Written by Troy McConaghy. The original post is on the BigchainDB blog.

Over the past several months, we’ve been running tests to get a sense of BigchainDB’s performance. It has been a long slog, because:

  • When we started, our testing tools were fairly rudimentary, and we kept improving them over time.
  • There are many configuration parameters, including Tendermint configuration parameters, and it turns out that some of them affect performance a lot.
  • We learned that the (virtual) machine, and even the operating system, can strongly affect the test results, so we had to standardize those things.
  • Tendermint has an unresolved issue where it gets slower after being under sustained high load. We hoped they’d resolve that, but so far they haven’t.
  • Tendermint and BigchainDB kept changing. Sometimes the changes were due to issues uncovered in the course of performance testing, so there was a feedback loop.

About “Transactions per Second”

The meter, second and byte are standard-size things, so if we learn that Joey is 1.8 meters tall and Sara is 1.7 meters tall, then we will agree that Joey is taller than Sara.


Unfortunately, a blockchain “transaction” is not a standard-size thing. If we learn that Blockchain X can record 100 transactions per second and Blockchain Y can record 200 transactions per second, we can’t compare those numbers. The size of the transactions matters. One reason is that bigger transactions (with more bytes) take longer to write to disk. Therefore, when someone tells you that their blockchain can do N transactions per second, ask them the size of the transactions.

In the tests described below, all transactions were 765 bytes.
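Transaction rate and transaction size together determine the raw write bandwidth a blockchain must sustain. A minimal sketch of that conversion (the 765-byte size is from the tests here; the 1000 tx/s rate is just an illustration):

```python
def write_bandwidth(tx_per_sec: float, tx_size_bytes: int) -> float:
    """Return the raw payload write bandwidth in megabytes per second."""
    return tx_per_sec * tx_size_bytes / 1e6

# At 765 bytes per transaction, 1000 tx/s is under 1 MB/s of raw payload:
print(f"{write_bandwidth(1000, 765):.3f} MB/s")  # 0.765 MB/s
```

This is why two "N transactions per second" figures can't be compared without knowing N's byte size: the same tx/s number can imply very different disk and network loads.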

Some Performance Test Results

We have three performance test results to share today. BEP-23 describes the methodology and results in detail. Some test setup highlights:

  • The tested BigchainDB network had four nodes running Ubuntu 18.04.1, BigchainDB Server 6a9064 (between 2.0.0b5 and 2.0.0b6), Tendermint 0.22.8 and MongoDB 3.6.3.
  • All four nodes were in the same Azure data center, each one on a different compute-optimized Fsv2-series virtual machine.
  • The virtual machine generating the test load was in the same data center. The test load was generated by our bigchaindb-benchmark tool.
  • In the first two tests, one million transactions were posted to the BigchainDB network. In the last test, 16000 transactions were posted.
  • As mentioned above, all transactions were 765 bytes.

The First Test

The first test ran against a BigchainDB network in which all nodes ran standard BigchainDB Server with no special options turned on.

One million transactions were processed in about 56 minutes without any failure. 99.7% of transactions were finalized within 9.392 seconds, 95% within 5.143 seconds, and 68% within 2.855 seconds. The network finalized an average of 298 transactions per second, with a median value of 320 transactions per second.
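The 68% / 95% / 99.7% breakdown mirrors the classic one-, two- and three-sigma levels. A minimal sketch of how such figures are computed from raw finalization times, using the nearest-rank percentile method (the sample latencies below are made up for illustration, not from the tests):

```python
def latency_percentile(latencies, pct):
    """Return the smallest latency under which `pct` percent of samples fall
    (nearest-rank method)."""
    s = sorted(latencies)
    k = max(0, min(len(s) - 1, round(pct / 100 * len(s)) - 1))
    return s[k]

samples = [0.5, 1.2, 2.9, 3.1, 0.8, 2.0, 4.5, 1.7, 2.4, 3.8]  # seconds
for pct in (68, 95, 99.7):
    print(f"{pct}% of txs finalized within {latency_percentile(samples, pct):.3f} s")
```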

The Setup of the Second and Third Tests

The second and third tests had BigchainDB’s “experimental parallel validation” feature turned on. As the name suggests, BigchainDB checked independent transactions (e.g. transactions involving different assets) for validity in parallel, allowing each node to use more than one CPU for validation at the same time.

You shouldn’t use the experimental parallel validation feature in production unless you understand what you’re doing. It relies on a hack: BigchainDB tells Tendermint (in DeliverTx) that every transaction is valid, before actually checking it, so that BigchainDB can get the next transaction from Tendermint right away. That way, BigchainDB tricks Tendermint into feeding it transactions-to-validate as fast as possible, and then validates all independent transactions in parallel. BigchainDB won’t keep the invalid transactions (in MongoDB), but Tendermint keeps all transactions and doesn’t know which ones were invalid. That means Tendermint can’t prove that a node was misbehaving. We ran this experiment to see whether parallel validation could help with performance; as the results below show, it does. The next step is to see if Tendermint can support parallel validation officially, e.g. by sending transactions to BigchainDB in bulk. It might be okay to use the above-described hack for now, but only if you understand the trade-offs.
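The control flow of that hack can be sketched as follows. This is not BigchainDB’s actual code: the class, the toy `validate` rule, and the use of a thread pool are all illustrative (a real multi-CPU setup would use worker processes). The point is the shape: acknowledge every transaction immediately, validate in the background, and persist only the ones that pass.

```python
from concurrent.futures import ThreadPoolExecutor

def validate(tx):
    """Stand-in for real validation (schema, signatures, double-spends)."""
    return tx.get("amount", 0) > 0  # toy rule for illustration

class ParallelValidator:
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.pending = []

    def deliver_tx(self, tx):
        # Tell the consensus layer "OK" immediately, before validating,
        # so it keeps feeding us transactions; validate asynchronously.
        self.pending.append((tx, self.pool.submit(validate, tx)))
        return "OK"

    def commit(self):
        # Persist only the transactions that actually validated.
        valid = [tx for tx, fut in self.pending if fut.result()]
        self.pending.clear()
        return valid

v = ParallelValidator()
for tx in ({"amount": 5}, {"amount": -1}, {"amount": 3}):
    v.deliver_tx(tx)
print(v.commit())  # [{'amount': 5}, {'amount': 3}]
```

Note the downside the article describes: the consensus layer has already been told “OK” for the invalid transaction, so only the validator’s own store knows it was rejected.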

In the second and third tests, BigchainDB always responded to Tendermint’s CheckTx requests with “true,” i.e. it didn’t spend any time checking a transaction before putting it in the mempool (where it gets replicated to all nodes for inclusion in an upcoming block). That’s okay because BigchainDB still checked each transaction before including it in BigchainDB’s blockchain, so invalid transactions still didn’t get included there.

In the third test, we also changed the flag skip_timeout_commit to true in ${HOME}/.tendermint/config/config.toml. We had noticed a significant delay in creating the last block of the test, and skipping the commit timeout seems to help.
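For reference, the change lives in the consensus section of ${HOME}/.tendermint/config/config.toml (surrounding keys omitted; check the consensus section of your Tendermint version’s generated config):

```toml
[consensus]
# Move on to the next block as soon as a block has +2/3 precommits,
# instead of waiting out the commit timeout.
skip_timeout_commit = true
```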

Test 2 Results

One million transactions were processed in about 26 minutes without any failure. 99.7% of transactions were finalized within 2.033 seconds, 95% within 0.113 seconds, and 68% within 0.018 seconds. The network finalized an average of 636 transactions per second, with a median value of 636 transactions per second.

Test 3 Results

16,000 transactions were processed in about 18 seconds without any failure. 99.7% of transactions were finalized within 4.358 seconds, 95% within 4.099 seconds, and 68% within 3.081 seconds. The network finalized an average of 889 transactions per second, with a median value of 1102 transactions per second.
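The average-throughput figures are easy to sanity-check from the totals and durations reported above (durations are given as rounded minutes/seconds, so the checks are approximate):

```python
def avg_tps(n_txs: int, duration_s: float) -> float:
    """Average transactions per second over a test run."""
    return n_txs / duration_s

print(round(avg_tps(1_000_000, 56 * 60)))  # test 1: ~298 tx/s, matching the report
print(round(avg_tps(1_000_000, 26 * 60)))  # test 2: ~641 tx/s ("about 26 minutes" is
                                           # rounded; the reported 636 tx/s implies
                                           # slightly over 26 minutes)
print(round(avg_tps(16_000, 18)))          # test 3: ~889 tx/s, matching the report
```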

Dig Deeper

You can read all the details of how we set up and ran the tests, including our configuration files and scripts, in the BEP-23 folder on GitHub. That folder also includes more detailed test results, including charts.



Follow us on Twitter and LinkedIn to get the latest announcements. Visit our website to learn more about us and to Get Started using BigchainDB software today. Ask any technical questions you may have to our developers on Gitter. If you’re a developer using BigchainDB, we want to hear from you. Send us an email at [email protected] and tell us your story.
