Hardware for CFD
Since 2009, we’ve delivered hundreds of successful CFD projects. For years, people have regularly asked us to recommend hardware configurations suitable for their CFD (and FEA) simulations, and for years we have been publishing state-of-the-art hardware configurations on the popular website Hardware for CFD. To qualify for the list, a configuration has to meet two key criteria: a) our own positive experience with it and b) a good performance/cost ratio.
While the up-to-date configurations themselves are listed and regularly updated on the Hardware for CFD website, I decided to also write down a few general tips and proven strategies that work for us at CFDSUPPORT. They are based on the typical questions we keep receiving from users.
Minimal Requirements for CAE Simulations
First of all, please note that there are no specific minimum requirements for CFD simulations. You can successfully run your CAE (CFD and FEA) simulations anywhere: on a personal computer, laptop, cluster, cloud, or even a mobile device. However, in real practice, I would never recommend anything less than a 64-bit system, 4 GB of RAM, a 500 GB hard drive, and a 15’’ screen.
Know your simulation size. Know your core*hours!
Before going for any hardware, you should know how demanding your typical simulation is. At least a rough estimate of the simulation size has to be known. Simulation size is commonly measured in core*hour units. It is a theoretical value saying how many CPU cores would be needed for a single simulation job to finish in one hour, or the other way around, how many hours a single job would need to finish on one CPU core. All the related combinations work too. For instance, a simulation that requires 100 core*hours can be run on one core for 100 hours, on two cores for 50 hours, on 100 cores for one hour, or on 1000 cores for six minutes. So, every simulation requires its own core*hours to swallow, and it’s the user who decides how to serve them. Sure, the core*hour concept is not perfect (because of scaling and different CPU power), but it does its job because it is very simple and understandable.
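To make the arithmetic explicit, here is a minimal Python sketch of the core*hour bookkeeping described above. It assumes perfect scaling, and the numbers are just the 100 core*hour example from the text, not a real measurement.

```python
def wall_time_hours(core_hours: float, cores: int) -> float:
    """Ideal wall-clock time for a job of a given size on a given core count.

    Assumes perfect scaling, which real simulations never quite reach.
    """
    return core_hours / cores


# The 100 core*hour example from the text:
job = 100.0  # core*hours
for cores in (1, 2, 100, 1000):
    print(f"{cores:5d} cores -> {wall_time_hours(job, cores) * 60:8.1f} minutes")
```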
Simulation size. What matters the most?
When it comes to the simulation size, there are three things of the highest importance: mesh size, simulation time treatment (time method), and physical model complexity. These three are by far the most important factors in terms of the demands of any CFD simulation. These three factors have to be balanced with the corresponding hardware, or the project is in big trouble.
Mesh size - critical factor
The size of the mesh of your virtual model is the ultimate player. It has a major impact on the accuracy of the results, simulation time, convergence speed and stability, postprocessing, storage, ... wait, where do I stop? The mesh size is a super-critical factor. Perhaps factor number one. A larger mesh, compared to a smaller one, brings two levels of complexity. First, a larger-mesh simulation takes more time to finish because more numerical operations have to be processed. Second, a larger-mesh simulation converges more slowly because it has more degrees of freedom (more unknowns). Mesh size is absolutely a critical factor.
Time method - critical factor
Another critical factor related to the simulation size is the time treatment. Essentially, your simulation can be steady-state or transient. And if it is transient, then it is very important how much physical time has to be simulated and what the maximum allowable time step is. For example, resolving 10 seconds of physical time with a 1e-4 s time step means 100,000 time steps, each one a full pass through the solver.
Physical model complexity - critical factor
The physical model is another important factor. For example, you can simulate a case with the very simple incompressible Navier-Stokes equations and a few basic quantities (p, u, v, w). That’s quite easy. Or you can simulate, for example, compressible Navier-Stokes with an advanced turbulence model, a couple of reacting passive scalars, a dynamic mesh, and a couple of runtime-evaluation function objects executed after every iteration, and all of a sudden your simulation gets a bit heavier than expected. All in all, the model complexity is maybe a little bit less important than the mesh size and the time method.
Resulting Hardware Recommendations
That brings us to the hardware recommendations. As stated above, hardware and simulation size have to be well balanced or no party concerned will ever be happy. Sure, one can say that we can wait a little bit longer if the simulation is too big for our hardware. We were there too. But trust me, waiting too long for the results is killing productivity and too many opportunities are missed. Time is everything.
CPU Processor - critical hardware component
At the end of the day, it is the processor that does the work. Processor power (speed) is critical. When it comes to CPU power, more is always better than less. On the other hand, it’s reasonable to look for a good power/cost ratio. For years, we‘ve had good experience with the CPU Benchmark website, which compares the various processors available on the market. There you can find reliable information about a particular processor's power. Please note that the prices listed on cpubenchmark.net are only a rough guess. The actual costs are best checked with your hardware supplier because prices differ over time and by region. Power and price give you the basic information on what to pick. Another very important factor is whether the processor can be used to build a cluster (or whether it is just a lone wolf).
Please note that processors should always be compared as whole units - not per single core or a fraction of cores. Comparing per core is a typical mistake that leads to confusion.
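As a quick illustration of the whole-unit comparison, here is a small Python sketch that ranks CPUs by benchmark score per dollar. The CPU names, scores, and prices are made-up placeholders; in practice you would use the whole-CPU scores from cpubenchmark.net and the prices quoted by your supplier.

```python
# Hypothetical whole-CPU benchmark scores and street prices; replace with
# real numbers from cpubenchmark.net and your hardware supplier.
cpus = {
    "CPU A (32 cores)": {"score": 60_000, "price_usd": 2_000},
    "CPU B (16 cores)": {"score": 40_000, "price_usd": 900},
    "CPU C (64 cores)": {"score": 95_000, "price_usd": 4_500},
}

# Rank by benchmark points per dollar, best ratio first.
for name, d in sorted(cpus.items(),
                      key=lambda kv: kv[1]["score"] / kv[1]["price_usd"],
                      reverse=True):
    ratio = d["score"] / d["price_usd"]
    print(f"{name:>18}: {ratio:6.1f} benchmark points per USD")
```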
Motherboard - not that critical component
Although the motherboard is the main printed circuit board of the computer - it holds and allows communication between crucial components such as the processor and memory, and provides connectors for other peripherals - it is picked to fit the processor, after the processor is picked. You should only check that it has enough ports compatible with your other devices and components. The choice of the motherboard is not a very critical decision.
Memory - not that critical hardware component
Regarding computer memory, it’s primarily about the mesh size. There is a golden rule: count on 2 GB of RAM for every one million cells in the mesh. In any case, we recommend at least 4 GB of RAM per CPU core. BTW, it may easily happen that mesh generation or advanced visual postprocessing is more demanding on memory than the simulation run itself. Memory is relatively cheap, and we recommend getting as much of it as possible.
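To put the golden rule in numbers, here is a minimal Python sketch that estimates the RAM needed for a given mesh and core count, using the 2 GB per million cells and 4 GB per core figures from the text. The example mesh and core count are illustrative.

```python
def recommended_ram_gb(mesh_cells: int, cores: int,
                       gb_per_million_cells: float = 2.0,
                       gb_per_core: float = 4.0) -> float:
    """Rule-of-thumb RAM estimate: take the larger of the two golden rules."""
    by_mesh = mesh_cells / 1e6 * gb_per_million_cells
    by_cores = cores * gb_per_core
    return max(by_mesh, by_cores)


# Example: a 20-million-cell mesh on a 16-core workstation.
print(recommended_ram_gb(20_000_000, 16), "GB")  # -> 64.0 GB (the per-core rule dominates here)
```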
Monitors - surprisingly critical hardware component
There is only one rule. You need to see things! Get a display size you’re comfortable with. But remember, the devil is always in the details. Over time we realized that large screens work much better than small ones. Here is a little example. This is a comparison of what you can see on a laptop with a resolution of 1536x864 and on a PC with a resolution of 3840x2160.
Can you see the difference? Look how much more information one has at hand with the large display. FYI: the pixel-area ratio is 6.25:1. Remember, you need to see things. Every extra click, scroll, opening, closing, searching for a button, and menu browsing holds you down and kills effectiveness. Maybe you’d be surprised (we were too) how much more effective large screens are compared to smaller displays. Once you get used to the luxury of two 4K monitors on your desk, there is no way back.
Graphics Card - not that critical factor
The choice of the graphics card goes hand in hand with the size of the display. First of all, all your actions in a graphical interface like zooming, scrolling, and rotating have to be smooth and enjoyable, no matter how complex the model is. Second, you may want to process many high-quality images for an animation and/or render complex multiple-layer semi-transparent images. If you want to get through the rendering process in a reasonable time, you should definitely look for a high-end graphics card. Despite the fact that the graphics card isn’t the most important component for CPU-based simulations, it surely deserves to be at a similar level to the other computer components.
Disks & Storage - not that critical factor
If possible, for daily work we recommend using only SSD disks. Standard hard disks are sufficient for data storage. You can keep the storage either right on your PC or, better, on network storage with a proper backup.
Other equipment
It definitely makes sense to pay a little bit more for quiet coolers, a vertical mouse, an adjustable monitor, and maybe an adjustable chair or table. All these little things make your work more enjoyable. Again and again, we find that it quickly pays off to invest a little bit more in better hardware and comfort. At the end of the day, it is your joy, efficiency, and good use of your valuable time that matters. And sure, we did our homework already: we regularly update our website Hardware for CFD, where we publish the reliable hardware configurations we find best in power/cost ratio.
Simulation costs
The simulation cost can be evaluated as the cost per core*hour multiplied by the total simulation core*hours. The cost per core*hour on a PC can be calculated from the total cost of ownership and the estimated simulation time. One core*hour on a PC starts at about $0.002, but in real life it’s rather $0.005. The total core*hours for a particular simulation case are impossible to calculate a priori because there are too many unknowns, especially the convergence speed. To find out a particular case’s core*hours, there is no other option but to test it.
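The cost arithmetic above can be written down in a few lines of Python. The workstation price, core count, useful life, and job size below are placeholder assumptions chosen to land in the price range mentioned in the text.

```python
def usd_per_core_hour(total_cost_of_ownership_usd: float,
                      cores: int, useful_life_hours: float) -> float:
    """Rough cost of one core*hour on your own PC: total cost of ownership
    spread over the machine's useful life and its cores."""
    return total_cost_of_ownership_usd / (cores * useful_life_hours)


def simulation_cost_usd(core_hours: float, rate_usd_per_core_hour: float) -> float:
    """Total cost of a run: size of the job times the price of one core*hour."""
    return core_hours * rate_usd_per_core_hour


# Placeholder example: a $4,000 workstation with 32 cores used for ~5 years
# (about 43,800 hours of availability), running a 500 core*hour job.
rate = usd_per_core_hour(4_000, 32, 43_800)
print(f"{rate:.4f} USD per core*hour")              # roughly 0.003 USD
print(f"{simulation_cost_usd(500, rate):.2f} USD per simulation")
```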
The Different CPU Speed Issue
The core*hour concept has two issues. The first one is scaling. The scaling topic is too large to discuss here in detail; let’s talk about scaling some other time. The second issue is that the core*hour concept doesn’t compare well between different computers. It is not perfectly transferable. As you can imagine, a ten-year-old computer certainly delivers very different CPU power than a high-end computer today (while the core*hours remain the same). A better concept seems to be a particular simulation benchmark that is carved in stone. Then you can ask simple questions like: How much time does this benchmark take to converge on that computer? Or: How much do I pay for this simulation?
Make a Benchmark for Yourself
The good news is you can make one for yourself. It is simpler than it may look at first sight. There is an easy way to do it, and you can try out the following example. Download vanilla OpenFOAM (you can do so on our website), run the basic motorBike tutorial, and measure the time to finish. Run it everywhere you can and measure the total simulation time. You'll be surprised how large the differences across processors can be.
You can make it even more profound and refine the mesh several times. You may need to change the configuration file decomposeParDict, for example to numberOfSubdomains 24; and method scotch;, and in the file blockMeshDict refine the basic background mesh blocks a couple of times. Don’t forget to remove the cell limits from snappyHexMeshDict. Run the corresponding simulations with otherwise default settings. Anyone can make such an independent study.
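If you want to automate the timing, here is a minimal Python sketch that runs the tutorial case and records the elapsed wall-clock time. It assumes the OpenFOAM environment is sourced and that the motorBike tutorial, with its standard Allrun script, has been copied into a local directory; the paths and the log file name are illustrative.

```python
import subprocess
import time
from pathlib import Path

# Assumes the motorBike tutorial case (with its standard Allrun script) has
# been copied into ./motorBike and the OpenFOAM environment is sourced.
case_dir = Path("motorBike")

start = time.perf_counter()
subprocess.run(["./Allrun"], cwd=case_dir, check=True)
elapsed = time.perf_counter() - start

hostname = subprocess.run(["hostname"], capture_output=True, text=True).stdout.strip()
print(f"{hostname}: motorBike finished in {elapsed / 60:.1f} minutes")

# Append the result to a simple log so runs on different machines can be compared.
with open("benchmark_log.csv", "a") as log:
    log.write(f"{hostname},{elapsed:.1f}\n")
```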
You can be sure I did my homework too. Here's a little experiment as an example. I picked two not-too-bad processors from AMD and Intel and ran a couple of motorBike simulations.
Such a benchmark test can show you a couple of things. It gives you valuable time estimates to take into account. It can offer you a profound comparison between various processors. And it also shows the scaling potential of your simulation.
As a result, please remember: it’s important to know your simulation size and simulation time. That’s the best starting point for a responsible hardware decision.
Final Recommendations about Hardware for CFD simulations
As discussed above, it’s the simulation size that matters the most. Together with the fact that most CFD users want their results finished overnight or so, it’s not difficult to derive what hardware to get. Remember, the simulation size and the hardware used have to be balanced.
For example, if you are a master's degree student chasing the stall point of an airfoil for your diploma thesis, your typical simulation may take up to 5 core*hours, and you can easily make do with a normal laptop.
If you are a designer or engineer and your typical simulation takes up to 500 core*hours, then a good PC may be all you need.
If you are a CFD professional whose typical simulation takes more than 500 core*hours, then you need a PC and also a simulation cluster. Preprocessing, case set-up, first tests, and postprocessing are typically done on the PC, while the production simulations run on the cluster.
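To turn the "finished overnight" requirement into a core count, here is a small Python sketch based on the ideal core*hour arithmetic above. The 12-hour window and the example job sizes are only assumptions, and real scaling is never ideal, so round generously.

```python
import math


def cores_for_deadline(core_hours: float, wall_hours: float = 12.0) -> int:
    """Minimum core count to finish a job within the given wall-clock window,
    assuming ideal scaling (real scaling is worse)."""
    return max(1, math.ceil(core_hours / wall_hours))


# The three user profiles from the text, with an overnight (12 h) window;
# the professional's 5000 core*hours is just an illustrative figure.
for label, size in [("student", 5), ("engineer", 500), ("professional", 5000)]:
    print(f"{label:>12}: ~{cores_for_deadline(size)} cores to finish overnight")
```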
All in all, a PC is the gold standard. And you still need a station to sit at for preprocessing and postprocessing after all. As you know, at CFDSUPPORT we believe that any successful project is a result of focus, skills, experience, patience, and dedication. And especially if you are aiming high with the accuracy of your results, it definitely makes sense to spend your effort in a pleasant environment and in an effective way.
If you want to see some particular hardware configurations we recommend, you can find them on our website Hardware for CFD: https://www.cfdsupport.com/hardware-for-cfd.html
Feel free to get in touch. And good luck with your projects!