Alf’s Musings #14
Alfonso Martínez de la Torre
Founder at OpenMagnetics, a Free Open-Source toolbox for designing inductive components
A bit of context
This is an Alf’s Musing that has been cooking for a long time.
Some months ago I started working on a data structure able to hold all possible combinations of information for any material used in magnetic cores, be it ferrite, iron powder, electrical steel, or nanocrystalline (it is here, if you are curious). In that common magnetic language, the way of storing the different core loss methods was one of the most important factors.
After that came reading all the available papers in the state of the art. Once understood, they needed to be implemented, where I was lucky enough to count on the help of Riccardo Tinivella, who implemented the C++ version of the iGSE.
Once all that is done, the models need to be verified, as bugs are everywhere, so a comprehensive test suite was written.
Finally, after the models are implemented and verified, they must be beautifully integrated in our online tool, where anybody can directly use them with a few clicks.
Will this be the beginning of another series on models about core losses and temperature? :)
A wonderful world
A magnetic component is an inductive device, and as such it has no resistance. All the power that goes into the magnetic is either passed along or returned in pristine condition; all apparent power is reactive power, no loss. And this is valid for any kind of magnetic: inductors, transformers, chokes, reactors, even flyback transformers and multi-winding inductors.
What a beautiful world would be if that were the case, dear reader. How beautiful indeed.
Sadly (or gladly for us Engineers, as we love challenges) reality is not ideal. The entropy of the universe always grows, nothing is faster than the speed of light (in vacuum; in other media it can be beaten), and magnetic components have losses.
As with so many things in magnetics, losses can be separated into two main sections: core and winding. I have already talked at length about the latter (Alf’s Musings No. 1, No. 2, and No. 3, the last of which has not been republished in this newsletter yet) and not so much, but still quite a bit, about the former (Alf’s Musings No. 4), so I won’t be repeating myself. Go and read those!
What I want to do in this article, and in some future ones, is to talk about the different models that exist to predict these losses and how well they replicate the behavior of a real magnetic.
As I explained in the introduction, I have been working on core losses, so I will focus on those for now, leaving the winding losses models for a more distant future.
Overview of core losses methods
The methods for predicting the losses in a magnetic core can be grouped into three different types:

- Mathematical models
- Empirical models
- Finite Element models
Let’s talk a bit about each one:
Mathematical models
These models try to reproduce the macroscopic behavior produced by the microscopic effects of the core material. Traditionally the losses are broken down into two different origins: hysteresis loss and eddy current loss; although more recently a third term has been taken into account: excess eddy current, or anomalous, losses.
And how do we calculate each one? For the last two terms of the modern three-term equation, Bertotti introduced expressions based on his research on soft magnetic materials. These expressions try to account for the energy wasted due to the currents induced in the magnetic core.
The first term of the equation, the hysteresis loss, is modeled by the area of the BH loop created by the operation point that excites our magnetic core, times the frequency at which the magnetic field is switching. So now we only have to calculate that area, easy right?
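Putting the three terms together, the loss-separation equation is usually written in a form similar to the following (coefficient names and exact exponents vary between authors; take this as the typical textbook shape, not a universal formula):

```latex
P_v \;=\; \underbrace{f \oint H \, dB}_{\text{hysteresis}}
\;+\; \underbrace{k_{cl}\,\bigl(f\hat{B}\bigr)^{2}}_{\text{classical eddy currents}}
\;+\; \underbrace{k_{exc}\,\bigl(f\hat{B}\bigr)^{1.5}}_{\text{excess (anomalous)}}
```

where the first term is exactly the loop-area-times-frequency product described above, and $k_{cl}$ and $k_{exc}$ are material- and geometry-dependent coefficients.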
I am sorry to say that no, it is not an easy task. Most of the efforts in the mathematical models are spent finding a way to reproduce this BH loop for any given non-sinusoidal excitation, as it is a highly non-linear behavior, with major and minor loops intertwined in the same period.
The most famous models of this kind are Preisach, Jiles-Atherton, and Roshen. Their work is too extensive to treat in this overview, but I will add them to the queue of topics to cover in future Alf’s Musings.
Empirical models
Empirical models sit at the opposite corner from the mathematical ones: they try to model all the losses existing in a given core material with one equation, or system of equations, whose coefficients are fitted to empirically measured data.
This equation is composed of two types of parameters, the fitting coefficients and the inputs. The inputs are typically variables of the excitation or the state of the magnetic: frequency, magnetic flux density, temperature, or duty cycle.
The fitting coefficients are numbers chosen so that our empirical equation minimizes the error when evaluated against the measured data over its inputs.
It might sound a bit mathematical and complex, but if you have designed a magnetic component, I am quite confident you have used one of these methods: the universally known Steinmetz Equation.
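As a minimal sketch of both ideas, here is the classic (sinusoidal) Steinmetz power law, P_v = k · f^α · B̂^β, and one way its coefficients could be recovered from loss data. The coefficients below are made up for illustration only (they come from no datasheet), and a real fit would use least squares over many noisy measurement points rather than exact slopes:

```python
import math

def steinmetz(f, b_peak, k, alpha, beta):
    """Volumetric core loss of the classic Steinmetz equation: P_v = k * f^alpha * B^beta."""
    return k * f ** alpha * b_peak ** beta

# Made-up "true" coefficients, purely illustrative (not from any real material).
k_true, a_true, b_true = 3.5, 1.4, 2.5

# The power law is linear in log space: log P = log k + alpha*log f + beta*log B.
# So, on noise-free data, alpha is the slope of log P vs log f at fixed B...
f1, f2, b0 = 100e3, 400e3, 0.1
alpha_fit = (math.log(steinmetz(f2, b0, k_true, a_true, b_true))
             - math.log(steinmetz(f1, b0, k_true, a_true, b_true))) / math.log(f2 / f1)

# ...beta is the slope of log P vs log B at fixed f...
b1, b2, f0 = 0.05, 0.2, 100e3
beta_fit = (math.log(steinmetz(f0, b2, k_true, a_true, b_true))
            - math.log(steinmetz(f0, b1, k_true, a_true, b_true))) / math.log(b2 / b1)

# ...and k follows from any single data point.
k_fit = steinmetz(f0, b0, k_true, a_true, b_true) / (f0 ** alpha_fit * b0 ** beta_fit)
print(alpha_fit, beta_fit, k_fit)  # recovers ~ (1.4, 2.5, 3.5)
```

The log-space trick is also why loss plots in datasheets are drawn on log-log axes: a pure Steinmetz material would appear as straight lines there.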
Usually manufacturers provide the fitting coefficients for their materials implicitly in the form of printed graphs in the datasheets, though the most modern ones are directly giving the numerical coefficients so that they can be integrated in modern tools. An example of this are Ferroxcube or Magnetics, who provide detailed spreadsheets with the data for each material.
I must make a stop here to talk about a common misunderstanding regarding the Steinmetz equation. As many older manufacturers provided only the plotted volumetric losses in their datasheets, many Engineers believe that these plots are the source used to obtain the Steinmetz coefficients, and that extracting the losses from them is more accurate than using the mathematical equation.
This is a myth: those plots are the Steinmetz equation itself, plotted after fitting the coefficients. They are a consequence of the mathematical equation, not the other way around.
Does this mean it is wrong to use the curves? Of course not, but they are, at best, as good as the mathematical equation, and in the common case worse, as there is always some error when extracting a value from a plotted curve.
Going back to the Steinmetz model, its development started in 1892, when Charles Proteus Steinmetz published his great paper “On the Law of Hysteresis”, although what he proposed at that time is not exactly what we know as the “Steinmetz Equation”, as the frequency and temperature coefficients were added later.
Over the years many variations of this equation were proposed, most of them trying to reuse the same coefficients, which the manufacturers had already turned into a de facto standard. These new versions adapted the original equation, developed at a time when all waveforms were sinusoidal and low frequency, to modern times, with triangular or trapezoidal magnetizing currents that can reach several megahertz.
There are quite a few (MSE, NSE, iGSE, Albach, Barg, Ouyang, Stenglein) and each one deserves their own Alf’s Musing, so I won't get into their details further than what’s needed for the comparison coming in the next section.
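To give a flavor of how these variants reuse the Steinmetz coefficients, here is a rough sketch of the iGSE for a piecewise-linear flux waveform. This is my own illustrative Python reimplementation, not the OpenMagnetics C++ code, and the coefficients are again invented:

```python
import math

def igse_ki(k, alpha, beta, n=100_000):
    """k_i = k / ((2*pi)^(alpha-1) * 2^(beta-alpha) * integral of |cos(theta)|^alpha)."""
    integral = sum(abs(math.cos(2 * math.pi * i / n)) ** alpha
                   for i in range(n)) * 2 * math.pi / n
    return k / ((2 * math.pi) ** (alpha - 1) * 2 ** (beta - alpha) * integral)

def igse_loss(t, b, k, alpha, beta):
    """Average volumetric loss over one period of a piecewise-linear B(t),
    assuming a single major loop (no minor-loop splitting, for simplicity)."""
    ki = igse_ki(k, alpha, beta)
    period = t[-1] - t[0]
    delta_b = max(b) - min(b)  # peak-to-peak flux density of the major loop
    loss = 0.0
    for i in range(len(t) - 1):
        dt, db = t[i + 1] - t[i], b[i + 1] - b[i]
        loss += ki * abs(db / dt) ** alpha * delta_b ** (beta - alpha) * dt
    return loss / period

# Sanity check: for a sinusoid the iGSE must fall back to plain Steinmetz.
k, alpha, beta = 3.5, 1.4, 2.5           # invented coefficients, illustrative only
f, b_peak, n = 100e3, 0.1, 2000
t = [i / (n * f) for i in range(n + 1)]  # one period, finely discretized
b = [b_peak * math.sin(2 * math.pi * f * ti) for ti in t]
p_igse = igse_loss(t, b, k, alpha, beta)
p_steinmetz = k * f ** alpha * b_peak ** beta
print(p_igse / p_steinmetz)  # ≈ 1.0
```

The interesting part is that the same k, α, β fitted on sinusoidal data can now be applied to triangular or trapezoidal waveforms, which is exactly the adaptation to modern excitations discussed above.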
But Steinmetz and its flavors are not the only empirical model for core losses. There have been other attempts to predict the losses of the materials, as for example, the Ridley-Nace formula or AI-based models.
Regarding AI-based models, many people consider them something exotic and alien, but at the end of the day they are nothing more than a really advanced equation-fitting procedure.
Academic researchers have worked with these models since the 90s, but it was not until recently that the available computational power and Open Source Software made new approaches with larger datasets possible: in 2021, Princeton and Dartmouth Universities took the time and resources to create one of the best sources of core loss data and AI models that exist right now: the MagNet project.
Finite Element models
These are not models for calculating the core losses per se, but software programs capable of simulating a whole physical system, such as a magnetic component, by breaking it down into very small volumes, or elements, and applying the desired physical equations in each of them, making sure that contiguous elements have continuous properties.
These programs are able to calculate the magnetic field distribution over a magnetic component under a given excitation and, to obtain the core losses, they apply an analytical equation (typically Steinmetz) to each of these elements, calculating the core losses in each tiny volume and obtaining the total by adding all the volumes together.
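The post-processing step is conceptually simple: once the solver reports a flux density per element, the total loss is a volume-weighted sum. A toy sketch, with an invented three-element "mesh" and invented coefficients (a real FEM tool does this internally over thousands of elements):

```python
def total_core_loss(elements, f, k, alpha, beta):
    """Sum per-element Steinmetz losses.
    elements: iterable of (volume_m3, peak_flux_density_T); returns watts."""
    return sum(vol * k * f ** alpha * b ** beta for vol, b in elements)

# Three fake elements: corners and gap regions usually carry a higher B than the bulk.
mesh = [(1e-9, 0.20), (1e-9, 0.10), (2e-9, 0.05)]
print(total_core_loss(mesh, f=100e3, k=3.5, alpha=1.4, beta=2.5))
```

This also shows why the loss accuracy of a FEM simulation is bounded by the analytical equation applied per element: refining the mesh improves the field distribution, not the loss model itself.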
The most famous one in this section is Ansys Maxwell, though quite expensive. There are other proprietary programs, like Dassault Systèmes SIMULIA; or Free Open Source Software (FOSS) like ElmerFEM, FEMM, or SparseLizard. Finally, there are some programs that are proprietary, but built upon FOSS software, like Trafolo or Quanscient.
Accuracy comparison of different core losses methods
As mentioned on the introduction, some of the aforementioned models are implemented in OpenMagnetics, and the code is available in our Github: https://github.com/OpenMagnetics/MKF/tree/main/src
Needless to say, it’s possible that there are bugs in these implementations, but that is what peer review is for: feel free to check the code and report anything suspicious.
I have based this study on the following implemented models: Steinmetz, iGSE, Albach, Barg, and Roshen. Two more models, MSE and NSE, are implemented, but they haven’t been included because their results are exactly the same as those of the Albach and iGSE models. In the case of the NSE, it was developed independently by Alex Van den Bossche, but he arrived at the same equation as Charles Sullivan.
Regarding materials, the ones used for this study are: Ferroxcube 3C94, Ferroxcube 3C90, TDK N87, TDK N27, and TDK N49. The measurement data used for the total accuracy was obtained from the full MagNet database, courtesy of Minjie Chen. However, since this database contained hundreds of thousands of measurements, it was impractical to use all of them for the visual verification, so one hundred random samples from each material were used for the graphs.
Finally, regarding post-processing, all the results for the same frequency, magnetic flux density, and duty cycle were grouped and averaged to keep the graphs readable.
If you participated in the poll from last week, you must be eager to know if you correctly guessed the model with the best total accuracy, so let’s get to that first!
The winner in this case is Albach’s model, although the difference with iGSE is so small that it does not really make sense to say one behaved better than the other.
After the two best models comes Barg’s model in third position, with a bit better accuracy than vanilla Steinmetz Equation.
In the last position comes the only mathematical model implemented, Roshen’s model, though its extra error over vanilla Steinmetz is not large enough to cancel out its great advantage: it takes the core geometry into account, predicting larger eddy current losses as the core size grows.
Let’s take a look at the error of each model versus three basic variables, starting with the frequency. If the images are shown too small and the text cannot be easily read, I recommend right-clicking on them and selecting "Open image in new tab".
The frequency study has been done by averaging all the samples with the same frequency and applying a rolling average of 3 data points, to show more easily how each model behaves.
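For reference, the smoothing is nothing exotic: a centered 3-point rolling average along these lines (my own sketch, not the actual plotting code):

```python
def rolling_average(values, window=3):
    """Centered rolling mean; the edges simply use the samples that exist."""
    half = window // 2
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

print(rolling_average([1.0, 2.0, 6.0, 2.0, 1.0]))  # smooths the spike at index 2
```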
We can see how in general iGSE and Albach behave much better, with Barg and Steinmetz behaving a bit worse, especially at high frequencies. Roshen has the highest error with the frequency.
If we take a look at the error versus the magnetic flux density, we can observe something curious: vanilla Steinmetz behaves much better and more consistently for lower values, while all models improve their accuracy at higher flux density values.
Finally, taking a look at the error versus the duty cycle of the signal, where a duty cycle of 0 represents a sinusoidal waveform (I considered this option better than having two different graphs), we can see how the Roshen model has a lower error for extreme duty cycles, while the rest behave much better for sinusoidal and triangular waveforms with duty cycles close to 0.5, though, as expected, iGSE and Albach behave the best.
As a closing summary, I would recommend using Albach or iGSE for small and medium cores, or when the size of the core is unknown; but as none of the other models account for the increase of eddy current and anomalous losses with core size, I would suggest using Roshen for large ungapped cores, even if its accuracy is lower.
Comments

Power Electronics engineer by day / Financial Quant by night (2 years ago):
Have you tried to also show the fit of NNs (like MagNet)?

TRAFOLO | FEM simulation software for magnetic components (2 years ago):
Oh, finally, we have an alternative to PDFs, the pinnacle of modern technology. Who needs an extensive dataset when you can have a static, one-size-fits-all document that requires WebPlotDigitizer to read log values? Magnetic material manufacturers really know how to keep things exciting, so Alf please don't take away all the fun.

Magnetic Component Design Engineer (2 years ago):
Really interesting analysis. Thank you for sharing it. So, even with the best tools we have, we can be more than 1/3 out on core losses! I am curious on one point, though: you have reported the overall error, but did you see any trend in + or − results? In other words, were any of the models consistently overestimating or underestimating the loss? Or were there any trends apparent in this respect with frequency/duty cycle/flux density? That is important, as overestimating would of course be the lesser of two evils.