I find myself discussing the intricacies of automatic stabilization in Abaqus – and whether or not it’s appropriate in certain applications – surprisingly often. These discussions frequently descend into a ‘he said’/’she said’ exchange of misinformation and conflicting ideas, even amongst experienced users. Given how often this topic comes up, it felt only appropriate to write a blog post about it to reach a wider audience, in the hope of enlightening those who are unaware of the potential consequences that automatic stabilization may unleash.
First off, it will serve this post well to begin by explaining what automatic stabilization is. The short answer is, as the name suggests, a way to automatically stabilise your model. To elaborate, it is a method of allowing the solver to dissipate energy from the model, under certain criteria, to resist divergence and increase the chances of obtaining a converged solution. This description also encapsulates the issue with automatic stabilization – “allowing the solver to dissipate energy”. By allowing the solver to bleed energy out of the model you are artificially changing the physics of the problem and therefore altering the results.
There are several different ways that stabilization can be applied to a model; however, I’ve found that the most common – possibly because it’s the easiest to define – is to apply a “dissipation energy fraction” or a “damping factor” globally to the model as part of the step definition (see image). Selecting either option brings up some new fields that are rather conveniently pre-filled with default values. All you have to do now is click “OK”, and that’s the last you need to think about it. Hit run, and this magic button will turn a model that wouldn’t converge into a nice, colourful contour plot. Brilliant! …or not.
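For those who prefer the keyword interface, the same global stabilization appears on the step’s procedure line. A sketch from memory of the Abaqus/Standard keyword syntax – check the Keywords Reference for your version; the first value shown is the documented default, the second is purely illustrative:

```
** Dissipated energy fraction form (2.0E-4 is the documented default)
*STATIC, STABILIZE=2.0E-4
0.1, 1.0
**
** Or specify a constant damping factor directly (value is illustrative)
*STATIC, STABILIZE, FACTOR=1.0E-8
0.1, 1.0
```

Either way, a single parameter quietly injects artificial viscous damping across the whole model – which is exactly why it deserves more scrutiny than a pre-filled dialog box invites.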
A message that is echoed throughout the documentation, training manuals – and even the GUI itself – is that stabilization is “for advanced users as it may increase computational time, produce inaccurate results or (even) cause convergence problems”. Clearly, automatic stabilization is not intended as a default go-to for an easy win when troubleshooting problematic models. However, it certainly does have its place: overcoming specific, localised issues that persist for a small fraction of the total simulation time, that would otherwise impede convergence, and (crucially) whose treatment with stabilization does not significantly alter the physics of the problem. Yes, that very small, specific, infrequent issue is when stabilization is appropriate. What it is not is a “magic bullet” to side-step gross errors in the definition of the problem.
What’s important when using stabilization is checking whether or not the simulation results have been altered in any significant way, so that the predicted behaviour can still be considered valid. As mentioned previously, there are several types of stabilization that can be applied, and each has its own individual output; however, there is a “catch-all” history output that can be checked to determine how much energy has been dissipated from the model by all sources of artificial stabilization – ALLSD. The documentation states that ALLSD should be “a small fraction” of the internal energy (ALLIE) of the model. We often quantify “a small fraction” as absolutely no more than 2%.
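As a concrete illustration of that check, suppose you have already extracted the whole-model ALLSD and ALLIE history outputs (for example via the odbAccess scripting interface, or by exporting XY data from the Visualization module) as two lists sampled at the same time points. A minimal, tool-agnostic sketch of the 2% test might then look like this – the function names are my own, not an Abaqus API:

```python
def stabilization_ratio(allsd, allie):
    """Return ALLSD as a fraction of ALLIE at each output frame.

    allsd, allie: sequences of static stabilization energy and internal
    energy sampled at the same time points. Frames where the internal
    energy is zero (e.g. at time zero) are treated as a ratio of 0.0.
    """
    return [sd / ie if ie else 0.0 for sd, ie in zip(allsd, allie)]


def stabilization_acceptable(allsd, allie, limit=0.02):
    """True if the stabilization energy never exceeds `limit`
    (2% by default) of the internal energy at any output frame."""
    return all(r <= limit for r in stabilization_ratio(allsd, allie))
```

Note that the comparison is made frame by frame, not just at the end of the step: stabilization energy that spikes mid-analysis can invalidate the very increments you care about, even if the final ratio looks respectable.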
Should the stabilization energy be inappropriately large, there are methods for reducing its impact on the model. Depending on the stabilization used, these may include:

- reducing the dissipated energy fraction or damping factor from its default value;
- using adaptive automatic stabilization, so that the solver adjusts the damping factor against an accuracy tolerance;
- applying stabilization only in the steps, or to the regions (e.g. via contact stabilization), where it is genuinely needed, rather than globally;
- and, ideally, resolving the underlying instability – for instance with physically meaningful restraints – so that stabilization is no longer required at all.
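One option worth highlighting is adaptive automatic stabilization, where the solver adjusts the damping factor during the step so that the ratio of stabilization energy to strain energy stays within an accuracy tolerance. Sketched in keyword form – to the best of my recollection ALLSDTOL defaults to 0.05, but verify against the Keywords Reference for your release:

```
** Adaptive stabilization: the damping factor is adjusted so the
** stabilization energy ratio respects the accuracy tolerance (5% here)
*STATIC, STABILIZE, ALLSDTOL=0.05
0.1, 1.0
```

Even with the adaptive scheme doing the bookkeeping for you, the ALLSD-versus-ALLIE check described above still applies: the tolerance bounds the damping, it does not absolve you of reviewing the energy history.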
I’ll end this post the same way I end all discussions on this topic: if automatic stabilization is still something you wish to use, then I implore you to do adequate research into the topic, so that you can be sure its application will not adversely affect your results.