Convergence Issues

This section offers suggestions to aid convergence of large transient analysis models (greater than 6,000 nodes) which contain advection and are having problems converging with the adaptive time step defaults.

Symptoms of nonconvergence often include a continuous reduction in time step to unreasonably small values (less than 1.0E-8 time units) and/or iterative delta-T oscillations which extend to the extremes (±1000 degrees) of the allowable range within a time step.

Any user modeling a problem like this and experiencing these symptoms should have mastered the skills of monitoring an analysis using the qstat thermal utility and the qout.dat file, and of terminating an analysis using the qkill command executed from the job-named subdirectory.  These skills are useful in assessing the effectiveness of any of the suggestions which follow.

First, in order to evaluate any trend toward convergence, several short transient runs must be made.  To this end, reduce the duration of the transient to a reasonable level by reducing the model ‘Stop Time’ under Analysis/Solution Parameters/Run Control Parameters.

The user must then become familiar with the performance of the model using the default Analysis form settings.  While running the analysis as-is, use ‘qstat c’ or a series of ‘qstat b’ calls to review the stat.bin file for a series of time steps.  The qstat calls display the model ‘Time’, ‘Time Step’, ‘CPU Time’, and iteration-related information.  If the model appears to be monotonically approaching convergence but fails to converge because it exceeds the ‘Maximum Iterations per Time Step’, an obvious first attempt is to increase the ‘Maximum Iterations per Time Step’ under the Analysis/Run Control Parameters/Iterations Parameter form.  Change the default from 36 to a larger number such as 200.
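As a minimal sketch of the trend check described above (assuming the per-time-step iteration counts are transcribed by hand from the qstat output; the counts below are illustrative, not from a real run):

```python
# Hypothetical helper: the iteration counts are assumed to be read off
# 'qstat c' output by hand; no solver utility produces this list directly.

def is_monotonically_improving(iteration_counts):
    """True if each time step needed no more iterations than the previous one."""
    return all(later <= earlier
               for earlier, later in zip(iteration_counts, iteration_counts[1:]))

counts = [36, 34, 30, 27, 25]               # illustrative per-time-step counts
print(is_monotonically_improving(counts))   # True
```

A steadily shrinking count that still hits the iteration ceiling is the case where raising ‘Maximum Iterations per Time Step’ is most likely to pay off.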

Qkill the current job, rerun, and observe whether the time steps eventually converge within the new iteration maximum.  If the model does converge, qkill the job and also reset the ‘Minimum Number of Iterations per Time Step’ to a number 1/3 to 1/5 the size of the maximum.  Reset the maximum down to a smaller number if appropriate.  That is, if you’ve observed time steps converging consistently in fewer than 90 iterations, reduce the maximum from 200 to 100 and make the minimum 25.
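The rule of thumb above can be sketched as follows; the ~10% headroom and round-up-to-ten choices are assumptions made for illustration, not solver requirements:

```python
import math

def suggest_iteration_limits(observed_iters, headroom=1.1, min_fraction=4):
    """Round the observed worst-case iteration count up (with ~10% headroom)
    to the next multiple of ten for 'Maximum Iterations per Time Step', and
    take 1/3 to 1/5 of that (here 1/4) for the minimum."""
    new_max = int(math.ceil(observed_iters * headroom / 10.0)) * 10
    new_min = new_max // min_fraction
    return new_max, new_min

print(suggest_iteration_limits(90))   # (100, 25), matching the example above
```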

If the model shows signs of extreme oscillations, there are several parameters that can be set to help mitigate them.  By limiting the range of oscillating temperatures, a model will very often bring itself back into a convergent iterative pattern.  Under Analysis/Solution Parameters/Convergence Parameters, set the ‘Perturbation Temperature’ to a number smaller than the default of 0.05; 0.01 is a good recommendation.  Also, under Analysis/Solution Parameters/Run Control Parameters, set the ‘Minimum Allowed Temperature’ to a value which is physically meaningful for your analysis.  For instance, if you are modeling in Fahrenheit use -460.0; if Celsius, use -273.0; if Kelvin or Rankine, use 0.0.
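Expressed as a lookup, with the values taken directly from the text (a reference table, not solver input syntax):

```python
# Absolute zero in each working unit system -- the physically meaningful
# floor for 'Minimum Allowed Temperature' (values as quoted above).
ABSOLUTE_ZERO = {
    "F": -460.0,   # Fahrenheit
    "C": -273.0,   # Celsius
    "K": 0.0,      # Kelvin
    "R": 0.0,      # Rankine
}
print(ABSOLUTE_ZERO["F"])   # -460.0
```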

More importantly, clamp the ‘Maximum Temp Change per Iteration’ to a reasonable number for your conditions.  If the expected temperature gradient across the model from source to sink will be 200.0, set the ‘Maximum Temp Change per Iteration’ to 20.0 or 25.0.  Clamping the oscillatory range may speed the switchover to monotonic convergence once oscillations begin.
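The 200.0 to 20.0–25.0 example amounts to capping the per-iteration change at roughly a tenth of the expected source-to-sink gradient; the fraction below is an assumption inferred from that example:

```python
def max_temp_change_per_iteration(expected_gradient, fraction=0.10):
    """Suggested clamp: a fixed fraction (assumed ~10%, per the 200 -> 20
    example) of the expected source-to-sink temperature gradient."""
    return expected_gradient * fraction

print(max_temp_change_per_iteration(200.0))   # approximately 20.0
```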

Under Analysis/Solution Parameters/Relaxation Parameter, change the ‘Relaxation Application Flag:’ from “1, Heat Transfer Mode Group” to “2, Node by Node”.  This forces a relaxation parameter computation for each individual node rather than applying a single value to all nodes associated with a given heat transfer mode (advection, radiation, etc.).  In most cases these changes will yield convergence.
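To illustrate the distinction, here is a generic under-relaxation update, not the solver's actual algorithm; the omega values and temperatures are made up:

```python
def relax(old, proposed, omega):
    """Generic under-relaxed update: blend the proposed iterate toward old."""
    return old + omega * (proposed - old)

# Mode-group relaxation: one omega shared by every node in the group.
group = [relax(100.0, 140.0, 0.5), relax(50.0, 90.0, 0.5)]

# Node-by-node relaxation: a separately computed omega for each node
# (the per-node values here are invented for illustration).
per_node = [relax(100.0, 140.0, 0.75), relax(50.0, 90.0, 0.25)]

print(group)      # [120.0, 70.0]
print(per_node)   # [130.0, 60.0]
```

A per-node factor lets strongly advective or radiative nodes be damped harder than well-behaved ones, which is why the node-by-node setting often helps a stalled model.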