Force minimum time step

Hi @Engelhardt and team,

I am wondering if Qspice could incorporate a minimum-timestep parameter, i.e., force the solver to skip any step where the required timestep falls below that minimum and simply move on to the next step.

My enquiry originates from the common convergence issue where the solver forces the timestep down into the picosecond range and the simulation ends up crashing, or, in other cases, manages to converge but produces a very large number of datapoints and runs very slowly.

In my opinion, a few bad datapoints do not necessarily make the whole simulation unreliable.

Often my solution is to throw in Rser wherever I see fit or to set .ic on capacitors. However, this is often non-trivial and requires experience.
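To illustrate the kind of band-aid I mean (node names and values are purely illustrative, not from any particular circuit):

* Small series resistance on a capacitor to damp a stiff node
C1 N001 0 10u Rser=10m
* Pin the node to its expected DC value so the solver does not have to hunt for it
.ic V(N001)=5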

If this is already implemented in some form, please kindly let me know.

Thanks
Arief

Knowing very little of the internals of Qspice, that sounds like a great idea! If the time step is too small, why not skip it? I don’t care if some net is oscillating at a few nanovolts per attosecond; how much does that affect the larger simulation?


Probably, if you forbid the solver from going below an artificial limit, the simulation will crash immediately.

@daddyzaur:

Probably, if you forbid the solver from going below an artificial limit, the simulation will crash immediately.

What makes you think the simulation would crash immediately? In essence, the “timestep too small” error is already a limit on how small the timestep can go, and with that limit many simulations run with no issue. I think part of the problem is that dv/dt or di/dt heads toward infinity because dt is getting so small, so maybe a limit could be set: if dv/dt > some limit (or dt < some limit), the solver just clamps dv/dt for that net at the maximum and emits a warning. This would be similar to adding assorted L, C and R around the circuit to “slow it down”, but applied globally.
Again, I know nothing of SPICE internals, but I have designed a control system or two; maybe it’s similar.
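Just to illustrate the local version of that trick in netlist form (node name and values are purely illustrative, nothing Qspice-specific):

* RC snubber across a fast-switching node SW to limit dv/dt locally
Rsnub SW N002 10
Csnub N002 0 1n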

@314159265 Probably Mike would clarify this immediately if there is some value in doing so. My line of thinking is that, with an artificially high minimum-timestep limit, the simulation will still stop immediately, pretty much like what we get normally with the default solver; you would just get the convergence error much earlier.