Trunc() bug - simulator occasionally ignores trunc() timestep

Hi Mike,

I am running a co-sim with QSPICE and verilator model with timing.

My Trunc() function is set up to force the simulator to hit the exact timeslots required by the Verilator model. If the incoming t is greater than the next Verilator timeslot, then I set *timestep to the required dt based on the t from the last evaluation. Here’s a code snippet.

if (((uint64_t)(t * 1e12)) > (vinst->nextTimeSlot()))
{
    // shorten the step so the next evaluation lands exactly on the Verilator timeslot
    *timestep = vinst->nextTimeSlot() * 1e-12 - (opaque)->t_last;
    printf(">trunc_t = %f: * Shortening timestep: from dt = %f to %f, new t = %f\n",
           t * 1e9, timestep_in * 1e9, *timestep * 1e9,
           ((opaque)->t_last + *timestep) * 1e9);
}

This runs fine for about 2.6ms of simulation time but then, for no apparent reason, the simulator ignores the *timestep that I requested.

See the debug output below for a few timesteps up to and including the bug…

1st number is the proposed t coming into trunc()
2nd number is proposed t minus the last t evaluated.
3rd number is my *timestep requested
4th number is my calculated next evaluation time.

trunc_t = 261255.000000: * Shortening timestep: from dt = 10.000000 to 5.000000, new t = 261250.000000
trunc_t = 261300.000000: * Shortening timestep: from dt = 10.000000 to 5.000000, new t = 261295.000000
trunc_t = 261305.000000: * Shortening timestep: from dt = 10.000000 to 5.000000, new t = 261300.000000
trunc_t = 261350.000000: * Shortening timestep: from dt = 10.000000 to 5.000000, new t = 261345.000000
trunc_t = 261355.000000: * Shortening timestep: from dt = 10.000000 to 5.000000, new t = 261350.000000
trunc_t = 261366.376635: * Shortening timestep: from dt = 9.448432 to 3.071797, new t = 261360.000000
trunc_t = 261375.104877: * Shortening timestep: from dt = 10.000000 to 4.895123, new t = 261370.000000
trunc_t = 261375.181704: * Shortening timestep: from dt = 5.984263 to 0.802559, new t = 261370.000000
trunc_t = 261388.468266: * Shortening timestep: from dt = 9.101943 to 0.633677, new t = 261380.000000
trunc_t = 261398.871480: * Shortening timestep: from dt = 10.000000 to 1.128520, new t = 261390.000000
trunc_t = 261396.771123: * Shortening timestep: from dt = 4.514082 to 2.742959, new t = 261395.000000
trunc_t = 261400.485918: * Shortening timestep: from dt = 5.485918 to 5.000000, new t = 261400.000000
-V{t1,1}@261400.486000: missed 261400.000000
%Error: C:\Users\datkin\source\repos\verilator\install\include\verilated_timing.cpp:84: %Error: Encountered process that should’ve been resumed at an earlier simulation time. Missed a time slot?

There are no other Trunc() functions active in the sim.

Near the bug, the analog simulator is reducing timesteps to follow a falling edge. I would expect the simulator to always respect the smaller *timestep requested by Trunc(). Is this a reasonable expectation?

With thanks

Dale

Hi, Datkin.

If you post the sources and *.qsch, I’ll take a look. Otherwise, all I can do is guess…

–robert

Hi Robert,

The attached demonstrates the issue.
trunc_bug.qsch (1.3 KB)
trunc_bug_x1.cpp (3.8 KB)

After the eval at 125ns, Trunc() is called by the sim engine with t = 130.226ns; my function sets *timestep = 5.00ns, but the simulator ignores this and uses a timestep of 5.226ns.

I couldn’t get Display() to work from within Trunc(). If you want to see the printf output, run from the command line.

Here’s the command line output for reference around 130ns:
X1: @120.795625: Next clock at 125.000000, Actual Timestep = 0.795625
X1: @122.386875: Next clock at 125.000000, Actual Timestep = 1.591250
@125.569375: my_timestep = 2.613125
X1: @125.000000: Next clock at 125.000000, Actual Timestep = 2.613125
@130.226250: my_timestep = 5.000000
X1: **** Trunc Bug! t = 130.226250, t_next = 130.000000 , Actual Timestep = 5.226250****
X1: @131.235000: Next clock at 135.000000, Actual Timestep = 1.008750
X1: @131.365000: Next clock at 135.000000, Actual Timestep = 0.130000

Many thanks
Dale

Thanks, Dale. Downloading now…

–robert

Ok, Dale, let’s make sure that I understand the goal. The original question seemed possibly different, what with the Verilator stuff…

So, you want to ensure that QSpice takes a sample at exactly every 5ns of the simulation clock, right? The timing of that is independent of any input voltages, right?

–robert

Hi Robert,

The example I supplied is a minimal testbench to demonstrate the issue.

My end goal is to run a Verilator simulation that “has timing”, so it wants to execute at discrete time steps of 5 or 10ns. Verilator will tell you when the next timestep is via its nextTimeSlot() function, so what I want to do is use the Trunc() function to let the analog simulator know my desired next timestep.

I want to co-simulate with analog circuits that have active features creating edges asynchronously, so I want the analog simulator to be able to change the timestep to resolve those edges but also to hit the timeslots that Verilator needs.

The C code that I supplied tries to force the simulator to hit time slots exactly every 5ns; this emulates my Verilator simulation. The voltage source on the schematic creates edges every 13ns, emulating asynchronous analog activity. The analog simulator will shorten the timesteps to resolve these edges. I’d expect the Trunc() function to respect the minimum timestep that I request, but it doesn’t always do this.

A workaround is to set the voltage source to pulse at 10ns period, but this will add ~10 points at each edge and so slow the sim down by a factor of 10 which is very undesirable.

With thanks

Dale

Hi, Dale.

OK, I’ve played around with this for longer than I care to admit. Anyway, yes, I’m seeing the same behavior. I don’t know if it’s a bug but maybe.

I’ve attached a *.cpp with additional debugging in hopes that it will be useful. Mainly, it writes the output to a debug.txt file so that QSpice doesn’t suppress any of the output. (QSpice suppresses Display() output when evaluating Trunc() calls by design.)

That said, a couple of observations and suggestions.

To the extent that the C-Block can calculate a next trigger time-point in advance (what your code is doing if I understand correctly), I recommend using MaxExtStepSize() to “sneak up” on the time-point (essentially what you’re trying to do in Trunc()). See C-Block Basics #5 in my QSpice GitHub repo.
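
To sketch what I mean (this isn’t your code, Dale — the struct name and the t_last/nextTrigger members are just placeholders, I’m assuming the evaluation function keeps them updated, and the MaxExtStepSize() signature is the one from the stock QSpice-generated template):

struct sTRUNC_BUG_X1
{
    double t_last;        // time of the last accepted evaluation (updated in the eval function)
    double nextTrigger;   // absolute time of the next required 5ns Verilator slot
};

extern "C" __declspec(dllexport) double MaxExtStepSize(struct sTRUNC_BUG_X1 *inst)
{
    // Never offer the simulator a step that would jump past the next required slot.
    // QSpice may still take a smaller step (e.g. to resolve an analog edge), but it
    // is never proposed one that overshoots the slot, so nothing has to be rejected
    // later in Trunc().
    double remaining = inst->nextTrigger - inst->t_last;
    return (remaining > 0.0) ? remaining : 1e-12;
}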

On the other hand, when the C-Block depends on some unpredictable external voltage change to drive the code, you are forced to use Trunc(). The canonical usage calls the evaluation function to detect a state change and reduces the time-step to some rather small tolerance value around the state change.

Importantly, the canonical usage passes a temporary copy of instance data from Trunc(). I’m not sure how the verilog stuff works but you must ensure that the hypothetical Trunc() calls to the evaluation function don’t alter the verilog state. That is, all of the verilog state must be contained in the per-instance data or somehow remain unchanged during Trunc() evaluations. Again, I’m not a verilog guy – I spent enough time looking at the QSpice demos to decide that the generated code is complicated. It may or may not be safe for use with Trunc(). Do share what you learn.
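
For reference, the canonical pattern looks roughly like this. It’s only a sketch built from the stock QSpice C-Block template; sMYBLOCK, evaluate(), and the out member are placeholder names, not anything from your files.

union uData;                            // defined in the QSpice-generated template

struct sMYBLOCK { double out; /* ... all per-instance state ... */ };

static void evaluate(struct sMYBLOCK *inst, double t, union uData *data)
{
    // compute inst->out (and any other state) from the inputs in data[]
}

extern "C" __declspec(dllexport) void Trunc(struct sMYBLOCK *inst, double t, union uData *data, double *timestep)
{
    const double ttol = 1e-9;           // time tolerance around a state change
    if (*timestep > ttol)
    {
        struct sMYBLOCK tmp = *inst;    // work on a copy; never disturb the real
                                        // instance state during Trunc() evaluations
        evaluate(&tmp, t, data);        // would the proposed step change the output?
        if (tmp.out != inst->out)
            *timestep = ttol;           // yes: shrink the step to the tolerance
    }
}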

Oh, one last thought: QSpice strives to be fast. It does “dark magic” to optimize time-steps. Since your circuit doesn’t output anything that drives output side circuitry, it may simply be deciding that the Trunc() calls don’t matter. Maybe try adding some output circuit to see if that changes things. This is just a guess but is consistent with some oddities that I’ve seen.

Hope this helps.

–robert
trunc_bug_x1.cpp (4.4 KB)

Not sure if this input is useful. In SPICE simulation, you have no control over the timestep. The timestep adjusts dynamically throughout the simulation in SPICE-based simulators. A fixed timestep is generally only found in piecewise-linear simulators.

Regardless of your actions, you cannot force the simulation to run at, for example, exactly 5ns, 10ns, or 15ns as with a fixed step. For discrete simulation, the approach is to execute discrete operations at consistent intervals. Generally, to achieve this timing consistency, Qspice uses a time tolerance (ttol) in Trunc(), which refines the simulation step to a precise enough level before reaching the desired interval time point so that discrete updates can be performed there, rather than forcing time to jump to the next timestep you decided on.

In short, SPICE determines the next timestep based on the simulation’s need to converge on a solution. Your option is to make this step smaller but never larger; that’s why you don’t have precise control over the timestep. SPICE doesn’t know what the next timestep should be until it sees whether convergence can be achieved (in general it has a target maxstep, and if that step converges, it simply takes it). If convergence is possible but you assign a smaller timestep via ttol, it can reduce the step to ttol accordingly. However, if your ttol setting is larger than that, SPICE simply ignores it.

I believe @RDunn has better content than mine for explaining the DLL block on his GitHub.

Here is just an example of how I implement a discrete feature at every 5us interval. My approach involves reducing the tolerance (ttol) when nearing each 5us interval where a change event is anticipated by Trunc().

example.FixedSampling.qsch (2.5 KB)
fixedsampling.cpp (4.1 KB)
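
The core of the approach looks something like this minimal sketch (not the attached file; sSAMPLER and nextSample are placeholder names, and the sampling bookkeeping is simplified):

union uData;                            // defined in the Qspice-generated template

struct sSAMPLER
{
    double nextSample;                  // absolute time of the next 5us event
};

extern "C" __declspec(dllexport) void Trunc(struct sSAMPLER *inst, double t, union uData *data, double *timestep)
{
    const double ttol = 1e-9;           // tight tolerance near each sample point
    // If the proposed step would carry the simulation past the next 5us point,
    // shrink the step to ttol so the simulator creeps up on the event instead of
    // jumping over it.  The evaluation function advances inst->nextSample by 5us
    // once the event has actually been handled.
    if (t >= inst->nextSample && *timestep > ttol)
        *timestep = ttol;
}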

@RDunn , @KSKelvin ,
Thanks for spending time on this.

I’ve done a bit more work on this and the final files are attached.

trunc_bug.qsch (1.3 KB)
trunc_bug_x1.cpp (4.3 KB)

Having looked at it again, I realised I screwed up the pulse source settings on the original, which is now fixed… I’ve incorporated @RDunn's file logging, which surprisingly speeds up the sims quite dramatically.

Hopefully you can now see more clearly how it is nearly able to run mixed-signal: it hits the exact clock edges requested by the C block while also slowing down to resolve the edges from the voltage pulse source. It just occasionally misses the clock. Keep zooming into the waveform and eventually you will see the individual timepoints.

I’m now waiting for a response from Mike E. to confirm whether this is expected behaviour.

With thanks

Dale

OK, Dale, I’ve messed around with the new version. See attached.

I added some circuitry to the schematic. This is an attempt to ensure that QSpice doesn’t “get bored” and fail to trigger calls to the C-Block component. (See the C-Block Basics #5/MaxExtStepSize() document for some minimal/vague details.)

More importantly, see the code and debug.txt (*** err1/2 ***). It’s possible for your Trunc() code to set the timestep to zero or negative. That shouldn’t happen. You might want to look carefully at the t_next calculation.

Personally, I’d do it differently: Save the next trigger time in inst and, each time that you pass it, add the 5ns increment to the next trigger time. That would ensure that rounding errors aren’t the problem.
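
Something along these lines (just a sketch of the bookkeeping; the struct and function names are placeholders):

struct sTRUNC_BUG_X1
{
    double nextTrigger;                 // absolute time of the next clock slot
    // ... rest of the per-instance state ...
};

static void advanceTrigger(struct sTRUNC_BUG_X1 *inst, double t)
{
    const double period = 5e-9;         // fixed 5ns clock period
    // Only ever advance by adding the fixed increment, so repeated conversions
    // such as t * 1e12 can't accumulate rounding error into the t_next calculation.
    while (t >= inst->nextTrigger)
        inst->nextTrigger += period;
}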

Finally, I’ll refer you again to the MaxExtStepSize() paper. While you may be able to increment in specific steps using Trunc(), it is inherently less efficient and this isn’t what it’s designed for. (It’s intended to create a narrow sampling window around component state changes that depend on / are driven by changing inputs.) With Trunc(), QSpice is pre-calculating circuit state for possibly unneeded timesteps. With MaxExtStepSize(), QSpice doesn’t even propose a “too large” timestep, so there are no wasted pre-calculations.

For these reasons, my “rule of thumb” is: If the code can calculate the next desired simulation timepoint without reference to input values, use MaxExtStepSize(). But, of course, that’s my humble opinion and I could be wrong. :wink:

Please continue to share what you learn. I’ll be curious to hear what Mike says.

–robert
trunc_bug_x1.cpp (4.7 KB)
trunc_bug.qsch (2.6 KB)

In general, the Trunc() .cpp examples in Qspice are written to define a meaningful state change and reduce the timestep before this event happens. A simple way to study the nature of Trunc() is to use a Qspice native device with TTOL in its instance parameters.

To demo the nature of TTOL, timectrl=none is used for the pulse source; this instance parameter removes any timestep-control scheme related to the pulse source that would confuse the study. A .option maxstep=1e-7 is used to set a target timestep ceiling, and from the simulation result we can confirm that the timestep is a consistent 1e-7s (i.e. 100ns) during the entire simulation.

Everything is set up, and now we set TTOL for the logic device to 1e-8 to explain Trunc(). Trunc() is a function that looks into the future to determine what timestep should be used for the current step.

At [1], the previous timestep is 100ns, and in Trunc() it checks whether, using this 100ns as the timestep, the output would change its state (that is what the tmp != inst comparison is for); if it would, the timestep is forced to the TTOL value. Therefore, you can see that at [1] it takes 1e-8 as the next step.
At [2], this is the tricky part: the timestep has just been forced to TTOL, so what next? My study shows that Qspice will double the current timestep (2*TTOL = 2e-8) for the next step. Unless the doubled timestep is predicted to cause a state change, it will continue to double until it gets back to the normal timestep ceiling.
At [3], the next doubling would be 4*TTOL = 4e-8; however, this repeats the situation from [1]: using 4e-8 would cause a state change, so the timestep is forced to TTOL = 1e-8 again, and you see it reduced once more.
At [4], the state change has finally been passed and nothing retriggers the timestep back to TTOL, so the timestep follows the times-2 pattern until it reaches the normal timestep ceiling, which is maxstep = 1e-7 in this simulation. The important thing to learn about Trunc() here is that you can decide when to force a timestep down to the TTOL value, but you have no control over how it returns to the target timestep ceiling (in general, defined by maxstep).

By showing you this, what I want to point out is that Trunc() may not be what you think it is. You have the ability to force a smaller timestep, but you have no control over how it returns. This is also the reason that @RDunn suggested you play with MaxExtStepSize() instead of Trunc(). However, Trunc() has an important role in Qspice, and only by knowing how Trunc() works can you properly set things up in it.

Of course, all of this is deduced from the simulation pattern; the same as Robert, we could be wrong, and you are welcome to share anything you learn that differs from this.

Hi All,

Thanks for your help and suggestions. I now have my sims working by using MaxExtStepSize() instead of Trunc().

See attached cpp for a working example. This shows the simulator successfully following the edges of the pulses (emulating some async analog stuff) while also hitting the exact timesteps requested by my C code (emulating some clocked digital).

trunc_bug_x1.cpp (4.2 KB)

I’m still waiting to hear back from Mike, because even though I can see that Trunc() is normally used to throttle the timestep back to a small tolerance value, I may have stumbled across a bug where it occasionally misbehaves and ignores the suggestion.

Thanks

Dale

Hi, Dale.

Glad that you got it sorted. Please let us know what Mike says.

–robert

I got a reply from Mike indicating that this is not a QSPICE bug: as discussed above, the Trunc() function is really about throttling the timestep back to a time tolerance rather than trying to aim for any particular timeslot.

Thanks, Dale.

Apparently Trunc() is merely “a suggestion” and less limiting than I thought / expected. Good to know. Guess that I need to update my unofficial documentation. :wink:

–robert