You already set up the timing so that one PWL follows the other, so simply adding the two sources should do the trick (use a B-source or connect them in series). But it seems you are looking for something else? Keep repeating the V(z) pattern?
Thank you so much for your quick response. I realize I didn’t make my question very clear. I want to generate two waveforms using only 1 source.
The first waveform is:
PWL REPEAT FOR 20 0 0 5µ 0 5.1µ 12 10µ 12 10.1µ 0 ENDREPEAT
The second waveform is:
PWL REPEAT FOR 20 202µ 0 207µ 0 207.1µ 12 215µ 12 215.1µ 0 ENDREPEAT
Is there some way I can run the first and second waveforms consecutively in the same source?
I don’t actually want to have two separate sources and then combine them. If there is no function to do this, I can fall back on two sources, where the first source runs the first waveform starting at 0 and the second source starts the second waveform at 202µ.
You mean you want to generate something like V(z) but with a single source setup?
I don’t have any good idea of how this can be generated with a single source. (One option is to load the entire PWL signal from a file, but creating that entire signal pattern is tedious.)
I made a VCO symbol that allows frequency and duty cycle to be controlled independently. For frequency control, the voltage at the ctrl pin maps linearly onto frequency: a voltage equal to minV gives freq = fmin, and maxV gives freq = fmax.
For duty control, the voltage at the duty pin sets the duty cycle directly, e.g. 0.5 = 50% duty (this may not work if the duty is too close to 0 or 1).
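For reference, the frequency control is just linear interpolation between fmin and fmax. A minimal sketch of the mapping in C++ (the function name and parameters here are placeholders, not my actual symbol’s implementation):

// Hypothetical helper illustrating the linear frequency mapping described above.
// Vctrl = minV gives fmin; Vctrl = maxV gives fmax; values in between interpolate.
double ctrlToFreq(double Vctrl, double minV, double maxV, double fmin, double fmax)
{
   double k = (Vctrl - minV) / (maxV - minV); // normalized control voltage, 0..1
   return fmin + k * (fmax - fmin);           // linear map onto [fmin, fmax]
}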
I didn’t exactly re-create your pattern; this is just an example of how the symbol can be used to generate the pattern in your post.
Thank you so much for your reply. This isn’t quite what I’m looking for. I created the waveform using a microcontroller, and I want to test a circuit with it. I think I can generate the waveform using C++. Could you give me an example of how to create a square waveform in C++? Thank you so much for your support!
This is a C++ example for generating a PWM pattern. I assume that, in practice, you send commands to the microcontroller to specify the desired frequency and duty cycle. In the simulation, I use the input ports “frq” and “duty” to control the output. However, this implementation is almost the same as my previous VCO model, just in C++ code, so I am still unsure whether this is what you are looking for.
In this example, the frequency steps between 250Hz and 1kHz, while the duty cycle increases from 10% to 90% over a 100ms test time.
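As a rough sketch of what the evaluation function looks like (following the standard QSpice C-block template; the port order frq/duty/out and the 12V output level are assumptions, not the exact attached code):

#include <cstdlib>

union uData { bool b; char c; unsigned char uc; short s; unsigned short us; int i;
              unsigned int ui; float f; double d; long long i64;
              unsigned long long ui64; char *str; unsigned char *bytes; };

// DllMain must exist and return 1 for the DLL to load
int __stdcall DllMain(void *module, unsigned int reason, void *reserved) { return 1; }

struct sPWMGEN
{
   int    state;  // current output state (0 = low, 1 = high)
   double t_next; // simulation time of the next state transition
};

extern "C" __declspec(dllexport) void pwmgen(struct sPWMGEN **opaque, double t, union uData *data)
{
   double  frq  = data[0].d; // input: frequency in Hz
   double  duty = data[1].d; // input: duty cycle, 0..1
   double &out  = data[2].d; // output: PWM signal

   if(!*opaque) // first call: allocate and zero the per-instance data
      *opaque = (struct sPWMGEN *)calloc(1, sizeof(struct sPWMGEN));
   struct sPWMGEN *inst = *opaque;

   if(frq > 0 && t >= inst->t_next)
   {  // time for an edge: toggle the state and schedule the next transition
      inst->state = !inst->state;
      double T = 1.0 / frq; // period at the present control inputs
      inst->t_next = t + (inst->state ? duty * T : (1.0 - duty) * T);
   }
   out = inst->state ? 12.0 : 0.0; // 12V high level, matching the waveform in your post
}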
For this type of DLL block, it’s better to include the Trunc() function for TTOL. If you have trouble understanding this code, you can refer to my device guideline and study the Ø-Device section, or visit @RDunn’s GitHub for his QSpice C++ papers: robdunn4/QSpice: QSpice tools, components, symbols, code, etc. (github.com)
Assuming that you haven’t already completed this project, I’ll offer a slightly different approach than @KSKelvin provided.
Kelvin used Trunc() to do the timing. I think that MaxExtStepSize() might be better suited for generating a custom pulse sequence. The advantage is that MaxExtStepSize() doesn’t have as much processing overhead as Trunc().
Conceptually, Trunc() “sneaks up” on the next state transition through multiple step-size reductions. Each time the step size is reduced, QSpice restarts a step-time doubling algorithm. (Kelvin has a paper on the time-step algorithm, I think.) MaxExtStepSize() allows us to limit the step time to exactly what is needed for the next pulse, without extra time-step reductions. Bottom line: it should process faster.
The C-Block Basics #5 paper on my GitHub QSpice repository demonstrates generating a clock pulse with a fixed interval, but it could easily be modified to generate arbitrary pulse patterns by changing the “next clock trigger time” each time a pulse is generated.
This is a very interesting idea that I hadn’t considered before. I modified the code using the MaxExtStepSize() approach. Unlike Trunc(), MaxExtStepSize() doesn’t look forward in time to decide what the current timestep needs to be; therefore, it won’t know when the transition will occur. However, in this case, since I input frequency and duty, I can calculate the minimum state time from them (i.e. 1/frq*duty or 1/frq*(1-duty), whichever is smaller). In this example, I set MaxExtStepSize() to 1/1000 of the minimum state time, i.e. 1000 samples across the shortest state.
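The relevant change is only a few lines. A sketch of the idea (assuming the instance struct caches frq and duty from the input ports; this is not my exact attached code):

extern "C" __declspec(dllexport) double MaxExtStepSize(struct sPWMGEN *inst)
{  // minimum state time is min(duty, 1-duty)/frq; take 1/1000 of it
   double duty = inst->duty; // cached from the duty input port in the evaluation function
   double Tmin = (duty < 0.5 ? duty : 1.0 - duty) / inst->frq;
   return Tmin / 1000.0;     // ~1000 simulation points across the shortest state
}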
In this demonstration, I disabled timestep control in the V-source (timectrl=none), which eliminates other factors that would affect the timestep study. I also have a sub-circuit that monitors the timestep using the state() function, showing the timestep during the simulation.
If a sharper edge is required, we can increase the sampling value in the source code (in MaxExtStepSize()). However, it’s important to note that, as mentioned before, maxstep doesn’t predict the future, so we always need to proceed with caution. Although this approach removes the extra processing that Trunc() introduces, it may increase the total simulation time overall, because the timestep always has to be limited to a lower value.
This is the code with Trunc() for timestep control. Its total elapsed time is shorter than with MaxExtStepSize(): in this example it typically runs the simulation with a timestep of around ~100µs and only reduces the timestep (down to 1ns) when a state changes, providing higher resolution only at each moment of discrete change.
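For reference, the Trunc() portion has roughly this shape (a sketch following the QSpice-generated template pattern, not the exact attached code):

extern "C" __declspec(dllexport) void Trunc(struct sPWMGEN *inst, double t, union uData *data, double *timestep)
{  // limit the proposed timestep only if it would cross a state change
   const double ttol = 1e-9; // 1ns resolution at each discrete change
   if(*timestep > ttol)
   {
      struct sPWMGEN tmp = *inst;       // trial-evaluate on a scratch copy
      struct sPWMGEN *tmpPtr = &tmp;
      const double outSave = data[2].d; // save the output vector
      pwmgen(&tmpPtr, t, data);         // "see the future" at the proposed time point
      if(tmp.state != inst->state)      // would the state change within this step?
         *timestep = ttol;
      data[2].d = outSave;              // restore the output vector
   }
}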
Therefore, it really depends on how people prefer the timestep to behave. If someone prefers maxstep to always be limited to a relatively small and consistent value, Trunc() (or TTOL) does not serve this purpose.
Here is a zoomed-in comparison of how the timestep differs when approaching a state change. MaxExtStepSize() limits the timestep based on the user’s algorithm, but without seeing the future, I cannot see how we could reduce the timestep only at the transition. Trunc() reduces the timestep only at the transition, because it sees the future and makes that decision, and it returns to a much wider step afterward to keep the simulation fast. However, Trunc() produces very aggressive timestep changes throughout the simulation, which is what I meant when I said some people may not like it. Knowing what QSpice does helps the user optimize, as this is just a tradeoff between simulation time and precision for circuits containing switching devices or formulas.
@KSKelvin, your clever and thorough analysis techniques always impress me. But, no, that’s not exactly what I had in mind.
The C-Block Basics #5 code works like this:
In per-instance data, we have:
double next_t; // next clock tick simulation clock time
double incr_t; // simulation time step increment to next clock tick
Somewhere (constant, attribute, etc.) we have a double TTOL which sets the maximum rise/fall time when the output state changes.
In the evaluation function:
We calculate a new incr_t = next_t - t. (If this goes negative, the below fixes that.)
If t >= next_t we:
Change the output state.
Calculate a new next_t. In my example, this is calculated based on a fixed clock frequency. However, it could come from an array of hard-coded values, an external input or attribute, etc. The point is that we can determine when the next tick should occur.
Set incr_t = TTOL to make the very next step short, giving a sharp rising edge. (incr_t was given its default value above; the TTOL value set here lasts only until the next evaluation function call.)
In MaxExtStepSize(), we simply return the saved incr_t (if it is > 0). This should ensure that the next simulation point occurs no later than our desired next clock tick.
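Putting the above together as a minimal sketch (not the actual Basics #5 code; CLK_FREQ, the port layout, and the names are placeholders, and the uData/DllMain boilerplate from the earlier sketch is omitted):

struct sCLKGEN
{
   bool   state;  // current output state
   double next_t; // next clock tick simulation clock time
   double incr_t; // simulation time step increment to next clock tick
};

extern "C" __declspec(dllexport) void clkgen(struct sCLKGEN **opaque, double t, union uData *data)
{
   double &out = data[0].d; // output
   if(!*opaque) *opaque = (struct sCLKGEN *)calloc(1, sizeof(struct sCLKGEN));
   struct sCLKGEN *inst = *opaque;

   const double TTOL     = 1e-9; // maximum rise/fall time on state changes
   const double CLK_FREQ = 1e3;  // fixed here; could be an array, input, attribute, etc.

   inst->incr_t = inst->next_t - t;       // default: step right up to the next tick
   if(t >= inst->next_t)
   {
      inst->state  = !inst->state;        // change the output state
      inst->next_t = t + 0.5 / CLK_FREQ;  // determine when the next tick should occur
      inst->incr_t = TTOL;                // make the very next step short for a sharp edge
   }
   out = inst->state ? 1.0 : 0.0;
}

extern "C" __declspec(dllexport) double MaxExtStepSize(struct sCLKGEN *inst)
{  // never step later than the next scheduled tick
   return inst->incr_t > 0 ? inst->incr_t : 1e308;
}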
Side Note: I think that we could move the above MaxExtStepSize() logic to Trunc() (i.e., return incr_t from Trunc()) and get identical functionality (and without bothering to call the evaluation function). But then QSpice would need to pre-calculate hypothetical values and, if Trunc() reduces the proposed time-step, re-calculate the hypothetical values and call Trunc() again.
Anyway, as always, I could be wrong. I’ll see if I can modify your excellent example so that we have a proper comparison.
Kelvin, here’s my “MaxExtStepSize()” version of your code. I’m uploading my copies of your code & schematic with original filenames and my revised versions with *2.* names so that we can verify that I didn’t accidentally change anything else.
Note that we are solving different problems. You provided a VCO where the outputs are determined by inputs; in that case, Trunc() is the proper solution. However, the OP asked to be able to drive outputs based solely on code. For that purpose, the outputs don’t depend on inputs, and MaxExtStepSize() may be more efficient. The code below could be made more efficient still for code-generated signals (the OP’s request), but I am trying to compare similar code and not change your example more than necessary to demonstrate the efficiency differences.
When I compare versions, the original Trunc() version runs in 0.0772412 seconds. The MaxExtStepSize() version runs substantially faster in 0.0223861 seconds.
Looking at your excellent graphical analysis instrumentation, the timestep plot for the modified code shows that the timestep is only occasionally reduced in the MaxExtStepSize() version compared to the Trunc() version. This would account for the much faster simulation time.
Of course, the outputs are not exactly identical, presumably because the Trunc() version resets the timestep algorithm more frequently. They are very close, though.
As usual, I could be very wrong. (I hate it when that happens.) What do you think?