Is there a global variable that can feed data between different DLLs?

I have a simulation where I need to feed digital data between different Ø-Devices. Is there a way to exchange data between DLLs?
Wiring constantly changing digital signals between DLLs makes the waveform viewer unstable in longer simulations.
I have tried to use a global variable like @KSKelvin has shown in the Device Reference Guide, but I did not succeed in getting that method to work. I assume it only works if there are two instances of the same DLL in the simulation.

Hi, Anssi.

Let’s make sure that I understand the goal:

  • You have two (or more) schematic components (let’s call them X1 and X2).
  • Each component calls a different DLL (let’s call them X1.dll and X2.dll).
  • You need to exchange data directly between the two DLLs during the simulation.

If so, that much is possible. But it probably won’t do what I imagine you want…

You could, for example, create a third DLL. Let’s call that DLL “IPC.dll.” This DLL would expose functions to be called by the existing DLLs to set some value and to fetch that value. That value would be stored in IPC.dll as a global variable.

Each of the existing DLLs would need to dynamically load IPC.dll and resolve the set/fetch function addresses and call them as needed.

That much isn’t really too complicated. Now here’s the catch: QSpice calls the component evaluation functions in X1.dll and X2.dll in an undefined order. Further, QSpice calls them serially – that is, the X1.dll evaluation function call must return before the X2.dll evaluation function gets called (or vice-versa since we cannot guarantee which gets called first).

So, if you want the two DLLs to “chat back and forth” within a single evaluation function call to one of the DLLs, well, I think it could be done but might be far more complicated…

Anyway, I’m just guessing about your goals. Maybe you could describe what you’re trying to do in greater detail and I could think it through a bit more helpfully.

–robert

@RDunn provided an excellent summary, and it is important to understand what you are trying to achieve in order to avoid moving towards an overly complex solution.

I’ll jump in to elaborate on an important aspect of using a DLL (Ø-Device) [this also applies to ¥-, €-, and £-Devices]. For SPICE native devices (such as capacitors, inductors, diodes, etc.), the formulas are all calculated simultaneously at the same timestep. DLL-type devices, however, do not resolve formulas through numerical methods but calculate through a programming language. Consequently, their output is always one step behind their input. For example, if you create a simple DLL block with OUT=IN, the OUT signal must exhibit a one-simulation-step delay compared to the IN signal.

In most scenarios, you may not encounter challenges due to this behavior. However, when exchanging data between these blocks, it must be taken into consideration: it is a mistake to assume that the output of one DLL block feeding into the input of another DLL block is processed at the same timestep.

When writing custom DLLs, consider consolidating actions within a single DLL block unless you are developing a module for reuse in other projects. Working with multiple DLLs is akin to working with multiple processors and requires careful consideration of how to exchange data, with delay becoming a crucial factor based on the specific task at hand.

Yep. The timing gets complicated. I tried to avoid going into it.

The system has two or more processing units with individual time bases. In the real device, the first system’s output is electrically connected to the second system’s input. In simulation, the first DLL calculates its next time event to run, sets MaxExtStepSize accordingly, and ignores other calls to speed up the simulation. The second DLL has its own MaxExtStepSize to clock its calculations, like in the real HW.

AI was kind enough to make a simplified example for me.
For simplicity, test_write.cpp has no time prediction or fixed slew. It outputs PWM to the output pin and to a “global variable”.


test_read.cpp reads the input pin and clocks it to the output pin with an internal clock. It also tries to write the “global variable” to Global_out. As already said, this does not work. How could I make it work? Data transfer does not need to be bidirectional.

gobal variable test.qsch (1.1 KB)
test_read.cpp (2.9 KB)
test_write.cpp (2.7 KB)

The final goal is to have the second DLL run at a small time shift from the first. After initialization I could send the next runtime through the global variable. Then the second DLL could run on a higher clock until it’s done calculating, and afterwards set its next execution time according to the time it was given.

As mentioned, I could put my processing parts into one DLL, but it gets way too complex for me to run different time bases efficiently, and the result could not be reused.

I reviewed your code. Well, it may not be necessary to tamper with the timestep… However, I would like to confirm:

Q1: Do you aim for the first DLL to generate a PWM signal, and the second DLL to receive this PWM but with an added delay (at both rising and falling edges)?
Q2: Is this delay fixed or controllable?
Q3: What is expected to write into Global_out? A number representing something?

Implementing a fixed dead time is straightforward in simulation. However, I presume you are not seeking a native subcircuit solution, is that correct?

Are you trying to have test_read clock at exactly 2us?

If so, you’ll need to modify MaxExtStepSize() to return the difference between the current simulation time and the next 2us increment. As is, it just tells QSpice not to take a step larger than 2us.

See C-Block Basics #5 in my GitHub QSpice repository for an example of a self-clocking component.

–robert

I played around with Trunc() and TTOL and could not get it to behave the way I wanted. So I just guarantee one fast step, and the simulator seems to catch the change and update all the analog stuff faster for a while if needed.
Sorry to confuse you with a meaningless example. I need to limit publishing of the original use case.
Q1: In the simulation, the second DLL should have the data available the next time it is executed after the first DLL. Anyhow, it would only use it for calculations when it is clocked. The example just outputs the same data when clocked.
Q2: The delay is somewhat random in the example. The first DLL outputs when it wants to. The second DLL should have fixed time steps when its output can change. I clock it relative to simulation time, as that is faster than using external clocks. Having several 50–200 MHz pulse voltage sources with fast slopes generates too many unnecessary simulation steps. The example code is clocked relatively slowly.
Q3: In the example, I expect the Global_out pin to be identical to the Output pin, calculated using the global variable instead of the wired signal.

Correct. The second DLL (there are multiple) mimics the FPGA even though it’s written in C. It makes gate signals and calculates lots of stuff based on them and on the main circuit response.

Seems like I messed up the timing in test_write.cpp when I simplified it. To my eyes, test_read.cpp works much the way you described. It can change outputs every 1us.

By the way, I realized I have been assuming that MaxExtStepSize() is device-specific, not simulation-specific. I need to double-check this and remake my timing system if it’s simulation-specific.

Nope, it’s “simulation-global.” That is, the next simulation timestep is limited by QSpice calling all component MaxExtStepSize() (first) and Trunc() methods (second) before calling each component’s evaluation function directly (third). The actual next timepoint may be less than any of the values returned by MaxExtStepSize() and Trunc(). In fact, QSpice may ignore the returned values and take a larger step anyway – the MaxExtStepSize() and Trunc() returned values are merely “suggestions” to QSpice…

–robert

We are getting far from the original question, but this still seems interesting.
In brief testing I could not provoke an incorrect MaxExtStepSize time when two DLLs are changing it each simulation step. This should fail fast if they both directly changed a global value one after another.
I was not aware that QSpice could take a larger step than MaxExtStepSize(). Do you have examples of when this could happen?

Well… This should be an example of how things should be done in your second DLL block.

What is within if (Input != inst->lastInput) is equivalent to an interrupt routine to be executed when the Input state changes.

What is within if (t - inst->Ttoggle > Tshift & !inst->complete) is equivalent to an interrupt routine to be executed when the Output state changes (i.e., when the time elapsed since Ttoggle passes Tshift).

Trunc() is used in a way that, at the moment of the Input or Output state change, it reduces the timestep to ttol (in this case, 1ns).

This allows the simulation to reduce the timestep only at the event moments defined by the user. You can see the timestep automatically recover to the simulation’s maxstep target after each event occurs. Some users, like @physicboy, play with the timestep in different ways to achieve more effective simulation. Personally, I don’t mind if the simulation runs for a slightly longer time; I always follow the simplest usage case.

Well, how Trunc() operates is a complicated topic that we have discussed several times in this forum, in @RDunn’s documents, and in my documents.

gobal variable test.qsch (1.9 KB)
test_read.cpp (2.9 KB)
test_write.cpp (2.7 KB)


A short answer: The value returned from MaxExtStepSize() does not “set a global value.” If two DLLs respond with different values, the smaller will be applied to the next simulation step.

–robert


Hei… I am here.

My reason for playing deeply with these timing controls is that I make simulations for multiphase/multi-converter systems… so the run time can get much longer without such techniques (even with them, the sim time is easily 5 minutes for each run).
Anyway, if you want to go down the rabbit hole of optimizing the timestep with Trunc() and MaxExtStepSize(), you can refer to my GitHub here.

Note: honestly, my latest coding style is already different…

In my opinion, Trunc() lets you try a few timesteps before committing to the one that suits you and moving on, while MaxExtStepSize() commits you directly to that limit.

These last few months I no longer use Trunc() for digital controllers with PWM simulation; all the timing is handled by MaxExtStepSize(). It simplifies the code by doing the timing control in only one function.

However, I will still use Trunc() if I use an analog comparator.


To be honest, I was already heavily influenced by your examples. The time-management idea and use case in my real code are very similar to yours. Each needed “digital” simulation time is pre-calculated. I came to the same conclusion, and I only use MaxExtStepSize() to set the next time event I want. Using Trunc() would easily generate 10x more simulation iterations on each digital signal edge.

In my case the simulation would also be faster if I could feed data between DLLs with no conversion to analog simulation between blocks.

I have started to understand why you are looking to pass data between DLL blocks: if you can do that, the data is processed at the same timestep in all DLLs, unlike in the example I shared, where the second DLL reacts to the first DLL one simulation timestep later.

For the second DLL, it receives a signal with a non-deterministic event. What I mean is that the signal is coming from an external source, and you have no way of knowing when the signal will change state before that event occurs. This is also why Trunc() is required, as it is meant to handle non-deterministic events. However, in the first DLL, this signal is deterministic, and you can predict exactly when the output will change its state. Therefore, if you need the second DLL to react without any extra timesteps inserted, you have to obtain this information from the first DLL.

From my perspective, I would suggest consolidating everything into one DLL. If you can pass the timestep data from the first DLL to the second DLL, having an OUT-to-IN connection seems somewhat meaningless in this scenario: since the second DLL does not respond according to the output of the first DLL, and the first DLL’s output always has a one-simulation-timestep delay, the second DLL actually responds before receiving the signal from the first DLL. Well… I may be wrong; let’s see what @physicboy’s opinion is.

My understanding of @physicboy’s work is that he generally does everything, including clock generation, in a single DLL; that is why everything is deterministic. This kind of simulation has to avoid using any ¥-, €-, £-Device or switch with a TTOL parameter in it, as any of these devices can introduce Trunc() with ttol back into your simulation.

Can you please give us a system-level diagram of what you are trying to achieve?
It may help us have a discussion without confirmation bias.

And @KSKelvin is correct: all of my work has a single DLL for the whole simulation (as big as an interleaved totem-pole PFC + DAB)…

@physicboy this is something provided by @AnssiK . Yes, system level diagram can be helpful.

@AnssiK

Do you understand how to write modular programs in C where the code is separated into multiple .c and .h files and later compiled into a single .dll?

I kinda think that could be your actual problem…

Here is something. I can imagine this will confuse even more.

  • The system is 3-phase, even though it’s simplified here.
  • All control parts have their own clock sources. The clocks are not in sync with each other.
  • The inverters make all computations by themselves.
  • The time tick on the inverter side is constant, as the inverters calculate things like digital filters.
  • Somewhat realistic switch models are needed for the calculations in the inverters.
  • The PWM generator is only used to create a realistic-like main circuit response.

Yep, even though my coding is rusty, I could put this all into one DLL. The funny part is making that multi-clock thing, where each function calculates its next output change time. For sure some modularity will be needed. For reuse and better maintainability I would like to avoid a combined solution and find a way around it. Wasting time simulating the analog behavior of signals between two digital processes does not make sense here, which is what the Ø-Device manual tries to guide you away from.

I assume you are working on a microcontroller… Does “own clock source” refer to the controller clock (at the MHz level) or to its interrupt rate (i.e., the clock rate of the discrete-time control, at the kHz level)?