Can we interleave the two-way ranging between multiple nodes?
For example, suppose there are 3 nodes, A, B, and C. Node A sends a poll message to node B, and shortly afterwards node C also sends a poll message to node B. Node B then sends a response message to node A, then to node C, and so on… This continues for the final message as well.
Will it create any issues in the two-way ranging timestamps/calculations?
Generally you wouldn’t want to do that: the longer the gap between packets, the more impact clock differences will have on your accuracy. And since processing time is generally short in comparison to packet transmission times, interleaving like that doesn’t give you a significant saving in the total time for the two range measurements.
The PANS system, where A sends a broadcast request to B, C, D and E who then reply in turn, works out as a good way to reduce the time taken for multiple ranges, but it does require that they all be measured from the same point.
What does the PANS system mean?
We are using a DW1000-based system.
PANS - Decawave Positioning and Networking Stack
See DWM1001 PANS Release 2 for details.
All Decawave systems are DW1000-based. Even if you aren’t using their firmware you can always use the same techniques.
I’ve used a technique similar to the PANS one as detailed in their user guide but with some tweaks that allowed me under some situations to significantly increase the range measurement rate.
Thanks for the support.
Could you please briefly explain how they manage the multi-node communication in the PANS software? I have a situation as described below.
We are developing a positioning solution for tracking vehicles. In this solution there is no Anchor/Tag concept; each node can communicate with any other node in the vicinity. We may have a maximum of 16 DW1000 devices present at any time that can range with each other, with a ranging interval of 100-500 ms. Each node starts ranging with every other node in this interval, so if 16 nodes are present, Node 1 will periodically range with Nodes 2 to 16 at the above interval.

The problem is that since there is no master node, the ranging is uncontrolled and we can’t assign time slots to the nodes. We have added some random delay to the ranging interval and also used pseudo-CCA for collision detection, but the ranging success rate is very low as the number of nodes increases, and we see a high packet error rate. How can we manage this issue? Is there a way to increase the ranging success rate in this solution?
Does the PANS software address the above-mentioned scenario?
Since I’ve not personally used PANS I’m not sure but I don’t think it will do what you want.
In your system I’d go with a timeslot-by-consensus approach.
You don’t have a master to assign timeslots, but you do know you have a fixed maximum number of nodes. If you don’t mind running at your worst-case update rate even when only half the nodes are switched on, you can hard-code the timeslots.
You know you have a maximum of 16 nodes, so each node needs to make 15 range measurements. Add a one-timeslot transition period per node and you end up with 16 × 16 = 256 timeslots.
I’m assuming each node knows its ID number.
So node 1 gets timeslots 1-15, node 2 gets timeslots 17-31, etc.
Each node powers up assuming the current timeslot is the one just after its own transmission slots, and listens to all packets even if they aren’t addressed to it. If it receives a packet it syncs its internal timeslot count to match the data in the packets it is seeing; otherwise it increments the timeslot count at the expected rate.
This way everyone ends up in sync without a master setting things up. Since exactly when each unit’s timeslot counter ticks over isn’t synchronised, you will normally end up running a tiny bit slower than the intended rate, but not by much. The one idle slot between units means you should never drift far enough out of sync to end up on top of each other.
With 256 timeslots, and a two-way range easily done in 2 ms, you have an update period of 512 ms. Less if you reduce the time taken for the range measurement.
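As a rough illustration of the hard-coded allocation described above, here is a minimal C sketch. The node IDs (1-16), slot numbering starting at 1, and function names are assumptions carried over from the example numbering in the post, not from any real firmware:

```c
#include <stdint.h>

#define NUM_NODES      16
#define SLOTS_PER_NODE 16  /* 15 ranging slots + 1 idle transition slot */
#define TOTAL_SLOTS    (NUM_NODES * SLOTS_PER_NODE)  /* 256 */
#define SLOT_MS        2   /* one two-way range fits in ~2 ms */

/* First timeslot owned by a node (node IDs 1..16, slots numbered from 1). */
static uint16_t first_slot(uint8_t node_id) {
    return (uint16_t)((node_id - 1u) * SLOTS_PER_NODE + 1u);
}

/* Does `slot` (1..256) belong to `node_id`? The 16th slot of each
 * node's block is the idle transition slot and belongs to nobody. */
static int owns_slot(uint8_t node_id, uint16_t slot) {
    uint16_t base = first_slot(node_id);
    return slot >= base && slot < (uint16_t)(base + SLOTS_PER_NODE - 1);
}
```

With these numbers a full cycle is 256 × 2 ms = 512 ms, matching the update period above.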
I was reading this old post and your (semi)sync solution sounds interesting. Do you have any visualization of this approach, or could you clarify it a little more?
I would like to experiment with a masterless setup, as I have tried several ways without getting 100% in sync.
Thanks in advance!
I don’t have anything detailing it other than internal documents that I couldn’t post.
But consider a packet structure that carries, at minimum, a packet type, the sender’s current timeslot number, and the sender and target IDs.
Now as long as:
- the timeslot number is known to be a count from 0 to n
- the length of a timeslot is known
- each tag knows which timeslots it has been allocated
Then as soon as a tag receives any packet, no matter who it was addressed to, it can work out how many timeslots remain until its turn, multiply that by the timeslot duration, and so calculate how long until it should transmit. If each timeslot contains a full two-way range exchange of 4 packets, then by inspecting the packet type part of the message a tag can fairly easily achieve time synchronisation to within ~1/4 of a timeslot. Depending on how deterministic the message times are, and how flexible you are with how the tag firmware tracks the current time/timeslot, you can in theory obtain far more accurate time synchronisation.
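To make the “how long until my turn” arithmetic concrete, here is a hedged C sketch. The struct fields and names are hypothetical (the thread only says packets carry a type and a timeslot number), and it assumes a fixed 2 ms slot with packets heard at the start of their slot:

```c
#include <stdint.h>

#define TOTAL_SLOTS 256
#define SLOT_US     2000u  /* assumed 2 ms timeslot */

/* Hypothetical over-the-air fields; the real packet layout isn't shown
 * in the thread. */
typedef struct {
    uint8_t  type;      /* e.g. poll / response / final */
    uint16_t timeslot;  /* sender's current slot, 0..TOTAL_SLOTS-1 */
} sync_packet_t;

/* Microseconds until `my_first_slot` starts, given an overheard packet.
 * The modulo handles wraparound of the timeslot counter. */
static uint32_t us_until_my_turn(const sync_packet_t *pkt, uint16_t my_first_slot) {
    uint16_t slots_ahead =
        (uint16_t)((my_first_slot + TOTAL_SLOTS - pkt->timeslot) % TOTAL_SLOTS);
    return (uint32_t)slots_ahead * SLOT_US;
}
```

For example, a tag owning slot 17 that overhears a packet sent in slot 10 knows it has 7 slots (14 ms) to wait; the same arithmetic works across the counter wrap.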
We ended up with an interesting issue: each tag had an internal clock that position outputs had to be in sync with, but this clock was not in sync between tags and for a number of reasons couldn’t be modified to be in sync. We ended up managing to get the radio packets and time sharing between multiple tags in sync to within ~50 us without a central master or coordination system. Each tag then tracks how far off its internal free-running clock the radio system is at any point in time and interpolates between range measurements, so that position calculations can be done for the required times rather than the actual radio transmission times.
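The interpolation step can be sketched like this. The struct and function names are illustrative only, assuming range measurements have already been timestamped on the tag’s local clock:

```c
/* A range measurement converted to the tag's own clock. Illustrative
 * names, not from the real firmware. */
typedef struct {
    double t_local;  /* local clock time of the measurement, seconds */
    double range_m;  /* measured distance, metres */
} range_sample_t;

/* Linearly interpolate between two bracketing measurements `a` and `b`
 * (a->t_local < b->t_local) to estimate the range at local time `t`. */
static double range_at(const range_sample_t *a, const range_sample_t *b, double t) {
    double frac = (t - a->t_local) / (b->t_local - a->t_local);
    return a->range_m + frac * (b->range_m - a->range_m);
}
```

Linear interpolation is a reasonable assumption here only because the measurement rate is high relative to vehicle dynamics; anything fancier would need a motion model.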
Thank you for this fast reply and your explanation!
Does this mean that a tag, when it starts, waits for its timeslot to begin transmitting, and listens for other messages in the other slots while waiting for its next turn?
What if, in that case, multiple tags start transmitting because they are not in sync yet? Will there be a collision which needs to be handled with a slot reset or something?
Each tag waits for a period equal to a full cycle of the timeslot counter before first transmitting, which means that by the time it transmits either it will be in sync with the other devices or nothing else is transmitting.
If you powered two devices up at exactly the same time and had nothing else turned on, then in theory they could collide. In this situation, if one of the two is significantly closer to an anchor than the other, the reply from that anchor will include the timeslot number of the closer device, which would clear the problem. If that didn’t happen they could potentially stay in a bad state until either 1) relative clock differences mean they drift out of sync enough to hear each other, or 2) a third device is powered up, at which point they will both sync to it and so fix the issue.
There are tricks you could do to prevent this but we considered it unlikely enough that we didn’t worry about it. You would need two tags in the same location and on the same power switch to cause this with any chance of it being repeatable. That’s not a normal use case for us.
We were aiming for a system that works things out eventually rather than working perfectly all the time.
Similarly if you had two systems out of range of each other that then moved closer together you could get a brief disruption when they first get in range but that will clear and everything will be in sync within one cycle through the timeslot counter which is all we were aiming for.
The higher-level positioning system has to be designed to cope with a certain level of failed measurements and dropouts from the radio system. We take advantage of that fact: a few lost data points in rare situations isn’t a big deal as long as it’s only a brief interruption. The idea is to be simple, low overhead and good enough rather than perfect.
In real world testing we haven’t seen this sort of thing cause any issues. But then our system is only intended for a few tags at a time which does minimise the risks.
Thank you for this explanation. It sounds like a good approach.
Do you have a strategy for debugging these tag setups while developing the slotting/timing algorithms?
For example, measuring the signals, or outputting the current slot indexes per tag and visualizing them with a kind of delay function?
Have a nice day!
Mostly I just wrote the code right to start with.
I did re-purpose some of our old prototype hardware by putting special firmware on it so that all it does is listen for transmissions and give me a live count of packets per second of each type.
When instructed to, it will then log details for the next n packets and output the receive timestamp on its internal clock along with some basic information like the packet type, timeslot number, sender and target IDs, etc. This has to be on demand / in bursts because at full speed the system is sending around 2600 packets per second and my output method is a UART; there is no way I can come close to keeping up while outputting any significant data per packet.
I then have a PC application which controls this unit, instructs it when to log and displays the results together with time delays between packets. This lets me passively monitor the transmissions and verify that everything is talking at the times I expect. I’ve only really needed the data this gave me a couple of times but when it’s been needed it was very helpful.
The same debug firmware also has the ability to log the CIR for the received packets so that I can easily check for weird reflections in different locations without having to interrupt the system operation.