I’m a newcomer to the UWB field (about three months in), and I’m currently working on a TDoA project.
My anchors perform a scan every two seconds, while the tag transmits a packet every second.
Both the anchors and the tag are DWM1001C devices, and each uses its internal crystal as the timestamp source.
Here’s how my system works:
First, a computer transmits the scan command to anchor1 and anchor2 simultaneously via their UART ports.
Each anchor immediately transmits the contents of its system time counter (register file 0x06) to the computer via UART and enters scanning mode.
When the tag’s packet is received (my experimental environment only has one tag), the anchor transmits the RX_STAMP from the receive timestamp register (register file 0x15) to the computer via UART.
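A minimal sketch of these two anchor-side steps, assuming the standard Decawave decadriver is used (the handler names and the send_to_uart() helper are just placeholders of mine, not my actual firmware):

```c
#include <stdint.h>
#include "deca_device_api.h"   /* Decawave DW1000 driver (decadriver) */

/* Hypothetical helper: sends a 40-bit timestamp to the computer over UART. */
extern void send_to_uart(const uint8_t ts[5]);

/* On the "scan" command from the computer: report SYS_TIME (register file 0x06)
 * and enable the receiver. */
void on_scan_command(void)
{
    uint8_t sys_time[5];                  /* SYS_TIME is a 40-bit counter */
    dwt_readsystime(sys_time);            /* read register file 0x06 */
    send_to_uart(sys_time);
    dwt_rxenable(DWT_START_RX_IMMEDIATE); /* open the receiver ("scanning mode") */
}

/* On a good frame reception: report RX_STAMP from register file 0x15. */
void on_rx_good_frame(void)
{
    uint8_t rx_stamp[5];
    dwt_readrxtimestamp(rx_stamp);        /* read RX_STAMP (register file 0x15) */
    send_to_uart(rx_stamp);
}
```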
I understand that this approach is not true anchor time synchronization, and the calculated ToF is also not the true ToF.
Anchor1’s ToF is calculated as (receive timestamp - system timestamp).
When I subtract anchor2’s ToF from anchor1’s ToF and convert the difference to distance using the speed of light, the chart shows a decrease from 2742 meters down to 3 meters (the real-world difference is 1 m) over 33 cycles (approximately 1 minute).
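For reference, a DW1000 timestamp tick corresponds to 1/(128 × 499.2 MHz) ≈ 15.65 ps. Here is a minimal host-side sketch of the calculation described above; the function name, the locally defined constants, and the numbers in main() are my own illustration of the unit conversion, not values from my setup:

```c
#include <stdint.h>
#include <stdio.h>

/* DW1000 timestamps count in units of 1/(128 * 499.2 MHz) ~= 15.65 ps. */
#define DW_TIME_UNITS   (1.0 / (128.0 * 499.2e6))   /* seconds per tick */
#define SPEED_OF_LIGHT  299702547.0                 /* m/s in air */

/* Pseudo-"TDoA" as described above: each anchor reports
 * (RX_STAMP - SYS_TIME) in its own 40-bit clock, the host differences them. */
double tdoa_metres(uint64_t rx1, uint64_t sys1, uint64_t rx2, uint64_t sys2)
{
    /* The 40-bit counter wraps roughly every 17.2 s, so mask after subtracting. */
    const uint64_t mask40 = 0xFFFFFFFFFFULL;
    uint64_t tof1 = (rx1 - sys1) & mask40;
    uint64_t tof2 = (rx2 - sys2) & mask40;
    double diff_ticks = (double)tof1 - (double)tof2;
    return diff_ticks * DW_TIME_UNITS * SPEED_OF_LIGHT;
}

int main(void)
{
    /* Made-up example numbers: a 213-tick difference is roughly 1 m. */
    printf("%.3f m\n", tdoa_metres(1000213, 0, 1000000, 0));
    return 0;
}
```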
I have analyzed my system design from various angles but have not found the cause.
Hope someone can answer my question or give me some advice.
Thank you for taking the time to read my post.
There might be one of two things going on here: 1) UART is quite slow compared to SPI. Even when sending both commands “simultaneously”, it might not be as simultaneous as you want it to be. Be aware that even a slight offset in sending the command results in a very bad distance measurement, as an offset of only about 3.3 x 10^-9 s already puts you 1 metre off.
In this case, your measurements wouldn’t just be off by an offset of X metres, but basically by a random amount every time you measure.
The other possibility is that your measurements are all off by a fixed offset of X metres. (So if you measure six times and get 3.2 m as a result every time, but it’s actually 1.6 m, you could just subtract the 1.6 m from your calculated result, as the offset comes from internal antenna delays.)
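If it turns out to be that second case, the usual way to handle it on the DW1000 is to programme the antenna delay into the chip once per device rather than correcting afterwards. A sketch of that, assuming the decadriver calls; 16436 is only the rough starting value used in many Decawave examples for 64 MHz PRF, and each board really needs its own calibration at a known distance:

```c
#include <stdint.h>
#include "deca_device_api.h"   /* Decawave DW1000 driver */

/* Rough starting value from the Decawave examples; calibrate per board. */
#define ANT_DLY_TICKS  16436u  /* in DW1000 time units (~15.65 ps each) */

void apply_antenna_delay(void)
{
    dwt_setrxantennadelay(ANT_DLY_TICKS);  /* subtracted from the raw RX timestamp */
    dwt_settxantennadelay(ANT_DLY_TICKS);  /* added to the reported TX timestamp */
}
```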
Also, be sure to make your UART connections to the computer exactly the same length.
Just be aware that at the scale you are operating on (signals travelling at close to the speed of light), a few picoseconds can change the results, which means that even a slightly longer cable on the data path could cause offsets in your result.
Thank you for taking the time to read my question and provide a response.
The drift I observed is not random but rather exhibits periodic behavior, as shown in the chart.
In my understanding, this periodicity could come from the varying delay between the scan command (my common time reference) and the tag’s broadcast, since the tag transmits without any central clock. However, that delay should appear on both anchors simultaneously and cancel out in my calculations. That’s why I am puzzled!
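To get a feel for how sensitive that cancellation is, here is a quick calculation with assumed numbers (the crystal mismatch and the interval are guesses for illustration, not measurements from my setup). The delay only cancels perfectly if both anchor crystals tick at exactly the same rate; a few ppm of difference over the roughly one-second gap between the scan command and the tag’s packet is already enormous once multiplied by the speed of light:

```c
#include <stdio.h>

int main(void)
{
    const double c = 299702547.0;        /* speed of light in air, m/s */
    const double interval_s = 1.0;       /* assumed gap between scan command and tag packet */
    const double freq_offset_ppm = 2.0;  /* assumed crystal mismatch between the two anchors */

    /* If anchor2's clock runs 2 ppm faster than anchor1's, its measured
     * (RX_STAMP - SYS_TIME) interval is scaled by (1 + 2e-6), so the
     * difference of the two "ToF" values picks up an error of roughly: */
    double error_m = freq_offset_ppm * 1e-6 * interval_s * c;
    printf("%.0f m of apparent distance error\n", error_m);  /* ~600 m */
    return 0;
}
```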
I am conducting a small-scale test, and the cable I use is only 1 meter long. Could this really introduce errors that are twice as large or even more? I have my doubts, but I will test it as thoroughly as possible.
During my recent development process, I have learned that antenna delay can cause significant errors in my system. Thank you for bringing this to my attention!
Those are some interesting results indeed! Could it be that the periodic behaviour is tied to your algorithm calculating the distances? For example, that it doesn’t allow negative values. I’d try an approach with another device, an Arduino for example. Hook both of your DW1000s up to that Arduino via the same SPI wires. That way, you could transmit a clock reset at exactly the same time (keep in mind that the wires need to be exactly the same length). After that, extract your results via UART.
You can use the computer to trigger the Arduino too, so you get results quickly. That should eliminate any clock differences that appear with your current approach.
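A very rough Arduino-style sketch of that idea; the pin numbers are assumed wiring, and the actual bytes of whatever reset/restart write you choose to send to the DW1000s are left as a placeholder. The point is only that driving both chip-select lines low before the transfer makes both chips clock in the same SPI bytes at the same instant:

```c
#include <Arduino.h>
#include <SPI.h>

const int CS_DW1000_A = 9;   /* chip select for the first DW1000  (assumed wiring) */
const int CS_DW1000_B = 10;  /* chip select for the second DW1000 (assumed wiring) */

/* Hypothetical placeholder: put the actual "clock reset" write here. */
const uint8_t reset_cmd[] = { 0x00 };

void broadcast_to_both(const uint8_t *buf, size_t len)
{
    SPI.beginTransaction(SPISettings(2000000, MSBFIRST, SPI_MODE0));
    digitalWrite(CS_DW1000_A, LOW);   /* select both chips before transferring, */
    digitalWrite(CS_DW1000_B, LOW);   /* so both see the very same SPI bytes    */
    for (size_t i = 0; i < len; i++) {
        SPI.transfer(buf[i]);
    }
    digitalWrite(CS_DW1000_A, HIGH);
    digitalWrite(CS_DW1000_B, HIGH);
    SPI.endTransaction();
}

void setup()
{
    Serial.begin(115200);             /* results still go out over UART */
    pinMode(CS_DW1000_A, OUTPUT);
    pinMode(CS_DW1000_B, OUTPUT);
    digitalWrite(CS_DW1000_A, HIGH);
    digitalWrite(CS_DW1000_B, HIGH);
    SPI.begin();
}

void loop()
{
    if (Serial.read() == 'S') {                        /* trigger from the computer */
        broadcast_to_both(reset_cmd, sizeof reset_cmd);
    }
}
```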