We’re implementing DW1000 ranging starting from the examples at https://github.com/Decawave/dwm1001-examples and we’re a bit confused about how to achieve the kinds of accuracy we see mentioned in the documentation and in other folks’ posts (<10 cm).
The documentation indicates that the lowest 9 bits of DX_TIME are ignored, and that TX_STAMP is simply the sum of TX_RAWST (with the lowest 9 bits zero) and TX_ANTD (a constant). That makes it seem like the effective resolution is only 125 MHz, not 64 GHz, giving a precision of about 2.4 meters, not <10 cm.
Are we misunderstanding how ranging is supposed to work? Are we not properly reading the documentation? Do we need to average over many readings?
There is a lot of documentation on the Decawave web site under the Design Center folder.
If you really want to understand how everything works, then find an EVK1000 evaluation kit and run the simple examples; they will guide you through the most useful features of the DW1000 chip.
This is the “DW1000 Application Programming Interface with STM32F10x Application examples” under the DW1000 section of the Software area.
Or you can port those API examples to the DWM1001 platform (a few hours’ work).
Thanks for the speedy reply!
The referenced Github repository is a port of several of those examples to the nRF52 processor, as used on the DWM1001.
We are seeing quantization-like errors in our results, and we are trying to track down the sources, which led us back to the register descriptions above. If you don’t mind being slightly more explicit: how does one achieve sub-nanosecond accuracy when the timestamps referenced above don’t appear to have that precision? A specific reference would be appreciated.
I think a lot of the confusion comes from the way some of the Decawave examples are written, and from the fact that some of them (we have worked from the TWR ones) only send 32 bits of the timer in the final messages.
Basically, the timers are 40 bits wide (rolling over roughly every 17 seconds), and you can access them via the calls in the API. For timed transmission, the bottom nine bits are dropped, which means you can only transmit on certain boundaries, but that doesn’t really matter because you still know the exact transmit time.
We have achieved consistent measurements below 10 cm, but you do need to calibrate your antenna delay or you will see offsets.
BTW, we have spent about three months working our way through the examples and documentation. This isn’t a chip to be taken lightly; it really is tricky.
The TX timestamp has the full 40-bit precision/resolution. The lowest 9 bits are always zero because the actual transmission starts at exactly those moments; the lower 9 bits are not truncated or rounded, they really are zero.
Did you solve this problem?