I just conducted some TWR tests (DWM1001C) in a large indoor building (exhibition hall).
The ranges between anchors and tags were not correct in some areas (LOS). Instead of the real distance, the received range values were way too short (for instance: 7.5 meters instead of 30 meters). I know that NLOS and multi-path situations may affect the range quality, but it is new to me to get range readings that are consistently shorter than the real distance.
A very similar behavior was already described here 3 years ago, but without any helpful answers.
Below are some simple plots visualizing the described problem. Y-axis: millimetres, X-axis: measurement ID. A range value of '0' indicates that no range was calculated at all due to UWB communication errors or very large timestamp errors.
Figure 1: both UWB nodes were stationary with LOS. The correct distance is around 30 meters, but most readings are around 7.5 meters, or not valid at all.
Figure 3: both UWB nodes were stationary with LOS. The correct distance is around 10.5 meters (red line), but most/all readings are around 34 meters. This, I would say, is a typical reflection issue, because the received range is larger and not shorter.
Does anyone have an explanation for the issue of ranges being too short, and how to fix it (if even possible)? I already looked at the application note documents but did not find much, except that very long reflections (path lengths >200 meters) may be the reason for ranges being too short.
Do you have tools that allow you to look at the CIR in those situations? My guess is that you will still see the correct signal at around index 745, but you will also see noise spikes before that; on the bad measurements these spikes will be above the detection threshold and so be falsely detected as the leading edge. These spikes are caused by very long-lived reflections of the previous pulse in the UWB transmission sequence.
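To make the failure mode concrete, here is a toy sketch of a fixed-threshold first-crossing search over a CIR magnitude array. This is only an illustration of the general idea (the DW1000's actual leading-edge algorithm is more involved and not published); the index 745 and the threshold factor are illustrative values:

```python
# Toy leading-edge search: first CIR index whose magnitude exceeds a
# threshold derived from the noise floor. Illustrative only; not the
# DW1000's actual (unpublished) detection algorithm.

def leading_edge_index(cir_mag, noise_floor, threshold_factor=6.0):
    """Return the first CIR index whose magnitude exceeds the threshold."""
    threshold = noise_floor * threshold_factor
    for i, m in enumerate(cir_mag):
        if m > threshold:
            return i
    return None

# Clean case: only the direct path at index 745 crosses the threshold.
cir = [1.0] * 1024
cir[745] = 50.0          # direct path
assert leading_edge_index(cir, noise_floor=1.0) == 745

# Bad case: a long-lived reflection of the previous pulse leaves a spike
# at index 700; it crosses the threshold first and is falsely taken as
# the leading edge, yielding a range that is too short.
cir[700] = 10.0
assert leading_edge_index(cir, noise_floor=1.0) == 700
```

An early spike of 10 units against a noise floor of 1 is enough to win here, which mirrors the too-short ranges described above.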
You can reduce the impact these have by increasing the threshold value for a pulse to be detected. There are a couple of registers that control this; the exact method by which the threshold is calculated isn't published, and questions about it here have met with silence. However, you can still tune it experimentally by changing values in the registers and seeing what impact they have on the threshold. The downside is that it will reduce your maximum range and make you more susceptible to signal fades producing a range that is too long due to reflections.
The other option is, when receiving a UWB packet, to look at the leading edge index value. If it's before some point (say 730 or 740), discard the measurement as suspect. This means that usable packets may occasionally be lost, and that the bad packets are dropped rather than producing an incorrect range. But it does work very well at removing these problem measurements.
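As a sketch, the gating rule is a one-liner; the 740 cutoff below is just an assumed value close to the typical ~745 index mentioned earlier, to be tuned per setup:

```python
# Gate measurements on the leading-edge index. The cutoff is an assumed
# value near the typical ~745 first-path position; tune it for your setup.

FP_INDEX_CUTOFF = 740

def accept_measurement(fp_index):
    """Reject packets whose leading edge sits suspiciously early in the CIR."""
    return fp_index >= FP_INDEX_CUTOFF

# Usage: keep only the plausible measurements from a batch.
measurements = [745.2, 700.1, 744.8]
good = [m for m in measurements if accept_measurement(m)]
assert good == [745.2, 744.8]
```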
We ended up going for a combination of the two methods.
Not right now, unfortunately, and I am not sure when or if a second test session in this building will be possible. But I will keep this in mind.
Any chance you know how long a reflection may persist? If it is not too long, slowing down the transmission intervals might also help?
I tried using the recommended parameters from the application note for LOS (higher threshold) and NLOS (lower threshold), both producing problems. Fair enough, I may increase the threshold even more. But other tests showed that we can cover larger distances only with the NLOS parameters (low threshold), and that is a requirement.
This is something I will have to try. Up to now we are calculating a confidence level based on the difference of first path and peak as well as looking at magnitude values. I do not extract the accumulator buffer, to save time (it would slow down ranging too much). Anyway, up to now this calculated confidence level does not correlate very well with the quality of the measured range.
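For reference, the usual app-note heuristic for this kind of confidence metric compares estimated first-path power against total receive power: a small difference suggests LOS, a large one suggests an obstructed or reflected first path. The 6 dB / 10 dB thresholds below are the commonly quoted rules of thumb from the Decawave application notes, not calibrated values, and the 0..1 mapping is my own illustration:

```python
# Crude LOS confidence from first-path vs. total receive power (dBm).
# Thresholds (6 dB / 10 dB) are the rule-of-thumb values from the app
# notes; the linear 0..1 mapping in between is an illustrative choice.

def los_confidence(fp_power_dbm, rx_power_dbm):
    """Map the power difference to a rough 0..1 LOS confidence."""
    diff = rx_power_dbm - fp_power_dbm
    if diff < 6.0:
        return 1.0                  # likely line of sight
    if diff > 10.0:
        return 0.0                  # likely NLOS / reflected first path
    return (10.0 - diff) / 4.0      # linear blend in between

assert los_confidence(-82.0, -80.0) == 1.0   # 2 dB gap: confident LOS
assert los_confidence(-95.0, -80.0) == 0.0   # 15 dB gap: likely NLOS
```

As noted above, though, a metric like this filters only the obvious cases; it is not a reliable predictor of range quality on its own.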
Regarding TWR: would I need to check this leading edge value for each and every UWB message (TWR needs 2 or 3 messages in total to calculate a distance)? Or would looking at only the last message be enough (again: trying to keep the ranging as fast as possible)?
It’s not the time between packets, it’s the nature of the data within the packet. UWB packets consist of a series of very short pulses of radio energy which the receiver then correlates to detect and decode the message. The system gets confused between the start of pulse 2 and the reflection of pulse 1.
We check the value of FP_INDEX; since you need to read RX_STAMP anyway, this is a case of extending a 40-bit read to a 56-bit read, a fairly minimal performance impact. We check every packet and simply ignore them if they fail the test; since the system needs to be able to cope with dropped packets anyway, it doesn’t add any extra special-case handling. We can do this and still maintain our 2400 TWR measurements/second rate.
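A sketch of how that extended read can be decoded, assuming the RX_TIME layout from the DW1000 user manual (bytes 0-4 hold the 40-bit RX_STAMP, bytes 5-6 hold FP_INDEX as a 10.6 fixed-point value); this is illustrative parsing code under that assumption, not the poster's actual implementation:

```python
# Decode a 7-byte (56-bit) RX_TIME read into RX_STAMP and FP_INDEX.
# Assumed layout (DW1000 user manual): bytes 0-4 little-endian 40-bit
# RX_STAMP, bytes 5-6 FP_INDEX as 10.6 fixed point (divide by 64).

def parse_rx_time(buf7):
    """Split a 7-byte little-endian RX_TIME read into (rx_stamp, fp_index)."""
    rx_stamp = int.from_bytes(buf7[0:5], "little")   # 40-bit RX timestamp
    fp_raw = int.from_bytes(buf7[5:7], "little")     # 10.6 fixed point
    return rx_stamp, fp_raw / 64.0                   # index in samples

# Example: timestamp 0x0123456789, first path at index 745.25.
buf = (0x0123456789).to_bytes(5, "little") + int(745.25 * 64).to_bytes(2, "little")
rx_stamp, fp_index = parse_rx_time(buf)
assert rx_stamp == 0x0123456789
assert fp_index == 745.25
assert fp_index >= 740          # passes the early-edge sanity check
```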
I agree, coming up with a good, range-independent measurement quality indicator without having to read dozens of registers is very tricky. We have a measure we use based on a couple of values, but it's not great; it'll help filter out a few obvious errors but that's about it. We ended up putting a lot more emphasis on error detection at the position calculation stage rather than the measurement stage.
Sounds good, I will definitely try checking the FP_INDEX for each and every packet. 2400 measurements/sec is not bad - congrats. I am at max. 1800-2100 measurements/sec with the DWM1001C, depending on the setup.
Same here. Basically it is an iterative process: looking at the previous positions to validate the current ranges (as long as the initial position is correct, i.e. has a high confidence level) and using only a subset of them to solve for the current position.
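A minimal sketch of that gating step, assuming 2D anchor coordinates and an illustrative 0.5 m gate (both made up for the example): each new range is checked against the distance implied by the previous trusted position, and only the consistent subset is passed on to the solver.

```python
# Gate incoming ranges against the previous position estimate.
# Anchor layout and the 0.5 m gate are illustrative assumptions.
import math

def gate_ranges(prev_pos, anchors, ranges, gate_m=0.5):
    """Keep only ranges consistent with the previous position estimate."""
    kept = []
    for (ax, ay), r in zip(anchors, ranges):
        predicted = math.hypot(prev_pos[0] - ax, prev_pos[1] - ay)
        if abs(r - predicted) <= gate_m:
            kept.append(((ax, ay), r))
    return kept

anchors = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]
prev_pos = (10.0, 10.0)
ranges = [14.14, 22.36, 7.5]        # last range is a too-short outlier
kept = gate_ranges(prev_pos, anchors, ranges)
assert len(kept) == 2               # the 7.5 m outlier is rejected
```

This obviously depends on the previous position being trustworthy, which is why the confidence level on the initial fix matters so much.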
Saw this one earlier, I just didn't think that reflections from objects 150-200 meters away were the reason for this. The indoor building would have been large enough for this effect, though.
It always depends on the setup. We need to be running a minimum of 8 anchors to get that rate; anything less and the rate drops. And it's always a trade-off with range: I ended up with a 20% rate decrease when we went from 6.8 Mb/s to 850 kb/s in order to get more range. Most of the time we could get away with the higher data rate and shorter range, but not always. And supporting both means two different modes to get through approvals testing, with all the costs and paperwork that involves. Far easier to run in the lower-rate, longer-range mode all the time and live with the performance hit.