I am working on a project where I want to use the received frame quality to determine whether a particular ranging measurement is likely to be a good-quality first-path LOS measurement. I read section 4.7, Assessing the quality of reception and the RX timestamp. The part that confused me, however, is the use of the STD_NOISE value.
The description says it is the standard deviation of the noise. How do we use this value? If the average magnitude of the noise were reported instead, it could be compared directly with FP_AMPL2 (by taking a ratio akin to an SNR). Has anyone used this value? Is it really the standard deviation of the noise (i.e., of all CIR taps before the first path)?
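For concreteness, here is a minimal sketch of how one might use STD_NOISE as a noise-floor proxy and form a first-path-amplitude-to-noise ratio as a rough quality figure. The register readings below are made-up example numbers, and this ratio is only a heuristic indicator, not a calibrated SNR:

```python
import math

def fp_quality_ratio(fp_ampl2: int, std_noise: int) -> float:
    """Rough quality figure: first-path amplitude (FP_AMPL2) divided by
    the reported noise standard deviation (STD_NOISE). This is only a
    heuristic indicator, not a calibrated signal-to-noise ratio."""
    if std_noise == 0:
        return float("inf")
    return fp_ampl2 / std_noise

# Hypothetical 16-bit readings as reported in the RX_FQUAL register:
fp_ampl2 = 12000   # made-up FP_AMPL2 value
std_noise = 1500   # made-up STD_NOISE value

ratio = fp_quality_ratio(fp_ampl2, std_noise)
ratio_db = 20 * math.log10(ratio)  # amplitude ratio expressed in dB
print(f"FP_AMPL2/STD_NOISE = {ratio:.1f} ({ratio_db:.1f} dB)")
```

One would then threshold this ratio empirically per antenna/channel configuration; the raw values are uncalibrated, so any threshold has to come from your own measurements.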
Edit: Is the term “standard deviation” used synonymously with “root mean square” in the documentation? That would make perfect sense for zero-mean noise.
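For zero-mean data the two do coincide: the population standard deviation is sqrt(E[x²] − E[x]²), which reduces to the RMS sqrt(E[x²]) when E[x] = 0. A quick numerical check (plain Python, nothing DW1000-specific):

```python
import math
import random

random.seed(42)
# Zero-mean, noise-like samples
x = [random.gauss(0.0, 3.0) for _ in range(100_000)]

mean = sum(x) / len(x)
rms = math.sqrt(sum(v * v for v in x) / len(x))
# Population standard deviation
std = math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))

# In general std^2 = rms^2 - mean^2, so std == rms exactly when mean == 0.
print(f"mean={mean:.4f}  rms={rms:.4f}  std={std:.4f}")
assert abs(std - math.sqrt(rms**2 - mean**2)) < 1e-9
```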
As far as I know, yes, it is the standard deviation of the noise (someone from Decawave please correct me if I am wrong), i.e., of the CIR taps before the first path. But you can easily check that yourself.
No, why should “standard deviation” be the same as “root mean square”?
And be careful about calling FP_AMPL2 / STD_NOISE the SNR; that is not correct.
Hi, I’ve also read the APS006 Part 3 document and I agree it is very useful. Nevertheless, I have a question: with this document we can, roughly speaking, tell whether a ranging exchange is LOS or NLOS, but how can I compute an SNR measurement related to, e.g., electromagnetic noise, regardless of the LOS/NLOS scenario?