I am currently researching the use of the DWM1000 for an indoor TOF-based localization system for drone swarm-formation research at the Technical University of Denmark.
First up, do any of you have input on how to get the best accuracy from our localization? Our initial research points towards a cost-referenced particle filter, but is this the best way of doing it?
Secondly, do any of you have an example or some resources on how to implement, e.g., a cost-referenced particle filter or an extended Kalman filter for trilateration estimates? Either as MATLAB or C code?
Third, reading around on this Google Group I can see a lot of posts about nonlinearity of the TOF as a function of, e.g., RX power and antenna orientation. Could someone explain a little about this? In my “narrow-minded” world, TOF will always be linear, no matter the power or the orientation, as it is merely the time from antenna to antenna?
I have written an EKF in C++ for use with a differential-drive mobile robot. Wheel encoders and an IMU were also used in the filter. The code can be found here at my GitHub account. Please let me know if you have any questions or would like to contribute.
If you are interested in just plain old trilateration algorithms, I have written one in both Python and MATLAB that performs nonlinear regression as described in this Stack Exchange answer. Let me know if you are interested and I can post them for you.
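For anyone who wants the general idea without waiting for the posted code: nonlinear-regression trilateration can be sketched as a small Gauss-Newton iteration on the range residuals. This is an illustrative sketch, not the poster's actual implementation; it assumes known 2D anchor positions and an unweighted cost.

```python
import numpy as np

def trilaterate(anchors, ranges, iters=20):
    """Estimate a 2D position from anchor positions and measured ranges
    by Gauss-Newton iteration on the range residuals."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p = anchors.mean(axis=0)            # centroid as initial guess
    for _ in range(iters):
        diffs = p - anchors             # (n, 2)
        dists = np.linalg.norm(diffs, axis=1)
        J = diffs / dists[:, None]      # Jacobian of |p - a_i| w.r.t. p
        r = dists - ranges              # range residuals
        p = p - np.linalg.solve(J.T @ J, J.T @ r)
    return p

# Synthetic check: four anchors, noiseless ranges to a known point.
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
true_pos = np.array([1.2, 3.4])
ranges = np.linalg.norm(np.asarray(anchors) - true_pos, axis=1)
est = trilaterate(anchors, ranges)
```

Weighting (as the Stack Exchange answer proposes) would simply scale each residual by the inverse of its expected noise before the solve.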
Do you see any noticeable difference between a nonlinear least-squares implementation and an EKF implementation if you only look at the Decawave system? No IMU, encoders, etc.
I ask because I also implemented the nonlinear regression in MATLAB, but when I look at the absolute positions they are off by quite a lot. I did not weight the nonlinear system as the Stack Exchange answer proposes; does that change a lot?
If I put the tag on a small robot and let it drive, e.g., a 2x2 meter square 10 times, the resulting position estimates form 10 nearly identical “squares”, but they are only about 1.7x1.7 meters and not truly square. So the results are extremely reproducible, but not very accurate.
The problem you describe sounds like a calibration issue. I don’t think an EKF would help with that, unless you added each Decawave measurement bias as a “state” of the EKF. The EKF would then also estimate the bias on each measurement, effectively correcting it. You could also just take a bunch of measurements and calculate the bias yourself. But that all sounds horribly inefficient compared to just calibrating properly.
Unfortunately, I cannot help you with calibration. I used the TREK1000 modules, which already take care of all that for you.
P.S. Like you, I did not weight anything in the nonlinear regression algorithm.
Oh, I forgot to mention: the main difference is that the EKF also estimates heading, velocity, and angular velocity by fusing many different sources, including wheel encoders, an IMU, and a kinematic model.
Just a comment on the topic – maybe someone will find this useful.
We tried different algorithms for single-point (stateless) localization (some of this can be found in our short paper from IPIN 2015). We used various linear and nonlinear approaches, but also geometric algorithms (e.g., Geo-N), and we found that all of them are quite “sensitive” to errors in the distance measurements. I would say Geo-N performed quite well considering its low complexity.
Performance depended strongly on the shape of the area, which in turn affected the deployment of the anchors.
The tests we did with 2.4 GHz chirp-based devices (which in ideal conditions, say outdoors, measure distance with approx. 1 m accuracy, and are quite susceptible to reflections) showed that if the tests are done in a narrow but long corridor with anchors deployed far from each other, then accuracy along the corridor is good but horrible across it. In larger areas we found that when the anchors are deployed in a square-like grid, accuracy in the different directions is comparable.
The above observations are quite easy to explain when linear least-squares algorithms are used for lateration, but we observed similar behavior for nonlinear methods, and we did quite a lot of experiments in different areas with reproducible results.
To my understanding this is an issue with all stateless approaches. In contrast, an EKF (or particle filter), which keeps track of the “state”, can improve accuracy over time given additional input information (e.g., acceleration, direction of movement, etc.).
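The corridor-vs-grid effect can be seen directly in the least-squares geometry: with unit range-error variance, the per-axis error amplification is the square root of the diagonal of (HᵀH)⁻¹, where the rows of H are the unit vectors from the tag to each anchor. A small illustration (the anchor layouts below are made up, not from our experiments):

```python
import numpy as np

def range_dop(anchors, pos):
    """Per-axis error amplification of least-squares lateration:
    sqrt of the diagonal of (H^T H)^{-1}, where the rows of H are the
    unit vectors from the position to each anchor (unit range-error
    variance assumed)."""
    diffs = np.asarray(anchors, dtype=float) - pos
    H = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    return np.sqrt(np.diag(np.linalg.inv(H.T @ H)))

# Corridor: anchors almost collinear along x.
corridor = [(0, 0), (10, 0.5), (20, 0), (30, 0.5)]
dop_corridor = range_dop(corridor, np.array([15.0, 0.25]))

# Square grid: anchors surround the tag.
square = [(0, 0), (10, 0), (0, 10), (10, 10)]
dop_square = range_dop(square, np.array([5.0, 5.0]))

print(dop_corridor)   # across-corridor (y) error many times the along (x) error
print(dop_square)     # comparable error in x and y
```

In the corridor all the unit vectors point almost along the corridor axis, so HᵀH is nearly singular in the cross direction and small range errors blow up across the corridor, which matches what we measured.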
I was wondering if you had any insight/results to share regarding localization? I, too, am working with a particle filter but have trouble with the varying ranging bias. Did you also have problems with that, and did you find a solution?
A method that uses all the measured ranges and is relatively easy to implement is a particle filter. It is not the most computationally efficient algorithm, but it is very tolerant of some anchors missing blinks.
The idea is to start with a large number of guesses at the location (the guesses are particles). Each particle contains location and velocity estimates.
Then for each tag blink:
- Update and dither the particles based on the time of the blink.
- Compute the likelihood of each particle.
- Resample the particles based on their likelihood: unlikely particles are removed and likely ones copied.
The resulting cloud of particles will be clustered around the likely tag location.
The location estimate can be the most likely particle, a weighted mean, etc.
The particle cloud can then be used as the initial cloud for the next blink.
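The steps above can be sketched in a few dozen lines of Python; the anchor layout, process-noise and likelihood parameters below are illustrative assumptions, not values from this thread:

```python
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)])
N = 2000
SIGMA_R = 0.10                  # assumed ranging noise std in meters

# Particles: rows of [x, y, vx, vy], spread over the whole area initially.
particles = np.empty((N, 4))
particles[:, :2] = rng.uniform(0.0, 5.0, (N, 2))
particles[:, 2:] = rng.normal(0.0, 0.1, (N, 2))

def blink_update(particles, ranges, dt):
    # 1. Update and dither: propagate by velocity, add process noise.
    particles[:, :2] += particles[:, 2:] * dt + rng.normal(0, 0.05, (N, 2))
    particles[:, 2:] += rng.normal(0, 0.05, (N, 2))
    # 2. Likelihood: Gaussian in the range residuals to every anchor.
    d = np.linalg.norm(particles[:, None, :2] - anchors[None], axis=2)
    logw = -0.5 * np.sum(((d - ranges) / SIGMA_R) ** 2, axis=1)
    w = np.exp(logw - logw.max())   # shift for numerical stability
    w /= w.sum()
    # 3. Resample: likely particles are copied, unlikely ones dropped.
    return particles[rng.choice(N, N, p=w)]

# Simulated stationary tag; the estimate is the mean of the particle cloud.
true_pos = np.array([1.5, 3.0])
for _ in range(30):
    ranges = np.linalg.norm(anchors - true_pos, axis=1) \
             + rng.normal(0, SIGMA_R, 4)
    particles = blink_update(particles, ranges, dt=0.1)
est = particles[:, :2].mean(axis=0)
```

A missing blink from an anchor is handled by simply leaving that anchor's term out of the likelihood sum for that update, which is what makes the approach tolerant of dropped ranges.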