DW3000 Indoor Positioning Calibration

Hello everyone,

I’m currently collaborating with a few team members on a Senior Capstone Project aimed at developing an indoor positioning system. This system is designed to emulate collision avoidance mechanisms for automated vehicles. Our setup involves using five ESP32 devices equipped with the DW3000, configured in a specific manner: three of these devices serve as stationary anchors, while the remaining two are utilized as tags. These tags are intended to be either mounted on vehicles or attached to individuals.

The core of our project is already in place, and the initial program setup is complete. However, we’ve encountered a challenge with the accuracy of distance measurements. The distances recorded by the system seem to be imprecise, and we’re exploring ways to refine this.

Our specific inquiry is about calibrating the devices, i.e., how to accurately determine the RX_ANT_DLY and TX_ANT_DLY settings. We’re trying to find a solution that goes beyond simple trial-and-error or brute-force methods. While we have gone through the APS014 application note, which covers antenna delay calibration for the DW1000, we’re at a bit of a loss as to how to apply those principles to the DW3000.

We would greatly appreciate any guidance, tips, or shared experiences related to this issue. Additionally, if anyone has relevant resources or examples, especially concerning the DW3000, it would be incredibly helpful.

Here’s the GitHub link to the DW3000 library.

And here is the example code we have been working from.


All of the procedures for the DW1000 are equally applicable to the DW3000.

Personally, the approach I took was to accurately survey the locations of each device, both tags and anchors.
I then measured from each device to every other device, including anchor to anchor and tag to tag (this does assume line of sight). In practice I measured each pair a few hundred times and took the average. If you find your data has lots of noise spikes, consider using the median rather than the mean.
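To illustrate that last point (this snippet is my addition, with made-up numbers, not data from the original post): a small fraction of large positive outliers, such as a reflected path being ranged instead of the direct one, drags the mean upward while barely moving the median:

```python
import numpy as np

rng = np.random.default_rng(42)

# simulate a few hundred ranges to one pair: true range 10.0 m,
# 5 cm gaussian noise, plus occasional large positive spikes
true_range = 10.0
ranges = true_range + rng.normal(0.0, 0.05, 300)
spike_idx = rng.choice(300, size=15, replace=False)
ranges[spike_idx] += rng.uniform(1.0, 3.0, 15)  # 5% spikes of 1-3 m

mean_est = np.mean(ranges)      # biased upward by the spikes
median_est = np.median(ranges)  # largely unaffected

print(f"mean:   {mean_est:.3f} m")
print(f"median: {median_est:.3f} m")
```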

With 5 devices this gives you 10 (4+3+2+1) measurements to solve 5 unknowns (the antenna delays).

This is all done as part of a configuration app now that automates the whole process, but initially I would throw all of these numbers at a Python script that performs a least-squares optimisation to find the antenna delays that best match the measured data.

I assume the Rx and Tx delays are equal. This is approximately true unless you have amplifiers or RF switches in the signal path. And since two-way ranging involves a one-to-one ratio between transmits and receives, any error in splitting the value between the two will tend to cancel out anyway.
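A quick way to convince yourself of that cancellation (my sketch, not from the original post; the 515 ps total delay is an illustrative value): in single-sided TWR the range estimate depends only on the sum tx + rx at each device, so how you split that sum between the two registers does not matter:

```python
# Single-sided TWR: A sends, B replies after t_reply, A receives.
# A TX timestamp is taken before the signal actually leaves the antenna
# (by tx_delay); an RX timestamp is taken after it actually arrives
# (by rx_delay). All times in seconds.

def ss_twr_tof(tof_true, t_reply, tx_a, rx_a, tx_b, rx_b):
    t1 = 0.0                          # A's TX timestamp
    t2 = tx_a + tof_true + rx_b       # B's RX timestamp
    t3 = t2 + t_reply                 # B's TX timestamp
    t4 = t3 + tx_b + tof_true + rx_a  # A's RX timestamp
    t_round = t4 - t1
    return (t_round - (t3 - t2)) / 2.0  # estimated time of flight

tof = 33.36e-9                       # roughly 10 m
reply = 500e-6
total_a, total_b = 515e-12, 515e-12  # total (tx + rx) delay per device

# split 50/50 vs split 90/10: the estimate is the same up to rounding
even = ss_twr_tof(tof, reply, total_a / 2, total_a / 2,
                  total_b / 2, total_b / 2)
skew = ss_twr_tof(tof, reply, 0.9 * total_a, 0.1 * total_a,
                  0.9 * total_b, 0.1 * total_b)
print(even, skew)
```

Working through the algebra, the estimate comes out as tof + (tx_a + rx_a + tx_b + rx_b) / 2, which is why only the per-device totals need calibrating.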

If you search this forum you will find a number of other approaches and alternative methods to get similar results.

Python code - not a language I’m that familiar with, so there is probably a cleaner/simpler way to do this, but it gets the job done.

import math
import numpy as np
from scipy.optimize import least_squares

AnchorCount = 7

# array of actual antenna locations in x/y, in meters
# (placeholder values - replace with your own surveyed coordinates)
locations = np.array([[ 0.0,  0.0],
                      [20.0,  0.0],
                      [20.0, 15.0],
                      [ 0.0, 15.0],
                      [10.0,  7.5],
                      [ 5.0,  3.0],
                      [15.0, 12.0]])

# array of antenna to antenna range measurements in pair order, e.g.
# 1 to 2, 1 to 3, 1 to 4, 1 to 5, 1 to 6, 1 to 7,
# 2 to 3, 2 to 4, 2 to 5, 2 to 6, 2 to 7, ...
# One entry per pair (AnchorCount*(AnchorCount-1)/2 in total); enter 0
# for any pair you could not measure and it will be skipped below.
measuredRanges = np.array([119.1093, 124.9640, 76.1164, 88.7342, 82.8843, 93.7682,
                           97.0731, 95.4066, 112.9805,
                           112.0142, 95.0258,
                           97.1988])  # 6 to 7

def pythag2d(x, y):
  return math.sqrt(x * x + y * y)

def rangeBetweenAnchors(a1, a2):
  return pythag2d(a1[0] - a2[0], a1[1] - a2[1])

def calcRangeErrors(delays, *args):
  # one residual per measured pair: (true range + both antenna delays)
  # minus the measured range, all in meters
  residuals = []
  measurementCount = 0
  for anchor1 in range(0, AnchorCount - 1):
    for anchor2 in range(anchor1 + 1, AnchorCount):
      if measuredRanges[measurementCount] > 0:
        expectedValue = (rangeBetweenAnchors(locations[anchor1], locations[anchor2])
                         + delays[anchor1] + delays[anchor2])
        residuals.append(expectedValue - measuredRanges[measurementCount])
      measurementCount += 1
  return residuals

def doLeastSqr():
  initial = np.zeros(AnchorCount)
  results = least_squares(calcRangeErrors, initial, jac='3-point', ftol=0.001)
  print("delays are:")
  print(results.x[0:AnchorCount])

if __name__ == '__main__':
  doLeastSqr()
  print("Done")

Note - this gives the antenna delays in meters. Divide by the speed of light to get the delay in seconds, and then divide by the DW clock period (1/(128 * 499.2 MHz)) to get the delay in Decawave clock ticks.
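That conversion as a quick sketch (my addition; the device time base is the standard 128 × 499.2 MHz one, and the speed-of-light constant matches the value used in the vendor example code):

```python
# convert a calibrated antenna delay from a distance error in meters
# to DW device time units
SPEED_OF_LIGHT = 299702547.0          # m/s, value used in DW example code
TICK_SECONDS = 1.0 / (128 * 499.2e6)  # ~15.65 ps per device time unit

def meters_to_ticks(delay_m):
    delay_s = delay_m / SPEED_OF_LIGHT
    return delay_s / TICK_SECONDS

# e.g. a 0.30 m range error is roughly 64 device time units
print(round(meters_to_ticks(0.30)))
```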
I always dealt with antenna delays as the distance error they cause rather than as a time delay; I find it more intuitive and relatable that way. I then let the code convert the value to time or clock ticks as needed.
Plus, in some situations I don’t worry about the range measurements being wrong. Instead I measure with poorly calibrated antennas and then correct for the antenna delays when performing the position calculation; in that situation, having them in meters is easier. The advantage of this is that while the raw ranges may be wrong, you only need to apply the antenna delay corrections in one place: wherever you perform the position calculation. Whether it works out easier for each device to know its own error, or for a single device to know all of the errors, depends on how you plan to set things up.
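If you want to sanity-check the least-squares approach before trusting it on real hardware, here is a self-contained version of the same idea (my sketch, with fabricated positions and delays): generate pairwise "measurements" from known delays plus noise, then confirm the optimiser recovers those delays:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# made-up surveyed anchor positions (meters) and true delays (meters)
locations = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 15.0],
                      [0.0, 15.0], [10.0, 7.5]])
n = len(locations)
true_delays = np.array([0.42, 0.35, 0.51, 0.38, 0.47])

pairs = [(i, j) for i in range(n - 1) for j in range(i + 1, n)]
true_ranges = np.array([np.linalg.norm(locations[i] - locations[j])
                        for i, j in pairs])
# each measured range is inflated by both devices' delays, plus 1 cm noise
measured = (true_ranges
            + np.array([true_delays[i] + true_delays[j] for i, j in pairs])
            + rng.normal(0.0, 0.01, len(pairs)))

def residuals(delays):
    return [true_ranges[k] + delays[i] + delays[j] - measured[k]
            for k, (i, j) in enumerate(pairs)]

result = least_squares(residuals, np.zeros(n), jac='3-point')
print(np.round(result.x, 3))  # should land close to true_delays
```

With 5 devices there are 10 pairwise measurements for 5 unknowns, so the system is comfortably overdetermined and the recovered delays come back within a centimetre or two of the true values.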