# Alternative approach for antenna delay calibration

A fairly common subject here is how to perform an antenna delay calibration.
It’s fairly simple to do this using two-way ranging: you place the unit to be calibrated a known distance from an already calibrated unit, and the range error gives you the antenna delay.
Most of the questions are about the chicken and egg situation this creates: how do you get that first calibrated unit? Application note 14 describes one method to create this known reference unit, and I’ve described the method I’ve used in the past a number of times. The short version is to measure between 3 or more units at once and solve the maths to work out which part of the measured delay is due to which unit.

However, as part of some current work I’ve come across an alternative way that allows you to delay-calibrate a single unit without needing any other calibrated devices. You still need 3 units in total, but rather than trying to calibrate all 3 at once, it lets you calibrate just one unit in a way that is completely independent of the delays on the other two. If nothing else this makes the maths easier to follow.
Since I’m in a helpful mood I thought I’d share it.

Put 3 units in known locations.
Unit 1 - Transmits a fixed message at an interval.
Unit 2 (the one to be calibrated) - receives the message from unit 1. A fixed time later it sends a message.
Unit 3 - Receives the messages from 1 and 2 and calculates the time difference between them.
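To make the timing concrete, here’s a quick sketch of the model (the positions, delays, and numbers are all made up for illustration, not calibration values from any real device):

```python
# Sketch of the timing model for the 3-unit exchange above.
# Times in nanoseconds, positions in metres; all numbers illustrative.
import math

C = 0.299702547  # speed of light, approx, in m/ns

def tof(p, q):
    """One-way time of flight between two positions, in ns."""
    return math.dist(p, q) / C

# Hypothetical setup: roughly equilateral triangle, ~4 m sides
p1, p2, p3 = (0.0, 0.0), (4.0, 0.0), (2.0, 3.464)
rx2, tx2 = 16.4, 16.6   # unit 2's unknown antenna delays (what we want)
D = 500_000.0           # unit 2's fixed reply delay

# Arrival times at unit 3's antenna, relative to unit 1's transmission:
arrival_direct = tof(p1, p3)                               # message from unit 1
arrival_via_2 = tof(p1, p2) + rx2 + D + tx2 + tof(p2, p3)  # relayed by unit 2
measured_diff = arrival_via_2 - arrival_direct             # what unit 3 measures
```

Note that unit 1’s transmit delay would shift both arrival times by the same amount, and unit 3’s receive delay would add equally to both timestamps, so neither ever appears in `measured_diff`.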

The interval of unit 1 and the delay of unit 2 aren’t critical, but shorter is better. A shorter interval = more measurements in the same amount of time = more points to average, always a good thing. A shorter delay means clock rate errors and drifts act over a shorter time and so have less impact on the results. Basically, make them as short as you can without running into issues keeping up.

Unit 1 only transmits. We don’t care what the transmit time on this unit is and don’t need to control it. The only time that matters is when the signal leaves the physical antenna, and that moment is the same for everything receiving the signal. So unit 1’s transmit antenna delay is irrelevant.

Unit 3 only receives. We don’t care exactly what time it receives a signal, only the difference between the times of two different receives, and any receive delay is the same for both signals. So unit 3’s receive antenna delay is irrelevant.

Take the difference measured by 3.
Add back the travel time from 1 to 3 (the first message arrived over the direct path, so that leg already appears with a minus sign in the measured difference).
Subtract the travel time from 1 to 2 and from 2 to 3.
And finally subtract the fixed delay that unit 2 added.

Whatever value we have left over is the sum of unit 2’s receive delay and transmit delay. For most systems, dividing this value by 2 and using that for both delays will give the correct operation.
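Putting those steps into code (a minimal sketch; the geometry, fixed delay, and measured value are invented numbers, and the clock corrections discussed next are ignored here):

```python
import math

C = 0.299702547  # speed of light, approx, in m/ns

def tof(p, q):
    """One-way time of flight between two positions, in ns."""
    return math.dist(p, q) / C

def unit2_delay_sum(measured_diff, p1, p2, p3, fixed_delay):
    """Recover unit 2's (receive + transmit) antenna delay, in ns.

    measured_diff = tof(1,2) + rx2 + fixed_delay + tx2 + tof(2,3) - tof(1,3),
    so the direct 1->3 leg is added back rather than subtracted.
    """
    extra_path = tof(p1, p2) + tof(p2, p3) - tof(p1, p3)
    return measured_diff - extra_path - fixed_delay

# Example with hypothetical numbers (ns):
p1, p2, p3 = (0.0, 0.0), (4.0, 0.0), (2.0, 3.464)
total = unit2_delay_sum(500_046.0, p1, p2, p3, 500_000.0)
rx2 = tx2 = total / 2   # split evenly, as suggested above
```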

Of course it’s not quite that simple.
You need to correct for clock rate errors (e.g. units 2 and 3 both calculate their clock difference from unit 1 and do all the calculations in unit 1 time units. If unit 2 reports this difference in its message then unit 3 can also check its difference from unit 2 and perform a sanity check: summing all three differences should give a value of 0, or very close).
You need to allow for transmit scheduling not being as fine-grained as the system clock; this is simple to handle by measuring the error from the ideal transmit time and including it as part of unit 2’s message.
You need to control for antenna orientation and signal level effects on the antenna. Putting the units in an equilateral triangle with the antennas all facing the middle and setting all the transmit levels the same is the simplest way to do this, but there are others.
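A rough sketch of the clock-rate sanity check and correction (the ppm values are invented; in practice each pairwise difference comes from the radio’s clock tracking and the loop sum will be near zero rather than exactly zero because of measurement noise):

```python
# Hypothetical clock rate errors, in ppm (parts per million)
ppm2_vs_1 = +4.8                     # unit 2's clock measured against unit 1
ppm3_vs_1 = -2.3                     # unit 3's clock measured against unit 1
ppm3_vs_2 = ppm3_vs_1 - ppm2_vs_1    # what unit 3 would see against unit 2

# Sanity check: going 1 -> 2 -> 3 -> 1, the differences should cancel
loop_error = ppm2_vs_1 + ppm3_vs_2 - ppm3_vs_1
assert abs(loop_error) < 0.01        # ~0 ppm, within measurement noise

# Rescale unit 2's locally timed reply delay into unit 1 time units:
D_local = 500_000.0                  # ns counted on unit 2's (here, fast) clock
D_unit1 = D_local / (1 + ppm2_vs_1 * 1e-6)
```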

Thanks a million for your contribution, Andy.

I have raised the topic to our System Team, and we’ll look into it.

It will take a bit of time but I’ll come back to you with our feedback,

Kind regards
Yves