Swarm localization

Hi all,

I am wondering if anybody has accomplished swarm localization with the DWM1000 (or other sensors). The idea is to find relative locations between tags without the use of anchors, so that the task can be performed outdoors and in mobile/dynamic environments. Has anyone looked into this?

Any research you could point me to or advice on getting started would be appreciated! I am thinking of using a TOF distance measurement from one tag to multiple others (for each tag) and then feeding that into a particle filter algorithm that performs relative localization simultaneously for all tags. Anybody have thoughts? Would it be worthwhile to explore combining two or more DWM1000s into a single sensor package so that I could estimate angle of arrival as well as distance?

Please feel free to chime in! Also feel free to email me at kristo.jorgenson at gmail

Thank you,

Kristo


Additionally, does anyone know if it is possible to connect multiple antennas to a single DWM chip, so that angle can be calculated using PDoA with one chip instead of two?

I’ve used a similar method for anchor surveying: 8 or more anchors all range to each other and then calculate their relative locations. In some tests I’ve had average 2D errors of under 5 cm from truth.
This was added to the system for diagnostics rather than as its intended operating mode, so it’s painfully slow and inefficient. It originally took fixed locations and calculated antenna delays, but with enough data points you can open up the number of variables and calculate the locations as well. Not quite what you are aiming for, but similar.

It would be hard to do PDoA with only one chip and two antennas since you couldn’t receive the same packet on both antennas at the same time. You could do a basic angle of arrival system by ranging point to point, switching antennas, ranging again and looking at the difference. But measurement noise is going to cost you a lot of accuracy, and possible changes to antenna delays would add a complication. My gut call would be that, given the relatively low cost of the DW1000, it’s probably not worth the complication and performance drop to switch antennas.

Thanks for the tips, Andy! That sounds really encouraging that you were able to perform relative localization with a series of anchors. I’m still a bit confused about what the difference between an anchor and a tag is, since they are the same circuit after all… is it just the software?

Do you have any details about how you achieved this?

I see what you mean about PDOA with a single chip. Was just a thought, but you’re right given the cost and size I will at least initially start with multiple chips.

Thanks again, and any details you can share about how you achieved relative localization would be super helpful!

Best,

Kristo

It wasn’t really anything clever.
I take a list of N tags. I instruct every tag to range to every other tag and store all of the results.
I then set up a least squares optimization to find the set of tag locations which minimize the errors between the expected and measured ranges. I fix the X/Y/Z location of the first tag and the heading from tag 1 to tag 2 and then let it vary the remaining variables without constraints.
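
For anyone who wants to experiment with this, here is a minimal sketch of that kind of solve in Python, assuming a 2D problem and using scipy's least_squares. All names and the gauge-fixing choice (node 0 pinned at the origin, node 1 on the +x axis) are illustrative, not Andy's actual code:

```python
# Minimal sketch: relative localization from pairwise ranges via least squares.
# Assumes 2D; gauge freedom is removed by pinning node 0 at the origin and
# node 1 onto the +x axis (i.e. fixing one location and one heading).
import numpy as np
from scipy.optimize import least_squares

def solve_relative_positions(ranges, n_nodes, initial_guess):
    """ranges: dict {(i, j): measured distance}; initial_guess: (n_nodes, 2) array."""

    def unpack(params):
        pts = np.zeros((n_nodes, 2))
        pts[1, 0] = params[0]                 # node 1 constrained to (x1, 0)
        pts[2:] = params[1:].reshape(-1, 2)   # remaining nodes free in x and y
        return pts

    def residuals(params):
        pts = unpack(params)
        return [np.linalg.norm(pts[i] - pts[j]) - d for (i, j), d in ranges.items()]

    x0 = np.concatenate(([initial_guess[1, 0]], initial_guess[2:].ravel()))
    return unpack(least_squares(residuals, x0).x)
```

As discussed further down the thread, the initial guess matters: start it roughly right (within ~10 m) or the solver can settle on a mirrored layout.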

It helps that I’m running a custom radio protocol rather than the Decawave-supplied one. I have a command packet that I can send telling a tag to range a fixed number of times to a different tag and then return the average and yield. It means that I can connect to any tag and have it drive the whole process; the rest of the system can sit in its normal idle state waiting for a command.

You are correct, a tag and an anchor are the same hardware only running different firmware (or the same firmware in a different mode) and normally one is static and one is dynamic.

This is great to hear, thanks for the info! Is your radio protocol running on top of the supplied firmware or is it lower level?

A little lower. Not a single line of decawave firmware/software anywhere in the product.

I think what you’re asking about is what I call Multi-Ranging.
Typically this uses a lot of air time, due to the separate ranging transactions that take place, as well as moving the location data to a particular node of interest.
There are ways of optimizing the on-air time of the units involved which can help this.
Usually the results are more robust, but more power- and time-intensive, than with other methods.
+Paul

Hi Kristo,

something similar to what Andy described is also in PANS (see MDEK1001); it is called the auto-positioning function. Basically, each anchor makes multiple measurements to the surrounding anchors and then calculates an average range and other parameters. These values are then retrieved via Bluetooth by the Android application, and the positions of the anchors are estimated. The Bluetooth API is open and the Android application is open source, so it might be a good starting point for you to do some tests.

This video might be what you are looking for: https://www.youtube.com/watch?v=V85wejcYyXs

Cheers,
TDK

Thanks all for the input. I realize air time is going to be an issue, especially if I hope to do this at close to 5 Hz for hundreds of tags. TDK/leapslabs: Is there any more information available regarding the video you posted?

Can anyone point me to some research around the air time issue?

Thank you

Kristo

Hi Kristo,

the article is here: http://ais.informatik.uni-freiburg.de/publications/papers/wendeberg12ipin.pdf
There are more articles from these guys which you can find online. I have also found some patents related to this, so study them carefully if you consider using it in a product.

Cheers,
TDK


You are asking about what we call “mesh ranging”, computing the set of distances between peer nodes on a regular basis. We’ve built a system that works somewhat like this and it is deployed for a customer in their product.

The system works by each tag having a scheduled time to transmit. Say we have 5 tags: A, B, C, D, E. They will transmit in sequence on a schedule. Each packet will contain the serial number of the transmit packet, the local Decawave Time (DT) of the transmit, and a list of other nodes’ serial numbers and receive DTs.

For example, when A transmits, the packet will contain A’s transmit time, and then the B, C, D, E receive times, if they were heard. Then B will do the same, then C, D, and E. At whatever the repeat rate is set to, A will start over again.
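
Purely to illustrate the packet contents described above (field names and types are mine, not the actual over-the-air format):

```python
# Illustrative layout of one scheduled broadcast; not the real wire format.
from dataclasses import dataclass

@dataclass
class Reception:
    serial: int   # serial number of the packet that was heard
    rx_dt: int    # local Decawave Time (DT) at which it was received

@dataclass
class MeshPacket:
    sender: str               # e.g. "A"
    serial: int               # serial number of this transmission
    tx_dt: int                # local DT at which this packet goes out
    heard: list[Reception]    # one entry per other node heard since last time
```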

A range can be computed between any two nodes knowing 6 timestamps (3 transmit, 3 receive). This takes two cycles of A and one of B to get enough data for B to compute the range A to B. It takes one more cycle for A to compute the range. Also, any other node, say C, can compute the distance between A and B just by listening to the packets exchanged between A and B. By keeping the cycle going, what was the third packet of one cycle’s range computation becomes the first packet of the next cycle’s. So each cycle produces a new set of range outputs.
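
To make the six-timestamp point concrete, here is the standard asymmetric double-sided two-way-ranging calculation as a sketch (my own illustration, not Ciholas code; DT wraparound and tick-to-second conversion are ignored):

```python
# Range from 6 timestamps: A transmits, B replies, A transmits again.
# 'a_' times are in A's clock, 'b_' times in B's clock (already in seconds).
C = 299_702_547.0  # speed of light in air, m/s

def ds_twr_range(a_tx1, b_rx1, b_tx, a_rx, a_tx2, b_rx2):
    ra = a_rx - a_tx1    # A's round trip: first TX until B's reply arrives
    db = b_tx - b_rx1    # B's reply delay
    rb = b_rx2 - b_tx    # B's round trip: reply TX until A's second packet arrives
    da = a_tx2 - a_rx    # A's delay before its second TX
    tof = (ra * rb - da * db) / (ra + rb + da + db)  # clock offsets largely cancel
    return tof * C
```

Since all of these timestamps appear in the broadcast packets, any third node that overhears them can run the same calculation, which is the point above about C computing the A to B distance.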

The system has a present day capacity of about 300 Hz-tags. That means you can, for example, have 10 tags that cycle through at 30 Hz, or 30 tags that cycle through at 10 Hz, give or take. One issue is that as the number of tags grows, the size of the tag packets grows (more data being received). One effect is that transmit power gets reduced for larger packets and they become less reliable, so this can limit network diameter. Another is that your air time schedule has to allow for large packets. A further consequence is that the largest packet can only hold about 40 tag receptions in it.
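
As a trivial sanity check against that budget (a sketch; the 300 Hz-tags figure is just the capacity quoted above):

```python
# Check a tag count / update rate combination against the quoted capacity.
CAPACITY_HZ_TAGS = 300

def fits(n_tags, rate_hz):
    return n_tags * rate_hz <= CAPACITY_HZ_TAGS

print(fits(10, 30))   # True:  10 tags x 30 Hz = 300 Hz-tags
print(fits(200, 5))   # False: 200 tags x 5 Hz = 1000 Hz-tags, well over budget
```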

Because tags have to keep their DT running, we can’t go below IDLE mode between UWB activity, so this lowers battery life on the tags. Also, since we are receiving a lot of the time, this hurts battery life as well. So mesh ranging tag battery life is not great, and it is not good for long-lived tags. If the tags are fitted to mobile devices like robots or machinery, this is not an issue; if the tags are used for relatively short duration events, it isn’t a problem either.

The system is not stand alone at present. There is one anchor which serves no locating purpose (though it can be mesh ranged to the tags). The anchor serves as the controller of the network: it sends out tag configurations and keeps the network time that the tags synchronize to for time slotting. The anchor also collects all the received packets and sends them to the back end server via Ethernet, which then does the range computations. The single anchor imposes a limitation that all tags need to be in range of the one anchor to operate. This particular architecture served the client’s needs well and made for an easy to manage and debug system. So in this respect, it doesn’t match your desired system architecture.

The system has obvious directions of improvement. One, we can improve air time efficiency which is presently under 50%, so doubling the capacity is possible. Two, we can define response packets which don’t need a slot for every tag and provide a selection process for which tags are sent back. This will shorten packet lengths, improve air time, and improve packet reception range. Three, allow more than one anchor to grow network physical size. Lastly, making a system that is fully anchorless would be possible, but introduces some complex distributed timing and control requirements that have to be solved, particularly if the nodes are distributed more than one network diameter apart.

The fundamental aspect of any mesh ranging system is that it takes a lot of data exchanged between nodes to do it. This necessarily limits capacity, uses up air time, and drives up battery usage. If you want to do this at 5 Hz for hundreds of tags, you have a problem if they are all in one area. If you have hundreds of tags so far spread out that any one node only sees a few others, then it might work using some sort of Aloha style system, but it will be low capacity.

If GPS or RTK GPS is an option, that wins hands down for outdoor swarm localization. I know it seems like cheating, but there’s no better way to get lots of locations than that. Then each node can broadcast its coordinates every so often and everybody knows where everybody is. Not every outdoor application is GPS compatible, of course, but it is something to consider.

That is potentially useful depending on your application. It would allow you to compute a bearing for any packet being received. To be really useful, it needs to work over 360 degrees, not just the ~90-120 degree wedge the PDoA system currently covers, which is something we are working on in our lab and should have out by the end of the year or early next. You would need only one short packet from every node to compute the bearing to them, but that still doesn’t give you range until the packet grows and contains all the timestamps you need for that, so you still have the air time capacity issue even with bearing.

Mike Ciholas, President, Ciholas, Inc
3700 Bell Road, Newburgh, IN 47630 USA
mikec@ciholas.com
+1 812 962 9408


May I ask, what did you use as (or how did you calculate) the expected range?
Also, fixing a location for a tag and defining a heading towards a second tag would indeed yield a correct relative positioning, but I think it could still be a reflected system across the heading’s axis.

When doing a self setup like this, the expected range between two tags/anchors is the distance calculated between their estimated locations. The measured range is the average value measured between the two.
Ideally these two should be identical, but in reality they will be different. So I try to find tag/anchor locations that minimize those differences.

And yes, this system can’t distinguish between the correct solution and a mirror image of that solution. However if I give it initial guesses that are within 10 meters of correct then it works things out correctly.
So you can’t just start with all the locations being at 0,0 but you can easily guess them or pace them out to within the required accuracy.

Giving initial guesses 10 meters out results in the same solution as giving an initial guess of the truth as measured by a total station.

From what I understand this is your ground truth, but it is not clear to me how it was calculated.
For example:

  • Did you measure these distances using some other calibrated instrument?
  • Or did you already know their positions and simply calculate the Euclidean distance between each pair? (In which case the purpose is a bit defeated, since you already know their positions.)

The whole point of this process is that ground truth is not known and is not easily measurable. We are trying to accurately locate the positions of the system components with no known locations and no additional equipment. We are attempting to calculate ground truth using the UWB system itself.

The end result is a system that produces positions that are correct relative to each other, but on a grid where the origin and rotation from north are not known.

A lot of the time this is good enough in terms of position output, especially if you have some approximate control over origin location and orientation, which this approach gives.

e.g. our use case is automotive brake stop testing. We need to measure the distance between when the brakes were first applied and when the car came to a halt. Where exactly the car was and which direction it was going in doesn’t matter, we just need the relative distance between the points.

Setup consists of placing a set of anchors on either side of the test area using the highly scientific method of putting one every 30 paces. Since our anchors only require a small battery for power and no other cables or setup, this takes only the time required to walk the distances involved.
Once placed we run a program which instructs each anchor to measure the range 1000 times between itself and all of the other anchors. This gives us our set of measured ranges.
We then use a default first guess at the locations that assumes a perfect 30m grid. This gives the first anchor as being at a location of 0,0 and (assuming the second anchor is on the far side of the track and given as 0,30) the test track as running in the x direction.
The software will then adjust the anchor location estimates in order to minimize the differences between the ranges implied by those locations and the measured ranges.
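
For illustration, the default first guess is literally just a perfect grid; something along these lines (the spacing, anchor numbering and two-row layout are my assumptions):

```python
# Default initial guess: a perfect 30 m grid, anchors alternating sides of the
# track, anchor 0 at (0, 0) and anchor 1 across the track at (0, 30).
import numpy as np

def default_guess(n_anchors, spacing=30.0, track_width=30.0):
    guess = np.zeros((n_anchors, 2))
    for i in range(n_anchors):
        guess[i, 0] = (i // 2) * spacing                   # walk down the track
        guess[i, 1] = 0.0 if i % 2 == 0 else track_width   # alternate sides
    return guess
```

A guess like this can then be fed as the starting point to a solver of the kind sketched earlier in the thread.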

Once this is complete we then use these estimated anchor locations to locate a tag on the car.

We may not know the exact location of the car or which way north is but we do still know very accurately that between two points in time it’s moved a certain distance.

This isn’t as good as an optically surveyed setup; it costs us a couple of cm of position accuracy compared to using accurately measured anchor locations, but it means we can get a 30 m x 150 m site up and running in around 20 minutes rather than the 45-60 minutes it takes to do it accurately using an external truth measurement. An optical total station is both expensive and not the easiest piece of equipment to use if you want accurate results. For a lot of applications the time saving, lower equipment cost and simpler, lower-skilled setup more than make up for the loss of accuracy.

For internal development testing we normally only resort to a fully surveyed setup if we need reliable transitions from RTK GPS to UWB and back; in that situation you need good absolute position as well as good relative position or the transitions look terrible.

How are you addressing anchor Z axis locations?

What we see typically for the auto survey tools out there is that Z axis is either assumed to be zero or some other fixed value for all anchors, or the users are expected to laser range each one to the floor and enter it, making it something less than automatic.

With anchors typically in or near one plane, no auto survey tool gets the Z axis right; the geometry just doesn’t allow it.

We used to be all excited about auto survey but we found that the only real use case in which it worked decently was a small setup with wide open spaces, exactly where using a total station is super quick and easy. When the install got complicated, say a large museum with lots of rooms, auto survey was DOA and you had to do multiple total station setups to get good results.

There is a class of applications where auto survey is useful, typically small, temporary setups in uncluttered environments, where accuracy, particularly in the Z axis, can be sacrificed. If there is any multipath or occlusion, the auto survey can produce wildly bad results. Another issue is that anchor to anchor distances can be affected by ground plane effects, say the grid in a suspended ceiling. That slows down the radio wave slightly, leading to an overestimate of anchor distances and a distorted survey output.

If the setup is large, or going to be permanent, or needs the accuracy the system is fully capable of, a total station survey is the winning solution. We’ve also done surveys with laser plumb bobs and laser rangers, so a total station is not required per se.

By far the easiest survey is anchors on a suspended ceiling grid. You basically have graph paper in the sky, so count the tiles and measure offsets to the grid work. Done. The ceiling grid system is pretty precise, well under 1 cm over very large spans.

Mike Ciholas, President, Ciholas, Inc
3700 Bell Road, Newburgh, IN 47630 USA
mikec@ciholas.com
+1 812 962 9408

I agree. Trying to calculate heights in this way results in junk data.

Generally when we do this we are using tripods to mount the anchors, some we fully extend and some we leave with one span un-extended. We then use the approximate heights for the tripods in those two configurations.
In my testing height errors of ~10 cm don’t result in significant horizontal errors unless you are very close to the anchors. So while it’s not ideal it’s close enough given the errors inherent in this type of setup and doesn’t require any additional measurement.

We try to avoid putting all the anchors at the same height since that causes an ambiguity as to which side of the plane you’re on. For most applications this can be assumed and the solution forced to the correct location but I prefer to avoid that type of constraint whenever possible.

Generally when people are driving at 100 km/h they are in a fairly wide open area rather than going between rooms in a museum :)

It’s a case of different solutions for different applications.

Half our use cases are permanent installs; for those a total station is our go-to setup method. The other half are pop up tests where they have very limited time and anything more than 30 minutes to get everything up and running is too long. While a total station may be relatively easy to use in that environment, it’s still extra equipment and extra training. We are aiming for a system that the end user can set up, so a fast automated setup with no extra equipment is a massive plus in that situation.

Hi all,

I originally asked this question back in 2019, and then sort of gave up on this idea at the time. I’m wondering if there has been any change or development since then?

I essentially would like to create a real time visualization of a “swarm” of nodes moving close together outdoors. I don’t need z axis information. There will be ~200 nodes, often within 1m of each other.

Could you help me understand the biggest barriers to making this happen?

  1. Airtime (each node needs a set time to broadcast while all others are receiving).
  2. Physical interference (these nodes are located on people; I thought that UWB was good at overcoming physical interference, but I’m seeing contradictory reports on here).
  3. Battery - I’m not sure if this is a limitation or not, since each device only needs to last a maximum of 8 hours.

Are there other limitations I should be aware of?

In terms of problem 1, I am thinking of something like the following (originally proposed by @mciholas in another thread):

  • Each tag is listening when not transmitting.
  • Each node broadcasts a packet containing its serial number, timestamp, and an array of serial numbers and timestamps from every other node. At 24 bytes/entry that is 4800 bytes per packet.
  • Thus at 6.8 Mbps, each packet takes 705 us to transmit.
  • If this is at 2 Hz, that’s 400 packets per second, and 282 ms of total airtime (30%). According to @mciholas this is outside the limits of the Aloha protocol.
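
A rough, parametric way to run that airtime arithmetic (data-rate time only, ignoring preamble, PHY headers and the DW1000 maximum frame length; all the numbers below are assumptions to play with, not recommendations):

```python
# Back-of-envelope airtime budget for the broadcast scheme sketched above.
def airtime_fraction(n_nodes, bytes_per_entry, rate_hz, data_rate_bps=6.8e6):
    payload_bits = n_nodes * bytes_per_entry * 8   # one entry per other node
    packet_s = payload_bits / data_rate_bps        # payload time for one packet
    packets_per_s = n_nodes * rate_hz              # every node transmits each cycle
    return packet_s * packets_per_s                # fraction of each second on air

# e.g. 40 nodes (roughly the per-packet entry limit mentioned earlier), 2 Hz:
print(f"{airtime_fraction(n_nodes=40, bytes_per_entry=24, rate_hz=2):.0%}")
```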

I’m wondering if the following might help me:

  1. Can data be transmitted on a separate channel such that each tag only needs to transmit its own id and timestamp during the localization cycle? It seems inefficient to be transmitting data for every tag in every packet, as the total airtime would seem to grow with O(N^2).
  2. Could the newer DW3000-series chips help me in some way? I can’t quite figure out what the difference would be.
  3. I could potentially use a few anchors affixed to vehicles in the proximity of the tags, but at a distance of 10-100 m. These would have a much larger power supply and could be physically larger if that’s helpful. They would range with each tag in series and collect the relative position of each tag to the anchor, and thus to each other. The downsides of this would be the need for these anchors, and presumably lower accuracy in the tag-to-tag distances.

Is there anything else I should be considering here? Does this seem remotely feasible for a UWB application?

Thank you

Kristo

For node A to calculate the range to node B, the minimum information is:

  1. The time difference between A transmitting a message and receiving a later message from B, as timed by A.
  2. The time difference between B receiving a message from A and B sending its next transmission, as timed by B.
  3. The difference in clock speeds between A and B

With those 3 pieces of information, node A can calculate the range to B.
A already knows or can calculate reasonable values for 1 and 3. The issue is how it gets value 2.
Since you already need to receive a UWB message from B, the logical solution is for B to include it in that message. The other commonly used option is for it to be a fixed value, but it must be a different fixed value for each replying node, which doesn’t scale well for this application.
But as you indicated, this has the downside that once you include the required information for multiple nodes, the messages start getting long. Longer messages mean slower updates and a higher risk of collisions and dropped packets.
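
Putting those three quantities together, the single-sided calculation at node A looks roughly like this (a sketch, assuming the timestamps have already been converted to seconds and that the clock ratio comes from something like the carrier frequency offset measurement):

```python
# Single-sided TWR at node A using the three quantities listed above.
C = 299_702_547.0  # speed of light in air, m/s

def ss_twr_range(t_round_a, t_reply_b, clock_ratio_b_to_a):
    # t_round_a:  A's TX -> A's RX of B's reply, measured by A       (item 1)
    # t_reply_b:  B's RX of A's packet -> B's next TX, measured by B (item 2)
    # clock_ratio_b_to_a: B's clock rate relative to A's             (item 3)
    reply_in_a_clock = t_reply_b / clock_ratio_b_to_a  # rescale B's delay into A's timebase
    tof = 0.5 * (t_round_a - reply_in_a_clock)         # what remains is two flights
    return tof * C
```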

If you used a secondary data radio, then potentially each node only needs to include its own internal clock time in each transmission:

Node A sends a UWB transmission
Node B sends a transmission to A indicating “I saw your message at my internal clock time of nnn” via some other means.
Node B sends a UWB transmission with its internal clock time at the time of transmission.
Node A can now calculate time 2 without node B including any node-specific data in the UWB message.

Those messages sent “by some other means” are point to point rather than broadcast, and only order n for each individual node. So it would scale well if you had wired network connections and a good switch. But assuming this is wireless, you still have to cope with order n^2 packets flying around, and now you also have the fun of lining up data from two radio systems with different latencies, either of which could suffer a dropout for any given packet.