Time difference observed between the actual user position and the sensor data when using the data for a machine learning project

I am currently using a DWM1001 kit, configured as a listener plus tags, in a machine learning project where I predict the output of lights based on the user's position.
I have used the following code to receive data from the listener, which is connected to my computer via USB:

import serial
import time

# UWB sensor connection
DWM = serial.Serial(port="COM3", baudrate=115200)
print("Connected to " + DWM.name)
DWM.write("\r\r".encode())   # wake up the shell
time.sleep(1)
DWM.write("les\r".encode())  # start the position reports
time.sleep(1)
while True:
    try:
        # Read the UWB sensor data
        line = DWM.readline()
        parse = line.decode().split(" ")
        tag1 = [a for a in parse if "4DAA" in a]  # 4DAA is the name of the tag
        tag2 = [x for x in parse if "0BAE" in x]
        tag3 = [y for y in parse if "D632" in y]
        tag4 = [z for z in parse if "070E" in z]
        # ... rest of the machine learning code for my project ...
    except Exception:
        continue  # skip lines that fail to decode or parse

I have noticed that, because an ML prediction runs on every iteration in which the user's position is retrieved, a certain lag is observed.
That is, due to the high data rate of the DWM sensor, the data queues up between iterations, so there is a growing time difference between the actual user position and the data received by this code.
Is there a way I can stop this queuing? I tried changing the normal update rate and the stationary update rate of the tag via the app, but I still see no improvement.
I would like to receive data from the sensor every 2 seconds, so that at every iteration I get the actual, real-time position and can use it for the machine learning part. I am aware that this time difference is mostly due to the high computation requirement of the ML algorithm.

Is there a way I can retrieve only the latest data at every iteration?
Is there a command like les for this, or can I use les itself at the beginning of each iteration to receive fresh data at the start of every loop?

So the issue you have is that you are getting position output too fast for your machine learning code?

Why can’t your code simply throw away some positions rather than trying to process them all?
Rather than processing the next item on the queue, change your loop to read all the waiting positions each time and then only process the last one.

Hi Andy,
Thanks for your quick response.
Can you please tell me how I can save the sensor data so that I can only fetch the last value?

I have tried saving the data, but it makes my code slower since it requires computation power.
Is there a more efficient way to do it?

Parsing serial data shouldn’t take any meaningful amount of processing power, especially in comparison to machine learning.

You need to set a timeout when opening the COM port, otherwise readline() will block until the next line is received.
If you set inter_byte_timeout to 0.001, you should get a timeout at the end of each block of data.
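For instance (a sketch; the port name and timeout values here are assumed, matching your code above):

DWM = serial.Serial(port="COM3", baudrate=115200,
                    timeout=1, inter_byte_timeout=0.001)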
You can then do something along the lines of

lastLine = ""
while True:
    lineIn = DWM.readline()
    if len(lineIn) < 10:  # was an empty line due to a timeout
        break
    lastLine = lineIn

Now process lastLine as your input.

Hopefully that will give you a rough idea, I’m sure it has some bugs / issues, my python is a little rusty.


Hello Sneha

I think what you are looking for is either asynchronously reading the data from the tag or discarding the lines you are not fast enough to process.

For the first option you could look into threads, co-routines or Python 3’s newfangled asyncio. Then you just need a (preferably mutex protected) data object to expose the last position to the thread/coroutine/context doing the ML.
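A rough sketch of the threaded variant (the port name and baud rate are assumed to match your first post, and it assumes the listener is already streaming after les; the LatestPosition class is just an illustration of the mutex-protected data object):

import threading
import serial

class LatestPosition:
    # Mutex-protected holder for the most recent report
    def __init__(self):
        self._lock = threading.Lock()
        self._line = None

    def set(self, line):
        with self._lock:
            self._line = line

    def get(self):
        with self._lock:
            return self._line

def reader(dwm, latest):
    # Runs in the background and keeps draining the port,
    # so reports never queue up for the ML loop
    while True:
        line = dwm.readline()
        if line:
            latest.set(line.decode(errors="ignore"))

latest = LatestPosition()
dwm = serial.Serial(port="COM3", baudrate=115200, timeout=1)
threading.Thread(target=reader, args=(dwm, latest), daemon=True).start()

In the ML loop, latest.get() then always returns the newest report (or None before the first one arrives), never a backlog.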

For the second option, I think reading one value, then closing and re-opening the serial port, should work; alternatively, simply clearing the RX buffer may be enough.
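If you go the buffer-clearing route, pyserial's reset_input_buffer() should do it. A sketch (the second readline() call is there because the first line after the flush may be a partial one):

DWM.reset_input_buffer()   # throw away everything queued in the RX buffer
DWM.readline()             # discard a possibly partial line left over from the flush
fresh = DWM.readline()     # the first complete report after the flush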