
Guiding Robotics Competitors to Victory: Mapping Robot Paths with Computer Vision

This article was contributed to the Roboflow blog by Mason, a high school student interested in the technologies that drive the modern world. He plans to pursue mathematics and computer science.

Introduction

Object trackers can play a crucial role in sports game analysis, replay, and live breakdowns. However, these trackers aren't limited to traditional sports; they can also be useful for numerous other activities and games. One niche activity that benefits from this technology is robotics competitions. These systems can greatly improve strategy and opponent analysis, giving teams an edge over their competition. In this article, we will cover how object detection models, object segmentation, and object tracking can be applied to map out robot paths in the FIRST Robotics Competition (FRC).

Project Overview

Specifically, this project will run robot object detection on a video the user uploads. Once detections have been run on each frame, every robot will be mapped from the 3D field view to a 2D top-down diagram of the field. To calculate this, we will make use of field segmentation and simple mathematics. Once finished, the positions will be saved to a JSON file for later analysis.

For this project, I used Node.js with a few supporting packages, as well as a bit of Python to connect with my Roboflow project.
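
As a rough illustration of that Python glue (a minimal sketch, not my exact code), here is how the Roboflow Python SDK can run detection on an extracted frame and save the positions; the project ID, version number, and API key below are placeholder assumptions:

# Minimal sketch: connect to a Roboflow project and run detection on one frame.
# The project ID, version number, and API key are placeholders (assumptions).
import json

from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace().project("frc-robot-detection")  # hypothetical project ID
model = project.version(1).model

# Run inference on a single extracted video frame.
result = model.predict("frame_0001.jpg", confidence=40, overlap=30).json()

# Keep just the fields we need for path mapping.
detections = [
    {"class": p["class"], "x": p["x"], "y": p["y"]}
    for p in result["predictions"]
]

with open("positions.json", "w") as f:
    json.dump(detections, f, indent=2)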

Step 1: Build Robot Detection Model

First, create a Roboflow account. Next, go to Workspaces and create a new object detection project. This will be used to detect robots in the video frames. Customize the project name and annotation group to your liking.

Next, upload images to use for annotation. For FRC robotics and many other activities, videos of past competitions can be found on YouTube. Roboflow provides a YouTube video downloader, greatly reducing the time spent finding images. Make sure to upload footage from multiple different events and competitions so the model stays accurate across environments. Now add the images to the dataset for annotation.

Next, add the classes for the different types of objects you need the model to detect. In the case of FRC robotics, the class names “Red” and “Blue” work well for differentiating robots on the red team and blue team.

Now annotation can begin. With large datasets, it may be useful to assign annotations to team members. Roboflow has this feature built in. However, you can also assign all images to yourself for annotation.

Using Roboflow’s annotation tools, carefully label the objects and assign the appropriate class. In the case of FRC robotics, I found it easiest to target the robot bumpers instead of the entire robot. This should prevent overfitting, as general robot shapes change each year. Additionally, this allows the model to be reused for other projects such as automated robot avoidance or autonomous defense.

Once we have our annotations and images, we can generate a dataset version of labeled images. Each version is unique and associated with a trained model so you can test out different augmentation setups.

Step 2: Train Robot Detection Model

Now we can train a model on the dataset. Roboflow provides numerous methods for training. You can train using Roboflow itself, which enables special features such as compatibility with Roboflow's JavaScript API; however, this method requires training credits.

Alternatively, Roboflow provides Google Colab notebooks for training all sorts of models. In this case, I used this Colab notebook. These notebooks provide great step-by-step directions and explanations, and once training is complete, they make it easy to validate the model and upload it back to Roboflow.
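
The exact notebook varies by model, but the core flow those notebooks walk through looks roughly like the sketch below, assuming a YOLOv8-style notebook; the project ID, version numbers, and paths are placeholders rather than my actual values:

# Sketch of the train-and-upload flow from a YOLOv8-style Colab notebook.
# Project ID, version numbers, and paths are placeholders (assumptions).
from roboflow import Roboflow
from ultralytics import YOLO

# Download the dataset version generated in Step 1.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace().project("frc-robot-detection")  # hypothetical project ID
dataset = project.version(1).download("yolov8")

# Fine-tune a pretrained model on the exported dataset.
model = YOLO("yolov8n.pt")
model.train(data=f"{dataset.location}/data.yaml", epochs=100, imgsz=640)

# Upload the trained weights back to Roboflow for hosted inference.
project.version(1).deploy(model_type="yolov8", model_path="runs/detect/train")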

Step 3: Build Field Segmentation Model

After the robot object detection model is accurately trained, we can move on to training the field segmentation model. This step will allow us to quickly convert 3D world dimensions to 2D image dimensions later on.

Through testing, I found full field segmentation to be difficult and unreliable. Instead, targeting the central taped region of the field proved to work much more reliably.

For segmentation, I found that fewer images are needed for reliable results, as most fields look very similar. However, since the FRC field changes each year, it is important to use images of the current field, or else the tape lines won't match up.

After using Roboflow’s segmentation annotation tools, we once again train the model. The Google Colab notebook that worked great for me can be found here.
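
Once the segmentation model is hosted on Roboflow, it's easy to sanity-check from Python. A minimal sketch, assuming a hypothetical project ID and version; instance segmentation predictions include the detected region's boundary as a list of points:

# Sketch: query the hosted segmentation model for the taped-region polygon.
# Project ID and version number are placeholders (assumptions).
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
seg_model = rf.workspace().project("frc-field-segmentation").version(1).model

result = seg_model.predict("frame_0001.jpg").json()

# Instance segmentation predictions include the polygon as a list of points.
field = result["predictions"][0]
polygon = [(pt["x"], pt["y"]) for pt in field["points"]]
print(f"Field region has {len(polygon)} boundary points")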

Step 4: Segment Fields for Coordinate Mapping

At this point, we can begin the logic for our project. The first step is to segment the field, as this will allow us to map the robot detection coordinates later on. For segmentation, the process I chose goes as follows:

… (rest of the content)
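
While the full process is omitted above, the heart of the coordinate mapping can be sketched: once the segmented field region gives us reference points in the image, a perspective transform (homography) carries each robot detection into top-down field coordinates. A minimal sketch with OpenCV, where the four image corner points and the field dimensions are assumed example values:

# Sketch: map robot detections from image pixels to top-down field coordinates
# using a perspective transform. Corner points and field size are assumptions.
import cv2
import numpy as np

# Four corners of the segmented taped region in the image (pixels), e.g. taken
# from the segmentation polygon: top-left, top-right, bottom-right, bottom-left.
image_corners = np.float32([[412, 188], [1510, 192], [1745, 930], [160, 921]])

# The same four corners in top-down field coordinates (centimeters).
FIELD_W, FIELD_H = 1654, 821  # approximate FRC field size in cm (assumption)
field_corners = np.float32([[0, 0], [FIELD_W, 0], [FIELD_W, FIELD_H], [0, FIELD_H]])

H = cv2.getPerspectiveTransform(image_corners, field_corners)

def to_field(x: float, y: float) -> tuple[float, float]:
    """Project one detection center (image pixels) onto the 2D field map."""
    pt = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)[0][0]
    return float(pt[0]), float(pt[1])

# Example: a robot detected at pixel (960, 540) lands here on the field map.
print(to_field(960, 540))

One design note: since the homography is only valid for points on the floor plane, projecting the bottom edge of each bumper box rather than the box center reduces parallax error.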

Conclusion

In conclusion, this project demonstrates how object detection models, object segmentation, and object tracking can be applied to map out robot paths in the FIRST Robotics Competition (FRC). This technology can be used to analyze robot movements, track positions, and gain a competitive edge. The full code for the project is available on GitHub.

Frequently Asked Questions

Question 1: What is the purpose of this project?

The purpose of this project is to demonstrate how object detection models, object segmentation, and object tracking can be applied to map out robot paths in the FIRST Robotics Competition (FRC).

Question 2: How does the project work?

The project works by running robot object detection on a video the user uploads, then mapping the robot positions from the 3D field to a 2D top-down diagram of the field.

Question 3: What are the benefits of this project?

The benefits of this project include improved strategy and opponent analysis, giving teams an edge over their competition.

Question 4: How does the project handle missing frames?

The project does not currently handle missing frames, but an algorithm could be implemented to smooth out the tracking process.
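As an illustration (not part of the current project), a simple linear interpolation between the last and next known positions could fill short gaps:

# Sketch: linearly interpolate a robot's position across frames where
# detection dropped out. Positions are (x, y) tuples, or None for misses.
def fill_gaps(positions):
    filled = list(positions)
    last_known = None
    for i, pos in enumerate(filled):
        if pos is not None:
            if last_known is not None and i - last_known > 1:
                # Interpolate every missing frame between the two known ones.
                (x0, y0), (x1, y1) = filled[last_known], pos
                span = i - last_known
                for j in range(last_known + 1, i):
                    t = (j - last_known) / span
                    filled[j] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            last_known = i
        # Leading and trailing gaps are left as None.
    return filled

# Example: one missed frame between (0, 0) and (4, 2) becomes (2.0, 1.0).
print(fill_gaps([(0, 0), None, (4, 2)]))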

Question 5: Can this project be applied to other activities or games?

Yes, the same approach can be applied to numerous other activities and games, such as other robotics leagues or traditional sports, and the detection model can be reused for applications like automated robot avoidance or autonomous defense.
