Feature Recognition from Aerial Imaging


Guided parachute systems are essential to the United States Army for delivering supplies to troops on the battlefield. However, the GPS receivers currently used to detect and track these supply drops are susceptible to jamming and spoofing, and their output signals can also be tracked by others. The military therefore needs a way to track these parafoils without GPS: a passive guidance system that does not rely on electromagnetic transmissions [1], [2]. To address this problem, our group was tasked with designing a guidance system that determines the location of an aerial parafoil supply drop using only passive video from a camera mounted on the package.

Our approach uses an algorithm that takes a frame from the aerial video feed and matches key features in that image either to a later frame of the same video or to the corresponding Google Earth image. While designing this system, we evaluated three widely used feature-matching algorithms. Although each algorithm finds different key points in a given image, none of them individually identified enough features to match images with consistently high accuracy. The algorithms also had complementary weaknesses: some struggled with rotation and scaling, while others handled the Google Earth imagery poorly. To solve this, we created an ensemble algorithm that applies image pre-processing and combines the predictions of all three feature detectors.

To decide which matches are the best, or correct, for a given image, we created two discriminator algorithms. The first uses the distance metric computed between the two key points in a match.
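The ensemble-combination step described above can be sketched as follows. The report does not name the three detectors or specify their match format, so the tuple layout, function name, and toy coordinates below are illustrative assumptions only:

```python
# Illustrative sketch of merging matches from several feature
# detectors. Each match is assumed to be a tuple
# (query_point, map_point, distance); all names and values here
# are hypothetical, not the project's actual data structures.

def combine_matches(per_detector_matches):
    """Merge match lists from multiple detectors, keeping the
    lowest-distance proposal for each query key point."""
    best = {}
    for matches in per_detector_matches:
        for query_pt, map_pt, dist in matches:
            if query_pt not in best or dist < best[query_pt][1]:
                best[query_pt] = (map_pt, dist)
    return [(q, mp, d) for q, (mp, d) in best.items()]

# Toy example: two detectors propose overlapping matches.
detector_a = [((10, 20), (110, 120), 0.30),
              ((40, 50), (140, 150), 0.55)]
detector_b = [((10, 20), (111, 119), 0.25),
              ((70, 80), (170, 180), 0.40)]
combined = combine_matches([detector_a, detector_b])
# The (10, 20) key point keeps detector B's closer proposal.
```

Keeping only the lowest-distance proposal per key point is one simple way to reconcile detectors that disagree; the project's actual combination rule may differ.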
For the second, we trained a neural network to judge whether a given match is correct. The distance discriminator required a cut-off distance for a correct match for each feature detector. To find it, we wrote a tool that steps through every match in a given matching and records the highest distance value among the correct matches; we ran this for each algorithm and set the result as that detector's distance threshold. The neural network required labeled training data, which we did not have: an earlier attempt at unsupervised learning in this project produced very poor results. We therefore built a labeled dataset of 15,500 rows for supervised training of the network. Overall, our distance-metric and neural-network discriminators achieve accuracies of 94% and 89%, respectively. These percentages represent the fraction of matches each correctly classified as either correct or incorrect.
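The distance discriminator and its calibration step can be sketched in a few lines. The cutoff is taken as the highest distance observed among matches labeled correct, mirroring the step-through procedure described above; the distance values and match format here are hypothetical:

```python
def calibrate_threshold(labeled_matches):
    """Set the cutoff to the highest distance observed among
    matches labeled correct, as in the step-through procedure."""
    return max(dist for dist, is_correct in labeled_matches if is_correct)

def distance_discriminator(distances, threshold):
    """Classify each match: correct if its distance is at or
    below the calibrated per-detector cutoff."""
    return [d <= threshold for d in distances]

# Hypothetical labeled matching: (distance, correct?) pairs.
labeled = [(0.21, True), (0.35, True), (0.48, True),
           (0.62, False), (0.90, False)]
threshold = calibrate_threshold(labeled)
verdicts = distance_discriminator([0.30, 0.48, 0.70], threshold)
```

In practice this calibration would be repeated per detector, since each algorithm produces distances on its own scale.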
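As a stand-in for the neural-network discriminator, the sketch below trains a logistic-regression classifier (effectively a single-neuron network) on synthetic labeled match data. The project's actual network architecture, input features, and 15,500-row dataset are not described here, so the single distance feature, hyperparameters, and data generation are all illustrative assumptions:

```python
import numpy as np

def train_match_classifier(X, y, lr=0.5, epochs=500):
    """Gradient-descent logistic regression.
    X: (n, d) match features; y: (n,) 0/1 correctness labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        grad = p - y                            # d(loss)/d(logit)
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_correct(X, w, b):
    """True where the model judges a match to be correct."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5

# Synthetic labeled data: one feature (descriptor distance);
# correct matches tend to have smaller distances.
rng = np.random.default_rng(0)
dist_correct = rng.uniform(0.0, 0.4, 200)
dist_wrong = rng.uniform(0.6, 1.0, 200)
X = np.concatenate([dist_correct, dist_wrong])[:, None]
y = np.concatenate([np.ones(200), np.zeros(200)])
w, b = train_match_classifier(X, y)
accuracy = (predict_correct(X, w, b) == y).mean()
```

On this cleanly separated synthetic data the classifier learns a negative weight on distance, i.e. larger descriptor distance lowers the predicted probability of a correct match, which is the behavior the real discriminator would need.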

  • This report represents the work of one or more WPI undergraduate students submitted to the faculty as evidence of completion of a degree requirement. WPI routinely publishes these reports on its website without editorial or peer review.
Last modified
  • 08/29/2021
Identifier
  • 21436
  • E-project-050321-130423
Year
  • 2021
Date created
  • 2021-05-03

Permanent link to this page: https://digital.wpi.edu/show/gt54kr13r