Arizona Autonomous


The News

The Tech Behind AZA's Computer Vision

10/27/2016

2 Comments

 
After completing a successful takeoff, an onboard camera attached to Arizona Autonomous' Trex-700N comes to life and begins taking a deep look at the environment around it in an attempt to find very specific features. Teaching yourself to recognize something takes just a few seconds, but teaching a computer how to find features in an image is a very difficult process. Let's take a look at how AZA's computer vision algorithms work.
Once an image is captured by the camera, a series of transformations is applied to it to find the given target. Pictured below is an overview of the specific process used to determine target validity, shape, color, embedded alphanumeric color, alphanumeric letter, and alphanumeric orientation.
[Figure: overview of the target classification pipeline]
The first step runs in real time using a Haar feature cascade classifier, a region of interest (ROI) detection algorithm that searches for objects of visual interest in an image. If an object is found, the image is cropped to the detected ROI and passed on for further processing. Several channels are then extracted from the image, including the Hue-Saturation-Value (HSV) channels and luminosity. The luminosity channel is processed with the Canny edge detection algorithm, which finds "strong edges" within the image and links connected edges to them. A custom algorithm then combines edges based on their relative distance, reducing the fragmented edges to a larger target outline and a smaller embedded alphanumeric outline.
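
As a rough sketch of what this stage could look like in code, the example below uses OpenCV's Haar cascade detector and Canny implementation. The cascade file name, detector parameters, and the choice of the LAB lightness channel for luminosity are illustrative assumptions, not AZA's actual configuration.

```python
# Minimal sketch of the ROI detection and edge extraction stage using OpenCV.
# The cascade file and parameter values below are placeholders for illustration.
import cv2

def find_target_edges(frame, cascade_path="target_cascade.xml"):
    """Detect a region of interest, crop to it, and extract Canny edges."""
    # Haar feature cascade classifier trained offline to flag candidate targets
    cascade = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(detections) == 0:
        return None

    # Crop the image to the first detected ROI
    x, y, w, h = detections[0]
    roi = frame[y:y + h, x:x + w]

    # Extract the HSV channels and a luminosity channel (here, the LAB L channel)
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    luminosity = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB)[:, :, 0]

    # Canny keeps "strong" edges and links connected weaker edges to them
    edges = cv2.Canny(luminosity, 50, 150)
    return roi, hsv, edges
```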

Next, several algorithms are run in parallel to further analyze the image and identify its features. The target outline is approximated to a polygon using the Douglas-Peucker algorithm, and the number of edges is compared against a table of polygon edge counts and names. The RGB values of the masked image are averaged to determine the target's average color, and the resulting color tuple is compared to a table of common colors; the closest match within an acceptable error range is used to label the target.
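The shape and color labelling steps could look roughly like the following, using OpenCV's approxPolyDP (an implementation of Douglas-Peucker) and simple nearest-color matching. The edge-count table, color table, and error threshold here are placeholder values, not the team's real tables.

```python
# Hedged sketch of shape and color labelling; tables and thresholds are illustrative.
import cv2
import numpy as np

SHAPE_NAMES = {3: "triangle", 4: "quadrilateral", 5: "pentagon",
               6: "hexagon", 7: "heptagon", 8: "octagon"}

COLOR_TABLE = {"red": (0, 0, 255), "yellow": (0, 255, 255),  # BGR order
               "green": (0, 255, 0), "blue": (255, 0, 0),
               "white": (255, 255, 255), "black": (0, 0, 0)}

def classify_shape(contour):
    """Approximate the outline to a polygon and look up its name by edge count."""
    perimeter = cv2.arcLength(contour, True)
    # cv2.approxPolyDP implements the Douglas-Peucker algorithm
    approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
    return SHAPE_NAMES.get(len(approx), "unknown")

def classify_color(roi, contour, max_error=120.0):
    """Average the colors inside the target mask and pick the nearest named color."""
    mask = np.zeros(roi.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    mean_bgr = np.array(cv2.mean(roi, mask=mask)[:3])

    best_name, best_dist = None, float("inf")
    for name, bgr in COLOR_TABLE.items():
        dist = np.linalg.norm(mean_bgr - np.array(bgr))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_error else "unknown"
```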

To determine the letter orientation, alphanumeric color, and alphanumeric character, the outline generated for the embedded character is processed next. Optical Character Recognition (OCR) with Tesseract is used to identify the alphanumeric character embedded in the target. To improve compatibility with Tesseract's algorithms, the extracted character outline is filled and redrawn on a blank background, yielding a binary image. The character is then deskewed using plausible character orientations or by rotating the image 10 degrees at a time until a letter is recognized. Upon successful recognition, the rotation angle and image metadata are used to determine the character's cardinal orientation.
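A minimal sketch of this character recognition step is shown below, using the pytesseract wrapper around Tesseract. The 10-degree rotation step comes from the description above; the canvas size and Tesseract configuration flags are assumptions for illustration.

```python
# Hedged sketch of the OCR and deskewing step; assumes the character contour
# has already been shifted and scaled to fit inside the canvas coordinates.
import cv2
import numpy as np
import pytesseract

def recognize_character(char_contour, size=200):
    """Redraw the character outline as a binary image, then rotate 10 degrees
    at a time until Tesseract recognizes a single character."""
    # Fill the extracted outline on a blank background, yielding a binary image
    canvas = np.zeros((size, size), dtype=np.uint8)
    cv2.drawContours(canvas, [char_contour], -1, 255, thickness=cv2.FILLED)

    center = (size // 2, size // 2)
    for angle in range(0, 360, 10):
        rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(canvas, rotation, (size, size))

        # --psm 10 asks Tesseract to treat the image as a single character
        text = pytesseract.image_to_string(
            rotated,
            config="--psm 10 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789",
        ).strip()
        if len(text) == 1:
            # The successful rotation angle, combined with image metadata such as
            # the camera heading, gives the character's cardinal orientation.
            return text, angle
    return None, None
```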

If you were to look at that image, it would be fairly easy to determine that it contained a yellow hexagon with a black L inside, but teaching a computer to do the same requires a much finer approach. In the 2016-17 academic year, AZA will be developing a new method of executing its computer vision tasks.
2 Comments

U of A Travels to CPS-VO Competition

10/27/2016

1 Comment

 
On October 3rd, the University of Arizona hosted the inaugural Cyber-Physical Systems Virtual Organization's VORTEX Competition, formally known as the 2016 NSF CPS Design Challenge. For two days, the Tucson International Modelplex Park Association (TIMPA) Field became the home of the competition, where four universities from across the nation gathered to compete.

This year's participants included the University of California, Los Angeles, the University of Pennsylvania, Vanderbilt University, and the University of Arizona. The U of A's team was led by Dr. Jonathan Sprinkle, who was joined by Matt Bunting, Richard Herriman, and Coby Allred. The challenge seemed simple: autonomously deliver a mosquito trap using a quad-rotor drone. In reality, nothing could have been further from the truth.

When the teams gathered for their first flights on October 3rd, not a single drone was in operable condition. Whether it was a hardware failure, a firmware failure, or, in the worst case, a communication failure, each flight brought unique and unforeseen challenges. The U of A's drone had to be rewired and reprogrammed several times before its maiden flight at the competition, and several other teams ended up shorting out circuits, autopilot modules, and even an Intel NUC.

By the second day of competition, things had improved dramatically, and several teams completed dry runs in the first hours, a vast improvement over a total of zero on the first day. The first judged runs began soon after, with the U of A completing the first attempt of the competition. In the end, all teams were able to successfully complete both of the competition's missions before heading to a well-deserved group dinner. It was a trying event, and several teams were feeling the stress until the very end, but it was an excellent opportunity for like-minded individuals to gather and discuss the future of autonomous flight.

Below is a short video produced by the competition organizers as a promotional piece for next year's event.
1 Comment

    Coby Allred

    U of A sophomore studying ECE. Vice President of AZA and member of Engineering Student Council.

