
Connected & Autonomous Transportation Systems Laboratory (CATS Lab)


Resources, Data and Links

YouTube Channel

Facebook Page

GitHub Page

Optimal Biofuel Supply Chain Location Design

Connected & Autonomous Vehicles

(Figure: the CATS Lab's full-scale CAVs)

The CATS Lab, led by Dr. Xiaopeng (Shaw) Li, houses two full-scale connected autonomous vehicles (CAVs), shown in the figure at left. The CAVs are retrofitted Lincoln MKZ hybrids equipped with various sensors (e.g., lidars, radars, cameras, Mobileye units, and NovAtel navigation units), drive-by-wire control platforms, Savari DSRC on-board units, and high-reliability industrial computers. The vehicle software platform incorporates ROS-based open-source packages such as CARMA and Autoware along with a set of customized algorithms. The drive-by-wire platforms and software packages were developed in house by the CATS team, and all hardware and software components were installed on the factory vehicles at the CATS Lab. The lab also hosts several sets of portable Savari DSRC roadside units (RSUs) connected to portable traffic lights, which can be deployed to form an array of traffic signals along a corridor with customized specifications. CATS CAV
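To make the software architecture concrete, below is a minimal sketch of a ROS relay node that passes a planner's target speed to a drive-by-wire command, in the spirit of the CARMA/Autoware-based stack described above. The topic names and the 15 m/s speed cap are illustrative assumptions, not the lab's actual interfaces.

    #!/usr/bin/env python
    # Minimal ROS node sketch: relay a planned speed to a drive-by-wire
    # command topic. Topic names and limits are hypothetical.
    import rospy
    from geometry_msgs.msg import TwistStamped

    class SpeedRelay(object):
        def __init__(self):
            # Hypothetical topics: a planner publishes target twists; the
            # drive-by-wire module listens on the vehicle command topic.
            self.pub = rospy.Publisher('/vehicle/cmd_vel', TwistStamped,
                                       queue_size=1)
            rospy.Subscriber('/planning/target_twist', TwistStamped,
                             self.on_target)

        def on_target(self, msg):
            # Clamp the commanded speed to a safe envelope before relaying.
            msg.twist.linear.x = max(0.0, min(msg.twist.linear.x, 15.0))  # m/s
            self.pub.publish(msg)

    if __name__ == '__main__':
        rospy.init_node('speed_relay')
        SpeedRelay()
        rospy.spin()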

CAV Installation Guide: The CATS Lab assembled the vehicles in house through extensive trial and error. To help peer researchers assemble similar platforms, a detailed guide for installing the key AV components has been documented and can be accessed here.

The CATS Lab also hosts a fleet of scaled CAVs (controlled by PIC18F4550 and Arduino Uno microcontrollers). The scaled CAVs can be used to test CAV control algorithms in a controlled environment at low cost and with zero safety hazards, while still incorporating the limits and uncertainties of physical control and communications. The figure below shows a platooning test with (a) initially separate CAVs, (b) platooning, and (c) unplatooning. CATS Scaled CAV

(Figure: scaled CAV platooning test)
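As an illustration of what such a platooning test exercises, the sketch below implements a generic constant-time-gap car-following law of the kind widely used in CACC studies; the gains, time gap, and acceleration bounds are illustrative assumptions, not the lab's control algorithm.

    # Constant-time-gap platooning controller sketch (illustrative only).
    # Each follower regulates spacing to its predecessor with gap and
    # speed-difference feedback.

    def platoon_accel(gap, v_self, v_pred, time_gap=1.2, standstill=0.5,
                      k_gap=0.45, k_speed=0.8, a_max=2.0):
        """Return a bounded acceleration command in m/s^2.

        gap    -- bumper-to-bumper distance to the predecessor (m)
        v_self -- follower speed (m/s)
        v_pred -- predecessor speed (m/s)
        """
        desired_gap = standstill + time_gap * v_self       # spacing policy
        accel = k_gap * (gap - desired_gap) + k_speed * (v_pred - v_self)
        return max(-a_max, min(accel, a_max))              # actuator limits

    # Example: gap exactly at its desired value, predecessor 0.5 m/s slower,
    # so the follower gently decelerates (-0.4 m/s^2).
    if __name__ == "__main__":
        print(platoon_accel(gap=11.9, v_self=9.5, v_pred=9.0))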
 

Testbeds & Field Experiments

(Figure: L3 CAV experiments in mixed traffic)

The CATS Lab has developed a customized software platform that enables Level-3 (L3) longitudinal and lateral control of CAVs in mixed traffic, including interactions with connected vehicle infrastructure (e.g., coordination with signal timing). The platform incorporates ROS-based open-source packages such as CARMA and Autoware along with a set of customized algorithms. The figure at left shows screenshots of L3 CAV experiments in mixed traffic with human-driven vehicles (HVs): (a) lane changing (Wang et al., 2019a) and (b) trajectory smoothing at a traffic signal (Wang et al., 2019b). Relevant Videos
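The idea behind the trajectory-smoothing experiment can be summarized in a few lines: instead of braking to a stop at a red light, the CAV glides toward the stop bar at a speed chosen to arrive at the onset of green. The sketch below is a simplified illustration with assumed speed bounds, not the published algorithm from Wang et al. (2019b).

    # Signal-aware speed advisory sketch (simplified illustration).

    def glide_speed(dist_to_stopbar, time_to_green, v_min=3.0, v_max=15.0):
        """Return an advised cruise speed (m/s) to reach the stop bar
        right as the signal turns green, avoiding a full stop."""
        if time_to_green <= 0.0:
            return v_max                      # already green: proceed
        v = dist_to_stopbar / time_to_green   # constant-speed glide
        return max(v_min, min(v, v_max))      # keep within plausible bounds

    # Example: 120 m from the stop bar with green in 10 s -> advise 12 m/s.
    print(glide_speed(120.0, 10.0))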

We have conducted a number of field tests in partnership with SunTrax, the USF campus, Busch Gardens (including its Henderson Field, a former airport runway), and the Tampa Uptown District, shown in the left figure below. AV Testing at Busch Gardens

Further, we have developed the USF Safe and Connected Campus Community Testbed (SCCCT) concept, which aims to strengthen transportation connections in the university area and shrink the perceived and actual barriers to safe, efficient, and sustainable travel for commuters and visitors. The right figure below shows the USF Tampa campus boundaries, with markers at the locations of the 13 campus traffic signals. Under the SCCCT vision, we are upgrading these signals with vehicle-to-infrastructure (V2I) communication capabilities via dedicated short-range communications (DSRC). In the first phase, we are retrofitting the four signalized intersections along USF Alumni Drive with V2I RSUs (highlighted in the red rectangle) to form a connected corridor. With the upgraded signals, we can experiment with and analyze communications between vehicles and infrastructure.

(Left figure: AV field testing sites)
(Right figure: USF Tampa campus SCCCT map)
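What the upgraded signals broadcast is standardized by SAE J2735 as Signal Phase and Timing (SPaT) messages. The sketch below shows how an on-board application might consume such a message; the simplified dictionary layout and field names are assumptions for illustration, not a real J2735 decoder.

    # Sketch of consuming a SPaT-style (SAE J2735) broadcast from an RSU.
    # The message layout below is a simplified assumption.

    def seconds_to_green(spat, signal_group):
        """Return seconds until the given signal group turns green
        (0.0 if it is already green)."""
        for movement in spat["movements"]:
            if movement["signal_group"] == signal_group:
                if movement["state"] == "green":
                    return 0.0
                return movement["time_to_change"]
        raise KeyError("signal group %d not in SPaT message" % signal_group)

    # Example message an RSU at one corridor intersection might emit.
    spat_msg = {
        "intersection_id": 1001,
        "movements": [
            {"signal_group": 2, "state": "red", "time_to_change": 7.5},
            {"signal_group": 4, "state": "green", "time_to_change": 12.0},
        ],
    }
    print(seconds_to_green(spat_msg, 2))  # -> 7.5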


Video-Based Intelligent Road Traffic Universal Analysis Tool (VIRTUAL)

The University of South Florida (USF) team led by Dr. Li has developed the Video-Based Intelligent Road Traffic Universal Analysis Tool (VIRTUAL), which currently automates the collection of vehicle trajectories on freeways and at roundabouts (see Figures 1(a) and 1(b)). The core of the tool is built on deep-learning-based computer vision and pattern recognition, and its algorithm comprises three steps: vehicle identification and tracking, camera rotation and shifting correction, and semi-automatic lane identification. The tool identifies vehicles using an advanced convolutional neural network model and tracks them according to their movement properties; it calculates camera rotation and shifting parameters using a binary-search, correlation-based matching algorithm; and it semi-automatically generates lane structures from the topographic properties of the extracted trajectories. The tool has been used in an FHWA project to extract high-definition trajectories from close to 50 drone/helicopter videos recorded at different sites, including multi-lane freeways with complex traffic conditions (one of which is a two-mile segment of I-75 in Tampa). The data will be made available to the public in a few months. The results have been communicated to US DOT, US DOE, state DOTs, and other stakeholders, all of whom have shown great interest in using this dataset in their transportation- and energy-related studies. VIRTUAL

(Figure 1: VIRTUAL trajectory extraction examples)
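To give a flavor of the camera-correction step, the sketch below estimates frame-to-frame camera shift with standard phase correlation. It recovers only integer pixel translations and does not reproduce VIRTUAL's binary-search correlation matching or its rotation correction.

    # Correlation-based camera-shift estimation sketch (illustrative; not
    # VIRTUAL's actual algorithm). Recovers the integer (dy, dx) translation
    # of frame `cur` relative to frame `ref` via phase correlation.
    import numpy as np

    def estimate_shift(ref, cur):
        f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)
        corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map wrap-around peak indices to signed shifts.
        if dy > ref.shape[0] // 2:
            dy -= ref.shape[0]
        if dx > ref.shape[1] // 2:
            dx -= ref.shape[1]
        return int(dy), int(dx)

    # Example: a synthetic frame shifted by (3, -5) pixels is recovered.
    rng = np.random.default_rng(0)
    ref = rng.random((240, 320))
    cur = np.roll(ref, shift=(3, -5), axis=(0, 1))
    print(estimate_shift(ref, cur))  # -> (3, -5)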

  • Reference: Li, X., Zhao, D. Video-Based Intelligent Road Traffic Universal Analysis Tool (VIRTUAL). Provisional Patent No. 62/701,978, July 2018.