
Logic Design of Point Cloud Based Convolutional Neural Network Accelerators


LiDAR-based point cloud convolutional neural networks have been widely used in recent 3D object detection tasks. However, for model inference in edge applications, performance is restricted by dense computational processing and limited platform resources such as power supply and memory. Hardware implementation of the inference is one solution for reaching high accuracy under relatively low power consumption and latency. For FPGA or ASIC architectures, the logic design of the system is the prerequisite. In this thesis, we first rebuild the inference path of a well-known network, VoxelNet, and then propose a system-level logic design for hardware implementation, including processing elements (PEs), on-chip input/output buffers, AXI4-based external memory read/write logic, and a system controller implemented as a finite state machine (FSM). Finally, through simulation, we implement and verify 3 x 3 convolution and fully connected logic designs with 4 in-channel and 4 out-channel parallelized processing under int8 quantization and a 32-bit memory bandwidth. To summarize, after a study of point cloud-based networks, we propose and verify a PE stack-based accelerator logic design in which the parallel strategy for convolution can easily be specified by modifying the in-channel and out-channel stacks. Compared to sequential processing, the speedup equals the number of in-channel stacks times the number of out-channel stacks.
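
The parallel strategy can be pictured with a short loop-level sketch in C. This is not taken from the thesis; the channel counts C_IN and C_OUT, the stack constants, and the pe_mac() helper are illustrative assumptions, chosen only to show which loop levels the PE stacks unroll.

    #include <stdint.h>

    #define K          3   /* 3 x 3 kernel                          */
    #define C_IN       16  /* example channel counts (assumed)      */
    #define C_OUT      16
    #define IN_STACKS  4   /* parallel in-channel PE stacks         */
    #define OUT_STACKS 4   /* parallel out-channel PE stacks        */

    /* one int8 multiply-accumulate, standing in for a single PE */
    static inline int32_t pe_mac(int8_t a, int8_t w, int32_t acc) {
        return acc + (int32_t)a * (int32_t)w;
    }

    /* One output pixel of a 3 x 3 convolution. The two inner channel
       loops model the IN_STACKS x OUT_STACKS PEs that would run in
       parallel in hardware; out[] is assumed to be pre-initialized
       (e.g. with bias values) by the caller. */
    void conv3x3_pixel(const int8_t act[C_IN][K][K],
                       const int8_t wgt[C_OUT][C_IN][K][K],
                       int32_t out[C_OUT])
    {
        for (int oc0 = 0; oc0 < C_OUT; oc0 += OUT_STACKS)          /* sequential     */
            for (int ic0 = 0; ic0 < C_IN; ic0 += IN_STACKS)        /* sequential     */
                for (int oc = oc0; oc < oc0 + OUT_STACKS; oc++)    /* parallel in HW */
                    for (int ic = ic0; ic < ic0 + IN_STACKS; ic++) /* parallel in HW */
                        for (int ky = 0; ky < K; ky++)
                            for (int kx = 0; kx < K; kx++)
                                out[oc] = pe_mac(act[ic][ky][kx],
                                                 wgt[oc][ic][ky][kx],
                                                 out[oc]);
    }

With IN_STACKS = OUT_STACKS = 4, the sequential channel iterations per output pixel drop from C_IN * C_OUT to (C_IN / 4) * (C_OUT / 4), i.e. the 16x speedup implied by the in-channel-stacks times out-channel-stacks rule stated above.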

Identifier
  • etd-22536
Year
  • 2021
Date created
  • 2021-05-05

Permanent link to this page: https://digital.wpi.edu/show/mg74qp96f