Applying Explainable AI to Taxi Driver Classification

As data grows more complex and more powerful deep learning architectures are applied to it, the path a model takes to its solution becomes far less interpretable at the surface level. In recognition of this issue, more developers are incorporating explainable AI into their complex solutions to make them more understandable. This places greater emphasis on human understanding, which is crucial given the degree of complexity that the most powerful solutions introduce. Our research aims to showcase the application of explainable AI to a complex, hard-to-interpret solution in the urban intelligence domain, specifically traffic data. In this paper, we build a deep classification model that predicts the identity of the driver responsible for a taxi trip, train it on complex data, and use explainable AI to better interpret the model's results.
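The workflow the abstract describes — train a classifier to predict a trip's driver, then apply an explainable-AI method to interpret it — can be sketched in miniature. This is a hypothetical illustration, not the project's actual pipeline: it substitutes a random forest for the deep model and uses permutation feature importance as the XAI technique, and the feature names and data are entirely synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative stand-ins for per-trip features; names are assumptions.
feature_names = ["avg_speed", "trip_distance", "start_hour", "idle_time"]
n_drivers, n_trips = 5, 1000

# Synthetic trips: each driver gets a distinct "style" offset so that
# driver identity is actually learnable from the features.
X = rng.normal(size=(n_trips, len(feature_names)))
y = rng.integers(0, n_drivers, size=n_trips)
X += y[:, None] * 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: measure how much held-out accuracy drops
# when each feature column is shuffled, breaking its relationship
# to the driver label. Larger drop = more important feature.
result = permutation_importance(clf, X_te, y_te, n_repeats=10,
                                random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

The same model-agnostic interrogation applies to a deep classifier: the explainer only needs the trained model's predictions, which is what makes such post-hoc methods attractive for otherwise opaque architectures.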

  • This report represents the work of one or more WPI undergraduate students submitted to the faculty as evidence of completion of a degree requirement. WPI routinely publishes these reports on its website without editorial or peer review.
Identifier
  • 95176
  • E-project-032423-131432
Year
  • 2023
Date created
  • 2023-03-24
Source
  • E-project-032423-131432
Last modified
  • 2023-04-19

Permanent link to this page: https://digital.wpi.edu/show/3n204231q