
Explaining Deep Time Series Classifiers

Deep neural networks are being used to build autonomous systems that perceive, learn, decide, and act on their own. However, state-of-the-art deep learning models lack transparency in how they make their predictions. Explainable classification is essential in high-impact settings where practitioners require evidence to support their decisions. Various saliency methods have been developed to summarize where a deep neural network "looks" in the input as evidence for its prediction. One increasingly popular approach is attribution-based explainability, which estimates the impact of each input feature on the model's prediction. These methods were designed for images, however, and very little has been done to explain deep time series classifiers. Unlike images, where a pixel has a predefined scale and representation (i.e., intensities ranging from 0 for black to 255 for white) shared across all image datasets, the value distribution of time series varies vastly across datasets. Moreover, in time series classification, short contiguous subsequences often carry much of the discriminative information, yet existing explainability methods treat all input features independently, ignoring correlations and possibly disrupting these discriminative subsequences.

In this work, we study this problem and propose PERT, a novel perturbation-based explainability method designed to explain deep classifiers' decisions on time series. PERT extends beyond recent perturbation methods to generate a saliency map that assigns an importance value to each timestep of the instance of interest. First, PERT uses a Prioritized Replacement Selector, which learns to sample a replacement time series from a large dataset so that perturbations are meaningful and avoid creating network artifacts. Second, PERT mixes the instance with the replacements using a Guided Perturbation Strategy, which learns to what degree each timestep can be perturbed without altering the classifier's final prediction. Together, these two steps learn to identify the fewest, most impactful timesteps that explain the classifier's prediction.

We evaluate PERT using three metrics on nine popular datasets with two black-box models: a Fully Connected Network and a Recurrent Neural Network. The chosen datasets span sequence lengths from 96 to 720. We find that PERT consistently outperforms all five state-of-the-art baseline methods by a margin of 26%. Through case studies, we also demonstrate that PERT succeeds in finding the relevant regions of the input time series.
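To make the perturbation step concrete, below is a minimal sketch of the kind of per-timestep mixing the abstract describes. It assumes the saliency map acts as a mask m with values in [0, 1] over the T timesteps; the function name perturb, the toy data, and the hand-set mask are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def perturb(x, r, m):
    """Per-timestep convex mixture of an instance x and a replacement r.

    x, r : arrays of shape (T,) -- instance of interest and replacement series
    m    : array of shape (T,) with values in [0, 1] -- saliency mask;
           m[t] near 1 keeps x[t], m[t] near 0 substitutes r[t]
    """
    return m * x + (1.0 - m) * r

# Toy usage: preserve one contiguous subsequence, perturb everything else.
T = 96
x = np.sin(np.linspace(0.0, 6.0 * np.pi, T))    # instance of interest
r = np.random.default_rng(0).normal(size=T)     # sampled replacement series
m = np.zeros(T)
m[40:60] = 1.0                                  # hypothetical learned mask
x_pert = perturb(x, r, m)
print(x_pert.shape)                             # (96,)
```

In the full method, the mask and the replacement sampling are learned jointly so that each timestep is perturbed as much as possible without altering the classifier's prediction; the timesteps that cannot be perturbed are the most impactful ones reported in the saliency map.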

Identifier
  • etd-43226
Year
  • 2021
Date created
  • 2021-12-15


Permanent link to this page: https://digital.wpi.edu/show/2514np705