Recommendation for Machine Learning Interpretability options for a SeriesNetwork object?

Views: 1 (last 30 days)
Hello –
I have a trained network (an LSTM) for time-series regression that is a SeriesNetwork object:
SeriesNetwork with properties:
Layers: [6×1 nnet.cnn.layer.Layer]
InputNames: {'sequenceinput'}
OutputNames: {'regressionoutput'}
I have used some canned routines for machine learning interpretability (e.g., shapley, lime, plotPartialDependence) that work great with some object types (e.g., RegressionSVM) but not with SeriesNetwork objects; a minimal sketch of the working case is below. The relevant interpretability functions I have read about appear to be intended for image classification, for example, rather than for time-series regression.
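For context, here is a rough sketch of the kind of call that works with a supported model type (untested; the table name Tbl and response name Y are placeholders, not my actual data):
mdl = fitrsvm(Tbl, "Y");                                 % returns a RegressionSVM object
explainer = shapley(mdl, Tbl(:, mdl.PredictorNames));    % supported object type, so this works
explainer = fit(explainer, Tbl(1, mdl.PredictorNames));  % Shapley values for one query point
plot(explainer)
% Passing a SeriesNetwork instead of mdl errors, since shapley expects a
% Statistics and Machine Learning Toolbox model (or a function handle over
% tabular predictors), not a deep network with sequence input.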
My question is thus: Can you recommend a machine learning interpretability function for use with a SeriesNetwork object built for regression? I am confident such a function exists, but I can’t seem to find it. Any and all help would be greatly appreciated.
Thank you in advance.

Answers (1)

Shivansh on 8 Nov 2023
Edited: Shivansh on 8 Nov 2023
Hi Bart,
I understand that you want to find a machine learning interpretability function for use with a SeriesNetwork object built for regression.
You can use the gradCAM function for time-series models. The MATLAB documentation includes an example of applying Grad-CAM to a time-series classification model.
The method is designed specifically for convolutional networks, so it may not give good results for LSTMs.
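For what it's worth, a minimal sketch of what the call might look like for a sequence regression network is below (untested; net and XTest are placeholder names, and the reduction function here simply selects the first response):
% "net" is the trained SeriesNetwork; "XTest" is one observation given as a
% numFeatures-by-numTimeSteps array. Both names are placeholders.
reductionFcn = @(Y) Y(1);                      % reduce the regression output to a scalar
scoreMap = gradCAM(net, XTest, reductionFcn);  % one importance value per time step
figure
plot(scoreMap)
xlabel("Time step")
ylabel("Grad-CAM importance")
If the network has no convolutional layers, you may also need to pick a suitable layer via the 'FeatureLayer' name-value argument, and as noted above the results for an LSTM may be of limited quality.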
Hope it helps!
