| Name | Description | Size | Format |
|---|---|---|---|
| | | 6.15 MB | Adobe PDF |
Authors
Passos, Dário
Advisor(s)
Abstract(s)
One of the criticisms that deep chemometric models usually face is their lack of explainability. In this work, three different explainability methods (Regression Coefficients, LIME and SHAP) are applied to different convolutional neural network (CNN) architectures, previously optimized for the task of multifruit dry matter content prediction based on NIR spectra. Additionally, a convolutional filter characterization is also performed to help clarify the type of modelling performed by the convolutional layers. The analysis made it possible to extract information about the wavelength bands relevant to the models’ performance (feature importance) and to understand how different convolutional layer topologies transform the spectra, leading to three types of modelling: data-driven preprocessing, dimensionality reduction and hierarchical feature extraction. Feature importance analysis indicates that the relevant spectral bands used by the different CNN architectures for prediction of dry matter are essentially the same. They coincide with the bands relevant to PLS, and these bands can be attributed to specific known vibrational groups. Moreover, in the context of the multifruit prediction task, the analysis also indicates that CNNs tend to identify and use spectral features that are informative across different fruit spectra, much like the domain-invariant features identified by di-CovSel variable selection.
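To illustrate the general idea behind the feature importance analyses mentioned above, the following is a minimal NumPy sketch of occlusion-based importance for a spectral regression model. It is not the authors' pipeline: the "model" here is a hypothetical linear stand-in for a trained CNN, and the band locations and window size are invented for the example.

```python
import numpy as np

# Hypothetical stand-in for a trained CNN regressor: a linear model whose
# weights concentrate on two invented "vibrational" bands (indices 40-60
# and 120-140 of a 200-point spectrum).
rng = np.random.default_rng(0)
n_wavelengths = 200
weights = np.zeros(n_wavelengths)
weights[40:60] = 0.8
weights[120:140] = 0.5

def model_predict(spectra):
    """Predict dry matter content from spectra (rows = samples)."""
    return spectra @ weights

def occlusion_importance(spectrum, window=10):
    """Score each wavelength by how much the prediction changes when a
    sliding window around it is occluded (replaced by the spectrum mean)."""
    baseline = model_predict(spectrum[None, :])[0]
    importance = np.zeros(n_wavelengths)
    fill = spectrum.mean()
    for start in range(n_wavelengths - window + 1):
        perturbed = spectrum.copy()
        perturbed[start:start + window] = fill
        delta = abs(model_predict(perturbed[None, :])[0] - baseline)
        importance[start:start + window] += delta
    return importance / importance.max()  # normalize to [0, 1]

# Synthetic spectrum; the importance profile should peak inside the
# informative bands, mirroring the wavelength-band analysis in the text.
spectrum = rng.normal(1.0, 0.1, n_wavelengths)
imp = occlusion_importance(spectrum)
top_band = int(np.argmax(imp))
print(top_band)
```

Methods such as LIME and SHAP refine this perturbation idea with local surrogate models and game-theoretic attribution, respectively, but the underlying question is the same: which wavelength regions drive the prediction.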
Description
Keywords
Fruit internal quality; NIR spectroscopy; Convolutional neural networks; Chemometrics; ML explainability
Educational Context
Citation
Publisher
Elsevier
