A Multiscale Feature Extraction Network Based on Channel Spatial Attention for Electromyographic Signals

Abstract:

Discriminant features captured from hyperspectral images (HSIs) can be used to accurately distinguish ground objects and materials for Earth observation. Typically, this process is hampered by inherent channel correlations and sample arrangements in the spectral and spatial domains. Deep learning (DL) networks play a substantial role in capturing discriminant features; however, their performance degrades when all information is modeled with equal importance. To tackle this problem, we propose a deep spectral–spatial feature fusion-based multiscale adaptable attention network (SF2MSA2N). Our motivation is to strengthen the locally significant segments of spectral–spatial features through dual multiscale adaptable strategies, promoting adequate fusion and collaboration between spectral and spatial features. Dual multiscale adaptable attention mechanisms are proposed to enhance the extraction of key informative features. Specifically, the spatialwise attention provides an adaptable weight construction strategy for neighboring regions by unifying two metrics, while the spectralwise attention adaptively strengthens the significant spectral channels within local channel segments. Their adaptive multiscale frameworks reduce the effect of imbalanced information. Inspired by multimodal compact bilinear pooling (MCBP), we design an outer product-based feature fusion strategy and its convolution-based variant for better feature extraction (FE) performance. Multiple experiments conducted on four typical hyperspectral datasets demonstrate that SF2MSA2N is capable of extracting better features than several state-of-the-art techniques.
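To make the core ideas concrete, the following is a minimal illustrative sketch (not the paper's actual SF2MSA2N architecture) of the three ingredients the abstract names: a channel-wise gate that reweights spectral channels, a spatial gate that reweights neighboring locations, and an outer-product (bilinear) fusion of the resulting spectral and spatial descriptors. All function names, shapes, and the toy data are assumptions made for illustration only.

```python
import numpy as np

def channel_attention(x):
    """Reweight each spectral channel of a (C, H, W) cube by a sigmoid
    gate on its global mean response (a squeeze-and-excitation-style
    simplification of spectralwise attention)."""
    w = x.mean(axis=(1, 2))               # squeeze: one scalar per channel
    w = 1.0 / (1.0 + np.exp(-w))          # sigmoid gate in (0, 1)
    return x * w[:, None, None]           # broadcast back over H, W

def spatial_attention(x):
    """Reweight each spatial location by a sigmoid gate on its
    cross-channel mean response (a simplified spatialwise attention)."""
    m = x.mean(axis=0)                    # (H, W) attention map
    m = 1.0 / (1.0 + np.exp(-m))
    return x * m[None, :, :]              # broadcast back over channels

def outer_product_fusion(f_spec, f_spat):
    """Bilinear fusion: the outer product of a spectral descriptor and a
    spatial descriptor, flattened into one fused feature vector (the
    uncompressed form that MCBP-style methods approximate compactly)."""
    return np.outer(f_spec, f_spat).ravel()

# Toy spectral-spatial cube: 8 channels over a 5x5 neighborhood.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 5))

f_spec = channel_attention(x).mean(axis=(1, 2))     # (8,) spectral descriptor
f_spat = spatial_attention(x).mean(axis=0).ravel()  # (25,) spatial descriptor
fused = outer_product_fusion(f_spec, f_spat)        # (200,) fused feature
print(fused.shape)
```

Note that the raw outer product grows as the product of the two descriptor lengths, which is precisely why compact approximations such as MCBP (or the convolution-based variant mentioned above) are attractive in practice.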