Proceedings: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, No. 7: AAAI-22 Technical Tracks 7
Track: AAAI Technical Track on Machine Learning II
Abstract:
In multi-view multi-label learning (MVML), each instance is described by several heterogeneous feature representations and is associated with multiple valid labels simultaneously. Although diverse MVML methods have been proposed over the last decade, most previous studies focus on leveraging a shared subspace across different views to represent the multi-view consensus information, and it remains an open question whether such a shared subspace representation is necessary when formulating the desired MVML model. In this paper, we propose a DeepGCN based View-Specific MVML method (D-VSM), which bypasses the search for a shared subspace representation and instead directly encodes the feature representation of each individual view through a deep GCN, coupling it with the information derived from the other views. Specifically, we first build a feature graph over all instances for each feature representation, and then merge these graphs into a unified graph by connecting the different feature representations of each instance. Afterwards, a graph attention mechanism is adopted to aggregate and update all nodes on the unified graph into a structural representation for each instance, so that both intra-view correlations and inter-view alignments are jointly encoded to discover the underlying semantic relations. Finally, we derive a label confidence score for each instance by averaging the label confidences of its different feature representations under the multi-label soft margin loss. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods.
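To make the pipeline in the abstract concrete, below is a minimal sketch of the described steps in PyTorch / PyTorch Geometric: per-view feature graphs merged into one unified graph, graph attention aggregation, and per-instance label confidences averaged over views under the multi-label soft margin loss. All names (ViewSpecificMVML, build_unified_graph), the kNN graph construction, and the sizes (k=10, hidden=128, two GAT layers) are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn
from torch_geometric.nn import GATConv


def build_unified_graph(views, k=10):
    """Build one graph over all (instance, view) nodes.

    views: list of [n, d_v] feature tensors, one per view.
    Intra-view edges: kNN in each view's feature space (an assumed choice).
    Inter-view edges: link the nodes of the same instance across views.
    Returns edge_index of shape [2, num_edges].
    """
    n = views[0].size(0)
    num_views = len(views)
    edges = []
    for v, x in enumerate(views):
        offset = v * n
        # cosine-similarity kNN within this view, self excluded
        xn = torch.nn.functional.normalize(x, dim=1)
        sim = xn @ xn.t()
        sim.fill_diagonal_(-float("inf"))
        nbrs = sim.topk(k, dim=1).indices          # [n, k]
        src = torch.arange(n).repeat_interleave(k) + offset
        dst = nbrs.reshape(-1) + offset
        edges.append(torch.stack([src, dst]))
    # inter-view alignment edges between copies of the same instance
    for v in range(num_views):
        for w in range(num_views):
            if v != w:
                idx = torch.arange(n)
                edges.append(torch.stack([idx + v * n, idx + w * n]))
    return torch.cat(edges, dim=1)


class ViewSpecificMVML(nn.Module):
    """View-specific encoding via GAT on the unified graph (sketch)."""

    def __init__(self, view_dims, hidden=128, num_labels=20, heads=4):
        super().__init__()
        # per-view projections into a common node-feature size
        self.proj = nn.ModuleList(nn.Linear(d, hidden) for d in view_dims)
        self.gat1 = GATConv(hidden, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, hidden, heads=1)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, views, edge_index):
        n = views[0].size(0)
        # stack all view-specific nodes into one node matrix
        x = torch.cat([p(v) for p, v in zip(self.proj, views)], dim=0)
        x = torch.relu(self.gat1(x, edge_index))
        x = torch.relu(self.gat2(x, edge_index))
        logits = self.classifier(x)                # [num_views * n, L]
        # average label confidence over each instance's view nodes
        return logits.view(len(views), n, -1).mean(dim=0)


# usage sketch with synthetic data
views = [torch.randn(100, 64), torch.randn(100, 32)]
targets = (torch.rand(100, 20) > 0.8).float()
edge_index = build_unified_graph(views, k=10)
model = ViewSpecificMVML([64, 32], num_labels=20)
loss = nn.MultiLabelSoftMarginLoss()(model(views, edge_index), targets)

Averaging per-view logits before the loss keeps each view's encoding specific while still letting the attention layers mix information across views through the alignment edges, which mirrors the abstract's "bypass the shared subspace" framing under these assumptions.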
DOI: 10.1609/aaai.v36i7.20731