Learning Product Graphs From Spectral Templates


Abstract:

Graph Learning (GL) is at the core of leveraging connections in machine learning (ML). By observing a dataset of graph signals and making specific assumptions, Graph Signal Processing (GSP) provides practical constraints for GL. Inferring a graph with desired frequency signatures, i.e., spectral templates, from stationary graph signals has attracted great attention. However, a severe computational burden is a challenging barrier, especially for inference from high-dimensional product graph signals, i.e., graph signals that live on the product of smaller factor graphs. The few existing product GL methods have mostly been proposed for inference under a smoothness assumption; they are limited to learning only two factor graphs, handle only Cartesian products, and have not addressed GL with desired spectral templates. To bridge these gaps, we propose a method for learning product graphs from product graph signals in which the product GL problem is broken into separate optimizations, each associated with one (significantly smaller) factor graph. Moreover, unlike current approaches, our method can learn any type of product graph (possibly with more than two factor graphs) without knowing the type of graph product beforehand, and with significantly lower complexity than learning directly from product graph signals. In addition to deriving theoretical sufficient recovery conditions and validating them numerically, experimental results on synthetic and real-world data, i.e., multi-view image and brain signal analysis, yield meaningful factor graphs supported by expert-related research, as well as superiority over current methods for learning from spectral templates.
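The factorization exploited here rests on a standard spectral property of product graphs: for a Cartesian product, the Laplacian is L = L1 ⊗ I + I ⊗ L2, so its eigenvectors (the spectral templates) are Kronecker products of the factor eigenvectors and its eigenvalues are sums of factor eigenvalues. The sketch below is only an illustrative check of this property on small hypothetical path-graph factors, not the paper's learning algorithm; all sizes and graphs are assumptions for demonstration.

```python
import numpy as np

def path_laplacian(n):
    # Combinatorial Laplacian of a path graph on n nodes (illustrative factor graph)
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

L1, L2 = path_laplacian(3), path_laplacian(4)

# Cartesian product Laplacian: L = L1 (+) L2 = L1 kron I + I kron L2
L = np.kron(L1, np.eye(4)) + np.kron(np.eye(3), L2)

# Eigendecompositions of the (much smaller) factor graphs
w1, V1 = np.linalg.eigh(L1)
w2, V2 = np.linalg.eigh(L2)

# Product eigenvectors are Kronecker products of factor eigenvectors,
# with eigenvalues adding: L (v1 kron v2) = (lam1 + lam2) (v1 kron v2)
V = np.kron(V1, V2)
lam = np.add.outer(w1, w2).ravel()  # ordering matches np.kron's column order
assert np.allclose(L @ V, V * lam)
```

Because the product spectral templates decompose this way, a GL problem posed on the full 12-node product graph above can in principle be handled through the two factor graphs of 3 and 4 nodes, which is the source of the complexity reduction the abstract describes.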