Abstract:
|
Multimodal data, corresponding to matched sets of measurements on the same subjects, have become increasingly common with technological advances in genomics, neuroscience, and wearable technologies, to name a few. Often, the subjects are separated into known classes, and it is of interest to find associations between the views that are related to the class membership. Existing classification methods can be applied either to each dataset separately or to the concatenated dataset, without taking the associations between the views into account. On the other hand, existing association methods cannot directly incorporate class information. We propose a joint framework for simultaneous classification and association analysis of multi-view data. We incorporate sparse regularization to make the method suitable for high-dimensional settings, and we develop an efficient optimization algorithm. In addition to the joint learning framework, a distinct advantage of our approach is its ability to use partial information: it can be applied both in settings with missing class labels and in settings with missing subsets of views. Numerical studies support the advantages of the proposed approach.
|