Multi-view classification with convolutional neural networks

Abstract

Human decision making often relies on visual information gathered from different views or perspectives. In machine-learning-based image classification, however, we typically infer an object’s class from a single image of that object. Especially for challenging classification problems, the visual information conveyed by a single image may be insufficient for an accurate decision. We propose a classification scheme that fuses visual information captured in images depicting the same object from multiple perspectives. Convolutional neural networks are used to extract and encode visual features from the multiple views, and we propose strategies for fusing this information. More specifically, we investigate the following three strategies: (1) fusing convolutional feature maps at differing network depths; (2) fusing bottleneck latent representations prior to classification; and (3) score fusion. We systematically evaluate these strategies on three datasets from different domains. Our findings emphasize the benefit of integrating information fusion into the network rather than performing it by post-processing of classification scores. Furthermore, we demonstrate through a case study that already-trained networks can easily be extended with the best fusion strategy, outperforming other approaches by a large margin.
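The three fusion strategies can be sketched with toy NumPy tensors. This is a minimal illustration of where each fusion step sits in the pipeline, not the speaker's implementation; all shapes, variable names, and the choice of element-wise max for feature-map fusion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CNN outputs for two views of the same object.
# Shapes are illustrative: feature maps (C, H, W), bottleneck
# vectors (D,), and per-class logit vectors (K,).
n_classes = 5
fmap_a, fmap_b = rng.normal(size=(2, 8, 4, 4))        # conv feature maps
z_a, z_b = rng.normal(size=(2, 16))                   # bottleneck latents
logits_a, logits_b = rng.normal(size=(2, n_classes))  # classifier scores

# (1) Feature-map fusion: combine the per-view maps element-wise
# (here: max); the fused map would continue through the remaining
# convolutional layers of the shared network.
fused_fmap = np.maximum(fmap_a, fmap_b)

# (2) Bottleneck fusion: concatenate the latent vectors and feed the
# joint representation to the classification head.
fused_latent = np.concatenate([z_a, z_b])

# (3) Score fusion: average per-view class probabilities as a
# post-processing step, after each view is classified independently.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

fused_probs = (softmax(logits_a) + softmax(logits_b)) / 2
prediction = int(np.argmax(fused_probs))
```

Strategies (1) and (2) fuse inside the network, so the layers after the fusion point can learn from the combined views; strategy (3) only merges final scores, which matches the abstract's finding that in-network fusion outperforms score post-processing.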

Date
Jan 15, 2025 12:00 PM — 12:30 PM
Event
EMIL Spring'25 Seminars
Location
Online (Zoom)
Nooshin Taheri Chatrudi
Graduate Teaching Assistant

I am a Ph.D. student at the College of Health Solutions, Arizona State University (ASU). Currently, I am working under the supervision of Dr. Hassan Ghasemzadeh at the Embedded Machine Intelligence Lab (EMIL). My research interests include machine learning, clinical informatics, and health monitoring system development.