Wearable technologies play a central role in human-centered Internet-of-Things applications. Wearables leverage machine learning algorithms to detect events of interest such as physical activities and medical complications. A major obstacle to the large-scale adoption of current wearables is that their computational algorithms must be rebuilt from scratch upon any change in configuration. Retraining these algorithms requires a significant amount of labeled training data, a process that is labor-intensive and time-consuming. We propose an approach for automatically retraining machine learning algorithms in real time, without the need for any labeled training data. We measure the inherent correlation between observations made by an old sensor view, for which trained algorithms exist, and a new sensor view, for which an algorithm needs to be developed. Our multi-view learning approach can be used in both online and batch modes. By applying autonomous multi-view learning in batch mode, we achieve an activity-recognition accuracy of 83.7 percent, an improvement of 9.3 percent due to the automatic labeling of the data in the new sensor node. The online learning algorithm, which additionally offers the lower computational cost of incremental training, achieves an activity-recognition accuracy of 82.2 percent.
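The core idea of labeling a new sensor view with an existing model can be illustrated with a minimal sketch. This is not the paper's exact algorithm; it uses hypothetical synthetic data and scikit-learn classifiers purely to show the transfer pattern: a classifier trained on the old view pseudo-labels time-synchronized observations, and those pseudo-labels train the new view's classifier with no hand labeling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical synthetic data: time-synchronized observations from two
# sensor views, correlated through the shared (unknown) activity labels.
n = 300
labels = rng.integers(0, 3, size=n)                      # 3 activity classes
old_view = labels[:, None] + rng.normal(0, 0.3, (n, 4))  # existing sensor node
new_view = 2 * labels[:, None] + rng.normal(0, 0.3, (n, 4))  # new sensor node

# 1) A classifier already trained on the old sensor view.
old_clf = LogisticRegression(max_iter=1000).fit(old_view, labels)

# 2) Auto-label the synchronized new-view data with the old view's predictions.
pseudo_labels = old_clf.predict(old_view)

# 3) Train the new sensor's classifier using only the pseudo-labels.
new_clf = LogisticRegression(max_iter=1000).fit(new_view, pseudo_labels)

# Evaluate against the true labels (available here only because the data
# is synthetic; in deployment no ground truth would be needed).
accuracy = new_clf.score(new_view, labels)
```

An online variant would replace step 3 with an incremental learner (e.g. `SGDClassifier.partial_fit`) that consumes pseudo-labeled samples as they arrive.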