Featured

Trustworthy AI in Digital Health: A Comprehensive Review of Robustness and Explainability

We present a structured overview of methods, challenges, and solutions, aiming to support researchers and practitioners in developing reliable and explainable AI systems for digital health. The paper also provides detailed discussions of contributions toward robustness and explainability in digital health, the development of trustworthy AI systems in the era of LLMs, and evaluation metrics for measuring trust and related properties such as validity, fidelity, and diversity.

Improving Shape Bias in Learnable Geometric Moment Representations

This paper revisits Deep Geometric Moments (DGM), a framework for learning shape-aware, geometry-aligned visual representations, by replacing the original ResNet backbone with ConvNeXt, a modern high-performing ConvNet. Using ConvNeXt feature maps as the input “stem” for DGM, the authors report consistent ImageNet-1K accuracy gains over a ResNet34-DGM baseline (up to roughly +2.4%, depending on ConvNeXt size) while preserving and strengthening DGM’s shape bias.
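As background, DGM builds on the classical notion of geometric image moments, where a deep network learns moment-like, geometry-aligned features instead of computing them in closed form. A minimal NumPy sketch of the classical raw moments (an illustration of the underlying concept, not the paper's learned version) might look like:

```python
import numpy as np

def geometric_moment(img, p, q):
    """Raw geometric moment m_pq = sum over pixels of x^p * y^q * I(y, x)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]  # pixel coordinate grids
    return float((xs**p * ys**q * img).sum())

def centroid(img):
    """Image centroid (x_bar, y_bar) from the first-order moments."""
    m00 = geometric_moment(img, 0, 0)
    return (geometric_moment(img, 1, 0) / m00,
            geometric_moment(img, 0, 1) / m00)

# A single bright pixel at x=3, y=2: the centroid recovers its position.
img = np.zeros((5, 5))
img[2, 3] = 1.0
print(centroid(img))  # (3.0, 2.0)
```

Low-order moments like these capture an object's mass, position, and orientation directly from pixel geometry, which is the shape-sensitive signal DGM's learned representation is designed to preserve.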

AI-Powered Wearable Sensors for Health Monitoring and Clinical Decision Making

A comprehensive review of AI-powered wearable biosensors, highlighting how machine learning and edge AI enable real-time health monitoring and personalized care, and covering digital twins, LLMs, and open challenges in privacy, scalability, and clinical integration.