Hybrid Weakly Self-Supervised and Multi-Task Representation Learning for Label-Efficient Human Activity Recognition Using Wearable Sensors

Authors

  • Phillip Thomas, Independent Researcher

DOI:

https://doi.org/10.63891/j-mart.v2i1.127

Keywords:

Human Activity Recognition, Wearable Sensors, Self-Supervised Learning, Multi-Task Learning, Representation Learning, Label Efficiency, Deep Learning, Sensor Data Analytics

Abstract

Wearable Human Activity Recognition (HAR) has become a key component of intelligent health monitoring, smart environments, and context-aware computing systems. Despite recent advances in deep learning, most state-of-the-art HAR systems require large amounts of labeled data, which restricts their scalability and applicability to diverse real-world conditions. Labeling sensor streams is expensive and time-consuming, and the resulting annotations are often noisy and subject to user variability. This paper proposes a hybrid learning framework that combines weakly self-supervised learning with multi-task representation learning to enable human activity recognition from weakly labeled multimodal wearable sensor data. The approach constructs auxiliary learning objectives from unlabeled sequences and extracts shared representations across correlated predictive tasks. By combining self-directed feature discovery with task-directed inductive bias, the framework promotes generalization, reduces reliance on manual labeling, and increases resilience to incomplete or noisy labels. The methodology focuses on temporal signal modeling, multimodal sensor fusion, and adaptive optimization of joint loss objectives. It is expected to yield higher-quality representations, classification performance competitive with fully supervised baselines, and learning behavior that scales to large-scale deployment. The paper advances label-efficient, data-aware machine intelligence for wearable computing systems and offers guidance on hybrid learning frameworks applicable to broader time-series analytics.
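The abstract describes jointly optimizing a self-supervised auxiliary objective alongside several supervised task losses. The paper's actual loss formulation and weighting scheme are not given in the abstract; the following is only a minimal illustrative sketch of one common way such a joint objective is combined, with all function names and weights being assumptions of this example.

```python
import numpy as np

def joint_loss(ssl_loss, task_losses, ssl_weight=0.5, task_weights=None):
    """Illustrative joint objective: weighted sum of a self-supervised
    auxiliary loss and one or more supervised task losses.

    ssl_loss     -- scalar loss from the self-supervised objective
    task_losses  -- sequence of scalar losses, one per predictive task
    ssl_weight   -- weight on the auxiliary objective (assumed value)
    task_weights -- per-task weights; defaults to equal weighting
    """
    task_losses = np.asarray(task_losses, dtype=float)
    if task_weights is None:
        # Equal weighting across tasks when no scheme is specified.
        task_weights = np.full(len(task_losses), 1.0 / len(task_losses))
    return ssl_weight * ssl_loss + float(np.dot(task_weights, task_losses))
```

In practice the weights would be tuned or adapted during training (the abstract's "adaptive optimization of joint loss objectives"), for example via uncertainty-based or gradient-norm-based task balancing.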

References

Guo, P., & Nakayama, M. (2025). Towards User-Generalizable Wearable-Sensor-Based Human Activity Recognition: A Multi-Task Contrastive Learning Approach. Sensors, 25(22), 6988.

Sheng, T., & Huber, M. (2020). Weakly Supervised Multi-Task Representation Learning for Human Activity Analysis Using Wearables. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(2), 1–18.

Eldele, E., Ragab, M., Chen, Z., Wu, M., Kwoh, C. K., & Li, X. (2024). Label-Efficient Time Series Representation Learning: A Review. IEEE Transactions on Artificial Intelligence.

Sheng, T., & Huber, M. (2019, October). Siamese Networks for Weakly Supervised Human Activity Recognition. In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC) (pp. 4069–4075). IEEE.

Eldele, E. A. I. A. (2023). Towards Robust and Label-Efficient Time Series Representation Learning.

Sheng, T., & Huber, M. (2020, May). Unsupervised Embedding Learning for Human Activity Recognition Using Wearable Sensor Data. In FLAIRS (pp. 478–483).

Keating, L. (2024). Cross-Modal Weakly Supervised Learning for Multisensor Human Activity Recognition.

Rizk, H., & Elmogy, A. (2025). Self-Supervised WiFi-Based Identity Recognition in Multi-User Smart Environments. Sensors, 25(10), 3108.

Sheng, T., & Huber, M. (2022). Consistency-Based Weakly Self-Supervised Learning for Human Activity Recognition with Wearables. In Proceedings of the AAAI-22 Workshop on Human-Centric Self-Supervised Learning (HC-SSL'22), Virtual, 22 February–1 March 2022.

Trirat, P., Shin, Y., Kang, J., Nam, Y., Na, J., Bae, M., ... & Lee, J. G. (2024). Universal Time-Series Representation Learning: A Survey. arXiv preprint arXiv:2401.03717.

Sheng, T., & Huber, M. (2025). Reducing Label Dependency in Human Activity Recognition with Wearables: From Supervised Learning to Novel Weakly Self-Supervised Approaches. Sensors, 25, 4032.

Surisetti, V. (2025). AI-Driven Orchestration in SOA: Adaptive Workflows for Cloud-Based Enterprise Applications. International Journal of Pharma Professional's Research (IJPPR), 6, 2868–2880.

Deldari, S. (2024). Learning from Multimodal Time-Series Data with Minimal Supervision (Doctoral dissertation, RMIT University).

Liu, C., Gui, G., Wang, Y., Ohtsuki, T., Niyato, D., & Shen, X. S. (2025). A Comprehensive Survey on Self-Supervised Learning for Specific Emitter Identification. IEEE Communications Surveys & Tutorials.

Ding, C., & Wu, C. (2024). Self-Supervised Learning for Biomedical Signal Processing: A Systematic Review on ECG and PPG Signals. medRxiv, 2024-09.

Azizi, S., Culp, L., Freyberg, J., Mustafa, B., Baur, S., Kornblith, S., ... & Natarajan, V. (2022). Robust and Efficient Medical Imaging with Self-Supervision. arXiv preprint arXiv:2205.09723.

Xu, M. A., Narayanswamy, G., Ayush, K., Spathis, D., Liao, S., Tailor, S. A., ... & McDuff, D. (2025). LSM-2: Learning from Incomplete Wearable Sensor Data. arXiv preprint arXiv:2506.05321.

Himeur, Y., Varlamis, I., Kheddar, H., Amira, A., Atalla, S., Singh, Y., ... & Mansoor, W. (2023). Federated Learning for Computer Vision. arXiv preprint arXiv:2308.13558.

Wichrowski, F., Ostrowski, M., Bortyn, M., & Kaczmarek-Majer, K. (2025). Review of Explainable Semi-Supervised Methods in Multivariate Time Series Analysis. Authorea Preprints.

Olivia, S. (2025). Weakly and Self-Supervised Learning Strategies for Human Activity Recognition Using Wearable Sensor Data.

Published

2026-02-14