

In-Ear User Authentication Systems Framework Using A Hybrid Progressive Neural Network (HPRGNN) Technique
Abstract
An emerging challenge in user authentication is handling continuously streaming data, especially when working with limited datasets. This issue becomes even more complex in biometric-based authentication systems, where ensuring the integrity of biometric traits and their resistance to tampering is critical. This paper proposes a novel biometric user authentication system that uses in-ear acoustics and an approach combining Regressive Symbolic Expression Programming with a hybrid Progressive Neural Network (rSEP-HPRGNN). The proposed system was evaluated on a real-time dataset collected from two subjects, one of whom represents abnormal or corrupted data, comprising a total of 10 sample sequences. Simulation results demonstrate promising classification and prediction performance. Notably, the rSEP-HPRGNN model outperformed a state-of-the-art Long Short-Term Memory (LSTM) neural network, achieving a mean absolute percentage error (MAPE) of 0.1637 units compared to the LSTM's 0.7683 units.
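The comparison above rests on MAPE. As an illustration only, the sketch below shows how MAPE is typically computed for two sets of predictions against the same target sequence; the sequences, values, and scale (the paper reports figures in "units", so the unscaled ratio is used here rather than a percentage) are assumptions for demonstration, not the paper's data or method.

import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, reported as an unscaled ratio."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical in-ear acoustic target sequence and the predictions of two
# models; all values are placeholders chosen for illustration.
y_true   = np.array([1.02, 0.97, 1.10, 1.05, 0.99])
y_model1 = np.array([1.00, 0.98, 1.08, 1.04, 1.01])  # stand-in for rSEP-HPRGNN output
y_model2 = np.array([0.85, 1.12, 0.95, 1.20, 0.80])  # stand-in for LSTM output

print(f"Model 1 MAPE: {mape(y_true, y_model1):.4f}")
print(f"Model 2 MAPE: {mape(y_true, y_model2):.4f}")

A lower MAPE indicates predictions that deviate less, proportionally, from the observed sequence, which is the sense in which the rSEP-HPRGNN result (0.1637) is reported as outperforming the LSTM baseline (0.7683).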