Fuzzy-Enhanced Lightweight CNNs for Culturally Relevant Food Classification: Advancing Assistive Technology for Visually Impaired Nigerians
DOI: https://doi.org/10.31181/sa33202553

Keywords: Fuzzy convolutional neural networks, Assistive food classification, Visually impaired technology, Lightweight deep learning, Nigerian indigenous foods

Abstract
This study addresses the assistive-technology gap for visually impaired Nigerians by developing lightweight Convolutional Neural Networks (CNNs) tailored to indigenous food identification. We extend a previous CNN-based Automatic Food Classification Model (AFCM) with a novel Custom Convolutional Fuzzy Neural Network (CCFNN) that incorporates fuzzy logic, benchmarked against MobileNet and LeNet-5. The dataset contains 4,000 images of 20 Nigerian swallow foods across freshness states ranging from fresh to two days old. Images were resized to 100×100×3 and, to improve generalization, further augmented with transformations (rotation, flipping, and zooming) applied at a 30% rate. Implemented on a Lenovo T480s laptop with an Intel i7 processor and 16 GB RAM, the CCFNN attained 95.2% accuracy without augmentation and 80.9% with augmentation while using fewer than 30K parameters; by contrast, AFCM's accuracy dropped to 37.4% under augmentation. MobileNet and LeNet-5 also reached high accuracy (~95.7%) with modest computing demands, but CCFNN's fuzzy Gaussian inputs yield interpretable decisions, which is crucial for assistive use. CCFNN further achieved an AUC-ROC of 0.9982 without augmentation and 0.9958 with augmentation, exceeding AFCM's 0.9545 with augmentation. CCFNN also offered better deployment feasibility: it completed training steps in 850 ms, required roughly 200 times fewer parameters than MobileNet (4.2M), and trained faster per step than AFCM (998 ms). The findings underscore the need for culturally adapted AI systems, and CCFNN shows strong potential to support mobile services in low-resource communities.
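The paper's exact fuzzification layer is not detailed in this abstract, but the idea of feeding fuzzy Gaussian memberships of pixel intensities into a CNN can be illustrated with a minimal NumPy sketch. The function name, the choice of three fuzzy sets ("dark", "medium", "bright"), and the sigma value are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gaussian_membership(x, centers, sigma=0.15):
    """Map each normalized pixel to fuzzy membership degrees for a set of
    Gaussian fuzzy sets (one per center). Illustrative sketch only.

    x:       array of shape (H, W, C) with values in [0, 1]
    returns: array of shape (H, W, C, K) of memberships in (0, 1]
    """
    diff = x[..., None] - centers          # broadcast over the K centers
    return np.exp(-0.5 * (diff / sigma) ** 2)

# Example: three hypothetical fuzzy sets ("dark", "medium", "bright")
centers = np.array([0.1, 0.5, 0.9])
img = np.random.default_rng(0).random((100, 100, 3))  # stand-in for a 100x100x3 food image
memberships = gaussian_membership(img, centers)
print(memberships.shape)  # (100, 100, 3, 3)
```

The fuzzified tensor would then replace (or be stacked with) the raw image as CNN input, so each pixel carries graded "degree of belonging" features rather than a single intensity, which is one way such a design can support more interpretable decisions.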
Future research will investigate fuzzy pooling layers together with quantization methods to achieve better real-time performance. By closing the gap between Western-trained models and localized requirements, this work advances inclusive AI and offers a sustainable path for assistive-technology deployment in underserved regions.
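Quantization, one of the future directions named above, typically means storing weights in low-precision integers to shrink model size and speed up inference on mobile hardware. A minimal sketch of affine uint8 post-training quantization (not the authors' planned method, just the standard idea) is:

```python
import numpy as np

def quantize_uint8(w):
    """Affine post-training quantization of a float tensor to uint8."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0      # guard against a constant tensor
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the uint8 representation."""
    return q.astype(np.float32) * scale + lo

# Example: a hypothetical 3x3 conv kernel bank (3 input, 16 output channels)
w = np.random.default_rng(1).standard_normal((3, 3, 3, 16)).astype(np.float32)
q, scale, lo = quantize_uint8(w)
err = np.abs(dequantize(q, scale, lo) - w).max()  # bounded by ~scale/2
```

Since uint8 storage is a quarter the size of float32, a sub-30K-parameter model like CCFNN would shrink to only a few kilobytes, which is consistent with the deployment goals described above.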
References
Afif, M., Ayachi, R., Pissaloux, E., Said, Y., & Atri, M. (2020). Indoor objects detection and recognition for an ICT mobility assistance of visually impaired people. Multimedia Tools and Applications, 79(41–42), 31645–31662. https://doi.org/10.1007/s11042-020-09662-3
Olasina, J. R., & Aliu, O. H. (2024). Development of convolutional neural network (CNN)-based automatic food classification (AFC) model for visual impairments of Nigerians. In Applied mathematics, modeling and computer simulation (pp. 316–324). IOS Press. http://dx.doi.org/10.3233/ATDE240774
Bhandari, A., Prasad, P. W. C., Alsadoon, A., & Maag, A. (2021). Object detection and recognition: using deep learning to assist the visually impaired. Disability and rehabilitation: Assistive technology, 16(3), 280–288. https://doi.org/10.1080/17483107.2019.1673834
Kawano, Y., & Yanai, K. (2014). Food image recognition with deep convolutional features. Proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing. Adjunct Publication. https://doi.org/10.1145/2638728.2641339
Aktı, Ş., Qaraqe, M., & Ekenel, H. (2022). A mobile food recognition system for dietary assessment. https://doi.org/10.48550/arXiv.2204.09432
Christodoulidis, S., Anthimopoulos, M., & Mougiakakou, S. (2015). Food recognition for dietary assessment using deep convolutional neural networks. In New Trends in Image Analysis and Processing – ICIAP 2015 Workshops, Lecture Notes in Computer Science (Vol. 9281, pp. 458–465). Springer, Cham. https://doi.org/10.1007/978-3-319-23222-5_56
Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. https://doi.org/10.48550/arXiv.1610.02357
Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., … & Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. https://doi.org/10.48550/arXiv.1704.04861
Sun, J., Radecka, K., & Zilic, Z. (2019). Foodtracker: A real-time food detection mobile application by deep convolutional neural networks. https://doi.org/10.48550/arXiv.1909.05994
Alawadhi, A., Ahmad, R. B., Almogahed, A., & Abrar, A. (2023). Deep learning techniques in mobile edge computing for internet of medical things. 2023 3rd international conference on emerging smart technologies and applications (ESMARTA) (pp. 1–6). IEEE. https://doi.org/10.1109/eSmarTA59349.2023.10293737
Tahir, G. A., & Loo, C. K. (2021). Explainable deep learning ensemble for food image analysis on edge devices. Computers in biology and medicine, 139, 104972. https://doi.org/10.1016/j.compbiomed.2021.104972
Chang, W. J., Yu, Y. X., Chen, J. H., Zhang, Z. Y., Ko, S. J., Yang, T. H., … & Chen, M. C. (2019). A deep learning based wearable medicines recognition system for visually impaired people. 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS) (pp. 207–208). IEEE. https://doi.org/10.1109/AICAS.2019.8771559
Devi, S., & Cn, S. (2021). Deep learning based audio assistive system for visually impaired people. Computers, materials & continua, 71, 1205–1219. http://dx.doi.org/10.32604/cmc.2022.020827
Rahman, F., Jahan, I., Farhin, N., & Uddin, J. (2019). An assistive model for visually impaired people using YOLO and MTCNN. The 3rd International Conference. Association for Computing Machinery. https://doi.org/10.1145/3309074.3309114
Chen, G., Jia, W., Zhao, Y., Mao, Z.-H., Lo, B., Anderson, A. K., … & Sazonov, E. (2021). Food/non-food classification of real-life egocentric images in low- and middle-income countries based on image tagging features. Frontiers in Artificial Intelligence, 4, 644712. https://doi.org/10.3389/frai.2021.644712
Bird, J. J., Barnes, C. M., Manso, L. J., Ekárt, A., & Faria, D. R. (2022). Fruit quality and defect image classification with conditional GAN data augmentation. Scientia horticulturae, 293, 110684. https://doi.org/10.1016/j.scienta.2021.110684
Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning. Journal of big data, 6(1), 1–48. https://doi.org/10.1186/s40537-019-0197-0
Gao, X., Xiao, Z., & Deng, Z. (2024). High accuracy food image classification via vision transformer with data augmentation and feature augmentation. Journal of food engineering, 365, 111833. https://doi.org/10.1016/j.jfoodeng.2023.111833
Huang, H., Oh, S. K., Fu, Z., Wu, C. K., Pedrycz, W., & Kim, J. Y. (2024). FSCNN: Fuzzy channel filter-based separable convolution neural networks for medical imaging recognition. IEEE Transactions on Fuzzy Systems. IEEE. https://doi.org/10.1109/TFUZZ.2024.3450000
Deepika, J., Rajan, C., & Senthil, T. (2021). Security and privacy of cloud‐and IoT‐based medical image diagnosis using fuzzy convolutional neural network. Computational intelligence and neuroscience, 2021(1), 6615411. https://doi.org/10.1155/2021/6615411
Suddul, G., & Seguin, J. F. L. (2023). A comparative study of deep learning methods for food classification with images. Food and humanity, 1, 800–808. http://dx.doi.org/10.1016/j.foohum.2023.07.018
Senthil, G. A., Prabha, R., Sridevi, S., Nithyashri, J., & Suganya, A. (2024). A novel meta-analysis and classification of herbal medicinal plant raw materials for food consumption prediction using hybrid deep learning techniques based on augmented reality in computer vision. World conference on artificial intelligence: advances and applications (pp. 1–24). Springer. https://doi.org/10.1007/978-981-97-4496-1_1
Tan, R. Z., Chew, X., & Khaw, K. W. (2021). Neural architecture search for lightweight neural network in food recognition. Mathematics, 9(11), 1–14. https://doi.org/10.3390/math9111245
Ataguba, G., Ezekiel, R., Daniel, J., Ogbuju, E., & Orji, R. (2024). African foods for deep learning-based food recognition systems dataset. Data in brief, 53, 110092. https://doi.org/10.1016/j.dib.2024.110092