


DEVELOPING A SMART FACE RECOGNITION SYSTEM TO ENHANCE THE EFFICACY OF EMOTION BASED MUSIC PLAYER

Ahmed Abbas Naqvi

Pages: 64-69

Vol. 9, Jan-Jun, 2019

Date of Submission: 2019-02-20 | Date of Acceptance: 2019-03-28 | Date of Publication: 2019-04-10

Abstract

This study develops a framework for facial-emotion analysis that can be used to investigate fundamental human facial expressions. The proposed method classifies a person's mood and then uses that result to play an audio file associated with the detected emotion. The system first captures the human face, which is carried out with face detection. Feature-extraction techniques are then applied to the detected face, and the extracted image features are used to identify the person's emotion. Extracting the eyes, mouth, and eyebrows yields these signature landmark points. If the input face closely matches a face in the emotion dataset, the system identifies the individual's emotion and plays the corresponding emotional audio file. Faces trained with a limited set of features can be recognized in varied environments. The proposed solution is simple, dependable, and efficient, and the system performs well in both the detection and identification stages.
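The pipeline described in the abstract (detect face, extract features, classify emotion, play the matching track) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the landmark measurements (`mouth_curve`, `brow_lower`), the rule thresholds, and the playlist file names are all hypothetical stand-ins; a real system would use a trained face detector and classifier.

```python
# Sketch of the emotion-based music player pipeline from the abstract.
# All feature names, thresholds, and file paths are illustrative assumptions.

EMOTION_PLAYLIST = {  # hypothetical emotion -> audio file mapping
    "happy": "audio/happy.mp3",
    "sad": "audio/sad.mp3",
    "angry": "audio/angry.mp3",
    "neutral": "audio/neutral.mp3",
}

def classify_emotion(landmarks):
    """Toy rule-based classifier over extracted facial measurements.

    `landmarks` holds two assumed normalized values:
      - 'mouth_curve': positive when the mouth curves up (smile)
      - 'brow_lower': positive when the eyebrows are lowered (frown)
    """
    if landmarks["mouth_curve"] > 0.3:
        return "happy"
    if landmarks["brow_lower"] > 0.3:
        return "angry"
    if landmarks["mouth_curve"] < -0.3:
        return "sad"
    return "neutral"

def pick_track(landmarks):
    """Map extracted facial features to the audio file to play."""
    emotion = classify_emotion(landmarks)
    return EMOTION_PLAYLIST[emotion]

# A smiling face selects the "happy" track.
print(pick_track({"mouth_curve": 0.5, "brow_lower": 0.0}))  # audio/happy.mp3
```

In a full system, `classify_emotion` would be replaced by a model trained on labeled expression data, and the selected path would be handed to an audio-playback library.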

References

  1. Bharati Dixit, Arun Gaikwad, (2018), Facial Features Based Emotion Recognition, IOSR Journal of Engineering, ISSN (e): 2250-3021, ISSN (p): 2278-8719, Vol. 08, Issue 8.
  2. J. Jayalekshmi, Tessy Mathew, (2017), Facial expression recognition and emotion classification system for sentiment analysis, International Conference.
  3. Suchitra, Suja P., Shikha Tripathi, (2016), Real-time emotion recognition from facial images using Raspberry Pi II, 3rd International Conference.
  4. Dolly Reney, Neeta Tripathi, (2015), An Efficient Method to Face and Emotion Detection, Fifth International Conference.
  5. Monika Dubey, Lokesh Singh, (2016), Automatic Emotion Recognition Using Facial Expression: A Review, International Research Journal of Engineering and Technology (IRJET).
  6. Anuradha Savadi, Chandrakala V. Patil, (2014), Face Based Automatic Human Emotion Recognition, International Journal of Computer Science and Network Security, Vol. 14, No. 7.
  7. Songfan Yang, Bir Bhanu, (2011), Facial expression recognition using emotion avatar image, IEEE International Conference.
  8. Leh Luoh, Chih-Chang Huang, Hsueh-Yen Liu, (2010), Image processing based emotion recognition, International Conference.
  9. Jiequan Li, M. Oussalah, (2010), Automatic face emotion recognition system, IEEE 9th International Conference.
  10. H. Yang, D. Huang, Y. Wang, and A. K. Jain, (2018), Learning face age progression: A pyramid architecture of GANs, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 31–39.
  11. Z. Wang, X. Tang, W. Luo, and S. Gao, (2018), Face aging with identity-preserved conditional generative adversarial networks, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 7939–7947.
  12. P. Li, Y. Hu, R. He, and Z. Sun, (2019), Global and local consistent wavelet domain age synthesis, IEEE Trans. Inf. Forensics Secur., vol. 14, no. 11, pp. 2943–2957.
  13. H. Ding, K. Sricharan, and R. Chellappa, (2018), ExprGAN: Facial expression editing with controllable expression intensity, in Proc. 32nd AAAI Conf. Artif. Intell., pp. 6781–6788.
  14. L. Song, Z. Lu, R. He, Z. Sun, and T. Tan, (2018), Geometry guided adversarial facial expression synthesis, in Proc. ACM Multimedia Conf. (MM), pp. 627–635.
  15. A. Pumarola, A. Agudo, A. M. Martinez, A. Sanfeliu, F. Moreno-Noguer, (2018), GANimation: Anatomically-aware facial animation from a single image, in Proc. Eur. Conf. Comput. Vis., pp. 818–833.
  16. Y. Zhou and B. E. Shi, (2017), Photorealistic facial expression synthesis by the conditional difference adversarial autoencoder, in Proc. 7th Int. Conf. Affect. Comput. Intell. Interact. (ACII), pp. 370–376.
  17. F. Qiao, N. Yao, Z. Jiao, Z. Li, H. Chen, and H. Wang, (2018), Geometry contrastive GAN for facial expression transfer, arXiv:1802.01822. [Online]. Available: http://arxiv.org/abs/1802.01822.
  18. K. Li, J. Xing, C. Su, W. Hu, Y. Zhang, and S. Maybank, (2018), Deep cost-sensitive and order-preserving feature learning for cross-population age estimation, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 399–408.
  19. Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo, (2018), StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 8789–8797.