Inference of the affective response of the viewers of a video

  1. Pardo Muñoz, José Manuel
  2. Fernández Martínez, Fernando
  3. Callejas Carrión, Zoraida
  4. Kleinlein, Ricardo
  5. Luna Jiménez, Cristina
  6. Montero Martínez, Juan Manuel
Journal: Procesamiento del lenguaje natural

ISSN: 1135-5948

Year of publication: 2019

Issue: 63

Pages: 155-158

Type: Article


Abstract

In this project we propose the automatic analysis of the relation between the audiovisual characteristics of a multimedia production and the impact it causes on its audience. To this end, we explore potential synergies between different areas of knowledge including, among others, audiovisual communication, computer vision, multimodal systems, biometric sensors, social network analysis, opinion mining, and affective computing. Our efforts are oriented towards combining these technologies into novel computational models that can predict the reactions of viewers to multimedia content across different media and at different moments. On the one hand, we study the cognitive and emotional response of viewers while they are watching the media items, using neuroscience techniques and biometric sensors. On the other hand, we also study the reaction shown by the audience on social networks, relying on the automatic collection and analysis of metadata related to the media items, such as popularity, sharing patterns, ratings and comments.
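As a purely illustrative companion to the abstract, the following minimal sketch shows the general kind of audiovisual fusion model the project describes: pre-extracted audio and visual descriptors for each clip are concatenated (early fusion) and fed to a classifier that predicts an affective label. The feature dimensions, the synthetic data, and the SVM choice are assumptions for the example, not the project's actual pipeline.

```python
# Minimal sketch (not the project's actual pipeline): early fusion of
# per-clip audio and visual descriptors to predict a binary affective label.
# All data here is synthetic; in practice the features would come from
# audiovisual descriptor extraction and the labels from audience responses.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_clips = 200
audio_feats = rng.normal(size=(n_clips, 40))   # hypothetical per-clip audio descriptors
visual_feats = rng.normal(size=(n_clips, 60))  # hypothetical per-clip visual descriptors
labels = (audio_feats[:, 0] + visual_feats[:, 0] > 0).astype(int)  # toy affect label

# Early fusion: concatenate both modalities into one feature vector per clip.
X = np.hstack([audio_feats, visual_feats])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0
)

# Standardize features, then train an RBF-kernel SVM classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In a real setting, late fusion (training one model per modality and combining their outputs) is an equally common design choice; the early-fusion variant is shown here only because it keeps the sketch short.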
