#Computer Science# 💻 🤖 A summary of our attempts at using Deep Learning approaches for Emotional Text-to-Speech 🔈
Learning to ground explanations of affect for visual art.
#Computer Science# Official implementation of the paper "Estimation of continuous valence and arousal levels from faces in naturalistic conditions", Antoine Toisoul, Jean Kossaifi, Adrian Bulat, Georgios Tzimiropoulos a...
Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning
#Natural Language Processing# This is my reading list for my PhD in AI, NLP, Deep Learning and more.
#Awesome#A curated list of awesome affective computing 🤖❤️ papers, software, open-source projects, and resources
#Computer Science# Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model
#Computer Science# A machine learning application for emotion recognition from speech
#Computer Science# From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction
This repository contains the source code for our paper: "Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition". For more details, please refer to our paper at https://arxiv.org...
Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits
This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning".
ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition
Self-supervised ECG Representation Learning - ICASSP 2020 and IEEE T-AFFC
Multimodal Deep Learning Framework for Mental Disorder Recognition @ FG'20
IEEE T-BIOM : "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention"
#Computer Science# EmoInt provides a high-level wrapper to combine various word embeddings and create ensembles from multiple trained models