
Research Repository


All Outputs (5)

Spontaneous Versus Posed Smiles—Can We Tell the Difference? (2016)
Conference Proceeding
Mandal, B., & Ouarti, N. (2017). Spontaneous Versus Posed Smiles—Can We Tell the Difference? https://doi.org/10.1007/978-981-10-2107-7_24

A smile is an irrefutable expression that shows the physical state of the mind in both true and deceptive ways. Generally, it shows a happy state of mind; however, ‘smiles’ can be deceptive, for example people can give a smile when they feel happy an...

Multimodal Multi-Stream Deep Learning for Egocentric Activity Recognition (2016)
Conference Proceeding
Song, S., Chandrasekhar, V., Mandal, B., Li, L., Lim, J., Babu, G. S., …Cheung, N. (2016). Multimodal Multi-Stream Deep Learning for Egocentric Activity Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). https://doi.org/10.1109/cvprw.2016.54

In this paper, we propose a multimodal multi-stream deep learning framework to tackle the egocentric activity recognition problem, using both the video and sensor data. First, we experiment with and extend a multi-stream Convolutional Neural Network to le...

Egocentric activity recognition with multimodal fisher vector (2016)
Conference Proceeding
Song, S., Cheung, N., Chandrasekhar, V., Mandal, B., & Lin, J. (2016). Egocentric activity recognition with multimodal fisher vector. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). https://doi.org/10.1109/icassp.2016.7472171

With the increasing availability of wearable devices, research on egocentric activity recognition has received much attention recently. In this paper, we build a Multimodal Egocentric Activity dataset which includes egocentric videos and sensor data...