ENERGI: a multimodal data corpus of interaction of participants in virtual communication
https://doi.org/10.17586/0021-3454-2025-68-12-1011-1019
Abstract
This paper presents a statistical analysis of the multimodal ENERGI (ENgagement and Emotion Russian Gathering Interlocutors) corpus, which contains audio-video recordings of group communication in Russian captured with the Zoom teleconferencing system. The corpus is annotated along three behavioral parameters, each with three classes: participant engagement (high, medium, low), emotional arousal (high, medium, low), and emotional valence (positive, neutral, negative), as well as ten classes of communicative gestures. It comprises 6.4 hours of video recordings of group communication from 18 unique speakers, annotated in 10-second time intervals. ENERGI’s advantages over existing corpora include its multimodality, Russian-language content, speaker diversity, natural recording conditions, and extensive annotation of several behavioral parameters of the communication participants. The corpus can be used to develop a multimodal automated system for analyzing the behavioral aspects of participants in virtual group communication.
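The annotation scheme summarized above lends itself to a simple tabular representation. The sketch below is purely illustrative and assumes hypothetical names, since the actual distribution format of ENERGI is not described on this page; it only mirrors the parameters stated in the abstract (18 speakers, three 3-class labels, 10 gesture classes, 10-second intervals). At 6.4 hours of video, one continuous annotation track corresponds to roughly 2,300 such intervals.

# Hypothetical sketch of one annotated 10-second segment of the ENERGI corpus;
# all names and the layout are assumptions, not the published data format.
from dataclasses import dataclass

ENGAGEMENT = ("low", "medium", "high")           # 3 engagement classes
AROUSAL    = ("low", "medium", "high")           # 3 emotional-arousal classes
VALENCE    = ("negative", "neutral", "positive") # 3 emotional-valence classes

@dataclass
class AnnotatedSegment:
    speaker_id: int     # one of the 18 unique speakers
    start_s: float      # segment start time, seconds from the recording start
    engagement: str     # value from ENGAGEMENT
    arousal: str        # value from AROUSAL
    valence: str        # value from VALENCE
    gesture: int        # one of the 10 communicative-gesture classes (0–9)

    @property
    def end_s(self) -> float:
        # annotations are made over fixed 10-second time intervals
        return self.start_s + 10.0

# Example: one hypothetical annotated interval for speaker 3
seg = AnnotatedSegment(speaker_id=3, start_s=120.0,
                       engagement="high", arousal="medium",
                       valence="positive", gesture=2)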
Keywords
About the Authors
A. A. Dvoynikova
Russian Federation
Anastasia A. Dvoynikova — St. Petersburg Institute for Informatics and Automation of the RAS, Laboratory of Speech and Multimodal Interfaces; Junior Researcher
St. Petersburg
A. N. Velichko
Russian Federation
Alena N. Velichko — PhD; St. Petersburg Institute for Informatics and Automation of the RAS, Laboratory of Speech and Multimodal Interfaces; Senior Researcher
St. Petersburg
A. A. Karpov
Russian Federation
Alexey A. Karpov — Dr. Sci., Professor; St. Petersburg Institute for Informatics and Automation of the RAS, Laboratory of Speech and Multimodal Interfaces; Head of the Laboratory
St. Petersburg
References
1. Uzdiaev M.Yu., Karpov A.A. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2024, no. 5(24), pp. 834–842. (in Russ.)
2. Gupta A., Balasubramanian V. arXiv preprint, arXiv:1609.01885, 2016.
3. Ben-Youssef A., Clavel C., Essid S. et al. Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI), 2017, pp. 464–472, DOI: 10.1145/3136755.3136814.
4. Del Duchetto F., Baxter P., Hanheide M. Frontiers in Robotics and AI, 2020, vol. 7, DOI: 10.3389/frobt.2020.00116.
5. Kaur A., Mustafa A., Mehta L., Dhall A. 2018 Digital Image Computing: Techniques and Applications (DICTA), 2018, pp. 1–8, DOI: 10.1109/DICTA.2018.8615851.
6. Delgado K., Origgi J.M., Hasanpoor T. et al. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3628–3636.
7. Churaev E.N. Personalizirovannyye modeli raspoznavaniya psikhoemotsional’nogo sostoyaniya i vovlechonnosti lits po video (Personalized Models for Recognizing Psycho-Emotional State and Facial Engagement from Video), Candidate’s thesis, St. Petersburg, 2025, 134 p. (in Russ.)
8. Karimah S.N., Hasegawa S. Smart Learning Environments, 2022, no. 1(9), pp. 31, DOI: 10.1186/s40561-022-00212-y.
9. Celiktutan O., Skordos E., Gunes H. IEEE Transactions on Affective Computing, 2017, no. 4(10), pp. 484–497, DOI: 10.1109/TAFFC.2017.2737019.
10. Pabba C., Kumar P. Expert Systems, 2022, no. 1(39), pp. e12839, DOI: 10.1111/exsy.12839.
11. Chatterjee I., Goršič M., Clapp J.D., Novak D. Frontiers in Neuroscience, 2021, vol. 15, pp. 757381, DOI: 10.3389/fnins.2021.757381.
12. Sümer Ö., Goldberg P., D’Mello S. et al. IEEE Transactions on Affective Computing, 2021, no. 2(14), pp. 1012–1027, DOI: 10.1109/TAFFC.2021.3127692.
13. Vanneste P., Oramas J., Verelst T. et al. Mathematics, 2021, no. 3(9), pp. 287, DOI: 10.3390/math9030287.
14. Dresvyanskiy D., Sinha Y., Busch M. et al. Speech and Computer. SPECOM 2022. Lecture Notes in Computer Science, 2022, pp. 163–177, DOI: 10.1007/978-3-031-20980-2_15.
15. Cafaro A., Wagner J., Baur T. et al. Proceedings of the ICMI, 2017, pp. 350–359, DOI: 10.1145/3136755.3136780.
16. Busso C., Bulut M., Lee C.C. et al. Language resources and evaluation, 2008, no. 4(42), pp. 335–359, DOI: 10.1007/s10579-008-9076-6.
17. Ringeval F., Sonderegger A., Sauer J., Lalanne D. 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 2013, pp. 1–8, DOI: 10.1109/FG.2013.6553805.
18. Kossaifi J., Walecki R., Panagakis Y. et al. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, no. 3(43), pp. 1022–1040, DOI: 10.1109/TPAMI.2019.2944808.
19. Dvoynikova A.A. Almanac of scientific works of young scientists of ITMO University, 2023, vol. 1, pp. 251–256. (in Russ.)
20. Certificate of registration of the database 2023624954, Baza dannykh proyavleniy vovlechennosti i emotsiy russkoyazychnykh uchastnikov telekonferentsiy (ENERGI — ENgagement and Emotion Russian Gathering Interlocutors) (Database of Manifestations of Engagement and Emotions of Russian-Speaking Participants in Teleconferences (ENERGI - ENgagement and Emotion Russian Gathering Interlocutors)), A.A. Dvoynikova, A.A. Karpov, Priority 25.12.2023. (in Russ.)
21. Dvoynikova A.A., Karpov A.A. Journal of Instrument Engineering, 2024, no. 11(67), pp. 984–993, DOI: 10.17586/0021-3454-2024-67-11-984-993. (in Russ.)
22. Sloetjes H., Wittenburg P. Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008), 2008.
23. Lyusin D.V. Psychological diagnostics, 2006, vol. 4, pp. 3–22. (in Russ.)
For citations:
Dvoynikova A.A., Velichko A.N., Karpov A.A. ENERGI: a multimodal data corpus of interaction of participants in virtual communication. Journal of Instrument Engineering. 2025;68(12):1011-1019. (In Russ.) https://doi.org/10.17586/0021-3454-2025-68-12-1011-1019






















