Our research group has conducted a series of user studies on deepfake (i.e., AI-generated) images and videos, focusing on how people perceive and react to them in the context of personas.
Findings from these studies show that using artificial pictures in persona profiles did not decrease scores for Authenticity, Clarity, Empathy, or Willingness to use the personas [4]. However, deepfakes were perceived as less empathetic, credible, complete, clear, and immersive than other persona modalities, such as profile or narrative presentations [3]. There are also differences in how (a) users perceive deepfakes, (b) users detect deepfake glitches, (c) deepfake glitches affect information comprehension, and (d) deepfake glitches affect task completion [2]. Finally, the presence of images in AI-generated personas did not significantly impact user perceptions of these personas [1].
We have also evaluated four widely used face detection tools, Face++, IBM Bluemix Visual Recognition, AWS Rekognition, and Microsoft Azure Face API, on multiple datasets to determine their accuracy in inferring user attributes, including gender, race, and age [5]. Results show that the tools are generally proficient at determining gender, with accuracy rates greater than 90%, except for IBM Bluemix. Concerning race, only one of the four tools, Face++, provides this capability, and it does so with an accuracy rate greater than 90%. Inferring age appears to be a challenging problem, as all four tools performed poorly.
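To give a concrete sense of what this kind of attribute inference looks like in practice, here is a minimal sketch that queries one of the four evaluated tools, AWS Rekognition, for gender and age via the boto3 `detect_faces` call. The file name, region, and use of the first detected face are illustrative assumptions and not the study's actual evaluation pipeline; note that Rekognition returns an age range rather than a point estimate and does not infer race.

```python
# Illustrative sketch only: query AWS Rekognition for gender and age attributes.
import boto3


def infer_attributes(image_path: str) -> dict:
    """Return the gender and age range inferred for the first detected face."""
    client = boto3.client("rekognition", region_name="us-east-1")  # assumed region
    with open(image_path, "rb") as f:
        response = client.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # request facial attributes (gender, age range, etc.)
        )
    if not response["FaceDetails"]:
        return {}  # no face detected in the image
    face = response["FaceDetails"][0]
    return {
        "gender": face["Gender"]["Value"],            # e.g., "Female"
        "age_range": (face["AgeRange"]["Low"],        # Rekognition gives an interval,
                      face["AgeRange"]["High"]),      # not a single predicted age
    }


if __name__ == "__main__":
    print(infer_attributes("persona_photo.jpg"))  # hypothetical file name
```

Comparing such tool outputs against human-labeled ground truth for gender, age, and race is the basic setup behind the accuracy figures reported above.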
Want more? Read these research articles!
- Salminen, J., Santos, J. M., Jung, S.G., and Jansen, B. J. (2024) Picturing the fictitious person: An exploratory study on the effect of images on user perceptions of AI-generated personas, Computers in Human Behavior: Artificial Humans, 100052. https://doi.org/10.1016/j.chbah.2024.100052
- Kaate, I., Salminen, J., and Jansen, B. J. (2024) "There Is Something Rotten in Denmark": Investigating the Deepfake Persona Perceptions and Their Implications for Human-Centered AI, Computers in Human Behavior: Artificial Humans, 2(1), 100031. https://doi.org/10.1016/j.chbah.2023.100031
- Kaate, I., Salminen, J., Santos, J., Olkkonen, R., Jung, S.G., and Jansen, B. J. (2023) The Realness of Fakes: Primary Evidence of the Effect of Deepfake Personas on User Perceptions in a Design Task. International Journal of Human Computer Studies. 178, Article 103096.
- Salminen, J., Jung, S.G., Kamel, A. M., Santos, J. M., Kwak, H., An, J., and Jansen, B. J. (2022) Using Artificially Generated Pictures in Customer-facing Systems: An Evaluation Study with Data-Driven Personas. Behaviour & Information Technology, 41(5), 905-921. https://doi.org/10.1080/0144929X.2020.1838610
- Jung, S.G., An, J., Salminen, J., Kwak, H., and Jansen, B. J. (2018) Assessing the Accuracy of Four Popular Face Recognition Tools for Inferring Gender, Age, and Race (Short Paper), International AAAI Conference on Web and Social Media (ICWSM 2018), Stanford, CA, USA, 25-28 June, 624-627.