For segmentation, one often needs to use sentiment analysis services.
Large commercial sentiment analysis tools are often deployed in software engineering due to their ease of use. However, it is not known how accurate these tools are, and whether the sentiment ratings given by one tool agree with those given by another tool.
We use two datasets – (1) NEWS consisting of 5,880 news stories and 60K comments from four social media platforms: Twitter, Instagram, YouTube, and Facebook; and (2) IMDB consisting of 7,500 positive and 7,500 negative movie reviews – to investigate the agreement and bias of four widely used sentiment analysis (SA) tools: Microsoft Azure (MS), IBM Watson, Google Cloud, and Amazon Web Services (AWS).
We find that the four tools assign the same sentiment on less than half (48.1%) of the analyzed content.
We also find that AWS exhibits neutrality bias in both datasets;
Google exhibits bi-polarity bias in the NEWS dataset but neutrality bias in the IMDB dataset, and IBM and MS exhibit no clear bias in the NEWS dataset but have bi-polarity bias in the IMDB dataset.
Overall, IBM has the highest accuracy relative to the known ground truth in the IMDB dataset.
Findings indicate that psycholinguistic features (especially affect, tone, and use of adjectives) explain why the tools disagree. Engineers are urged to exercise caution when implementing SA tools in their applications, as the choice of tool affects the sentiment labels obtained.
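The 48.1% figure above corresponds to full agreement: the share of items for which all four tools return the same sentiment label. A minimal sketch of that computation, assuming each tool's output has been mapped to categorical labels (the tool names and example labels below are illustrative, not data from the paper):

```python
from typing import Dict, List

def full_agreement_rate(labels: Dict[str, List[str]]) -> float:
    """Fraction of items on which every tool assigns the same sentiment label."""
    tools = list(labels.values())
    n = len(tools[0])
    assert all(len(t) == n for t in tools), "each tool must label every item"
    agree = sum(1 for i in range(n) if len({t[i] for t in tools}) == 1)
    return agree / n

# Hypothetical labels for five items from four tools (illustrative only).
labels = {
    "MS":     ["pos", "neg", "neu", "pos", "neg"],
    "IBM":    ["pos", "neg", "pos", "pos", "neu"],
    "Google": ["pos", "neg", "neu", "neu", "neg"],
    "AWS":    ["pos", "neu", "neu", "pos", "neg"],
}
print(full_agreement_rate(labels))  # only the first item is labeled identically by all four
```

The same function works for pairwise agreement by passing only two tools' label lists, which is useful for locating which specific pair drives the disagreement.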
Jung, S.G., Salminen, J. and Jansen, B. J. (2022) Engineers, Aware! Commercial Tools Disagree on Social Media Sentiment. ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS22), 21-24 June. Sophia Antipolis, France. Article 153.