Warwick study reveals sentiment shifts in text rewritten by AI
Fri, 27th Oct 2023

Research from academics at the Gillmore Centre for Financial Technology at Warwick Business School has shown that Generative AI and Large Language Models (LLMs) introduce unintentional distortions into the sentiment of original text.

The study, entitled "Who’s Speaking, Machine or Man? How Generative AI Distorts Human Sentiment," found that the changes introduced by LLMs can render existing sentiment-analysis results unreliable. A consistent trend in the analysis is that LLMs shift the sentiment of text towards neutrality, dampening both positive and negative sentiments and substantially altering the original content.

Ashkan Eshghi, Houlden Fellow at the Gillmore Centre, commented: "Our findings reveal a notable shift towards neutral sentiment in LLM-rephrased content compared to the original human-generated text. This shift affects both positive and negative sentiments, ultimately reducing the variation in content sentiment."

He went on to explain: "While LLMs do tend to move positive sentiments closer to neutrality, the shift in negative sentiments towards a neutral position is more pronounced. This overall shift towards positivity can significantly impact the application of LLMs in sentiment analysis."

Expressing concern over this issue, Ram Gopal, Director of the Gillmore Centre, highlighted the potential biases introduced through the use of LLMs: “This bias arises from the application of LLMs for tasks such as paraphrasing, rewriting, and even content creation, resulting in sentiments that may diverge from those the individual would have expressed without LLMs being used.”

To help overcome this issue, the researchers proposed a novel mitigation method. It involves predicting or estimating the sentiment of original texts by analysing the sentiments of their rephrased counterparts. However, they also acknowledged the necessity of further exploration to understand whether other linguistic features of user-generated content would also change if AI were used.
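The article does not detail how this estimation works, but the general idea can be illustrated with a minimal sketch: on a calibration set where both the original human text and its LLM-rephrased version have been scored with the same sentiment analyser, fit a simple model that maps the rephrased text's sentiment back to the original's, then apply that model when only the rephrased text is available. The linear model, variable names, and synthetic scores below are illustrative assumptions, not the researchers' actual implementation.

```python
# Illustrative sketch only: estimate original-text sentiment from the sentiment
# of its LLM-rephrased counterpart via a simple calibration regression.
import numpy as np
from sklearn.linear_model import LinearRegression

# Calibration data: sentiment scores in [-1, 1] for text pairs where both the
# human original and the LLM rephrasing were scored with the same analyser.
# The scores here are synthetic, mimicking the reported pull towards neutrality.
rng = np.random.default_rng(0)
original = rng.uniform(-1, 1, size=200)                      # human-written sentiment
rephrased = 0.6 * original + 0.05 + rng.normal(0, 0.1, 200)  # compressed towards neutral

# Fit a mapping from rephrased-text sentiment back to original-text sentiment.
calibrator = LinearRegression()
calibrator.fit(rephrased.reshape(-1, 1), original)

# Given only rephrased text at inference time, recover an estimate of the
# sentiment the human author would likely have expressed.
new_rephrased_scores = np.array([[0.10], [-0.05], [0.40]])
estimated_original = calibrator.predict(new_rephrased_scores)
print(estimated_original)
```

In practice the calibration pairs would come from texts whose originals are known, and richer linguistic features of the rephrased text could replace the single sentiment score.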

Joining the conversation, Dr Yi Ding, Assistant Professor of Information Systems at the Gillmore Centre, said, “We have seen that there are around 180 million OpenAI monthly active users worldwide, and more businesses are jumping aboard the AI hype train, harnessing its usage as a business tool. Conducting this study, looking at the use of Generative AI alongside human sentiment, will play a critical role in LLM future developments, ultimately enhancing output, helping remove biases, and improving efficiency for anyone who uses it.”

The Gillmore Centre for Financial Technology specialises in research on the impact of emerging technologies on financial activities. The researchers' future plans include employing other predictive models to infer authentic human sentiment and proposing further mitigation strategies.

The Centre was launched by Warwick Business School, funded by a £3 million donation from Clive Gillmore. It conducts advanced research bridging the gap between finance and technology, targeting three core aspects: the consumer, the firm and broader society.