S Twatter: When Text-to-Speech Goes Off the Rails
In an embarrassing blunder, UK water company Severn Trent has learned a lesson about the limitations of text-to-speech systems: a routine robocall to customers about potential water discolouration during planned works went hilariously wrong.
A Reg reader received the automated call, standard procedure for planned works, warning of discoloured water and advising customers to run their taps for twenty minutes if necessary. The message was clear, if a bit robotic.
However, the robot took a tumble when it reached the URL for Severn Trent's website. The intended address was http://www.stwater.co.uk/discolouration, but the synthesizer carved up "stwater" and came out with "S Twatter" instead.
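One plausible failure mode (purely illustrative; the internals of Severn Trent's system are unknown) is a TTS front-end that word-breaks an unfamiliar hostname against a lexicon and finds more than one valid reading. A minimal sketch of the kind of ambiguity check that could have flagged "stwater" for human review, using a deliberately tiny toy lexicon:

```python
def segmentations(s, lexicon):
    """Return every way to split s into words drawn from the lexicon."""
    if not s:
        return [[]]  # one way to segment the empty string: no words
    results = []
    for i in range(1, len(s) + 1):
        head = s[:i]
        if head in lexicon:
            # head is a valid word; recursively segment the remainder
            for rest in segmentations(s[i:], lexicon):
                results.append([head] + rest)
    return results

# Toy lexicon for illustration; real TTS front-ends use far larger dictionaries.
LEXICON = {"s", "st", "water", "twat", "er"}

splits = segmentations("stwater", LEXICON)
if len(splits) > 1:
    # More than one plausible reading: route to a human before the robocall goes out.
    print("ambiguous hostname, needs review:", splits)
```

With this lexicon, "stwater" segments both as ["st", "water"] and as ["s", "twat", "er"], which is exactly the sort of ambiguity a pre-flight check could catch.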
The reader, likely concerned about the water quality, found that the address did at least redirect to a secure connection. The gaffe highlights the potential for unintended consequences when text-to-speech systems are deployed without human verification.
It's almost reassuring that, in an era of advanced artificial intelligence, these systems can still produce amusing blunders. Text-to-speech technology has a long history of such mishaps, from home computers with primitive speech synthesizers to interactive voice response (IVR) systems whose prompts needed validation to avoid confusing customers.
Severn Trent's blunder serves as a reminder that even with AI, human oversight is crucial. Calling themselves 'S Twatter' would be a bold move, especially in the UK, and a quality check on the robocall system might be in order.
The Register's coverage of this incident includes other amusing tech stories, such as Windows 2000's 'rust in peace' (https://www.theregister.com/2026/01/14/windows2000rustinpeace/), a moon hotel startup's ambitious plans (https://www.theregister.com/2026/01/13/moonhotelstartupreservation/), and Lego's innovative use of ASICs (https://www.theregister.com/2026/01/06/legocramsanasic_in/).
The moral, as ever: keep a human in the loop, or risk a mistake that is embarrassing at best and costly at worst.