Can contact centre quality assurance be automated with speech analytics?
The power of new technology is often hyped by business and media alike. The allure of a competitive advantage gained through early adoption of the latest software can be hard for many companies to resist, particularly those with large customer-facing operations. Into this arena come Artificial Intelligence and, more specifically, Speech Analytics.
In call and contact centres, speech analytics is increasingly being used to analyse recorded calls, gathering customer information to improve communication and future interactions. In simple terms, it works by reading wave (audio) files and processing them into text. The text is then added to data tables that may contain customer metrics, order history and feedback. By analysing specific words and phrases alongside the other data fields, relationships between customer behaviour and what agents have said can be identified.
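The core idea can be sketched in a few lines of code. The following is a purely illustrative example, not any vendor's actual implementation: the transcripts, phrase list and `sale_made` field are invented to show how phrase frequency in transcribed calls might be related to an outcome stored in the customer data.

```python
# Hypothetical sketch: once calls have been transcribed, phrase counts
# can be joined against outcome data to look for patterns.
# All names and records below are illustrative.

calls = [
    {"call_id": 1, "transcript": "thanks for calling, can I offer you our loyalty discount today",
     "sale_made": True},
    {"call_id": 2, "transcript": "your order is delayed, sorry about that",
     "sale_made": False},
    {"call_id": 3, "transcript": "we have a loyalty discount available if you order now",
     "sale_made": True},
]

target_phrases = ["loyalty discount", "order now"]  # phrases under analysis

def phrase_hits(transcript, phrases):
    """Count how many of the target phrases appear in a transcript."""
    text = transcript.lower()
    return sum(1 for p in phrases if p in text)

# Relate phrase usage to outcomes: average hits on calls with and without a sale.
with_sale = [phrase_hits(c["transcript"], target_phrases) for c in calls if c["sale_made"]]
without_sale = [phrase_hits(c["transcript"], target_phrases) for c in calls if not c["sale_made"]]

print("avg phrase hits (sale):", sum(with_sale) / len(with_sale))
print("avg phrase hits (no sale):", sum(without_sale) / len(without_sale))
```

A real deployment would of course operate on thousands of calls and far richer metadata, but the principle is the same: the analytics layer is a join between transcribed language and business outcomes.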
By identifying the language that drives increased sales and improves the customer experience, call centre companies can hone their scripts and better target their offers.
From a quality assurance perspective, speech analytics also allows a high volume of calls to be monitored, and therefore enables widespread analysis of strengths and weaknesses across contact centres. In theory, at least, it could help to reduce the number of staff a company employs, thus saving costs.
Whilst it could reduce QA headcount, the technology also requires a support team to keep it working and up to date, and these staff must be highly specialised given the relative complexity of the technology. Additionally, in-house analysts will be needed to interpret the vast volume of results.
Human v Machine
So this brings us to the heart of the issue: who is better at the call centre QA job, humans or machines? One of the arguments heard most often in the human vs. machine debate is the chess argument. It claims that if machines can beat humans at chess, a game of extreme complexity, foresight and mental dexterity, then they can be made more efficient than humans at almost any task.
The key difference between measuring human interaction and a game of chess, however, is two-fold. Firstly, and most simply, human interaction is not about beating your conversational partner in the art of verbosity. Secondly, and more relevantly, chess is a finite game with a finite ruleset, albeit one that produces millions of possible scenarios.
Human interaction can be like a game of chess where the rules are constantly in flux, where new pieces are added to the board without a second's warning, and where the tiles are forever changing colour. It is this level of complexity and, most importantly, ambiguity that makes the human mind better equipped to handle the rigours of contact centre QA.
Even a sphere as seemingly objective as GDPR compliance can be fraught with complexities. Often the appropriate questions are asked to establish the caller's identity, but the answer to the question is buried in the premise. At other times, customers provide unprompted answers, meaning the advisor is compelled to ask one question fewer. These more semantic complexities can, of course, be handled using computational linguistics technology, but at that point the solution is growing vastly more complicated than what can be achieved with a little human intuition.
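The unprompted-answer problem is easy to demonstrate. The sketch below is illustrative only (the question wordings and the `naive_compliance_check` rule are invented): a naive keyword rule insists that every scripted identity question was asked verbatim, so a call where the customer volunteered their details is wrongly flagged as non-compliant.

```python
# Illustrative only: a naive rule checks that every scripted identity
# question appears in the advisor's speech. A customer who volunteers
# their details defeats the rule, even though the call was compliant.

required_questions = ["can i take your date of birth", "can i take your postcode"]

def naive_compliance_check(advisor_lines):
    """Pass only if every scripted question was asked verbatim."""
    text = " ".join(advisor_lines).lower()
    return all(q in text for q in required_questions)

# Call A: the advisor follows the script exactly -> passes.
call_a = ["Can I take your date of birth?", "Can I take your postcode?"]

# Call B: the customer volunteered their postcode unprompted, so the
# advisor only needed one question -> the rule flags a compliant call.
call_b = ["Can I take your date of birth?", "Thanks, I have your postcode already."]

print(naive_compliance_check(call_a))  # True
print(naive_compliance_check(call_b))  # False, despite being compliant
```

Handling call B correctly requires understanding what was said, not merely which phrases occurred, which is exactly where rule-based matching starts to strain.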
Soft Skills and Subjectivity
When it comes to soft skill measures and subjective assessment, non-automated QA really proves its worth. Whilst word and speech patterns can be programmed into speech analytics software, this more socially geared side of customer-advisor interaction is better handled by socially attuned quality assurance agents, who can pick out friendliness, hostility or politeness without referring to a pre-defined rule set; a rule set which, to return to the chess metaphor, may confound pre-definition because of the nebulous, shifting quality of human interaction.
Additionally, the pattern recognition of speech analytics may be rigorous insofar as it can process high volumes of data at speed, but that sheer volume can lead to 'analysis paralysis': telling you what you already know without providing any real insight.
An Outsourced Human Solution
Whilst this may stray more into the realm of quality analysis, QA agents are able to provide new and insightful feedback week on week, devise potential solutions, proactively suggest changes to policies, and dynamically adjust to shifts in corporate priorities immediately.
This is the experience we provide to clients who have outsourced their call centre QA to Centrebound: a human solution to a uniquely human question. Our dedicated team of Manchester-based Quality Assurance Executives work closely together to calibrate results and share best practice, helping clients achieve the results they want.
If you would like to speak to one of our consultants about outsourcing your quality assurance team, please get in touch today.