
Wednesday, April 24, 2019

Listen to Your Heart (or your calls)

By Diana Aviles


I’ve chatted with speech analysts from all walks of life. They speak many languages and are versed in different speech analytics solutions. Despite some differences in approach, we all agree on the one thing we consistently struggle to get our organizations to understand: you still need to listen to the calls.

I can bucket, categorize and query the life breath out of everything you need, but at the end of the day it does not absolve you from some type of manual listening. I am notorious for saying that our goal in speech analytics is to help make insights easier to obtain, not to outright replace human listening.

What good is a category or query to an organization if you have no context outside of the key terms and phrases it was programmed with? How do you know if a process is succeeding or failing if you aren’t engaged with the actual agent and customer interaction? What actionable intelligence are you getting from static data? Very little to none. But you can change that.

Speech Analytics (SA) is an extremely interactive process. It is very exciting to get your hands dirty and discover all sorts of wild stuff you never would’ve found without SA. Sometimes it may feel like it takes forever, but simply adjusting your perspective to see it as being for the greater good of your organization will brighten your outlook and reveal vast benefits. Listening to 100-200 calls certainly will not injure you. There are different strategies for creating listening studies that are effective and efficient at any size.

The aspect of SA that makes us awesome is that we can pull insights from data and have the flexibility to think outside the box, exploring those insights on a deeper level than our more structured-data contemporaries. Speech analytics is about tying trends together to tell a story that can help an organization make business decisions. You are doing yourself a big disservice if you just sit around building out queries and running reports without trying to take a peek at the bigger picture.

So please, do yourself a favor and listen to your calls. I promise you it's worth every second of time you invest in it. 


Diana Aviles is a longtime speech analytics fan with a specialty in Nexidia Interaction Analytics.

She is a vocal speech analytics advocate with the primary objective to simultaneously promote and educate the world of Speech Analytics with a human touch; one which further emphasizes the importance of First Call Resolution and overall customer experience.

Connect: LinkedIn

Monday, April 1, 2019

Removing The Training Wheels - Self Service Solutions in Speech Analytics

By Diana Aviles


I feel like I often say, “One of the biggest challenges in speech analytics is...” in many of my pieces, since there are so many moving parts in what we do. From getting the SA program launched to maintaining it so that the organization continues to see value in the investment, it’s all a challenge.

However, something to think about is how you want your SA program to sustain itself in the long run. It can seem easier to keep all the knowledge of how to utilize the tool to yourself, but I have found that organizations that operate like this end up with very burnt-out speech analysts, because they become inundated with countless requests. They start feeling like the nerdy kid in school whom the popular kids only talk to when they want to copy their algebra homework. So how do you reduce this? With self-service solutions.

What does “self-service solutions” mean? It means you and your team will have to get comfortable teaching people who are interested in speech insights how to produce those results on their own. It might mean declining certain requests, but kindly offering a 30-minute consulting call to show them how to run the report themselves. You can also host a bi-weekly, one-hour “Ask Us Anything” call with your most frequent requesters to see what kind of things they are interested in looking at. “You wanna see AHT? Sure, we can show you how to run a report that shows this, and we can even set it to indicate outliers if you would like.” “You want to see how often an agent uses the proper closing greeting, filtered down by supervisor? Absolutely, we can show you how to set that up.” This gives you an idea of what the organization actually cares about, so you and your team can aim to be as proactive as possible with your insights.
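The hypothetical AHT-with-outliers report mentioned above can be sketched in a few lines of Python. Everything here is illustrative: the handle-time numbers, the agent names, and the one-standard-deviation cutoff are assumptions for the example, not features of any particular SA tool.

```python
from statistics import mean, stdev

# Hypothetical per-call handle times (seconds), keyed by agent.
calls = {
    "agent_a": [310, 295, 330, 305],
    "agent_b": [290, 315, 300, 320],
    "agent_c": [640, 610, 655, 625],  # unusually long calls
}

# Average handle time per agent.
aht = {agent: mean(times) for agent, times in calls.items()}

# Flag agents whose AHT sits more than one standard deviation
# above the mean of all agents' AHTs (an arbitrary example cutoff).
values = list(aht.values())
cutoff = mean(values) + stdev(values)
outliers = [agent for agent, value in aht.items() if value > cutoff]
```

In a real tool the equivalent would be a saved report or dashboard widget, but walking a requester through even this much logic helps them understand what “indicate outliers” actually means.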

Also, establishing self-sufficiency helps alleviate your team’s workload so that you can better focus on all the cool types of studies you’re dying to do. It also helps maintain value by having more than a select few people know how to use the tool. Remember, at the end of the day, as speech analysts you should be aiming to spread the word about speech analytics as much as possible. Always look to share your experiences with other people; those experiences often turn into valuable insights that improve ROI. Heck, even in my new role, I find myself still learning new tricks from my colleagues, because each of our experiences with speech analytics is unique.

Ultimately, this boils down to culture. If your organization’s culture is one where no one wants to step out of their comfort zone, unfortunately there’s not much you can do to fix that, but it is worth putting it out there and seeing who bites. I can guarantee you there is at least one person in your organization who would jump at the chance to see how the magic happens with speech analytics.


Diana Aviles, Senior Speech Analytics Consultant, Wells Fargo

Diana has been working in speech analytics for 8 years with a specialty in Nexidia Interaction Analytics.  She is a vocal speech analytics advocate with the primary objective to simultaneously promote and educate the world of Speech Analytics with a human touch; one which further emphasizes the importance of First Call Resolution and overall customer experience.

Follow Diana on LinkedIn.

Wednesday, January 16, 2019

Your Query Failed. Now What?

By Diana Aviles


Sometimes, you just get stuck. You sit at your desk trying to come up with every possible way a person can say something, hoping it will help your category or query pass validation. But like in Pac-Man, you keep getting hit by the ghosts and dying repeatedly. I think it’s important to slow things down and be honest with everyone: even us seasoned veterans have our tricky builds. [Please note: I am going to use the term “build” to refer to query/category building in order to maintain neutrality in this piece.]

Frustration with building is often one of the major reasons organizations pull out of speech analytics programs. It’s like Super Mario Bros. 2: the one we have here in the US, where you run around throwing radishes, is technically not the original version.

Nintendo of Japan thought the sequel should continue where the player left off in the last game, following a traditional path of progressive difficulty. The end result was that Nintendo of America said “no thank you” and created the version of SMB2 that we are more familiar with. Outside of showing how dorky I am, why am I mentioning this? The frustration Nintendo of America experienced with that game is often similar to what some speech analysts experience when they get stuck on a tricky build. So while there is no real-life speech analytics equivalent of a Game Genie, I came up with a list of things that might help you get past the level.

  • Your volume may be too low – If the subject you are building for does not drive a lot of volume, you are entering needle-in-a-haystack territory. You cannot make hits appear out of thin air. In these situations, it’s important to communicate that to your requestor to set expectations. I have found it can help to offer them a monitoring period to observe volume and see if it improves somewhere down the line. You would largely do this in the form of ad-hoc searches or term lists, if your software offers the option.

  • Cross-talk interference – This is one that burned me recently. Sometimes, if you are looking for something on a specific line, noise on one line can bleed into another, causing the appearance of cross talk. This can result in a missed hit in your build. Speaker separation relies on high-quality, clear audio to differentiate who is who. This is another situation you should communicate to your requestor, after taking your best shot.

  • Too complicated for your own good – In a previous article, I suggested building is comparable to a good marinara sauce: you have to mix a bit of this and that, and everything has to be balanced. Some builders get too complicated, and it hurts the performance of the build. Remember to keep your builds to one topic at a time. I’m also going to call out builders who try to capture the most specific of items in their build. I knew someone who was getting hammered trying to build for a specific issue that was supposed to capture a specific type of change being made on accounts, but without a specific piece of information being verified. I don’t know if it was ever built successfully, but again, it’s really important, as a builder, to keep things simple and educate end users.

Finally, I want you to remember that frustration is normal. Do not get discouraged, or destroy company-owned property. A stubborn query/category build does not mean that you or the software is substandard. This comes with the territory. If you have speech analytics mentors, talk with them and see what advice they might have for you. If you do not have any mentors and you are reading this, please contact me. I’ll be Luigi to your Mario, even though I like Princess Peach because she floats.

I promise, I am done with all the retro gaming references.

Diana Aviles has been working in speech analytics for 8 years with a specialty in Nexidia Interaction Analytics.

She is a vocal speech analytics advocate with the primary objective to simultaneously promote and educate the world of Speech Analytics with a human touch; one which further emphasizes the importance of First Call Resolution and overall customer experience.

Follow Diana on LinkedIn.


Monday, May 7, 2018

Ingestion Indigestion

By Diana Aviles




Ingestion is Speech Analytics (SA) jargon for the SA tool downloading a copy of call audio and its associated metadata from the recorder source. In a perfect world, ingestion runs smoothly; however, just like people who get indigestion after binging on buffalo chicken pizza, ingestion sometimes has hiccups. Ingestion problems can be tricky, since it takes a bit of research to pinpoint where the hiccups came from.

These problems often lead to stressful situations between the speech analytics software company and the end-user group. Many end users are tempted to just pin these problems on the SA software company, since the idea is “well, it’s broken, so fix it.” But the SA software company has only so much visibility into the end user’s side of the wall. That is why it is very important to do research on your end to confirm what the issue really is. Now you may be thinking, “Well, how the hell do I do that!?”, and that’s where I come in. I specialize in ingestion, so I have seen my fair share of ugly and uglier when it comes to “ingestion indigestion,” and I want to offer you some tips on how to keep your tools running smoothly.

If you are not receiving a “Disposition Report” – start getting one and have it sent daily:


I just heard a few groans from some of my friends who work for SA software vendors. To some this may seem like an extra step, but this is precisely where we need to start. A disposition report tells you everything the software vendor has received for ingestion; it’s a giant receipt that shows you what you are paying for. You will want to receive it daily so you can monitor days that show abnormally high or low volume. You also want it broken down by the various stages of the ingestion process, to show whether calls have gotten stuck along the way. This document is your best friend when you are investigating ingestion problems.
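As a rough illustration of how a disposition report can be monitored, here is a minimal Python sketch. The status names and the pending-vs-ingested rule are assumptions for the example; real reports use vendor-specific categories and formats.

```python
from collections import Counter

# Hypothetical rows from a daily disposition report: (date, status).
report = [
    ("2018-05-01", "ingested"), ("2018-05-01", "ingested"),
    ("2018-05-01", "pending"),
    ("2018-05-02", "ingested"), ("2018-05-02", "rejected"),
    ("2018-05-03", "pending"), ("2018-05-03", "pending"),
    ("2018-05-03", "pending"),
]

# Break the report down by status, as suggested above.
by_status = Counter(status for _, status in report)

# Count statuses per day, then flag any day where pending volume
# exceeds ingested volume -- a simple example of spotting calls
# stuck mid-process.
per_day = {}
for date, status in report:
    per_day.setdefault(date, Counter())[status] += 1

stuck_days = [d for d, c in per_day.items()
              if c["pending"] > c["ingested"]]
```

The point is not the code itself but the habit: a daily breakdown by status makes abnormal days jump out instead of hiding in a grand total.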

You should always know where your data sources are:

Your speech analytics end-user team should all have some familiarity with where you pull your calls and metadata from for ingestion. You guys don’t need to be on a first-name basis with the jargon and process, but you need to know how to spot the more obvious ingestion faults. Take metadata pulled from the billing systems: you want your calls to show metadata for the customer’s account balance. If you are running searches and notice most of these calls are not showing any billing system data, that should be a red flag to investigate further. The more people on your team who know how to spot these problems, the less stressful troubleshooting becomes. This also helps clarify where fault may lie on ingestion issues: with the SA end-user team or with the SA software.

Abnormally low/high volume – “Compare before you declare”:

When certain sites in your tool show higher than usual volume, run a comparison against the same day of the week for that site from a prior week. First, see if the spike in volume can be attributed to a specific external driver, such as a service outage. If there is no external issue, run a search to see if there is leakage of audio that has no sound or is below the minimum ingestion threshold (typically audio under one minute in duration, which would not normally be ingested into the tool). Another possible cause is language lines that are not supposed to ingest into your respective language pack, e.g., Spanish VDNs coming into English sessions.

For low volume, do the same comparison against the same day of the week from a prior week for the affected site. You may also want to verify the numbers in the master recorder or switch to see if the volume is low on that end. If your switch indicates volume that looks right for that location, check the disposition report I mentioned above to see if that site has calls stuck in a pending status. If something is held up, this is about the time to loop in your speech analytics software vendor for additional support.
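The “compare before you declare” check can be sketched like this; the site names, call counts, and one-minute threshold are illustrative assumptions, not values from any specific tool.

```python
from datetime import date, timedelta

# Hypothetical daily call counts per site; real numbers would come
# from your tool or the master recorder/switch.
volume = {
    ("site_a", date(2018, 5, 7)): 1000,   # a Monday
    ("site_a", date(2018, 5, 14)): 1600,  # following Monday -- spike
}

def weekly_change(site, day, counts):
    """Compare a day's volume against the same weekday one week earlier."""
    prior = counts.get((site, day - timedelta(days=7)))
    current = counts.get((site, day))
    if prior is None or current is None:
        return None
    return (current - prior) / prior

change = weekly_change("site_a", date(2018, 5, 14), volume)

# Separately, check for leakage of audio under the ingestion
# threshold (one minute is cited above as a typical minimum).
durations = [45, 120, 30, 300, 15]  # call durations in seconds
too_short = [d for d in durations if d < 60]
```

Comparing like-for-like weekdays matters because Monday and Saturday traffic rarely resemble each other; a raw day-over-day comparison would cry wolf constantly.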

Stuck? Pending? Rejected? What does it all mean?

Remember that disposition report I keep mentioning? Ideally, it will have your different ingestion statuses broken out by category. When something is “stuck,” we generally mean it has not reached a completed category or been ingested into the tool. Stuck volume will show in a pending category and will indicate where in the process it’s stuck: it might be held up waiting to be referenced against the personnel management tool, or waiting for the associated media to be assigned to it. A good speech analytics team will have a process in place to sound the alarm when too much volume sits in a pending state for a specified period (this largely depends on how often your tool ingests media, as you can ingest same-day or up to 3 days after media is recorded). You also need to periodically review rejected categories to ensure you are not losing good volume to incorrectly grouped queues.
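The alarm process described above might look something like this sketch; the three-day age limit and the alarm threshold are example values you would tune to your own ingestion schedule, and the call IDs are made up.

```python
from datetime import datetime, timedelta

# Hypothetical pending-queue entries: call ID mapped to when the
# call entered the pending state.
now = datetime(2018, 5, 7, 12, 0)
pending = {
    "call-001": datetime(2018, 5, 7, 11, 30),  # just arrived
    "call-002": datetime(2018, 5, 4, 9, 0),    # stuck for days
    "call-003": datetime(2018, 5, 3, 15, 0),   # stuck for days
}

MAX_AGE = timedelta(days=3)  # matches the 3-day ingestion window above
MAX_STUCK = 1                # sound the alarm past this many overdue calls

overdue = [cid for cid, since in pending.items() if now - since > MAX_AGE]
alarm = len(overdue) > MAX_STUCK
```

Even a crude check like this, run daily against the disposition report, catches the slow buildup of stuck volume long before an end user notices missing calls.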

This is a general tip list for investigating ingestion problems within your tool. Keep in mind that this advice is not intended to replace technical support. As always, it is my goal to get more people involved in speech analytics along with getting existing speech analytics users further engaged with their chosen tools to get the best actionable intelligence from their insights.



Diana Aviles is an Operations Manager with more than 5 years of Quality Assurance experience in a call center environment. Diana's objective is to simultaneously promote and educate the world of Speech Analytics with a human touch; one which further emphasizes the importance of First Call Resolution and overall customer experience.

Follow Diana on LinkedIn.




Monday, January 29, 2018

Audio Deep Dives: Listening Analysis Made Simple

By Diana Aviles



The core part of Speech Analytics that sometimes gets lost amongst the high-powered metadata and reporting functionalities is the audio insights themselves. The whole purpose of SA is the ability to analyze specific words and phrases mentioned in customer/agent interactions. Among SA newcomers, I have found that once the dust settles from all of the extensive training, a feeling of “What’s next for me?” begins to settle in.

When we talk about “deep dives,” or listening analysis, we generally mean taking a random sample of calls and listening for specific criteria within the audio. Sample sizes can vary from 25 calls to 10,000 (yes, I have randomized 10,000 calls before). The criteria you can look for are endless, and that’s oftentimes where people get overwhelmed: there is the fear of looking for too much or too little in a deep dive, and there are complicated grey areas you will need to account for. Here are some dos and don’ts for making deep dives a bit easier to manage.



Don't

  • Create “mile-long wish lists” - It’s tempting to look at every little thing in one go, but depending on the type of deep dive you’re doing, it is recommended you look at the data in phases. Rome wasn’t built in a day, and neither are your insights.
  • Randomize/size the project incorrectly - Samples require balance. If you are trying to look at data from two different markets, and the majority of your sample reflects only one of them, it goes without saying that your data is tainted.
  • Improperly account for out-of-scope (OOS) data - Some listening analyses will have criteria that cannot be counted in the main pool of data. For example, if you’re deep diving into cable box issues and you encounter a caller having issues with his phone service, his call does not meet the requirements for the project and must be bucketed to indicate that. Going back to the prior point about proper randomization and sizing, you need to account for OOS data by making sure your sample is 20-30% above the total amount you’re looking for. Example: for a 100-call listening project, send over 120 calls to account for the possibility of OOS data.
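The oversampling rule from that last point can be sketched in a few lines of Python; the call IDs and pool size are made up for illustration.

```python
import random

# Hypothetical pool of call IDs eligible for the listening study.
population = [f"call-{i:05d}" for i in range(5000)]

TARGET = 100       # calls you actually need reviewed
OOS_BUFFER = 0.20  # 20-30% padding for out-of-scope calls

# Pull 120 calls for a 100-call project, per the example above.
sample_size = int(TARGET * (1 + OOS_BUFFER))
sample = random.sample(population, sample_size)
```

`random.sample` draws without replacement, which is what you want here: the same call should never be reviewed twice within one study.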

Do

  • Keep questions clear and concise - It is important to keep the wording of your questions or standards clear in order to avoid confusing auditors (if you are performing a listening analysis with more than one person). You want to avoid causing people to second-guess how they observe and document information.
  • Have job aids available for reference - I work mainly in telecom, and we deal with a lot of technical issues. While I am pretty seasoned with troubleshooting most issues, I like to have a reference for items I seldom come across and may be rusty on. If you outsource your listening analysis, this is also critical, as the listening team may not be as familiar with the line of business they are auditing as you are.
  • Maintain uniform data - There is nothing more annoying than data that is all over the place. I am a fan of using conditional drop-downs in Excel to restrict what is entered in a cell, permitting only certain cells to have open text. I recommend the core and secondary drivers you are looking to capture be placed in a drop-down for this reason. I also recommend avoiding heavy use of “other” as a driver, to prevent data pollution.
  • Require high-level summaries of calls reviewed - I ask for these for two reasons. Reason #1: when it comes to “scrubbing,” or cleaning up the data (before I start building charts and reporting against it), the summaries let me see any major trends and observations captured OUTSIDE of the main listening project. Reason #2: to ensure the audio in question was ACTUALLY reviewed. In some studies I ask for a time stamp showing where the criterion in the call was hit, as a method of maintaining data integrity.
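A uniformity check along the lines of the “maintain uniform data” point can be sketched in Python; the driver names and the 25% “other” threshold are illustrative assumptions, not rules from any tool.

```python
from collections import Counter

# The drop-down of allowed call drivers; names are made up for
# illustration.
ALLOWED = {"billing dispute", "equipment issue", "service outage", "other"}

# Driver values as entered by auditors; one entry breaks the casing
# convention and so falls outside the drop-down list.
rows = [
    "billing dispute", "equipment issue", "equipment issue",
    "service outage", "other", "other", "other", "Billing Dispute",
]

# Anything outside the drop-down is a data-uniformity problem.
invalid = [r for r in rows if r not in ALLOWED]

# Heavy use of "other" pollutes the data, so flag it past an
# arbitrary example threshold of 25% of rows.
counts = Counter(rows)
other_heavy = counts["other"] / len(rows) > 0.25
```

Excel’s data validation prevents most of this at entry time; a scripted pass like this is a backstop for data that arrives from outsourced or merged sheets.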


There are other related topics relevant to deep dives, such as presenting and “data cleansing,” which may be subjects for later articles. These are a few general suggestions for people who are beginning their Speech Analytics journey and looking to start on high-level deep dives. Once you get a few deep dives under your belt, they will become second nature to you. The goal is to make sure that all your data insights make sense and can be organized in an efficient and concise manner.

Editor’s note: This article was originally posted on LinkedIn.

Diana Aviles has more than 5 years of Quality Assurance experience in a call center environment. Her objective is to simultaneously promote and educate Speech Analytics with a human touch; one which further emphasizes the importance of First Call Resolution and overall customer experience.

Follow Diana on LinkedIn.