Stop the Noise: Assessing End-User Experience
With so many KPIs and metrics for tracking security and performance over networks, clouds, clients, and applications, it’s difficult to see how SecOps and NetOps engineers ever rise above the noise.
VIAVI Solutions takes a different approach: eliminate the noise and improve engineer effectiveness in solving issues. An example of this in Observer is the End-User Experience (EUE) Score, powered by machine learning, or what Gartner calls Artificial Intelligence for IT Operations (AIOps).
Operations teams need to quickly understand which users are negatively impacted by degraded services and—more important—where the problem root cause resides. What they don’t want is a bunch of flashing false-positive red indicators (aka…noise) or nagging doubts about missing real service problems (false negatives). This is where the Observer EUE Score comes to the rescue.
The EUE Score analyzes every network conversation in real time, leveraging sophisticated analytics that automatically learn the unique characteristics of your environment and adjust scores to reflect what the actual user is experiencing. Real problems are instantly detected, and the noise is silenced.
To show how End-User Experience Score assessment improves when machine learning is applied to an IT environment and application, let's look at an example. In the screenshot below, the machine learning function was temporarily disabled to illustrate the EUE Score without machine learning. In the second image, machine learning has been activated to provide a more accurate view of user experience.
Figure 1 – EUE Score Without Machine Learning
Figure 1 illustrates an encrypted conversation with an End-User Experience Score of 3.0 (critical), and a corresponding domain problem callout tied to the server. It's important to note that end-user experience scoring algorithms do not require access to the encrypted payload data. The associated ladder diagram on the right is a detailed visualization of the conversation.
Upon further analysis of the specific conditions of this application environment, the performance issues did not impact user experience to the degree implied by the score in the above screenshot. While the score should have reflected degraded performance, the issue did not merit a critical rating (a false positive).
In Figure 2, where machine learning has been activated, the impact of performance degradation on user experience is accurately assessed.
Figure 2 – EUE Score with Machine Learning Activated
This screenshot shows the same transaction scored using machine learning. In this case, the algorithm automatically "learned" (from this and previous conversations) that the observed application behavior represented only minor degradation in the real-world user experience (from a "perfect" score of 10 to 8.1), likely imperceptible to the user. A score of 8.1 is not perfect, but it isn't a critical issue that needs immediate action. Good-bye false alarms!
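Observer's actual EUE algorithm is proprietary, but the general idea of baseline learning can be sketched in a few lines. The hypothetical `BaselineScorer` below (all names and the 0-to-10 mapping are illustrative assumptions, not VIAVI's implementation) learns a running mean and variance of response times from past conversations using Welford's algorithm, then scores a new sample by how far it deviates from that learned "normal." A sample near the baseline keeps a high score even if it would trip a static threshold; only genuine outliers are scored as critical.

```python
import math
from dataclasses import dataclass

@dataclass
class BaselineScorer:
    """Hypothetical sketch of baseline-learned scoring.

    Maintains running mean/variance of response times (Welford's
    algorithm) and maps deviation from the baseline to a 0-10 score.
    """
    count: int = 0
    mean: float = 0.0
    m2: float = 0.0  # sum of squared deviations from the running mean

    def observe(self, response_ms: float) -> None:
        # Fold one new conversation sample into the running statistics.
        self.count += 1
        delta = response_ms - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (response_ms - self.mean)

    def score(self, response_ms: float) -> float:
        # 10 = at or below the learned baseline; the score drops as the
        # sample deviates further above it (illustrative scale only).
        if self.count < 2:
            return 10.0  # not enough history to judge yet
        std = math.sqrt(self.m2 / (self.count - 1))
        z = max(0.0, (response_ms - self.mean) / (std or 1.0))
        return round(max(0.0, 10.0 - 2.0 * min(z, 5.0)), 1)

scorer = BaselineScorer()
for sample_ms in [100, 110, 95, 105, 102, 98, 107, 101]:
    scorer.observe(sample_ms)

print(scorer.score(104))  # near baseline: stays a high score
print(scorer.score(400))  # far outside baseline: scored as critical
```

The key design point mirrors the Figure 1 versus Figure 2 contrast: the same raw measurement can yield different scores depending on what the environment's learned baseline says is normal, which is what suppresses false positives.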
Key takeaway: Today’s operations teams must be efficient and show bottom-line effectiveness. To accomplish this, they need definitive answers to:
- Is there a problem?
- Where is it located?
- What is the resolution?
That's what the Observer End-User Experience Score delivers: visibility into real issues and how to solve them. Millions of network conversations can be analyzed and scored automatically, pointing operations teams to only those transactions where there is a problem. Why get distracted by the noise when you can stop it?
Lerner, Andrew. (2017, August 9). AIOps Platforms. Gartner Blog Network. Retrieved April 3, 2019, from https://blogs.gartner.com/andrew-lerner/2017/08/09/aiops-platforms/