Are you seeing what I’m seeing?
With application workloads moving to the cloud, how do IT teams relying on traditional performance metrics gain accurate insight into user experience in these new hybrid IT environments? Network and application performance monitoring have become the de facto standard for meeting service delivery expectations for enterprise IT. But if IT operations personnel aren’t seeing the same picture as the user, how can they see eye to eye?
Enterprises are investing in their networks at an accelerating rate. As legacy IT on-premises infrastructure gives way to hybrid cloud and virtualized environments, and an escalating data tsunami drives data center expansions, increasing investments of time and money are raising the stakes ever higher. Unfortunately, end users’ expectations for service are growing as well, piling additional demands onto network operators and engineers who are already wrestling with network migration challenges.
Yet despite this rapidly changing enterprise networking environment, IT support teams are still using the same network performance metrics to monitor their networks and evaluate whether service delivery is up to par. The problem is that they're using a one-dimensional tool to measure a subjective experience the tool was never designed to understand, much less help troubleshoot. It's like trying to tighten a screw with a hammer.
It’s no surprise that Forrester Research found that roughly one-third of performance (aka user-experience) issues take more than a month to fix — or go unresolved entirely.
Today, many IT support teams are trying to manage their networks in a vacuum, without sufficient visibility. For example, troubleshooting applications from vendors such as Oracle or Microsoft requires a clear understanding of exactly what the user is experiencing. Too often, IT is tasked with fixing an issue in a particular application that has its own behavior and characteristics, only to find they lack the necessary insight. In the end, the team concludes that the application's response time is just too slow, or that latency is just too high, with little to no context on how the end user actually experienced the application's performance.
As a result, both end-users and network support teams are increasingly frustrated. In fact, Gartner analysts noted during the 2017 Gartner IT Infrastructure, Operations Management & Data Center Conference that 79 percent of network engineers were dissatisfied with trying to assess user experience in the cloud based on performance metrics. In addition, Gartner reported that more than 50 percent of network engineers admitted to being “blind to what happens in the cloud,” while 32 percent indicated having large visibility gaps.
In a typical scenario, everything across the network monitoring dashboard may be green (availability, jitter, packet loss, latency and so on), yet the user is still having a poor experience. This could stem from many small nuances in performance, or from multiple pieces of infrastructure that each operate within spec independently but perform poorly in aggregate. Either way, the network operator can't see the issues that users are seeing.
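The "all green, yet slow" scenario is easy to reproduce on paper. The sketch below uses entirely hypothetical per-segment latencies and SLA thresholds: every segment passes its own check, so a per-metric dashboard shows green across the board, but the end-to-end total still blows the latency budget the user actually perceives.

```python
# Illustrative sketch (hypothetical numbers): each network segment meets its
# own latency SLA, yet the end-to-end total still breaks the user's budget.

# Per-segment (observed latency, per-segment SLA), in milliseconds.
segments = {
    "LAN":       (8.0, 10.0),
    "WAN edge":  (18.0, 20.0),
    "Cloud VPC": (28.0, 30.0),
    "App tier":  (45.0, 50.0),
}

END_TO_END_BUDGET_MS = 80.0  # what the user actually perceives as "fast"

total = 0.0
for name, (observed, sla) in segments.items():
    status = "OK" if observed <= sla else "BREACH"
    print(f"{name:10s} {observed:5.1f} ms (SLA {sla:.1f} ms) -> {status}")
    total += observed

print(f"End-to-end: {total:.1f} ms (budget {END_TO_END_BUDGET_MS:.1f} ms)")
print("Dashboard all green?", all(o <= s for o, s in segments.values()))
print("User experience OK? ", total <= END_TO_END_BUDGET_MS)
```

Every per-segment check prints OK, but the 99 ms total exceeds the 80 ms budget, which is exactly the gap a metric-by-metric view cannot show.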
Armed with only a one-dimensional view, IT teams can do little more than make rough guesses at the cause of the issue. This might lead to an escalated 'war room' conversation, but after wasting precious time, the user is still experiencing problems. If you're not monitoring the network from the perspective of users, it's nearly impossible to ensure optimal user experience. The truth of the matter is that performance metrics don't equal user experience.
On the same page
Traditionally, the only way to really understand what a user is experiencing has been to see for yourself, either in person or by taking control of their PC remotely. But with the current reality of reduced budgets, shrinking staff resources and increased responsibilities, those days are a thing of the past, particularly given that more than 90 percent of enterprises have at least some portion of their workforce accessing network or application services remotely, often on devices the IT team doesn't support, thanks to BYOD trends.
As the consumerization of technology results in heightened user expectations, the gap between users’ experiences and their expectations continues to widen. And if network support teams can’t get a handle on troubleshooting end-user experience issues, this gulf will continue to consume an increasing number of resources.
Fortunately, advancements in machine learning (ML) and artificial intelligence (AI) technology are making it easier to get ahead of service delivery issues and streamline the workload. The adoption of algorithmic measurements enables event data to be collected in real time across both remote and on-premises environments. Combine that with a truly adaptive ML approach to data analysis, and the help desk may even be aware of looming user experience issues before the users themselves. This not only speeds up the task of tracing and identifying issues, but also significantly reduces mean time to repair (MTTR).
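The core idea behind such algorithmic monitoring can be sketched very simply: learn a baseline from a stream of measurements and flag readings that drift well outside it. The toy monitor below (hypothetical data and thresholds; real products use far richer models) keeps rolling statistics on application response times and raises a flag when a reading deviates by several standard deviations, often before a user would file a ticket.

```python
# Minimal sketch of baseline-based anomaly detection on a response-time
# stream. Hypothetical class and numbers, not any vendor's actual product.
from collections import deque
from statistics import mean, stdev

class BaselineMonitor:
    def __init__(self, window=20, threshold=3.0):
        self.samples = deque(maxlen=window)  # sliding baseline window
        self.threshold = threshold           # alert at N standard deviations

    def observe(self, response_ms):
        """Return True if the reading is anomalous versus the baseline."""
        anomalous = False
        if len(self.samples) >= 5:  # need a few samples before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(response_ms - mu) > self.threshold * sigma:
                anomalous = True
        if not anomalous:
            self.samples.append(response_ms)  # learn only from normal traffic
        return anomalous

monitor = BaselineMonitor()
# Steady baseline around 100 ms, then a sudden degradation to 400 ms.
for ms in [100, 98, 103, 101, 99, 102, 100, 97, 104, 101]:
    monitor.observe(ms)
print("Spike flagged:", monitor.observe(400))  # -> Spike flagged: True
```

The design choice of learning only from non-anomalous readings keeps a burst of bad measurements from quietly becoming the new "normal."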
One step ahead
As IT network technology continues to evolve, the tools and processes that we rely on to maintain and optimize those networks need to evolve as well if we want to stay a step ahead of performance degradation. By leveraging adaptive intelligence to mine and analyze network data in real-time, IT teams can monitor critical services and network health in order to assess and optimize user experience.
Equally important, however, is the need to take a step back and reassess our traditional assumptions and habits. With tools available to monitor actual user experience, network operators no longer need to waste time trying to decipher an issue from incomplete data, and they should not presume they know exactly what the user is experiencing. As long as we cling to outdated assumptions, IT support teams and enterprise end-users will never see eye to eye.
This post originally appeared in NetworkWorld on September 11, 2018.