Imagine any corporation with public Internet connectivity, or with offices across Canada connected by a private data communication (network) infrastructure using WAN links, MPLS or the Internet, with or without VPN tunnels.
This could be the application performance as perceived by your end-user: “Slow, unpredictable, no one knows what’s going on.”
The IT department, meanwhile, believes that the quality of service delivered is spot-on: "all lights are green, we don't have any problems, all systems are up".
It is not too difficult to spot the difference between the two pictures. Allow me to sketch some scenarios that could have led to this situation.
· The IT department responsible for the "highway" connections between the two corporate locations assumed that a connection with four traffic lanes would be sufficient.
· The old infrastructure was, according to the usage reports of the service provider, running at only 50% utilization. When the contract came up for renewal, the service contract manager ordered half the bandwidth.
· The application support team was confident that the server system serving the local users could also serve the remote-office users on "the other side" of the network connection, without ever having tested the network-friendliness of the application.
· The Marketing business unit launched a new customer-contact campaign that required the Marketing team in the remote office to interact online with web-based customers through the corporate Internet gateway.
· The EDP auditor, as part of the annual risk audit, advised that all data residing on every corporate server be backed up to a central, secure storage system.
· The corporate ERP application had to be implemented before year-end, so end-user testing was limited to a functional test.
Monitoring and managing these individual technology silos, such as databases, web servers, networks, storage and applications, does not guarantee that the end-user gets delivered what was agreed upon, nor does it say anything about the end-user's perception or expectations. Each "silo manager" may conclude that he (or she) delivers in accordance with the agreed terms, yet the end-user is confronted with all of the individual events and hiccups in the application chain combined.
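To see why every silo can honestly report green lights while the end-user still suffers, consider a rough back-of-the-envelope calculation. The availability figures below are purely illustrative, not measurements from any real environment:

```python
# Purely illustrative availability figures - each "silo manager" can
# truthfully report meeting a 99.5% target for his or her own silo.
silo_availability = {
    "network": 0.995,
    "web server": 0.995,
    "application": 0.995,
    "database": 0.995,
    "storage": 0.995,
}

# An end-user transaction only succeeds when every silo in the chain works,
# so the experienced availability is the product of the individual figures.
chain_availability = 1.0
for availability in silo_availability.values():
    chain_availability *= availability

print("Every silo reports 99.5% availability ('all lights are green')")
print(f"The end-user experiences roughly {chain_availability:.1%} availability")
# Roughly 97.5%: every individual hiccup in the chain lands on the end-user.
```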
Each discipline oversees only its own technology or functional silo, but it is at the end-user where the performance pain is felt. Not surprisingly, studies show that more than 70% of all performance problems are reported by the end-user.
Time to analyze this phenomenon. The key lies in actionable performance information. Only with actionable performance information are organizations able to tackle application and end-user performance problems and challenges.
Creating this performance information starts with gathering performance data. This sounds like a trivial conclusion, but it is far from it. The challenge starts with choosing a technology (or two) to obtain the required data. So, where to start?
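To make this a little more concrete, here is a minimal sketch of one way to start gathering such data: timing a synthetic end-user transaction against an application from the user's side of the network. The URL and response-time threshold are hypothetical placeholders, and this is only one of several possible measurement technologies:

```python
"""Minimal sketch: measure end-to-end response time as the end-user sees it."""
import time
import urllib.request

APP_URL = "https://intranet.example.com/erp/login"  # hypothetical application entry point
THRESHOLD_SECONDS = 2.0                              # hypothetical response-time target


def probe(url: str) -> float:
    """Return the end-to-end response time, in seconds, for a single request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()  # include transfer time, just as the end-user would experience it
    return time.perf_counter() - start


if __name__ == "__main__":
    elapsed = probe(APP_URL)
    status = "OK" if elapsed <= THRESHOLD_SECONDS else "SLOW"
    print(f"{APP_URL}: {elapsed:.2f}s ({status})")
```

Run from the remote office, a probe like this measures the whole application chain at once rather than any single silo, which is exactly the perspective the end-user has.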
[more in part 2, to be published soon]