Virtual Instruments wants to be “the brain of the datacentre”, bringing together monitoring of storage, compute, networks and applications through artificial intelligence (AI) and machine learning (ML) so that storage performance issues can be spotted as they happen, if not before.
The company’s long-term vision is that IT staff will be able to look at one of its dashboards and see how issues across the IT hardware and software stack affect each other.
In this way, it hopes to banish the concept of the “war room”, in which the customer and multiple suppliers work out the root cause of issues, with a lot of finger-pointing along the way.
That’s according to Virtual Instruments’ chief marketing officer, Len Rosenthal, in conversation with Computer Weekly this week.
He was speaking in the wake of Virtual Instruments’ release of version 6.0 of its VirtualWisdom monitoring platform and WorkloadWisdom testing suite.
One key addition in that release was the ability to proactively monitor the entire IT stack, from applications to back-end hardware. This capability comes from built-in software interfaces to application performance management (APM) tools such as AppDynamics, Dynatrace and ServiceNow.
These have been knitted together with Virtual Instruments’ existing capabilities in real-time hardware monitoring, with artificial intelligence and machine learning that can help spot issues before they arise.
Virtual Instruments started out monitoring big-iron Fibre Channel SAN infrastructures with physical taps into the fabric to interrogate latency, read, write and other key storage metrics.
Virtual Instruments’ dashboard shows application issues and connections to servers, storage and networks
It still has that capability, but has added network-attached storage and object storage monitoring, as well as some capability to measure performance in the Amazon Web Services (AWS) and Microsoft Azure clouds at the level of virtual machines (VMs) but not underlying hardware.
Elsewhere, it gathers metrics such as bandwidth, port utilisation and health from networks and common compute-level measures such as CPU, memory, host bus adapter (HBA) and network interface card (NIC) utilisation from servers.
It claims to measure around 300 metrics in total.
It also claims to be unique in the market on the basis of its roots in hardware monitoring, which software-based competitors such as SolarWinds do not have.
To those 300 metrics it has added the ability to cross-reference them with the customer’s applications.
So, for example, customer IT teams can be alerted to issues as they happen, and potentially – as the AI learns the patterns prevalent in the customer system – before they happen.
Customers can see, for example, where an application response time is slowing and easily trace that to contention for virtual server compute capacity. The customer will then be presented with recommendations, such as, “Move VMs to a different ESX cluster”.
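The kind of cross-referencing described above can be sketched as follows. This is a purely illustrative example, not VirtualWisdom’s actual logic: the function name, thresholds and recommendation strings are all assumptions, chosen to mirror the “move VMs to a different ESX cluster” advice mentioned in the article.

```python
# Hypothetical sketch: correlate an application's response-time slowdown with
# host-level CPU contention and suggest a remediation. All names and
# thresholds are illustrative, not Virtual Instruments' API.

def recommend(app_latency_ms, host_cpu_pct, latency_slo_ms=200, cpu_limit_pct=85):
    """Return a recommendation when app latency breaches its SLO and the
    hosting cluster's CPU utilisation suggests compute contention."""
    if app_latency_ms <= latency_slo_ms:
        return "No action: response time within SLO"
    if host_cpu_pct > cpu_limit_pct:
        return "Move VMs to a different ESX cluster (compute contention)"
    return "Investigate storage/network path (latency high, CPU normal)"

print(recommend(app_latency_ms=450, host_cpu_pct=92))
```

The point of the sketch is the correlation step: a latency breach alone is ambiguous, but combined with a compute-utilisation reading it can be traced to contention and turned into an actionable recommendation.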
Virtual Instruments’ predictive capacity view shows when hardware needs to be upgraded
According to Rosenthal, the AI’s machine learning is all about “spotting anomalies from the norm; it’s about pattern recognition”.
“When deployed it will be set to thresholds and alerts based on 15 years of previous deployments. Then it will learn the patterns particular to the customer,” he added.
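The approach Rosenthal describes can be illustrated with a generic rolling z-score detector: begin with static thresholds, then treat the customer’s own recent readings as the learned norm and flag values that deviate sharply from it. This is a common textbook technique, not Virtual Instruments’ algorithm; the window size and threshold are illustrative.

```python
# Generic anomaly detection via rolling mean/standard deviation (z-score).
# Illustrative only: not Virtual Instruments' ML, but the same idea of
# "spotting anomalies from the norm" via learned per-customer patterns.
from statistics import mean, stdev

def detect_anomalies(samples, window=10, z_threshold=3.0):
    """Flag indices whose value deviates more than z_threshold standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# A steady latency series with one spike at index 12
latencies = [10, 11, 10, 12, 11, 10, 11, 12, 10, 11, 10, 11, 95, 11, 10]
print(detect_anomalies(latencies))  # → [12]
```

Note how the detector needs no fixed threshold on the metric itself: the “norm” is whatever pattern the recent history establishes, which is the sense in which such a system learns the patterns particular to each customer.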
Capacity forecasting is also included, with the ability to track trends in hardware use that will lead to upgrades being required.
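A minimal version of trend-based capacity forecasting fits a straight line to historical utilisation and projects when usage will hit a ceiling. This sketch uses an ordinary least-squares fit under that assumption; it is not Virtual Instruments’ forecasting model, and the figures are invented.

```python
# Illustrative capacity forecast: least-squares linear fit of usage vs. day,
# projecting days until a capacity ceiling is reached. Not Virtual
# Instruments' model; data and ceiling are hypothetical.

def days_until_full(daily_used_tb, capacity_tb):
    """Fit usage = slope*day + intercept; return projected days from the
    last sample until capacity is reached (None if usage is flat/shrinking)."""
    n = len(daily_used_tb)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(daily_used_tb) / n
    slope_num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, daily_used_tb))
    slope_den = sum((x - x_mean) ** 2 for x in xs)
    slope = slope_num / slope_den  # TB of growth per day
    if slope <= 0:
        return None  # no upward trend, no upgrade needed on this data
    intercept = y_mean - slope * x_mean
    day_full = (capacity_tb - intercept) / slope
    return max(0.0, day_full - (n - 1))

# 2 TB/day of growth against a 100 TB array
usage = [60, 62, 64, 66, 68, 70]
print(days_until_full(usage, capacity_tb=100))  # → 15.0
```

Real forecasting tools typically add seasonality and confidence intervals on top of the raw trend, but the core output is the same: a lead time for planning the hardware upgrade.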
Next, Virtual Instruments plans to add Google Cloud Platform monitoring to its existing AWS and Azure capabilities in mid-2019.
It also plans to offer integration with Pure Storage hardware, matching its existing integrations with Dell EMC, NetApp and IBM storage arrays.