Organizations are rapidly shifting to a far more decentralized IT framework — one in which the traditional data center is augmented with a variety of cloud platforms and services, with extra computing resources pushed to the network edge in proximity to various connected devices and sensors. While this decentralization is essential for creating the flexibility and agility demanded by digital business, it does present management challenges.
High-volume, dynamic workloads traversing on-premises, cloud and edge resources create management complexity beyond the scale of manual controls or monitoring tools. Artificial intelligence (AI) solutions can offset this complexity. AI-powered software tools can automatically orchestrate workloads to ensure that applications continuously get the resources needed to perform at peak efficiency and at the lowest possible cost — without human intervention.
Gartner Research says AI-powered management is a necessary component of an “intelligent data center” that combines architecture with algorithms in a self-optimizing and self-organizing system that produces “an aggregated IQ greater than the sum of its parts.” These systems can detect performance degradations or broken paths and either repair them or route around them.
Turbonomic is advancing the management capabilities of the decentralized data center with the latest version of its workload automation software. Turbonomic v6.1 features an AI-powered decision engine that analyzes performance, cost and compliance data across the entire IT stack — from applications through virtualization/cloud to the physical layers — and generates trustworthy management decisions much faster than ever before.
In as little as 30 minutes, the Turbonomic software can assess workloads and determine how efficiently they're provisioned on premises or in the cloud. Organizations also can use Turbonomic to right-size on-premises workloads before moving them to the public cloud to ensure they’re not overpaying. Once in the cloud, Turbonomic can continually optimize those workloads.
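The right-sizing idea can be illustrated with a simple heuristic: compare provisioned capacity against observed peak utilization, then recommend a smaller allocation with a safety margin. This is a hypothetical sketch for illustration only — the function name, inputs and 25 percent headroom figure are assumptions, not Turbonomic's actual algorithm.

```python
import math

def rightsize(provisioned_vcpus: int, peak_utilization: float,
              headroom: float = 0.25) -> int:
    """Recommend a vCPU count for a workload, given its observed peak
    utilization (as a fraction of provisioned capacity) plus a safety
    headroom. Hypothetical heuristic, not a vendor algorithm."""
    needed = provisioned_vcpus * peak_utilization * (1 + headroom)
    # Round up to the next whole vCPU, never below 1.
    return max(1, math.ceil(needed))

# A VM provisioned with 8 vCPUs that peaks at 30% utilization
# only needs about 3 vCPUs even with headroom included.
print(rightsize(8, 0.30))  # -> 3
```

In practice, a rightsizing analysis like this is what keeps organizations from lifting over-provisioned VMs into the cloud and paying for capacity they never use.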
Workload self-management features provide placement, scaling and provisioning actions across hosts, clusters, datastores, data centers, and cloud provider regions and zones. This virtually eliminates queuing, latency and I/O contention to deliver the Quality of Service (QoS) required by a distributed workforce. A third-party validation study found that Turbonomic improves application performance by 30 percent or more by removing these performance penalties.
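A placement decision of this kind can be sketched as a scoring pass over candidate hosts: skip any host that the new workload would push past a contention threshold, then pick the least-loaded of the rest. The host names, capacities and the 85 percent threshold below are invented for illustration; Turbonomic's decision engine is far more sophisticated.

```python
def place(workload_demand: float, hosts: dict, threshold: float = 0.85):
    """hosts maps a host name to (used, capacity). Return the name of the
    best candidate host, or None if every host would exceed the threshold.
    Illustrative sketch only, not a vendor algorithm."""
    best, best_util = None, None
    for name, (used, capacity) in hosts.items():
        util = (used + workload_demand) / capacity
        if util > threshold:
            continue  # placing here would risk contention; route around it
        if best_util is None or util < best_util:
            best, best_util = name, util
    return best

# Hypothetical cluster: host-a would hit 90% (over threshold),
# host-c is already saturated, so host-b wins at 60%.
hosts = {"host-a": (70, 100), "host-b": (40, 100), "host-c": (90, 100)}
print(place(20, hosts))  # -> host-b
```

The key property is that placement is driven by projected post-placement utilization rather than current load alone, which is what prevents queuing and I/O contention from appearing after the move.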
One-click integration with the Cisco AppDynamics application performance management tool contributes to that performance boost. This allows Turbonomic to dynamically discover the application topology, dependencies and real-time performance data.
Additionally, Turbonomic’s QoS Adherence feature allows operators and application teams to define QoS levels — response time or transaction throughput — for any application. The platform then continuously controls the environment to deliver the specified QoS levels while taking cost and business policies into consideration.
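The control loop behind a QoS-adherence feature can be sketched as a simple feedback rule: scale out when the measured metric violates the target, and scale back in to reclaim cost only when the service is comfortably under target. The thresholds and action names below are assumptions chosen for illustration, not Turbonomic's implementation.

```python
def qos_action(response_ms: float, target_ms: float,
               replicas: int, min_replicas: int = 1):
    """Return a (action, new_replica_count) tuple based on a measured
    response time versus an operator-defined QoS target.
    Illustrative sketch only."""
    if response_ms > target_ms:
        return ("scale_out", replicas + 1)   # restore QoS first
    if response_ms < 0.5 * target_ms and replicas > min_replicas:
        return ("scale_in", replicas - 1)    # well under target: reclaim cost
    return ("hold", replicas)

print(qos_action(350, 200, 2))  # -> ('scale_out', 3)
print(qos_action(80, 200, 3))   # -> ('scale_in', 2)
```

Note the asymmetry: the loop reacts immediately to a QoS violation but scales in only with a wide margin, reflecting the balance between delivering the specified service level and minimizing cost.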
For decades, IT has been built upon centralized infrastructure in which all computing resources are consolidated and delivered from a central location. The need to improve agility with geographically dispersed resources is driving organizations to adopt a decentralized framework. However, this requires a new approach to management and resource allocation. AI-powered tools such as Turbonomic enable new levels of automation that make IT management easier and more reliable.