Artificial intelligence (AI) has rapidly become a mainstream technology, applied to complex challenges ranging from financial analysis and medical diagnostics to speech recognition and self-driving cars. New research from the McKinsey Global Institute estimates that AI techniques have the potential to create as much as $5.8 trillion in value annually across multiple industries.
Despite this growth, AI remains a somewhat misunderstood technology. It isn't a single thing but an umbrella term for a family of technologies that give machines the ability to analyze data sets, identify patterns and make autonomous decisions, eliminating the need for programmers to write code for every function.
Machine learning and deep learning are the two predominant subsets of AI. While they are closely related, there are significant differences. Machine learning refers to the use of algorithms that "learn," producing better predictions as they are exposed to more data. This is how Netflix and Amazon predict your viewing and shopping preferences.
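To make that "more data, better predictions" idea concrete, here is a minimal sketch in Python using scikit-learn (our choice of library; the article names none), training a simple classifier on progressively larger slices of a dataset:

```python
# Minimal sketch: a model's predictions improve as it sees more data.
# scikit-learn is our assumption; the article does not name a library.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on progressively larger slices and watch test accuracy climb.
for n in (50, 200, 800, len(X_train)):
    model = KNeighborsClassifier().fit(X_train[:n], y_train[:n])
    print(f"{n:5d} training examples -> accuracy {model.score(X_test, y_test):.2f}")
```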
Deep learning has been referred to as machine learning on steroids. It is designed to loosely mimic the way the human brain works with neurons and synapses, using a hierarchical system of so-called "artificial neural networks": large numbers of highly interconnected nodes working in unison to analyze large datasets. This gives a machine the ability to discover patterns or trends and learn from those discoveries.
The key difference is how much human involvement each approach requires. Traditional machine learning models often need programmers to hand-select the features the algorithm should weigh, then label and load datasets for analysis. Artificial neural networks, by contrast, can be defined in just a few lines of code and "infer" useful representations directly from the raw data they are exposed to, as the sketch below illustrates.
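Here is a minimal neural network sketch in PyTorch (the framework is our assumption, as are the placeholder random tensors standing in for real images and labels). Note how few lines it takes to define the hierarchy of interconnected layers:

```python
# Minimal sketch of a small artificial neural network in PyTorch.
# The network adjusts its own weights from raw data rather than
# following hand-coded rules.
import torch
import torch.nn as nn

model = nn.Sequential(          # a hierarchy of interconnected layers
    nn.Linear(28 * 28, 128),    # input layer: raw pixels in
    nn.ReLU(),
    nn.Linear(128, 10),         # output layer: one score per class
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# One training step on a placeholder batch (random tensors stand in
# for a real labeled dataset in this sketch).
images = torch.randn(64, 28 * 28)
labels = torch.randint(0, 10, (64,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()                 # gradients tell each node how to adjust
optimizer.step()
print(f"loss after one step: {loss.item():.3f}")
```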
Deep learning is particularly useful for image and speech recognition tasks, which are important building blocks for autonomous cars, disease diagnosis, fraud detection and many other applications. Such tasks require significant amounts of computational power, and Nvidia makes that possible with its graphics-processing unit (GPU) architecture.
Although GPUs were originally designed for gaming applications, Nvidia pioneered their use in massively parallel processing environments designed to make compute-intensive programs run faster. A CPU has a few cores with lots of cache memory that can handle a few software threads at a time; a GPU has thousands of smaller cores that can handle thousands of threads simultaneously. GPU-accelerated computing can run some software up to 100 times faster than a CPU alone.
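That gap can be sanity-checked with a rough timing sketch (again in PyTorch, our assumption; actual speedups depend heavily on the workload, and the 100x figure applies only to highly parallel programs):

```python
# Rough timing sketch of the CPU-vs-GPU gap for one parallel-friendly
# workload (a large matrix multiply). Speedups vary widely by program.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    a @ b                                    # warm-up (and CUDA context init)
    if device == "cuda":
        torch.cuda.synchronize()             # GPU kernels run asynchronously
    start = time.perf_counter()
    a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```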
Nvidia's Volta GPU architecture features more than 21 billion transistors and, by the company's estimate, delivers the deep learning performance of 100 CPUs. The company's first Volta-based processor, the Tesla V100 data center GPU, is built to drive the next wave of advancement in AI. It features 640 tensor cores, a new type of core explicitly designed to accelerate AI workloads. These cores can deliver up to 120 teraflops of mixed-precision processing power, making the V100 the first GPU to break the 100-teraflops barrier for deep learning performance.
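One hedged illustration of what tensor cores accelerate: on Volta-class GPUs, matrix math issued in half precision (FP16) is eligible for tensor-core execution, which is where the 120-teraflops figure comes from. A minimal PyTorch sketch, assuming a CUDA-capable GPU is available:

```python
# Sketch: half-precision (FP16) matrix math is the kind of work
# Tensor Cores accelerate on Volta-class GPUs like the Tesla V100.
import torch

if torch.cuda.is_available():
    a = torch.randn(4096, 4096, device="cuda", dtype=torch.half)
    b = torch.randn(4096, 4096, device="cuda", dtype=torch.half)
    c = a @ b                 # FP16 matmul: eligible to run on Tensor Cores
    print(c.dtype, c.shape)   # torch.float16 torch.Size([4096, 4096])
```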
GPU-accelerated servers are now making their way into data centers as organizations look to capitalize on the performance and efficiency of neural networks. Nvidia says GPU-accelerated servers can cut data center costs by up to 70 percent by replacing multiple racks of CPU servers, freeing up precious rack space and reducing energy and cooling requirements. Organizations should keep a close eye on Nvidia's developments and consider how AI can create value in their operations.
May 8, 2018