GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, analytics, engineering, consumer, and enterprise applications. Pioneered in 2007 by NVIDIA®, GPU accelerators now power energy-efficient datacenters in government labs, universities, enterprises, and small and medium-sized businesses around the world. GPUs are accelerating applications on platforms ranging from cars to mobile phones, tablets, drones, and robots.

HOW GPUs ACCELERATE APPLICATIONS
GPU-accelerated computing offers unprecedented application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user’s perspective, applications simply run significantly faster.

[Figure: How GPU acceleration works]

CPU VERSUS GPU
A simple way to understand the difference between a CPU and a GPU is to compare how they process tasks. A CPU consists of a few cores optimized for serial processing, while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed to handle many tasks simultaneously.

GPUs have thousands of cores to process parallel workloads efficiently



Hundreds of industry-leading applications are already GPU-accelerated. Find out if the applications you use are GPU-accelerated by looking in our application catalog.

– See more at (original text): http://www.nvidia.com/object/what-is-gpu-computing.html

GPGPU Teaching Kit

NVIDIA, in collaboration with the University of Illinois, has been developing a teaching kit to support the teaching and training of personnel in high-performance programming using GPUs.

Now, in the second half of 2016, in collaboration with the Universidade Federal Fluminense, a version subtitled in Portuguese is beginning to be made available.

GPGPU TEACHING KIT

GET STARTED TODAY

There are three basic approaches to adding GPU acceleration to your applications:

Dropping in GPU-optimized libraries
Adding compiler “hints” to auto-parallelize your code
Using extensions to standard languages like C and Fortran

Learning how to use GPUs with the CUDA parallel programming model is easy.

For free online classes and developer resources, visit the CUDA Zone.

VISIT CUDA ZONE


Supported by:

NVIDIA has pioneered visual computing, the art and science of computer graphics.