Tuesday, August 7, 2018

What’s the Difference Between a CPU and a GPU?

The CPU (central processing unit) has often been called the brains of the PC. But increasingly, that brain is being enhanced by another part of the PC – the GPU (graphics processing unit), which is its soul.

All PCs have chips that render display images to their monitors, but not all of these chips are created equal. Intel’s integrated graphics controller, for example, provides basic graphics that can handle only productivity applications like Microsoft PowerPoint, low-resolution video, and basic games.

The GPU is in a class by itself: it goes far beyond basic graphics controller functions and is a programmable, powerful computational device in its own right.


The First GPU

The first company to develop the GPU was NVIDIA, which introduced the GeForce 256 in 1999. The GeForce 256 was capable of billions of calculations per second, could process a minimum of 10 million polygons per second, and had over 22 million transistors, compared to the 9 million found on the Pentium III. Its workstation version, the Quadro, designed for CAD applications, could process over 200 billion operations per second and deliver up to 17 million triangles per second.

History of GPU Computing

Graphics chips started as fixed-function graphics pipelines. Over the years, these graphics chips became increasingly programmable, which led NVIDIA to introduce the first GPU. In the 1999-2000 timeframe, computer scientists, along with researchers in fields such as medical imaging and electromagnetics, started using GPUs to accelerate a range of scientific applications. This was the advent of the movement called GPGPU, or General Purpose GPU computing.

The challenge was that GPGPU required the use of graphics programming languages like OpenGL and Cg to program the GPU. Developers had to make their scientific applications look like graphics applications, mapping them onto problems that drew triangles and polygons. This limited access to the tremendous performance of GPUs for scientific work.

NVIDIA realized the potential of bringing this performance to the larger scientific community and invested in modifying the GPU to make it fully programmable for scientific applications. It also added support for high-level languages like C, C++, and Fortran. This led to the CUDA parallel computing platform for the GPU.
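
To make that shift concrete, below is a minimal sketch of what CUDA C code looks like (the vector-addition kernel is an illustrative example, not taken from NVIDIA’s materials): rather than disguising a computation as triangles to be drawn, the developer writes an ordinary C function and launches it across thousands of GPU threads.

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements -- the parallelism comes
// from launching many threads, not from writing a loop.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // one million elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory (CUDA 6+) keeps this sketch short; earlier code
    // used explicit cudaMalloc/cudaMemcpy between host and device.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);         // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Compiled with nvcc, a single source file like this contains both the host (CPU) code and the device (GPU) kernel, which is what finally let scientists program the GPU without learning graphics APIs.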


GPU-Accelerated Computing Goes Mainstream
GPU-accelerated computing has now grown into a mainstream movement supported by the latest operating systems from Apple (with OpenCL) and Microsoft (using DirectCompute). The reason for the wide and mainstream acceptance is that the GPU is a computational powerhouse, and its capabilities are growing faster than those of the x86 CPU.

In today’s PC, the GPU can now take on many multimedia tasks, such as accelerating Adobe Flash video, transcoding (translating) video between different formats, image recognition, and virus pattern matching. More and more, the really hard problems to solve are those with an inherent parallel nature: video processing, image analysis, and signal processing.
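
As an illustration of that inherent parallelism, consider converting an image to grayscale, sketched below in CUDA (a hypothetical example, not from the article): every pixel can be computed independently of every other, so one GPU thread can handle one pixel.

#include <cstdio>
#include <cuda_runtime.h>

// One thread per pixel: x and y come from a 2-D grid of thread blocks.
__global__ void rgbToGray(const unsigned char *rgb, unsigned char *gray,
                          int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    // Standard luminance weights (ITU-R BT.601).
    gray[idx] = (unsigned char)(0.299f * rgb[3 * idx + 0] +
                                0.587f * rgb[3 * idx + 1] +
                                0.114f * rgb[3 * idx + 2]);
}

int main() {
    const int w = 1920, h = 1080;        // a synthetic 1080p image
    unsigned char *rgb, *gray;
    cudaMallocManaged(&rgb, 3 * w * h);
    cudaMallocManaged(&gray, w * h);

    for (int i = 0; i < 3 * w * h; ++i) rgb[i] = (unsigned char)(i % 256);

    // Tile 16x16-thread blocks across the whole image.
    dim3 threads(16, 16);
    dim3 blocks((w + 15) / 16, (h + 15) / 16);
    rgbToGray<<<blocks, threads>>>(rgb, gray, w, h);
    cudaDeviceSynchronize();

    printf("gray[0] = %d\n", gray[0]);
    cudaFree(rgb); cudaFree(gray);
    return 0;
}

The same one-thread-per-element pattern underlies most of the multimedia workloads listed above, which is why they map so naturally onto the GPU’s many cores.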


Pairing a CPU with a GPU can deliver the best balance of system performance, price, and power.