Sunday, May 18, 2025

NVIDIA Opens GPU R & D Center for Computation Research in MIMOS Malaysia

By Brandon Teoh

 

The term GPU (Graphics Processing Unit) was popularised by Nvidia in 1999, when it marketed the GeForce 256 as “the world’s first GPU”. In the early days, personal computers usually came with separate graphics cards for graphical processing. Each card carried a microchip to control processing according to its firmware; essentially, the microchip served as the controller of the graphics card, which is why graphics cards are also commonly referred to as graphics controllers.

As the industry demanded ever more graphical processing power, the supply was met with increasingly powerful microchips on board the graphics card. As a result, the industry found that it was possible to harness the considerable computing power of a graphics card for general purpose computing, beyond just processing graphics.

The Nvidia GeForce 256 was the first consumer-level GPU, marketed as “a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second”, putting it on par with specialised industry cards built for specific scenarios such as hardware-accelerated 3D.

In other words, with the Nvidia GeForce 256, the industry arrived at a powerful general purpose graphics card capable of handling 2D, 3D and other complex computer science computations.

As a result, GPU vendors began to realise that they could contribute to general purpose computing by supplementing the computing power of the CPU’s core processor, and this gave birth to GPGPU (general-purpose computing on GPUs).

The core concept of GPU computing is parallel computing. Over the years, vendors have developed different platforms for GPU programming; Nvidia’s CUDA platform was the earliest widely adopted programming model for GPU computing. Other standards include OpenCL, Microsoft DirectX and OpenACC.
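To make the parallel computing idea concrete, here is a minimal sketch (in plain Python, not actual CUDA) of the data-parallel model that platforms like CUDA expose: the same “kernel” function is applied by many threads, each identified by an index. On a real GPU thousands of these run simultaneously; here they are simulated serially on the CPU purely to illustrate the concept.

```python
# Conceptual sketch of the data-parallel "kernel" model used by GPU
# programming platforms such as CUDA. Not real GPU code: the threads
# are simulated with a serial loop.

def vector_add_kernel(idx, a, b, out):
    """One 'thread' of work: add a single pair of elements."""
    out[idx] = a[idx] + b[idx]

def launch(kernel, n, *args):
    """Simulate launching n parallel threads (executed serially here)."""
    for idx in range(n):
        kernel(idx, *args)

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(a)
launch(vector_add_kernel, len(a), a, b, out)
print(out)  # [11.0, 22.0, 33.0, 44.0]
```

Because each element of `out` depends only on its own index, every “thread” is independent, and that independence is exactly what lets a GPU run them all at once.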

In a nutshell, software engineers can now leverage GPGPU to overcome limited computing resources when tackling big data analytics, using lower-end CPUs paired with suitable GPUs. The benefits of using GPGPU are:

  • It is fun
  • It provides more computing resources and hence improves performance
  • It saves costs and generates less heat, because it uses less hardware and more optimised processing
  • GPU vendors provide programming libraries and techniques that are applicable to general purpose computing

In other words, software engineers can now write code that utilises the processing power of the CPU’s microprocessor while also extending that code to tap additional processing power from GPUs, without having to buy new servers.
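A hypothetical sketch of that offloading pattern: the same computation has a CPU path and a GPU path, and the code picks whichever is available. The `gpu_available` flag and `gpu_sum` function here are stand-ins of my own, not a real API; in practice the GPU path would be a CUDA or OpenCL call.

```python
# Hypothetical offloading pattern: use the GPU when available,
# fall back to the CPU otherwise. gpu_sum is a placeholder, not a
# real GPU API call.

def cpu_sum(data):
    return sum(data)

def gpu_sum(data):
    # Placeholder: a real implementation would copy `data` to GPU
    # memory and launch a reduction kernel (e.g. via CUDA).
    return sum(data)

def total(data, gpu_available=False):
    return gpu_sum(data) if gpu_available else cpu_sum(data)

print(total(list(range(1, 101))))  # 5050, on either path
```

The point of the pattern is that the rest of the application does not change: only the dispatch decision does, so the same code base can run on servers with or without GPUs.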

And that is exactly what the R & D center is for: its objective is to promote the dynamic growth of GPU computing in Malaysia. Local activities include collaboration with universities for research, creating CUDA teaching centers at universities, GPGPU workshops, CUDA programming contests, technology road shows and hosting government and industry visits.

 

(L-R) Mr. Thiyagu Letchumanan, Country Manager, Commercial & Public Sector, HP Enterprise Group, HP; Mr. Thilai Raj, CTO, MIMOS; Mr. Eric Chang, Manager, Professional Solution Group for SEA and Taiwan, NVIDIA; Mr. Sean Zhang, Director, Professional Solution Group Marketing, APAC & Japan, NVIDIA; Dr. Simon See, Chief Solution Architect, APAC, NVIDIA at the press conference

Demonstration in the lab conducted by NVIDIA’s engineers

It will also cater to GPU-related activities, such as developing GPU accelerator libraries, R & D in GPGPU-enabled applications and generic algorithms, and GPGPU application benchmarking and testing.

It targets mainly software engineers, architects, CTOs and CEOs who are seeking to take their products to the next level by achieving higher performance benchmarks and reaching new market segments.

MIMOS’s vision for this initiative is to bridge the gap in GPGPU software and hardware engineering. The center is powered by HP infrastructure, including HP ProLiant servers, HP network switches, HP workstations and HP 3PAR storage.

According to Mr. Thiyagu Letchumanan, Country Manager, Commercial & Public Sector, HP Enterprise Group, HP Malaysia, HP believes the next technology wave for high performance computing is GPGPU, which delivers super-scale performance with outstanding efficiency and affordability. And for businesses, especially Malaysian SMEs, GPU computing provides greater opportunities in the marketplace, locally and globally.

Notably, Mr. Thiyagu highlighted that the core idea is to have super-scale computing infrastructure at an affordable price.

In many ways, this initiative challenges other trends such as the unified computing systems evangelised by Oracle (Engineered Systems) and IBM (Expert Systems), as well as cloud computing. The idea behind unified computing systems is for vendors to do everything for consumers, making hardware and software work together at the lowest level to optimise performance. Cloud computing, meanwhile, lets anybody in the industry acquire super-scale computing power through rental at a lower cost.

The way I see it, this initiative offers software engineers and architects a window of opportunity to build their own unified computing system and private cloud infrastructure. Mind you, Oracle’s Engineered Systems cost at least US$500,000. This also reminds me of the story of two humble software engineers who utilised the computing resources at their disposal to build servers hosted in their garage to power a search engine, resulting in the birth of Google.

Mr. Thilai Raj, CTO of MIMOS, explained a scenario in which a SOCSO application that processes 14 million records originally ran on two racks of servers and took an average of 11 hours to complete. Using the GPGPU technique on a single-core-processor CPU server, it achieved the same feat in 45 minutes: roughly 15 times faster, with energy and cost savings to boot.
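It is worth checking the arithmetic behind that claim: 11 hours is 660 minutes, so going from 660 minutes down to 45 gives a speedup of about 14.7x, i.e. roughly 15 times faster.

```python
# Verifying the reported speedup: 11 hours on two racks of CPU
# servers versus 45 minutes with GPGPU on a single server.
baseline_minutes = 11 * 60   # 660 minutes
gpgpu_minutes = 45
speedup = baseline_minutes / gpgpu_minutes
print(round(speedup, 1))  # 14.7
```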

Personally, I salute this initiative, which offers software engineers and architects an alternative path to super-scale computing power, along with more control when it comes to tuning code for higher performance.

To access the GPGPU center in MIMOS, MIMOS will soon make a web site available where interested parties can log in to make appointments to utilise the lab and consult with MIMOS. And to get started with CUDA programming, check out NVIDIA’s developer zone for CUDA at http://developer.nvidia.com/category/zone/cuda-zone

My gut feeling tells me that in the near future, we will see books like ‘Programming GPGPU in 24 hours’ surface in the market – great stuff!
