Nvidia’s next GPUs will be designed partially by AI

At the GTC 2022 conference, Nvidia talked about using artificial intelligence and machine learning to make future graphics cards better than ever.

With the company prioritizing AI and machine learning (ML), some of these improvements are expected to make their way into the upcoming next-gen Ada Lovelace GPUs.

Nvidia’s big plans for AI and ML in next-gen graphics cards were shared by Bill Dally, the company’s chief scientist and senior vice president of research. He spoke about Nvidia’s research and development teams, how they use AI and machine learning (ML), and what this means for next-gen GPUs.

In short, using these technologies can only mean good things for Nvidia graphics cards. Dally discussed four key areas of GPU design and the ways in which AI and ML can dramatically accelerate the design process.

The goal is to increase both speed and efficiency, and one of Dally's examples shows how using AI and ML can cut a typical GPU design task from three hours to just three seconds.

These gains come from optimizing four processes that are normally time-consuming and highly detailed: monitoring and mapping voltage drops, anticipating errors through parasitic prediction, automating standard-cell migration, and addressing various routing problems. Applying artificial intelligence and machine learning to each of these steps can yield significant gains in the final product.

By mapping potential voltage drops, Nvidia can track power flow in next-gen graphics cards. According to Dally, switching from standard tools to specialized AI tools dramatically speeds up this task, as the new technology can complete it in just seconds.

Dally said that using AI and ML to map voltage drops can reach 94% accuracy while vastly increasing the speed at which these tasks are performed.
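Nvidia has not published the model behind this, but the general idea of replacing a slow simulator with a fast learned surrogate can be sketched in a few lines. The data, features, and linear model below are purely illustrative stand-ins, not Nvidia's tooling: we pretend each chip tile has a feature vector, generate "ground truth" voltage drops from a hidden linear rule plus noise, and fit a least-squares surrogate that predicts the drop near-instantly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: per-tile feature vectors for a chip, with the
# "true" voltage drop produced by a hidden linear rule plus measurement noise.
# A real flow would use signoff-tool output; this is synthetic for illustration.
X = rng.uniform(0.0, 1.0, size=(500, 16))         # 500 tiles, 16 features each
true_w = rng.uniform(0.5, 1.5, size=16)
y = X @ true_w + rng.normal(0.0, 0.05, size=500)  # simulated voltage drop (mV)

# Fit a least-squares model as a fast surrogate for the slow simulator.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ w
accuracy = 1.0 - np.abs(pred - y).mean() / y.mean()
print(f"surrogate accuracy: {accuracy:.1%}")
```

Once fitted, the surrogate answers in microseconds what the simulator answers in hours, which is the trade-off Dally describes: a small, quantifiable accuracy cost in exchange for orders-of-magnitude faster iteration.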


Data flow in new chips is an important factor in how well a new graphics card performs. Therefore, Nvidia uses graph neural networks (GNNs) to identify and quickly resolve potential problems in the data flow.
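The article doesn't detail Nvidia's GNN architecture, but the core mechanism of any GNN is message passing: each node in a graph updates its features by aggregating its neighbors' features. As a toy illustration (not Nvidia's model), here is a single message-passing layer over a tiny netlist-like graph, where nodes stand for circuit cells and edges for wires:

```python
import numpy as np

# Adjacency matrix of a hypothetical 4-cell netlist graph.
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

feats = np.eye(4)            # one-hot starting features, one row per node
W = np.full((4, 4), 0.25)    # stand-in for a learned weight matrix

# One message-passing step: average each node's neighbour features
# (degree normalisation keeps high-fanout cells from dominating),
# then apply the weight matrix and a ReLU non-linearity.
deg = adj.sum(axis=1, keepdims=True)
h = np.maximum((adj / deg) @ feats @ W, 0.0)

print(h.shape)  # each node now encodes information about its neighbourhood
```

Stacking several such layers lets each node "see" progressively larger neighborhoods, which is what makes GNNs a natural fit for reasoning about how data flows through a chip's wiring graph.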

Parasitic prediction is another area where Nvidia sees AI delivering improvements: accuracy increases, with simulation errors falling below 10 percent.
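To make the idea concrete: parasitic prediction means estimating a net's parasitic capacitance from cheap layout features before the expensive extraction step is run. The features, formula, and model below are invented for illustration; the point is only that a surrogate fitted to extracted data can keep its relative error well under the 10 percent threshold the article mentions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical layout features for 300 nets (units: micrometres).
n = 300
length = rng.uniform(1, 100, n)
width = rng.uniform(0.05, 0.5, n)
spacing = rng.uniform(0.05, 1.0, n)

# "Golden" capacitance from a pretend extractor: an area term plus a
# coupling term, with a little measurement noise. Purely illustrative.
cap = (0.02 * length * width + 0.01 * length / spacing) \
    * (1 + rng.normal(0.0, 0.02, n))

# Fit a linear surrogate on the two physical terms.
X = np.column_stack([length * width, length / spacing])
coef, *_ = np.linalg.lstsq(X, cap, rcond=None)

rel_err = np.abs(X @ coef - cap) / cap
print(f"mean relative error: {rel_err.mean():.2%}")
```

The design choice mirrors the one in the article: tolerate a small, bounded prediction error in exchange for getting parasitic estimates early enough to guide the design, rather than after a full extraction run.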

Nvidia has also managed to automate the migration of the chip's standard cells, eliminating a great deal of downtime and speeding up the overall task. With this approach, 92% of the cell library was migrated by the tool without errors.

The company plans to focus on AI and machine learning in the future, devoting five of its labs to researching and designing new solutions in those segments. Dally hinted that we could see the first results of these new developments in Nvidia’s new 7nm and 5nm designs, including the upcoming Ada Lovelace GPUs. This was first reported by Wccftech.

It’s no secret that the next generation of graphics cards, often referred to as RTX 4000, will be intensely powerful (with associated power requirements). Using AI and machine learning to advance the development of these GPUs means we may soon have a real powerhouse on our hands.
