{"id":205146,"date":"2024-03-19T16:57:18","date_gmt":"2024-03-19T16:57:18","guid":{"rendered":"https:\/\/www.techopedia.com\/?post_type=news&p=205146"},"modified":"2024-03-19T16:57:18","modified_gmt":"2024-03-19T16:57:18","slug":"how-does-nvidias-blackwell-chip-make-ai-development-more-cost-effective-and-eco-friendly","status":"publish","type":"news","link":"https:\/\/www.techopedia.com\/news\/why-nvidia-blackwell-chip-matters","title":{"rendered":"How Does Nvidia\u2019s Blackwell Chip Make AI Development More Cost-Effective and Eco-Friendly?"},"content":{"rendered":"
Earlier this week, at the annual Nvidia GTC conference at the San Jose Convention Center in California, Nvidia announced the release of the Blackwell B200 GPU \u2013 \u201cthe world\u2019s most powerful chip.\u201d<\/p>\n
Blackwell GPUs come with 208 billion transistors<\/a> and enable organizations to train and run generative AI<\/a> models at up to 25x lower overall cost and energy consumption than Nvidia\u2019s H100 series of chips.<\/p>\n This computationally efficient chip is designed to support trillion-parameter<\/a> large language models<\/a> (LLMs), which are integral in allowing multimodal<\/a> models to train on, process, and respond to data in formats including text, image, audio, and video.<\/p>\n For context, GPT-4<\/a>, arguably the most powerful LLM currently on the market, is rumored to have 1.7 trillion parameters.<\/a><\/p>\n