{"id":307513,"date":"2024-09-03T14:47:58","date_gmt":"2024-09-03T14:47:58","guid":{"rendered":"https:\/\/www.techopedia.com\/?p=307513"},"modified":"2024-09-12T09:52:31","modified_gmt":"2024-09-12T09:52:31","slug":"ibm-ntt-explain-how-ai-works-on-edge-computing","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/ibm-ntt-explain-how-ai-works-on-edge-computing","title":{"rendered":"IBM & NTT Data Explain How AI Works on Edge Computing"},"content":{"rendered":"

AI models with one trillion plus parameters<\/a> are already banging hard at our front door. These new artificial intelligence<\/a> models, with unprecedented computing power, are set to become the norm in the months and years ahead.<\/p>\n

While the technological advancements of new generative AI<\/a> models are promising and expected to benefit most sectors and industries, the world still has a big AI-sized edge-infrastructure problem (we will break this down in a moment).<\/p>\n

How will giant AI models operate on the edge<\/a> of networks while offering low latency<\/a> and real-time services? It’s a ‘shrinking giant’ problem.<\/p>\n

In this report, Techopedia talks with NTT Data and IBM experts about how to deliver fast, resource-intensive artificial intelligence<\/a> on the edge without over-burdening a network.<\/p>\n

\n

Key Takeaways<\/span><\/h2>\n