{"id":190956,"date":"2024-02-26T12:34:57","date_gmt":"2024-02-26T12:34:57","guid":{"rendered":"https:\/\/www.techopedia.com\/?p=190956"},"modified":"2024-02-27T16:14:11","modified_gmt":"2024-02-27T16:14:11","slug":"how-nvidia-chat-with-rtx-brings-llms-as-an-operating-system-closer","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/exploring-nvidia-chat-with-rtx-as-an-offline-llm","title":{"rendered":"How ‘Nvidia Chat with RTX’ Brings ‘LLMs as an Operating System’ Closer"},"content":{"rendered":"
Earlier this month, Nvidia released Chat with RTX, a free-to-download, generative AI-powered chatbot that users can interact with and customize as long as they have a relatively affordable GPU in their desktop.

Users can query the chatbot to locate content stored locally in .txt, .pdf, .doc, .docx, and .xml files, which can then be connected to open-source language models such as Llama 2 and Mistral.

What is notable about this approach is that it brings a virtual assistant directly onto the user’s local device, without reaching out to online server farms.

In its announcement, Nvidia said: “Since Chat with RTX runs locally on Windows RTX PCs and workstations, the provided results are fast — and the user’s data stays on the device.

“Rather than relying on cloud-based LLM services, Chat with RTX lets users process sensitive data on a local PC without the need to share it with a third party or have an internet connection.”
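To make the idea concrete, here is a minimal sketch of the general pattern the article describes: retrieving relevant text from local files and feeding it to a locally run open-source model. This is not Nvidia’s implementation, and the model path, folder name, and example question are placeholders; it assumes the llama-cpp-python package and a Llama 2 model file already downloaded to disk.

```python
# Illustrative sketch only: NOT Nvidia's code, just the local
# retrieval-plus-LLM pattern described above. Assumes llama-cpp-python
# (pip install llama-cpp-python) and a local GGUF model file.
from pathlib import Path

from llama_cpp import Llama

def load_chunks(folder: str, size: int = 800) -> list[str]:
    """Split every local .txt file into fixed-size text chunks."""
    chunks = []
    for f in Path(folder).glob("*.txt"):
        text = f.read_text(errors="ignore")
        chunks += [text[i:i + size] for i in range(0, len(text), size)]
    return chunks

def top_chunk(question: str, chunks: list[str]) -> str:
    """Naive retrieval: pick the chunk sharing the most words with the query."""
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

# Placeholder paths and question; everything below runs on the local machine,
# so no document text or query leaves the device.
llm = Llama(model_path="models/llama-2-7b-chat.Q4_K_M.gguf")
chunks = load_chunks("my_documents")
question = "What does the Q3 report say about revenue?"
context = top_chunk(question, chunks)

out = llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:", max_tokens=200)
print(out["choices"][0]["text"])
```

A production tool like Chat with RTX would use proper embeddings and GPU-accelerated inference rather than keyword overlap, but the privacy property is the same: retrieval and generation both happen on the user’s own hardware.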
Key Takeaways