{"id":145188,"date":"2023-12-29T12:12:51","date_gmt":"2023-12-29T12:12:51","guid":{"rendered":"https:\/\/www.techopedia.com"},"modified":"2023-12-29T12:13:28","modified_gmt":"2023-12-29T12:13:28","slug":"tech-ceos-share-8-top-ai-trends-to-watch-in-2024","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/top-ai-trends","title":{"rendered":"Tech CEOs Share 8 Top AI Trends to Watch in 2024"},"content":{"rendered":"
2023 alone has been a massive year for generative AI, with GPT-4, GPT-4V, Google Bard, PaLM 2, and Google Gemini all launching this year as part of a fast-brewing arms race to automate day-to-day workflows.

To get some insight into what's next, Techopedia asked some of the top CEOs in enterprise tech how they believe AI will impact organizations in 2024 and which AI trends they see emerging. The comments below have been edited for brevity and clarity.

The Rise of AI-Fueled Malware

We will soon see the rise of generative AI-fueled malware that can essentially think and act on its own. This is a threat the U.S. should be particularly concerned about coming from nation-state adversaries.

We will see attack patterns that get more polymorphic, meaning the artificial intelligence (AI) carefully evaluates the target environment, thinks on its own to find the ultimate hole in the network or the best area to exploit, and transforms accordingly.

Rather than having a human crunching code, we will see self-learning probes that can figure out how to exploit vulnerabilities based on changes in their environment.

Patrick Harr, CEO at SlashNext

Passkey Adoption Will Increase

There's a dark side of the AI boom that not many consumers or businesses have realized: cybercriminals are now able to make their phishing attacks more credible, frequent, and sophisticated by leveraging generative AI tools such as WormGPT. As we enter 2024, this threat will grow in size and scale.

Against this backdrop, we'll reach the tipping point for mass passkey adoption (although there will still be a significant period of transition before we reach a truly passwordless future).

In the past year, passkeys became more commonplace across platforms, and Big Tech – from Google to Amazon – implemented various levels of passkey support.

However, passkeys will ultimately surpass passwords as the status quo technology once the consequences of not adopting a more secure, phishing-resistant form of authentication become clear in the wake of increasingly harmful and costly cyberattacks.

John Bennett, CEO at Dashlane

Adding Safeguards to AI Models

Safety and privacy must continue to be a top concern for any tech company, regardless of whether it is AI-focused or not. When it comes to AI, ensuring that the model has the necessary safeguards, feedback loop, and, most importantly, mechanism for highlighting safety concerns is critical.

As organizations continue to rapidly adopt AI in 2024 for its efficiency, productivity, and democratization-of-data benefits, it's important to ensure that as concerns are identified, there is a reporting mechanism to surface them, in the same way a security vulnerability would be identified and reported.

David Gerry, CEO at Bugcrowd

LLMs Will Reshape Cloud Security

In 2024, the evolution of generative AI (GenAI) and large language models (LLMs), initiated in 2023, is poised to redefine the cybersecurity chain, elevating efficiency and minimizing manpower dependencies in cloud security.

One example is detection tools fortified by LLMs. We'll see LLMs bolster log analysis, providing early, accurate, and comprehensive detection of both known and elusive zero-day attacks.
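To make this concrete, here is a minimal sketch of what LLM-assisted log triage might look like. It assumes the openai Python package (v1 or later) and an OPENAI_API_KEY in the environment; the prompt, model choice, and log format are illustrative, not any specific vendor's detection pipeline.

```python
# Minimal sketch: asking an LLM to triage a raw log line.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a security analyst. Classify the log line you are given and "
    'reply with JSON only: {"verdict": "benign" | "suspicious", "reason": "<one sentence>"}'
)

def triage(log_line: str) -> dict:
    """Classify a single log line as benign or suspicious."""
    resp = client.chat.completions.create(
        model="gpt-4",   # any capable chat model works here
        temperature=0,   # keep triage output as deterministic as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": log_line},
        ],
    )
    # A production system would validate this output instead of trusting it.
    return json.loads(resp.choices[0].message.content)

# An odd outbound transfer at 2 a.m. that signature-based rules might miss:
print(triage("2024-01-03T02:14:07Z host=db-7 proc=curl dst=203.0.113.9 bytes_out=48210433"))
```

In practice, a classifier like this would sit behind cheaper pre-filters, since sending every log line to an LLM is slow and expensive.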
The analytical prowess of LLMs will uncover subtle, intricate patterns and anomalies, allowing for the identification and mitigation of complex threats and enhancing the overall security posture.

We're going to see GenAI intensify the sophistication of both cyberattacks and defense mechanisms, necessitating innovative strategies and fostering the creation of agile, responsive security frameworks.

The amalgamation of AI and LLMs will streamline security operations and protocols, enabling professionals to concentrate on strategic analysis and innovation and ensuring robust detection and counteraction of threats, which will fortify the integrity, confidentiality, and availability of information in the cloud.

Chen Burshan, CEO of Skyhawk Security

Data Security 'Risk Reduction' Will Evolve

The concept of 'risk reduction' in data security will evolve in the next few years, in line with the rise in the use of generative AI technologies.

Until recently, organizations implemented data retention and deletion policies to ensure minimal risk to their assets. As GenAI capabilities become more widespread and valuable for organizations, they will become more motivated to hold on to data for as long as possible in order to use it for training and testing these new capabilities.

Data security teams will, therefore, no longer be able to address risk by deleting unnecessary data, since the new business approach will be that any and all data may be needed at some point. This will bring about a change in how organizations perceive, assess, and address risk reduction in data security.

Liat Hayun, CEO and co-founder at Eureka Security
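A hypothetical sketch of the policy shift Hayun describes: instead of a blanket delete-after-N-days rule, old data that is valuable for model training is tiered into a locked-down archive, with risk managed through access controls and masking rather than deletion. All names here (DataAsset, retention_decision) are invented for illustration.

```python
# Sketch: a retention rule that archives training-worthy data instead of deleting it.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DataAsset:
    name: str
    created: datetime
    contains_pii: bool
    useful_for_training: bool

def retention_decision(asset: DataAsset, max_age_days: int = 365) -> str:
    """Old policy: delete anything past max_age_days.
    New policy: aged data that is useful for GenAI training moves to a
    locked-down archive, so risk is handled with access controls, not deletion."""
    age = datetime.now(timezone.utc) - asset.created
    if age <= timedelta(days=max_age_days):
        return "retain"
    if asset.useful_for_training:
        # PII must be masked before it can feed training or test pipelines.
        return "archive-masked" if asset.contains_pii else "archive"
    return "delete"

print(retention_decision(DataAsset(
    name="support-tickets-2022",
    created=datetime(2022, 6, 1, tzinfo=timezone.utc),
    contains_pii=True,
    useful_for_training=True,
)))  # -> archive-masked
```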
An Erosion of Trust Surrounding AI Decision-Making

In a rapidly evolving technological landscape, the parallels between the adoption of cloud services and the current surge in artificial intelligence (AI) implementation are both striking and cautionary.

Just as organizations eagerly embraced cloud solutions for their transformative potential, the haste of adoption outpaced the development of robust security controls and compliance tools.

Consequently, this created vulnerabilities that malicious actors were quick to exploit, leaving enterprises grappling with unforeseen challenges.

As we witness a similar trajectory in the adoption of AI technologies, it becomes imperative to draw lessons from the past and proactively address the looming concerns. The rapid integration of AI into various facets of business operations is undeniably transformative, but the lack of comprehensive visibility and enterprise control raises red flags.

Much like in the early days of cloud adoption, organizations are navigating uncharted territories with AI, often without the necessary safeguards in place. The consequences of insufficient controls are twofold: first, a heightened risk of security breaches, and second, a potential erosion of trust as stakeholders question the ethical implications and transparency surrounding AI decision-making.

Varun Badhwar, CEO and co-founder at Endor Labs

Developers Will Be More Efficient

This is a two-pronged topic for leadership to really think about in 2024. On one hand, CISOs and IT leaders need to think about how we're going to securely consume generative AI into our own source code "kingdoms" within the enterprise.

With the likes of Copilot and ChatGPT, developers and organizations will be a lot more efficient, but these tools also introduce more risk of potential vulnerabilities we need to worry about.

On the other side, we need to think about how application security vendors will allow CISOs and IT leadership to leverage generative AI in their tools to run their programs more efficiently and drive productivity, using AI to speed up security outcomes like security policy generation, identifying patterns and anomalies, finding and prioritizing vulnerabilities a lot faster, and assisting with the incident response process.

Lior Levy, CEO and co-founder at Cycode

Video Generation Goes Mainstream

Over the past year, video generative models (text-to-video, image-to-video, video-to-video) became publicly available for the first time.

In 2024, we'll see the quality, generality, and controllability of those models continue to improve rapidly, and we'll end the year with a non-trivial percentage of video content on the internet incorporating them in some capacity.

Additionally, as large models become faster to run and we develop more structured ways of controlling them, we'll start to see more kinds of novel interfaces and products emerge around them that go beyond the standard prompt-to-X or chat assistant paradigms.

Much of the focus of conversation over the past year has been on the capabilities of individual networks trained end-to-end. In practice, however, AI systems deployed in real-world settings are usually powered by a pipeline of models, and more frameworks will appear for building modular AI systems.
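A toy sketch of that pipeline pattern, with plain functions standing in for the individual models (every name and stage here is hypothetical): a moderation step, a prompt-rewriting step, and a generation step composed into one swappable chain.

```python
# Sketch of the "pipeline of models" pattern; each stub stands in for a real model.
from typing import Callable

Step = Callable[[dict], dict]

def moderate(ctx: dict) -> dict:
    # A safety model would score the prompt; this is a toy keyword check.
    ctx["safe"] = "attack" not in ctx["prompt"].lower()
    return ctx

def expand_prompt(ctx: dict) -> dict:
    # An LLM would rewrite the prompt with richer visual detail.
    if ctx["safe"]:
        ctx["prompt"] = f"{ctx['prompt']}, cinematic lighting, 24fps"
    return ctx

def generate_video(ctx: dict) -> dict:
    # Stand-in for a text-to-video model producing frames.
    if ctx["safe"]:
        ctx["video"] = f"<frames for: {ctx['prompt']}>"
    return ctx

def run_pipeline(prompt: str, steps: list[Step]) -> dict:
    ctx = {"prompt": prompt}
    for step in steps:  # each stage is independently swappable: the point of modularity
        ctx = step(ctx)
    return ctx

print(run_pipeline("a drone shot of a harbor at dawn",
                   [moderate, expand_prompt, generate_video]))
```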
We will also see AI powering research. While LLM code assistants like Copilot have seen wide adoption, there hasn't been much tooling that targets speeding up AI research workflows specifically, for example, automating the repetitive work involved in developing and debugging model code, training and evaluating models, and so on. We'll likely see more of those tools emerge in the coming year.