{"id":50023,"date":"2022-04-13T00:00:00","date_gmt":"2022-04-13T00:00:00","guid":{"rendered":"https:\/\/www.techopedia.com\/6-scary-things-ai-is-getting-better-at-doing\/"},"modified":"2023-08-29T09:29:13","modified_gmt":"2023-08-29T09:29:13","slug":"6-scary-things-ai-is-getting-better-at-doing","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/6-scary-things-ai-is-getting-better-at-doing\/2\/33716","title":{"rendered":"6 (Scary) Things AI Is Getting Better at Doing"},"content":{"rendered":"
New AI tech is constantly being introduced. And just as constantly, warnings and frantic concerns are raised. Sometimes the concerns may seem overwrought or an offshoot of conspiracy theories. Other times, they may be warranted.
It reminds us of an old joke on Tumblr:

"Tech Enthusiasts: Everything in my house is wired to the Internet […]"

As technologies are accepted and become commonplace, the concerns dissipate and are replaced by new things that at first seem weird or spooky, torn straight from the pages of a sci-fi novel.

Think of how self-driving vehicles keep getting pushed as the way of the future, while some people close to the industry suggest that the technology is not even remotely ready for prime time. That's the main point of concern for people who understand the obvious safety risks involved!

Here are six AI technologies that seem most ominous to people surveyed by researchers over the past few months – self-driving vehicles included.

1) The Smart Pillow

New types of bedding and high-tech pillows are offering to do more for us at our most vulnerable – while we're asleep. Nothing scary about that, right?

In some ways, it's intuitive to use new AI technology to improve on things like CPAP or BiPAP machines. A lot of people suffer from sleep apnea or other sleep conditions, so why not apply AI to the medical science behind their treatment?

Well, to some people, including quite a few fans of dark humor, the idea of machines watching you sleep is just outright creepy. Take a look at smart pillows, for example, that will gently nudge your head in different directions and can be connected to your smartphone.

As long as their careful solicitations are doing you good, everything is cool. But what if the pillow starts doing things that you wouldn't sign off on if you were awake?

Apply that concern to any technology we use to monitor or assist us in our sleep!

2) AI and Simulated Pain

Much has been made of the application of AI to pain management, but what about the opposite – using AI to simulate pain through a person's central nervous system?

If you're wondering where this would be applicable to commerce and industry, it's the gaming market. We're getting closer to full virtual reality gaming, where people are running around in virtual environments. So some companies are starting to pioneer things like direct heat applications and certain types of impact feedback that cause a physical response when the player gets shot or stabbed during gameplay – things that are likely to happen in a whole host of modern shoot-em-up games.

If you're the speculative type, you can probably see where this is going. There are lots of ways these technologies could go overboard and lead to some pretty scary and nefarious AI applications. (Read also: 5 Ways Virtual Reality Will Augment Web3)

3) Self-Driving Vehicles

Here's where we get back to that overriding concern about having a computer drive your car.

Driving a car is not a simple job. We talk about the ability of self-driving vehicles to navigate the streets, but we tend to gloss over a lot of the intuitive and instinctive parts of the human task of driving.

Watch this video of drivers using full Tesla Autopilot on Boston streets (not without human intervention!) and you'll see why many of these self-driving technologies are sadly behind the game when it comes to actually guiding a vehicle safely through traffic.

It only takes one sensor failure or another glitch to cause a fatality, and that's one reason we won't be using fully self-driving vehicles anytime soon, especially not on roads where you would normally encounter pedestrians.
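To make that sensor-failure point concrete, here is a minimal, purely hypothetical sketch of the kind of fallback logic a driving system needs: if any sensor stops reporting, or the sensors disagree too much, control goes back to the human. The sensor names, thresholds and readings are illustrative assumptions, not any vendor's actual autopilot code.

```python
# Hypothetical illustration only -- not any vendor's actual autopilot logic.
# A self-driving stack has to decide what to do when one sensor stops
# reporting or disagrees with the others. This sketch hands control back
# to a human the moment that happens.

from dataclasses import dataclass
from typing import Optional


@dataclass
class SensorReading:
    name: str
    obstacle_distance_m: Optional[float]  # None means the sensor returned nothing


def choose_driving_mode(readings: list[SensorReading],
                        max_disagreement_m: float = 2.0) -> str:
    """Return 'autonomous' only when every sensor reports and they roughly agree."""
    distances = [r.obstacle_distance_m for r in readings]

    # Any missing reading is treated as a failure: fall back immediately.
    if any(d is None for d in distances):
        return "handover_to_human"

    # Large disagreement between sensors suggests a glitch or a blind spot.
    if max(distances) - min(distances) > max_disagreement_m:
        return "handover_to_human"

    return "autonomous"


if __name__ == "__main__":
    readings = [
        SensorReading("lidar", 14.8),
        SensorReading("radar", 15.1),
        SensorReading("camera", None),  # one failed sensor is enough
    ]
    print(choose_driving_mode(readings))  # -> handover_to_human
```

Even this toy version shows where the difficulty lies: the safety of the whole system hinges on correctly detecting its own failures in real time, with a human ready to take over.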
Some experts suggest that highway cargo delivery will come first, but even that assumes a level of safety that we may not yet have in today's AI. (Read: Hacking Autonomous Vehicles: Is This Why We Don't Have Self-Driving Cars Yet?)

4) Computer Chip Implants

Concerns about internal microchips are as old as computer technology itself. Many of them are based on something even older: the biblical "mark of the beast" that gets forcibly implanted under your skin.

Aside from that, though, people have other, more prosaic fears about having chip implants in their bodies, especially for cognitive purposes. A Pew Research Center study shows that, when respondents were presented with a range of issues, internal computer chips were far and away their biggest concern – the "most scary" AI technology of the bunch.

5) Weapons Technology

Here's one that's a little different, where AI simply gives humans the ability to do bad things.

A Verge story recently profiled a situation in which an AI program was able to suggest no fewer than 40,000 different types of chemical weapons within six hours.

The issue here isn't that AI would do something threatening or dangerous to humanity. It's that it gives human bad actors the keys to do those bad things themselves.

AI applications to weapons, as a rule, make those weapons more powerful – and weapons, to anybody with a lick of common sense, are just pretty scary in general!

So these types of applications are on the radar for people who believe that AI needs to be harnessed for good instead of dangerous ends. (Read also: Is Blockchain the Solution to Gun Control?)

6) Big Equipment

Some people are scared of self-driving tractors, and others would give a wide berth to a trash compactor that seems to be doing its work without any human management or intervention.

Big equipment, people feel, should be controlled by humans and not some computer algorithm.

In this and many other ways, concerns about AI come down to the combination of non-human cognitive systems and big physical pieces of hardware.

The Bottom Line

As long as AI's work is going on in cyberspace, we feel like the technology is more contained. Is that a false sense of security? In some cases, yes, and in other cases, no. These examples are just the tip of the iceberg when it comes to "scary" AI. Other reports include more intangible terrors, like machines that can read your mind!

The drive toward explainable and transparent AI is part of the response to these and other scary situations. By keeping a human in the loop, and promoting trusted AI that doesn't rely on black-box algorithms, we're trying to make sure we stay confident about where new technology is going. And that's going to make all the difference in how we experience technology in the future!
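As a rough illustration of what "human-in-the-loop" can mean in practice, here is a minimal sketch in which a model's low-confidence decisions are routed to a human reviewer instead of being acted on automatically. The model, confidence threshold and reviewer callback are all hypothetical stand-ins, not a reference to any particular product or framework.

```python
# Hypothetical human-in-the-loop gate: low-confidence model decisions are
# routed to a person instead of being acted on automatically. The model,
# threshold and review step here are illustrative assumptions.

from typing import Callable, NamedTuple


class Decision(NamedTuple):
    action: str
    confidence: float  # 0.0 - 1.0, reported by the model


def gated_decision(model: Callable[[dict], Decision],
                   features: dict,
                   ask_human: Callable[[Decision], str],
                   min_confidence: float = 0.9) -> str:
    """Act on the model only when it is confident; otherwise defer to a human."""
    decision = model(features)
    if decision.confidence < min_confidence:
        # A person sees the proposed action and makes the final call.
        return ask_human(decision)
    return decision.action


if __name__ == "__main__":
    fake_model = lambda features: Decision(action="approve", confidence=0.62)
    human_reviewer = lambda decision: f"human reviewed: {decision.action} -> deny"
    print(gated_decision(fake_model, {"amount": 1200}, human_reviewer))
```

The key design choice is that the system defaults to deferring: the automation only acts on its own when it can clearly justify doing so, which is exactly the kind of containment the explainable-AI push is after.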