Artificial intelligence (A.I.) isn’t quite ready to take over the world, but it’s moving a step closer. A recent survey released by Morning Consult and IBM finds that the COVID-19 pandemic, coupled with the continuing need to automate manual tasks (including in cybersecurity), has pushed more organizations to adopt A.I. technologies.

The Global AI Adoption Index 2022 report, which is based on questions sent to approximately 7,500 IT leaders across the globe, finds that 35 percent of organizations surveyed are deploying A.I. today; another 42 percent are “exploring” these technologies.

The study added that A.I. adoption was up four percentage points compared to the previous report, released in 2021. As with many other cutting-edge technologies, COVID-19 played a significant role in increasing A.I. adoption, the survey found, as companies looked to automate certain tasks to overcome tech skill gaps (as well as some of the labor shortages associated with the pandemic).

Among organizations now deploying A.I., the report found that 54 percent of those surveyed use these technologies to automate IT, business or network processes, citing cost savings and increased efficiencies. Another 53 percent report using A.I. to improve IT or network performance, while 48 percent report using it to create better customer experiences.

“A.I. is rapidly providing new benefits and efficiencies to organizations around the world through new automation capabilities, greater ease of use and accessibility, and a wider variety of well-established use cases,” the report noted. “A.I. is both being applied through off-the-shelf solutions like virtual assistants and embedded in existing business operations like IT processes.”

Concerns with A.I. Adoption

While A.I. is allowing organizations to automate tasks and fill some of the gaps left by a workforce shortage, the report added that businesses and government organizations still face numerous barriers to adopting these technologies.

Those hurdles include a limited number of employees with the requisite A.I. skills, expertise or knowledge; a price of adoption that remains too high; a lack of tools or platforms for developing successful A.I. models; projects that are too complex or difficult to integrate and scale; and the sheer amount of data that must be collected, which adds too much complexity.

Several experts also noted that, while A.I. adoption is on the rise, this list of barriers misses a crucial point: the ability to secure A.I. technologies, and the need for skilled engineers who can bake security into these tools and platforms as they are developed.

“A.I. is a specialized sub-discipline of software engineering and, as such, security expertise in that area will lag as it has in other new disciplines such as machine learning, cloud and microservices,” Andrew Hay, the chief operating officer at LARES Consulting, told Dice.

Hay added that many organizations are trying to build security into the DevOps process (an approach commonly called DevSecOps), and the same is likely to happen in the A.I. space, though that approach has a downside.

“As a short-term solution, A.I. engineers will likely be tasked with taking on security responsibilities for their or their peers' code until a dedicated resource is required,” Hay said. “By that time, we'll likely have a small cadre of security specialists that focus almost exclusively on secure A.I. engineering principles and testing—but we're a long way off from that.”
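
To make the DevSecOps approach Hay describes slightly more concrete, here is a minimal, hypothetical sketch of the kind of CI security gate such a process might include. The tool choice (Bandit, an open-source static analyzer for Python) and the source path are illustrative assumptions, not details from Hay or the report.

```python
# A hypothetical CI security gate: run a static-analysis scanner over the
# codebase and fail the build if it reports findings. Bandit is a real
# Python security scanner; the "src/" path is an assumption for this sketch.
import subprocess
import sys

def security_gate(source_dir: str = "src/") -> int:
    """Run Bandit recursively over source_dir and propagate its verdict."""
    result = subprocess.run(
        ["bandit", "-r", source_dir],  # recursively scan the source tree
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    # Bandit exits nonzero when it reports issues; CI treats that as a failed build.
    return result.returncode

if __name__ == "__main__":
    sys.exit(security_gate())
```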

A recent report suggests there are about 600,000 open cybersecurity positions across the U.S. With an increased interest in investing in A.I. technologies, finding skilled cybersecurity professionals to help secure A.I. is going to prove challenging, said John Bambenek, principal threat hunter at security firm Netenrich.

“The shortage of A.I. and machine learning talent is certainly a problem in many fields but there are unique challenges in cybersecurity. We don’t need more data scientists per se—the A.I./ML libraries do much of the heavy lifting for you—we need cybersecurity researchers who have a basic knowledge of A.I./ML,” Bambenek told Dice. “The linear algebra isn’t what protects you. It is the deep knowledge and experience needed to create training data and define features.”
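
Bambenek’s division of labor can be illustrated with a brief, hypothetical sketch: the library (scikit-learn here) handles the modeling “heavy lifting,” while the security-specific work lies in defining features, such as for spotting algorithmically generated domain names. The features, domains and labels below are invented for illustration only.

```python
# A toy sketch of Bambenek's point: the ML library does the modeling,
# while choosing the features requires security domain knowledge.
# All features, domains, and labels here are illustrative assumptions.
import math
from collections import Counter

from sklearn.ensemble import RandomForestClassifier

def entropy(s: str) -> float:
    """Shannon entropy of a string; high values suggest machine-generated names."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def features(domain: str) -> list:
    """Hand-crafted features -- the part that demands cybersecurity expertise."""
    return [
        len(domain),                                       # generated domains tend to run long
        sum(ch.isdigit() for ch in domain) / len(domain),  # ratio of digits
        entropy(domain),                                   # character randomness
    ]

# Hypothetical labeled data: 1 = suspected algorithmically generated, 0 = benign.
domains = ["google", "github", "xk2v9qzr7f", "q8w3hf0zpl"]
labels = [0, 0, 1, 1]

# The library does the "heavy lifting"; the classifier itself is interchangeable.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit([features(d) for d in domains], labels)
print(clf.predict([features("n4x8vq2kd9")]))
```

The three feature lines carry all the domain expertise; swapping in a different classifier changes little, which is why the scarce skill is the security knowledge rather than the data science.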

Bambenek also noted another downside: sophisticated threat actors and nation-state groups are also adopting A.I. technologies, and critical organizations lack the skilled security professionals needed to counter these types of cyber threats.

“With some few exceptions, A.I./ML security tools simply aren’t working due to this lack of talent,” Bambenek added. “Fundamentally, we are also making automated decisions based on data created by criminals who are decades ahead of us on fooling automated systems.”

Security Firm Raises Concerns

Experts note that even sophisticated security firms sometimes have trouble recruiting enough talent to get their own A.I. initiatives off the ground. Oliver Tavakoli, the CTO at Vectra, a San Jose, Calif.-based A.I. cybersecurity company, observed that many of the employees most knowledgeable about A.I. tend to take positions at companies like Google or at financial firms.

This has created a unique talent gap when it comes to developing A.I. tools. “What this has meant is that many cybersecurity companies have resorted to A.I.-as-a-sidecar—solving a small number of peripheral problems through the application of A.I.—rather than A.I.-as-the-engine—building the core of their offerings around A.I. and solving peripheral problems with conventional techniques,” Tavakoli told Dice.

“[Predictably], the former approach has resulted in a large gap in what they deliver versus the value customers think the A.I. should be delivering,” he added.