A.I. Getting Mixed Reviews from Ethical Hackers, Tech Pros

Over the past six months, headlines about recent advances in artificial intelligence, especially following the release of OpenAI’s ChatGPT and Google Bard, have focused on these tools’ potential to disrupt various industries and change how tech and cybersecurity professionals approach their work.

One question these headlines and stories don’t fully answer: What do these tech professionals and other enthusiasts think about A.I. and the role the technology will play?

A recent survey released by Bugcrowd, a crowdsourced security company, offers fresh insights into how ethical hackers, bug-bounty hunters and other tech pros think about A.I. and where they see benefits and shortfalls.

“Although A.I. chatbots are certainly in vogue, they aren’t the only way hackers are leveraging A.I.,” according to the Inside the Mind of a Hacker 2023 survey, which is based on answers from 1,000 respondents from 85 countries, including the U.S., U.K., Australia and Canada. “We’re seeing signs that point to hackers being early adopters of A.I. technologies, as they are constantly testing out different use cases to be more well-rounded and efficient at their jobs.”

While many survey respondents see benefits in generative A.I. tools and plan to incorporate them into their skill sets, others note that the technology is not likely to replace human creativity.

While 78 percent of respondents believe A.I. will disrupt the way they conduct penetration testing or work on bug bounty programs, 72 percent do not believe it will replicate human creativity.

A.I. and the Evolving Tech Skill Set

While not everyone is convinced of the creative superiority of A.I., ethical hackers and other tech pros want to incorporate the technology into their skill sets. The Bugcrowd survey found that 94 percent of hackers plan to start using A.I. in the future to help them ethically hack and work on bug bounty programs.

The top five use cases for A.I., the survey found, include:

  • Automating tasks: 50 percent of respondents
  • Analyzing data: 48 percent of respondents
  • Identifying vulnerabilities: 36 percent of respondents
  • Validating findings: 35 percent of respondents
  • Conducting reconnaissance: 33 percent of respondents

This view of A.I. aligns with what Timothy Morris, chief security advisor at security firm Tanium, sees in the market.

“I agree completely with the majority that it will not replace the security researchers and [ethical] hackers. Hacking requires skill—A.I. has that—but also creativity that comes from understanding context—A.I. doesn't have that. While AI may get better over the years, I don't see it as a replacement,” Morris recently told Dice. “Not surprisingly, automation of tasks was high on the list for A.I. use. Most researchers I know do not like writing reports. I normally would pair up a solid researcher with an excellent report writer to get them done. With tools like ChatGPT, having those report templates ready to go is a great tool for the hacker that likes to minimize the red tape.”

Even before the release of the Bugcrowd study, industry insiders saw A.I. and machine learning as emerging skill sets that everyone, whether a part-time researcher or a full-time cyber analyst, will need to master as the technology becomes more ingrained.

This, in turn, will drive the need for many tech pros to revamp their skill sets.

“The implementation of A.I. in security research serves a multitude of purposes for hackers,” Craig Jones, vice president of security operations at Ontinue, told Dice. “Automating tasks and analyzing data are the most prevalent applications, providing hackers with time-saving and data-driven advantages. However, hackers also employ A.I. to conduct reconnaissance, identify vulnerabilities, validate findings, categorize threats, detect anomalies, prioritize risks and even train models. This breadth of application showcases the versatility of A.I. and its ability to empower hackers in various aspects of their work.”

Also driving interest in A.I. is the well-documented shortage of tech and security professionals to fill the more than 600,000 open cybersecurity positions in the U.S. By automating currently manual processes, organizations can improve their defenses, especially around vulnerability detection, experts noted. This approach, however, still requires skilled workers who know how to run A.I. models.

“With the much-hyped skills shortage in the industry, automation and adoption of A.I. can help plug the gaps and help defenders stay several steps ahead of the threat actors,” Mike Heredia, vice president for EMEA at XM Cyber, told Dice.

Other Hacking Skills

While the Bugcrowd survey focused mainly on developments in A.I., the report also noted how ethical hackers looking to make additional money or build their cybersecurity knowledge are developing their skill sets.

The study found that about 68 percent of ethical hackers and bug bounty hunters are college-educated and use various means to develop and advance their skill sets. These include:

  • Online resources: 84 percent
  • Self-teaching: 71 percent 
  • Trial and error: 40 percent
  • Friends or mentors: 34 percent
  • Academic or professional coursework: 24 percent

“In our conversations with the hacker community, one constant that comes up every year is the fact that hackers use a diverse set of resources to increase their hacking skills,” according to the report. “Hackers leverage a variety of online resources, work with mentors, and pursue traditional methods, such as academic or professional coursework. Many hackers also teach themselves how to hack, which often comes down to trial and error.”

And while ethical hackers are finding new ways to learn more about a particular field of interest, public and private organizations must also expand and diversify their recruiting efforts to find cyber talent, said Bugcrowd CEO Dave Gerry.

“Organizations must take a more vigorous approach to recruit from non-traditional backgrounds, which, in turn, greatly expands the candidate pool from just those with formal degrees to individuals who provided the right training [and] have extremely high potential,” Gerry wrote in a statement to Dice. “Additionally, this provides the opportunity for folks from diverse backgrounds, who otherwise wouldn’t be able to receive formal training, to break into the cybersecurity industry providing income, career and wealth-creation opportunities that they otherwise may not have access to.”

The increasing emphasis on A.I. and machine learning shows how fast the industry is changing and how cyber talent needs to expand and adjust with it.

“One of the reasons I love this industry is that there is always something new to learn and some new risks or attacks to try to fix,” John Bambenek, principal threat hunter at security firm Netenrich, told Dice. “For instance, we are just now starting to come to grips with the cybersecurity risks of AI/ML systems and, in some ways, we have to recreate everything we know about red teaming when it comes to these applications.”