How Third-Party Code Makes App Development A Cybersecurity Risk

For years, cybersecurity-minded organizations have attempted to convince their development teams to bring more security into the application development process. This includes creating modern disciplines such as DevSecOps as a way to build better, more secure code and apps.

Despite these efforts, as well as warnings from cybersecurity analysts and experts, vulnerable third-party code continues to find its way into the DevOps and application development process, producing apps that are open to attack and exploitation by fraudsters and sophisticated attackers alike.

The ongoing use of so-called “shadow code” (third-party scripts and code libraries used to quickly assemble web applications with little regard to security) continues to be a major problem for many enterprises and organizations, according to a recent study released by Osterman Research and security firm PerimeterX.

The study, conducted between May and June, drew responses from about 800 security professionals and developers. It found that 99 percent of respondents reported that the websites used in their organization contained at least one third-party script, and almost 80 percent of those surveyed said such scripts account for between 50 and 70 percent of a typical website.

More than half of respondents reported that this third-party code changes four or more times every year. Only about a third, however, have the ability to detect changes or updates made on their website that could potentially lead to a security problem.
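
That gap in visibility is largely a monitoring problem. As a rough illustration (not drawn from the report), a minimal sketch in TypeScript for Node 18+ could periodically fetch each third-party script a site relies on and compare it against a known-good hash recorded at review time; the URL and digest below are hypothetical:

```typescript
import { createHash } from "node:crypto";

// Hypothetical baseline: third-party script URLs mapped to the SHA-256
// digest recorded the last time each script was reviewed.
const baseline: Record<string, string> = {
  "https://cdn.example-analytics.com/tracker.js":
    "9b74c9897bac770ffc029102a200c5de0ec58a2f6a4f6c6e1e3f4a5b6c7d8e9f",
};

// Fetch the live copy of a script and flag any drift from the baseline.
async function checkScript(url: string, expected: string): Promise<void> {
  const res = await fetch(url); // global fetch, available in Node 18+
  const body = await res.text();
  const actual = createHash("sha256").update(body).digest("hex");
  if (actual !== expected) {
    // In practice this would open a ticket or page the security team.
    console.warn(`CHANGED: ${url}\n  expected ${expected}\n  got      ${actual}`);
  } else {
    console.log(`unchanged: ${url}`);
  }
}

async function main(): Promise<void> {
  for (const [url, digest] of Object.entries(baseline)) {
    await checkScript(url, digest);
  }
}

main().catch((err) => console.error(err));
```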

Most importantly, about 50 percent of those surveyed could not say definitively whether their web applications had been subject to an attack. This, security professionals note, demonstrates why using untested third-party code is a cyber threat even as it adds speed to the DevOps process.

“Vulnerable shadow code can be devastating in consequence, introducing vulnerabilities including remote-code execution in which an attacker may inject specially crafted requests to applications that are executed as code,” Archie Agarwal, the founder and CEO of security firm ThreatModeler, told Dice. “This has the potential to reveal sensitive information residing in databases.”

The Trouble with Third-Party Code

The Osterman Research report found that much of this third-party code is used to speed up application development, and can be leveraged by developers for an array of purposes, including ad tracking, payments, customer reviews, chatbots, tag management and social media integration, as well as helper libraries that simplify common functions.

By adding this functionality, however, developers are opening up their applications to various types of attacks, including card skimming and so-called Magecart operations that are designed to insert code into e-commerce checkout pages and steal credit and payment card information from consumers.
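
One widely recommended defense against this class of injection, though not one prescribed by the report, is Subresource Integrity (SRI): the page pins a hash of the vetted script, and browsers refuse to execute a copy that has been tampered with. The TypeScript sketch below computes an SRI value for a locally reviewed copy of a script; the file path and CDN URL are hypothetical:

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Compute a Subresource Integrity value (sha384, base64-encoded) for a
// locally vetted copy of a third-party script, e.g. one reviewed at release.
function sriValue(path: string): string {
  const contents = readFileSync(path);
  const digest = createHash("sha384").update(contents).digest("base64");
  return `sha384-${digest}`;
}

// Hypothetical vetted copy of a checkout helper script.
const integrity = sriValue("./vendor/checkout-widget.js");

// A build step or template could emit this tag; if the CDN copy is later
// tampered with, browsers that enforce SRI will refuse to execute it.
console.log(
  `<script src="https://cdn.example.com/checkout-widget.js"
        integrity="${integrity}" crossorigin="anonymous"></script>`
);
```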

“Shadow code in the form of vulnerable open-source libraries and third-party code has the potential to open applications to attacks that could potentially reveal personally identifiable information of users,” Agarwal said. “This goes far beyond names and addresses of course when we consider health apps. If this sort of private information is exposed there is an onus on the organization to inform the relevant regulatory body which may have severe financial implications as well as reputational.”

In the wake of the cyberespionage campaign that targeted SolarWinds and its customers, the ways threat actors can manipulate code used in various apps have raised fresh concerns about supply-chain attacks, according to the study.

Kevin Dunne, president at security firm Pathlock, noted that the use of third-party code opens up organizations to three specific security issues:

  • The risk that a third party can view data on the organization’s site;
  • The risk that a third party can access the organization’s site or network directly;
  • And the risk that shadow code libraries and scripts may become unsupported, which can then affect the functionality on the organization’s site.

This opens up customer and other personal data to potential exposure and theft, which can then risk violations of the EU’s General Data Protection Regulation or the California Consumer Privacy Act.

“If the shadow code allows a third party to unknowingly view data on an organization’s site, it likely put the organization at risk of maintaining GDPR or CCPA compliance, because an unknown data processor is viewing data without public disclosure,” Dunne told Dice. “This can result in millions of dollars of potential fines for an organization that is required to maintain this type of data privacy compliance.”

Sticking to DevSecOps

While the continued use of shadow code and other third-party scripts and libraries would seem to show that concepts like DevSecOps don’t work, security experts say the results of the study demonstrate exactly why organizations need to stick to these principles of better, more secure app development.

“My advice for developers looking to bake in security to their applications would be to stick to libraries approved by their organization,” Taylor Gulley, senior application security consultant at nVisium, told Dice. “This, in addition to reviewing the code—or getting a second set of eyes to do so—is critical to security. Lastly, be sure to source actively maintained options and codebases that are widely used.”

Part of this also includes staying aware of known vulnerabilities in code and keeping up with security alerts. Gulley also notes that it’s not only obscure projects that contain faulty code that could be exploited.

“Be wary of popular projects as those are more likely to be the target of an attack and are often large with a broader attack surface,” Gulley said. “This does not mean to not use such libraries but to ensure that they go through a wide security review of the code and its dependencies recursively.”
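
One practical way to make that recursive review tractable is to enumerate every transitive dependency from the lockfile rather than trusting only the top-level manifest. A rough TypeScript sketch, assuming an npm package-lock.json in the v2/v3 format and a hypothetical organization allow-list:

```typescript
import { readFileSync } from "node:fs";

// Hypothetical allow-list of packages approved by the organization.
const approved = new Set(["react", "react-dom", "lodash"]);

// npm lockfile v2/v3 keeps a flat "packages" map keyed by install path,
// e.g. "node_modules/lodash" or "node_modules/a/node_modules/b".
interface Lockfile {
  packages?: Record<string, { version?: string }>;
}

const lock: Lockfile = JSON.parse(readFileSync("package-lock.json", "utf8"));
const marker = "node_modules/";

for (const [path, meta] of Object.entries(lock.packages ?? {})) {
  if (!path.includes(marker)) continue; // skip the root project itself
  const name = path.slice(path.lastIndexOf(marker) + marker.length);
  if (!approved.has(name)) {
    console.warn(`NOT ON ALLOW-LIST: ${name}@${meta.version ?? "unknown"}`);
  }
}
```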

Caitlin Johanson, the director of the Application Security Center of Excellence at consulting firm Coalfire, noted that organizations and their development teams must define the use, tracking and updating of platforms, libraries and components—and that these policies must be enforced. This helps to ensure that when security updates are published, they are responsibly worked into the application and underlying environment, Johanson added.

“The problem we’re all seeing is that even when a developer has explicitly included specific libraries, often, those dependent libraries have their own dependent libraries,” Johanson told Dice. “This is where the association of ‘shadow code’ truly hits home for us. The cases where a page on a website includes a single JavaScript file from somewhere else, but that one dependency ends up loading 20 more from various other destinations… it’s easy to see just how quickly code becomes unmanageable.”
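
Beyond dependency policies, one guardrail against that cascade of unexpected loads is a Content-Security-Policy that restricts which origins may serve scripts, so anything pulled in from an unlisted host is blocked and reported. A minimal sketch, assuming an Express-style Node server and hypothetical allowed hosts:

```typescript
import express from "express";

const app = express();

// Only allow scripts from the site itself and from explicitly vetted hosts;
// transitive loads from any other origin are blocked by the browser.
const scriptSources = ["'self'", "https://cdn.example.com"];

app.use((req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    // report-uri is the older reporting directive but remains widely supported
    `script-src ${scriptSources.join(" ")}; report-uri /csp-report`
  );
  next();
});

// Collect violation reports so unexpected script origins surface quickly.
app.post(
  "/csp-report",
  express.json({ type: "application/csp-report" }),
  (req, res) => {
    console.warn("CSP violation report:", JSON.stringify(req.body));
    res.sendStatus(204);
  }
);

app.listen(3000);
```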

2 Responses to “How Third-Party Code Makes App Development A Cybersecurity Risk”

  1. Dan Gielan

    Good article, but it doesn’t even start to scratch the surface. To a veteran IT professional like myself, the colossal failure of the entire IT realm to properly address security is no longer astonishing or newsworthy. It is as expected as tomorrow’s sunrise.

    When the SolarWinds debacle was exposed, I shook my head from side to side and chuckled. As sad and painful as it was to the country, it was absolutely and totally predictable. It was just a matter of time for such a breach to occur, and my amazement was that it had not happened sooner.

    I chuckled because at one of the early Unix symposia (I believe 1979 or 1980?), Ken Thompson of Unix fame gave a presentation in which he made a comment relevant to this discussion: “If you let me put one of my files on your system, you lost your security.” The crowd totally dismissed his wisdom as a joke (because right after that he offered the audience a copy of his highly cherished PDP-11 chess game, and the audience laughed). Ken was not joking; he truly believed that, and he was obviously right.

    This was long before the internet, with its system interconnections and the gaping security holes it brought with it. This was before software reuse and shared libraries created by every Tom, Dick, and Mary became mainstream.

    I chuckled, because I knew it was bound to happen.

    In 1991 the consulting company that I ran was hired to participate in a classified US Federal Government project, to produce a Compartmentalized Mode system for one of the branches of the military. The operating system was provided by a security software vendor, as a B1 secure Unix variant written in C, and running on specialized hardware. My company provided the C software generation system including the compiler, assembler, loader, and libraries – the whole enchilada.

    Except that my company did not have a security clearance, and none of the people involved had one either. We delivered the binaries of the compilation system, and it could have had any malicious code we desired to put in. NOBODY inspected its source code, nobody verified the object code produced. Nobody.

    That was then, 30 years ago. I am not sure that the Government even had a CISO then, let alone someone with the charge to evaluate the product.

    But this is 2021. I would have thought someone would have gotten more sophisticated during those 30 years, but obviously I was mistaken. There are no margins, I guess, in investing a broken cent on this gaping hole of third-party software.

    And has the SolarWinds CISO been reprimanded? How about the CEO? How could their security be so sloppy as to allow malicious code to be inserted into their product line and go out to unsuspecting clients, with none of those clients validating their security processes?

    I cried.