
Thwart Supply Chain Attacks by Securing Development

Published on Oct 13, 2021

Listen to the following security and product experts share their insights in the webinar:

  • Deb Radcliffe, strategic analyst at CyberRisk Alliance
  • Colton Chojnacki, product manager at Beyond Identity
  • Husnain Bajwa, senior manager for global sales engineering at Beyond Identity

Transcription

Deb Radcliffe

Hello, everyone, and thanks for joining us. I'm Deb Radcliffe, strategic analyst at CyberRisk Alliance, and I'll be moderating today's broadcast titled, "Thwart Supply Chain Attacks by Securing Development," sponsored by Beyond Identity. Today's information will be presented by Husnain Bajwa, HB for short, and Colton Chojnacki. HB is the senior manager for global sales engineering at Beyond Identity and Colton is the product manager at Beyond Identity. 

First, before we get started, we need to go over some housekeeping. If you'd like to ask any questions, please select the Q&A button below. If you're interested in accessing additional resources provided by Area 1 Security, select the handouts button below. Also, note that a recording of today's presentation will be made available on-demand after the event. And now we turn our discussion over to Husnain and Colton. 

Husnain Bajwa

I wanted to go over the agenda real quick and provide some introductions for ourselves. And today we're going to be talking about supply chains, a popular topic, especially with the executive order earlier this year from the White House and all of the work being done by NIST and the Software Engineering Institute and various security development lifecycle firms. 

We want to essentially talk about the context for supply chains and sort of how we've gotten to where we are. Then we're going to talk a little bit about the specifics of software development in a modern agile cloud-native environment, and what it looks like when you establish opinionated kill chains and start thinking about your security in a more rigorous fashion with structured controls and practices around protection, detection, and response. 

And how to shift left to move more towards protection, to move more towards earlier assurance and proactive verification. And then we'll sort of provide a demo and wrap it up with some simple recommendations for better hygiene. And then we'll open it up for questions, we're really excited to hear questions from the audience. 

So, let me switch it and let Colton provide his introduction. 

Colton Chojnacki

Hi, my name is Colton Chojnacki, I'm a product manager here at Beyond Identity, I work on our directory and our generalized key management solutions. I've been in the industry nearly 10 years now as a software developer, working on application development, DevOps, and various infrastructure tasks. 

Husnain

Colton is very humble. He spent a lot of time intensively working as a software engineer and has worked in multiple facets of cybersecurity, including work with some very large, global critical infrastructure customers. 

And through that experience, he's been a really valuable resource in putting together the product that we're going to be talking about, but even more so in framing the problems and creating the framework for the solutions that we believe are most important in this space. And my name is Husnain again, and I've spent over 20 years in the infrastructure industry. 

I'm kind of an infrastructure geek. The last 12, 13 years, I've spent doing cybersecurity with a significant focus on wireless as well as large-scale IP networks. And more recently, I've been quite involved with our efforts around secure DevOps and putting together solutions with Colton. 

So to begin, I think we should define what we're talking about when we describe the supply chain attacks that we're going to cover. We're really mostly interested in software supply chain attacks, but supply chain attacks are impacting organizations across the board: software organizations, technology companies, and really almost every company is a technology company today. 

They're experiencing a new type of threat from a more sophisticated threat actor that can tolerate a larger variation in cost of compromise. And so, as these actors come up with targeted attacks, more and more organizations are essentially vulnerable to supply chain attacks, which generally reflect attackers leveraging initial access and lateral movement to establish long-term reconnaissance and target the highest-value components of an organization's business process and business sequence. 

Within this sort of space, we've obviously seen a lot of major news stories around Kaseya and SolarWinds and Colonial Pipeline. When we talk about these kinds of incidents, it's often sort of implied that these organizations were unsophisticated or fell victim to very rudimentary kind of attack vectors. 

While that's somewhat true, the story is always much more nuanced. And from our standpoint, we think that blame dominates far too many of these conversations, and the focus should shift away from blame, which is more of a compliance- and validation-side activity. 

When you blame an organization, you end up doing a lot of testing towards the end of a cycle. And the real solution to all of these things that we've seen through secure development lifecycle work over the last 20 years, is that remediation and prevention with a strong cooperative bond between the earliest actors, in software cases, developers, is really, really important. 

And so, that's what we want to concentrate on. Now, this is a little bit of an exercise in just looking at the way that blame is kind of assigned, and looking at it in the context of some popular attacks that I think all of you will be able to recognize, right? 

The basic format for all of this is, "It was blank in blank with blank who blank." And it feels like an extremely techie, geeky game of Clue, but with a lot of Sigmund Freud wrapped into it. If you look at some of the earliest ones that we popularly talk about, Target comes up quite a bit, and the story becomes that it was HVAC controllers in private store networks with weak credentials who allowed attackers to compromise the point-of-sale terminals. 

It evolved into this, like, REvil messaging where you end up getting, "It was REvil in company networks for months," doing long-term reconnaissance, using RDP as initial access, and encrypting and exfiltrating data to create new kinds of lucrative ransomware attacks. 

And then, of course, we also all saw, like, some of the interesting stuff that happened with congressional testimony around one of the major breaches, and a CEO blamed an intern. And the story became that it was an intern in an FTP server with a weak password who exposed an infrastructure software company and all of its customers to malware. 

These are easy, non-nuanced stories, and they make for great television, but they're definitely not catching the entire story. You can see that this blame game continues, and you can easily imagine a world where new kinds of developer-centric and insider threat machinations create problems for us moving forward, given that developers are responsible for an enormous number of cryptographic assets in modern environments. 

Their access to infrastructure and their ability to impact products is enormous, and it's easy to see them as careless or irresponsible when in the course of an extremely busy, ordinary sort of job activity, they're also having to account for security challenges like keys at-rest. 

It's easy to imagine them being blamed for carelessness, but it's also easy to see stories being built up around malicious actors. And with the emergence of Infrastructure as Code, the same types of problems that we have in software supply chains are strongly impacting existing operations teams, teams that are still struggling to transition from conventional systems administration to DevOps to DevSecOps. 

And so, within that journey, the last thing that any of these individuals need is an additional challenge and reputational hit. And when you look at it, the stories are always very convenient. The attacker was always sophisticated and well-funded; the employee was careless, incompetent, or malicious. 

This is the sort of default practice. And when you look at this kind of framing that's typical, it's really important to look at what best current practices might look like for credentials and crypto private keys, some of the most important components of initial access that lead to subsequent lateral movement and long-term compromise. 

Knowledge factors, specifically passwords and secrets and their associated complementary technologies, are weak by design. We all know why passwords are a challenge, and we all understand how secrets are managed. A sophisticated, well-designed secrets architecture will typically use centralized vaults and privileged access management tooling that's relatively extravagant: it has a very high training threshold and a very high-friction ceremony design, and this creates a lot of challenges in modern environments. 

Also, we tend to think that the solution to knowledge factors is simply private keys, any private keys. But the reality is that developers are often on the hook for managing private keys and generating key pairs under the assumption that the consequence will be much better, automatic security. 

And the reality is that's not true; we end up weak by implementation. The vast majority of key pairs generated by developers and operations folks are not protected in use, they're human-protected at rest. 

So, if a person chooses to store them in their home directory, or chooses to synchronize them over Dropbox, whatever they're choosing to do in terms of their own personal IT infrastructure has a huge impact on the quality of that storage. And when the certificates are being transferred or onboarded in typical centralized architectures, how much security is present depends on the protocol, and there are at least a dozen approaches commonly used for these kinds of security events. 

So, what we've seen more recently is that CISOs are becoming more and more aware of the situation. Developer shortages, especially accelerated by COVID, work from anywhere, and broad, global competition for talent have driven a lot of CISOs to see themselves as advocates for the employees as well as advocates for the organization. 

And when they look at protecting the organization, they want to have principled approaches to reducing the attack surface and protecting the brand. They also want to protect the employees, reduce that initial access threat, minimize lateral movement, contain the blast radius involved in most of these operations, and do it in order to protect reputations, avoid identity theft, and construct it around something that the developers and employees can buy into as well. 

And looking at that new kind of employee-centric CISO is what brought us to our core product, which then led us to this new avenue with DevSecOps and developer keys. What we built was a new type of authenticator that specifically addressed those key-in-use, key-at-rest, and key-in-motion challenges, and essentially put a PKI infrastructure into the platform authenticator. 

A simple, lightweight agent that can run on Windows, Mac, Linux, iOS, or Android. This authenticator is responsible for maintaining the security bindings and leveraging modern security protocols used in enterprise, SSO, and SaaS authentication. And underlying this integration with all of these SSO tools, we embed a very strong credential that's minted locally at each platform authenticator using the local Secure Enclave technology for that particular platform. 

So, over the last 10 years, we've seen a huge uptake in Secure Enclaves and Trusted Platform Modules. These are hardware enclaves that provide unclonable, secure, tamper-proof environments to store keys, and in many cases, also generate keys and seal keys and seal information and provide cryptographic operations. 

We've maximized that capability and provided keys that are strongly trusted on first use based on bringing a public key into our directory. 

And beyond the public key in our directory, we don't maintain any other credentials. And so, the users are basically without password...genuinely without password, and all of their credentials are stored within this hermetically-sealed Secure Enclave vault that simply operates on signing operations to make sure that authentication can occur in a maximally secure way that doesn't rely on any custodianship of credentials by the enterprise. 

This is a big change. Directories for the last 30 to 35 years have essentially assumed that in addition to the username, the other construct that's always going to persist is the password. We're trying to really shift away from that and bring forward the promise of 1988 and early X.509 strong authentication, but make it usable in a modern environment without all of the friction and challenges associated with certificate authorities and the associated PKI infrastructure. 

And the reality is that eliminating passwords and replacing them with strong authentication for all of your SaaS applications and enterprise applications is pretty low-hanging fruit. If you look at the evolution of the TPM, of PKI itself, and of smart cards and FIDO and WebAuthn, there are an enormous number of tools and underlying technologies that really enable us to make this jump finally, after 60 years. 

But what it reveals, after you complete that passwordless journey, is that cryptographic asset sprawl is a much bigger problem in enterprises, especially information-centric enterprises. It's grown enormously with the adoption of the cloud. 

And a lot of the challenges are what we were talking about just earlier: keys are assumed to suggest security, but keys are fundamentally easy to mint. There are many packages available in open source and in user-space software that can generate keys, and keys generated in insecure environments are hard to protect; the poor hygiene associated with those kinds of keys really degrades the key itself to a very basic sort of weak secret. 
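
To make the "easy to mint" point concrete, here are two illustrative one-liners; either produces a working key pair in seconds, with no hardware backing and only whatever file protection the developer's home directory happens to have. The names and parameters are examples, not a recommendation.

```bash
# An SSH key pair with an empty passphrase; the private key is just a
# file on disk, protected only by filesystem permissions.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_demo

# A GPG signing key with example parameters; same story.
gpg --quick-generate-key "Dev Name <dev@example.com>" ed25519 sign 1y
```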

And that brings us to the last 10 years, where we've seen a few major changes in infrastructure; cloud has really become a dominant force. 

You've seen it go from essentially an IT tool to a broad line-of-business enabler, so vertical solutions for very specific business applications are everywhere. Development methodologies are all pretty much agile. 

And within the agile world, we have resorted to using collaborative source code management in the cloud that was largely designed for open-source collaboration at a global scale, for projects like Linux. Simultaneously, we've seen the adoption of all of these enclave technologies to facilitate disk encryption, biometrics, and digital wallet compliance regulations. 

And so, when you look at all of these tools at 10,000 feet and you pull back and think about what the architecture should look like and take an opinionated stance on solving for security as leftmost as possible in this journey, that's where we think that identity is really at a pivotal point and can transform the software supply chain. 

And I'll hand it over to Colton now. 

Colton

Yeah. So, if you are a developer working in the last 10 years building a cloud-native application, you most likely have to pick a few of these boxes out of this chart to work with. And this is what makes up your CI/CD pipeline and software supply chain. And in reality, how this actually works is a developer is writing code to stitch all of these projects or third-party vendors together. 

And where exactly does the identity...like, where do you enforce identity and access management if your entire supply chain is sprawled out like this? So, let me... 

Husnain

You know, Colton, since you were a developer, when you look at charts like this and market landscape assessments, what portion of these kinds of tools were you able to, like, really process and understand even from sort of a category perspective when you were doing day-to-day development? 

Colton

I would tell you developers are always going to go with the tool that has less friction, which is something the security team is not always paying attention to. So, there's this fine line between the security team and the application developer team. The application developers don't want friction, and it's almost like, for the security team, this chart is so massive they don't even know where they should get involved. 

Husnain

Cool. That always seems overwhelming to me, so I just figured I'd check with you. 

Colton

Yes. Okay, so let me talk about, like, what a pipeline looks like today, a continuous integration/continuous deployment cloud-native pipeline. Developers submit code, and this could be code for an application, it could be code for infrastructure. 

That code goes through some verification: lint the code, compile it, run the unit tests, run the system tests and integration tests, send it off to QA, then finally deploy it to production. And this has really expanded the threat surface in the CI/CD pipeline that attacks like SolarWinds are targeting. 
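
As a purely illustrative sketch (the stage commands are placeholders, not any particular CI vendor's syntax), that pipeline order looks something like this:

```bash
#!/usr/bin/env bash
# Minimal sketch of the pipeline stages described above; each `make`
# target stands in for whatever tooling your project actually uses.
set -euo pipefail

make lint              # check syntax and style
make build             # compile the code
make test-unit         # run the unit tests
make test-system       # run the system tests
make test-integration  # run the integration tests
# ...hand off to QA here...
make deploy            # finally, deploy to production
```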

They're targeting this, they're targeting the code that is stitching together all of these cloud-native services. So, what we're talking about today is code signing. And I think there's something we should make clear: the difference between signing an artifact of the pipeline, the binary that comes out of the pipeline, and signing the code change that goes into the pipeline, whether it's a change to an application or, now with the rise of GitOps, a change to your infrastructure code. 

There's these two domains that we want to talk about. One is the development organization domain, and this is sort of like your CI/CD pipeline that is built and maintained by the organization. And whenever something comes out of that pipeline and you sign it, what you're really saying is that the organization endorses that artifact. So, what we're doing here at Beyond Identity is shifting left to the developer domain. Now, there are not great security tools out there for making sure that developer code changes securely go into the CI/CD pipeline. 

Historically, it's kind of always been left up to the developers to own this process, and a lot of security teams don't even really know whether they're supposed to get involved, or they leave it to just the developers. 

Husnain

So, Colton, when you look at this developer domain and development organization domain, are you saying the development organization is essentially providing promises to its downstream users and customers, but it's not necessarily getting the level of assurance that it probably should be getting from the developers themselves? 

Colton

Yes, I think a lot of the time, developers are kind of in their own silo and they're kind of telling the security team, "Trust us, our code changes are legitimate." So, yeah, let's go into why securing the developer is important. Let me talk about Git. 

Git is software used by software engineers so that they can collaborate on the same code base without stepping on each other's toes. It was originally designed and developed in the mid-2000s by the Linux kernel developers so that they could contribute code to the Linux kernel without breaking anything. 

And then services like GitHub and GitLab and Bitbucket came along, and they really just put a SaaS wrapper around this Git protocol. And the thing is that Git was never really designed or intended to be used by enterprises, so that portion of the developer domain, which is usually not under the control of an enterprise, goes largely unprotected. So, the problem with signing these artifacts that come out of a pipeline is that you can't confirm where the code necessarily came from, like, which developers it came from; you just know it came from the organization. 
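
For context, plain git makes that gap visible: locally, you can only see whether each commit carries a signature at all, and verification requires already having the signer's public key. A couple of standard commands illustrate this; nothing here is specific to any vendor.

```bash
# %G? prints a one-letter signature status per commit:
# G = good, B = bad, U = good but untrusted key, N = no signature.
git log --pretty="format:%h %G? %an %s" -5

# Detailed verification of one commit; this only succeeds if the
# signer's public key has been imported into the local GPG keyring.
git verify-commit HEAD
```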

Husnain

But doesn't Git...don't most of these platforms like GitHub have, like, a checkmark for signed code that says it's verified or something? 

Colton

Yeah. So, that's sort of like the Twitter verified checkmark; Git has these. But really, what that's saying is just that GitHub has verified that user, and in most GitHub and GitLab organizations, the model is that the developers bring their own personal accounts into the organization model in GitHub and GitLab. 
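
Mechanically, that checkmark just means the platform matched a commit's signature against a public key the user uploaded to their account. The upload side looks roughly like this, with the key ID as a placeholder:

```bash
# Find the long ID of your signing key.
gpg --list-secret-keys --keyid-format=long

# Export the public half in ASCII armor; this block is what gets
# pasted into the GitHub or GitLab account's GPG key settings.
gpg --armor --export YOUR_KEY_ID
```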

Husnain

So when Git verifies a developer's identity, are they also saying that that developer's identity is securely stored? 

Colton

Git is just saying that they know about the developer, but that kind of takes away from the enterprise security controls. Like, it's not GitHub's job to verify enterprise or corporate identities. 

Husnain

Cool. 

Colton

Okay. Yeah, so I think I should give a little background on what Git is. It's a distributed version control system that allows developers to all work on the same code base. And typically, the developer workflow is: the developer makes a change, and when I say change, I mean a source code change, they push that change up to a central repo, and every change runs through a continuous integration/continuous deployment pipeline. 

So, a developer makes a code change, and the pipeline first lints the code to make sure the syntax is correct, builds the code, then runs it through unit tests, system tests, and integration tests, and maybe scans for vulnerabilities or credentials in the code. So now, let me talk about what we've built, how we've shifted secure DevOps to the developer and away from the organization domain. 

So, the platform authenticator that Husnain talked about earlier, we've essentially added a capability in there so that we can sign the git commits as developers are making code changes. And we're signing them on a device, on the developer's device that is, where in an organization model, they're usually signed by the CI/CD pipeline, which is running on some server. 

So, we are signing each code change with the developer's identity. And then the second component we built was a module that you can import into your CI/CD pipeline. And this module will ensure that only code that was signed by a known corporate or enterprise identity can be admitted into the CI/CD pipeline. 
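
Beyond Identity's module itself is proprietary, but a stripped-down sketch of the same kind of gate, substituting a static allow-list of GPG key fingerprints for their identity service, might look like the following. The allow-list path and the assumption that signer public keys are pre-imported into the CI keyring are ours, for illustration only.

```bash
#!/usr/bin/env bash
# Hypothetical CI step: reject any commit that is unsigned or signed
# by a key outside a corporate allow-list of fingerprints.
# Requires the signers' public keys in the CI runner's GPG keyring
# and a reasonably recent git (for the %GF format).
set -euo pipefail

ALLOWED_FPRS="/etc/ci/allowed_fingerprints.txt"   # example path

# %GF prints the fingerprint of the key that signed the commit;
# it is empty when the commit is unsigned or unverifiable.
fpr=$(git log -1 --pretty="format:%GF" HEAD)

if [ -z "$fpr" ]; then
  echo "Commit is unsigned or unverifiable; rejecting." >&2
  exit 1
fi

if ! grep -qix "$fpr" "$ALLOWED_FPRS"; then
  echo "Commit signed by unknown key $fpr; rejecting." >&2
  exit 1
fi

echo "Commit signed by a known corporate identity; proceeding."
```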

Husnain

So, when you say that it's signed with the developer's identity, how does that relate to the GPG key? Where's the GPG key coming from? 

Colton

So, the GPG key is cryptographically tied to the developer's identity. We've essentially built a personal certificate authority into our platform authenticator, so we're able to issue keys and certificates from that identity. All right, and now I will do a quick demo. 

Let me share the screen. All right, can you see my screen? You should see a terminal and a web browser. Okay, so I've set up an example Git repo that we've integrated our product with. 

And in this repo, I've created this pipeline. I've kind of just created a standard CI/CD pipeline, where first, anytime a code change comes in, we lint the code, we build the code, we run some of the tests, and then we finally deploy the code if everything passes. 

So what I've configured is I've installed our verification module at the very beginning of the pipeline. And, yeah, so let me...now I'm going to do some examples of like a developer workflow. So, we have our platform authenticator running here. 

This is our credential. This is another credential, just as an example. If I go into the GPG keys, this is the actual signing key that was generated. And the private key is stored in the Secure Enclave or the TPM, and the public key can be uploaded to GitHub or GitLab. 

So, I'm going to make a code change. I have this example repository called "Effective Guacamole," and it's really, like, where we keep our super-secret guacamole recipe, so we only want known corporate identities committing to this recipe, we don't want anyone injecting bad ingredients into the recipe. 

Okay, I hope this is big enough. Let me make this bigger. And I open up my recipe. Let's just say I want to add more tomatoes to the guac. I just made my code change. 

Now I'm going to do my git commit and I'm going to add a message that said, "Added more tomatoes." Now I've made that git commit, and you can see this little toast message that said, "Beyond Identity has signed the git commit," and this just happened in the background, the developer didn't have to do anything. 

Now, usually, this would be, like, an involved ceremony where a developer has to go check out a key, they have to bring it down, they have to sign it, and then put the key back. But with Beyond Identity, we're just signing it in the background and really, the developer doesn't even know it's happening. So, I'm going to push that code change up to the repository. And now that it's pushed up, it's going to run through the CI/CD pipeline. 
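
For comparison, that manual ceremony in plain git and GPG looks roughly like this; the key parameters and ID are examples, and this is exactly the per-developer overhead that background signing removes.

```bash
# One-time setup: generate a signing key and point git at it.
gpg --quick-generate-key "Dev Name <dev@example.com>" ed25519 sign 1y
git config user.signingkey YOUR_KEY_ID

# Sign one commit explicitly...
git commit -S -m "Added more tomatoes"

# ...or opt in to signing every commit automatically.
git config commit.gpgsign true
```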

So, we can go watch it. So, I added more tomatoes to the guac, it's running through my pipeline, we can go watch the logs. And it's checking, "Is this..." 

It's doing a check, it's basically asking Beyond Identity, "The key that was used to sign this git commit, do you know who it is? And should we allow this or not?" And we return a message like, "Yes, we know who it is, it's an identity we know about and go ahead with the change." 

So then, the rest of the pipeline just runs. And this is just an example of the pipeline running. So now, I'll make a minor config change so that I don't sign the commits, and I'll try to push up another change. 

And I'm really just saying, "Do not sign the git commits." I'm going to make a change to the recipe. What's something that's really...what's something you shouldn't add to a guacamole recipe? Cheese. 

Husnain

Chocolate chips. 

Colton

Chocolate chips. Perfect. Now I'm making a commit and I didn't sign it. Push it up to the repo, it's now going to go through the same pipeline. 

All right, the job is running. And we stopped it, we prevented this code change from going into the repo because it wasn't signed. 

And that's really just one example of a reason to not allow it in. Because all of these identities are tied to Beyond Identity, an administrator is able to go into our Beyond Identity console and suspend a user, which would, therefore, prevent us from signing anything for that user. 

Also, there are policies that can be written so that you can only sign git commits from a managed device. So, if you take a step back and look at what we have running on the authenticator and what we have running in the pipeline, we've really created a solution that allows git commits to come only from a trusted, managed device and a known corporate identity. 

Deb

This may be a stupid question, but I'm wondering what if they put in the chocolate chip and then signed the code? What would be the process? Are they even allowed to sign the code if they put the chocolate chips in? 

Colton

So, that's sort of like an insider threat. And what we're also providing is code provenance and non-repudiation. If the developer signed the commit that put the chocolate chips in, there's no way they can say they didn't do it. 

Husnain

Yeah, you'll know that it's a malicious actor or a very bad employee with poor taste buds. 

Yeah, one of the bigger things that's emerged from this evolving and advanced threat landscape is some learnings from the way that distributed ledgers and modern cryptocurrency have turned out. 

One of the hardest guarantees to provide is that someone genuinely did perform an operation even when they say that they did not. And so, that, like, elimination of plausible deniability is a really core component to achieving computational accountability and rigorous trust. 

And so, that formal proof is what we're able to provide by signing and sealing our log messages as well. Because we're able to locally and seamlessly sign any piece of information, without the latency of central usage or of checking keys in and out, we're able to essentially provide that sealed guarantee around the logs and every event that's happening in the system. 
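
As a plain-GPG illustration of that kind of sealing (not Beyond Identity's actual mechanism), a detached signature binds a log file to a signer so that later tampering, or a later denial of authorship, is detectable:

```bash
# Produce a detached, ASCII-armored signature over a log file;
# this writes audit.log.asc alongside it.
gpg --detach-sign --armor audit.log

# Anyone holding the signer's public key can later verify both the
# integrity and the origin of the log.
gpg --verify audit.log.asc audit.log
```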

Cool. Well, Colton, that was an awesome demo. Just out of curiosity, how long does it take to set something like this up? 

Colton

Yeah, so it takes a...I would say a couple of minutes. It's more like something a developer does one time, they set it, and then they forget it. 

Husnain

And what about the GitHub actions or the sort of Git repo actions? Are those difficult to set up or do we provide samples or...? 

Colton

Yeah, we have samples, and they're not difficult to set up. And what's great about the way we implemented it is that you can decide where in your pipeline you want to put that check. So, let's say you want to run it every time a commit is pushed up to your Git repo, or every time a merge request is opened. 

So, a request to merge a code change into your main branch. So, it's really up to the administrator to decide where it makes sense to put that check. 

Husnain

And does the system require you to be integrated to the SSO or the sort of enterprise identity system or is it possible to start smaller and more compartmentalized? 

Colton

Yeah, you can definitely start smaller. There's nothing that says you need to be integrated with SSO to use this. 

Husnain

Cool. So, you know, we just want to wrap up before we start taking some questions. In terms of our recommendations sort of at a 10,000 foot kind of industry level, we think it's really important to adopt blameless approaches to cybersecurity controls. 

It's important to pull back and ask the right questions, adhere to first principles, and really think through how you want to think about cybersecurity frameworks. People have a tendency to look at attacks and compromises and immediately, reactively put in new layers of controls without reassessing the entire situation, thinking about where the vulnerability really exists, and deciding at what point you should really intervene. 

And so, that's where having these models, MITRE ATT&CK or the Lockheed Martin Cyber Kill Chain or various NIST controls, helps; it's important to sit back and generalize, right? All of these models have essentially a protect, detect, and respond component to them. So, even if you're just looking at it as three simple steps, pulling back and thinking about it like that is useful. 

Breaking down your software development lifecycle and understanding how to frame that supply chain within the context of the cybersecurity framework that you choose is really important. Absolutely, people should be signing their code artifacts and they should have strong attestation and provenance for those. 

So, using proper tooling to make sure that you're signing code artifacts is important. We encourage people to use dynamic application security testing tools that validate those artifacts, with checks for known patch levels and vulnerabilities, and scan them against large CVE databases. 

And then we also think that there's a huge role to be played by SAST tools. When you look at the static application security testing tools that have access to source code, they've been extremely valuable in establishing software bills of materials that have strong assurances and understood provenance. 

That said, SAST tooling to understand your open-source component contribution isn't the end-all, be-all; you really also need to understand the majority of intellectual property that's getting injected into your code repositories by your actual developers. 

And so, that's why having the supply chain mindset and sequencing things out and thinking about what order they come in is really important. So, we just want to move everyone as left as possible and we want to get people using cryptographic key pairs but in smarter ways. 

A lot of people have noticed that the source code management systems' automatic scanning capabilities have actually lowered developers' vigilance about ensuring that they don't put secrets into their code; they rely on the scanning taking place in these platforms to take care of that. 

That's the kind of reactive, end-of-chain, compliance-centric, managing-to-the-test mentality that just doesn't help anyone in a secure development lifecycle. And so, we want people to utilize tools intelligently, tools that recruit people earlier and more proactively to think about security not just in terms of known threats and mitigating existing attacks that a team may have experienced, but to really look at it as a systemic foundation and adhere to the practices that are emerging from the broader secure software development community. 
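
One way to recruit developers earlier on the secrets problem specifically is a local pre-commit check rather than after-the-fact platform scanning. This is a deliberately crude sketch; the patterns are examples only, and a real setup should use a dedicated scanner.

```bash
#!/usr/bin/env bash
# Hypothetical .git/hooks/pre-commit: block a commit whose staged
# changes appear to contain credentials. Illustrative patterns only.
set -euo pipefail

if git diff --cached -U0 | grep -nEi \
    '(api[_-]?key|password|BEGIN (RSA|EC|OPENSSH) PRIVATE KEY)'; then
  echo "Possible secret in staged changes; commit blocked." >&2
  exit 1
fi
```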

And so, that's really where we come in and we just ask that people, you know, be more intentional and be more respectful of cryptographic asset hygiene. Cool. 

And I think we're open to questions now. 

Deb

Thanks for that great presentation. It was very informative and I believe it probably helped a lot of developers level up without feeling that they have to take on a lot of extra responsibility. So, we have some questions that came in. 

The first one sort of speaks to the blame game that you started the presentation out with, Husnain, and that is, "Who is typically responsible for the CI/CD pipeline? And what do they actually do in their roles of responsibility?" 

Colton

Yeah, it sort of depends on the stage of the company. At software startups, usually it's the developers, who kind of work in a silo, who start to actually build and develop the pipeline themselves; it's only later that a security team may come in and start to even try to understand what's going on. So, I guess I'm saying it's usually the developers who do it, but it probably should be the security and DevSecOps team who's responsible for it. 

Husnain

Yeah, and Microsoft has done phenomenal work in their secure development evangelism, making it clear that early on, their practices were very reactive and built around very formal engineering ops groups that organized these software development environments and provided the assurance tooling. 

What we're seeing even in large organizations with tens of thousands of developers is a move towards getting people more engaged and enrolled in the process early on. And so, it tends to be more collaborative; you end up having security evangelists within specific product teams in the larger organizations, and that security evangelist function within each of those product groups is able to work in conjunction with all of the other folks who are making sure the CI/CD pipeline is secure. 

Deb

Excellent. And would you say that security evangelist comes from the DevOps side or the IT ops side? Does a developer usually pick up the role? Like, is there usually a champion of security among the developers, or is it an IT person coming in and trying to rally the troops? 

Husnain

Conventionally, it's been an IT person or an IS person. And what's really happening now with sort of the breadth of attacks being really kind of almost astonishing and impossible to manage from a purely testing perspective, there's more emphasis on putting the security evangelists within the software engineering teams. 

So, they're actually developers contributing code who simply stepped in and are making sure that during every developer stand-up, every retrospective, and every planning exercise, an adequate amount of focus is being applied to security by the people who are actually contributing to the software. 

Deb

Excellent. Okay. That's been my observation as well; as a journalist and analyst in the space for a long time, I've seen the DevOps team sort of taking the lead, setting up an evangelist-type role within DevOps to do DevSecOps better. 

So, your answer aligns with some studies and such that I've been following as well. My next question from the audience is, "What do you say to development managers and engineering leaders who think that their developers can't take on any additional responsibility to sign commits?" And I'm going to add to this question: from what I heard in the presentation today, it's actually going to be easier if they use a tool like yours. 

Colton

Yeah, I would say the developers used to have a good argument, like, "Signing code is complicated and it creates friction." And in the old model, the developer possesses the keys and is responsible for the keys. 

But in this new model sort of in our solution, yes, the developers possess the keys but they don't really have to worry about them. They're protected. 

Husnain

Yeah, I think one of the keys is that if you make this simple enough, the value prop to the developer is clear: as threat actors become more sophisticated, it's really just a matter of time before developers are the victims of identity theft, and the identity theft that they'll be victims of is this kind of malicious code insertion on their behalf. 

And given the number of cryptographic assets that they're creating and not necessarily tracking, there's no lifecycle to them, there's no deprecation framework, there's no revocation window. So, these things get created and they remain in the ether, with access and the ability to claim that they're a particular developer, forever. 

And solving for that in a way that doesn't add any additional time to each individual commit, but only requires a few minutes of initial setup, we think has a lot of value to the developer. 

Deb

And here's another sort of related question: "Git commit signing has been around for a while. For those organizations who did utilize it, how were they storing their git commit keys?" 

Husnain

So, you want to explain, like, how GPG...you can be honest, you can tell us where you stored your GPG keys. 

Colton

Okay, USB sticks, home directories. I used the same keys for 10 years; I'd just take that key wherever I went. 

Deb

All of which creates risk. 

Colton

Yeah, and it's kind of like my DevSecOps team is expecting me to be, like, responsible enough for the key but hey, I'll put that key wherever I want if it's up to me. 

Deb

If it makes your job easier as a developer, right? 

Colton

Yeah, I was just saying I will put it in something that is easily accessible and low friction. 

Deb

Okay, well, it looks like we're out of questions. I would like to thank everybody for being here. As a reminder, if we weren't able to answer your questions during the live presentation, watch your email for a response within a few days. 

And I want to thank all of you for being with us today and also to our speakers for sharing the valuable information and to Beyond Identity for sponsoring this webcast. And, of course, thanks to our audience for tuning in, we hope you enjoyed the presentation. 

Get started with Device360 today
Weekly newsletter
No spam. Just the latest releases and tips, interesting articles, and exclusive interviews in your inbox every week.

Thwart Supply Chain Attacks by Securing Development

Download

Listen to the following security and product experts share their insights in the webinar:

  • Deb Radcliffe, strategic analyst at CyberRisk Alliance Alliance
  • Colton Chojnacki, product manager at Beyond Identity
  • Husnain Bajwa, senior manager for global sales engineering at Beyond Identity

Transcription

Deb Radcliffe

Hello, everyone, and thanks for joining us. I'm Deb Radcliffe, strategic analyst at CyberRisk Alliance Alliance, and I'll be moderating today's broadcast titled, "Thwart Supply Chain Attacks by Securing Development," sponsored by Beyond Identity. Today's information will be presented by Husnain Bajwa, HB for short, and Colton Chojnacki. HB is the senior manager for global sales engineering at Beyond Identity and Colton is the product manager at Beyond Identity. 

First, before we get started, we need to get over some housekeeping. If you'd like to ask any questions, please select the Q&A button below. If you're interested in accessing additional resources provided by Area 1 Security, select the handouts button below. Also, note that a recording of today's presentation will be made available on-demand after the event. And now we turn our discussion over to Husnain and Colton. 

Husnain Bajwa

I wanted to go over the agenda real quick and provide some introductions for ourselves. And today we're going to be talking about supply chains, a popular topic, especially with the executive order earlier this year from the White House and all of the work being done by NIST and the Software Engineering Institute and various security development lifecycle firms. 

We want to essentially talk about the context for supply chains and sort of how we've gotten to where we are. Then we're going to talk a little bit about the specifics of software development in a modern agile cloud-native environment, and what it looks like when you establish opinionated kill chains and start thinking about your security in a more rigorous fashion with structured controls and practices around protection, detection, and response. 

And how to shift left to move more towards protection, to move more towards earlier assurance and proactive verification. And then we'll sort of provide a demo and wrap it up with some simple recommendations for better hygiene. And then we'll open it up for questions, we're really excited to hear questions from the audience. 

So, let me switch it and let Colton provide his introduction. 

Colton Chojnacki

Hi, my name is Colton Chojnacki, I'm a product manager here at Beyond Identity, I work on our directory and our generalized key management solutions. I've been in the industry nearly 10 years now as a software developer, working on application development, DevOps, and various infrastructure tasks. 

Husnain

Colton is very humble, he spent a lot of time intensively working as a software engineer and has worked in multiple facets of cybersecurity, including very large, global critical infrastructure...with some very large global critical infrastructure customers. 

And through that experience, he's been a really valuable resource trying to put together the product that we're going to be talking about, but more framing the problems and sort of creating the sort of framework for the solutions that we believe are most important in this space. And my name is Husnain again, and I've spent over 20 years in the infrastructure industry. 

I'm kind of an infrastructure geek. The last 12, 13 years, I've spent doing cybersecurity with a significant focus on wireless as well as large-scale IP networks. And more recently, I've been quite involved with our efforts around secure DevOps and putting together solutions with Colton. 

So to begin, I think we should define what we're talking about when we describe the supply chain attacks that we're going to cover. We're really mostly interested in software supply chain attacks, but supply chain attacks are impacting software organizations across the board, technology companies, and really almost every company is a technology company today. 

They're experiencing a new type of threat from a more sophisticated threat actor with a larger variation in cost of compromise that they can tolerate. And so, when they're coming up with targeted attacks, more and more organizations are essentially vulnerable to the supply chain attacks, which generally reflect attackers leveraging initial access and lateral movement to establish long term reconnaissance and strongly target highest value components of an organization's sort of business process and business sequence. 

Within this sort of space, we've obviously seen a lot of major news stories around Kaseya and SolarWinds and Colonial Pipeline. When we talk about these kinds of incidents, it's often sort of implied that these organizations were unsophisticated or fell victim to very rudimentary kind of attack vectors. 

While it's somewhat true, the story is always much more nuanced. And from our standpoint, we think that blame dominates far too many of these conversations, and the focus should shift away from blame, which is more of a compliance and validation side activator. 

When you blame an organization, you end up doing a lot of testing towards the end of a cycle. And the real solution to all of these things that we've seen through secure development lifecycle work over the last 20 years, is that remediation and prevention with a strong cooperative bond between the earliest actors, in software cases, developers, is really, really important. 

And so, that's what we want to concentrate on. Now, this is a little bit of an exercise and just looking at the way that blame is kind of assigned, and looking at it in the context of some popular attacks that I think all of you will be able to recognize, right? 

The basic format for all of this is, "It was blank in blank with blank who blank." And it feels like an extremely techie geeky game of clue, but with a lot of Sigmund Freud kind of wrapped into it. If you look at some of the earliest ones that we sort of popularly talk about, target comes up quite a bit, and we start talking about HVAC controllers that it was HVAC controllers and private store networks with weak credentials who allowed attackers to compromise the point-of-sale terminals. 

It evolved into this, like, REvil messaging where you end up getting kind of, "It was REvil in company networks for months," so that long-term reconnaissance using RDP as initial access and encrypting and exfiltrating data to create new kinds of lucrative ransomware attacks. 

And then, of course, we also all saw, like, some of the interesting stuff that happened with congressional testimony around one of the major breaches, and a CEO blamed an intern. And the story became that it was an intern in an FTP server with a weak password who exposed an infrastructure software company and all of its customers to malware. 

These are easy, non-nuanced stories, and they make for great television and they make for great stories but they're definitely not catching the entire story. But you can see that this blame game sort of continues and you can easily imagine a world where new kinds of developer-centric and insider threat machinations imaginations can kind of create problems for us moving forward that developers are responsible for an enormous amount of cryptographic assets in modern environments. 

Their access to infrastructure and their ability to impact products is enormous, and it's easy to see them as careless or irresponsible when in the course of an extremely busy, ordinary sort of job activity, they're also having to account for security challenges like keys at-rest. 

It's easy to imagine them being blamed for carelessness, but it's also easy to see stories being built up around malicious actors. And with the emergence of Infrastructure as Code, the same types of problems that we have on software supply chains are kind of strongly impacting existing operations teams and operations teams that are still struggling to transition from conventional systems administration to DevOps to DevSecOps. 

And so, within that journey, the last thing that any of these individuals need is an additional sort of challenge and reputational hit. And when you look at it, the stories are always very convenient. The attacker was always sophisticated and well-funded, the employee was careless, incompetent, malicious. 

This is the sort of default practice. And when you look at this kind of framing that's typical, it's really important to look at, like, what best current practices might look like for credentials and crypto private keys, some of the most important components of initial access that leads to subsequent lateral movement and long-term compromise. 

Knowledge factors, specifically, passwords and secrets and associated complementary technologies are weak by design. We all know, like, why passwords are a challenge, we all understand how secrets are managed. Sophisticated, well-designed secrets architecture will typically use centralized vaults and privileged access management tooling that's relatively extravagant, it has a very high threshold for training, it has a very high friction ceremony design, and this creates a lot of challenges in modern environments. 

Also, we tend to think that the solution to knowledge factors is simply private keys and any private keys. But the reality is that developers are often on the hook for doing private keys and generating key pairs under the assumption that the consequences will be much better automatic security. 

And the reality is is that's not true, like, we end up weak by implementation. The vast majority of key pairs generated by developers and operations folks are not protected in use, they're human protected at-rest. 

So, if a person chooses to store them in their home directory if they choose to synchronize them over Dropbox, whatever they're choosing to do in terms of their own personal IT infrastructure has a huge impact on the quality of that storage. And when the certificates are being transferred or onboarded in typical centralized architectures, it's dependent on the protocol on how much security is present and there are at least a dozen approaches that are commonly used for these kinds of security events. 

So, what we've seen more recently is that CISOs are becoming more and more aware of the situation. Developer shortages, especially accelerated by COVID, and work from anywhere and broad, global competition for talent has driven a lot of the CISOs to see themselves as advocates for the employees as well as advocates for the organization. 

And when they look at protecting the organization, they want to have principled approaches to reducing the attack surface and protecting the brand. They also want to protect the employees, reduce that initial access threat, minimize lateral movement, contain the blast radius involved in most of these operations, and do it in order to protect reputations, avoid identity theft, and construct it around something that the developers and employees can buy into as well. 

And looking at that new kind of employee-centric CISO is what brought us to our core product that then led us to this new avenue that we've done with DevSecOps and developer keys. What we built was a new type of authenticator that specifically addressed those keys and use key-at-rest and key-in-motion challenges, and essentially put a PKI infrastructure into the platform authenticator. 

A simple, lightweight agent that can run on Windows, Mac, Linux, iOS, or Android. This authenticator is responsible for maintaining the security bindings and leveraging modern security protocols used in enterprise, SSO, and SaaS authentication. And underlying this integration with all of these SSO tools, we embed a very strong credential that's minted locally at each platform authenticator using the local Secure Enclave technology for that particular platform. 

So, over the last 10 years, we've seen a huge uptake in Secure Enclaves and Trusted Platform Modules. These are hardware enclaves that provide unclonable, secure, tamper-proof environments to store keys, and in many cases, also generate keys and seal keys and seal information and provide cryptographic operations. 

We've maximized that capability and provided keys that are strongly trusted on first use based on bringing a public key into our directory. 

And beyond the public key in our directory, we don't maintain any other credentials. And so, the users are basically without password...genuinely without password, and all of their credentials are stored within this hermetically-sealed Secure Enclave vault that simply operates on signing operations to make sure that authentication can occur in a maximally secure way that doesn't rely on any custodianship of credentials by the enterprise. 

This is a big change. Directories for the last 30 to 35 years have essentially assumed that in addition to the username, the other construct that's always going to persist is the username. We're trying to really shift away from that and bring forward the promise of 1988 and early X.509 strong authentication, but make it usable in a modern environment without having all of the friction and challenges associated with certificate authorities and all of the associated PKI infrastructure. 

And the reality is, is that eliminating passwords and replacing them with strong authentication for all of your SaaS applications and enterprise applications is pretty low-hanging fruit. If you look at the evolution of TPM, if you look at the evolution of PKI itself, and smart cards and FIDO and WebAuthn, there are an enormous number of tools and underlying technologies that really enable us to make this jump finally, after 60 years. 

But what it reveals after you complete that password list journey is that cryptographic assets sprawl is a much bigger problem in enterprises, especially information-centric enterprises. It's grown enormously with the adoption of the cloud. 

And a lot of the challenges are what we were talking about just earlier, that keys sort of are assumed to suggest security but keys are fundamentally easy to mint. The number of packages available in open source and in user space software that can generate keys are manyfold and those keys generated in insecure environments are hard to protect and the poor hygiene associated with those kinds of keys really degrades the quality of the key itself to a very basic sort of weak secret. 

And that brings us to sort of the last 10 years, we've seen a few major changes in infrastructure, cloud has really become a dominant force over the last 10 years. 

You've seen it go from essentially an IT tool to a broad line-of-business enabler, so vertical solutions for very specific business applications are everywhere. Development methodologies are all pretty much agile. 

And within the agile world, we have resorted to using source code...collaborative source code management in the cloud that was largely designed for open-source collaboration at a global scale for projects like Linux. Simultaneously, we've seen the adoption of all of these enclave technologies to facilitate disk encryption, biometric, and digital wallet compliance regulations. 

And so, when you look at all of these tools at 10,000 feet and you pull back and think about what the architecture should look like and take an opinionated stance on solving for security as leftmost as possible in this journey, that's where we think that identity is really at a pivotal point and can transform the software supply chain. 

And I'll hand it over to Colton now. 

Colton

Yeah. So, if you are a developer working in the last 10 years building a cloud-native application, you most likely have to pick a few of these boxes out of this chart to work with. And this is what makes up your CI/CD pipeline and software supply chain. And in reality, how this actually works is a developer is writing code to stitch all of these projects or third-party vendors together. 

And where exactly does the identity...like, where do you enforce identity and access management if your entire supply chain is sprawled out like this? So, let me... 

Husnain

You know, Colton, since you were a developer, when you look at charts like this and market landscape assessments, what portion of these kinds of tools were you able to, like, really process and understand even from sort of a category perspective when you were doing day-to-day development? 

Colton

I would tell you developers always going to go with the tool that makes it...that is less friction, which is something that the security team is not always paying attention to. So, there's like this fine line between the security team and the application developer team. The application developers, they don't want friction, and it's almost like the security team...this chart is so massive, they don't even know where they should get involved. 

Husnain

Cool. That always seems overwhelming to me, so I just figured I'd check with you. 

Colton

Yes. Okay, so let me talk about what a pipeline looks like today: a continuous integration/continuous deployment cloud-native pipeline. Developers submit code, and this could be code for an application or code for infrastructure. 

That code goes through verification: linting, compiling, unit tests, system tests, integration tests, then a handoff to QA, then finally deployment to production. And this has really expanded the threat surface in the CI/CD pipeline that attacks like SolarWinds are targeting. 
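In script form, those stages look something like the sketch below; the make targets are hypothetical placeholders, since every shop wires these steps differently, and real pipelines usually express them as YAML jobs in GitHub Actions, GitLab CI, Jenkins, and so on:

```bash
#!/usr/bin/env bash
# A CI/CD pipeline reduced to its stages; each make target is a
# stand-in for whatever tooling a given team actually uses.
set -euo pipefail

make lint              # verify syntax and style
make build             # compile the code
make unit-test         # run unit tests
make system-test       # run system tests
make integration-test  # run integration tests (QA signs off here)
make deploy            # finally, deploy to production
```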

They're targeting this; they're targeting the code that is stitching together all of these cloud-native services. So, what we're talking about today is code signing. And I think there's something we should make clear: the difference between signing an artifact of the pipeline, the binary that comes out of it, and signing the code change that goes into the pipeline, whether that's a change to an application or, now with the rise of GitOps, a change to your infrastructure code. 

There are two domains we want to talk about. One is the development organization domain, which is essentially your CI/CD pipeline, built and maintained by the organization. Whenever something comes out of that pipeline and you sign it, what you're really saying is that the organization endorses that artifact. 

So, what we're doing here at Beyond Identity is shifting left to the developer domain. Now, there are not great security tools out there for making sure that developer code changes securely go into the CI/CD pipeline. 

Historically, it's kind of always been left up to the developers to own this process, and a lot of security teams don't even really know that they're supposed to get involved, or they leave it to just the developers. 

Husnain

So, Colton, when you look at this developer domain and development organization domain, are you saying the development organization is essentially providing promises to its downstream partners and customers, but not necessarily getting the level of assurance that it probably should be getting from the developers themselves? 

Colton

Yes, I think a lot of the time, developers are in their own silo and they're essentially telling the security team, "Trust us, our code changes are legitimate." So, yeah, let's go into why securing the developer is important. Let me talk about Git. 

Git is software used by engineers so that they can collaborate on the same code base without stepping on each other's toes. It was originally designed and developed in the mid-2000s by the Linux kernel developers so that they could contribute code to the kernel without breaking anything. 

And then services like GitHub, GitLab, and Bitbucket came along, and they really just put a SaaS wrapper around the Git protocol. The thing is, Git was never really designed or intended to be used by enterprises, so that portion of the developer domain, which is usually not under the control of an enterprise, goes largely unprotected. So the problem with signing only the artifacts that come out of a pipeline is that you can't confirm where the code came from, which developers it came from; you just know it came from the organization. 

Husnain

But don't most of these platforms like GitHub have a checkmark for signed code, something that says it's verified? 

Colton

Yeah. It's sort of like the Twitter verified checkmark; these Git platforms have them. But really, all that's saying is that the platform has verified that user, and in most GitHub and GitLab organizations, the model is that developers bring their own personal accounts into the organization. 
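For the curious, that badge is ultimately backed by an ordinary commit signature, which you can inspect locally with standard git; a quick sketch:

```bash
# Show the most recent commit along with its GPG signature status.
# GitHub/GitLab render a "Verified" badge from essentially this check,
# matched against whatever public key the account holder uploaded.
git log --show-signature -1
```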

Husnain

So when Git verifies a developer's identity, are they also saying that that developer's identity is securely stored? 

Colton

They're just saying that they know about the developer, but that sits outside the enterprise's security controls. It's not GitHub's job to verify enterprise or corporate identities. 

Husnain

Cool. 

Colton

Okay. Yeah, so I think I should give a little background on what Git is. It's a distributed version control system that lets developers all work on the same code base. Typically, the developer workflow is: the developer makes a change, and when I say change, I mean a source code change, they push that change up to a central repo, and every change runs through a continuous integration/continuous deployment pipeline. 

So, a developer makes a code change, and the pipeline first lints the code to make sure the syntax is correct, builds it, then runs it through unit tests, system tests, and integration tests, and maybe scans for vulnerabilities or credentials left in the code. So now, let me talk about what we've built, and how we've shifted secure DevOps to the developer and away from the organization domain. 

So, in the platform authenticator that Husnain talked about earlier, we've essentially added a capability so that we can sign git commits as developers are making code changes. And we're signing them on the developer's device, whereas in an organization model, they're usually signed by the CI/CD pipeline, which is running on some server. 

So, we are signing each code change with the developer's identity. And then the second component we built is a module that you can import into your CI/CD pipeline. This module ensures that only code signed by a known corporate or enterprise identity can be admitted into the pipeline. 
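As a generic sketch of that gate idea (Beyond Identity's actual module also checks the signer against the enterprise directory, which plain git cannot do), a pipeline step could refuse unsigned or unknown commits roughly like this; the protected branch name and keyring setup are assumptions:

```bash
#!/usr/bin/env bash
# Reject any new commit that lacks a valid signature from a trusted key.
# Assumes the trusted public keys are already in the CI runner's keyring
# and that origin/main is the protected branch being merged into.
set -euo pipefail

for commit in $(git rev-list origin/main..HEAD); do
  if ! git verify-commit "$commit" 2>/dev/null; then
    echo "Commit $commit is not signed by a trusted identity; rejecting."
    exit 1
  fi
done
echo "All commits verified."
```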

Husnain

So, when you say that it's signed with the developer's identity, how does that relate to the GPG key? Where's the GPG key coming from? 

Colton

So, the GPG key is cryptographically tied to the developer's identity. We've essentially built a personal certificate authority into our platform authenticator, so we're able to issue keys and certificates from that identity. All right, and now I will do a quick demo. 

Let me share the screen. All right, can you see my screen? You should see a terminal and a web browser. Okay, so I've set up an example Git repo that we've integrated our product with. 

And in this repo, I've created this pipeline. I've kind of just created a standard CI/CD pipeline, where first, anytime a code change comes in, we lint the code, we build the code, we run some of the tests, and then we finally deploy the code if everything passes. 

So what I've configured is our verification module installed at the very beginning of the pipeline. And now I'm going to walk through some examples of a developer workflow. So, we have our platform authenticator running here. 

This is our credential. This is another credential, just as an example. If I go into the GPG keys, this is the actual signing key that was generated. And the private key is stored in the Secure Enclave or the TPM, and the public key can be uploaded to GitHub or GitLab. 

So, I'm going to make a code change. I have this example repository called "Effective Guacamole," and it's really, like, where we keep our super-secret guacamole recipe, so we only want known corporate identities committing to this recipe, we don't want anyone injecting bad ingredients into the recipe. 

Okay, I hope this is big enough. Let me make this bigger. And I open up my recipe. Let's just say I want to add more tomatoes to the guac. I just made my code change. 

Now I'm going to do my git commit with a message that says, "Added more tomatoes." Now I've made that git commit, and you can see this little toast message that says, "Beyond Identity has signed the git commit." This just happened in the background; the developer didn't have to do anything. 

Now, usually, this would be an involved ceremony where a developer has to go check out a key, bring it down, sign with it, and then put the key back. But with Beyond Identity, we're just signing in the background, and the developer doesn't even know it's happening. So, I'm going to push that code change up to the repository. And now that it's pushed up, it's going to run through the CI/CD pipeline. 
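For readers following along, the terminal steps in this demo boil down to ordinary git commands. The repository name comes from the demo, the file name is hypothetical, and the config lines are a plain-GPG stand-in for what the platform authenticator automates:

```bash
# One-time setup: tell git which key signs commits and to sign by default.
# With Beyond Identity this is handled by the platform authenticator;
# shown here with a hypothetical key ID for illustration only.
git config user.signingkey "ABCD1234EXAMPLE"
git config commit.gpgsign true

# The demo workflow: edit the recipe, commit, push.
cd effective-guacamole
echo "- 2 more tomatoes" >> recipe.md   # hypothetical file name
git add recipe.md
git commit -m "Added more tomatoes"     # signed transparently in the background
git push origin main                    # the push triggers the CI/CD pipeline
```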

So, we can go watch it. So, I added more tomatoes to the guac, it's running through my pipeline, we can go watch the logs. And it's checking, "Is this..." 

It's doing a check; it's basically asking Beyond Identity, "The key that was used to sign this git commit, do you know whose it is? And should we allow this or not?" And we return a message like, "Yes, we know who it is, it's an identity we know about, go ahead with the change." 

So then, the rest of the pipeline just runs. And this is just an example of the pipeline running. So now, I'll make a minor config change so that I don't sign the commits, and I'll try to push up another change. 

And I'm really just saying, "Do not sign the git commits." I'm going to make a change to the recipe. What's something you really shouldn't add to a guacamole recipe? Cheese. 

Husnain

Chocolate chips. 

Colton

Chocolate chips. Perfect. Now I'm making a commit and I didn't sign it. Push it up to the repo, it's now going to go through the same pipeline. 

All right, the job is running. And we stopped it, we prevented this code change from going into the repo because it wasn't signed. 
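The failing path in the demo is the mirror image of the earlier steps; again a sketch, with plain GPG config standing in for the authenticator and the rejection message borrowed from the hypothetical gate script above:

```bash
# Turn off automatic signing, then push an unsigned change.
git config commit.gpgsign false

echo "- chocolate chips" >> recipe.md    # the ill-advised ingredient
git add recipe.md
git commit -m "Added chocolate chips"    # unsigned this time
git push origin main

# In CI, the verification gate fails and the pipeline stops, e.g.:
#   Commit <sha> is not signed by a trusted identity; rejecting.
```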

And that's really just one example of a reason to not allow it in. Because all of these identities are tied to Beyond Identity, an administrator is able to go into the Beyond Identity console and suspend a user, which would then prevent us from signing anything for that user. 

Also, there are policies that can be written so that you can only sign git commits from a managed device. So, if you take a step back and look at what we have running on the authenticator and what we have running in the pipeline, we've really created a solution that allows git commits to come only from a trusted, managed device and a known corporate identity. 

Deb

This may be a stupid question, but I'm wondering: what if they put in the chocolate chips and then signed the code? What would be the process? Are they even allowed to sign the code if they put the chocolate chips in? 

Colton

So, that's sort of an insider threat. And what we're also providing is code provenance and non-repudiation. If the developer signed the commit that put the chocolate chips in, there's no way they can claim they didn't do it. 

Husnain

Yeah, you'll know that it's a malicious actor or a very bad employee with poor taste buds. 

Yeah, one of the bigger things that's emerged from this evolving and advanced threat landscape is a set of learnings from the way that distributed ledgers and modern cryptocurrency have turned out. 

One of the hardest guarantees to provide is that someone genuinely did perform an operation even when they say that they did not. And so, that, like, elimination of plausible deniability is a really core component to achieving computational accountability and rigorous trust. 

And that formal proof is what we're able to provide by signing and sealing our log messages as well. Because we're able to locally and seamlessly sign any piece of information, without the latency of a central service or of checking keys in and out, we're able to provide that sealed guarantee around the logs and every event happening in the system. 

Cool. Well, Colton, that was an awesome demo. Just out of curiosity, how long does it take to set something like this up? 

Colton

Yeah, I would say a couple of minutes. It's something a developer does one time: they set it and then they forget it. 

Husnain

And what about the GitHub Actions or the Git repo actions? Are those difficult to set up, or do we provide samples? 

Colton

Yeah, we have samples; they're not difficult to set up. And what's great about the way we implemented it is that you can decide where in your pipeline you want to put that check. So, let's say you want to run it every time a commit is pushed up to your Git repo, or every time a merge request is opened. 

So, a request to merge a code change into your main branch. So, it's really up to the administrator to decide where it makes sense to put that check. 

Husnain

And does the system require you to be integrated to the SSO or the sort of enterprise identity system or is it possible to start smaller and more compartmentalized? 

Colton

Yeah, you can definitely start smaller. There's nothing that says you need to be integrated with SSO to use this. 

Husnain

Cool. So, we just want to wrap up before we start taking some questions. In terms of our recommendations at a 10,000-foot industry level, we think it's really important to adopt blameless approaches to cybersecurity controls. 

It's important to pull back and ask the right questions, adhere to first principles, and really think through how you want to approach cybersecurity frameworks. People have a tendency to look at attacks and compromises and immediately, reactively put in new layers of controls without reassessing the entire situation, thinking about where the vulnerability really exists, and deciding at what point you should really intervene. 

And that's where models like MITRE ATT&CK, the Lockheed Martin Cyber Kill Chain, or the various NIST controls come in; it's important to sit back and generalize, right? All of these models essentially have a protect, detect, and respond component to them. So, even if you're just looking at it as three simple steps, pulling back and thinking about it like that is useful. 

Breaking down your software development lifecycle and understanding how to frame that supply chain within the context of the cybersecurity framework that you choose is really important. Absolutely, people should be signing their code artifacts and they should have strong attestation and provenance for those. 

So, using proper tooling to make sure that you're signing code artifacts is important. We encourage people to use dynamic application security testing tools that validate those artifacts, with checks for known patch levels and vulnerabilities, and that scan them against large CVE databases. 
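As one concrete way to wire that in (the speakers don't name a specific scanner, so Trivy here is just one illustrative open-source option, and the image name is made up), a pipeline stage could scan the built artifact against CVE databases:

```bash
# Scan the artifact that came out of the build against known CVEs.
# "registry.example.com/guac-app:latest" is a hypothetical image name;
# --exit-code 1 makes the pipeline fail on HIGH/CRITICAL findings.
trivy image --severity HIGH,CRITICAL --exit-code 1 \
  registry.example.com/guac-app:latest
```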

And then we also think there's a huge role to be played by SAST tools. When you look at the static application security testing tools that have access to source code, they've been extremely valuable in establishing software bills of materials with strong assurances and understood provenance. 

That said, SAST tooling for understanding your open source component contributions isn't the end-all, be-all; you also need to understand the majority of the intellectual property that's getting injected into your code repositories by your actual developers. 

And so, that's why having the supply chain mindset, sequencing things out, and thinking about what order they come in is really important. So, we just want to move everyone as far left as possible, and we want to get people using cryptographic key pairs, but in smarter ways. 

A lot of people have noticed that the source code management systems' automatic scanning capabilities have actually lowered developers' vigilance about keeping secrets out of their code; they rely on the scanning taking place in these platforms to take care of that. 

That's the kind of reactive, end-of-chain, compliance-centric, managing-to-the-test mentality that just doesn't help anyone in a secure development lifecycle. We want people to utilize tools intelligently, tools that recruit people earlier and more proactively to think about security not just in terms of known threats and mitigating attacks a team may have already experienced, but as a systemic foundation, adhering to the practices emerging from the broader secure software development community. 

And so, that's really where we come in. We just ask that people be more intentional and more respectful of cryptographic asset hygiene. Cool. 

And I think we're open to questions now. 

Deb

Thanks for that great presentation. It was very informative and I believe it probably helped a lot of developers level up without feeling that they have to take on a lot of extra responsibility. So, we have some questions that came in. 

The first one speaks to the blame game you started the presentation with, Husnain, and that is, "Who is typically responsible for the CI/CD pipeline? And what do they actually do in their roles of responsibility?" 

Colton

Yeah, it sort of depends on the stage of the company. At software startups, it's usually the developers, who work in a silo and build and develop the pipeline themselves; only later does a security team come in and start to even try to understand what's going on. So, I guess I'm saying it's usually the developers who do it, but it probably should be the security and DevSecOps teams who are responsible for it. 

Husnain

Yeah, and Microsoft has done phenomenal work in their secure development evangelism, making it clear that early on, their practices were very reactive and built around very formal engineering ops groups that organized these software development environments and provided the assurance tooling. 

What we're seeing, even in large organizations with tens of thousands of developers, is a move towards getting people engaged and enrolled in the process early on. It tends to be more collaborative: you end up having security evangelists within specific product teams, and that security evangelist function within each product group works in conjunction with all of the other folks making sure the CI/CD pipeline is secure. 

Deb

Excellent. And would you say that security evangelist comes from the DevOps side or the IT ops side? Does a developer usually pick up the role? Is there usually a champion of security among the developers, or is it an IT person coming in and trying to rally the troops? 

Husnain

Conventionally, it's been an IT person or an IS person. What's really happening now, with the breadth of attacks becoming almost astonishing and impossible to manage from a purely testing perspective, is more emphasis on putting the security evangelists within the software engineering teams. 

So, they're actual developers contributing code who've simply stepped in and are making sure that during every developer stand-up, every retrospective, and every planning exercise, an adequate amount of focus is applied to security by the people who are actually contributing to the software. 

Deb

Excellent. Okay. That's been my observation as well. As a journalist and analyst in this space for a long time, I've seen the DevOps team taking the lead, setting up an evangelist-type role within DevOps to do DevSecOps better. 

So, your answer aligns with some studies I've been following as well. My next question from the audience is, "What do you say to development managers and engineering leaders who think that their developers can't take on any additional responsibility to sign commits?" And I'm going to add to this question: from what I heard in the presentation today, it would actually be easier if they could use a tool like yours. 

Colton

Yeah, I would say the developers used to have a good argument: "Signing code is complicated and it creates friction." And in the old model, the developer possesses the keys and is responsible for the keys. 

But in this new model, in our solution, yes, the developers possess the keys, but they don't really have to worry about them. They're protected. 

Husnain

Yeah, I think one of the keys is that if you make this simple enough, the value prop to the developer is clear: as threat actors become more sophisticated, it's really just a matter of time before developers become victims of identity theft, and the identity theft they'll be victims of is this kind of malicious code insertion on their behalf. 

And given the number of cryptographic assets they're creating and not necessarily tracking, there's no lifecycle to them: no deprecation framework, no revocation window. So, these things get created and remain in the ether forever, with access and the ability to claim they're a particular developer. 

And solving for that in a way that doesn't add any additional time to each individual commit, but only requires a few minutes of initial setup, we think has a lot of value to the developer. 

Deb

And here's another related question: "Git commit signing has been around for a while. For those organizations that did utilize it, how were they storing their git commit keys?" 

Husnain

So, you want to explain, like, how GPG...you can be honest, you can tell us where you stored your GPG keys. 

Colton

Okay: USB sticks, home directories. I used the same keys for 10 years, just took that key wherever I went. 

Deb

All of which will create risk. 

Colton

Yeah, and it's kind of like my DevSecOps team is expecting me to be responsible enough for the key, but hey, I'll put that key wherever I want if it's up to me. 

Deb

If it makes your job easier as a developer, right? 

Colton

Yeah, I was just saying I will put it in something that is easily accessible and low friction. 

Deb

Okay, well, it looks like we're out of questions. I would like to thank everybody for being here. As a reminder, if we weren't able to answer your questions during the live presentation, watch your email for a response within a few days. 

And I want to thank all of you for being with us today and also to our speakers for sharing the valuable information and to Beyond Identity for sponsoring this webcast. And, of course, thanks to our audience for tuning in, we hope you enjoyed the presentation. 

Thwart Supply Chain Attacks by Securing Development

Phishing resistance in security solutions has become a necessity. Learn the differences between the solutions and what you need to be phishing resistant.

Listen to the following security and product experts share their insights in the webinar:

  • Deb Radcliffe, strategic analyst at CyberRisk Alliance Alliance
  • Colton Chojnacki, product manager at Beyond Identity
  • Husnain Bajwa, senior manager for global sales engineering at Beyond Identity

Transcription

Deb Radcliffe

Hello, everyone, and thanks for joining us. I'm Deb Radcliffe, strategic analyst at CyberRisk Alliance Alliance, and I'll be moderating today's broadcast titled, "Thwart Supply Chain Attacks by Securing Development," sponsored by Beyond Identity. Today's information will be presented by Husnain Bajwa, HB for short, and Colton Chojnacki. HB is the senior manager for global sales engineering at Beyond Identity and Colton is the product manager at Beyond Identity. 

First, before we get started, we need to get over some housekeeping. If you'd like to ask any questions, please select the Q&A button below. If you're interested in accessing additional resources provided by Area 1 Security, select the handouts button below. Also, note that a recording of today's presentation will be made available on-demand after the event. And now we turn our discussion over to Husnain and Colton. 

Husnain Bajwa

I wanted to go over the agenda real quick and provide some introductions for ourselves. And today we're going to be talking about supply chains, a popular topic, especially with the executive order earlier this year from the White House and all of the work being done by NIST and the Software Engineering Institute and various security development lifecycle firms. 

We want to essentially talk about the context for supply chains and sort of how we've gotten to where we are. Then we're going to talk a little bit about the specifics of software development in a modern agile cloud-native environment, and what it looks like when you establish opinionated kill chains and start thinking about your security in a more rigorous fashion with structured controls and practices around protection, detection, and response. 

And how to shift left to move more towards protection, to move more towards earlier assurance and proactive verification. And then we'll sort of provide a demo and wrap it up with some simple recommendations for better hygiene. And then we'll open it up for questions, we're really excited to hear questions from the audience. 

So, let me switch it and let Colton provide his introduction. 

Colton Chojnacki

Hi, my name is Colton Chojnacki, I'm a product manager here at Beyond Identity, I work on our directory and our generalized key management solutions. I've been in the industry nearly 10 years now as a software developer, working on application development, DevOps, and various infrastructure tasks. 

Husnain

Colton is very humble, he spent a lot of time intensively working as a software engineer and has worked in multiple facets of cybersecurity, including very large, global critical infrastructure...with some very large global critical infrastructure customers. 

And through that experience, he's been a really valuable resource trying to put together the product that we're going to be talking about, but more framing the problems and sort of creating the sort of framework for the solutions that we believe are most important in this space. And my name is Husnain again, and I've spent over 20 years in the infrastructure industry. 

I'm kind of an infrastructure geek. The last 12, 13 years, I've spent doing cybersecurity with a significant focus on wireless as well as large-scale IP networks. And more recently, I've been quite involved with our efforts around secure DevOps and putting together solutions with Colton. 

So to begin, I think we should define what we're talking about when we describe the supply chain attacks that we're going to cover. We're really mostly interested in software supply chain attacks, but supply chain attacks are impacting software organizations across the board, technology companies, and really almost every company is a technology company today. 

They're experiencing a new type of threat from a more sophisticated threat actor with a larger variation in cost of compromise that they can tolerate. And so, when they're coming up with targeted attacks, more and more organizations are essentially vulnerable to the supply chain attacks, which generally reflect attackers leveraging initial access and lateral movement to establish long term reconnaissance and strongly target highest value components of an organization's sort of business process and business sequence. 

Within this sort of space, we've obviously seen a lot of major news stories around Kaseya and SolarWinds and Colonial Pipeline. When we talk about these kinds of incidents, it's often sort of implied that these organizations were unsophisticated or fell victim to very rudimentary kind of attack vectors. 

While it's somewhat true, the story is always much more nuanced. And from our standpoint, we think that blame dominates far too many of these conversations, and the focus should shift away from blame, which is more of a compliance and validation side activator. 

When you blame an organization, you end up doing a lot of testing towards the end of a cycle. And the real solution to all of these things that we've seen through secure development lifecycle work over the last 20 years, is that remediation and prevention with a strong cooperative bond between the earliest actors, in software cases, developers, is really, really important. 

And so, that's what we want to concentrate on. Now, this is a little bit of an exercise and just looking at the way that blame is kind of assigned, and looking at it in the context of some popular attacks that I think all of you will be able to recognize, right? 

The basic format for all of this is, "It was blank in blank with blank who blank." And it feels like an extremely techie geeky game of clue, but with a lot of Sigmund Freud kind of wrapped into it. If you look at some of the earliest ones that we sort of popularly talk about, target comes up quite a bit, and we start talking about HVAC controllers that it was HVAC controllers and private store networks with weak credentials who allowed attackers to compromise the point-of-sale terminals. 

It evolved into this, like, REvil messaging where you end up getting kind of, "It was REvil in company networks for months," so that long-term reconnaissance using RDP as initial access and encrypting and exfiltrating data to create new kinds of lucrative ransomware attacks. 

And then, of course, we also all saw, like, some of the interesting stuff that happened with congressional testimony around one of the major breaches, and a CEO blamed an intern. And the story became that it was an intern in an FTP server with a weak password who exposed an infrastructure software company and all of its customers to malware. 

These are easy, non-nuanced stories, and they make for great television and they make for great stories but they're definitely not catching the entire story. But you can see that this blame game sort of continues and you can easily imagine a world where new kinds of developer-centric and insider threat machinations imaginations can kind of create problems for us moving forward that developers are responsible for an enormous amount of cryptographic assets in modern environments. 

Their access to infrastructure and their ability to impact products is enormous, and it's easy to see them as careless or irresponsible when in the course of an extremely busy, ordinary sort of job activity, they're also having to account for security challenges like keys at-rest. 

It's easy to imagine them being blamed for carelessness, but it's also easy to see stories being built up around malicious actors. And with the emergence of Infrastructure as Code, the same types of problems that we have on software supply chains are kind of strongly impacting existing operations teams and operations teams that are still struggling to transition from conventional systems administration to DevOps to DevSecOps. 

And so, within that journey, the last thing that any of these individuals need is an additional sort of challenge and reputational hit. And when you look at it, the stories are always very convenient. The attacker was always sophisticated and well-funded, the employee was careless, incompetent, malicious. 

This is the sort of default practice. And when you look at this kind of framing that's typical, it's really important to look at, like, what best current practices might look like for credentials and crypto private keys, some of the most important components of initial access that leads to subsequent lateral movement and long-term compromise. 

Knowledge factors, specifically, passwords and secrets and associated complementary technologies are weak by design. We all know, like, why passwords are a challenge, we all understand how secrets are managed. Sophisticated, well-designed secrets architecture will typically use centralized vaults and privileged access management tooling that's relatively extravagant, it has a very high threshold for training, it has a very high friction ceremony design, and this creates a lot of challenges in modern environments. 

Also, we tend to think that the solution to knowledge factors is simply private keys and any private keys. But the reality is that developers are often on the hook for doing private keys and generating key pairs under the assumption that the consequences will be much better automatic security. 

And the reality is is that's not true, like, we end up weak by implementation. The vast majority of key pairs generated by developers and operations folks are not protected in use, they're human protected at-rest. 

So, if a person chooses to store them in their home directory if they choose to synchronize them over Dropbox, whatever they're choosing to do in terms of their own personal IT infrastructure has a huge impact on the quality of that storage. And when the certificates are being transferred or onboarded in typical centralized architectures, it's dependent on the protocol on how much security is present and there are at least a dozen approaches that are commonly used for these kinds of security events. 

So, what we've seen more recently is that CISOs are becoming more and more aware of the situation. Developer shortages, especially accelerated by COVID, and work from anywhere and broad, global competition for talent has driven a lot of the CISOs to see themselves as advocates for the employees as well as advocates for the organization. 

And when they look at protecting the organization, they want to have principled approaches to reducing the attack surface and protecting the brand. They also want to protect the employees, reduce that initial access threat, minimize lateral movement, contain the blast radius involved in most of these operations, and do it in order to protect reputations, avoid identity theft, and construct it around something that the developers and employees can buy into as well. 

And looking at that new kind of employee-centric CISO is what brought us to our core product that then led us to this new avenue that we've done with DevSecOps and developer keys. What we built was a new type of authenticator that specifically addressed those keys and use key-at-rest and key-in-motion challenges, and essentially put a PKI infrastructure into the platform authenticator. 

A simple, lightweight agent that can run on Windows, Mac, Linux, iOS, or Android. This authenticator is responsible for maintaining the security bindings and leveraging modern security protocols used in enterprise, SSO, and SaaS authentication. And underlying this integration with all of these SSO tools, we embed a very strong credential that's minted locally at each platform authenticator using the local Secure Enclave technology for that particular platform. 

So, over the last 10 years, we've seen a huge uptake in Secure Enclaves and Trusted Platform Modules. These are hardware enclaves that provide unclonable, secure, tamper-proof environments to store keys, and in many cases, also generate keys and seal keys and seal information and provide cryptographic operations. 

We've maximized that capability and provided keys that are strongly trusted on first use based on bringing a public key into our directory. 

And beyond the public key in our directory, we don't maintain any other credentials. And so, the users are basically without password...genuinely without password, and all of their credentials are stored within this hermetically-sealed Secure Enclave vault that simply operates on signing operations to make sure that authentication can occur in a maximally secure way that doesn't rely on any custodianship of credentials by the enterprise. 

This is a big change. Directories for the last 30 to 35 years have essentially assumed that in addition to the username, the other construct that's always going to persist is the username. We're trying to really shift away from that and bring forward the promise of 1988 and early X.509 strong authentication, but make it usable in a modern environment without having all of the friction and challenges associated with certificate authorities and all of the associated PKI infrastructure. 

And the reality is, is that eliminating passwords and replacing them with strong authentication for all of your SaaS applications and enterprise applications is pretty low-hanging fruit. If you look at the evolution of TPM, if you look at the evolution of PKI itself, and smart cards and FIDO and WebAuthn, there are an enormous number of tools and underlying technologies that really enable us to make this jump finally, after 60 years. 

But what it reveals after you complete that password list journey is that cryptographic assets sprawl is a much bigger problem in enterprises, especially information-centric enterprises. It's grown enormously with the adoption of the cloud. 

And a lot of the challenges are what we were talking about just earlier, that keys sort of are assumed to suggest security but keys are fundamentally easy to mint. The number of packages available in open source and in user space software that can generate keys are manyfold and those keys generated in insecure environments are hard to protect and the poor hygiene associated with those kinds of keys really degrades the quality of the key itself to a very basic sort of weak secret. 

And that brings us to sort of the last 10 years, we've seen a few major changes in infrastructure, cloud has really become a dominant force over the last 10 years. 

You've seen it go from essentially an IT tool to a broad line-of-business enabler, so vertical solutions for very specific business applications are everywhere. Development methodologies are all pretty much agile. 

And within the agile world, we have resorted to using source code...collaborative source code management in the cloud that was largely designed for open-source collaboration at a global scale for projects like Linux. Simultaneously, we've seen the adoption of all of these enclave technologies to facilitate disk encryption, biometric, and digital wallet compliance regulations. 

And so, when you look at all of these tools at 10,000 feet and you pull back and think about what the architecture should look like and take an opinionated stance on solving for security as leftmost as possible in this journey, that's where we think that identity is really at a pivotal point and can transform the software supply chain. 

And I'll hand it over to Colton now. 

Colton

Yeah. So, if you are a developer working in the last 10 years building a cloud-native application, you most likely have to pick a few of these boxes out of this chart to work with. And this is what makes up your CI/CD pipeline and software supply chain. And in reality, how this actually works is a developer is writing code to stitch all of these projects or third-party vendors together. 

And where exactly does the identity...like, where do you enforce identity and access management if your entire supply chain is sprawled out like this? So, let me... 

Husnain

You know, Colton, since you were a developer, when you look at charts like this and market landscape assessments, what portion of these kinds of tools were you able to, like, really process and understand even from sort of a category perspective when you were doing day-to-day development? 

Colton

I would tell you developers always going to go with the tool that makes it...that is less friction, which is something that the security team is not always paying attention to. So, there's like this fine line between the security team and the application developer team. The application developers, they don't want friction, and it's almost like the security team...this chart is so massive, they don't even know where they should get involved. 

Husnain

Cool. That always seems overwhelming to me, so I just figured I'd check with you. 

Colton

Yes. Okay, so let me talk about, like, what a pipeline looks like today, a continuous integration/continuous deployment cloud-native pipeline. Developers submit code, and this could be code for an application, it could be code for infrastructure. 

That code goes through some verification like linting, unit tests, compile the code, run the unit tests, run the system tests, the integration tests, send it off to QA, then finally deploy it to production. And this has really expanded, like, the threat surface in the CI/CD pipeline that these attacks like SolarWinds are targeting. 

They're targeting this, they're targeting the code that is stitching together all of these cloud-native services. So, what we're talking about today is code signing. And I think there's something we should make clear and what is the difference between signing an artifact of the pipeline, signing the binary that comes out of the pipeline, and signing the code change that goes into the pipeline, whether it's a change to an application or now with the rise of GitOps, a change to your infrastructure code. 

There's these two domains that we want to talk about, and one is the development organization domain and this is sort of like your CI/CD pipeline that is built and maintained by the organization. And whenever something comes out of that pipeline, what you're really... 

and you sign it, what you're really saying is the organization endorses that artifact. So, what we're doing here at Beyond Identity is shifting left to the developer domain. Now, this is a...there are not great security tools out there for making sure that developer code changes securely go into the CI/CD pipeline. 

Like historically, it's kind of always been left up to the developers to own this process and a lot of security teams don't even really know that they're supposed to get involved or leave it to just the developers. 

Husnain

So, Colton, when you look at this kind of developer domain and development organization domain, are you saying development organization is essentially providing promises to its downstream and customers but they're not necessarily getting the level of assurance that they think they should...that they probably should be getting from the developers themselves? 

Colton

Yes, I think a lot of the times, developers are kind of in their own silo and they're kind of telling the security team, "Trust us, like, our code changes are legitimate." So, yeah, let's go into why is securing the developer important. Let me talk about Git. 

Git is a software that is used by software engineers so that they can collaborate on the same code base without stepping on each other's toes. And this was, like, originally designed and developed in the mid-2000s by the Linux kernel developers so that they could contribute code to the Linux kernel without breaking anything. 

And then services like GitHub and GitLab and Bitbucket came along, and they really just put this SaaS wrapper around this Git protocol. And the thing is, is that this Git protocol was never really...it was never really designed or intended to be used by enterprises, so it's kind of...that portion of the developer domain, which is not really...usually not under the control of an enterprise goes largely unprotected. So, yeah, the problem with signing these artifacts that come out of a pipeline is that you can't confirm where the code necessarily came from, like, what developers it came from, you just know it came from the organization. 

Husnain

But doesn't Git...don't most of these platforms like GitHub have, like, a checkmark for sign code, like, that says that's verified or something? 

Colton

Yeah. So, that's sort of like the...on Twitter, that Twitter verified checkmark, Git has these. But really, what that's just saying is Git has verified that user and in most GitHub and GitLab organizations, the model is that the developers bring their own personal accounts to the Git...to the organization model in GitHub and GitLab. 

Husnain

So when Git verifies a developer's identity, are they also saying that that developer's identity is securely stored? 

Colton

Git is just saying that that developer...how should I say this? They know about the developer, but that kind of takes away from the enterprise security controls. Like, it's not GitHub's job to verify enterprise identity or corporate identities. 

Husnain

- Cool. 

Colton

Okay. Yeah, so I think I should give a little background on, like, what Git is. It's a distributed version control that allows developers to all work on the same code base. And typically what a developer workflow is, is the developer makes a change, they push their...and when I say change, they're source code change, they push that change up to a central repo and on every change, that runs through a continuous integration/continuous deployment pipeline. 

So, things like a developer makes a code change, the pipeline first does something like lints the code, make sure the syntax is correct, it builds the code, it then runs the code through unit tests, system tests, integration tests, maybe scan for vulnerabilities or credentials that are in the code. So, now, let me talk about what we've built, how we've shifted the identity...how we've just shifted secure DevOps to the developer and away from the organization domain. 

So, the platform authenticator that Husnain talked about earlier, we've essentially added a capability in there so that we can sign the git commits as developers are making code changes. And we're signing them on a device, on the developer's device that is, where in an organization model, they're usually signed by the CI/CD pipeline, which is running on some server. 

So, we are signing each code change with the developer's identity. And then the second component we built was a module that you can import into your CI and CD pipeline. And this module will ensure that only code that was signed by a known corporate or enterprise identity can be admitted into the CI/CD pipeline. 

Husnain

So, when you say that it's signed with the developer's identity, how does that relate to the GPG key? Where's the GPG key coming from? 

Colton

So, the GPG key is cryptographically tied to the developer's identity. We've essentially built a personal certificate authority in our platform authenticator, so we're able to issue keys and certificates from that identity. All right, and now I will do a quick demo. 

Let me share the screen. All right, can you see my screen? You should see a terminal and a web browser. Okay, so I've set up an example Git repo that we've integrated our product with. 

And in this repo, I've created this pipeline. I've kind of just created a standard CI/CD pipeline, where first, anytime a code change comes in, we lint the code, we build the code, we run some of the tests, and then we finally deploy the code if everything passes. 

So what I've configured is I've installed our verification module at the very beginning of the pipeline. And, yeah, so let me...now I'm going to do some examples of like a developer workflow. So, we have our platform authenticator running here. 

This is our credential. This is another credential, just as an example. If I go into the GPG keys, this is the actual signing key that was generated. And the private key is stored in the Secure Enclave or the TPM, and the public key can be uploaded to GitHub or GitLab. 

So, I'm going to make a code change. I have this example repository called "Effective Guacamole," and it's really, like, where we keep our super-secret guacamole recipe, so we only want known corporate identities committing to this recipe, we don't want anyone injecting bad ingredients into the recipe. 

Okay, I hope this is big enough. Let me make this bigger. And I open up my recipe. Let's just say I want to add more tomatoes to the guac. I just made my code change. 

Now I'm going to do my git commit and I'm going to add a message that said, "Added more tomatoes." Now I've made that git commit, and you can see this little toast message that said, "Beyond Identity has signed the git commit," and this just happened in the background, the developer didn't have to do anything. 

Now, usually, this would be, like, an involved ceremony where a developer has to go check out a key, they have to bring it down, they have to sign it, and then put the key back. But with Beyond Identity, we're just signing it in the background and really, the developer doesn't even know it's happening. So, I'm going to push that code change up to the repository. And now that it's pushed up, it's going to run through the CI/CD pipeline. 

So, we can go watch it. So, I added more tomatoes to the guac, it's running through my pipeline, we can go watch the logs. And it's checking, "Is this..." 

It's doing a check, it's basically asking Beyond Identity, "The key that was used to sign this git commit, do you know who it is? And should we allow this or not?" And we return a message like, "Yes, we know who it is, it's an identity we know about and go ahead with the change." 

So then, just the rest of the pipeline runs. And this is just example of the pipeline running. So now, I'll make a minor config change so that I don't sign the commits and I'll try to push up another change. 

And I'm really just saying, "Do not sign the git commits." I'm going to make a change to the recipe. What's something that's really...what's something you shouldn't add to a guacamole recipe? Cheese. 

Husnain

Chocolate chips. 

Colton

Chocolate chips. Perfect. Now I'm making a commit and I didn't sign it. Push it up to the repo, it's now going to go through the same pipeline. 

All right, the job is running. And we stopped it, we prevented this code change from going into the repo because it wasn't signed. 

And that's really just one example of a reason to not allow it in. Because all of these identities are tied to Beyond Identity, a user is able to go into our Beyond Identity console and sort of suspend a user, which would, therefore, prevent us from signing anything if that user is suspended. 

Also, there's policies that can be written into so that you can only sign git commits from a managed device. So, if you, like, take a step back and you look at what we have running on the authenticator, what we have running on in the pipeline, we've really created a solution that allows git commits to only come from a trusted managed device and known corporate identity. 

Deb

This may be a stupid question, but I'm wondering what if they put in the chocolate chip and then signed the code? What would be the process? Are they even allowed to sign the code if they put the chocolate chips in? 

Colton

So, that's sort of like an insider threat. And what we're also providing is, like, code provenance and non-repudiation. There's no way...if the developer signed that they put the chocolate chips in, there's no way they can say they did it. 

Husnain

Yeah, you'll know that it's a malicious actor or a very bad employee with poor taste buds. 

Yeah, one of the bigger things that's kind of emerged from this sort of evolving and advanced threat landscape are some learnings that we got from sort of the way that distributed ledgers and modern cryptocurrency has turned out. 

One of the hardest guarantees to provide is that someone genuinely did perform an operation even when they say that they did not. And so, that, like, elimination of plausible deniability is a really core component to achieving computational accountability and rigorous trust. 

And so, that formal proof is what we're able to do by signing and sealing our log messages as well. So, because we're able to sort of locally and seamlessly sign any piece of information without the latency of central usage or checking in or checking out stuff, we're able to essentially provide that sealed guarantee around the logs and every event that's happening in the system. 

Cool. Well, Colton, that was an awesome demo. Just out of curiosity, how long does it take to set something like this up? 

Colton

Yeah, so it takes a...I would say a couple of minutes. It's more like something a developer does one time, they set it, and then they forget it. 

Husnain

And what about the GitHub actions or the sort of Git repo actions? Are those difficult to set up or do we provide samples or...? 

Colton

Yeah, we have samples, they're not difficult to set up. And kind of like what's great about the way we implemented it is that you can decide where in your pipeline you want to put that. So, let's say you want to run that every time a commit is pushed up to your Git repo or every time a commit...every time a merge request is opened. 

So, a request to merge a code change into your main branch. So, it's really up to the administrator to decide where it makes sense to put that check. 

Husnain

And does the system require you to be integrated to the SSO or the sort of enterprise identity system or is it possible to start smaller and more compartmentalized? 

Colton

Yeah, you can definitely start smaller. There's nothing that says you need to be integrated with SSO to use this. 

Husnain

Cool. So, you know, we just want to wrap up before we start taking some questions. In terms of our recommendations sort of at a 10,000 foot kind of industry level, we think it's really important to adopt blameless approaches to cybersecurity controls. 

It's important to pull back and ask the right questions, adhere to first principles, and really think through how you want to think about cybersecurity frameworks. People have a tendency to look at output attacks and compromise, and immediately reactively put in new layers of controls without reassessing the entire situation and thinking about where the vulnerability really exists and at what point you should really intervene. 

And so, that's where having these models around MITRE ATT&CK or Lockheed Martin Cyber Kill Chain or various NIST controls, it's important to sit back and generalize it, right? Like, all of these models have essentially a protect, detect, and respond component to them. So, even if you're just looking at it as three simple steps, pulling back and thinking about it like that is useful. 

Breaking down your software development lifecycle and understanding how to frame that supply chain within the context of the cybersecurity framework that you choose is really important. Absolutely, people should be signing their code artifacts and they should have strong attestation and provenance for those. 

So, using proper tooling to make sure that you're signing code artifacts is important. We encourage people to use dynamic application security testing tools that validate those artifacts, also with checks for known patch levels and vulnerabilities and scans it against large CVE databases. 

And then we also think that there's a huge role to be played by SaaS tools. So, when you look at the static application security testing tools that have access to source code, they've been extremely valuable in establishing software bill of materials that have strong assurances and understood provenance. 

That said, SaaS tools to understand your open source component contribution isn't the end-all, be-all, you really also need to understand that majority of intellectual property that's getting injected into your code repositories by your actual developers. 

And so, that's why having the supply chain mindset and sequencing things out and thinking about what order they come in is really important. So, we just want to move everyone as left as possible and we want to get people using cryptographic key pairs but in smarter ways. 

A lot of people have noticed that the source code management systems' automatic scanning capabilities have actually lowered developers' vigilance on ensuring that they don't put secrets into their code, they rely on the scanning that's taking place in these platforms to take care of that. 

That's the kind of reactive end of chain compliance-centric managing to the test kind of mentality that just doesn't help anyone in a software...in a secure development lifecycle. And so, we want people to be able to utilize tools intelligently that recruit people earlier and more proactively to think about security not just in terms of known threats and mitigating against existing attacks that a team may have experienced, and really look at it as a systemic foundation and adhere to the practices that are sort of emerging from that broader secure software development community. 

And so, that's really where we come in and we just ask that people, you know, be more intentional and be more respectful of cryptographic asset hygiene. Cool. 

And I think we're open to questions now. 

Deb

Thanks for that great presentation. It was very informative and I believe it probably helped a lot of developers level up without feeling that they have to take on a lot of extra responsibility. So, we have some questions that came in. 

The first one sort of speaks to the blame game that you started the presentation out with, Husnain, and that is, "Who is typically responsible for the CI/CD pipeline? And what do they actually do in their roles of responsibility?" 

Colton

Yeah, it depends on the stage of the company. At software startups, it's usually the developers, who kind of work in a silo, who build and develop the pipeline themselves; only later does a security team come in and even start trying to understand what's going on. So, I guess I'm saying it's usually the developers who do it, but it probably should be the security and DevSecOps teams who are responsible for it. 

Husnain

Yeah, and Microsoft has done phenomenal work in their secure development evangelism, making it clear that early on, their practices were very reactive and built around very formal engineering ops groups that organized these software development environments and provided the assurance tooling. 

What we're seeing, even in large organizations with tens of thousands of developers, is a move towards getting people more engaged and enrolled in the process early on. So it tends to be more collaborative: you end up having security evangelists within specific product teams, and that security evangelist function within each product group works in conjunction with everyone else who is making sure the CI/CD pipeline is secure. 

Deb

Excellent. And would you say that security evangelist comes from the DevOps side or the IT ops side? Does a developer usually pick up the role? Like, is there usually a champion of security among the developers, or is it an IT person coming in and trying to rally the troops? 

Husnain

Conventionally, it's been an IT person or an IS person. What's really happening now, with the breadth of attacks being almost astonishing and impossible to manage from a purely testing perspective, is that there's more emphasis on putting the security evangelists within the software engineering teams. 

So, they're actually developers contributing code who have simply stepped up, and they're making sure that during every developer stand-up, every retrospective, every planning exercise, an adequate amount of focus is applied to security by the people who are actually contributing to the software. 

Deb

Excellent. Okay. That's been my observation as well; as a journalist and analyst in this space for a long time, I've seen the DevOps team taking the lead, setting up an evangelist-type role within DevOps to do DevSecOps better. 

So, your answer aligns with some studies I've been following as well. My next question from the audience is, "What do you say to development managers and engineering leaders who think that their developers can't take on the additional responsibility of signing commits?" And I'm going to add to this question: from what I heard in the presentation today, it's actually going to be easier if they use a tool like yours. 

Colton

Yeah, I would say the developers used to have a good argument: "Signing code is complicated and it creates friction." And in the old model, the developer possesses the keys and is responsible for the keys. 

But in this new model, in our solution, yes, the developers possess the keys, but they don't really have to worry about them. They're protected. 
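On the pipeline side, a gate like the one demoed can be approximated with plain git. A minimal sketch, assuming a hypothetical main branch and a CI runner whose keyring already trusts the corporate signing keys, rejects any unsigned or unverifiable commit in a proposed change:

    # Illustrative CI step: fail if any commit between main and the
    # candidate branch lacks a valid signature from a trusted key.
    for c in $(git rev-list origin/main..HEAD); do
      if ! git verify-commit "$c" >/dev/null 2>&1; then
        echo "Commit $c is unsigned or untrusted." >&2
        exit 1
      fi
    done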

Husnain

Yeah, I think the key is that if you make this simple enough, the value prop to the developer is clear: as threat actors become more sophisticated, it's really just a matter of time before developers become victims of identity theft, and the identity theft they'll be victims of is malicious code inserted on their behalf. 

And given the number of cryptographic assets they're creating and not necessarily tracking, there's no lifecycle to them: no deprecation framework, no revocation window. So, these things get created and they remain in the ether, with access and the ability to claim to be a particular developer, forever. 

Solving for that in a way that adds no time to each individual commit and requires only a few minutes of initial setup has, we think, a lot of value to the developer. 
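For teams still on hand-managed GPG keys, that lifecycle gap can at least be narrowed by minting keys with an expiry and pre-generating a revocation certificate. A sketch with GnuPG 2.1 or later, where the user ID is a placeholder:

    # Create a signing key that expires in one year, forcing rotation.
    gpg --quick-generate-key "Dev Example <dev@example.com>" ed25519 sign 1y

    # Pre-generate a revocation certificate (gpg prompts for a reason)
    # and store it separately, so the key can be publicly invalidated
    # even if the machine holding it is lost.
    gpg --output revoke-dev.asc --gen-revoke dev@example.com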

Deb

And here's another, sort of related question: "Git commit signing has been around for a while. For those organizations that did utilize it, how were they storing their git commit keys?" 

Husnain

So, you want to explain, like, how GPG...you can be honest, you can tell us where you stored your GPG keys. 

Colton

Okay, USB sticks, home directories. I used the same keys for 10 years; I'd just take that key wherever I went. 

Deb

All of which will create risk. 

Colton

Yeah, and it's kind of like my DevSecOps team is expecting me to be responsible enough for the key, but hey, I'll put that key wherever I want if it's up to me. 

Deb

If it makes your job easier as a developer, right? 

Colton

Yeah, I was just saying I will put it in something that is easily accessible and low friction. 
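For context on that old workflow, a long-lived key exported and carried between machines looks roughly like this, which is exactly why it degrades into a portable secret (the key ID and paths are placeholders):

    # Export the private signing key to removable media...
    gpg --armor --export-secret-keys DEADBEEF > /media/usb/signing-key.asc

    # ...import it on whatever machine you land on next...
    gpg --import /media/usb/signing-key.asc

    # ...and point git at the same key, everywhere, for years.
    git config --global user.signingkey DEADBEEF
    git config --global commit.gpgsign true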

Deb

Okay, well, it looks like we're out of questions. I would like to thank everybody for being here. As a reminder, if we weren't able to answer your questions during the live presentation, watch your email for a response within a few days. 

And I want to thank all of you for being with us today and also to our speakers for sharing the valuable information and to Beyond Identity for sponsoring this webcast. And, of course, thanks to our audience for tuning in, we hope you enjoyed the presentation. 



