Cloud Sprawl Is Real. Continuous Discovery Is Your Best Defense

Ask any cloud engineer how many applications are running in their environment and you will get a ballpark number. Ask again five minutes later, and they could double it. Not because they are evasive, but because they simply don't know the exact number.

It is hard to believe, but between CI/CD pipeline redeployments, zombie workloads, legacy applications in obscure corners of the infrastructure and too many identity providers (IdPs) to count, it is easy to lose track.

When you don't know what is running, you can't manage or secure it. What it is not: a governance problem reserved for IAM compliance checklists. What it is: a real-time security gap. Orphaned applications without MFA, applications still relying on legacy authentication, or workloads redeployed by an outdated script are easy prey for bad actors.

With everything else becoming continuous – from integration to deployment – discovery should be too.

The fragmentation problem

In a typical enterprise today, applications are deployed across AWS, Azure, GCP and perhaps a private cloud or two. Even within a single cloud provider, there is sprawl.

Using Google Cloud Platform (GCP) as an example, an application can be deployed in several ways. Options include App Engine, Cloud Run, Compute Engine, Google Kubernetes Engine and the Apigee gateway. Other cloud platforms such as Azure and Amazon Web Services also offer many deployment options for application workloads. Some are similar, such as Kubernetes support, but other technologies may be unique to a particular cloud platform.
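To make the sprawl concrete, here is a minimal sketch of how many separate listing calls it takes just to inventory the workloads in a single GCP project. It assumes the gcloud CLI is installed and authenticated; the project ID is a placeholder, and Apigee and other surfaces would need their own calls on top of these.

```python
# Minimal sketch: even one GCP project needs several separate listing calls.
# Assumes the gcloud CLI is installed and authenticated; PROJECT_ID is a placeholder.
import json
import subprocess

PROJECT_ID = "my-project"  # hypothetical project ID

# There is no single "list every application" API; each deployment surface
# has to be queried on its own, which is exactly how workloads get missed.
LIST_COMMANDS = {
    "App Engine services": ["gcloud", "app", "services", "list"],
    "Cloud Run services": ["gcloud", "run", "services", "list"],
    "GKE clusters": ["gcloud", "container", "clusters", "list"],
    "Compute Engine instances": ["gcloud", "compute", "instances", "list"],
}

def inventory_project(project_id: str) -> dict:
    """Collect raw workload listings, one deployment surface at a time."""
    results = {}
    for surface, cmd in LIST_COMMANDS.items():
        proc = subprocess.run(
            cmd + ["--project", project_id, "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        results[surface] = json.loads(proc.stdout)
    return results

if __name__ == "__main__":
    for surface, items in inventory_project(PROJECT_ID).items():
        print(f"{surface}: {len(items)} found")
```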

Without a central discovery mechanism, some of these applications can easily fall through the cracks. Even infrastructure as code (IaC) tools, like Terraform, do not always capture the whole picture, especially when developers bypass the templates for manual deployments or forget to update the tags.
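As one illustration of that gap, here is a rough sketch of diffing live Cloud Run services against what Terraform state knows about. It assumes terraform and gcloud are installed and authenticated, and that the Terraform resource name matches the deployed service name, which is a simplification; everything here is hypothetical plumbing, not a drop-in tool.

```python
# Rough sketch: flag Cloud Run services that are live but absent from Terraform state.
# Assumes terraform and gcloud are installed and authenticated, and that the
# Terraform resource name matches the deployed service name (a simplification).
import json
import subprocess

def terraform_tracked_services() -> set[str]:
    """Resource names of google_cloud_run_service entries in Terraform state."""
    out = subprocess.run(
        ["terraform", "state", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {
        line.split(".", 1)[1]
        for line in out.splitlines()
        if line.startswith("google_cloud_run_service.")
    }

def live_cloud_run_services(project_id: str) -> set[str]:
    """Names of Cloud Run services actually deployed in the project."""
    out = subprocess.run(
        ["gcloud", "run", "services", "list",
         "--project", project_id, "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {svc["metadata"]["name"] for svc in json.loads(out)}

if __name__ == "__main__":
    untracked = live_cloud_run_services("my-project") - terraform_tracked_services()
    for name in sorted(untracked):
        print(f"Running but not in Terraform state: {name}")
```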

Of course, there is similar sprawl among the identity systems that control access to these application environments. Companies can have a mixture of Okta, Microsoft Entra ID and Amazon Cognito, along with on-premises Active Directory or legacy web access management systems, often coexisting.

Fragmented identity

Applications can authenticate against different IdPs depending on when or how they were deployed. For example, a team may choose Okta for internal applications, while customer-facing systems rely on Microsoft Entra ID or Cognito.

The result is a sprawling web of credentials, policies and access models that makes consistent auditing nearly impossible. Even knowing whether MFA is enabled for a given application becomes a research project.
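To show how much of that research project is just gathering data, here is a minimal sketch of pulling every registered application and its sign-on mode from one IdP, Okta in this example. It assumes an Okta org URL (a placeholder here) and an API token supplied through a hypothetical OKTA_API_TOKEN environment variable; repeating the exercise for Entra ID, Cognito and legacy systems is what makes a consolidated picture so hard to assemble by hand.

```python
# Minimal sketch: list every application registered in one IdP (Okta) with its
# sign-on mode, as a starting point for an authentication-model catalog.
# The org URL is a placeholder and OKTA_API_TOKEN is a hypothetical env var.
import os
import requests

OKTA_ORG = "https://example.okta.com"      # placeholder org URL
API_TOKEN = os.environ["OKTA_API_TOKEN"]   # hypothetical environment variable

def list_okta_apps() -> list[tuple[str, str, str]]:
    """Return (label, sign-on mode, status) for each app in the Okta org."""
    headers = {"Authorization": f"SSWS {API_TOKEN}", "Accept": "application/json"}
    url = f"{OKTA_ORG}/api/v1/apps"
    apps: list[tuple[str, str, str]] = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        apps.extend((a["label"], a["signOnMode"], a["status"]) for a in resp.json())
        url = resp.links.get("next", {}).get("url")  # Okta paginates via Link headers
    return apps

if __name__ == "__main__":
    for label, mode, status in list_okta_apps():
        print(f"{label:40s} {mode:20s} {status}")
```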

This level of identity fragmentation is certainly inconvenient, but worse, it is dangerous. Attackers do not need to compromise your crown jewels; they just need to find the one unguarded door. When applications are deployed without visibility and adequate controls, you leave those doors wide open.

Why traditional audits don't cut it

A typical audit model is to bring in a Big Four consulting firm, run an audit, generate a report and call it a day. The report is already obsolete by the time it is emailed.

CI/CD pipelines can redeploy applications the day after discovery. A development team could spin up something new without informing security. Or worse, a dormant application with a loose SiteMinder policy could still allow anyone with an @company.com email address to walk right in.

More troubling, these audits are inherently narrow. They capture a point-in-time snapshot of a system that is constantly changing. Any significant findings are siloed with the people involved in the audit process and often stored in static documents that no one revisits until the next round.

There is no continuity, no automation and no assurance that the data remains accurate beyond the day it was collected.

Dollars and Sense of Security

Let's not forget the cost. These assessments often take weeks of effort and hundreds of thousands of dollars. The result is a pretty presentation, with charts and bullet points that look great in a conference room. But what value does it bring to an engineer trying to figure out which IdP governs access to a containerized application running on a forgotten GCP cluster?

Meanwhile, attackers do not wait for your next audit cycle. They are scanning your attack surface, looking for endpoints that your spreadsheet has not caught. This is why continuous discovery is a necessity today.

What does continuous discovery look like?

Many security teams are stuck flying blind on application visibility. They discover one shadow application only to find three more hidden in the cracks. The problem is not laziness or a lack of tools; it is that environments are constantly changing. Between development teams rolling out new services, CI/CD pipelines redeploying old ones and infrastructure evolving by the week, maintaining a static inventory is impossible.

This is why continuous discovery matters. It moves visibility from something you do once a quarter to something built into the fabric of your operations. Here is what that means, with a small sketch of the correlation step after the list:

**Cloud-native scanning:** Call the cloud platform APIs (GCP, Azure, AWS) to enumerate deployments across services: App Engine, Cloud Run, Lambda and more.

**Identity correlation:** Map each application to its IdP, verify MFA enforcement and catalog authentication models (SAML, OIDC, LDAP, header-based, etc.).

**CI/CD monitoring:** Catch applications that reappear after being decommissioned because a pipeline didn't get the memo.

**Tagging and classification:** Apply metadata to organize applications by compliance scope (for example, PCI applications), department or data sensitivity.
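Put together, the output of those steps is just data that has to be joined and acted on. The sketch below shows one way the correlation and classification results might be modeled and turned into findings; the field names and sample records are illustrative, not a prescribed schema.

```python
# Illustrative sketch of the correlation step: join cloud inventory with IdP
# data and flag the gaps. Field names and sample records are hypothetical.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DiscoveredApp:
    name: str
    platform: str                       # e.g. "gcp:cloud-run"
    idp: str | None = None              # e.g. "okta", "entra-id"; None if unmapped
    mfa_enforced: bool = False
    auth_model: str | None = None       # e.g. "SAML", "OIDC", "LDAP", "header"
    tags: dict[str, str] = field(default_factory=dict)  # e.g. {"scope": "pci"}

def findings(inventory: list[DiscoveredApp]) -> list[str]:
    """Turn the correlated inventory into actionable gaps."""
    issues = []
    for app in inventory:
        if app.idp is None:
            issues.append(f"{app.name}: no identity provider mapped")
        if not app.mfa_enforced:
            issues.append(f"{app.name}: MFA not enforced")
        if "owner" not in app.tags:
            issues.append(f"{app.name}: no owner tag")
    return issues

if __name__ == "__main__":
    inventory = [
        DiscoveredApp("billing-api", "gcp:cloud-run", idp="okta",
                      mfa_enforced=True, auth_model="OIDC",
                      tags={"scope": "pci", "owner": "payments"}),
        DiscoveredApp("legacy-portal", "gcp:compute-engine"),  # the unguarded door
    ]
    for issue in findings(inventory):
        print(issue)
```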

Continuous discovery provides connective tissue between your infrastructure and your identity architecture.

It opens the way to real-time security posture management, proactive compliance and effective incident response. Without it, the next application breach may not come from a sophisticated exploit, but from something your team did not even know was running.

Instead of periodic fire drills, continuous discovery treats application visibility as a living process.

Real-world use case: 2,500 vs. 4,500 apps, a guess either way

At a Fortune 500 company, a newly appointed CTO was asked a seemingly simple question: "How many applications are in your environment?"

"2,500," she replied confidently. Then a pause. "Wait, we just acquired another company about the same size. Call it 4,500."

That answer did not come from a system of record. It was an assumption, based on acquisition headcount and a rough presumption of parity. There was no application inventory to consult, no dashboard to confirm the total – just back-of-the-envelope math. And this was not a junior IT analyst. It was a senior executive being asked a fundamental question about the organization's digital footprint.

Scenarios like this are revealing because they highlight the absence of a reliable, continuously updated registry. Without one, companies are forced to rely on memory, manual spreadsheets and tribal knowledge – all of which break down in dynamic cloud-based environments.

It also reveals the operational and security risks of that uncertainty. If the leadership team cannot quantify the number of applications in play, how can they be confident those applications are secure, compliant and properly governed?

Why should engineers and IAM teams care?

Untracked applications are low-hanging fruit for attackers. If you manage IAM and an application is still using basic authentication without MFA, that is on you, whether you knew about the application or not. And if you are responsible for keeping things compliant, that six-month-old spreadsheet will not protect you when an auditor asks for current access and authentication details. With continuous discovery:

  • You know what exists
  • You know who has access
  • You know if it's secure

And you can prove it.

The path forward in a changing world

Discovery goes far beyond compiling a list. It plays an essential role in surfacing hidden risks – the dark corners of your cloud estate where legacy applications, misconfigured services or unauthorized deployments can still be running unnoticed.

Neglected systems can become attack vectors, compliance liabilities or unexpected sources of cost. Continuous discovery closes the loop, giving security and identity teams a real-time map of their application landscape – what is running, how it is secured and who has access – so they can take decisive action before problems escalate.

(As noted, Gerry Gebel is professionally affiliated with a company operating in the identity management and security space. His views reflect his expertise and experience in the industry.)
