SSE solution series: the criticality of a zero trust architecture foundation
Feb 28, 2022
Delivering zero trust within an enterprise has traditionally been challenging because source and destination share a network context, relying on either a physical or logical network path to interconnect the two entities. The figure below outlines these shared physical concerns. You cannot build zero trust on top of SD-WANs or firewalls, nor bolt it onto them.
Among many other things, a zero trust architecture enforces granular controls, ensuring that each requester communicates with the correct destination on a per-session basis. Such rules require knowledge of the source and destination entities, which is why most enterprises begin their zero trust (and SSE) journey with their user base. Users are typically assigned an identity that distinguishes them from services and other entities. However, because networks are flat, exposed, and open, the risk of a user gaining access to more information simply because they share a network is a major concern for any enterprise.
Consider all business use cases, such as protecting users and key business assets, and apply SSE controls to all traffic. Establish connections only after dynamically and contextually reviewing the riskiness of the following four connection values:
1. Initiator of the connection
What are the identity and trust level of the user/device/network? How does this identity differentiate access for this source, and under which conditions?
Example: Sarah in HR needs access to the cloud-hosted HR system as well as the internally hosted expense system. Access is granted through the SSE platform as long as her identity and device trust carry the rights defined for that access.
2. Control of policy
Where, how, and which controls will be applied? Criteria for control include the effectiveness of the path, the risk and trust of the initiator, the function of the requested destination, and the policy of the enterprise.
Example: Pierre has a valid identity to access Salesforce, but his company only wants him to view data, not download or manipulate it. The SSE solution therefore allows Pierre to view the content of the application and nothing more.
3. Destination of the connection
Which service is the requester accessing? Is it public SaaS or an internal workload? What controls are to be applied? Access can change based on the context of the identity and control policy.
Example: A valid initiator may have approval to access a specific cloud PaaS service; since it is a cloud service, the SSE will inspect the workload traffic to ensure it is not leaking corporate secrets. That same initiator may then speak to an internal service with a similar trust level, in which case the SSE simply establishes an initiator-to-service connection without additional controls.
4. Establishment of connection
Finally, take the previous inputs, together with conditional insights on workloads, network or edge capacity, enterprise-defined policy, etc., and establish access. The SSE solution should identify variations, e.g. a changed location, and steer the access through the best applicable path.
Example: Once the source, control, and destination are validated, the connection is built for that session and nothing more. The per-session enforcement end-to-end flow is outlined below:
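The four connection values above can be sketched as a single per-session policy check. This is a minimal illustration under assumptions, not any vendor's implementation; the identities, destinations, and policy entries below are hypothetical, and real SSE platforms evaluate far richer context.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """One access request: the values evaluated per session."""
    initiator_identity: str   # 1. who/what is connecting
    device_trusted: bool      #    posture of the initiating device
    destination: str          # 3. service being requested
    requested_actions: set    #    what the initiator wants to do

# Hypothetical enterprise policy (2): (identity, destination) -> allowed actions.
# Anything not listed is implicitly denied.
POLICY = {
    ("sarah.hr", "hr-system"):  {"view", "edit"},
    ("pierre",   "salesforce"): {"view"},   # view only, no download
}

def evaluate(req: Request) -> set:
    """Return the actions granted for this session, or an empty set.

    A connection is established (4) only for this session, with only
    the actions policy allows; everything else is denied by default.
    """
    if not req.device_trusted:          # untrusted initiator: no connection
        return set()
    granted = POLICY.get((req.initiator_identity, req.destination), set())
    return granted & req.requested_actions

# Pierre may view Salesforce but not download:
session = evaluate(Request("pierre", True, "salesforce", {"view", "download"}))
print(session)  # {'view'}
```

The key property mirrored here is deny-by-default: a connection exists only when identity, device trust, and destination policy all agree, and only for the intersection of requested and permitted actions.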
Defining the connection controls within an SSE solution ensures that only the correct source can consume the correct destination, through the correct SSE solution. This least-privilege use of SSE delivers multiple benefits to an enterprise, including:
- Applying the correct SSE controls to the correct source
- SSE-protected services are not exposed to unauthorized sources, reducing cybersecurity risk
- Waste reduction, e.g. not allowing a Linux server to connect to a Windows patch system
- Granular visibility and learning of flows, per access request rather than network IP to IP
- Consolidation of access based on identity rather than network, allowing network functions (and infrastructure) to be rationalized
By selecting an SSE solution that delivers control across all of the following use cases, not just user-based control, you can extend protection across all your business functions.
User to Workloads
Enabling user access to workloads means you can remove the network context from user access, whilst simultaneously gaining visibility of the workloads that are being accessed by users. This one-two punch typically delivers the quickest value.
Consider granular control for users across the entire application landscape. For instance, internet services like YouTube can be limited to an organization’s PR team.
This also allows an enterprise to build a richer inventory of its services and to define more granular rules, such as access to isolated OT and R&D platforms, without ever exposing the entire ecosystem to the user base.
Implementing zero trust access for third-party partners removes the risk of network connectivity and the exposed attack surface that come with legacy partner access. The least-privilege control of zero trust allows you to restrict partner access from untrusted or personal devices to specifically designated apps and nothing more, whilst giving greater visibility of what is being accessed.
The third-party controls of the SSE solution should provide multiple mechanisms for access control. Options range from authorized client access (via multiple identity providers) to specific applications, through isolated browser-only access, to complete isolation in which only a rendered image is presented to the third party (streaming pixels to the user device, e.g. for BYOD).
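As a rough sketch of how such tiered mechanisms might be selected: the tier names and conditions below are assumptions for illustration, not any product's actual decision logic.

```python
def third_party_access_mode(idp_verified: bool, device_managed: bool) -> str:
    """Choose an access mechanism for a partner, strictest default first.

    Hypothetical tiers: full client access to designated apps for
    verified identities on managed devices; browser isolation
    (streamed pixels only) for verified identities on unmanaged/BYOD
    devices; otherwise deny.
    """
    if not idp_verified:
        return "deny"                # no valid identity: nothing is exposed
    if device_managed:
        return "client-access"       # authorized client to designated apps only
    return "browser-isolation"       # rendered image; data never reaches the device

print(third_party_access_mode(idp_verified=True, device_managed=False))
# browser-isolation
```

The deny branch comes first so that an unverified identity never reaches a decision about device posture, which keeps the function fail-closed.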
Workloads to Workloads
Workload-to-workload controls govern requests for access between applications and services. Generally, a Windows machine will request Windows patches, not Linux ones. It is therefore critical for an enterprise to categorize which systems should have access to what.
As with users, workload controls must provide a valid identity to consume a service. If the workload consumes public resources such as PaaS-based IoT/OT services, the security edge must validate and understand its context and block any attempted misuse.
Conversely, should the workload access a local, private service, this should be done only through in-line SSE controls, after the identity is approved, as per zero trust validation.
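A deny-by-default workload-to-workload policy can be sketched in a few lines. The workload classes and service names here are illustrative assumptions, echoing the Windows/Linux patch example above.

```python
# Hypothetical allow-list of (workload class, service) pairs.
# Every pair not listed is denied by default, so a Linux server can
# never reach the Windows patch system even if they share a network.
ALLOWED = {
    ("windows-server", "windows-patch-service"),
    ("linux-server",   "linux-repo-mirror"),
    ("iot-sensor",     "paas-iot-hub"),
}

def workload_may_connect(workload_class: str, service: str) -> bool:
    """Deny-by-default check: the workload identity must match a rule."""
    return (workload_class, service) in ALLOWED

print(workload_may_connect("windows-server", "windows-patch-service"))  # True
print(workload_may_connect("linux-server", "windows-patch-service"))    # False
```

Keying the rules on workload identity rather than IP address is what lets the same policy follow a workload across sites, VPCs, and clouds.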
Location to Location
As access and control evolve across your enterprise, consider zero trust for inter-site connectivity. You may need to isolate a set of services to a network, site, VPC, etc. The connection between that location and the known site should not run over a shared network. Zero trust enables a valid location to connect to a valid set of workloads within another location. Zero trust does not use network link-layer access; it calls for app-to-app connectivity uniformly across any site, VPC, VLAN, etc.
Protecting an enterprise and its users must be approached in a way that delivers access on a need-to-know, least-privileged basis. Zero trust must be the foundational control when picking an SSE solution, so that:
- The SSE vendor protects all enterprise services and validates the identity of the entities before allowing access; everything else must be blocked.
- Solutions that force network connectivity should be avoided and access should be network-agnostic, everywhere.
- The SSE service delivers a zero attack surface for your private enterprise services.
We'll examine ATP, advanced DLP, and encrypted traffic inspection at scale in the next installment.
Editor's note: This content is adapted from The 7 Pitfalls to Avoid when Selecting an SSE Solution