When choosing an SSE solution, look for flexible deployment models for protecting your users and applications wherever the application is hosted, including the data center, public cloud, private cloud, edge compute node, and on-premises.
Most users will connect to the SSE via a vendor’s public service edge. These are full-featured, secure internet gateways and private application brokers that provide integrated security. They inspect all traffic bidirectionally for malware, enforce security, compliance, and firewall policies, and must handle hundreds of thousands of concurrent users with millions of concurrent sessions. As a result, regardless of where your users are, they can access from any device:
- The Internet, with the public service edges protecting traffic and applying your corporate policies.
- Internal applications, with access and re-authentication policies enforced based on your organization’s corporate best practices.
These public service edges must have significant fault tolerance and be deployed in active-active mode for availability and redundancy. The vendor should monitor and maintain its public service edges to ensure continuous availability. To preserve data privacy, customer traffic must never be passed to any other component within the infrastructure, and no data should ever be written to disk.
However, situations may arise where the public service edge does not meet requirements, so the SSE vendor must also offer private service edge options. This option extends the public service edge architecture and capabilities to an organization’s premises or private location and leverages the same centrally controlled policy as the public service edges.
For secure access to the Internet, private service edges can be installed in an organization’s data center and are dedicated to its traffic, but should be managed and maintained by the SSE vendor, with a near-zero touch from the organization. This deployment mode typically benefits organizations that have certain geopolitical requirements or use applications that require that organization’s IP address as the source IP address.
For internal application access, the private service edge provides similar management of connections between the user and the application and applies the same policies as the public service edge, with the service hosted either on-site or in the public cloud, but again managed by the SSE vendor. This deployment model enables zero trust within the four walls and is useful for reducing application latency when the app and user are in the same location (where hairpinning through the public service edge would add latency). It also provides a layer of survivability if the connection to the Internet is lost. The SSE vendor should distribute images for deployment in enterprise data centers and local private cloud environments.
To provide zero trust protection for internal applications, SSE vendors must offer a way to create a secure, authenticated interface between your application servers and both public and private service edges. This mechanism should be available in several form factors: a standard virtual machine (VM) image or containerized deployment for enterprise data centers, local private cloud environments such as VMware, or public cloud environments such as Amazon Web Services (AWS) EC2, as well as packages that can be installed on supported Linux distributions.
Once you have established where SSE policies will be administered and enforced, consider how users and workloads will be offered this protection. Several scenarios are important:
For remote users on managed devices, the SSE vendor must offer a single unified agent that forwards traffic to the service edge for secure Internet access. The agent should also provide granular, policy-based access to internal resources, automatically, using the intelligence built into the agent, and it should protect your users’ mobile traffic on Wi-Fi or cellular networks. The agent forwards user traffic to the SSE service, which enforces your organization’s security and access policies wherever users access the Internet and establishes a secure transport for accessing enterprise apps and services. Ensure that the agent can detect when a user connects to a trusted network and, when one is detected, disable its service if policy requires. Ensure that these agents support a wide range of operating systems, including Windows, macOS, Linux, iOS, and Android.
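To make the trusted-network behavior concrete, here is a minimal sketch of how such a check could work, assuming a common heuristic: an internal-only probe hostname that resolves to a known address only on the corporate network. The probe name, address, and function names are all illustrative, not any vendor’s actual agent API.

```python
# Hypothetical sketch of trusted-network detection in a unified SSE agent.
# PROBE hostnames/addresses are invented for illustration.
import socket

TRUSTED_PROBES = {
    # Internal-only hostname -> IP it resolves to on the corporate network.
    "trust-probe.corp.example.com": "10.10.0.53",
}

def on_trusted_network(resolver=socket.gethostbyname) -> bool:
    """Return True if any probe hostname resolves to its expected address.

    Off the corporate network the probe name should not resolve at all
    (or resolve differently), so resolution failure means 'untrusted'.
    """
    for host, expected_ip in TRUSTED_PROBES.items():
        try:
            if resolver(host) == expected_ip:
                return True
        except OSError:
            continue
    return False

def should_forward_traffic(policy_disable_on_trusted: bool,
                           trusted: bool) -> bool:
    """Decide whether the agent keeps forwarding traffic to the service edge."""
    if policy_disable_on_trusted and trusted:
        return False  # policy says: stand down on a trusted network
    return True
```

The key point is that the decision to disable forwarding is driven by policy, not hard-coded into the agent.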
For users in a branch office, a common method for forwarding traffic to the service edge is via a GRE or IPsec tunnel. However, the SSE vendor should offer an alternative approach. A virtual machine installed in the branch can reduce the complexity and ongoing administration of these tunnels and eliminate lateral threat movement by removing the customer-managed routable network. The deployment should be automated and include flexible traffic-steering policies to the service edge with built-in SLA monitoring and failover. This option works well for medium and large branches and those that offer local services.
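The SLA monitoring and failover described above can be sketched roughly as follows. This is a conceptual illustration, not a vendor implementation; the edge names and the 100 ms SLA threshold are assumptions made for the example.

```python
# Illustrative sketch of SLA-based traffic steering from a branch VM:
# prefer the primary service edge while it meets the latency SLA,
# fail over to the healthiest reachable backup otherwise.
from dataclasses import dataclass

SLA_LATENCY_MS = 100.0  # assumed SLA threshold, purely illustrative

@dataclass
class ServiceEdge:
    name: str
    latency_ms: float   # most recent health-probe round-trip time
    reachable: bool

def select_edge(primary: ServiceEdge, backups: list) -> ServiceEdge:
    """Steer traffic to the primary edge unless it violates the SLA."""
    if primary.reachable and primary.latency_ms <= SLA_LATENCY_MS:
        return primary
    # Fail over to the lowest-latency reachable backup.
    candidates = [e for e in backups if e.reachable]
    if not candidates:
        return primary  # nothing better available: keep trying the primary
    return min(candidates, key=lambda e: e.latency_ms)
```

In practice the probe results would be refreshed continuously, so steering adapts as conditions change.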
The previous option, treating every user like a remote user, should be considered for smaller branches that offer no local services (think of a coffee shop model). Given how recent events have changed the importance of the branch office, this option is desirable: it keeps everyone off the corporate network and eliminates the chance of lateral movement.
For users/things on unmanaged devices, or for third-party access to internal web applications, SSE vendors should provide similar SSE protection without requiring an agent. Such users authenticate through a web browser; zero trust protection is then provided by publishing an application-specific CNAME in your DNS zone so that the browser automatically redirects those requests to the service edge. Alternatively, the SSE vendor must also have an integrated cloud browser isolation (CBI) capability for agentless security on any unmanaged device, anywhere. As a side benefit, this completely eliminates the need for a fragile reverse proxy.
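The CNAME mechanism is easiest to see with a toy resolution chain: the application name in your zone is a CNAME pointing at the vendor’s broker, so any browser resolving the app name lands on the service edge. The zone entries and broker hostname below are invented for illustration.

```python
# Toy illustration of the CNAME-based redirect for agentless access.
# Hostnames and the broker address are hypothetical.
ZONE = {
    # Application-specific CNAME published in the organization's DNS zone,
    # pointing at the SSE vendor's application broker.
    "intranet.example.com": ("CNAME", "tenant-abc.broker.sse-vendor.example"),
    # The broker's own address record (what the browser actually connects to).
    "tenant-abc.broker.sse-vendor.example": ("A", "198.51.100.10"),
}

def resolve(name: str, zone=ZONE, max_hops: int = 8) -> str:
    """Follow CNAMEs until an A record is reached, as a stub resolver would."""
    for _ in range(max_hops):
        rtype, value = zone[name]
        if rtype == "A":
            return value
        name = value  # follow the CNAME chain
    raise RuntimeError("CNAME chain too long")
```

Because the redirect happens in DNS, no agent, browser plugin, or customer-managed reverse proxy is involved.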
With CBI, admins configure a sanctioned cloud resource’s SSO settings to redirect to the SSE vendor. When users then attempt to access that cloud resource from a personal or third-party endpoint, their traffic is sent to CBI automatically, without any software installation. CBI renders content into pixels streamed to the user’s device, preventing downloading, copying, pasting, and printing. In this way, users can perform their work duties from unmanaged endpoints without the risk of data leakage or malware uploads, all while respecting compliance requirements.
For workloads connecting to workloads within the same VPC or data center, traditional network segmentation was the answer. While this made sense on paper, achieving network segmentation in practice was challenging. As such, SSE vendors must extend their user-to-application protections to workload-to-workload communications. With an agent installed on the workload itself, the SSE provider should determine risk and apply identity-based protection to your workloads without any changes to the network, with policies that automatically adapt to environmental changes.
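The contrast with network segmentation can be sketched in a few lines: identity-based rules match on what a workload *is* (its service and environment attributes) rather than its IP address, so the policy survives address changes that would break an IP-based rule. The rule format and attribute names here are assumptions for illustration only.

```python
# Conceptual sketch of identity-based workload-to-workload policy.
# Rules match on workload identity (service name, environment), not IPs,
# so they automatically adapt when addresses change. Names are illustrative.
def allowed(src: dict, dst: dict, rules: list) -> bool:
    """Permit a flow only if an identity rule matches and environments agree."""
    return any(
        rule["from"] == src["service"]
        and rule["to"] == dst["service"]
        and src["env"] == dst["env"]   # e.g., keep prod and dev separated
        for rule in rules
    )

RULES = [{"from": "web", "to": "payments"}]  # hypothetical allow-list
```

Note that nothing in the rule references an address, subnet, or VLAN; the network can be re-addressed freely without touching policy.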
For workloads connecting to workloads across VPCs or CSPs, or out to the Internet, SSE vendors must again extend the SSE protection offered for users to these workloads. As such, SSE vendors should offer a mechanism, typically a virtual machine (available in public clouds or on-prem hypervisors), that simplifies traffic forwarding to the service edge. The result is cyber threat and data protection for workloads reaching out to the Internet, as well as zero trust protection for workloads in one cloud accessing workloads in another cloud. With this approach, SSE vendors can consolidate multiple products (e.g., web proxies, firewalls, NAT gateways, URL filtering, etc.) into a single solution.
For securing data at rest in IaaS and SaaS environments, the SSE vendor must also provide solutions in the CASB, cloud infrastructure entitlements management (CIEM), and cloud security posture management (CSPM) space, so that API-based scanning of popular SaaS and IaaS applications can occur. Doing so enables the identification and remediation of misconfigurations and improper permissions within cloud environments, along with audits and scans of SaaS and IaaS platforms for data and threat protection. An SSE vendor should offer these out-of-band capabilities in tight alignment with its inline capabilities, applying consistent policies to data at rest and data in motion.
Deployed correctly, these flexible, diverse, and scalable options will provide your organization with all the benefits of the Security Service Edge, regardless of where the user or thing may be, or where the application is hosted, and will even extend such protection within the application itself:
- The benefit of a single SSE vendor providing this broad blanket of protection is that it can be managed from a central control plane, with corporate policies applied evenly and dynamically across all user/thing-to-application and workload-to-workload communications.
- Extending the same protection for managed devices to unmanaged BYOD and third-party access allows greater flexibility for contractors and employees.
- Workload-to-workload security affords DevOps and CloudOps engineers the same zero trust protections for their applications accessing other workloads, other clouds, or the Internet.
Part 1: SSE solution series: why a global, scalable cloud platform matters
Part 2: SSE solution series: the criticality of a zero trust architecture foundation
Part 3: SSE solution series: choose SSL/TLS inspection of traffic at production scale
Editor's note: This content is adapted from The 7 Pitfalls to Avoid when Selecting an SSE Solution