Secure Access on the Internet
Introduction
If you are somewhat security conscious, you should already know that the Internet was designed as a closed network, with a protocol stack that assumed only trusted and registered entities would be allowed to connect, in the same way that only registered telecom oligopolies were going to be allowed to provide phone services.
Through a planetary-scale alignment of greed, convenience and plain old thirst for power, the Internet grew into its own plane of reality. No longer restricted to organizations, nationalities or a single jurisdiction, it became a network of networks connecting the whole world, shared in the same way that the oceans or the airspace are shared.
Setting up a server on a public port is the literal equivalent of setting up a harbor in international waters: exposed to all kinds of actors monitoring the IPv4 address space and beyond, much as superpowers run radio signals intelligence, and sometimes to literal raids by modern-day pirates.
Anyone who sets up an SSH server on port 22 can already watch connection and authentication attempts arriving from every single place in the world just by looking at /var/log/auth.log, and so come to a full awareness of the violence that governs modern networks.
For other services such as HTTP (port 80) and HTTPS (port 443), we are already in a situation in which crawlers and AI providers are hungry for any kind of public data they can capitalize on, whether as results for their users' search queries or as training data for the next multi-billion-parameter AI model.
As an operator actively involved in this world, one who wants to self-host his own services or those of the enterprises and organizations he works with, how can I build systems and architectures able to withstand these forces?
The thesis
This last year I have been in grad school doing a master’s in cybersecurity and I have spent this summer working on my final thesis about the implementation of Zero Trust architectures using cloud technologies such as Envoy Proxy, Istio and Kubernetes.
You can find a copy here if you are interested in checking it out.
My purpose with this thesis was never some kind of academic or innovative achievement, but a certain level of understanding of how to work with existing technologies and frameworks, which are already very powerful, in order to create software systems that we can rely on.
One such technology is TLS and, specifically, mutual TLS (mTLS), which I believe to be an underused security scheme on the web.
Coming back to the topic of my thesis, NIST recommends implementing the concept of “Identity-Based Segmentation” in the security architecture of your back-end APIs.
This involves deploying mechanisms to achieve:
- Encryption for data in transit.
- Software/Service authentication.
- Software/Service authorization.
- End-User authentication.
- End-User authorization.
Simply using mutual TLS in your application allows the API to achieve strong encryption for the data in transit, implicit mutual authentication by means of a shared Certificate Authority, and a set of high-integrity attributes encoded in the client X.509 certificate that the server can use as the basis for its authorization model.
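As a rough illustration of what this looks like in practice, here is a minimal sketch in Go of an API server that requires client certificates signed by a shared CA and bases a hypothetical authorization rule on an attribute of the presented certificate. The file names and the "backend" organizational-unit policy are placeholders for the example, not anything mandated by TLS itself.

```go
// Minimal sketch of an HTTPS API that requires client certificates (mTLS).
// Assumes ca.pem, server.pem and server-key.pem already exist on disk.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Trust anchor shared by server and clients: the internal CA.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	if !caPool.AppendCertsFromPEM(caPEM) {
		log.Fatal("could not parse ca.pem")
	}

	tlsCfg := &tls.Config{
		ClientCAs: caPool,
		// Reject any connection that does not present a certificate signed
		// by the shared CA: this is the implicit mutual authentication.
		ClientAuth: tls.RequireAndVerifyClientCert,
		MinVersion: tls.VersionTLS13,
	}

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// The verified client certificate carries high-integrity attributes
		// (subject, SANs) that can drive the authorization decision.
		client := r.TLS.PeerCertificates[0]
		ou := client.Subject.OrganizationalUnit
		if len(ou) == 0 || ou[0] != "backend" { // hypothetical policy
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		fmt.Fprintf(w, "hello, %s\n", client.Subject.CommonName)
	})

	srv := &http.Server{
		Addr:      ":8443",
		Handler:   mux,
		TLSConfig: tlsCfg,
	}
	// server.pem / server-key.pem are the server's own certificate and key.
	log.Fatal(srv.ListenAndServeTLS("server.pem", "server-key.pem"))
}
```

With ClientAuth set to RequireAndVerifyClientCert, the TLS handshake itself rejects any peer that cannot present a certificate chaining to the shared CA, so the handler only ever runs for authenticated clients.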
The two remaining mechanisms for end-user authorization and authentication can be implemented via OAuth 2.0 with OpenID Connect or SAML.
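As a sketch of that end-user side, verifying the OIDC ID token presented with a request is enough to establish who the human behind it is. The example below uses the github.com/coreos/go-oidc library purely as an illustration; the issuer URL, client ID and claim names are placeholders.

```go
// Minimal sketch of end-user authentication: verify an OIDC ID token.
// Issuer URL and client ID are placeholders; the library choice is an example.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/coreos/go-oidc/v3/oidc"
)

func main() {
	ctx := context.Background()

	// Discover the provider's signing keys and endpoints.
	provider, err := oidc.NewProvider(ctx, "https://accounts.example.com")
	if err != nil {
		log.Fatal(err)
	}
	verifier := provider.Verifier(&oidc.Config{ClientID: "my-api"})

	// Verify the raw ID token (e.g. taken from an Authorization header).
	idToken, err := verifier.Verify(ctx, os.Getenv("ID_TOKEN"))
	if err != nil {
		log.Fatalf("invalid token: %v", err)
	}

	// Extract claims for the end-user authorization decision.
	var claims struct {
		Sub   string `json:"sub"`
		Email string `json:"email"`
	}
	if err := idToken.Claims(&claims); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("authenticated end user %s (%s)\n", claims.Sub, claims.Email)
}
```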
One of the limitations of using a client certificate is the fact that it needs to be distributed beforehand and, once distributed, it needs to be securely stored and eventually retired, which is cumbersome even for advanced technical users.
In solutions such as Istio, the certificate is renewed by an automatic process every week, through a previously authenticated and encrypted channel with the proxy that has to enforce it. That renewal time-frame is only a suggestion, ranging from months to minutes depending on the use case.
This sounds amazing in theory, but client certificates have been a notorious usability nightmare ever since the first moment RSA was considered cool among cryptographers.
The fact of the matter is that a complete implementation of these secure architectures requires a lot of infrastructure around them.
To enable widespread mTLS support, you need a secure channel through which client certificates can be distributed, a PKI setup, and software that can manage the kind of certificates and the kind of public-key cryptography you want to use, which is another rabbit hole in itself.
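To give a feel for the PKI side of that rabbit hole, the sketch below creates a self-signed CA and issues a short-lived client certificate with Go's standard crypto/x509 package. Key types, lifetimes, subject names and output file names are all placeholder choices for the example.

```go
// Minimal sketch of a toy PKI: a self-signed CA plus one client certificate.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// 1. Self-signed CA: the shared trust anchor for server and clients.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "Internal CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0), // one year
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// 2. Client certificate signed by the CA, carrying the attributes the
	//    server will later use for authorization.
	clientKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject: pkix.Name{
			CommonName:         "alice",
			OrganizationalUnit: []string{"backend"},
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().AddDate(0, 0, 7), // short-lived: one week
		KeyUsage:    x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, err := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	// Write out PEM files; distributing them securely is the hard part.
	os.WriteFile("ca.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: caDER}), 0644)
	os.WriteFile("client.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: clientDER}), 0644)
	keyDER, _ := x509.MarshalECPrivateKey(clientKey)
	os.WriteFile("client-key.pem", pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER}), 0600)
}
```

Generating the material is the easy part; the point above stands that getting client.pem and client-key.pem to the right client over a secure channel, and rotating them later, is where the real operational cost lives.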
This is why I left the problem of reliable client identity certificate management and distribution as future work in my thesis, and why I decided to use mTLS between my own software clients and my server infrastructure.
This increase in complexity is why there is a whole industry around implementing secure access over the Internet.
SASE
The purpose of my thesis isn't anything new; there are multiple products on the market that already implement these principles.
SASE stands for Secure Access Service Edge. It is an umbrella term for multiple competing and complementary solutions that enable secure access over the Internet for a variety of use cases. The term was coined by Gartner, so I recommend the reader check out their site.
One example that I have been exposed to in my day job is HashiCorp Boundary, which implements a secure-enclave zero trust architecture based on OAuth and SAML.
Another is ZTNA (Zero Trust Network Access), the new paradigm for deploying highly granular corporate VPNs, with products such as Zscaler.
Among open-source alternatives, Pomerium is another system that implements a secure-enclave zero trust architecture using a network of proxies controlled by a central policy engine.
A lot of these products have an open-source offering, which can be deployed on your own server infrastructure or that of your organization, as well as a cloud offering, in order to accommodate a whole range of regulations and security policies.
I am of the opinion that you should never use a security solution that you cannot audit and run yourself, for obvious reasons. The real money is in custom deployments and management of these systems, so open sourcing is not a big risk for these kinds of products from a business perspective.
The poor man’s SASE
For the rest of us who love to self-host services for ourselves or our loved ones, there are a couple of solutions for implementing secure access over the Internet. These options are good enough when you are the only one using those services, or for a small restricted group.
One of them is setting up a WireGuard VPN or something similar, such as OpenVPN. This is by no means a zero trust solution and will not by itself solve the problem of certificate distribution, but it is mostly enough for a small network and solves the problem of secure access.
The paranoid man’s SASE
If you are too poor even for a VPN, you can get leaner and arguably more secure by using plain SSH connections with port forwarding. The idea is that services are only exposed to localhost on their own servers or machines and only SSH is reachable from the outside, so you can use local port forwarding (ssh -L) to reach those specific services from your local machine.
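For illustration, the same thing the ssh -L flag does can be sketched in Go with the golang.org/x/crypto/ssh package. The host name, user, key paths and ports below are placeholders; in day-to-day use the plain ssh client is all you need.

```go
// Minimal sketch of SSH local port forwarding: the service listens only on
// localhost of the remote machine, and we reach it through the SSH tunnel
// (roughly what `ssh -L 8080:127.0.0.1:8080 operator@server.example.com` does).
package main

import (
	"io"
	"log"
	"net"
	"os"

	"golang.org/x/crypto/ssh"
	"golang.org/x/crypto/ssh/knownhosts"
)

func main() {
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/id_ed25519")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	hostKeys, err := knownhosts.New(os.Getenv("HOME") + "/.ssh/known_hosts")
	if err != nil {
		log.Fatal(err)
	}

	// Only port 22 is exposed on the server; everything else rides inside it.
	client, err := ssh.Dial("tcp", "server.example.com:22", &ssh.ClientConfig{
		User:            "operator",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: hostKeys,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Local listener: connections to 127.0.0.1:8080 on this machine are
	// forwarded through the tunnel to 127.0.0.1:8080 on the server.
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		local, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		remote, err := client.Dial("tcp", "127.0.0.1:8080")
		if err != nil {
			log.Print(err)
			local.Close()
			continue
		}
		go func() { defer local.Close(); defer remote.Close(); io.Copy(local, remote) }()
		go func() { io.Copy(remote, local) }()
	}
}
```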
An outsider may know that this machine exists and has a single TCP port open, but no information about what is inside is leaked. SSH certificate authentication can be enabled, and there is a well-established set of tools for onboarding hosts and users into such a network.
This last option is the most minimalistic and arguably the most secure, as the OpenSSH daemon is probably one of the most hardened and audited pieces of software in history, but it is also the one that requires the most skill from its users to be used to its full extent.
Conclusion
The purpose of this post was to explore the concept of secure access, a key capability that solves a big problem in computer security and secure communications. I want to explore this topic in future articles with different systems, such as mTLS for nginx and Apache2 servers, as well as integrating these concepts in the cloud.