API Security: The API key is dead. Long live the distributed token by value
As the industry gears up for microservices-driven APIs, authentication becomes a complicated multi-party, multi-channel, multi-device challenge. Its magnitude only grows as our digital world is decomposed into ever more granular microservices: when a client has to authenticate with more than one backend, things get complicated. You might either proxy all requests through a central app server, which would then need to know the logic of every secondary service, or have each service implement complex inter-server communication (and CORS) to verify incoming session IDs with a central auth server. In either case, extra load is placed on the app server and more complex interconnects need to be maintained.
Let's quickly rewind through the saga of authentication practices and how they evolved.
The genesis: In the traditional web architecture, session cookies ruled the authentication world. Even though the stateless approach of REST makes session cookies inappropriate from a security standpoint, they are still widely used. This leads to the following well-known drawbacks, in addition to infamous threats such as session hijacking.
Era I: In an effort to get rid of client sessions on the server, other methods were used, such as Basic or Digest HTTP authentication. Both use an Authorization header to transmit user credentials, with some encoding (HTTP Basic) or hashing (HTTP Digest) added.
Unfortunately, they carried the same flaws found on websites: HTTP Basic has to be used over HTTPS, since the username and password are sent in easily reversible Base64 encoding, and HTTP Digest forces the use of the obsolete MD5 hashing algorithm, which has been proven insecure.
Era II: With HTTP Basic Auth, if security is somehow compromised, the impact is at its peak: the attacker holds the user's master credentials. So the API key became popular. With an API key, we are not passing any user credential; instead we pass an opaque random string, known only to the server and limited to certain APIs. Even if it is compromised, the damage is limited to those APIs, not the user's master account.
Think of it like a hotel plastic key:
You check in at the front desk and they give you one of those plastic electronic keys, with which you can access your room, but you can't open other people's rooms or go into the manager's office. And, like a hotel key, when your stay has ended you're simply left with a useless piece of plastic (i.e., the key doesn't do anything anymore after it has expired).
API keys are also more convenient: you can expire or regenerate a key without affecting the user's account password, and you can have multiple keys per account (e.g., users can have "test" and "production" keys side by side).
Basically, API keys identify the calling service (the application or microservice) making the call to an API. API keys are supposed to be a secret that only the client and server know. Like Basic authentication, API-key-based authentication is only considered secure if used together with other security mechanisms such as HTTPS/TLS. Even then, API keys are generally not considered strong security: they are typically accessible to clients, which makes them easy to steal, and once stolen a key may be used indefinitely unless the owner revokes or regenerates it. The restrictions you can set on an API key mitigate this, but there are better approaches for authorization.
Unlike schemes that use short-lived tokens or signed requests, API keys travel as-is inside every request and are therefore vulnerable to man-in-the-middle attacks, making them less secure. For security reasons, don't use API keys by themselves when API calls contain user data.
Era III: Tokens. Then came implementations that used self-contained tokens to authenticate clients. However, every service provider initially had its own idea of what to put in the token and how to encode or encrypt it. Consuming services from different providers required additional setup time just to adapt to the specific token format used. In short, there was no standard practice.
Era III+: Token by reference (OAuth)
As shown in the diagram, once acquired, the token is used in all subsequent service calls. The receiving service needs to check the validity of the token, and possibly query some authorisation details. In the case of a reference token, every service needs to contact the Auth Service again on every interaction, because the token is only a reference to the user data and the whole data set sits behind it; the service fetches everything it needs using that token reference.
Remember that microservices tend to be stateless (state is inhibiting; think of REST, REpresentational State Transfer). Also, even if the services themselves stay small and simple, the complexity is not gone; it has just moved out of the services and into the interactions and communication between them, and we need to be very careful here. Imagine 50 or more services backing one Single Page Application, all needing to talk to one authoritative service just to verify tokens on every request.
Era IV: Token by value
With a token by value, the token contains, rather than references, all the needed information (at least for authentication and authorization). It can be validated in place by the called microservice itself, and it can be protected by cryptography in untrusted environments.
This is exactly what JSON Web Tokens (JWT) are for!
With JWT, you register yourself with an app and log in with your credentials (e.g., username/password, or a third-party OAuth provider). But instead of creating a session and setting a cookie, the server sends you a JSON Web Token. You can then use that token to do whatever you want with the server (that you have authorization to do).
Here is how JWT security is designed to work:
- The client logs in by sending its credentials to the identity provider.
- The identity provider verifies the credentials; if all is OK, it retrieves the user data, generates a JWT containing user details and permissions that will be used to access the services, and it also sets the expiration on the JWT (which might be unlimited).
- Identity provider signs, and if needed, encrypts the JWT and sends it to the client as a response to the initial request with credentials.
- Client stores the JWT for a limited or unlimited amount of time, depending on the expiration set by the identity provider.
- Client sends the stored JWT in an Authorization header for every request to the service provider.
- For each request, the service provider takes the JWT from the Authorization header and decrypts it if needed, validates the signature, and if everything is OK, extracts the user data and permissions. Based solely on this data, and without looking up further details in the database or contacting the identity provider, it can accept or deny the client request. The only requirement is that the identity and service providers agree on the signing (or encryption) keys, so that the service can verify the signature or, if needed, decrypt the token.
So what is JWT, and why is it so good?
Note: JWT is a great technology for API authentication and server-to-server authorization.
In a multi-microservice rollout, you could also think of a JWT as a festival pass, like the one you would get at the Cannes film festival. Whereas an individual movie ticket grants you access to a single movie (from which you can leave to get popcorn or use the restroom and return), a festival pass grants you access to any movie in the entire festival, at different locations and at different times.
"3 Days in Cannes" is a 3-day pass giving access to the Official Selection (Competition, Out of Competition, Special Screenings, Un Certain Regard, Cannes Classics, Cinéma de la Plage) and to the Palais des Festivals.
In the same way, you can take a JWT generated by one server and use it to authenticate with totally different servers (on different domains) that share the same verification method (e.g., a shared secret). None of those other servers needs to "call home" to ask if the token is OK, because each can do a quick computation on the token itself, checking its signature and expiration time directly without a database hit or an additional network request.
If a client gets man-in-the-middled, their credentials remain relatively safe: without breaking the JWT cryptography, attackers can only replay intercepted commands, and only within the token's expiration window.
How to choose the perfect JWT library? Have a glance at jwt.io and look at the Libraries list; the site contains the most popular libraries that implement JWT. Let's see how we can implement a JWT-token-based REST API using Java and Spring.
JSON Web Token (JWT)
A token is an encoded string, generated by our application (after the user has been authenticated) and sent by the user along with each request to allow access to the resources exposed by our application.
JSON Web Token (JWT) is a JSON-based open standard for creating access tokens. It consists of three parts: header, payload, and signature.
The header contains the token type and the signing algorithm:
{"typ": "JWT", "alg": "HS256"}
The payload contains claims (username, email, etc.) and their values:
{"username": "josh", "email": "[email protected]", "admin": true}
The signature is a keyed hash (e.g., HMAC-SHA256) of: base64url(header) + "." + base64url(payload), computed with the secret key.
At the end, you get a string that looks something like "hhhhhhhh.pppppppp.ssssssss", where "h" is the encoded header, "p" is the encoded payload, and "s" is the signature.
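For intuition, here is an illustrative, hedged sketch of how that string can be assembled by hand for the HS256 case using only the JDK; the header, payload, and secret values are assumptions, and real applications should rely on a JWT library instead.

// Illustrative only: assembling header.payload.signature for HS256 by hand.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtByHand {
    public static void main(String[] args) throws Exception {
        Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();

        // Header and payload are plain JSON, base64url-encoded (example values)
        String header  = b64.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = b64.encodeToString("{\"username\":\"josh\",\"admin\":true}".getBytes(StandardCharsets.UTF_8));

        // signature = HMAC-SHA256(header + "." + payload) keyed with the server secret (example value)
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec("my-server-secret".getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = b64.encodeToString(mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));

        // Prints the familiar hhhhhhhh.pppppppp.ssssssss shape
        System.out.println(header + "." + payload + "." + signature);
    }
}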
Whenever you make a request to the server, you send the token with the request. Typically this is done in the Authorization header, like "Authorization: Bearer hhhhhhhh.pppppppp.ssssssss", but it could also be passed in a POST body or as a query parameter in the URL. When the server sees the token, it decodes it and recomputes the signature using the secret it has stored, the same secret that was used to generate the token in the first place. If everything matches, the request is authentic and the server responds with data; otherwise it sends back an error message.
The key to all this is the "secret" string stored on the server. It is a piece of information that only the server knows, used to create new tokens and validate existing ones. Because the server is the only party that knows the secret, it prevents unauthorized access: it is computationally infeasible for an attacker to recover the secret from a captured signature, or to guess it in order to forge a signature, before an issued token expires. Remember never to share your secret with anybody and to transfer it to your server only over a secure channel (e.g., via SSH).
You can also set up other servers to use the same secret, so that a token created by one server can be used to authenticate with a completely different server (client to server, or server to server).
If an attacker tries to tamper with the payload data (e.g., to give themselves admin rights), the corresponding signature won't match (because the signature was generated from the original data in the payload) and as such that token won't be considered valid on the server and any requests made with it will be denied. The only way to create an authentic token is with the secret (which should only be on the server and never published).
Here we are going to develop a simple microservice setup for portfolio aggregation with Netflix Eureka and Zuul. Besides the business services themselves (position, portfolio, auth), the example relies on the following pieces:
1. Service Registry: A service registry is a phone book for your microservices. Each service registers itself with the registry and tells it where it lives (host, port, node name) and perhaps other service-specific metadata, things that other services can use to make informed decisions about it. Clients can ask questions about the service topology ("are there any 'order-services' available, and if so, where?") and service capabilities ("can you handle X, Y, and Z?"). There are several popular options for service registries; we shall leverage Netflix's Eureka for this.
2. Zuul Gateway Server: When calling a service from the browser, we can't call it by its name as we did from the portfolio service; calling by name only works internally between services. And as we spin up more instances of each service, each on a different port, the question becomes: how can we call the services from the browser and distribute requests among their instances running on different ports? A common solution is to use a gateway.
A gateway is a single entry point into the system, used to handle requests by routing them to the corresponding service. It can also be used for authentication, monitoring, and more.
Zuul? It's an edge service that provides dynamic routing, monitoring, resiliency, security, and more. It's a proxy, a gateway, an intermediate layer between the users and your services (a minimal gateway sketch follows this list).
3. Load Balancing : Zuul uses Netflix Ribbon to discover all the instances of a service from the Eureka Service Discovery Server. It automatically finds the physical locations of each service instance and redirects the requests to the actual services holding the resources to be accessed.
4. Zipkin – distributed tracing system with request visualization.
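To make points 1-3 concrete, here is a minimal, hedged sketch of the gateway application class; it assumes the spring-cloud-starter-netflix-zuul and Eureka client starters are on the classpath, and the class name and example route are illustrative. Each backing service (portfolio, position, auth) registers with Eureka the same way, just without @EnableZuulProxy.

// Minimal sketch of the Zuul gateway: registers with Eureka and proxies
// browser requests to the matching service instances (Ribbon load-balances them).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableEurekaClient   // look up service locations in the Eureka registry
@EnableZuulProxy      // route e.g. /portfolio/** to the registered "portfolio" instances (route name is an assumption)
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}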
From a JWT implementation perspective, there are two key pieces we need to handle:
1. Auth Service :
In the auth service, we need to (1) validate the user credentials and, if they are valid, (2) generate a token; otherwise, throw an exception. A minimal sketch follows.
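A hedged sketch of what that could look like, using the jjwt dependency declared in the pom.xml below; the endpoint path, request shape, secret, and token lifetime are illustrative assumptions rather than the article's exact code.

// Sketch of the auth service: check credentials, then issue a signed JWT.
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import java.nio.charset.StandardCharsets;
import java.util.Date;
import java.util.Map;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AuthController {

    private final String secret = "change-me";                    // shared with the gateway (assumption)
    private static final long VALIDITY_MS = 24 * 60 * 60 * 1000;  // 24h lifetime (assumption)

    @PostMapping("/auth")
    public String login(@RequestBody Map<String, String> credentials) {
        if (!credentialsAreValid(credentials)) {
            throw new IllegalArgumentException("Invalid credentials");
        }
        Date now = new Date();
        return Jwts.builder()
                .setSubject(credentials.get("username"))
                .setIssuedAt(now)
                .setExpiration(new Date(now.getTime() + VALIDITY_MS))
                .signWith(SignatureAlgorithm.HS512, secret.getBytes(StandardCharsets.UTF_8))
                .compact();
    }

    private boolean credentialsAreValid(Map<String, String> credentials) {
        return credentials.containsKey("username"); // placeholder: verify against a real user store
    }
}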
2. Gateway
In the gateway, we need to do two things: (1) validate the token on every request, and (2) block all unauthenticated requests to our services. A rough sketch of such a check follows.
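Here is a hedged sketch of the gateway-side check as a plain servlet filter (a real setup would usually wire this into Spring Security or a ZuulFilter, and the filter still has to be registered as a bean); the header handling and secret are assumptions, and the shared secret must match the one used by the auth service.

// Sketch of a gateway-side filter: reject requests that lack a valid JWT.
import io.jsonwebtoken.Jwts;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.filter.GenericFilterBean;

public class JwtAuthFilter extends GenericFilterBean {

    private final String secret = "change-me"; // must match the auth service's signing secret (assumption)

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String header = request.getHeader("Authorization");

        if (header == null || !header.startsWith("Bearer ")) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        try {
            // Throws if the signature is invalid or the token has expired
            Jwts.parser()
                .setSigningKey(secret.getBytes(StandardCharsets.UTF_8))
                .parseClaimsJws(header.substring(7));
            chain.doFilter(req, res);   // token OK: let Zuul route the request onward
        } catch (Exception e) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
        }
    }
}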
In the pom.xml, add the Spring Security and JWT dependencies:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    <dependency>
        <groupId>io.jsonwebtoken</groupId>
        <artifactId>jjwt</artifactId>
        <version>0.9.0</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>
Testing our microservices: With the authentication logic plugged in, we can validate credentials, issue tokens, and authenticate our users seamlessly. So, run the Eureka server first, then run the other services: position, portfolio, auth, and finally the gateway.
First, let's try to access the portfolio service without a token. You should get an Unauthorized error.
To get a token, send the user credentials to localhost:8762/auth, and make sure the Content-Type header is set to application/json.
Now we can make a request to the portfolio service, passing the token in the Authorization header, as in the sketch below.
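If you prefer exercising the flow from code rather than a REST client, a rough sketch with Spring's RestTemplate might look like this; the URLs, route names, and credential fields are assumptions based on the description above.

// Rough sketch of the test flow: obtain a token, then call a protected route through the gateway.
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class GatewaySmokeTest {
    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();

        // 1. Send credentials to the auth route (body shape is an assumption)
        HttpHeaders loginHeaders = new HttpHeaders();
        loginHeaders.setContentType(MediaType.APPLICATION_JSON);
        String token = rest.postForObject(
                "http://localhost:8762/auth",
                new HttpEntity<>("{\"username\":\"josh\",\"password\":\"secret\"}", loginHeaders),
                String.class);

        // 2. Call the portfolio service through the gateway with the Bearer token (route name is an assumption)
        HttpHeaders headers = new HttpHeaders();
        headers.set(HttpHeaders.AUTHORIZATION, "Bearer " + token);
        ResponseEntity<String> portfolio = rest.exchange(
                "http://localhost:8762/portfolio", HttpMethod.GET,
                new HttpEntity<>(headers), String.class);
        System.out.println(portfolio.getBody());
    }
}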
However, here the secret key is distributed across multiple services, which is a huge risk in a typical microservice infrastructure where 20 services share the same secret. If any one of them is compromised, the penalty is a humongous domino effect.
Hence we need some form of asymmetric (private/public key) cryptography.
RSA is a commonly used algorithm for asymmetric (public key) cryptography. To encrypt a JWT for a given recipient, you need to know their public RSA key; decryption happens with the private RSA key, which the recipient must keep secure at all times. For signing, the roles are reversed: the auth service signs with its private key, and anyone holding the public key can verify the signature, which is the setup used below.
1. Generate the JWT RS256 private and public keys
ssh-keygen -t rsa -b 2048 -f jwtRS256.key
openssl rsa -in jwtRS256.key -pubout -outform PEM -out jwtRS256.key.pub
2. Sign JWT with Private Key
// Uses the com.auth0:java-jwt library for RSA-signed tokens
import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.exceptions.JWTCreationException;

public String signToken() {
    try {
        // Only the private key is needed for signing, so the public key argument is null
        Algorithm algorithm = Algorithm.RSA256(null, this.privateKey);
        return JWT.create().withIssuer(ISSUER).sign(algorithm);
    } catch (JWTCreationException exception) {
        // Invalid signing configuration / couldn't convert claims
        return null;
    }
}
3. Verify JWT with Public Key
// Same com.auth0:java-jwt library; JWTVerifier comes from com.auth0.jwt.JWTVerifier
public boolean verifyToken(String token) {
    try {
        // Only the public key is needed for verification, so the private key argument is null
        Algorithm algorithm = Algorithm.RSA256(publicKey, null);
        JWTVerifier verifier = JWT.require(algorithm).withIssuer(ISSUER).build();
        verifier.verify(token);   // throws if the signature or issuer doesn't match
        return true;
    } catch (Exception e) {
        return false;
    }
}
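The sign/verify snippets above assume privateKey and publicKey are already loaded as RSA key objects. One hedged way to do that with plain JDK classes is to first convert the generated PEM files to DER (for example with openssl pkcs8 for the private key) and then read them with KeyFactory; the file paths and the conversion step are assumptions.

// Sketch: loading DER-encoded RSA keys for the sign/verify code above.
// Assumes the private key was converted to PKCS#8 DER and the public key to X.509 DER.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.interfaces.RSAPrivateKey;
import java.security.interfaces.RSAPublicKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;

public class RsaKeyLoader {

    public static RSAPrivateKey loadPrivateKey(String path) throws Exception {
        byte[] der = Files.readAllBytes(Paths.get(path));
        return (RSAPrivateKey) KeyFactory.getInstance("RSA")
                .generatePrivate(new PKCS8EncodedKeySpec(der));
    }

    public static RSAPublicKey loadPublicKey(String path) throws Exception {
        byte[] der = Files.readAllBytes(Paths.get(path));
        return (RSAPublicKey) KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(der));
    }
}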
JWTs are designed to be portable, decoupled identities. Once you authenticate against an auth service and get back a JWT, you don't need to ask the auth service whether the JWT is valid. This is particularly powerful when you use RSA public/private key signing: the auth service signs the JWT using the private key, and then any service that has the public key can verify the integrity of the JWT.
The microservices in the diagram can use the JWT and the public key to verify the token and then pull the user's id (in this case the subject) out of it. The backend microservices can then use that id to perform operations on the user's data.
However, one challenge here is that because these microservices aren't verifying the JWT with the auth service, they have no idea whether an administrator has logged into the auth service and locked or deleted that user's account.
One way of addressing this is to have a distributed event system that notifies services when refresh tokens have been revoked. The auth service broadcasts an event when a refresh token is revoked, and the other backends/services listen for it. When an event is received, they update a local cache that maintains the set of users whose refresh tokens have been revoked. This cache is checked whenever a JWT is verified, to determine whether the JWT should be rejected. It all hinges on the duration of JWTs and the expiration instant of individual tokens: the cache only needs to cover the maximum JWT lifetime.
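A hedged sketch of such a local revocation cache; the event transport (Kafka, Redis pub/sub, etc.) is deliberately left out, and the names are illustrative.

// Sketch: local cache of revoked users, consulted whenever a JWT is verified.
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RevocationCache {

    // user id -> moment their refresh tokens were revoked
    private final Map<String, Instant> revokedAt = new ConcurrentHashMap<>();

    // Called when a "refresh token revoked" event arrives from the auth service
    public void onRevocationEvent(String userId, Instant when) {
        revokedAt.merge(userId, when, (a, b) -> a.isAfter(b) ? a : b);
    }

    // Reject JWTs that were issued before the revocation moment
    public boolean isRevoked(String userId, Instant tokenIssuedAt) {
        Instant cutoff = revokedAt.get(userId);
        return cutoff != null && tokenIssuedAt.isBefore(cutoff);
    }

    // Entries older than the maximum JWT lifetime can be dropped,
    // since any JWT issued before the revocation has expired by then.
    public void evictOlderThan(Instant threshold) {
        revokedAt.values().removeIf(instant -> instant.isBefore(threshold));
    }
}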
Some of the key best practices to consider:
- Always verify the signature of JWT tokens
- Avoid library functions that do not verify signatures
- Check that the secret of symmetric signatures is not shared
- A distributed setup should only use asymmetric signatures
- JWTs with authorization data should have a short lifetime
- Combine short-lived JWTs with a long-lived session
Let me conclude this with some of the key benefits.