OpenID Connect: authorization of internal applications, from custom to standard

A few months ago, I was implementing an OpenID Connect server to manage access for hundreds of our internal applications. We moved from our own home-grown solution, convenient at a smaller scale, to a generally accepted standard. Access through a central service greatly simplifies monotonous operations, reduces the cost of implementing authorization, and makes it possible to reuse many ready-made solutions instead of racking your brains when building new ones. In this article, I will talk about this transition and the bumps we hit along the way.


A long time ago... How it all began

A few years ago, when there were too many internal applications for manual control, we wrote an application to manage access within the company. It was a simple Rails application that connected to a database with information about employees, in which access to various functionality was configured. At the same time, we built our first SSO, based on token verification on both the client side and the authorization server: the token was transmitted in encrypted form with several parameters and checked on the authorization server. This was not the most convenient option, since each internal application had to implement a considerable layer of logic, and the employee databases had to be fully synchronized with the authorization server.

After some time, we decided to simplify centralized authorization. SSO was moved to the load balancer. Using OpenResty, we added a Lua template that checked tokens, knew which application a request was going to, and could check whether the user had access to it. This approach greatly simplified access control for internal applications: the code of each application no longer had to contain any additional logic. As a result, we closed off traffic externally, and the application itself knew nothing about authorization.

However, one problem remained unresolved: what about applications that need information about employees? We could have written an API for the authorization service, but then additional logic would have to be added to each such application. In addition, we wanted to get rid of the dependency of one of our self-written applications, which we plan to open-source in the future, on our internal authorization server (we will talk about it some other time). The solution to both problems was OAuth.

Towards common standards

OAuth is a well-understood, generally accepted authorization standard, but its functionality alone was not enough for us, so we immediately started looking at OpenID Connect (OIDC). OIDC is the third incarnation of the open OpenID authentication standard, which has evolved into a layer on top of the OAuth 2.0 protocol (an open authorization protocol). This solution closes the problem of missing data about the end user and also makes it possible to change the authorization provider.

However, we did not pick a specific provider; instead, we decided to add OIDC support to our existing authorization server. In favor of this decision was the fact that OIDC is very flexible in terms of end-user authorization, so it was possible to implement OIDC support on top of our current authorization server.


How we implemented our own OIDC server

1) Brought the data to the desired form

To integrate OIDC, the current user data must be brought into a form the standard understands. In OIDC this data is called claims. Claims are essentially fields in the user database (name, email, phone, etc.). There is a standard list of claims, and everything not included in this list is considered custom. Therefore, the first thing to pay attention to if you want to choose an existing OIDC provider is how conveniently it lets you add custom claims.

Claims are grouped into a larger unit called a scope. During authorization, access is requested not to specific claims but to scopes, even if some of the claims in the scope are not needed.
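As a rough illustration of how claims and scopes might be wired together on a custom server, here is a minimal Python sketch. The "internal" scope and its claims are hypothetical examples of custom claims; "profile" and "email" follow the standard claim names from the OIDC spec.

# Minimal sketch of a scope -> claims mapping for a custom OIDC server.
SCOPE_CLAIMS = {
    "profile": ["name", "family_name", "given_name", "locale", "picture"],
    "email": ["email", "email_verified"],
    "internal": ["department", "employee_id"],  # custom claims, not in the standard list
}

def claims_for_scopes(user: dict, scopes: list[str]) -> dict:
    """Collect the claims a client is allowed to see for the granted scopes."""
    allowed = {claim for scope in scopes for claim in SCOPE_CLAIMS.get(scope, [])}
    # "sub" (the subject identifier) is always returned for the "openid" scope
    result = {"sub": str(user["id"])}
    result.update({key: value for key, value in user.items() if key in allowed})
    return result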

2) Implemented the necessary grants

The next part of OIDC integration is choosing and implementing the authorization types, the so-called grants. The further scenario of interaction between a given application and the authorization server depends on the selected grant. An approximate scheme for choosing the right grant is shown in the figure below.

[Figure: scheme for choosing the right grant]

For our first application, we used the most common grant, Authorization Code. Its difference from the others is that it is three-step, i.e. it goes through an additional check. First, the user makes an authorization request and receives a token, the Authorization Code; then, with this token as a kind of travel ticket, the application requests an access token. All of the main interaction in this authorization scenario is based on redirects between the application and the authorization server. You can read more about this grant here.
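A minimal client-side sketch of these two steps, assuming placeholder endpoints and client credentials (none of these URLs or names come from our real setup):

import secrets
from urllib.parse import urlencode

import requests  # third-party: pip install requests

AUTH_ENDPOINT = "https://sso.example.com/oauth/authorize"   # placeholder
TOKEN_ENDPOINT = "https://sso.example.com/oauth/token"      # placeholder
CLIENT_ID = "my-internal-app"                               # placeholder
CLIENT_SECRET = "change-me"                                 # placeholder
REDIRECT_URI = "https://app.example.com/callback"           # placeholder

def build_authorization_url() -> tuple[str, str]:
    """Step 1: redirect the user's browser to the authorization server."""
    state = secrets.token_urlsafe(16)  # protects the callback against CSRF
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile email",
        "state": state,
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}", state

def exchange_code(code: str) -> dict:
    """Step 2: the callback handler trades the Authorization Code for tokens."""
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()  # access_token, refresh_token, id_token, expires_in, ...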

OAuth adheres to the idea that access tokens obtained after authorization should be short-lived and should be rotated, on average about every 10 minutes. The Authorization Code grant is a three-step check through redirects, and running the user through those redirects every 10 minutes is, frankly, not the most pleasant experience. To solve this problem, there is another grant, Refresh Token, which we also used. Everything is simpler here: during the check for the other grant, in addition to the main access token, one more token is issued, the Refresh Token, which can be used only once and whose lifetime is usually much longer. With this Refresh Token, when the TTL (Time to Live) of the main access token runs out, the request for a new access token goes to the endpoint of this grant. The used Refresh Token is immediately invalidated. This check is two-step and can be performed in the background, invisibly to the user.
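Continuing the same sketch (it reuses the requests import and the TOKEN_ENDPOINT, CLIENT_ID and CLIENT_SECRET placeholders from the previous block), the background refresh might look like this:

def refresh_access_token(refresh_token: str) -> dict:
    """Exchange a one-time Refresh Token for a new access token (and a new refresh token)."""
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    tokens = resp.json()  # access_token, refresh_token, expires_in, ...
    # The old refresh token has just been invalidated by the server; store the new one.
    return tokens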

3) Set up custom data output formats

Once the selected grants are implemented and authorization works, it is worth talking about getting data about the end user. OIDC has a separate endpoint for this, where user data can be requested with a current, still valid access token. And if user data does not change that often but has to be checked many times, you can arrive at a solution such as JWT tokens. These tokens are also supported by the standard. A JWT token consists of three parts: header (information about the token), payload (any necessary data) and signature (the token is signed by the server, and the source of the signature can later be verified).
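To illustrate that structure, here is a small sketch that only splits and decodes a token, without verifying it (signature verification against the server's public keys is shown further below):

import base64
import json

def decode_jwt_unverified(token: str) -> tuple[dict, dict]:
    """Split a JWT into its three dot-separated parts and decode the header and payload.
    This does NOT check the signature; it only shows the token's structure."""
    header_b64, payload_b64, signature_b64 = token.split(".")

    def b64url_decode(part: str) -> bytes:
        # restore the base64url padding stripped by the JWT format
        return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))
    # signature_b64 is what gets checked against the server's public key later
    return header, payload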

In the OIDC implementation, such a JWT token is called id_token. It can be requested along with the normal access token, and all that remains is to verify the signature. For this, the authorization server has a separate endpoint with a set of public keys in JWKS format. And speaking of which, there is one more endpoint which, in accordance with RFC 5785, reflects the current configuration of the OIDC server. It contains all the endpoint addresses (including the address of the public key set used for signing), the supported claims and scopes, the encryption algorithms used, the supported grants, etc.

For example, here is Google's configuration:

{
 "issuer": "https://accounts.google.com",
 "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
 "device_authorization_endpoint": "https://oauth2.googleapis.com/device/code",
 "token_endpoint": "https://oauth2.googleapis.com/token",
 "userinfo_endpoint": "https://openidconnect.googleapis.com/v1/userinfo",
 "revocation_endpoint": "https://oauth2.googleapis.com/revoke",
 "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
 "response_types_supported": [
  "code",
  "token",
  "id_token",
  "code token",
  "code id_token",
  "token id_token",
  "code token id_token",
  "none"
 ],
 "subject_types_supported": [
  "public"
 ],
 "id_token_signing_alg_values_supported": [
  "RS256"
 ],
 "scopes_supported": [
  "openid",
  "email",
  "profile"
 ],
 "token_endpoint_auth_methods_supported": [
  "client_secret_post",
  "client_secret_basic"
 ],
 "claims_supported": [
  "aud",
  "email",
  "email_verified",
  "exp",
  "family_name",
  "given_name",
  "iat",
  "iss",
  "locale",
  "name",
  "picture",
  "sub"
 ],
 "code_challenge_methods_supported": [
  "plain",
  "S256"
 ],
 "grant_types_supported": [
  "authorization_code",
  "refresh_token",
  "urn:ietf:params:oauth:grant-type:device_code",
  "urn:ietf:params:oauth:grant-type:jwt-bearer"
 ]
}

Thus, using id_token, you can put all the necessary claims into the token's payload and avoid contacting the authorization server every time you need user data. The disadvantage of this approach is that a change in the user's data on the server does not arrive immediately, but only with the next access token.
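A sketch of that verification using the discovery document and the jwks_uri it advertises. It relies on the PyJWT library's PyJWKClient; the issuer URL and client id are placeholders, and the issuer is assumed to publish a standard /.well-known/openid-configuration like the Google example above.

import jwt        # third-party: pip install pyjwt[crypto]
import requests   # third-party: pip install requests

ISSUER = "https://sso.example.com"   # placeholder issuer
CLIENT_ID = "my-internal-app"        # placeholder audience

def verify_id_token(id_token: str) -> dict:
    """Fetch the OIDC discovery document, pick the signing key from jwks_uri
    and verify the id_token's signature and standard claims."""
    config = requests.get(f"{ISSUER}/.well-known/openid-configuration").json()
    jwks_client = jwt.PyJWKClient(config["jwks_uri"])
    signing_key = jwks_client.get_signing_key_from_jwt(id_token)
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=config.get("id_token_signing_alg_values_supported", ["RS256"]),
        audience=CLIENT_ID,
        issuer=config["issuer"],
    )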

Implementation results

So, after implementing our own OIDC server and configuring connections to it on the application side, we solved the problem of transferring information about users.
Since OIDC is an open standard, we now have the option of switching to an existing provider or server implementation. We tried Keycloak, which turned out to be very convenient to configure: once it is set up, all that remains on the application side is to change the connection configuration, and it is ready to go.

A few words about existing solutions

Within our organization, we assembled our own implementation as the first OIDC server and extended it as needed. Having taken a detailed look at other ready-made solutions since, we can say this is a debatable choice. In favor of implementing our own server were concerns that providers would lack the functionality we needed, as well as the existence of an old system with different custom authorization schemes for some services and a lot of employee data already stored in it. However, the ready-made implementations do offer conveniences for integration. For example, Keycloak has its own user management system, the data is stored directly in it, and migrating your users there is not difficult: Keycloak has an API that lets you perform the entire transfer.
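A rough sketch of such a transfer through the Keycloak Admin REST API (the base URL, realm name, admin credentials and employee record fields are placeholders; older Keycloak versions also prefix these paths with /auth):

import requests  # third-party: pip install requests

KEYCLOAK = "https://keycloak.example.com"   # placeholder base URL
REALM = "internal"                          # placeholder target realm

def admin_token(username: str, password: str) -> str:
    """Obtain an admin access token via the built-in admin-cli client."""
    resp = requests.post(
        f"{KEYCLOAK}/realms/master/protocol/openid-connect/token",
        data={
            "grant_type": "password",
            "client_id": "admin-cli",
            "username": username,
            "password": password,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def import_user(token: str, employee: dict) -> None:
    """Create one user in the target realm from a record in the old employee database."""
    resp = requests.post(
        f"{KEYCLOAK}/admin/realms/{REALM}/users",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "username": employee["login"],
            "email": employee["email"],
            "firstName": employee["first_name"],
            "lastName": employee["last_name"],
            "enabled": True,
        },
    )
    resp.raise_for_status()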

Another example of a certified implementation that I find interesting is Ory Hydra. It is interesting because it is built from separate components: to integrate it, you connect your own user management service to their authorization service and extend it as needed.

Keycloak and Ory Hydra are not the only off-the-shelf solutions. It is best to choose an implementation certified by the OpenID Foundation; such solutions usually carry an OpenID Certified badge.


Also, don't forget about existing paid providers if you don't want to maintain your own OIDC server. Today there are many good options.

What's next

In the near future, we are going to close off traffic to internal services in a different way: we plan to move our current SSO on the balancer, built with OpenResty, to a proxy based on OAuth. There are already many ready-made solutions here, for example:
github.com/bitly/oauth2_proxy
github.com/ory/oathkeeper
github.com/keycloak/keycloak-gatekeeper

More

jwt.io – a good service for inspecting and validating JWT tokens
openid.net/developers/certified – a list of certified OIDC implementations

Source: habr.com
