Waiting for dev…

I happen to be one of the roughly 106,000,000,000 human beings who have inhabited the Earth at some point. In this thing called society, I have taken on the role of a software developer. If I can assist you with anything or if you'd like to chat, please feel free to reach out to me.

Open Source Status: November 2023 - dry-operation failure hooks & database transactions (2023-12-01)

This month, there’s a lot in the making! My two main priorities remain dry-operation and web_pipe, and I’ve put a great deal of thought into both of them. I’m excited to share the progress I’ve made, so let’s get started!

dry-operation: the “unhappy” path & database transactions

As I’ve discussed in previous updates, dry-operation is all about streamlining the happy path. This doesn’t mean that the “unhappy” path is neglected, just that it shouldn’t get in the way of understanding the intended flow. Usually, individual operations are responsible for locally managing their failures. Often, that’s sufficient. The caller will likely also perform some form of failure handling with respect to the entire flow, such as returning a 4xx response code. However, sometimes it’s useful to encapsulate part of that global error handling in the flow instance, treating it as something intrinsic that should be done regardless of the caller (e.g., logging a failure). To facilitate this, we’re introducing an #on_failure hook that can be defined to be called when things go wrong.

class CreateUser < Dry::Operation
  def call(input)
    attrs = step validate(input)
    user = step persist(attrs)
    user
  end

  private

  def on_failure(failure)
    log_failure(failure)
  end

  # ...
end

Regarding database transactions, there are two main approaches we could consider. The first is to wrap the entire flow in a transaction, which appears to be the more user-friendly option. The second is to require manually wrapping the desired operations via a #transaction method, allowing more fine-grained control. The general behavior in both cases would be the same: rolling back a DB transaction, if present, in the case of an operation returning a failure. After much consideration, we’ve decided to go with the second approach. A lot has been done to hide the impedance of database transactions from the developer’s eyes, and few of these efforts have been successful. Database transactions are lower-level details that developers need to be aware of, ensuring that no expensive operations are wrapped within them. Although it goes against a vision of composable flows, dry-operation will lean towards encouraging developers to compose operations instead of entire flows. Thanks to its design and helper libraries like dry-auto_inject, dry-operation operations are completely decoupled from the wrapping flow, making them suitable for composability at the right level of granularity.

An extension for ROM is the first working example of this approach, but we’ll eventually add support for other libraries.

class MyOperation < Dry::Operation
  include Dry::Operation::Extensions::ROM

  attr_reader :rom

  def initialize(rom:)
    @rom = rom
  end

  def call(input)
    attrs = step validate(input)
    user = transaction do
      new_user = step persist(attrs)
      step assign_initial_role(new_user)
      new_user
    end
    step notify(user)
    user
  end

  # ...
end
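
The rollback behavior described above can be sketched in plain Ruby. This is an illustration of the idea only: the connection object is a fake defined just for the example, and the real extension delegates to ROM’s transaction API rather than this hand-rolled catch/throw.

```ruby
# A stand-in connection recording what happens to it (illustration only;
# the real extension delegates to ROM's transaction API).
class FakeConnection
  attr_reader :calls

  def initialize
    @calls = []
  end

  def begin_tx
    @calls << :begin
  end

  def commit
    @calls << :commit
  end

  def rollback
    @calls << :rollback
  end
end

# Wrap a block in a transaction: commit on success, but if a step inside
# throws the halting symbol with a Failure, roll back and re-throw so the
# surrounding flow still short-circuits.
def transaction(conn)
  failure = catch(:halt) do
    conn.begin_tx
    result = yield
    conn.commit
    return result
  end
  conn.rollback
  throw :halt, failure
end
```

Here, :halt stands in for the symbol that #step throws when an operation returns a failure; re-throwing after the rollback preserves the short-circuiting semantics of the surrounding flow.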

By the way, if you’re more interested in the thought process behind these decisions, you can check and comment on the gist where we discussed the approach to take.

web_pipe: welcome to the Zeitwerk family

From now on, web_pipe is part of the growing family of Zeitwerk-enabled Ruby gems.

Additionally, I’m experimenting a lot with its architecture, and it could result in a significant overhaul of its internals. The idea is to remove injection responsibilities from it and, instead, rely on something like dry-auto_inject. However, it’s still too soon to share more information, so please stay tuned!

Open Source Status: October 2023 - Syntax: dry-operation vs. do notation (2023-11-07)

In September, we witnessed the birth of dry-operation, a new library designed for managing business flows in Ruby. During October, my focus shifted towards optimizing its developer experience (DX), and I also got to revisit my beloved project, web_pipe.

dry-operation: Ruby’s magic wand to remove boilerplate

I typically approach Ruby’s metaprogramming capabilities with caution. I’ve seen them misused too many times, and I strongly believe that, in most cases, being explicit rather than implicit is the better choice. However, there are certain scenarios where metaprogramming can be a powerful tool for reducing boilerplate and enhancing the developer experience. This is precisely where dry-operation shines.

The essence of dry-operation lies in crafting easily readable business flows, emphasizing the “happy path.” Back in September, we agreed upon the following interface:

class MyOperation < Dry::Operation
  def call(input)
    steps do
      attrs = step validate(input)
      user = step persist(attrs)
      step notify(user)
      user
    end
  end
end

The step method will unwrap a Dry::Monad::Result::Success returned by each operation, but it’ll short-circuit the flow in case of Failure. It does so by throwing a symbol that is caught by the surrounding #steps.
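
That throw/catch mechanism can be sketched in plain Ruby. The following is an illustration only, with minimal structs standing in for Dry::Monads results; it is not dry-operation’s actual source.

```ruby
# Minimal stand-ins for Dry::Monads::Result (illustration only).
Success = Struct.new(:value)
Failure = Struct.new(:error)

module FlowSteps
  # Runs the block and catches the symbol thrown by #step,
  # returning the Failure as the overall result.
  def steps(&block)
    catch(:halt, &block)
  end

  # Unwraps a Success; short-circuits the surrounding #steps on Failure.
  def step(result)
    case result
    when Success then result.value
    when Failure then throw :halt, result
    end
  end
end

class SignUp
  include FlowSteps

  def call(input)
    steps do
      attrs = step validate(input)
      user = step persist(attrs)
      Success.new(user)
    end
  end

  private

  def validate(input)
    input[:name] ? Success.new(input) : Failure.new(:invalid)
  end

  def persist(attrs)
    Success.new({ id: 1 }.merge(attrs))
  end
end
```

SignUp.new.call(name: "Alice") runs the whole flow and returns a Success, while SignUp.new.call({}) short-circuits at validation and returns Failure(:invalid) directly.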

Already, this is quite optimized for readability. Now, see how it could appear without dry-operation:

class MyOperation 
  def call(input)
    validate(input).bind do |attrs|
      persist(attrs).bind do |user|
        notify(user).bind do |user|
          Success(user)
        end
      end
    end
  end
end

You’re correct; we can do better with dry-monad’s do notation:

class MyOperation 
  include Dry::Monads::Do.for(:call)
  
  def call(input)
    attrs = yield validate(input)
    user = yield persist(attrs)
    yield notify(user)
    
    Success(user)
  end
end

There are other benefits that dry-operation will provide over dry-monad’s do notation, but we can already compare their boilerplate. dry-operation comes out ahead in the following aspects:

  • Inheriting from Dry::Operation is more concise than including Dry::Monads::Do.for(:call).
  • Using a step method feels less confusing than yield.
  • There’s no need to explicitly wrap the returned value in a Success object in dry-operation.

Nonetheless:

  • dry-monad’s do notation doesn’t require wrapping the sequence of steps in a surrounding block (steps in dry-operation).

Here’s where the metaprogramming magic comes into play. After a couple of iterations, and thanks to the valuable feedback from Tim Riley, the happy path in dry-operation has become even more straightforward, as the #steps block is no longer required:

class MyOperation < Dry::Operation
  def call(input)
    attrs = step validate(input)
    user = step persist(attrs)
    step notify(user)
    user
  end
end

The way it works is that Dry::Operation automatically prepends a steps block to the #call method. All batteries are included by default, but if you’re a purist, there’s always the possibility to opt out of any kind of magic through a skip_prepending class method.
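
The prepending idea itself can be sketched with Ruby’s Module#prepend plus the method_added hook. Again, this is a simplified illustration rather than dry-operation’s real implementation, and the FAILURE sentinel stands in for an actual monadic failure value.

```ruby
class Operation
  FAILURE = Object.new # sentinel standing in for a monadic Failure

  # Whenever a subclass defines #call, prepend a module whose #call
  # wraps the original in a catch block, so no explicit `steps do` is needed.
  def self.method_added(name)
    return unless name == :call

    prepend(Module.new do
      define_method(:call) do |*args|
        catch(:halt) { super(*args) }
      end
    end)
  end

  # Unwraps plain values; short-circuits the prepended catch on FAILURE.
  def step(result)
    result.equal?(FAILURE) ? throw(:halt, result) : result
  end
end

class Greet < Operation
  def call(name)
    checked = step(name)
    "Hello, #{checked}!"
  end
end
```

An opt-out in the spirit of skip_prepending could then simply disable the method_added hook for that class.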

web_pipe: on its way to 1.0

web_pipe serves as a lightweight layer on top of Rack, designed for building composable web applications. It has been around for a while, yet it hasn’t reached version 1.0. I am committed to changing that and gradually solidifying its interface until the next major release. Last month, I made a few updates.

What’s next?

In November, I anticipate dedicating my efforts to gracefully managing the “unhappy” path within dry-operation. Additionally, I’ll be exploring the optimal approach for integrating database transactions. I’m excited to make progress and continue moving forward!

Open Source Status: September 2023 - Hello, dry-operation! (2023-10-10)

You promised yourself to provide monthly updates on your Open Source activities, and suddenly, a whole year has passed since the last one. But rest assured, I haven’t been idly lounging on the sofa during this time, as much as I might have wished for that luxury.

As I shared in my last update back in September 2022, I’ve been dedicating a significant amount of thought to shaping a library for handling business transactions in Ruby. The spark was reignited when the possibility arose for this library to become an integral part of a service layer for Solidus. However, due to other pressing priorities within the project, my efforts were redirected, but I persevered, chipping away at it during my limited free time. Eventually, I successfully developed a prototype, which I affectionately named “kwork.” Naming can be quite a challenge. In this case, it paid homage to Murray Gell-Mann, who originally contemplated the spelling “kwork” for quarks (one of the fundamental constituents of matter). The pun tried to mirror the idea that business transactions, too, should be indivisible when it comes to their outputs:

In 1963, when I assigned the name “quark” to the fundamental constituents of the nucleon, I had the sound first, without the spelling, which could have been “kwork”. Then, in one of my occasional perusals of Finnegans Wake, by James Joyce, I came across the word “quark” in the phrase “Three quarks for Muster Mark”.

Kwork is no more; long live dry-operation

In Catalan, there’s a saying that goes, “Roda el món i torna al Born,” which loosely translates to “Travel the world and come back to El Born.” El Born is an ancient district with medieval origins in Barcelona. My grandmother settled there with her family after fleeing their rural hometown, escaping the Francoist army, and it’s also where my mother was born. Today, it’s transformed into a hub of trendy shops, and the local community can no longer afford to reside there. Nevertheless, this saying conveys the notion that, after a multitude of new experiences, one often finds themselves back where they started. This sentiment beautifully encapsulates my journey with business transactions in Ruby.

Over four years ago, I submitted a PR to dry-transaction to address one of its most significant limitations, which was the inability to freely reuse output from previous steps. Following discussions within the dry-rb team, Tim Riley informed me that they had decided to retire the library due to this and other constraints. At that time, it was quite a disappointment for me, as I believed we were on the cusp of delivering exactly what Ruby needed: an idiomatic monadic DSL.

A couple of years later, now as a contributor to dry-rb, I proposed a couple of solutions to resurrect dry-transaction. The first approach closely mirrored the old dry-transaction and the work in that PR, while the second leaned more towards the final shape of kwork or dry-monad’s do notation. However, the team was deeply engrossed in their work on Hanami, and it was entirely understandable that they couldn’t manage everything simultaneously.

Just last month, I learned that Tim Riley was eager to commence work on a dry-transaction successor. We engaged in extensive conversations over Slack, exchanging lengthy messages with Tim and Brooke Kuhlmann (author of transactable). Concepts surrounding monads, do notation, dry-transaction, and the Railway Pattern were all in the mix, helping us to envisage what could potentially be the most fitting form for a Ruby library.

And so, here I am, actively contributing to the inception of dry-operation. My initial work this month has involved the extraction of key elements from kwork, molding them to align with the established patterns within the dry-rb ecosystem. I’m thrilled to report that we’ve already got the syntax for the fundamental features up and running, and we’re now diligently seeking the best possible compromise to enhance the developer experience.

On a personal note, I’ve been eagerly looking forward to deepening my collaboration with dry-rb for quite some time. Having the opportunity to spearhead the development of a new library fills me with immense joy and gratitude. I’d like to extend my heartfelt thanks to Tim and the entire team for placing their trust in me for this exciting endeavor!

Open Source Status: September 2022 (2022-10-02)

This month was important in the personal sphere, as I moved out to the mountains (well, to a town in the mountains, to be fair). Still, I managed to dedicate some time to Open Source.

Decoupling the router from the application

I already mentioned in the past update that we were making hanami-router an optional component in Hanami. I was working on a PR, but it needed to be reverted. We were also trying to improve DX by raising an error when fetching the configuration for a gem that is not there. We still need more work there, but we extracted the part that makes hanami-router optional and merged it on another PR.

In the freezer: taking the app path on hanami new

I started this one as a personal initiative, as I really wanted to have hanami new . work. I always develop using Docker, so my flow consists of initializing a container in the current working directory where I want to create the application. One thing led to another, and I realized it was more natural to have the hanami new argument always be a path instead of the app module name, with . as an exception. However, there are some concerns about how it communicates to the user, and as it’s not a priority at this point, we’ll pick it up again once more important work has been done.

In progress: making nested slices first-class

Tim introduced the option to register a slice within another slice. However, nested slices aren’t picked up yet by slice-configurable settings. We need to fix that.

Excited with a layer for service objects

I already said I don’t talk about my work for Solidus here, as that’s part of my paid job, while these OS status updates are primarily directed at my GitHub sponsors.

However, I’m very excited about a service layer we’ll add there, as it will be built taking concepts from monads, railway programming, and atomicity patterns. That’s just the same idea I had for a new version of dry-transaction, so there’s a total overlap in my mind. I don’t know where it will land, but I see a field for collaboration here.

Open Source Status: August 2022 (2022-09-01)

Hey, there!

This month I decided to take the step and open sponsorship on GitHub. I reckon this entails some kind of acknowledgment, so I’ll try to publish regular updates about my Open Source status.

Let’s start with what happened last month (August 2022)!

Important notice: I won’t report about what we’re doing at Solidus, as that’s already part of my paid work with the fantastic team at Nebulab.

More robust application detection in Hanami

We’ve improved how Hanami detects the application file to set up its environment. Previously, it wasn’t possible to run bundle exec hanami console from a sub-directory within the application root. Now, a bit of recursion up the filesystem hierarchy improves the development experience at the command line. I initially took this one, and Tim, who knows best how Hanami is organized, reshaped it in the best possible way.
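
That kind of lookup can be sketched as a walk up the directory tree. This is only an illustration: the config/app.rb file name checked here is an assumption made for the example, not Hanami’s exact detection logic.

```ruby
require "pathname"

# Walk up from `dir` until a directory containing `target` is found.
# Returns that directory as a Pathname, or nil once the filesystem
# root has been checked without success.
def find_app_root(dir, target = "config/app.rb")
  path = Pathname(dir).expand_path
  loop do
    return path if path.join(target).file?
    break if path.root?

    path = path.parent
  end
  nil
end
```

With something like this in place, running a command from app_root/lib/deep still resolves to app_root.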

Listing middlewares

This one came from July, but the command to list all registered Rack middlewares got merged and included in the latest beta of Hanami. The output is pretty informative, rendering the type of the middleware (class, object…), the arguments provided on initialization, and the path where it applies. It also required some minor adjustments in Hanami’s main repo.

Reloading Hanami console

We’ve introduced a reload method available in the Hanami console. Its implementation was no secret; it was just copied from what Hanami 1 did: replacing the current process with a new invocation.

Minor improvements

We removed a leftover file and all the old integration tests. The latter will make the test suite less confusing for new developers.

Under way: decoupling the router from the application

Hanami 2 is not only for web applications, so the router is not a mandatory system part. We’re using the same logic already present for actions and views to make router configuration optional. Following Tim’s recommendations, we’ve also improved the developer experience with a clear error message.

That’s pretty much everything. However, I don’t want to say goodbye without thanking Seb for being my first sponsor (and, once more, for his amazing work at HanamiMastery).

See you soon!

A walk with JWT and security (and IV): A secure JWT authentication implementation for Rack and Rails (2017-01-26)

After having discussed JWT authentication and security in the three previous posts (I, II, and III), I would like to share two Ruby libraries I built to implement the security tips we have covered so far. One of them is warden-jwt_auth, which can be used in any Rack application that uses Warden as its authentication library. The other one is devise-jwt, a thin layer on top of the first that automatically configures everything for Devise and, consequently, for Ruby on Rails.

What did I expect from a JWT authentication Ruby library?

When I looked for existing Ruby libraries helping with JWT authentication, I wanted them to meet a series of conditions:

  • I wanted it to rely on Warden, a heavily tested authentication library that works for any kind of Rack application. This decision was also based on the fact that most applications in my current company use it.
  • It should be easily pluggable into Rails with Devise (which uses Warden). That way, in Rails applications, I could use Devise database authentication for the sign-in action and JWT for the rest.
  • Relying on Warden and not being a full authentication system, it should be very simple to audit.
  • Zero monkey patching. A lot of libraries meant to work with Devise have a lot of monkey patching. I don’t like that.
  • It should be ORM agnostic when used outside of Rails.
  • It should support or make easy the implementation of a revocation strategy on top of JWT.
  • It should be flexible enough to distinguish between different user resources (like a User and an AdminUser), so that a token valid for one of them can’t impersonate another resource user record.

I looked at what had been done so far, and here is what I found.

  • Knock. It is surely the most serious attempt at implementing JWT authentication for Rails applications, but it is Rails-specific, does not integrate with Devise, and has ruled out adding a revocation layer.

  • rack-jwt. It is nice that it works with any Rack application, but it also needs some handwork to integrate with tested authentication libraries like Warden. Besides, it doesn’t help with revocation.

  • jwt_authentication. It is only for Rails with Devise. I think it is quite complex, and it mixes in simple_token_authentication, which adds even more complexity and monkey patching. It doesn’t support revocation.

What I have done

As I said, I finally ended up writing two libraries. I’m not going to go through their details, because they can be consulted in their READMEs and will surely change over time. I’ll just give a general overview of what they are.

warden-jwt_auth

warden-jwt_auth works for any Rack application with Warden.

At its core, this library consists of:

  • A Warden strategy that authenticates a user if a valid JWT token is present in the request headers.
  • A rack middleware which adds a JWT token to the response headers in configured requests.
  • A rack middleware which revokes JWT tokens in configured requests.

As it requires the user to implement user resource interfaces along with revocation strategies, it is completely ORM agnostic.

Since Warden has the concept of “scopes”, it can be leveraged to flag each token so that user records from different resources are not confused.

devise-jwt

devise-jwt is just a thin layer on top of warden-jwt_auth. It configures warden-jwt_auth to work out of the box with Devise and Rails.

Basically, it:

  • Creates a Devise module which, when added to a Devise user model, configures it to use the JWT authentication strategy. This module also implements the required user interface for ActiveRecord.
  • Implements some revocation strategies for ActiveRecord.
  • Configures Devise session-creation requests to dispatch tokens.
  • Configures Devise session-destruction requests to revoke tokens.

I hope they can be useful.

A walk with JWT and security (III): JWT secure usage (2017-01-25)

In this post, after having discussed the why and how of revoking JWT tokens, we’ll talk about some general advice that should be kept in mind while using JWT for user authentication.

The most important one is what we have already mentioned: add a revocation layer on top of JWT, even if it implies losing its stateless nature. The opposite is, most of the time, not acceptable. But there are more aspects to consider.

  • Don’t add information that may change for a user. For instance, people often recommend adding role information so that a database query can be saved. For example, in your JWT payload, you would state that Alice has a given user ID and the admin role. What happens if the admin privilege is revoked for Alice? If a token stating otherwise is still valid, she could still appear as an admin.

  • Don’t add private information unless you encrypt your tokens. Bare JWT (without encryption) is just a signed token. It means the server knows perfectly well whether it issued the incoming token or not, but the information contained in the token is readable by everyone (try it in the “Debugger” section of the JWT site); it is just base64 encoded. So I recommend encoding only harmless information, like the user id.

  • Be careful about what you use to identify a user. For example, if you just add the user id and you have two different user resources, say a User and an AdminUser, a valid token for the User with id 3 would be valid for the AdminUser with the same id. In this case, you should add and use something like a scope claim to distinguish between the two.
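
The last point boils down to comparing one extra claim. In the sketch below, the claim name (“scp”) and the payloads are hypothetical, chosen just to make the idea concrete:

```ruby
# Reject a structurally valid token that was issued for a different
# user resource, even if the ids coincide.
def authorized_for?(payload, expected_scope)
  payload["scp"] == expected_scope
end

# Two tokens with the same "sub" but different scopes (hypothetical payloads).
user_token  = { "sub" => "3", "scp" => "user" }
admin_token = { "sub" => "3", "scp" => "admin" }
```

A token scoped to "user" can then never authenticate an AdminUser endpoint, even though both records share id 3.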

With all of that in mind, in my view, when choosing an authentication mechanism, if possible, you should prefer using cookies. They have been around for a long time and they are battle-tested. Authentication is an essential security aspect of an application, and the closer you stay to the standard way, the more peacefully you will sleep.

So, in which situations might it be better not to use cookies? I can think of two:

  • It is not easy to share cookies between different domains. That’s not true for CORS requests, where the Access-Control-Allow-Credentials header can be used to instruct the browser to accept them. But, for instance, routine GET requests exclude you from this option.

  • Mobile platform support. It is no longer an issue on modern platforms, but if you need legacy compatibility, you could run into trouble.

Besides, you should be aware that JWT authentication could expose you to XSS attacks, even though it is also true that cookies could expose you to CSRF attacks. In both cases, the best thing you can do is use good and modern tools and frameworks, always avoiding reinventing the wheel.

A walk with JWT and security (II): JWT revocation strategies (2017-01-24)

In my last post, I concluded why, in my view, JWT revocation makes a lot of sense. Now, it’s time to analyze which revocation strategies can be used and what their pros and cons are.

Short-Lived tokens

The first one I’ll mention is not an actual revocation strategy, but some people argue that it is the best you can do with JWT to maintain its stateless nature while still mitigating its lack of built-in revocation. They say JWT tokens should have short expiration times, like, say, 15 minutes. This way, the time frame for an attacker to do some harm is reduced.

I don’t agree with that. I think it is still unacceptable not to have an actual server-side sign-out request. If the client fails to do its job of destroying the token, a user could think they have signed out from a server while somebody coming right after could impersonate them.

Furthermore, it also has implications from a usability point of view. On some sites, such as online banking, it makes sense to force a user to sign in again after 15 minutes of inactivity, but in other scenarios, it can be a real hassle.

Signing Key Modification

Changing the signing key automatically invalidates all issued tokens, but it is not a way to invalidate single tokens. Doing that in a sign-out request would mean that everybody gets kicked out when just one requires leaving. Changing the signing key is something that must be done when there is some indication that it could have been compromised, but it doesn’t fit the scenario we are talking about.

Tokens Denylist

Keeping a denylist of tokens is the simplest actual revocation strategy for individual tokens. If we want to invalidate a token, we extract from it something that uniquely identifies it, like its “jti” claim, and we store this information somewhere (for example, in a database table). Then, each time a valid token arrives, the same information is extracted from it and checked against the denylist to decide whether to accept or refuse the token.

It is very important to notice that what is persisted is not the whole token but some information that uniquely identifies it. If we persisted whole tokens, we would have to hash them with a different salt for each one, treating them similarly to passwords. A “jti” claim alone, however, is completely useless to an attacker.
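
A minimal sketch of the idea with an in-memory set (a real implementation would persist the identifiers in a database table and likely expire old entries):

```ruby
require "set"

# Denylist keyed by the token's "jti" claim (illustration only).
class Denylist
  def initialize
    @revoked = Set.new
  end

  # Called on sign-out: store only the unique identifier, never the token.
  def revoke(payload)
    @revoked << payload.fetch("jti")
  end

  # Called on every request, after signature verification has passed.
  def revoked?(payload)
    @revoked.include?(payload.fetch("jti"))
  end
end
```

Note that revoking one "jti" leaves a user's other tokens (on other devices) untouched, which is exactly the fine-grained control described below.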

A denylist strategy has some advantages:

  • Very easy to implement.

  • It works well when a user can be signed in from different devices and we just want to sign them out from one of them. Each denylist record references one single token, so we have complete control.

But it also has some drawbacks:

  • With a denylist, there would be no way to sign a user out from all their devices. We would need to add some flag in the user record.

  • A denylist can grow quite rapidly if the user base is large enough because, at least, every sign-out request would add a record. To mitigate it, maintenance cleaning could be scheduled, which would empty the list at the same time that the signing key is changed. Of course, this would sign out all users, but surely it is something we can live with.

  • A denylist usually requires a second query (the one to the list) besides the usually needed query for the user.

User-Attached Token

This strategy is analogous to what is usually done with opaque tokens. Here, the information that uniquely identifies a token is stored attached to the same user record that it authorizes; for example, as another column in the user record. When a token with a valid signature comes in, that information is extracted and compared with the one attached to the user to decide whether authorization should be allowed. When we want to sign out the token, we simply change or nullify it.

In fact, these two steps in the checking process (fetching the user and comparing the token with the persisted information) can be reduced to a single one if we fetch the user through the token information. For example, we could have a “uid” column where we would store the “jti” claim of the current active token. When fetching the user, we would look that column up, and no matching would mean that the token is not valid. Of course, we need to be sure that the “uid” is actually unique.
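
An in-memory sketch of this single-query variant (illustrative only; in a real application the “jti” would live in a unique-indexed column on the users table):

```ruby
# Users indexed by the "jti" of their currently active token
# (illustration only; a database would use a unique-indexed column).
class UserStore
  def initialize
    @by_jti = {}
    @users = {}
  end

  # Attach a freshly issued token's jti to the user, replacing any
  # previous one so the old token stops being valid.
  def issue(user_id, jti)
    @by_jti.delete_if { |_, id| id == user_id }
    @by_jti[jti] = user_id
    @users[user_id] = { id: user_id, jti: jti }
  end

  # Single lookup: fetching the user by jti also validates the token.
  def find_by_token(payload)
    user_id = @by_jti[payload.fetch("jti")]
    user_id && @users[user_id]
  end

  # Sign-out: nullify the attached jti.
  def revoke(user_id)
    @by_jti.delete_if { |_, id| id == user_id }
  end
end
```

Issuing a new token implicitly revokes the previous one, and a failed lookup doubles as token rejection, so no second query is needed.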

As in the case of the denylist strategy, it is very important to understand that it is not the whole token that is persisted but a unique representation of it to avoid security concerns related to secrets storage (see the denylist section for details).

User-attached tokens have several advantages that, for me, make them a better idea than the denylist strategy:

  • Instead of storing which tokens are not valid (which grows over time), we store the one that is valid. As a consequence, there is no need to schedule any clean-up.

  • A single query to the user resource suffices to do the checking, instead of having to query both the user and a denylist.

On the other hand, it also has one drawback:

  • It makes it more challenging to deal with multiple client applications from which a single user can interact. In this scenario, the user storage should differentiate active tokens per client application, maybe using different columns or some kind of serialization. From the token side, a good idea would be to differentiate the client through the “aud” claim. Signing out from one device would mean revoking just the affected token, while signing out from all clients would require revoking all the tokens for that user.
A walk with JWT and security (I): Stand up for JWT revocation (2017-01-23)

There is some debate about whether JWT tokens should be revoked (for example, when signing a user out) or whether doing so is nonsense and breaks the primary reason why this technology exists.

JWT tokens are self-describing. They encapsulate the information that a client is trying to provide to an API, such as the ID of a user trying to authenticate. For this reason, they are described as stateless. To trust an incoming token, the server only needs to verify that the token was signed with its own key; it doesn’t have to query any other service, like a database or another API.
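
For an HMAC-signed token, that verification needs nothing beyond the server’s own secret. The standard-library sketch below shows the shape of it; it is for illustration only (the secret is made up), and real applications should rely on a vetted JWT library:

```ruby
require "openssl"
require "base64"
require "json"

SECRET = "server-side-secret" # assumption: an HS256-style shared secret

def b64(data)
  Base64.urlsafe_encode64(data, padding: false)
end

# Issue a token: header.payload.signature, each part base64url-encoded.
def sign(payload)
  header = b64(JSON.generate({ alg: "HS256", typ: "JWT" }))
  body   = b64(JSON.generate(payload))
  sig    = b64(OpenSSL::HMAC.digest("SHA256", SECRET, "#{header}.#{body}"))
  "#{header}.#{body}.#{sig}"
end

# Verify: recompute the signature over the first two parts and compare.
def verified_payload(token)
  header, body, sig = token.split(".")
  expected = b64(OpenSSL::HMAC.digest("SHA256", SECRET, "#{header}.#{body}"))
  return nil unless sig == expected # use a constant-time compare in real code

  JSON.parse(Base64.urlsafe_decode64(body))
end
```

Note that verification never touches a database: the secret alone decides whether the token is trusted, which is precisely the statelessness discussed here.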

Self-describing tokens exist in opposition to more traditional opaque tokens, which are usually strings of characters that an API needs to check against another server to see if they match the one associated with a user record.

The stateless nature of a pure JWT implementation has a very important implication. A server has nothing to say about a token except whether it was signed by itself or not, so it has no way to revoke them individually (it could invalidate all issued tokens at once by changing its signing key, but not a single one in isolation). This means that it is not possible to create an actual sign-out request from the server-side.

Even though this fact can be seen as a testament to stateless purism, I consider it close to an abomination from a security point of view. It is true that if both the client and the API belong to the same team, client-side token disposal can be kept under control. But server-side technologies have better tools to deal with security and fewer attack vectors than, say, a web browser. If the API is consumed by third-party clients, then relying on the assumption that they will do the right job is completely unacceptable.

However, there is nothing in JWT technology that prevents adding a revocation layer on top of it. In this scenario, an incoming token is verified, and, like in opaque tokens, another server is reached to check if it is still valid.

At first glance, it actually seems like nonsense. It looks like we are ending up in the same land as opaque tokens with the additional signature verification overhead. However, a closer look reveals that we gain some security benefits and that, in fact, there is no such overhead.

Let’s first examine the security benefits we can get. When revoking a JWT token, there is no need to store the whole token in the database. As it contains readable information, we can, for example, extract its “jti” claim, which uniquely identifies it. This is a huge advantage because it means that the stored information is completely useless for an attacker. Therefore, there is no need to hash it to implement a good zero-knowledge policy, and there is no need to keep a salt value for each user to protect against rainbow table attacks.

Now, about the alleged overhead that JWT with revocation would entail. As we said, with JWT, we have to take two steps: signature verification and a server query. In opaque tokens, it seems like we only have to query the server. But that is not true. A secure opaque token implementation should not store unencrypted tokens. Instead, it should require the client to send a kind of user UID along with the unencrypted token. The user UID would be used to fetch the user, and the unencrypted token would be securely compared with the hashed one. So, this hash comparison is also a second step, which, even though I haven’t benchmarked it, should have a similar overhead to signature verification.
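
The opaque-token flow described above can be sketched like this (illustrative only; a production implementation would use a vetted constant-time comparison helper, such as the ones shipped with Rack or recent OpenSSL):

```ruby
require "digest"

# Constant-time comparison of two equal-length strings, so the response
# time doesn't leak how many leading characters matched.
def secure_compare(a, b)
  return false unless a.bytesize == b.bytesize

  a.bytes.zip(b.bytes).inject(0) { |acc, (x, y)| acc | (x ^ y) }.zero?
end

# The server stores only a hash of the opaque token, keyed by user UID.
STORE = {} # uid => hashed token

def issue_opaque(uid, token)
  STORE[uid] = Digest::SHA256.hexdigest(token)
end

# Step 1: fetch the user's stored hash by UID.
# Step 2: hash the presented token and compare in constant time.
def authenticate(uid, token)
  stored = STORE[uid]
  !stored.nil? && secure_compare(stored, Digest::SHA256.hexdigest(token))
end
```

The hash-and-compare in step 2 is the second step argued above: it plays the same role as JWT signature verification, so opaque tokens don’t actually save any work.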

Using a standard like JWT also has some abstract benefits that are difficult to measure. For example, usually, with current libraries, you get integrated expiration management through the “exp” claim. However, as far as I know, there is no standard for opaque tokens, which makes libraries prone to reinvent the wheel every time. In general, using JWT should be more portable.

Of course, I’m not saying that JWT with revocation is always good and opaque tokens are always bad. JWT-specific attacks have been detected (which good libraries should have fixed by now), and irresponsible use of JWT can have some dangers that we’ll examine in further posts. In the end, developers must be aware of what they are using, and a secure opaque token implementation is also perfectly valid. But adding a revocation layer on top of JWT shouldn’t be disregarded so easily. In the next post, we’ll take a look at some revocation strategies that can be implemented.

