Benchmarking Secure Comparator

When we conceived Secure Comparator, we saw that it was going to be somewhat slower than existing authentication methods.

This is a consequence of the different demands and different security guarantees Secure Comparator provides: it lets systems with zero shared information exchange requests for data in situations where the request data itself would be a leak.

Should we measure idealized behavior?

After we finished the Secure Comparator PoC code and measured its performance in an ideal environment (C code calling other C code, socket transport, etc.), an important question emerged: how slow is it in the real world?

In the real world, authentication code is called by production code written in some high-level language, rarely in pure C. In the real world, authentication does not exist in a vacuum: it is surrounded by transport encryption, record serialization, and a gazillion other things.

To understand how much SC would affect overall application performance, we decided to compare it against the fastest authentication method we know, basic HTTP authentication, in different execution contexts.

Goal, architecture, methodology

What we wanted to do was put Secure Comparator into a realistic context:

  • the authenticating server would be lighttpd (because it's decently fast and it's quite easy to alter its behavior)
  • the clients would be quick'n'dirty Python and Go hacks
  • the Python and Go code would intentionally be written by a person who does not program in either language every day, but who is a well-educated developer (a C++ programmer, to be precise); the languages were chosen because most of Toughbase code that is not in C is either in Python or Go (and it is slowly gravitating to the latter).

The architecture would be a rather simple client-server setup.

There are two ways to measure performance:

  1. full run: starting interpreter and loading/initializing everything
  2. in-script run: starting everything once and then just measuring the code responsible for authentication

We've done both: in terms of 'abstract performance', way 2 is more correct, but to understand the penalties imposed by the ecosystem, the metric from the first type of experiment is very useful. It shows how much everything around the code you measure actually costs you.
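The two measurement modes can be sketched roughly like this. This is an illustrative skeleton, not the actual benchmark harness; `client.py` and the `authenticate` callable are hypothetical stand-ins for the real client code:

```python
import subprocess
import time

def full_run(n=10):
    """Mode 1: time the whole process, interpreter start-up and
    library loading included. 'client.py' stands in for a script
    that performs one authentication and exits."""
    t0 = time.perf_counter()
    for _ in range(n):
        subprocess.run(["python", "client.py"], check=True)
    return (time.perf_counter() - t0) / n

def in_script_run(authenticate, n=10):
    """Mode 2: everything is already loaded and initialized;
    time only the code responsible for authentication."""
    timings = []
    for _ in range(n):
        t0 = time.perf_counter()
        authenticate()
        timings.append(time.perf_counter() - t0)
    return min(timings), sum(timings) / n, max(timings)
```

The minimum/average/maximum triple returned by mode 2 matches the rows in the tables below.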

The protocol: regular HTTP AUTH

The protocol is regular HTTP AUTH performed by a real-world server, but with some exceptions:

  • The client initiates the connection already trying to authenticate. We've skipped the "Authentication required" step since it's not affected by the operations under comparison.
  • The client uses a typical language-specific library to perform authentication.
  • The process is considered complete when the server sends 200 OK.
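One round of this protocol can be sketched as follows. This is a minimal illustration using only the Python standard library (the actual benchmark client used a typical HTTP library instead); host, port, and credentials are placeholders:

```python
import base64
import http.client

def basic_auth_header(user, password):
    """RFC 7617 Basic credentials: base64 of 'user:password'."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def http_auth_request(host, port, user, password, path="/"):
    """One authentication round: credentials are sent with the very
    first request, skipping the usual 401 challenge, as in the
    benchmark. The round is complete when the server answers 200 OK."""
    conn = http.client.HTTPConnection(host, port)
    conn.request("GET", path, headers=basic_auth_header(user, password))
    status = conn.getresponse().status
    conn.close()
    return status == 200
```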

The protocol: Secure Comparator

Even though Secure Comparator is a stateful protocol (it involves the parties storing some temporary data), the application-level protocol was made as simple as possible:

  • Client sends headers Authorization: Themis <user_name> <comparator payload>
  • Server responds with 302 Redirect <same path as request> (so that the client knows it has to send the request to the same address) and Authorization: Themis <user_name> <comparator payload>
  • Client sends headers Authorization: Themis <user_name> <comparator payload>
  • Server sends Authorization: Themis <user_name> <comparator payload>
  • If the last Comparator payload from the server is valid, the protocol is considered successfully finished.
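The header framing used above can be sketched like this. In the real benchmark the payload bytes come from the Secure Comparator itself; here they are opaque placeholders, and the helper names are ours. Since comparator payloads are binary, they are base64-encoded to survive HTTP header transport (the base64 cycles mentioned later are spent here):

```python
import base64

def make_auth_header(user_name, payload):
    """Frame one comparator message as the benchmark's HTTP header:
    'Authorization: Themis <user_name> <base64 payload>'."""
    return "Authorization: Themis {} {}".format(
        user_name, base64.b64encode(payload).decode())

def parse_auth_header(header):
    """Inverse operation: recover (user_name, raw payload bytes)
    from a received header line."""
    _, scheme, user_name, b64 = header.split(" ")
    assert scheme == "Themis"
    return user_name, base64.b64decode(b64)
```

Client and server exchange four such headers in total (two from each side), matching the steps above.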

The experiment

Our test was aimed at understanding how much more expensive SC would be compared to regular http_auth over a regular HTTP connection:

Python: in-script repeats, 10000 runs

            Secure Comparator   HTTP_AUTH
  Minimum   0.075696            0.005163
  Average   0.093323            0.006472
  Maximum   0.202766            0.026345

What does this show us? In this case, Secure Comparator is 7 to 15 times slower, and most of the time it's closer to 15 times slower (compare the averages to the minimums: the proportion is similar). The numbers are a bit distressing, but we're drag racing an M1A1 tank against a lightweight Formula 1 car here, and measuring only when the Formula 1 car is already at full throttle.
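The "7 to 15 times" figure follows directly from the table above:

```python
# In-script Python timings from the table (seconds per run).
sc = {"min": 0.075696, "avg": 0.093323, "max": 0.202766}
ha = {"min": 0.005163, "avg": 0.006472, "max": 0.026345}

# Slowdown of Secure Comparator relative to HTTP auth.
slowdown = {k: sc[k] / ha[k] for k in sc}
# min ≈ 14.7x, avg ≈ 14.4x, max ≈ 7.7x: hence "7 to 15 times slower",
# with typical (average) runs sitting near the 15x end.
```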

Now, moving from the lab toward the real world, let's see how penalizing the Python HTTP infrastructure (the Requests package) is compared to the lightweight set of Python features Themis requires (and bear in mind that we're wasting precious cycles on base64 in the Themis case):

Python: full script, 10000 runs

            Secure Comparator   HTTP_AUTH
  Minimum   0.455740            0.361105
  Average   0.552638            0.451385
  Maximum   1.647957            1.350096

Turns out, the infrastructure cost is quite significant, right? Even running plain HTTP auth is quite expensive compared to the lean C stack Themis requires. They're beasts of the same order now (although HTTP AUTH is still a bit faster).

This begs another question: if Python is so expensive, how cheap would a compiled language be?

Go, 10000 runs

            Secure Comparator   HTTP_AUTH
  Minimum   0.078757            0.013381
  Average   0.100022            0.020516
  Maximum   0.141264            0.064464

Quite odd, isn't it? The difference between Go and Python running the same linked C code is 5-10 times, as much as the difference between SC and the cheapest HTTP auth. This means that a scripting language and questionable frameworks affect performance as much as running the expensive math, if not more.

Moral of the story

Let's combine the tables from the 2nd and 3rd experiments:

              Minimum    Average    Maximum
  SC Python   0.455740   0.552638   1.647957
  SC Go       0.078757   0.100022   0.141264
  HA Python   0.361105   0.451385   1.350096
  HA Go       0.013381   0.020516   0.064464
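A quick sanity check on the combined averages makes the point numerically:

```python
# Average full-run times from the combined table (seconds).
avg = {
    ("SC", "Python"): 0.552638, ("SC", "Go"): 0.100022,
    ("HA", "Python"): 0.451385, ("HA", "Go"): 0.020516,
}

# Language overhead: the same linked C code, driven from Python vs Go.
sc_lang_ratio = avg[("SC", "Python")] / avg[("SC", "Go")]   # roughly 5.5x
ha_lang_ratio = avg[("HA", "Python")] / avg[("HA", "Go")]   # roughly 22x

# The punchline: plain HTTP auth from Python is still slower than
# the "expensive" Secure Comparator math driven from Go.
assert avg[("HA", "Python")] > avg[("SC", "Go")]
```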

The moral is simple: most of the time, your stack affects performance more than your choice of security tools, and even such an absurd comparison suddenly starts to make sense when you measure the infrastructure as a whole. Python turned out to be more expensive than all the "expensive" SC code.

However, all of this is still quite far from real-world use: http_auth is insecure without a protected transport, which adds another layer of computational expense for SSL. SC does not need that, because it is a zero-knowledge-proof cryptosystem that is not vulnerable to MiTM attacks by design.

By the way, you can take a look at the code we've used for benchmarking (and some additional setups, like Secure Comparator + Secure Session) here.

Copyright © 2014-2017 Cossack Labs Limited