TLSNotary Performance Benchmarks (August 2025)
Over the past few months, we’ve made major performance leaps in TLSNotary. We implemented a VOLE-based interactive zero-knowledge (IZK) proof backend (QuickSilver) and introduced control-flow and MPC optimizations across the stack.
Starting with v0.1.0-alpha.8, QuickSilver replaced the older garbled-circuit proof backend, reducing bandwidth usage and sensitivity to latency. Subsequent releases added transcript hash commitments, low-bandwidth modes, faster APIs, and more. (https://github.com/tlsnotary/tlsn/releases)
These changes yield significant performance gains in both native and browser builds.
In this post, we share results from our new benchmarking harness and highlight how different network conditions (bandwidth, latency, response size) affect real-world performance.
Why is performance important?
TLSNotary is an interactive protocol: the Prover and Verifier exchange data while the TLS session is ongoing. That means runtime is more than a benchmark number; it directly affects usability.
If proving takes too long:
- Connections may time out before notarization completes.
- Users may experience slow, blocking interactions.
A key objective is minimizing the online time: the period when the Prover is actively connected to the Server. If this phase runs too long, the server will simply close the connection.
TLSNotary addresses this by allowing the Prover and Verifier to preprocess much of the MPC work before the Prover connects to the Server. The protocol is designed to keep both the preprocessing and the online phase as short as possible: fast enough for a smooth end-user experience, without compromising security or privacy.
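To make this phase split concrete, here is a minimal, purely illustrative sketch in Rust. The types and method names are placeholders invented for this post, not the tlsn crate’s actual API; they only show where the boundary between the preprocessing phase and the short online window sits.

```rust
// Purely illustrative stubs -- NOT the tlsn API. They exist only to make the
// two-phase structure explicit: heavy MPC preprocessing first, then a short
// online window while connected to the Server.
struct PreprocessedProver;
struct OnlineSession;

impl PreprocessedProver {
    /// Phase 1: run with the Verifier *before* contacting the Server.
    /// Most of the MPC work happens here, so it can take as long as it
    /// needs without risking a server-side timeout.
    fn preprocess_with_verifier() -> Self {
        PreprocessedProver
    }

    /// Phase 2: only now open the TLS connection to the Server. The goal is
    /// to keep this online window as short as possible.
    fn connect_to_server(self, _server: &str) -> OnlineSession {
        OnlineSession
    }
}

impl OnlineSession {
    /// After the TLS session closes, the remaining proving work
    /// (commitments, selective disclosure) happens offline with the Verifier.
    fn finalize(self) {}
}

fn main() {
    let prover = PreprocessedProver::preprocess_with_verifier();
    let session = prover.connect_to_server("https://example.com");
    // ... send the request and receive the response while connected ...
    session.finalize();
}
```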
How did we measure?
To ensure performance results are both reliable and reproducible, we created a dedicated benchmarking harness. It executes the full TLSNotary protocol in both native and browser-based (WebAssembly) environments, enabling apples-to-apples comparisons.
For simulating network conditions, the harness uses robust, low-level Linux tooling (ip, iptables, and tc) to precisely emulate real-world scenarios:
- Bandwidth throttling, to model tight or abundant network capacity.
- Custom latency, to reflect different round-trip times.
- Packet shaping, which can introduce jitter, chunking, or drops.
Combined with the ability to tweak request and response sizes, this gives us a controlled environment to isolate how each factor—network and payload—impacts runtime in native versus browser builds.
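To give a flavor of what this emulation looks like, here is a minimal sketch of applying a netem qdisc from Rust. It is not the harness’s actual code; the interface name and the numbers are placeholders, and tc requires Linux with root privileges.

```rust
use std::process::Command;

/// Illustrative sketch: inject latency and cap bandwidth on an interface
/// using tc/netem, roughly the kind of shaping the harness performs.
fn shape_interface(iface: &str, delay_ms: u32, rate_mbit: u32) -> std::io::Result<()> {
    // Remove any existing root qdisc first; ignore the error if none exists.
    let _ = Command::new("tc")
        .args(["qdisc", "del", "dev", iface, "root"])
        .status();

    let delay = format!("{delay_ms}ms");
    let rate = format!("{rate_mbit}mbit");

    // e.g. `tc qdisc add dev veth-test root netem delay 25ms rate 100mbit`
    let status = Command::new("tc")
        .args([
            "qdisc", "add", "dev", iface, "root", "netem",
            "delay", delay.as_str(),
            "rate", rate.as_str(),
        ])
        .status()?;

    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "tc failed to apply the qdisc",
        ));
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Example: 25 ms added delay and a 100 Mbit/s cap on a hypothetical test interface.
    shape_interface("veth-test", 25, 100)
}
```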
Like all TLSNotary code, the harness is open source, so anyone can reproduce our results or adapt it for their own testing: https://github.com/tlsnotary/tlsn/tree/783355772ac34af469048d0e67bb161fc620c6ac/crates/harness
Raw data and notebooks are available on GitHub.
How does Prover Upload Bandwidth impact performance?
Benchmark parameters: latency = 25 ms, request size = 1 KB, response size = 4 KB.
On low-bandwidth connections, protocol runtime is dominated by the volume of MPC data the Prover must upload to the Verifier. Once upload bandwidth reaches around 100 Mbps, its impact diminishes significantly and no longer drives the overall runtime.
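As a rough mental model (the data volume below is a variable, not a measured figure), the upload phase contributes roughly:

```latex
t_{\text{upload}} \approx \frac{D_{\text{MPC}}}{B_{\text{upload}}}
```

For instance, a purely hypothetical 100 Mbit of MPC traffic takes about 10 s on a 10 Mbps uplink but only about 1 s at 100 Mbps, so the same traffic that dominates runtime on a slow link becomes a minor cost on a fast one.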
How does Network Latency impact performance?
Benchmark parameters: bandwidth = 1000 Mbps (to isolate latency), request size = 1 KB, response size = 4 KB.
As expected, latency has a directly proportional impact on runtime. Since our MPC-TLS protocol involves roughly 40 communication rounds, higher RTT values increase total runtime linearly. At higher latencies the cost of communication dominates, and the native build’s speed advantage is effectively canceled out: its runtime converges to that of the browser build.
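A quick back-of-the-envelope check, using the ~40 rounds mentioned above and an example RTT of 100 ms:

```latex
t_{\text{rounds}} \approx n_{\text{rounds}} \times \text{RTT} \approx 40 \times 100\ \text{ms} = 4\ \text{s}
```

In other words, at 100 ms RTT the round trips alone account for roughly 4 s of runtime, no matter how fast either build computes.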
How does Server Response Size impact performance?
Benchmark parameters: latency = 10 ms, bandwidth = 200 Mbps, request size = 2 KB.
Runtime also scales with server response size. In many real-world use cases, a response size of ~10 KB is sufficient. Under these conditions, the native build completes in ~5 s, while the browser build takes ~10 s — still responsive enough for a smooth end-user experience.
Note: The benchmarks above measure proving statements over the entire server response. If selective disclosure is not required, TLSNotary can process much larger resources, such as images or video, without a significant impact on runtime. In these cases, obtaining a ciphertext commitment is fast and largely independent of response size. This scenario will be covered in a separate benchmark in the upcoming alpha.13 release.
Conclusions
Overall, as demonstrated in the final benchmark where bandwidth and latency are not the limiting factors, the browser build runs about 3× slower than the native build. The main reason is the absence of hardware acceleration in the browser’s WebAssembly environment. The underlying cryptography relies heavily on SIMD instructions and hardware-accelerated cryptographic operations for optimal performance, which are fully available in native builds but not yet accessible in browsers.
The performance is already good enough for practical use, but it still leaves room for optimization in the browser. For example, an AES implementation that leverages the WASM SIMD extension could narrow the gap further. Contributions welcome!
Benchmark Details
- Hardware: All benchmarks were run on an AWS c5.4xlarge instance (16 vCPU, 3.0 GHz, 32 GB RAM).
- Deferred Decryption: Enabled (TLSNotary feature that defers decryption until the full TLS transcript is available, reducing MPC workload).
- TCP_NODELAY: Enabled to disable Nagle’s algorithm, ensuring immediate packet transmission and reducing latency for faster interactive proving (see the snippet after this list).
- Reproducibility: You can reproduce these results using our open-source benchmarking harness: tlsnotary/tlsn/crates/harness.
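For reference, this is how TCP_NODELAY is typically enabled on a socket with the Rust standard library; the address is a placeholder, not an endpoint used in the benchmarks.

```rust
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Connect to a placeholder address (illustrative only).
    let stream = TcpStream::connect("127.0.0.1:4000")?;

    // Disable Nagle's algorithm so small messages are sent immediately instead
    // of being buffered -- this matters for an interactive protocol with many
    // small round trips.
    stream.set_nodelay(true)?;

    // ... use the stream ...
    Ok(())
}
```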