Greetings,
Up front: thanks very much for the excellent Jersey project.
We’ve been using it for years at my day job. It is a godsend!
So my question…
I recently built a “dropwizard-netty-bundle”, and I was extremely pleased with how cleanly I could incorporate Jersey into Netty.
It boiled down to a relatively small amount of code.
To accomplish this, I used:
https://github.com/jersey/jersey/tree/master/containers/netty-http/src/main/java/org/glassfish/jersey/netty
And based it on this:
https://github.com/jersey/jersey/tree/master/examples/helloworld-netty
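For context, the wiring is essentially this (a minimal sketch along the lines of the helloworld-netty example; the port/path and the resource package name are mine, not from the actual bundle):

```java
import java.net.URI;

import io.netty.channel.Channel;
import org.glassfish.jersey.netty.httpserver.NettyHttpContainerProvider;
import org.glassfish.jersey.server.ResourceConfig;

public class NettyJerseyServer {
    public static void main(String[] args) {
        // Port and base path match my benchmark runs below.
        URI baseUri = URI.create("http://localhost:8007/v1/");

        // "com.example.resources" is a hypothetical package holding the resources.
        ResourceConfig config = new ResourceConfig().packages("com.example.resources");

        // createServer(..., false) starts the server and returns its Channel
        // without blocking the calling thread.
        final Channel server = NettyHttpContainerProvider.createServer(baseUri, config, false);
        Runtime.getRuntime().addShutdownHook(new Thread(() -> server.close()));
    }
}
```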
And it works great — except, sadly, it performs terribly.
I compared my code with a Jetty/Jersey version of exactly the same Resource
(a simple GET Resource that returns “OK”).
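Concretely, the resource in both servers is essentially this (a reconstruction; the class name and `@Path` value are hypothetical):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Trivial JAX-RS resource: one GET endpoint that returns the string "OK".
@Path("alive.txt")
public class AliveResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String alive() {
        return "OK";
    }
}
```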
#### Jetty/Jersey -- using the alive page and the `wrk` program
```
cberry@localhost ~ $ wrk -d30s --latency http://localhost:8080/alive.txt
Running 30s test @ http://localhost:8080/alive.txt
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   339.17us    1.56ms  43.71ms   98.87%
    Req/Sec    20.83k     3.38k   28.61k    74.88%
  Latency Distribution
     50%  194.00us
     75%  235.00us
     90%  292.00us
     99%    2.14ms
  1246019 requests in 30.10s, 231.72MB read
Requests/sec:  41393.01
Transfer/sec:      7.70MB
```
#### Netty/Jersey (with an apples-to-apples alive.txt)
```
cberry@localhost ~ $ wrk -d30s --latency http://localhost:8007/v1/alive.txt
Running 30s test @ http://localhost:8007/v1/alive.txt
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   649.52us    4.52ms 118.48ms   98.77%
    Req/Sec    16.49k     1.74k   19.55k    81.83%
  Latency Distribution
     50%  284.00us
     75%  304.00us
     90%  342.00us
     99%    8.05ms
  984994 requests in 30.03s, 84.54MB read
Requests/sec:  32801.29
Transfer/sec:      2.82MB
```
That’s ~21% less throughput (32.8K vs 41.4K requests/sec), and a 99th-percentile latency of 8.05ms vs 2.14ms.
#### BUT when I run it WITHOUT Jersey
```
cberry@localhost ~ $ wrk -d30s --latency http://localhost:8007/helloworld
Running 30s test @ http://localhost:8007/helloworld
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   138.29us   59.75us   5.40ms   97.83%
    Req/Sec    35.96k     4.76k   41.01k    74.09%
  Latency Distribution
     50%  124.00us
     75%  149.00us
     90%  178.00us
     99%  208.00us
  2153924 requests in 30.10s, 199.25MB read
Requests/sec:  71558.68
Transfer/sec:      6.62MB
```
A whopping _72K requests/sec_ (as I’d expect), 1.7X the Jetty/Jersey throughput, at a much faster 99th percentile.
I _think_ that this code is the reason why:
https://github.com/jersey/jersey/blob/2.25/containers/netty-http/src/main/java/org/glassfish/jersey/netty/httpserver/JerseyServerHandler.java#L112
It looks like it spawns a new Thread for every request before handing off to Jersey, which would explain why Netty/Jersey is so much slower.
(At least by my read.)
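If that is what’s happening, here is a quick stdlib-only sketch (my own illustration, not Jersey’s actual code) of why starting a fresh Thread per request tends to hurt versus reusing a pool:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerRequestDemo {
    static final int TASKS = 5_000;

    // Pattern A: a brand-new Thread per "request" (what the 2.25 handler
    // appears to do, by my reading).
    static long threadPerTask() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(TASKS);
        long start = System.nanoTime();
        for (int i = 0; i < TASKS; i++) {
            new Thread(latch::countDown).start();
        }
        latch.await();
        return System.nanoTime() - start;
    }

    // Pattern B: the same work submitted to a reused fixed pool.
    static long pooled() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        CountDownLatch latch = new CountDownLatch(TASKS);
        long start = System.nanoTime();
        for (int i = 0; i < TASKS; i++) {
            pool.execute(latch::countDown);
        }
        latch.await();
        pool.shutdown();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws Exception {
        System.out.printf("thread-per-task: %d ms%n", threadPerTask() / 1_000_000);
        System.out.printf("fixed pool:      %d ms%n", pooled() / 1_000_000);
    }
}
```

I would expect the pooled variant to finish these tasks substantially faster, which would be consistent with the throughput gap above.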
So, my question: am I doing something terribly wrong?
Meaning, should Netty/Jersey perform the way raw Netty is expected to, or are there fundamental reasons why it will not?
Or do I have it configured/built incorrectly?
(I realize that I’d need to send code to truly answer that question :~) But, in general?
Thank you for your assistance.
Cheers,
— Chris