users@jersey.java.net

RE: [Jersey] Custom inject provider

From: Guba, Nicolai <nguba_at_bioware.com>
Date: Tue, 30 Jun 2009 07:58:44 -0700

Thank you for the interesting background info!

Indeed, there could be quite some overhead introduced by the layering (or by what happens in a particular layer). Establishing the exact cause is beyond what we have time to measure :(

I wonder whether this could be optimized to reduce the overhead even further? It may be more of a JAXB issue than anything else, but I am guessing here and have no data to support that. Either way, considering the benefits, this is certainly not a showstopper for us and we are OK with the slight degradation in performance as a result.

FYI all this stuff is in production on http://swtor.com and we use it for game integration as well ;)

Cheers!
-- 
   =NPG=
-----Original Message-----
From: Paul.Sandoz_at_Sun.COM [mailto:Paul.Sandoz_at_Sun.COM] 
Sent: Tuesday, June 30, 2009 9:33 AM
To: users_at_jersey.dev.java.net
Subject: Re: [Jersey] Custom inject provider
On Jun 30, 2009, at 4:06 PM, Guba, Nicolai wrote:
> One thing that we have learned over the years is that it is
> impossible to predict performance up front.  One needs to measure
> the application empirically and then compare measurements to
> determine whether any improvements actually yielded the expected
> results.  You may be surprised by what you find at times :)  As
> Paul mentioned earlier: data is king!
>
> It must be said, however, that we measured the effect of the request
> being processed by jaxb-json (which uses Jackson) versus 'plain'
> Jersey, where we used the Java implementation of JSON and a
> hand-crafted protocol.  To be fair, there may be plenty of reasons
> why this is slower, not just Jackson on its own.

Yes, the following is the layering:

   JAXB unmarshaller/marshaller
               |
        StAX reader/writer
               |
   Jackson JSON parser/serializer

It does not use the Jackson Object binding.  It would not surprise me
to learn, and I would definitely be surprised if it were not the case,
that the Jackson Object binding is faster, given the potential for
optimizations and how Tatu writes code :-)
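
For anyone who wants to try the Object binding route, a rough sketch of
the wiring is below.  It assumes the jackson-jaxrs module
(org.codehaus.jackson.jaxrs) is on the classpath; the application class
name is made up for illustration, and this is only one of several ways
to register a provider with Jersey.

    import java.util.HashSet;
    import java.util.Set;

    import javax.ws.rs.core.Application;

    import org.codehaus.jackson.jaxrs.JacksonJsonProvider;

    // Registering Jackson's JAX-RS provider means JSON (de)serialization
    // is handed straight to Jackson's object binding, so the JAXB + StAX
    // layers above are skipped entirely.
    public class MyJsonApplication extends Application {
        @Override
        public Set<Object> getSingletons() {
            Set<Object> singletons = new HashSet<Object>();
            singletons.add(new JacksonJsonProvider());
            return singletons;
        }
    }

With that in place, a plain bean returned from a resource method is
written by Jackson directly, no JAXB annotations required.
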
Paul.
> I would argue that it is a price worth paying, though.
>
> Unfortunately I cannot release any source code :( but such a test
> should not be too difficult to set up on your own.
>
> Thank you for the background info on FI.  Are there any
> recommendations for an efficient data format other than JSON, or can
> we say that right now JSON would be our best option?
>
> Happy Hacking!
> -- 
>   =NPG=
>
> -----Original Message-----
> From: Tatu Saloranta [mailto:tsaloranta_at_gmail.com]
> Sent: Tuesday, June 30, 2009 1:52 AM
> To: users_at_jersey.dev.java.net
> Subject: Re: [Jersey] Custom inject provider
>
> On Mon, Jun 29, 2009 at 12:23 PM, Guba, Nicolai<nguba_at_bioware.com>  
> wrote:
>> You don't have to use JAXB at all.  You can just obtain the request
>> as a String and then roll your own object.
>
> Or, for better efficiency, InputStream.
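>
> (A minimal sketch of the hand-rolled approach, for illustration only;
> the resource path and field names below are invented, and org.json's
> JSONObject is just one possible parser:)
>
>     import java.io.InputStream;
>     import java.io.InputStreamReader;
>
>     import javax.ws.rs.Consumes;
>     import javax.ws.rs.POST;
>     import javax.ws.rs.Path;
>     import javax.ws.rs.core.MediaType;
>
>     import org.json.JSONObject;
>     import org.json.JSONTokener;
>
>     @Path("/events")
>     public class EventResource {
>
>         @POST
>         @Consumes(MediaType.APPLICATION_JSON)
>         public void handle(InputStream body) throws Exception {
>             // JAX-RS injects the raw entity stream; no JAXB provider runs.
>             JSONObject json = new JSONObject(
>                     new JSONTokener(new InputStreamReader(body, "UTF-8")));
>             // Hypothetical fields, purely for illustration.
>             String type = json.getString("type");
>             long playerId = json.getLong("playerId");
>             // ... build the domain object by hand from here ...
>         }
>     }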
>
>> We have actually measured the impact of this, and the results showed
>> about a 16% overhead for Jackson unmarshalling vs the good ol'
>> JSONObject (go figure!  I thought Jackson was meant to be quicker).
>
> That would be a most unexpected result. :-)
>
> Do you happen to have a simple test case at hand to exhibit this
> behavior? Or maybe a code snippet?
>
> And yes, Jackson is meant to be faster, usually by a factor of 4x or
> 5x, judging from the feedback I have gotten. I haven't seen any case
> where the default JSONObject would come anywhere near it, speed-wise,
> so I would be interested in seeing what gives the results you are
> seeing.
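>
> (Purely for illustration, and not the benchmark in question, a minimal
> head-to-head of the two parsers could look something like the sketch
> below; the payload and iteration count are arbitrary, and a real
> comparison would also need JVM warm-up runs:)
>
>     import java.util.Map;
>
>     import org.codehaus.jackson.map.ObjectMapper;
>     import org.json.JSONObject;
>
>     public class ParseComparison {
>         public static void main(String[] args) throws Exception {
>             String payload =
>                 "{\"name\":\"test\",\"level\":42,\"flags\":[1,2,3]}";
>             int iterations = 1000000;
>             ObjectMapper mapper = new ObjectMapper();
>
>             // Jackson: bind the document into a generic Map.
>             long t0 = System.nanoTime();
>             for (int i = 0; i < iterations; i++) {
>                 Map<?, ?> m = mapper.readValue(payload, Map.class);
>             }
>             long jackson = System.nanoTime() - t0;
>
>             // org.json: parse the same document with JSONObject.
>             long t1 = System.nanoTime();
>             for (int i = 0; i < iterations; i++) {
>                 JSONObject o = new JSONObject(payload);
>             }
>             long jsonOrg = System.nanoTime() - t1;
>
>             System.out.println("Jackson:    " + (jackson / 1000000) + " ms");
>             System.out.println("JSONObject: " + (jsonOrg / 1000000) + " ms");
>         }
>     }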
>
>> If raw protocol speed is of the essence, then looking at Fast Infoset
>> may be a better alternative.  As far as we are concerned, we have
>> benchmarked our request handling in the region of 16-17k requests
>> per second per server, which is more than adequate.
>
> From what I have seen, FI is actually not necessarily faster than
> JSON. It can be a bit faster than plain old textual XML overall
> (slower to write == serialize, faster to read), but it is really
> important to measure things for specific use cases. FI seems to have
> a bit higher per-invocation overhead, so the benefits are bigger for
> larger documents.
>
> -+ Tatu +-
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe_at_jersey.dev.java.net
> For additional commands, e-mail: users-help_at_jersey.dev.java.net
>
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe_at_jersey.dev.java.net
For additional commands, e-mail: users-help_at_jersey.dev.java.net