users@jersey.java.net

[Jersey] Jersey REST client NoRouteToHostException

From: Chen Wang <chen.apache.solr_at_gmail.com>
Date: Wed, 27 Aug 2014 17:27:43 -0700

Hi guys,

I am using Jersey 1.8 to call a REST API. I am using a static client and web resource:

private static Client optimizorClient;

private static WebResource optimizorWebResource;

static {
    ClientConfig clientConfig = new DefaultClientConfig();

    // TODO whenever I set these, I always receive a timeout exception
    // (values are in milliseconds):
    // clientConfig.getProperties().put(ClientConfig.PROPERTY_READ_TIMEOUT, 2000);
    // clientConfig.getProperties().put(ClientConfig.PROPERTY_CONNECT_TIMEOUT, 2000);

    clientConfig.getProperties().put(ClientConfig.PROPERTY_THREADPOOL_SIZE, 200);
    clientConfig.getFeatures().put(JSONConfiguration.FEATURE_POJO_MAPPING, Boolean.TRUE);

    optimizorClient = Client.create(clientConfig);
    optimizorWebResource = optimizorClient.resource("webserviceendpoint");
}

and here is the code that calls the web service:

ClientResponse response = optimizorWebResource.type("application/json")
        .post(ClientResponse.class, json);
if (response.getStatus() != 200) {
    logger.error("Failed : HTTP error code : " + response.getStatus());
    return false;
}
return true;
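
For what it is worth, here is the same call with the response explicitly closed in a finally block. The close() is only a guess on my part (I am not doing it today), in case an unconsumed response is what keeps connections open:

ClientResponse response = optimizorWebResource.type("application/json")
        .post(ClientResponse.class, json);
try {
    if (response.getStatus() != 200) {
        logger.error("Failed : HTTP error code : " + response.getStatus());
        return false;
    }
    return true;
} finally {
    // Guess: release the response stream/connection even on success.
    response.close();
}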

I have 200 threads querying the web service. The thread pool setting seems to have nothing to do with the connections established to the web service: I can still see as many as 1k connections established, and I eventually receive a NoRouteToHostException.

Looking through the code, it seems that a connection is established whenever there is a request. So my understanding is that

clientConfig.getProperties().put(ClientConfig.PROPERTY_THREADPOOL_SIZE, 200);

only limits how many threads can serve all the requests; if they serve 1000 requests at the same time, then I will still have 1000 connections. Is that right?
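
To make the question concrete, the load I am generating looks roughly like the sketch below. This is illustrative only; LoadSketch and callWebService are placeholder names standing in for my real code, which goes through the static WebResource shown above:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LoadSketch {
    public static void main(String[] args) {
        // 1000 callers sharing the one static WebResource. My understanding
        // is that each in-flight post() opens its own connection, so the
        // thread pool size of 200 does not bound the number of sockets.
        ExecutorService pool = Executors.newFixedThreadPool(1000);
        for (int i = 0; i < 1000; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    callWebService("{}"); // stands in for the post() call above
                }
            });
        }
        pool.shutdown();
    }

    private static boolean callWebService(String json) {
        // placeholder for the code shown earlier
        return true;
    }
}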

If so, how can I solve the NoRouteToHostException in this case? Limiting the number of threads might help, but I would like to know whether Jersey has any internal request caching/connection pooling to solve this issue.
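
What I have in mind is something along the lines of the sketch below, assuming the jersey-apache-client4 contrib module and HttpClient 4.1 are usable here. I have not tried this yet, so treat the class and property names (ApacheHttpClient4, ApacheHttpClient4Config.PROPERTY_CONNECTION_MANAGER, ThreadSafeClientConnManager) as my best recollection of the docs rather than tested code:

import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;

import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.json.JSONConfiguration;
import com.sun.jersey.client.apache4.ApacheHttpClient4;
import com.sun.jersey.client.apache4.config.ApacheHttpClient4Config;
import com.sun.jersey.client.apache4.config.DefaultApacheHttpClient4Config;

public class PooledClientSketch {
    public static Client create() {
        // A pooled connection manager caps the total number of sockets
        // instead of opening one per request.
        ThreadSafeClientConnManager connectionManager = new ThreadSafeClientConnManager();
        connectionManager.setMaxTotal(200);
        connectionManager.setDefaultMaxPerRoute(200);

        DefaultApacheHttpClient4Config config = new DefaultApacheHttpClient4Config();
        config.getProperties().put(
                ApacheHttpClient4Config.PROPERTY_CONNECTION_MANAGER, connectionManager);
        config.getFeatures().put(JSONConfiguration.FEATURE_POJO_MAPPING, Boolean.TRUE);

        // ApacheHttpClient4 extends Client, so resource()/type()/post()
        // in the rest of my code should stay the same.
        return ApacheHttpClient4.create(config);
    }
}

Would something like this keep the number of connections bounded, or is there a simpler knob in the plain Jersey client?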

Thanks,

Chen