> Sweet! Now we are talking business here! ;-)
:)
> I really need to get my act together and upgrade grizzly-sendfile to
> 1.9.x.
> I read that the downside is the memory usage, which is to be
> expected. In grizzly-sendfile I have an option to automatically
> kill inactive downloads after x seconds; maybe grizzly could do the
> same (if there isn't such an option already).
No, the current async write queue implementation is very simple; it
doesn't have such a feature yet.
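A minimal sketch of what such a feature could look like (hypothetical
names, plain java.nio, nothing Grizzly-specific): record the last
successful write per channel and periodically close channels that made
no progress within the limit:

import java.nio.channels.SocketChannel;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IdleWriteReaper {
    private final Map<SocketChannel, Long> lastWrite = new ConcurrentHashMap<>();
    private final long idleLimitMillis;
    private final ScheduledExecutorService sweeper =
            Executors.newSingleThreadScheduledExecutor();

    public IdleWriteReaper(long idleLimitMillis) {
        this.idleLimitMillis = idleLimitMillis;
        // Sweep once a second and kill downloads that made no progress.
        sweeper.scheduleAtFixedRate(this::reap, 1, 1, TimeUnit.SECONDS);
    }

    // Call after every successful (possibly partial) write on the channel.
    public void touch(SocketChannel ch) {
        lastWrite.put(ch, System.currentTimeMillis());
    }

    // Call when a download finishes normally.
    public void finished(SocketChannel ch) {
        lastWrite.remove(ch);
    }

    private void reap() {
        long now = System.currentTimeMillis();
        for (Map.Entry<SocketChannel, Long> e : lastWrite.entrySet()) {
            if (now - e.getValue() > idleLimitMillis) {
                lastWrite.remove(e.getKey());
                try { e.getKey().close(); } catch (Exception ignored) { }
            }
        }
    }
}

The write path would just call touch() after each partial write; the
one-second sweep interval is arbitrary.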
> Another way to combat the memory consumption could be to serialize
> LRU ByteBuffers to the hard drive - just a thought.
It could be one of the possible strategies.
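As a rough sketch of that strategy (assumed names, nothing
Grizzly-specific): an access-ordered LinkedHashMap whose eviction hook
spills the least-recently-used buffer to a temp file before dropping it
from memory:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.LinkedHashMap;
import java.util.Map;

public class SpillingBufferCache extends LinkedHashMap<String, ByteBuffer> {
    private final int maxEntries;

    public SpillingBufferCache(int maxEntries) {
        super(16, 0.75f, true);                 // true = LRU (access) order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, ByteBuffer> eldest) {
        if (size() <= maxEntries) {
            return false;
        }
        try {
            // Serialize the LRU buffer to disk before dropping it.
            Path spill = Files.createTempFile("spill-", ".bin");
            try (FileChannel out = FileChannel.open(spill, StandardOpenOption.WRITE)) {
                ByteBuffer dup = eldest.getValue().duplicate();
                dup.rewind();
                while (dup.hasRemaining()) {
                    out.write(dup);
                }
            }
        } catch (IOException e) {
            // a real cache would log this and decide whether to drop anyway
        }
        return true;                            // evict the in-memory copy
    }
}

A real implementation would also have to remember the key-to-file
mapping so a spilled buffer can be read back in later.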
> Do you think that this option could become the default in gf v3 or
> is it not "proven" yet?
It's not proven yet. We still want to see concrete numbers on what it
gives us. Hopefully the perf team will do some measurements soon.
Thank you.
WBR,
Alexey.
>
>
> Thanks Alexey!
>
> /i
>
>
> On Feb 2, 2009, at 9:32 AM, Oleksiy Stashok wrote:
>
>> Hi Igor,
>>
>> The default GlassFish V3 configuration really uses a "flush"-like
>> write, so the worker thread is blocked until the whole response has
>> been written. But it is possible to use asynchronous writes [1],
>> which solve that.
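The general idea, shown here in plain java.nio terms rather than
Grizzly's actual API (see [1] for that), is to write only what the
socket accepts right now and let a selector resume the write later,
instead of parking a worker thread:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

class NonBlockingWrite {
    // Write as much as the socket takes now; if data remains, register
    // OP_WRITE interest so the selector thread finishes the write later.
    static void writeOrPark(SelectionKey key, ByteBuffer buf) throws IOException {
        SocketChannel ch = (SocketChannel) key.channel();
        ch.write(buf);                           // a partial write is fine
        if (buf.hasRemaining()) {
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        } else {
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
    }
}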
>>
>> Hope this will help.
>>
>> WBR,
>> Alexey.
>>
>> [1] http://blogs.sun.com/oleksiys/entry/glassfish_v3_asynchronous_http_responses
>>
>> On Feb 2, 2009, at 18:20, Igor Minar wrote:
>>
>>> I had a brief discussion with colleagues about the thread-count
>>> setting in glassfish.
>>>
>>> I understand that this is the max number of worker threads that
>>> grizzly can use, but what surprised me is that it also turns out
>>> to be the max number of connections that grizzly can process
>>> concurrently (i.e. send responses to at the same time). I know
>>> that the synchronous nature of JavaEE doesn't allow grizzly to do
>>> much magic, but I expected that grizzly could break the
>>> one-thread-per-request paradigm at least for static resources.
>>>
>>> Unfortunately, a simple test with grizzly-v3-prelude and
>>> thread-count set to 5 shows that this is not the case:
>>>
>>> wget --tries=1 --limit-rate=100 http://localhost:8080/foo.txt  => starts the download
>>> wget --tries=1 --limit-rate=100 http://localhost:8080/foo.txt  => starts the download
>>> wget --tries=1 --limit-rate=100 http://localhost:8080/foo.txt  => starts the download
>>> wget --tries=1 --limit-rate=100 http://localhost:8080/foo.txt  => starts the download
>>> wget --tries=1 --limit-rate=100 http://localhost:8080/foo.txt  => starts the download
>>> wget --tries=1 --limit-rate=100 http://localhost:8080/foo.txt  => times out!!
>>>
>>> Am I missing something? In my pet project grizzly-sendfile [1] I
>>> was able to create algorithms (still experimental) that can serve
>>> hundreds of downloads with just a handful of worker threads. Why
>>> doesn't grizzly do this for static resources? The same technique
>>> could also be used for sending buffered JavaEE responses to slow
>>> clients, which should increase the server's throughput
>>> significantly. Ideas?
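The general shape of that technique (sketched here from the
description above; this is not grizzly-sendfile's actual code) is a
single selector thread that pumps each download a chunk at a time
whenever its socket reports it is writable:

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class SendfileLoop implements Runnable {
    // Per-download state, carried as the selection key's attachment;
    // connections are registered elsewhere via
    // socket.register(selector, SelectionKey.OP_WRITE, new Download(file)).
    static final class Download {
        final FileChannel file;
        long position;
        Download(FileChannel file) { this.file = file; }
    }

    private final Selector selector;

    public SendfileLoop(Selector selector) { this.selector = selector; }

    @Override
    public void run() {
        try {
            while (selector.isOpen()) {
                selector.select();
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isValid() && key.isWritable()) {
                        pump(key);
                    }
                }
                selector.selectedKeys().clear();
            }
        } catch (IOException e) {
            // a real server would log this and shut down cleanly
        }
    }

    // Send the next chunk of the file; transferTo() moves only as much
    // as the socket can take without blocking.
    private void pump(SelectionKey key) throws IOException {
        SocketChannel socket = (SocketChannel) key.channel();
        Download d = (Download) key.attachment();
        d.position += d.file.transferTo(d.position, 64 * 1024, socket);
        if (d.position >= d.file.size()) {
            key.cancel();                       // download complete
            d.file.close();
            socket.close();
        }
    }
}

Because transferTo() can use the OS sendfile path, the file bytes need
not pass through user space at all, so one such loop can drive many
slow clients while the worker pool stays free for servlet work.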
>>>
>>> /i
>>>
>>> [1] http://kenai.com/projects/grizzly-sendfile/pages/Home
>>>