So, to summarize my previous email..
I would be ok with the delivery being a single message at a time, with the
TX handled under the covers, as long as we had a way to determine the
beginning and end of a transaction through some sort of callback.

Although I still prefer the approach of a simpler API, with a plain array.
I don't see much benefit in receiving the messages earlier if the
transaction is suspended anyway.
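
For what it's worth, the kind of begin/end callback I mean would be
something like this (just a rough sketch, all the names are hypothetical):

// hypothetical callback interface, only to illustrate the shape of the API
public interface BatchBoundaryListener {

    // called by the container when it starts the transaction,
    // before the first message of the batch is delivered
    void beforeBatch();

    // called after the last message of the batch has been delivered,
    // just before the container commits the transaction
    void afterBatch();
}

With something like that I could still accumulate the work for the whole
batch and flush it in afterBatch(), even with single-message delivery.
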
On Thu, Aug 27, 2015 at 5:02 PM, Clebert Suconic <clebert.suconic_at_gmail.com>
wrote:
> I'd rather have the array...
>
> also, one common thing people do with database operations is to use
> array operations on the DB...
>
>
> Say:
>
> I don't remember the JDBC API off the top of my head now.. these are just
> examples:
>
>
> PreparedStatement stmt = connection.prepareStatement(
>     "insert into operation (a) values (?)");
>
> for (TextMessage m : messages)
> {
>     stmt.setString(1, m.getText());
>     stmt.addBatch();
> }
>
> stmt.executeBatch();
>
>
>
>
> or you could have something like:
>
>
> long value = 0;
> for (Message m : messages) {
>     value += m.getIntProperty("someProperty");
> }
>
> updateDB(value);
>
>
> Receiving one message at a time and having the TX in the background would
> forfeit these two possibilities.
>
>
> You could say... well, what if we added listener methods to give you
> beginTX and afterTX operations... beginArray / afterArray?
>
> But then the API starts to get complicated... I would rather have a simple
> array API.
>
>
> On Thu, Aug 27, 2015 at 3:21 PM, Nigel Deakin <nigel.deakin_at_oracle.com>
> wrote:
>
>> I was a bit shocked at discovering it was 2011 too :-)
>>
>> I'll think about this a bit more and propose something as an add-on to
>> the existing JMS MDB proposals. The @BatchConfig annotation was just off
>> the top of my head.
>>
>> I think we should define some default values for maxBatchSize and
>> batchTimeout. Perhaps maxBatchSize=1 should be the default (in which case
>> batchTimeout is ignored).
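>>
>> As a sketch, the annotation itself could declare those defaults (the
>> attribute names are still just off the top of my head):
>>
>> import java.lang.annotation.ElementType;
>> import java.lang.annotation.Retention;
>> import java.lang.annotation.RetentionPolicy;
>> import java.lang.annotation.Target;
>>
>> @Retention(RetentionPolicy.RUNTIME)
>> @Target(ElementType.METHOD)
>> public @interface BatchConfig {
>>     int maxBatchSize() default 1;    // 1 = no batching; batchTimeout ignored
>>     long batchTimeout() default 0;   // milliseconds; 0 = no timeout
>> }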
>>
>> Before we go deeply into this, I'd like to mention that Weblogic has an
>> MDB feature for batching multiple messages in the same transaction. This
>> is described here:
>>
>> https://docs.oracle.com/cd/E24329_01/web.1211/e24977/batching.htm#WLMDB1384
>>
>> This works by deferring the transaction commit until a defined number of
>> messages have been received.
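>>
>> (Not Weblogic's actual code - just a sketch of the mechanism, assuming a
>> hypothetical container with these internals available:)
>>
>> void deliverBatch(javax.transaction.UserTransaction tx,
>>                   javax.jms.MessageConsumer consumer,
>>                   javax.jms.MessageListener mdb,
>>                   int batchSize) throws Exception {
>>     tx.begin();
>>     for (int i = 0; i < batchSize; i++) {
>>         javax.jms.Message m = consumer.receive(); // block for the next message
>>         mdb.onMessage(m);                         // processed as it arrives, inside the TX
>>     }
>>     tx.commit();                                  // one commit covers the whole batch
>> }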
>>
>> My colleagues on the Oracle Weblogic team tell me that this can offer
>> better throughput as it allows the earlier messages to be processed whilst
>> waiting for subsequent messages to arrive, rather than saving them up and
>> delivering them all in one go. Do you think a similar approach has any
>> merit?
>>
>> Nigel
>>
>>
>>
>>
>>
>> On 27/08/2015 17:54, Clebert Suconic wrote:
>>
>> Man, yeah.. we discussed that back in 2011.. can't believe it's been that
>> long.. that makes me feel old :)
>> I like the possibility of these being arrays... and I agree with your idea:
>>
>> This one here definitely hits the mark of what I'm thinking:
>>
>>
>> @BatchConfig(maxBatchSize=100, batchTimeout=10000)
>> @JMSListener(lookup="java:global/Trades", type=JMSListener.Type.QUEUE)
>> public void processTrades(TextMessage[] tradeMessages,
>>         @MessageHeader(Header.JMSCorrelationID) String[] correlationIds) {
>>     ...
>> }
>>
>>
>> Just one minor thing: maybe we should have default values for BatchConfig
>> in case it's not present, or should its absence be a deployment error?
>>
>>
>> On Thu, Aug 27, 2015 at 12:45 PM, Nigel Deakin <nigel.deakin_at_oracle.com>
>> wrote:
>>
>>> On 27/08/2015 16:11, Clebert Suconic wrote:
>>>
>>>>
>>>> On Thu, Aug 27, 2015 at 10:37 AM, Nigel Deakin <nigel.deakin_at_oracle.com>
>>>> wrote:
>>>>
>>>> Hi Clebert,
>>>>
>>>> We should definitely discuss how the JMS 2.0 proposals for batch
>>>> delivery (since deferred) might work with the new-style MDBs and
>>>> listener beans.
>>>>
>>>> As you know, MDB users can turn off XA transactions using the
>>>> @TransactionManagement annotation, and can reduce the number of
>>>> network round-trips by using an acknowledgement mode of DUPS_OK.
>>>> Why is this not sufficient?
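>>>>
>>>> (For reference, the sort of MDB configuration I mean - just a sketch:)
>>>>
>>>> import javax.ejb.ActivationConfigProperty;
>>>> import javax.ejb.MessageDriven;
>>>> import javax.ejb.TransactionManagement;
>>>> import javax.ejb.TransactionManagementType;
>>>> import javax.jms.Message;
>>>> import javax.jms.MessageListener;
>>>>
>>>> @MessageDriven(activationConfig = {
>>>>     @ActivationConfigProperty(propertyName = "destinationLookup",
>>>>                               propertyValue = "java:global/Trades"),
>>>>     @ActivationConfigProperty(propertyName = "destinationType",
>>>>                               propertyValue = "javax.jms.Queue"),
>>>>     @ActivationConfigProperty(propertyName = "acknowledgeMode",
>>>>                               propertyValue = "Dups-ok-acknowledge")
>>>> })
>>>> @TransactionManagement(TransactionManagementType.BEAN) // no XA transaction
>>>> public class TradesMDB implements MessageListener {
>>>>     public void onMessage(Message message) {
>>>>         // process the message without a container-managed XA transaction
>>>>     }
>>>> }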
>>>>
>>>>
>>>> Because users want to have the messages and their database operations
>>>> in the same transaction.
>>>> I have seen this pattern since my days as a consultant.
>>>>
>>>> This goes beyond JBoss. I'm not concerned about any specific JBoss use
>>>> case, but with what I see users doing day to day with their messaging
>>>> systems.
>>>>
>>>> DUPS_OK is fine, but it doesn't guarantee the semantics these users expect.
>>>>
>>>>
>>>> Or do you have a specific need to have XA transactions that cover
>>>> multiple messages?
>>>>
>>>>
>>>>
>>>> From what I see, EE users will want async messages, but will need
>>>> transactions.. the easiest way to do XA transactions between JMS and
>>>> databases is through EE... for that reason, users are *over-using*
>>>> MDBs... and XA transactions.
>>>>
>>>> And that's limiting any improvements done by any messaging provider.
>>>>
>>>> Only a handful of users will write standalone applications to batch
>>>> things properly...
>>>>
>>>>
>>>> So, I think we really should propose a way to batch things through
>>>> MDBs. That way these users could take full advantage of asynchronous
>>>> messaging systems and still use transactions.
>>>>
>>>>
>>> OK, thanks. So the user *does* want XA transactions. They just don't
>>> want every single message to be in a separate transaction.
>>>
>>> As you will of course remember, you proposed "batch delivery" for JMS
>>> 2.0. This was a way of passing an array (or whatever) of messages in the
>>> same onMessage() call.
>>>
>>> (Incidentally, another approach might be to provide a mechanism for
>>> grouping multiple container-managed transaction MDB onMessage() calls
>>> together under a single transaction. I know Weblogic has a proprietary
>>> feature like this, others may too.)
>>>
>>> I went back to look at the ideas we discussed for batch message
>>> delivery. This was back in December 2011! The basic idea was to have a new
>>> BatchMessageListener interface, which defined a method onMessages.
>>>
>>> public interface BatchMessageListener {
>>>     void onMessages(Message[] messages);
>>> }
>>>
>>> During that discussion we decided that the application needed to be able
>>> to specify maxBatchSize and batchTimeout:
>>>
>>> * maxBatchSize - The maximum batch size that should be used. Must be
>>> greater than zero.
>>>
>>> * batchTimeout - The batch timeout in milliseconds. A value of zero
>>> means no timeout is required. The JMS provider may override the
>>> specified timeout with a shorter value.
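>>>
>>> (Just to spell out how I'd expect the two settings to combine - a sketch
>>> of provider-side logic, not a proposed API; here 'consumer' is a
>>> javax.jms.MessageConsumer and 'listener' a BatchMessageListener as above:)
>>>
>>> List<Message> batch = new ArrayList<>();
>>> batch.add(consumer.receive());                       // wait for the first message
>>> long deadline = System.currentTimeMillis() + batchTimeout;
>>> while (batch.size() < maxBatchSize) {
>>>     long remaining = deadline - System.currentTimeMillis();
>>>     if (batchTimeout != 0 && remaining <= 0) break;  // batch timeout expired
>>>     Message m = consumer.receive(batchTimeout == 0 ? 0 : remaining);
>>>     if (m == null) break;                            // receive timed out
>>>     batch.add(m);
>>> }
>>> listener.onMessages(batch.toArray(new Message[0]));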
>>>
>>> Now it's JMS 2.1, and we're discussing my proposals that JMS MDBs don't
>>> need to implement a fixed interface and can define their own callback
>>> methods.
>>>
>>> I've already proposed that a callback method could be defined with two
>>> parameters, the first of which was a TextMessage, and the second was the
>>> JMSCorrelationID of that message.
>>>
>>> @JMSListener(lookup="java:global/Trades", type=JMSListener.Type.QUEUE)
>>> public void processTrade(TextMessage tradeMessage,
>>>         @MessageHeader(Header.JMSCorrelationID) String correlationId) {
>>>     ...
>>> }
>>>
>>> An obvious extension would be to allow these parameters to be arrays:
>>>
>>> @JMSListener(lookup="java:global/Trades", type=JMSListener.Type.QUEUE)
>>> public void processTrades(TextMessage[] tradeMessages,
>>>         @MessageHeader(Header.JMSCorrelationID) String[] correlationIds) {
>>>     ...
>>> }
>>>
>>> We'd need to define some new annotations to allow maxBatchSize and
>>> batchTimeout to be specified. Perhaps:
>>>
>>> @BatchConfig(maxBatchSize=100, batchTimeout=10000)
>>> @JMSListener(lookup="java:global/Trades", type=JMSListener.Type.QUEUE)
>>> public void processTrades(TextMessage[] tradeMessages,
>>>         @MessageHeader(Header.JMSCorrelationID) String[] correlationIds) {
>>>     ...
>>> }
>>>
>>> No doubt there are other possibilities, and I know arrays are a bit
>>> old-fashioned, but I hope this gives an idea.
>>>
>>> Is this on the right track, Clebert? If so I'll write this up in a bit
>>> more detail.
>>>
>>> Nigel
>>>
>>>
>>>
>>
>>
>> --
>> Clebert Suconic
>>
>>
>>
>
>
> --
> Clebert Suconic
>
--
Clebert Suconic