users@fi.java.net

bug: deserialized chunks are wrong

From: <Bert_Vingerhoets_at_inventivegroup.com>
Date: Tue, 6 Jan 2009 12:45:45 +0100

Hi,

First of all, sorry if I post this to the wrong list.

When using fi to optimize for known character content chunks via a
vocabulary, this feature appears to have a bug: after parsing, all
chunks from the vocabulary are too long because the offsets array,
instead of the lengths array, is used to determine the chunk sizes in
the char array.
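
To illustrate the symptom with a small, hypothetical example (not FastInfoset
code): the chunks are stored back to back in one char array, so their offsets
grow cumulatively, and reading a chunk with an offset in place of its length
over-reads for every chunk after the first:

char[] chars = "HelloFastInfoset".toCharArray();
int[] offset = {0, 5, 9};   // where each chunk starts: "Hello", "Fast", "Infoset"
int[] length = {5, 4, 7};   // how long each chunk is

// Correct: use the length array to size the chunk
String ok = new String(chars, offset[1], length[1]);       // "Fast"

// Buggy: the returned "length" array actually holds offsets,
// so the chunk is read with size offset[1] = 5 instead of length[1] = 4
String tooLong = new String(chars, offset[1], offset[1]);   // "FastI"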
It seems
com.sun.xml.fastinfoset.util.ContiguousCharArrayArray#getCompleteLengthArray
has a copy/paste error in it (getCompleteLengthArray has clearly been
copied from getCompleteOffsetArray):

public final int[] getCompleteLengthArray() {
    if (_readOnlyArray == null) {
        return _length;
    } else {
        final int[] ra = _readOnlyArray.getCompleteOffsetArray(); /// <--- !!!
        final int[] a = new int[_readOnlyArraySize + _length.length];
        System.arraycopy(ra, 0, a, 0, _readOnlyArraySize);
        return a;
    }
}


The call to _readOnlyArray.getCompleteOffsetArray() must, of course, be
_readOnlyArray.getCompleteLengthArray().
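
For reference, the corrected method would presumably read as follows (the same
code as quoted above, with only that one call changed):

public final int[] getCompleteLengthArray() {
    if (_readOnlyArray == null) {
        return _length;
    } else {
        // Take the read-only array's length array, not its offset array
        final int[] ra = _readOnlyArray.getCompleteLengthArray();
        final int[] a = new int[_readOnlyArraySize + _length.length];
        System.arraycopy(ra, 0, a, 0, _readOnlyArraySize);
        return a;
    }
}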


Regards,
Bert Vingerhoets - System Programmer and Designer
Inventive Designers NV

Phone: +32 3 821 01 70
Fax: +32 3 821 01 71
Email: Bert_Vingerhoets at inventivegroup dot com
http://www.inventivegroup.com/





