persistence@glassfish.java.net

Re: [New Subject] Issue 699

From: Markus Fuchs <Markus.Fuchs_at_Sun.COM>
Date: Mon, 21 Aug 2006 14:13:32 -0700

Hi Tom,

I agree that setting the non-nullable relationship to null is a user
error. But we should throw a more descriptive exception at commit-time,
and not rely on the SQLException. Shall I work on that?
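
Roughly, what I have in mind is a commit-time check along these lines
(a sketch only -- the descriptor/mapping calls are written from memory,
isNullable() is a placeholder rather than an actual API, and the
exception type is just illustrative):

    // Before writing updates, verify that no non-nullable relationship
    // has been set to null, and fail with a descriptive message instead
    // of surfacing the database's SQLException at flush time.
    void checkNonNullableRelationships(Object entity, ClassDescriptor descriptor) {
        for (Object m : descriptor.getMappings()) {
            DatabaseMapping mapping = (DatabaseMapping) m;
            if (mapping.isForeignReferenceMapping()
                    && !isNullable(mapping)                         // placeholder metadata lookup
                    && mapping.getAttributeValueFromObject(entity) == null) {
                throw new IllegalStateException(
                    "Non-nullable relationship '" + mapping.getAttributeName()
                    + "' on " + entity.getClass().getName() + " was set to null");
            }
        }
    }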

Thanks,

-- markus.

Tom Ware wrote:

> I have looked a bit more at this bug and think there is an argument
> that this is not, in fact, a bug.
>
> The argument that the update should not occur is based on the part of
> the specification that says:
>
> "Note that it is the application that bears responsibility for
> maintaining the consistency of runtime relationships -- for example,
> for ensuring that the "one" and "many" sides of a bidirectional
> relationship are consistent with one another when the application
> updates the relationship."
>
> This does not appear to me to say that, on a delete, the user must null
> out the relationship on the object being deleted, or that the
> persistence provider must handle it when the application does so.
>
> It seems to me that setting a not-nullable relationship to null is an
> error (whether or not you are about to delete). When an object is
> deleted, its relationships go away with it. Maintaining the
> consistency of the relationships as stated above refers to ensuring
> that other objects that refer to the object being deleted no
> longer refer to it prior to the delete.
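>
> For example (plain JPA; the Order/OrderLine entities are made up for
> illustration), the application-side cleanup before a delete would be:
>
>     // OrderLine.order is a non-nullable ManyToOne. Before deleting the
>     // Order, make sure no OrderLine still refers to it:
>     Order order = em.find(Order.class, orderId);
>     for (OrderLine line : order.getLines()) {
>         em.remove(line);          // delete the dependents too (or re-point
>     }                             // them at another Order) -- but don't set
>     order.getLines().clear();     // a non-nullable reference to null
>     em.remove(order);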
>
> I believe that we should throw an exception when a not-nullable
> relationship is updated to null. The current exception is a
> SQLException. We can either continue to do that, or we could throw a
> more descriptive exception at commit-time if our metadata tells us
> that what we are attempting to null out is not nullable.
>
> -Tom
>
> Tom Ware wrote:
>
>> Hi Markus,
>>
>> Sorry for taking so long to get back to you about your changes for
>> issue 699. Here are some comments:
>>
>> 1. Removing all the change sets of deleted objects is a bit risky at
>> this point (commitToDatabase). I realize that occurs in the
>> "performDeletesFirst" scenario, but when users do that they realize
>> they are altering the default behavior and exposing themselves to risk.
>> 2. It might be a better idea to look at the code where we do the
>> update. Perhaps we can avoid doing the update for objects in the
>> deleted objects list (see the sketch after this list).
>> 3. I am trying to figure out if there may be a spec issue here, i.e.
>> is it possible for the persistence provider to know enough about the
>> underlying schema to know whether or not it can avoid an update that
>> has been asked for by the user? Are there cases where the
>> foreign key goes both ways in a bidirectional relationship where we
>> just can't know if we can avoid that update? I'll have to think
>> about that one some more and get back to you if/when I come up with
>> an answer.
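>>
>> Regarding point 2 above, a minimal sketch of what I mean (the
>> deleted-objects lookup and the write call are placeholders, not the
>> actual method names):
>>
>>     // In the code path that issues the UPDATE for a changed object,
>>     // skip any object the unit of work has registered for deletion:
>>     if (unitOfWork.getDeletedObjects().containsKey(object)) {
>>         return;          // the row is going away; don't null out its FKs first
>>     }
>>     writeUpdate(object); // placeholder for the normal update path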
>>
>> -Tom
>>
>> Markus Fuchs wrote:
>>
>>> Hi Tom,
>>>
>>> Tom Ware wrote:
>>>
>>>> Hi Markus,
>>>>
>>>> Comments inline
>>>>
>>>> Markus Fuchs wrote:
>>>>
>>>>> Hi Tom!
>>>>>
>>>>> [Including the persistence alias in the discussion]
>>>>>
>>>>> I hope you had a great vacation!
>>>>>
>>>>> While investigating issue 879 I noticed that the class
>>>>> OneToManyMapping doesn't need to implement the method
>>>>> hasInverseConstraintDependency().
>>>>>
>>>>> The dependencies corresponding to a OneToMany/ManyToOne relationship
>>>>> are always added by the owning side, which is represented by the
>>>>> class
>>>>> OneToOneMapping in TopLink, correct?
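>>>>>
>>>>> (For reference, in a bidirectional OneToMany/ManyToOne it is the
>>>>> ManyToOne side that holds the foreign key; entity names below are
>>>>> made up:)
>>>>>
>>>>>     @Entity public class Dept {
>>>>>         @Id long id;
>>>>>         @OneToMany(mappedBy = "dept")
>>>>>         Collection<Emp> emps;                  // non-owning side
>>>>>     }
>>>>>
>>>>>     @Entity public class Emp {
>>>>>         @Id long id;
>>>>>         @ManyToOne(optional = false)
>>>>>         Dept dept;                             // owning side, holds the FK
>>>>>     }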
>>>>>
>>>>> So, the dependencies are added twice: once from the owning and once
>>>>> from the non-owning side. It should be possible to remove this method,
>>>>> letting OneToManyMapping fall back to the default implementation
>>>>> returning 'false'. Would you agree?
>>>>>
>>>>
>>>> If we make this change, how will the commit order calculation logic
>>>> work when we are processing a OneToMany?  With the code as it is,
>>>> it looks to me like if we have OneToManyMapping return false from
>>>> this method, in recordMappingDependencies() we will end up
>>>> adding a dependency that is in the incorrect order (because we will
>>>> get into the "if" rather than the "else").
>>>>
>>>
>>> I understand the handling of the two cases in
>>> recordMappingDependencies() as follows:
>>>
>>> (1) ((ForeignReferenceMapping) mapping).hasConstraintDependency() == true
>>>
>>> This is for the owning side. The referenced class is the owned class
>>> and is added with all its subclasses to the owner's list relatedNodes.
>>>
>>> (2) ((ForeignReferenceMapping) mapping).hasInverseConstraintDependency()
>>> == true
>>>
>>> This is for the non-owning side. The current class is the owned class
>>> and is added with all its subclasses to the owner's list of relatedNodes.
>>>
>>>
>>> recordMappingDependencies() uses an exclusive if ("if" ... "else if")
>>> construct. So if the two methods hasConstraintDependency() and
>>> hasInverseConstraintDependency() are implemented consistently for each
>>> mapping, there is no problem of adding "reversed" dependencies.
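>>>
>>> Put side by side, the structure is roughly the following (nodeFor() and
>>> addRelatedNodes() are placeholders here, not the actual commit-order
>>> code):
>>>
>>>     ForeignReferenceMapping frm = (ForeignReferenceMapping) mapping;
>>>     if (frm.hasConstraintDependency()) {
>>>         // case 1: owning side -- the referenced (owned) class and its
>>>         // subclasses go into the current (owner) node's relatedNodes
>>>         currentNode.addRelatedNodes(nodeFor(frm.getReferenceClass()));
>>>     } else if (frm.hasInverseConstraintDependency()) {
>>>         // case 2: non-owning side -- the current (owned) class and its
>>>         // subclasses go into the owner's relatedNodes
>>>         nodeFor(frm.getReferenceClass()).addRelatedNodes(currentNode);
>>>     }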
>>>
>>> The implementation of hasInverseConstraintDependency() for
>>> OneToManyMapping is *correct*, and shouldn't be changed. Rather, one
>>> could consider changing the implementation of
>>> recordMappingDependencies() to evaluate only the owning side (case 1)
>>> for calculating the dependencies. But it's not a big deal anyway. I
>>> withdraw my proposal to change OneToManyMapping.
>>>
>>>>> Could you please also comment on my evaluation of issue 879? The
>>>>> change in class ManyToManyMapping could be the fix for this issue.
>>>>>
>>>>
>>>> I'll add comments to the bug shortly.
>>>>
>>>>> For issue 699, I changed UnitOfWorkImpl to remove the ObjectChangeSets
>>>>> of all objects that are to be deleted before any updates/inserts are
>>>>> processed. I got the impression that the updates for
>>>>> deleted objects are executed in order to prevent foreign key (unique
>>>>> key) violations. Is this correct?
>>>>>
>>>>> There's no database constraint forcing us to nullify foreign keys for
>>>>> OneToMany relationships before other updates happen. The same is true
>>>>> for join table entries in the case of ManyToMany relationships, which
>>>>> don't need to be removed. The only case where nulling out the foreign
>>>>> key is important is for OneToOne relationships, which might have a
>>>>> unique constraint on the foreign key in the database. But as
>>>>> dependencies within the same class can't be determined anyway,
>>>>> removing those updates should not make a difference. Please see the
>>>>> attached UnitOfWorkImpl class for my change.
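>>>>>
>>>>> In essence the change does something like this (simplified sketch; the
>>>>> change-set accessors are approximations of the real UnitOfWorkImpl
>>>>> code, not exact method names):
>>>>>
>>>>>     // Before updates/inserts are written, drop the change sets of all
>>>>>     // objects registered for deletion, so no UPDATE nulling their
>>>>>     // foreign keys is issued for rows that are about to be deleted.
>>>>>     for (Object deleted : getDeletedObjects().keySet()) {
>>>>>         ObjectChangeSet cs = uowChangeSet.getObjectChangeSetForClone(deleted);
>>>>>         if (cs != null) {
>>>>>             uowChangeSet.removeObjectChangeSet(cs);
>>>>>         }
>>>>>     }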
>>>>>
>>>>
>>>> I am concerned about the change to UnitOfWorkImpl and will need to
>>>> take a closer look at the issue.  Here is my concern: it looks like
>>>> the way we are solving this issue is to go through the change sets
>>>> and build a change set as normal, and then make a second
>>>> pass to "fix" the change set.  Ideally, I think we would build a
>>>> change set that is correct from the beginning.
>>>> I'll take a look at this issue in some more depth and get back to you.
>>>>
>>>
>>> Thanks, I'll wait for your update!
>>>
>>> -- markus.
>>>
>>>>> I ran entity-persistence-tests w/o regressions with my changes. What
>>>>> do you think?
>>>>>
>>>>> Thanks very much,
>>>>>
>>>>> -- markus.
>>>>>
>>>>>
>>>>> Tom Ware wrote:
>>>>>
>>>>>> Hi Markus,
>>>>>>
>>>>>> I was on vacation, but it looks like you got your answer. Feel
>>>>>> free to contact me with questions.
>>>>>>
>>>>>> -Tom
>>>>>>
>>>>>> Markus Fuchs wrote:
>>>>>>
>>>>>>> Hi Tom,
>>>>>>>
>>>>>>> is it ok if I look into this?
>>>>>>>
>>>>>>> Thanks!
>>>>>>>
>>>>>>> -- markus.
>>>>>>>
>>>>>>
>>