dev@glassfish.java.net

RE: About GLASSFISH-18861

From: Lv Songping <lvsongping_at_cn.fujitsu.com>
Date: Fri, 27 Jul 2012 10:42:35 +0800

>> Dear Shingwai, Hong Zhang:
>>
>>
>>>>> Thanks for your suggestions. I have some questions about it:
>>>>>
>>>>>> Probably once Shingwai checks in fix for 18866, this issue
>>>>>> will be mostly fixed too.
>>>>>>
>>>>> 1. Today I found that Shingwai has fixed issue 18866, but I
>>>>> think there are also some other issues that haven't been fixed.
>>>>> What Shingwai has fixed is only related to applications
>>>>> deployed on the server. It doesn't work when applications are deployed
>>>>> on a cluster. For
>>>>> example:
>>>>> (1) Create a cluster and instance called cluster001 and instance001.
>>>>> (2) Start the cluster001.
>>>>> (3) Deploy test_sample1 on the cluster001 and the context root
>>>>> value is test_sample1.
>>>>> (4) Deploy test_sample2 on the cluster001 and the context root
>>>>> value is
>>>>> test_sample1 too.
>>>>> (5) test_sample2 can be deployed successfully, with warning
>>>>> messages telling the user that the context root is duplicated.
>>>>> Regarding the above operation, I don't think it's a good idea to permit
>>>>> the application to be deployed successfully on the same cluster with
>>>>> only some warning messages. I think applications with the same context
>>>>> root shouldn't be deployed on the same cluster, and the user should get
>>>>> an error message just as with applications deployed on the server.
>>>>>
>>>> I had a discussion with Hong before my checkin.
>>>> There is a difference between DAS and cluster on the deployment side.
>>>> There are actually two "internal" phases in deployment: "deploy" and
>>>> loading. For DAS, "deploy" and loading happen at the same time.
>>>> For a cluster, "deploy" happens on the DAS and loading on the instances.
>>>> In this particular case, the failure is from loading.
>>>> In our current implementation, the deployment will not fail when the
>>>> loading fails in a cluster.
>>>> This explains the scenario.
>>>> Hong, any further comments?
>>>>
>>
>>
>>> Yes, this follows the same model we have used since v2 and earlier
>>> releases: we only roll back deployment on DAS and don't roll back
>>> failed loading on instances, for various reasons. The user will be
>>> given a warning message telling them the application will not run
>>> properly on the instance in this case and that they should fix their
>>> applications and then redeploy.
>>>
>> Thanks for your advice. I have two questions I am still unsure about:
>> 1. Does this mean the current deployment behavior related to
>> GLASSFISH-18866 does not need to be revised for the cluster case?
>>
>Right, that is the expected behavior with the current
>design/implementation.
>> 2. The issue tracker entry for GLASSFISH-18861 hasn't been resolved yet.
>> I would like to know whether the web team will investigate and resolve it.
>> If it is not convenient for you to investigate this issue, I'll
>> investigate it instead.
>>
>You are welcome to investigate issue 18861 if the problem is not completely
>addressed by the fix to 18866.
>
>And some information about this code path: any context root setting after
>deployment (through CLI or GUI) will trigger
>main/nucleus/core/kernel/src/main/java/com/sun/enterprise/v3/server/ApplicationConfigListener.java
>to reload the application (through ApplicationLifecycle.disable/enable),
>and that should fail because of the conflicting context root.

>But we probably did not do a proper rollback for the enable code path as we
>did for the deploy code path. You could look into
>main/nucleus/core/kernel/src/main/java/com/sun/enterprise/v3/server/ApplicationLifecycle.java
>to see how the deploy method does the rollback through ProgressTracker, and
>we can probably do something similar for the enable method.

> (The rollback referred to above is just for the DAS case.)

>Thanks,

Thanks for your suggestions. I'll try to investigate it.
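
As a very rough starting point, the kind of duplicate context root check I
described earlier (the isCtxRootexist/validateTargetDup idea) might
conceptually look like the sketch below. This is only an illustration under
my own assumptions: AppConfig and all the accessors here are placeholders, not
the real com.sun.enterprise.config.serverbeans API, and as Hong pointed out,
context roots packaged inside ear sub-modules do not appear in domain.xml, so
a check like this cannot be complete on its own.

    import java.util.List;
    import java.util.Objects;

    // Sketch only: AppConfig is a placeholder for the application entries
    // recorded in domain.xml, not the real config bean API.
    final class ContextRootCheckSketch {

        static final class AppConfig {
            final String name;
            final String target;        // e.g. "server" or "cluster001"
            final String virtualServer; // context roots only clash per virtual server
            final String contextRoot;

            AppConfig(String name, String target, String virtualServer,
                      String contextRoot) {
                this.name = name;
                this.target = target;
                this.virtualServer = virtualServer;
                this.contextRoot = contextRoot;
            }
        }

        // Returns true if another application on the same target and virtual
        // server already uses the requested context root.
        static boolean isContextRootDuplicated(List<AppConfig> apps, String appName,
                String target, String virtualServer, String newContextRoot) {
            for (AppConfig app : apps) {
                if (app.name.equals(appName)) {
                    continue; // changing the app's own context root is fine
                }
                if (Objects.equals(app.target, target)
                        && Objects.equals(app.virtualServer, virtualServer)
                        && Objects.equals(app.contextRoot, newContextRoot)) {
                    return true;
                }
            }
            return false;
        }
    }

If such a check fails, the idea would be to report an error (or roll back)
before the conflicting value is left in domain.xml.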

--Best Regards
--Jeremy Lv
-----Original Message-----
From: Hong Zhang [mailto:hong.hz.zhang_at_oracle.com]
Sent: Friday, July 27, 2012 2:45 AM
To: dev_at_glassfish.java.net
Subject: Re: About GLASSFISH-18861

Lv Songping wrote:
> Dear Shingwai, Hong Zhang:
>
>
>>>> Thanks for your suggestions. I have some questions about it:
>>>>
>>>>> Probably once Shingwai checks in fix for 18866, this issue will be
>>>>> mostly fixed too.
>>>>>
>>>> 1. Today I found that Shingwai has fixed issue 18866, but I think
>>>> there are also some other issues that haven't been fixed.
>>>> What Shingwai has fixed is only related to applications deployed
>>>> on the
>>>> server. It doesn't work when applications are deployed on a cluster. For
>>>> example:
>>>> (1) Create a cluster and instance called cluster001 and instance001.
>>>> (2) Start the cluster001.
>>>> (3) Deploy test_sample1 on the cluster001 and the context root value is
>>>> test_sample1.
>>>> (4) Deploy test_sample2 on the cluster001 and the context root value is
>>>> test_sample1 too.
>>>> (5) test_sample2 can be deployed successfully, with warning
>>>> messages
>>>> telling the user that the context root is duplicated.
>>>> Regarding the above operation, I don't think it's a good idea to permit the
>>>> application to be deployed successfully on the same cluster with only some
>>>> warning messages. I think applications with the same context root shouldn't
>>>> be deployed on the same cluster, and the user should get an error message
>>>> just as
>>>> with applications deployed on the server.
>>>>
>>> I had a discussion with Hong before my checkin.
>>> There is a difference between DAS and cluster on the deployment side.
>>> There are actually two "internal" phases in deployment: "deploy" and
>>> loading.
>>> For DAS, "deploy" and loading happen at the same time.
>>> For a cluster, "deploy" happens on the DAS and loading on the instances.
>>> In this particular case, the failure is from loading.
>>> In our current implementation, the deployment will not fail when the
>>> loading fails in a cluster.
>>> This explains the scenario.
>>> Hong, any further comments?
>>>
>
>
>> Yes, this follows the same model we have used since v2 and earlier
>> releases: we only roll back deployment on DAS and don't roll back failed
>> loading on instances, for various reasons. The user will be given a
>> warning message telling them the application will not run properly on the
>> instance in this case and that they should fix their applications and then
>> redeploy.
>>
> Thanks for your advice. I have two questions I am still unsure about:
> 1. Does this mean the current deployment behavior related to GLASSFISH-18866
> does not need to be revised for the cluster case?
>
Right, that is the expected behavior with the current
design/implementation.
> 2. The issue tracker entry for GLASSFISH-18861 hasn't been resolved yet. I
> would like to know whether the web team will investigate and resolve it.
> If it is not convenient for you to investigate this issue, I'll
> investigate it instead.
>
You are welcome to investigate issue 18861 if the problem is not
completely addressed by the fix to 18866.

And some information about this code path: any context root setting
after deployment (through CLI or GUI) will trigger
main/nucleus/core/kernel/src/main/java/com/sun/enterprise/v3/server/ApplicationConfigListener.java
to reload the application (through ApplicationLifecycle.disable/enable),
and that should fail because of the conflicting context root.

But we probably did not do a proper rollback for the enable code path as
we did for the deploy code path. You could look into
main/nucleus/core/kernel/src/main/java/com/sun/enterprise/v3/server/ApplicationLifecycle.java
to see how the deploy method does the rollback through ProgressTracker,
and we can probably do something similar for the enable method.

(The rollback referred to above is just for the DAS case.)
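
Just to illustrate the pattern (this is only a rough sketch with made-up
names, not the actual ApplicationLifecycle or ProgressTracker code): the idea
is to record each step that succeeds during enable and, if a later step fails,
undo the recorded steps in reverse order and then report the failure.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical illustration of the rollback idea for an enable-like
    // operation. Not the real GlassFish classes or method names.
    final class EnableRollbackSketch {

        // One undoable step of the enable operation.
        interface Step {
            void perform() throws Exception;
            void undo();
        }

        // Runs the steps in order; on failure, undoes the completed steps in
        // reverse order and rethrows so the caller sees the enable failure.
        static void enable(Iterable<Step> steps) throws Exception {
            Deque<Step> completed = new ArrayDeque<>();
            try {
                for (Step step : steps) {
                    step.perform();
                    completed.push(step);   // remember for possible rollback
                }
            } catch (Exception failure) {
                while (!completed.isEmpty()) {
                    completed.pop().undo(); // roll back in reverse order
                }
                throw failure;
            }
        }
    }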

Thanks,

- Hong
> -----Original Message-----
> From: Hong Zhang [mailto:hong.hz.zhang_at_oracle.com]
> Sent: Thursday, July 26, 2012 1:48 AM
> To: dev_at_glassfish.java.net
> Subject: Re: About GLASSFISH-18861
>
>
>
> On 7/25/2012 1:06 PM, Shing Wai Chan wrote:
>
>> On 7/25/12 1:17 AM, Lv Songping wrote:
>>
>>> Dear Shing Wai Chan:
>>> Cc: Hong Zhang:
>>>
>>> Thanks for your suggestions. I have some questions about it:
>>>
>>>> Probably once Shingwai checks in fix for 18866, this issue will be
>>>> mostly fixed too.
>>>>
>>> 1. Today I found that Shingwai has fixed issue 18866, but I think
>>> there are also some other issues that haven't been fixed.
>>> What Shingwai has fixed is only related to applications deployed
>>> on the
>>> server. It doesn't work when applications are deployed on a cluster. For
>>> example:
>>> (1) Create a cluster and instance called cluster001 and instance001.
>>> (2) Start the cluster001.
>>> (3) Deploy test_sample1 on the cluster001 and the context root value is
>>> test_sample1.
>>> (4) Deploy test_sample2 on the cluster001 and the context root value is
>>> test_sample1 too.
>>> (5) test_sample2 can be deployed successfully, with warning
>>> messages
>>> telling the user that the context root is duplicated.
>>> Regarding the above operation, I don't think it's a good idea to permit the
>>> application to be deployed successfully on the same cluster with only some
>>> warning messages. I think applications with the same context root shouldn't be
>>> deployed on the same cluster, and the user should get an error message
>>> just as
>>> with applications deployed on the server.
>>>
>> I had a discussion with Hong before my checkin.
>> There is a difference between DAS and cluster on the deployment side.
>> There are actually two "internal" phases in deployment: "deploy" and
>> loading.
>> For DAS, "deploy" and loading happen at the same time.
>> For a cluster, "deploy" happens on the DAS and loading on the instances.
>> In this particular case, the failure is from loading.
>> In our current implementation, the deployment will not fail when the
>> loading fails in a cluster.
>> This explains the scenario.
>> Hong, any further comments?
>>
>
> Yes, this follows the same model we have used since v2 and earlier
> releases: we only roll back deployment on DAS and don't roll back failed
> loading on instances, for various reasons. The user will be given a
> warning message telling them the application will not run properly on the
> instance in this case and that they should fix their applications and then
> redeploy.
>
> - Hong
>
>>
>>
>>>> Thanks for looking into the issue. A few comments:
>>>> 1. The set command is a generic command and should not contain
>>>> anything specific (like context root).
>>>> 2. Just checking the context root in domain.xml does not guarantee
>>>> that it is unique, as the context roots used in the ear application are
>>>> not in the domain.xml. Also, the context root just needs to be
>>>> unique per virtual server, so applications could use the same context root
>>>> if the applications are loaded on different virtual servers. I had
>>>> actually discussed with Shingwai whether we should do
>>>> pre-validation for the context root; Shingwai mentioned it would be tricky
>>>> to get this part of the check correct, so the current logic is done in the
>>>> web container when the application is being loaded, where all the necessary
>>>> information is available.
>>>> Probably once Shingwai checks in fix for 18866, this issue will be
>>>> mostly fixed too.
>>>>
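
(To make the last point above more concrete: conceptually, the load-time
check amounts to a registration keyed by virtual server and context root,
roughly like the sketch below. This is an illustration only, with placeholder
names; it is not the actual web container code.)

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Conceptual sketch of a load-time context root registration keyed by
    // virtual server. Placeholder names only, not the real web container code.
    final class ContextRootRegistrySketch {

        // key: virtualServer + ":" + contextRoot, value: application name
        private final Map<String, String> registered = new ConcurrentHashMap<>();

        // Called while loading a web module; fails if the context root is
        // already taken on the same virtual server.
        void register(String virtualServer, String contextRoot, String appName) {
            String key = virtualServer + ":" + contextRoot;
            String existing = registered.putIfAbsent(key, appName);
            if (existing != null && !existing.equals(appName)) {
                throw new IllegalStateException("Context root " + contextRoot
                        + " on virtual server " + virtualServer
                        + " is already used by " + existing);
            }
        }

        // Called when unloading, so the context root can be reused.
        void unregister(String virtualServer, String contextRoot) {
            registered.remove(virtualServer + ":" + contextRoot);
        }
    }
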
>>> 2. The issue GLASSFISH-18861 hasn't been fixed yet.
>>>
>>> Thanks for your suggestions. The basic logic of this issue is as
>>> follows:
>>> a. When we update the context root through the GUI or a command, the set
>>> method in SetCommand will be executed first.
>>> b. Then the HK2 module will update the context root value in domain.xml.
>>> c. After the context root value in domain.xml has been updated, there is no
>>> rollback operation for the duplicated context root value in domain.xml;
>>> only error messages are written to server.log saying that the context
>>> root is duplicated.
>>> Given the above, the first idea that came to my mind was to prevent the
>>> update operation on domain.xml in SetCommand. But that's not a good
>>> idea because it would put something specific (the context root) into a
>>> generic command.
>>> I think we should instead create a rollback method for the case where the
>>> context root in domain.xml is duplicated.
>>>
>>> --Best Regards
>>> --Jeremy Lv
>>>
>>> -----Original Message-----
>>> From: Shing Wai Chan [mailto:shing.wai.chan_at_oracle.com]
>>> Sent: Tuesday, July 24, 2012 1:08 AM
>>> To: dev_at_glassfish.java.net
>>> Cc: Hong Zhang
>>> Subject: Re: About GLASSFISH-18861
>>>
>>> On 7/23/12 8:00 AM, Hong Zhang wrote:
>>>
>>>> Hi, Jeremy
>>>> Thanks for looking into the issue. A few comments:
>>>> 1. The set command is a generic command and should not contain
>>>> anything specific (like context root).
>>>> 2. Just checking the context root in domain.xml does not guarantee
>>>> that it is unique, as the context roots used in the ear application are
>>>> not in the domain.xml. Also, the context root just needs to be
>>>> unique per virtual server, so applications could use the same context root
>>>> if the applications are loaded on different virtual servers. I had
>>>> actually discussed with Shingwai whether we should do
>>>> pre-validation for the context root; Shingwai mentioned it would be tricky
>>>> to get this part of the check correct, so the current logic is done in the
>>>> web container when the application is being loaded, where all the necessary
>>>> information is available.
>>>>
>>>> Probably once Shingwai checks in fix for 18866, this issue will be
>>>> mostly fixed too.
>>>>
>>> I will check in the fix once svn is opened.
>>> Shing Wai Chan
>>>
>>>> Thanks,
>>>>
>>>> - Hong
>>>>
>>>>
>>>> On 7/23/2012 4:45 AM, Lv Songping wrote:
>>>>
>>>>> Dear Hong Zhang
>>>>> Cc: GlassFish admin team, Tom
>>>>>
>>>>> I have revised the issue (GLASSFISH-18861) about "After setting the
>>>>> context root of two wars deployed on the server target to the same
>>>>> value, both of the two wars fail to be accessed" and reflected the
>>>>> modified files into https://github.com/LvSongping/GLASSFISH-18861.
>>>>> Please review it and give me some advice.
>>>>>
>>>>> I think it is an admin-cli issue related to the set
>>>>> method in
>>>>> SetCommand.java: it sets the new values into the domain.xml file before
>>>>> checking
>>>>> whether the context root is already used by another
>>>>> application on
>>>>> the same target. I have defined two methods called isCtxRootexist and
>>>>> validateTargetDup to check whether the context root already exists
>>>>> on the
>>>>> same target.
>>>>>
>>>>> The issue url is as follows:
>>>>> http://java.net/jira/browse/GLASSFISH-18861
>>>>>
>>>>> --Best Regards
>>>>> --Jeremy Lv
>>>>>
>>>>>
>>>>>
>>>
>
>
>