dev@glassfish.java.net

RE: About GLASSFISH-18861

From: Lv Songping <lvsongping_at_cn.fujitsu.com>
Date: Fri, 27 Jul 2012 13:15:27 +0800

Dear Shingwai, Hong Zhang:

>>>>>> Thanks for your suggestions. I have some questions about it:
>>>>>>> Probably once Shingwai checks in the fix for 18866, this issue
>>>>>>> will be mostly fixed too.
>>>>>> 1. Today I found that Shingwai has fixed issue 18866, but I
>>>>>> think there are also some other issues that haven't been fixed.
>>>>>> What Shingwai has fixed only covers applications deployed on the
>>>>>> server. It doesn't work when applications are deployed on a
>>>>>> cluster. For example:
>>>>>> (1) Create a cluster and an instance, called cluster001 and
>>>>>> instance001.
>>>>>> (2) Start cluster001.
>>>>>> (3) Deploy test_sample1 on cluster001 with the context root value
>>>>>> test_sample1.
>>>>>> (4) Deploy test_sample2 on cluster001 with the context root value
>>>>>> test_sample1 too.
>>>>>> (5) test_sample2 deploys successfully, with warning messages
>>>>>> telling the user that the context root is duplicated.
>>>>>> Regarding the above, I don't think it's a good idea to let the
>>>>>> application deploy successfully on the same cluster with only
>>>>>> some warning messages. I think applications with the same context
>>>>>> root shouldn't be deployable on the same cluster, and the user
>>>>>> should get an error message, just as when the applications are
>>>>>> deployed on the server.
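
For reference, I am reproducing the scenario above with commands along
these lines (the node name localhost-domain1 is just the default on my
machine and may differ):

    asadmin create-cluster cluster001
    asadmin create-instance --cluster cluster001 --node localhost-domain1 instance001
    asadmin start-cluster cluster001
    asadmin deploy --target cluster001 --contextroot test_sample1 test_sample1.war
    asadmin deploy --target cluster001 --contextroot test_sample1 test_sample2.war

The second deploy succeeds with only a warning.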
>>>>> I had a discussion with Hong before my checkin.
>>>>> There is a difference between DAS and cluster on the deployment
>>>>> side.
>>>>> There are actually two "internal" phases in deployment: "deploy"
>>>>> and loading. For DAS, "deploy" and loading happen at the same
>>>>> time.
>>>>> For a cluster, "deploy" happens on the DAS and loading on the
>>>>> instances.
>>>>> In this particular case, the failure comes from loading.
>>>>> In our current implementation, the deployment will not fail when
>>>>> loading fails in a cluster.
>>>>> This explains the scenario above.
>>>>> Hong, any further comments?
>>>> Yes, this follows the same model we have used since v2 and earlier
>>>> releases: we only roll back deployment on the DAS and don't roll
>>>> back failed loading on instances, for various reasons. The user is
>>>> given a warning message telling them that the application will not
>>>> run properly on the instances in this case, and that they should
>>>> fix their application and then redeploy.
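
To make sure I understand that model, here is how I read the behavior,
as a runnable sketch (made-up types and names, not the actual
deployment code):

    import java.util.Arrays;
    import java.util.List;

    public class DeployModelSketch {
        // The "deploy" phase: register the application in domain.xml
        // on the DAS.
        static void registerOnDas(String app) {
            System.out.println("registered " + app + " in domain.xml");
        }

        // The loading phase: this is where a duplicate context root
        // would fail.
        static boolean load(String app, String where) {
            System.out.println("loading " + app + " on " + where);
            return !"test_sample2".equals(app); // pretend this app fails
        }

        static void deploy(String app, boolean targetIsDas,
                List<String> instances) {
            registerOnDas(app);
            if (targetIsDas) {
                // On the DAS both phases run together, so a loading
                // failure fails the whole deployment.
                if (!load(app, "server")) {
                    System.out.println("deployment failed and rolled back");
                }
            } else {
                // On a cluster, loading happens on each instance after
                // "deploy" has already succeeded, so a loading failure
                // only produces a warning.
                for (String instance : instances) {
                    if (!load(app, instance)) {
                        System.out.println("WARNING: " + app
                                + " will not run properly on " + instance);
                    }
                }
            }
        }

        public static void main(String[] args) {
            deploy("test_sample2", false, Arrays.asList("instance001"));
        }
    }

If that reading is right, the duplicate context root is only detected
during loading, which is why the cluster case ends with a warning
instead of a failed deployment.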
>>> Thanks for your advice. I have two questions that still confuse me:
>>> 1. Does that mean the deployment behavior from GLASSFISH-18866 does
>>> not need to be revised for the cluster case?
>>> 2. The tracker issue GLASSFISH-18861 hasn't been resolved yet; I
>>> would like to know whether the web team will investigate and resolve
>>> it. If it is not convenient for you to investigate this issue, I'll
>>> investigate it instead.
>> The behavior for 18861 has changed after my fix for 18866.
>> I have just updated the info in this bug.
>> WebContainer does throw an exception in this case.
>>
>> Hong,
>> Is the failure status propagated correctly?
>No, I haven't looked into this since your fix. Jeremy could check the
>current behavior if he has some cycles to look into this.
Thanks for your reply. I have checked the latest GlassFish after Shingwai
fixed GLASSFISH-18866. Although the WebContainer now throws an exception
for duplicateContextRoot, I still see some problems with GLASSFISH-18866
when setting the context root of test_sample2 to /test_sample1:
(1) Neither application can be accessed, because of the context root
conflict.
(2) test_sample2 can't be undeployed until you redeploy the application.
(3) The context root of test_sample1 is the same as that of test_sample2.

[My opinion]
(1) I think the operation of setting the context root of test_sample2 to
/test_sample1 should not succeed, because after the operation
test_sample1 can't be accessed, and the user must redeploy test_sample1
if he wants to access it again.
(2) After setting the context root of test_sample2 to /test_sample1, it
is not user-friendly that you have to redeploy test_sample2 just to be
able to undeploy it.
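
For completeness, the sequence I am testing with looks like the
following (the dotted name is how the application appears in my domain,
so treat it as an example):

    asadmin set applications.application.test_sample2.context-root=/test_sample1
    asadmin undeploy test_sample2
    asadmin deploy --force=true --contextroot test_sample2 test_sample2.war
    asadmin undeploy test_sample2

The first undeploy fails for me; only after the forced redeploy does the
second undeploy succeed.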

All in all, I think issue 18861 still exists.
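
What I have in mind is roughly the following shape of pre-validation,
run before the new value is committed to domain.xml. This is only a
sketch with hypothetical names (my actual patch is in the GitHub
repository linked in my first mail below):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class SetContextRootSketch {
        // target name -> context roots currently recorded in domain.xml
        private final Map<String, Set<String>> rootsByTarget =
                new HashMap<String, Set<String>>();

        public void setContextRoot(String target, String contextRoot) {
            Set<String> roots = rootsByTarget.get(target);
            if (roots == null) {
                roots = new HashSet<String>();
                rootsByTarget.put(target, roots);
            }
            if (!roots.add(contextRoot)) {
                // Reject the set operation, so domain.xml is never left
                // holding a duplicate value that has to be rolled back.
                throw new IllegalArgumentException("context root "
                        + contextRoot + " is already in use on " + target);
            }
            // ... only now would the value be written to domain.xml ...
        }

        public static void main(String[] args) {
            SetContextRootSketch cmd = new SetContextRootSketch();
            cmd.setContextRoot("server", "/test_sample1");
            cmd.setContextRoot("server", "/test_sample1"); // throws here
        }
    }

As Hong noted below, a complete check would also have to account for
virtual servers and for context roots packaged inside ear files, so this
sketch covers only the simplest case.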

--Best Regards
--Jeremy Lv


-----Original Message-----
From: Hong Zhang [mailto:hong.hz.zhang_at_oracle.com]
Sent: Friday, July 27, 2012 4:40 AM
To: dev_at_glassfish.java.net
Subject: Re: About GLASSFISH-18861

Shing Wai Chan wrote:
> On 7/25/12 7:41 PM, Lv Songping wrote:
>> Dear Shingwai, Hong Zhang:
>>
>>>>> Thanks for your suggestions. I have some questions about it:
>>>>>> Probably once Shingwai checks in the fix for 18866, this issue
>>>>>> will be mostly fixed too.
>>>>> 1. Today I found that Shingwai has fixed issue 18866, but I think
>>>>> there are also some other issues that haven't been fixed.
>>>>> What Shingwai has fixed only covers applications deployed on the
>>>>> server. It doesn't work when applications are deployed on a
>>>>> cluster. For example:
>>>>> (1) Create a cluster and an instance, called cluster001 and
>>>>> instance001.
>>>>> (2) Start cluster001.
>>>>> (3) Deploy test_sample1 on cluster001 with the context root value
>>>>> test_sample1.
>>>>> (4) Deploy test_sample2 on cluster001 with the context root value
>>>>> test_sample1 too.
>>>>> (5) test_sample2 deploys successfully, with warning messages
>>>>> telling the user that the context root is duplicated.
>>>>> Regarding the above, I don't think it's a good idea to let the
>>>>> application deploy successfully on the same cluster with only some
>>>>> warning messages. I think applications with the same context root
>>>>> shouldn't be deployable on the same cluster, and the user should
>>>>> get an error message, just as when the applications are deployed
>>>>> on the server.
>>>> I had a discussion with Hong before my checkin.
>>>> There is a difference between DAS and cluster on the deployment
>>>> side.
>>>> There are actually two "internal" phases in deployment: "deploy" and
>>>> loading. For DAS, "deploy" and loading happen at the same time.
>>>> For a cluster, "deploy" happens on the DAS and loading on the
>>>> instances.
>>>> In this particular case, the failure comes from loading.
>>>> In our current implementation, the deployment will not fail when
>>>> loading fails in a cluster.
>>>> This explains the scenario above.
>>>> Hong, any further comments?
>>> Yes, this follows the same model we have used since v2 and earlier
>>> releases: we only roll back deployment on the DAS and don't roll back
>>> failed loading on instances, for various reasons. The user is given a
>>> warning message telling them that the application will not run
>>> properly on the instances in this case, and that they should fix
>>> their application and then redeploy.
>> Thanks for your advice. I have two questions that still confuse me:
>> 1. Does that mean the deployment behavior from GLASSFISH-18866 does
>> not need to be revised for the cluster case?
>> 2. The tracker issue GLASSFISH-18861 hasn't been resolved yet; I would
>> like to know whether the web team will investigate and resolve it. If
>> it is not convenient for you to investigate this issue, I'll
>> investigate it instead.
> The behavior for 18861 has changed after my fix for 18866.
> I have just updated the info in this bug.
> WebContainer does throw an exception in this case.
>
> Hong,
> Is the failure status propagated correctly?
No, I haven't looked into this since your fix. Jeremy could check the
current behavior if he has some cycles to look into this.
>
>>
>> -----Original Message-----
>> From: Hong Zhang [mailto:hong.hz.zhang_at_oracle.com]
>> Sent: Thursday, July 26, 2012 1:48 AM
>> To: dev_at_glassfish.java.net
>> Subject: Re: About GLASSFISH-18861
>>
>>
>>
>> On 7/25/2012 1:06 PM, Shing Wai Chan wrote:
>>> On 7/25/12 1:17 AM, Lv Songping wrote:
>>>> Dear Shing Wai Chan:
>>>> Cc: Hong Zhang:
>>>>
>>>> Thanks for your suggestions. I have some questions about it:
>>>>> Probably once Shingwai checks in the fix for 18866, this issue
>>>>> will be mostly fixed too.
>>>> 1. Today I found that Shingwai has fixed issue 18866, but I think
>>>> there are also some other issues that haven't been fixed.
>>>> What Shingwai has fixed only covers applications deployed on the
>>>> server. It doesn't work when applications are deployed on a cluster.
>>>> For example:
>>>> (1) Create a cluster and an instance, called cluster001 and
>>>> instance001.
>>>> (2) Start cluster001.
>>>> (3) Deploy test_sample1 on cluster001 with the context root value
>>>> test_sample1.
>>>> (4) Deploy test_sample2 on cluster001 with the context root value
>>>> test_sample1 too.
>>>> (5) test_sample2 deploys successfully, with warning messages telling
>>>> the user that the context root is duplicated.
>>>> Regarding the above, I don't think it's a good idea to let the
>>>> application deploy successfully on the same cluster with only some
>>>> warning messages. I think applications with the same context root
>>>> shouldn't be deployable on the same cluster, and the user should get
>>>> an error message, just as when the applications are deployed on the
>>>> server.
>>> I had a discussion with Hong before my checkin.
>>> There is a difference between DAS and cluster on the deployment side.
>>> There are actually two "internal" phases in deployment: "deploy" and
>>> loading. For DAS, "deploy" and loading happen at the same time.
>>> For a cluster, "deploy" happens on the DAS and loading on the
>>> instances.
>>> In this particular case, the failure comes from loading.
>>> In our current implementation, the deployment will not fail when
>>> loading fails in a cluster.
>>> This explains the scenario above.
>>> Hong, any further comments?
>> Yes, this follows the same model we have used since v2 and earlier
>> releases: we only roll back deployment on the DAS and don't roll back
>> failed loading on instances, for various reasons. The user is given a
>> warning message telling them that the application will not run
>> properly on the instances in this case, and that they should fix
>> their application and then redeploy.
>>
>> - Hong
>>>
>>>
>>>>> Thanks for looking into the issue. A few comments:
>>>>> 1. The set command is a generic command and should not contain
>>>>> anything specific (like context root).
>>>>> 2. Just checking the context root in domain.xml does not guarantee
>>>>> that it is unique, as the context roots used in an ear application
>>>>> are not in domain.xml. Also, the context root only needs to be
>>>>> unique per virtual server, so applications could use the same
>>>>> context root if they are loaded on different virtual servers. I had
>>>>> actually discussed with Shingwai whether we should do
>>>>> pre-validation for the context root; Shingwai mentioned it would be
>>>>> tricky to get this check correct, so the current logic is done in
>>>>> the web container when the application is being loaded, where all
>>>>> the necessary information is available.
>>>>> Probably once Shingwai checks in the fix for 18866, this issue
>>>>> will be mostly fixed too.
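
As a toy illustration of the per-virtual-server rule described above
(hypothetical names, not GlassFish code, just the shape of the check):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class VirtualServerRootsSketch {
        public static void main(String[] args) {
            // Context roots seen so far, keyed by virtual server.
            Map<String, Set<String>> seen =
                    new HashMap<String, Set<String>>();
            String[][] deployments = {
                {"server", "/test_sample1"}, // first use: ok
                {"vs2",    "/test_sample1"}, // ok, different virtual server
                {"server", "/test_sample1"}, // duplicate on the same one
            };
            for (String[] d : deployments) {
                Set<String> roots = seen.get(d[0]);
                if (roots == null) {
                    roots = new HashSet<String>();
                    seen.put(d[0], roots);
                }
                System.out.println(d[0] + " " + d[1] + " -> "
                        + (roots.add(d[1]) ? "ok" : "duplicate"));
            }
        }
    }

The same root is fine on two different virtual servers but is a
duplicate on the same one, and the ear case adds roots that are not in
domain.xml at all, which is why the check lives in the web container.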
>>>> 2. Issue GLASSFISH-18861 hasn't been fixed yet.
>>>>
>>>> Thanks for your suggestions. The basic logic of this issue is as
>>>> follows:
>>>> a. When we update the context root through the GUI or the set
>>>> command, the set method in SetCommand is executed first.
>>>> b. Then the HK2 module updates the context root value in domain.xml.
>>>> c. After the context root value in domain.xml has been updated,
>>>> there is no rollback operation for a duplicated context root value;
>>>> error messages are merely written to server.log saying that the
>>>> context root is duplicated.
>>>> Given all that, the first idea that came to my mind was to prevent
>>>> the update to domain.xml in SetCommand. But that's not a good idea,
>>>> because it would put something specific (the context root) into a
>>>> generic command.
>>>> I think we should instead create a rollback method for the case
>>>> where the context root in domain.xml is duplicated.
>>>>
>>>> --Best Regards
>>>> --Jeremy Lv
>>>>
>>>> -----Original Message-----
>>>> From: Shing Wai Chan [mailto:shing.wai.chan_at_oracle.com]
>>>> Sent: Tuesday, July 24, 2012 1:08 AM
>>>> To: dev_at_glassfish.java.net
>>>> Cc: Hong Zhang
>>>> Subject: Re: About GLASSFISH-18861
>>>>
>>>> On 7/23/12 8:00 AM, Hong Zhang wrote:
>>>>> Hi, Jeremy
>>>>> Thanks for looking into the issue. A few comments:
>>>>> 1. The set command is a generic command and should not contain
>>>>> anything specific (like context root).
>>>>> 2. Just checking the context root in domain.xml does not guarantee
>>>>> that it is unique, as the context roots used in an ear application
>>>>> are not in domain.xml. Also, the context root only needs to be
>>>>> unique per virtual server, so applications could use the same
>>>>> context root if they are loaded on different virtual servers. I had
>>>>> actually discussed with Shingwai whether we should do
>>>>> pre-validation for the context root; Shingwai mentioned it would be
>>>>> tricky to get this check correct, so the current logic is done in
>>>>> the web container when the application is being loaded, where all
>>>>> the necessary information is available.
>>>>>
>>>>> Probably once Shingwai checks in the fix for 18866, this issue
>>>>> will be mostly fixed too.
>>>> I will check in the fix once the svn is open.
>>>> Shing Wai Chan
>>>>> Thanks,
>>>>>
>>>>> - Hong
>>>>>
>>>>>
>>>>> On 7/23/2012 4:45 AM, Lv Songping wrote:
>>>>>> Dear Hong Zhang
>>>>>> Cc: GlassFish admin team, Tom
>>>>>>
>>>>>> I have worked on a fix for the issue (GLASSFISH-18861), "After
>>>>>> setting the context root of two wars deployed on the server target
>>>>>> to the same value, both wars fail to be accessed", and pushed the
>>>>>> modified files to https://github.com/LvSongping/GLASSFISH-18861.
>>>>>> Please review it and give me some advice.
>>>>>>
>>>>>> I think this is an admin-cli issue related to the set method in
>>>>>> SetCommand.java: it writes the new values into the domain.xml file
>>>>>> before checking whether the context root is already used by
>>>>>> another application on the same target. I have defined two
>>>>>> methods, called isCtxRootexist and validateTargetDup, to check
>>>>>> whether the context root already exists on the same target.
>>>>>>
>>>>>> The issue URL is as follows:
>>>>>> http://java.net/jira/browse/GLASSFISH-18861
>>>>>>
>>>>>> --Best Regards
>>>>>> --Jeremy Lv
>>>>>>
>>>>>>
>>>>
>>
>