Hi Swarna,
Thanks a lot for clarifying exactly which topics your "test framework
development" will cover.
1) In the present setup, I too had realized that a test case, and hence
the methods related to it, can become quite big, and it is best if we
have a separate file for each test. It does make the test framework
scalable.
But what will be the criteria for grouping tests? We could have
InstallGroup, UninstallGroup, TreeGroup, OperateGroup. Have you chalked
that out?
The InstallGroup could then have happy-path use cases and
invalid/negative use cases using all sorts of archives. Anissa once
sent us a list of valid archives that can be used for testing; we could
check that list in and use it in our test cases. So each test group
should have tests for both happy paths and unhappy paths.
Having test-group-specific packages will make it clearer to a test
developer where to place a new file.
Right now the package for test files is
com.sun.jbi.jsf.test
(I reused the package name from the previous HtmlUnit/Rhino Unit JBI
tests we had before).
We will also need to decide on a better package structure based on the
"test groups".
Yes, I initially tried separating tests into different files and ran
into issues. The trickier part will be telling Selenium which
frame/window to choose to start the next test after exiting the
previous one. I addressed similar issues when I had several tests in
one file and had to switch from one test method to another.
2) Having a parameter file is a good idea. We can also pass an option
to the ant target and read that. I did see that several people
suggested the "store" command to keep hard-coded values in variables
and reuse them, but I did not find it supported in the Selenium IDE
tests I developed before. A parameter file will also avoid hard-coded
path references in the tests.
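As a minimal sketch of the parameter-file idea (the class name, file
name, and property keys below are my own assumptions, nothing we have
agreed on yet), the test classes could load shared values through
java.util.Properties instead of hard-coding them:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical sketch: load shared test parameters (URLs, upload paths)
// from a properties file so they are not hard-coded in each test class.
public class TestParams {
    private final Properties props = new Properties();

    public TestParams(String path) throws IOException {
        FileInputStream in = new FileInputStream(path);
        try {
            props.load(in);
        } finally {
            in.close();
        }
    }

    // Fall back to a default so one missing entry does not break a run.
    public String get(String key, String defaultValue) {
        return props.getProperty(key, defaultValue);
    }
}
```

The ant target could then point the tests at the file with a -D system
property, for example "ant -Dtest.params=selenium.properties test"
(again, the property name is just an illustration).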
3) Regarding logging, we certainly need to redirect the test output to
a file. Right now I follow the exception's getMessage() output to see
the error. We might have to add logic to filter out timeout
exceptions/errors, which are pretty common and should not worry a user.
A timeout can sometimes cause a subsequent test to fail, and only then
does it become a reason to worry. Only errors like "element not found"
should worry the users.
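A rough sketch of that filtering idea could look like the class below.
The class name and the message check are my own assumptions; in the
tests I have seen, timeouts show up in the exception message, so a
simple string check may be enough as a first cut:

```java
// Hypothetical sketch: classify a test failure so that common timeout
// errors are logged as warnings, while real problems such as
// "element not found" are reported as failures that should worry a user.
public class FailureFilter {
    public static boolean isTimeout(Throwable t) {
        String msg = t.getMessage();
        return msg != null && msg.toLowerCase().contains("timed out");
    }

    public static String classify(Throwable t) {
        return isTimeout(t) ? "WARNING" : "FAILURE";
    }
}
```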
I would like to add one more item to our test-development TODO list:
robustness in repeatability.
This is an issue I feel needs attention whenever a user develops a
test case. My Solaris machine runs tests slowly and needs more time
for pages to load, so selenium.waitForPageToLoad(1000) times out and
elements cannot be found on a page even when they are there, while the
same tests work fine on Windows. The same test and timeout parameter
work fine for one run but not for another in the same setup. So the
tests should provide a reproducible path.
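One way to make the waits more robust than a single fixed timeout would
be to poll for a condition with a short interval until a deadline
passes, so the same test tolerates both the fast Windows box and the
slow Solaris one. This is only a sketch under my own naming, not
Selenium API; the check itself would wrap whatever Selenium call the
test needs (e.g. whether an element is present):

```java
// Hypothetical sketch: retry a check until it succeeds or a deadline
// passes, instead of relying on one fixed timeout value that behaves
// differently on Solaris and Windows machines.
public class Poller {
    public interface Check { boolean ok(); }

    public static boolean waitFor(Check check, long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.ok()) {
                return true; // condition met within the deadline
            }
            Thread.sleep(intervalMs);
        }
        return check.ok(); // one last try at the deadline
    }
}
```

A test would then fail only when the condition never becomes true
within the whole deadline, rather than on one unlucky slow page load.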
Targeting Firefox using the chrome URL is my highest priority, as it
can be run on Solaris as well as Windows.
And I suppose ADMINBUI is a typo, as I even saw it in the tests you
developed. It is actually Admin-GUI, where GUI stands for Graphical
User Interface.
Documentation is certainly an important part of the whole project, but
it can be addressed later. I think for the time being we should
address:
1) Happy-path coverage of all pages/functionality
2) Making the tests repeatable on at least one browser, needing fewer
changes from the user in our code.
Gradually we can start addressing other browsers, negative test cases,
documentation, the cluster profile (https protocol), and feedback from
other team members as we move ahead.
I would appreciate feedback from other team members on all these issues.
Priti
SwrnaLata Mishra wrote:
>
> Priti,
>
> I looked at the code in the repository, and the code looks good. I
> can see that you are executing JBITest.java through an ant target. I
> also noted the directory structure you created. I will use the same
> package structure to ensure easy integration of the code.
>
> During the meeting, you asked about what extra value my current work
> will add. May be I need to throw some light on this topic. In the
> current setup,
>
> 1. We have only one .java file containing all the test cases.
> However, in the future, when there are hundreds of test cases, we
> will need a better code-organization mechanism to arrange them.
>
> 2. Currently the ant 'test' target runs the test cases in this
> single java file. However, we should be able to tell the ant 'test'
> target whether we want to run all the test cases or just selected
> test cases written in one or more .java files.
>
> 3. JBITest.java does not log the result of each test execution to
> any output file. Similarly, just running the test cases is not
> enough; we need to code the criterion for test success/failure as
> part of the test case itself and log the results appropriately.
>
> 4. We also need to decide on a common documentation template for our
> test cases, because one cannot understand the purpose of a test just
> by reading its code. This will also give us traceability of our test
> cases.
>
> My idea of developing a framework will help in achieving the points
> mentioned above:
>
> 1. Developing and organizing the test cases
>
> The framework will organize tests into "Test Groups". A test group
> will be a collection of tests that serve a specific purpose. For
> example, there may be one called "Nightly Tests" that contains all
> the tests that should be run after each nightly deployment of
> GlassFish. There may be another called "System Tests".
>
> 2. Executing them in a selected manner.
>
> The framework will use a parameter file to identify the various test
> groups, each containing references to the classes that need to be run
> as well as the properties needed to run them. For example, instead of
> hardcoding the AdminBUI URL or file-upload location in each test
> class, the test classes will pick them up from this parameter file.
>
> 3. Directing the output into an XML format file.
>
> The framework will store the output from each test in an XML file,
> so that the results can be viewed and analyzed in an external
> application. Each test result will contain the purpose of the test,
> traceability to the documented test case, and the result of the test
> along with a timestamp.
>
> I am just trying to augment the test framework setup that we have in
> place, so I don't see any overlap between our work. Please go ahead
> with your Selenium-based JBI Upload test case development. Once I am
> done with the framework development, I will tweak your test cases so
> they execute through the framework. Once this is done, I will also
> start automating the test cases.
>
> Thanks,
>
> Swarna Mishra
>
>