This chapter describes how the upgrade is executed interactively. The different steps available for selection on the ACTION tab should be executed in the given order.
Note: Some steps may be repeated without re-importing the customer dump. For further information, please refer to Appendix G: Repeatable Tasks.
The command script upg03_preaction_sql.cmd is executed. This script executes a set of SQL scripts, depending on the customer dump version. The log files created in this step in the upgrade/log directory have to be reviewed manually before proceeding with the upgrade.
Additionally, the DTV internal repository is taken over from the target reference dump into the customer dump using the SQL scripts grant_dtv.sql and takeover_dtv.sql.
A complete list of SQL/LOG files created in this step can be found in the Appendix.
This step upgrades the DataView repository and is split into two operations:
Creating files.
The tool compares the data sets from the different dumps and stores the changes in an XML file for each of the three possible operations:
Delete: dtv_del.xml
Insert: dtv_ins.xml
Update: dtv_upd.xml
The Upgrade Tool selects each row from the source and the target reference dumps, compares the data sets from both dumps to identify the differences, and checks whether the customer has modified this data. The upgrade action (Insert, Update, or Delete) is determined for each record, and the information is stored in a set of XML files. The migration rules are listed in the Appendix.
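Conceptually, the difference detection corresponds to set comparisons between the two reference dumps, which share the same table structure (see the note below). A rough SQL sketch - the schema names edb234upgsrc/edb234upgref and the table name are placeholders; the tool itself performs the comparison in Java and additionally checks for customer modifications:

-- Rows present only in the target reference dump: candidates for Insert (dtv_ins.xml)
SELECT * FROM edb234upgref.T_EXAMPLE
MINUS
SELECT * FROM edb234upgsrc.T_EXAMPLE;

-- Rows present only in the source reference dump: candidates for Delete (dtv_del.xml)
SELECT * FROM edb234upgsrc.T_EXAMPLE
MINUS
SELECT * FROM edb234upgref.T_EXAMPLE;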
Note: This step can take between 10 minutes and a few hours.
Performing operations.
The Upgrade Tool reads the XML files created during the previous operation and executes the corresponding SQL statements.
Note: This step can take about 15 to 40 minutes.
During this operation, an error log file data/dtv/dtv_err.xml is created; it should be checked for possible errors.
A dtv_customizing.log is also created in the directory data/dtv/. It contains the conflicts that occurred during the DTV upgrade. This log file must be reviewed.
Check log files in the directory upgrade\data\dtv.
If an error occurs, check the error description in upgrade\log\errordetails.log.
Note: The upgrade tool is only able to compare tables with the same table structure. Therefore, the DataView tables in the reference dumps (edb234upgref …) have an Agile e6 structure.
Note: For better readability, it is possible to create HTML files from the XML log files (see Convert XML files to HTML).
Executes the command script:
Windows | UNIX |
---|---
upg07_sync_update.cmd | upg07_sync_update.sh |
This script executes an SQL script (depending on the customer dump version).
Log files created in this step in the upgrade/log directory have to be reviewed manually before proceeding with the upgrade.
A complete list of SQL/LOG files created in this step can be found in the Appendix.
During this step, the table definition in DataView within the customer dump is compared to the physical table structure in the database. SQL statements to create and alter database objects are generated and executed automatically. The adaptation of the physical data structure consists of the following steps:
Analyze phase
In this phase, you can see which statements the program will perform. It also creates a control file named conf/special.xml for field values and special tasks.
Note: This step can take about 1 to 6 minutes.
Review and edit the control file conf/special.xml.
Synchronize phase
In this step, the database structure is converted according to the new DataView repository.
Note: This step can take between 1 minute and a few hours.
The data and log files are placed in the data\sync directory. A detailed description of the errors can be found in log\errordetail.log.
Note: If the program terminates the process and releases the connection because of a server error, delete the special.xml file in the conf/ directory, copy the preconfigured template from conf/template into the conf/ directory, and restart the process.
In this step, a proposal file special.xml is written to the conf/ directory. Error messages are written to the file <upg_root>\data\sync\sync_analysis.log. A list of error messages can be found in Appendix H.
The delivery package contains a preconfigured special.xml, which defines standard settings for all expected cases. Customer dumps often contain inconsistencies, so the tool adds further entries to the special.xml file in analyze mode. In this case, you need to review and adapt the following configuration subsets:
Note: Do not change or delete the default settings.
Static default values for columns changed from null to not null.
In the following example the column T_TRE_DAT.CUR_FLAG is set to 'n'.
<FieldDefault>
  <FieldName>T_TRE_DAT.CUR_FLAG</FieldName>
  <FieldType>S</FieldType>
  <FieldSize>1</FieldSize>
  <DefaultValue>
    <Value>n</Value>
  </DefaultValue>
</FieldDefault>
Dynamic default values: The field values can be computed dynamically based on a Java function or an SQL statement. Preconfigured functions are available that use the number server to set values. Here is an example of an SQL-generated and a Java-generated field default:
<FieldDefault>
  <FieldName>T_CTX_DAT.EDB_SEQ</FieldName>
  <FieldType>I</FieldType>
  <FieldSize>4</FieldSize>
  <DefaultValue>
    <Select>DISTINCT (SELECT COUNT(*) FROM T_CTX_DAT T WHERE T.C_ID <= thisRec.C_ID)*10</Select>
    <Where>C_ID > 0</Where>
  </DefaultValue>
</FieldDefault>

<FieldDefault>
  <FieldName>T_MASTER_DOC.EDB_ID</FieldName>
  <FieldType>I</FieldType>
  <FieldSize>10</FieldSize>
  <DefaultValue>
    <Function>GetNewEDBID(EDBEDBID)</Function>
  </DefaultValue>
</FieldDefault>
Example of renaming tables:
<RenameTable>
  <TableName>T_EER_SIT</TableName>
  <NewTableName>T_DDM_SIT</NewTableName>
</RenameTable>
<RenameTable>
  <TableName>T_EER_SIT_STR</TableName>
  <NewTableName>T_DDM_SIT_STR</NewTableName>
</RenameTable>
Move fields:
This option allows moving a table column, including its stored values, to a new location. To move a field, you have to specify:
Source field (<table_name>.<column_name>)
Target field (<table_name>.<column_name>)
Path (join condition between old and new table)
The following sample configurations show three different ways to move field values to a new location.
<!-- Example: transfer from a type table to an entity table. -->
<MoveField>
  <SourceField>T_DOC_DRW.CRE_USER</SourceField>
  <Path>T_DOC_DRW.C_ID_2</Path>
  <Path>T_DOC_DAT.C_ID</Path>
  <DestField>T_DOC_DAT.CAX_CRE_SYSTEM</DestField>
</MoveField>

<!-- Example: transfer from an entity table to an entity table via a relation table. -->
<MoveField>
  <SourceField>T_MASTER_DAT.PART_ID</SourceField>
  <Path>T_MASTER_DAT.C_ID</Path>
  <Path>T_MASTER_DOC.C_ID_1</Path>
  <Path>T_MASTER_DOC.C_ID_2</Path>
  <Path>T_DOC_DAT.C_ID</Path>
  <DestField>T_DOC_DAT.CAX_CRE_SYSTEM</DestField>
</MoveField>

<!-- Example: transfer within the same table. -->
<MoveField>
  <SourceField>T_MASTER_DAT.PART_ID</SourceField>
  <DestField>T_MASTER_DAT.EDB_ICON</DestField>
</MoveField>

An example configuration file special_move.xml containing a definition of moved fields is stored in the template directory conf/template/.
Change data type of a field.
The Upgrade Tool allows changing the type definition of a column, for example from integer to string. If the value of a column is null for all records, incompatible data type changes (e.g. string to integer) can also be executed. Shortening a string field is only possible if no record contains a longer value. Please check the maximum length of the stored values directly within SQL*Plus.
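For example, a minimal SQL*Plus check before confirming the change for T_DOC_DAT.FOO (the sample field from the snippet below, shortened from S80.0 to S40.0):

-- Longest stored value; shortening to 40 characters is only safe if this is <= 40
SELECT MAX(LENGTH(FOO)) FROM T_DOC_DAT;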
You have to replace "false" with "true" to confirm such critical changes. The type definition "oldType" comes from the database; "newType" is the DataView definition (stored in T_FIELD.C_FORMAT).
<FieldChange>
  <FieldName>T_DOC_DAT.FOO</FieldName>
  <ConfirmChange oldType="S80.0" newType="S40.0">false</ConfirmChange>
</FieldChange>
The command script upg15_convert_nvarchar2.cmd (Windows) / upg15_convert_nvarchar2.sh (UNIX) is executed.
The command script upg08_postaction is executed. This script executes a set of SQL scripts. Check the results in the log files named log/08_*.log. For a complete list of log files, please refer to the Appendix.
During this upgrade step, a valid reference to T_STA_LUT is established via the column EDB_STA_REF for each entry in T_CHK_STA. Additionally, EDB_LEVEL within the table T_STA_LUT is filled.
Note: The values of the table T_STA_LUT have to be reviewed after the customizing upgrade.
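A quick review sketch in SQL*Plus, based on the columns named above (adapt table and column names as needed):

-- T_CHK_STA entries that did not receive a status reference
SELECT COUNT(*) FROM T_CHK_STA WHERE EDB_STA_REF IS NULL;

-- Distribution of the newly filled level values in T_STA_LUT
SELECT EDB_LEVEL, COUNT(*) FROM T_STA_LUT GROUP BY EDB_LEVEL ORDER BY EDB_LEVEL;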
Additionally, a BVB_ARTIKEL cleanup step is executed for PLM 5.x and older versions. For more information, please refer to the section Cleanup BVB_ARTIKEL.
Note: Favorites from PLM 5.x and older versions are migrated in this upgrade step as well. For more information, please refer to the section Favorites Upgrade.
Note: The favorite migration can be performed after the first takeover execution because it needs standard browser entries, which are inserted during the step BRW-upgrade.
Finally, the T_PRC_DAT.EDB_PRO_REF column is filled during this upgrade step. For more information, please refer to the chapter "Project references in processes".
The command script upg09_common_get is executed. This script starts the SQL script to save and delete standard LogiView models. Check log files named log\09%.log.
During this step, the content of the common Oracle configuration tables is upgraded. The appropriate XML files are stored in the data/edb/ directory.
Click create files.
XML files edb_ins.xml, edb_del.xml, and edb_upd.xml are created. They can be reviewed before performing these actions on the database dump.
Click Perform delete, insert, update.
XML files created in the previous step are read and executed. Errors which occur during these actions are stored in edb_err.xml - please check this file.
Note: If an error occurs, check the error description in upgrade\log\errordetails.log.
During this step, the content of the browser configuration tables is upgraded. The appropriate XML files are stored in the data/brw/ directory.
Click create files.
XML files brw_ins.xml, brw_del.xml, and brw_upd.xml are created. They can be reviewed before performing these actions on the database dump.
Click Perform delete, insert, update.
XML files created in the previous step are read and executed. Errors which occur during these actions are stored in brw_err.xml - please check this file.
Note: If an error occurs, check the error description in upgrade\log\errordetails.log.
Several standard LogiView procedures change with every new Agile e-series software release. All LogiView logic models are deleted; models that were changed (i.e., that are not identical to the new ones) are saved first. Next, all new logic models are re-inserted.
Note: All LogiView changes made by the customer have to be reapplied after the upgrade.
Additional standard LogiView variables, constants, and system variables are upgraded in the usual manner.
The appropriate XML files are stored in the data/lgv/ directory.
Click create files.
XML files lgv_ins.xml, lgv_del.xml, and lgv_upd.xml are created. They can be reviewed before performing these actions on the database dump.
Click Perform delete, insert, update.
XML files created in the previous step are read and executed. Errors that occur during these actions are stored in lgv_err.xml - please check this file.
Note: If an error occurs, check the error description in upgrade\log\errordetails.log.
During this step, the content of the workflow configuration tables is upgraded. The appropriate XML files are stored in the data/wfl/ directory.
Click create files.
XML files wfl_ins.xml, wfl_del.xml, and wfl_upd.xml are created. They can be reviewed before performing these actions on the database dump.
Click Perform delete, insert, update.
XML files created in the previous step are read and executed. Errors that occur during these actions are stored in wfl_err.xml - please check this file.
Note: If an error occurs, check the error description in upgrade\log\errordetails.log.
Note: Since Agile version e6.0.4, the field T_ACT_HIS.PV_VALUE has been replaced by two new fields (T_ACT_HIS.PV_SL_VALUE and T_ACT_HIS.PV_ML_VALUE) in the history for workflow activities. Because the old data cannot be mapped exactly to the new schema, it is still stored in the existing old field. The old field is invisible, and the new fields T_ACT_HIS.PV_SL_VALUE (single-language value) and T_ACT_HIS.PV_ML_VALUE (multi-language value) are empty after the upgrade. If requested, the old field can be made visible. Since Agile e6.0.4 and Agile e6.1, respectively, new data is written only to the new fields.
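To see whether legacy values are still present after the upgrade, a simple sketch (the old column PV_VALUE is hidden but still stored):

-- Workflow history entries that still carry data in the old field
SELECT COUNT(*) FROM T_ACT_HIS WHERE PV_VALUE IS NOT NULL;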
During this step, the content of the change management configuration tables is upgraded. The appropriate XML files are stored in the data/chg/ directory.
Click create files.
XML files chg_ins.xml, chg_del.xml, and chg_upd.xml are created. They can be reviewed before performing these actions on the database dump.
Click Perform delete, insert, update.
XML files created in the previous step are read and executed. Errors that occur during these actions are stored in chg_err.xml - please check this file.
Note: If an error occurs, check the error description in upgrade\log\errordetails.log.
During this step, the content of the classification configuration tables is upgraded. The appropriate XML files are stored in the data/gtm/ directory.
Click create files.
XML files gtm_ins.xml, gtm_del.xml, and gtm_upd.xml are created. They can be reviewed before performing these actions on the database dump.
Click Perform delete, insert, update.
XML files created in the previous step are read and executed. Errors that occur during these actions are stored in gtm_err.xml - please check this file.
Note: If an error occurs, check the error description in upgrade\log\errordetails.log.
During this step, the content of the Office Suite configuration tables is upgraded. The appropriate XML files are stored in the data/gdm/ directory.
Click create files.
XML files gdm_ins.xml, gdm_del.xml, and gdm_upd.xml are created. They can be reviewed before performing these actions on the database dump.
Click Perform delete, insert, update.
XML files created in the previous step are read and executed. Errors that occur during these actions are stored in gdm_err.xml - please check this file.
Note: If an error occurs, check the error description in upgrade\log\errordetails.log.
During this step, the content of the RMT-module configuration tables is upgraded. The appropriate XML files are stored in the data/rmt/ directory.
Click create files.
XML files rmt_ins.xml, rmt_del.xml, and rmt_upd.xml are created. They can be reviewed before performing these actions on the database dump.
Click Perform delete, insert, update.
XML files created in the previous step are read and executed. Errors that occur during these actions are stored in rmt_err.xml - please check this file.
Note: If an error occurs, check the error description in upgrade\log\errordetails.log.
During this step, the configuration tables of the Advanced Structure Editor (ASE) are upgraded. The appropriate XML files are stored in the data/ase/ directory.
Click create files.
XML files ase_ins.xml, ase_del.xml, and ase_upd.xml are created. They can be reviewed before performing these actions on the database dump.
Click Perform delete, insert, update.
XML files created in the previous step are read and executed. Errors that occur during these actions are stored in ase_err.xml - please check this file.
Note: If an error occurs, check the error description in upgrade\log\errordetails.log.
During this step, the content of the CMG-module configuration tables is upgraded. The appropriate XML files are stored in the data/cmg/ directory.
Click create files.
XML files cmg_ins.xml, cmg_del.xml, and cmg_upd.xml are created. They can be reviewed before performing these actions on the database dump.
Click Perform delete, insert, update.
XML files created in the previous step are read and executed. Errors that occur during these actions are stored in cmg_err.xml - please check this file.
Note: If an error occurs, check the error description in upgrade\log\errordetails.log.
Note: If you are migrating from Agile e6.0 or newer, this step can be skipped in GUI mode.
The command script upg10_common_update is executed. This script executes an SQL script to migrate non-standard browser entries to the new Agile e6 browser, which is based on new database tables with the prefix T_EXP_*. Check the results in the log files named log/10_*.log. For a complete list of log files, please refer to the Appendix.
Note: Please check the log file full.log for the results of the LogiView comparison between the upgraded and the target reference dumps.
Note: If you are migrating from Agile e6.0 or newer, this step can be skipped.
With Agile e6.0, a new attribute inheritance concept was introduced. To convert existing class attributes, this upgrade step has to be executed.
This optional upgrade step allows a string replacement within table content.
The configuration file conf/specialreplace.xml contains an example definition for replacing a string with another string.
Example: Replace the string 'T_EER_SIT' with 'T_DDM_SIT' in LogiView procedures:
<?xml version="1.0" encoding="UTF-8"?>
<special>
  <replace>
    <table>LV_DT_PRC</table>
    <field>MAIN</field>
    <example>T_EER_SIT</example>
    <replacewith>T_DDM_SIT</replacewith>
  </replace>
</special>
Recreate customer-specific indexes, views, packages, procedures, triggers, and database constraints such as field defaults. Additionally, recreate any adapted standard indexes, views, packages, procedures, triggers, and database constraints.
Note: Since neither DataView nor the Upgrade Tool supports GLOBAL TEMPORARY tables, you have to remove the table definitions for global temporary tables from the DataView repository. Otherwise, errors will occur during the synchronize phase while creating indexes for global temporary tables. Such error messages must be ignored. The tables have to be adapted manually.
After the upgrade has been tested completely in interactive mode, it can be run in batch mode. Batch mode is intended for experienced upgrade users: many upgrade steps are compressed into a few batch scripts, which may need to be adapted manually.
The batch mode can be used in the following manner:
Do an upgrade in interactive mode.
Re-import customer dump.
Make a backup copy of your upgrade directory.
Clean the log directory by executing upg01_pre_cleanlog.cmd.
Run the following scripts:
upg03_preaction_sql.cmd
upg05_dtv_update.cmd
upg07_sync_update.cmd
upg08_postaction.cmd
upg10_common_update.cmd
upg11_cla.cmd
Check the log files.
Take over production data by executing upg13_prod1_takeover.cmd.
Run post-action scripts by executing upg14_prod2_rep_update.cmd.
Alternatively, the whole upgrade process may be performed in batch mode. UNIX scripts for batch mode execution (without interactive actions) are available, too; these scripts have the .sh extension (instead of .cmd on Windows).
Import all dumps.
Start start_upg.cmd.
The Create wallet screen is opened.
Manually create the wallet root directory under <UPG_TOOL_ROOT>/wallet and make it secure.
Click Re-check wallet root.
If the wallet root is accepted, you can create the wallet.
Click Create wallet.
After the wallet is created, start the Upgrade Tool in UI mode once to save the encrypted passwords in <UPG_TOOL_ROOT>/conf/ApplicationParameter.xml.
After the passwords are saved in <UPG_TOOL_ROOT>/conf/ApplicationParameter.xml, you can start the Upgrade Tool in batch mode as usual.
Configure the Upgrade Tool.
There is a sample script named batch_upgrade.cmd (batch_upgrade.sh on UNIX), which is a template for a complete dump upgrade; you can either run it directly or execute the steps below individually. All dumps have to be re-imported before performing the batch upgrade. Run the following scripts:
upg03_preaction_sql.cmd
upg04_dtv_get.cmd
upg05_dtv_update.cmd
upg06_sync_get.cmd
Adapt special.xml
Run the following scripts:
upg07_sync_update.cmd
upg08_postaction.cmd
upg09_common_get.cmd
upg10_common_update.cmd
upg11_cla.cmd
Check the log files.
Take over production data by executing upg13_prod1_takeover.cmd.
Run post-action scripts by executing upg14_prod2_rep_update.cmd.
You can convert XML files to HTML and view them with a browser. The command script xml2html.cmd creates HTML files for the insert, update, and delete operations. This function is available on Windows platforms only.
Execute xml2html.cmd with one of the following parameter values, which specify the modules for which HTML files are created:
All
HTML files for all modules are created.
Module name
HTML files are created only for the specified module; possible values are:
dtv edb brw dode lgv wfl chg gdm rmt gtm ase cmg
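For example, xml2html.cmd All creates HTML files for every module, while xml2html.cmd dtv creates them only for the DataView files (dtv_ins.xml, dtv_del.xml, dtv_upd.xml).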
Note: You need memory of approximately six times the XML file size! The batch job can run several hours. The Java Runtime Environment allocates 512 MB of memory. You can adjust the memory allocation by editing the file Upgrade\cmd\upg_env.cmd.
The next step is to transfer reference data from the production system. This phase is necessary because, during the upgrade and customizing of the new environment, new reference data has accumulated in the old production dump; this data, from all tables except the customized configuration tables, must now be taken over.
The takeover phase is separated into several tasks:
Generate a relevant table list using a database connection to the current production dump.
This database connection is used as source for the tables copied into the new environment. No changes are made in the production database.
Edit the list of tables, which should be copied during the takeover step.
Takeover: drop tables in the new dump and copy them from the current production dump.
Take over the latest number server and sequence values.
Make the dump up-to-date again by executing the step "Synchronize_repository" and the appropriate post-actions.
Perform a test in the new environment.
Test all functionality, for example during user training. If errors occur, remove them via customizing; not everything is done automatically. During the test period, a lot of new data is created in the production system. Therefore, the last two steps can be repeated until the new environment is error-free.
If testing does not raise any errors, you can switch the production database to the new one.
Take over your data and files from the production system once again, and the new environment becomes your production system.
Shut down your old system.
Select the folder Takeover in the Upgrade Tool.
Press the button Create Ref File.
The Upgrade Tool connects to the production database, identifies all tables, and synchronizes the information with the predefined list (ref_tables.xml). Only tables containing data are written to the file.
To adapt the list, press Edit ref. File.
For each table, you have to define whether it is a reference table (ref_data = y). Such reference tables are dropped in the customer dump and copied from the production system using the connection to the production DB.
Select OK to save the XML file (conf/ref_tables.xml).
Note: Because of a special BVB_ARTMEH upgrade functionality, the tables BVB_ARTIKEL, BVB_ARTMEH, and BVB_ARTMEHUFK have to be specified as production tables. Other BVB tables are treated as configuration tables. Otherwise, an error can occur during the execution of the SQL scripts artmeh_1.sql / artmeh_2.sql.
If you have created new users and/or groups since the date when the dump was exported from the production system, the following DataView tables must be migrated:
Note: New users like EDB-WFL, EDB-DFM, EDB-DDM, EDB-GDM, EDB-EER, and DODEKERNEL will get lost. Export these users first with the binary loader and reload them after the upgrade. If you do so, the table T_USER has to be upgraded manually, which means you need to apply all T_USER changes that are made in the SQL scripts ora/sql/dtvxxx-xxx.sql. Otherwise, you might not be able to log in to the application because of missing columns. The list of involved scripts can be found in log/testupgrade_03_preaction_sql.log.
T_USER
T_GROUP
T_GRP_USR
T_PROFILE
Related tables of the EDM person management
Note: The table T_DEFAULT should be migrated by the loader (import/overload); otherwise, new defaults will be missing.
Back up your customer Agile e6 dump.
Press the Takeover button.
The tables containing non-repository information are dropped in your customer dump and then copied from the defined production environment into your customer dump.
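Conceptually, the takeover of each production table corresponds to statements like the following (illustration only; the Upgrade Tool performs the copy itself over its two database connections, and the database link PROD_DB is hypothetical):

-- Drop the outdated table in the customer dump and refill it from production
DROP TABLE T_MASTER_DAT;
CREATE TABLE T_MASTER_DAT AS SELECT * FROM T_MASTER_DAT@PROD_DB;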
Note: Check the log file log/errordetails.log. After copying the production database tables, a special upgrade step - Takeover Workflow Masks - is executed automatically. This step is described later in this document. If the Upgrade Tool crashes with an out-of-memory exception while executing the takeover step on large tables (>1 GB), please increase the memory parameter in cmd/upg_env.cmd and/or scripts/upg_env.sh: JAVACALL="$JAVA_HOME/bin/java -Djava.endorsed.dirs=../lib/endorsed -Xmx1024m".
Since the number server is used during the step "Synchronize_repository", the newest values have to be transferred to the new environment. The same applies to the database sequences. This is done by executing the step "AFTER TAKEOVER: takeover number server values".
Number Server Values
Please check the log files 14_01_get_numvalue.log and 14_02_set_numvalue.log.
Note: If the production database is Oracle 8i, this step will fail, and the corresponding SQL scripts have to be executed manually afterwards, as described below:
Execute the script ..\ora\sql\get_numvalue.sql in the production environment.
It will create a new SQL script called set_numvalue.sql, which should be executed in the upgraded environment.
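In SQL*Plus, the manual execution looks like this (sketch; adjust the paths to your installation):

-- Connected to the production schema:
@..\ora\sql\get_numvalue.sql
-- Afterwards, connected to the upgraded schema:
@set_numvalue.sql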
Contact Oracle Support for assistance.
Database Sequences
Please check the log files 14_03_getseq.log, 14_04_dropseq.log, and 14_05_creseq.log.
Note: In case C_ID generation by database sequences is used, you have to check whether sequences are used for repository tables. This is because the script getseq.sql determines all sequences of the production database schema, and the current value of a sequence for a repository table can already be higher in the upgraded environment. If this is the case, the generated scripts have to be executed and cleaned up manually, as described below. Sequences for these tables should not be taken over.
Execute the script ..\ora\sql\getseq.sql in the production environment.
It will create new SQL scripts called dropseq.sql and creseq.sql.
Execute the scripts dropseq.sql and creseq.sql in the upgraded environment.
Note: Do not run them in the production environment.
Contact Oracle Support for assistance.
Tables just copied from the current production environment still have the old table structure and must be upgraded. This is done with the following steps:
Step "AFTER TAKEOVER: run before-sync-scripts"
The command script upg07_sync_update.cmd will be executed.
Check log files upg07%.log in the upgrade\log directory.
Step "AFTER TAKEOVER: Synchronize_repository"
Execute "Synchronize" and control log/errors.log, control log files in data\sync.
Convert NVARCHAR2 fields to VARCHAR2 fields.
Step "AFTER TAKEOVER > Run after-sync scripts"
Check log files log/upg15%.log. The complete list of SQL scripts is described in the Appendix.
Note: Since this step executes a post-action script that is also executed after the synchronize step, the log files log/upg08_*.log created in this step will be overwritten. Please make a backup copy if you need them.
Note: All LogiView changes made by customers to the standard must be applied again after the upgrade. The previously changed standard procedures can be found within the S<timestamp> LogiView models.
Note: If you are migrating from PLM 5.x or newer, this step must be skipped.
After the upgrade or productive takeover, you have to check the defaults "EDB-STP-DEF-NO-REF" and "EDB-STP-DEF-ORG-REF". During the upgrade to Agile e6, the values of these defaults were changed from "EP" to "NN".
Note: If the values of these defaults in your productive environment are already set to "NN", nothing needs to be done.
These defaults are used as field defaults in several fields (<TABLE>.STEP_NO_REF, <TABLE>.STEP_ORG_REF).
The Upgrade Tool provides a default script (ora\sql\upg_org_ref_default.sql), which changes all values of these fields in the standard tables from "EP" to "NN".
Please review the script according to your customizing. If you have not made any changes to your customizing regarding STEP_ORG_REF and STEP_NO_REF, you can run the script without changes.
If you do not run the script, duplicate PART_IDs can occur in T_MASTER_DAT, because these two fields are used in the unique key constraint of T_MASTER_DAT.
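For orientation, the statements in upg_org_ref_default.sql have the following form (an assumed sketch for T_MASTER_DAT only; review the delivered script for the complete table list):

-- Replace the old default value in both reference fields
UPDATE T_MASTER_DAT SET STEP_NO_REF = 'NN' WHERE STEP_NO_REF = 'EP';
UPDATE T_MASTER_DAT SET STEP_ORG_REF = 'NN' WHERE STEP_ORG_REF = 'EP';
COMMIT;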
If you have made customer-specific dump changes to standard database objects - views, packages, procedures, triggers, database constraints, field defaults, etc. - and these changes are not included in the DTV repository, you must apply them again to your new dump.
Please keep in mind that the standard database objects you changed may themselves have changed in the new release.
Note: Since neither DataView nor the Upgrade Tool supports GLOBAL TEMPORARY tables, you have to remove any index information from the DataView repository for such tables if they are defined there. Otherwise, errors will occur during the synchronize phase while creating indexes for global temporary tables. Such error messages must be ignored. The tables have to be adapted manually.