Abstract
This document describes the step-by-step procedure followed to upgrade a database from Oracle 9iR2 to 10gR2. The basis of the upgrade is a clone built from a hot backup of the 9iR2 database.
Case History
The database to be upgraded is about 185 GB in size. The activity was first tested on a development system to validate the approach before proceeding with the production environment.
Analysis
Summary
A hot backup of the 9iR2 database was used to create the database under 10gR2 by recreating the controlfile.
Detailed Analysis
1. A password file was first created in the 10gR2 $ORACLE_HOME/dbs for this instance using the orapwd utility. The pfile from the 9i database was used to start the instance, with only the compatible parameter changed to 10.2.0. No other changes were made to the init.ora file, since the deprecated and obsolete parameters are listed in the alert log once the database is brought up and can be removed later. The instance was then started.
SQL> startup nomount
This step can also be executed as shown below. This was an observation from the alert log when the database later failed to open with "ORA-39700: database must be opened with UPGRADE option", preceded by "ORA-00704: bootstrap process failure".
SQL> startup upgrade
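For reference, a minimal sketch of the password file creation and the pfile change described in this step (the instance name TEN_DB, the entries value, and the password placeholder are illustrative, not taken from the original environment):
$ orapwd file=$ORACLE_HOME/dbs/orapwTEN_DB password=<sys_password> entries=5
# only this line of the copied 9i init.ora was changed
compatible = '10.2.0'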
2. The next step is to run the controlfile recreation script. This is a text script that was dumped to a trace file on the production (source) database and then edited to reflect the file locations on the development server.
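For reference, the trace script is generated on the source 9i database with the statement below; the resulting trace file is written to the user_dump_dest directory, from which the CREATE CONTROLFILE section was extracted into control_9_to_10.sql before the file paths were edited:
SQL> alter database backup controlfile to trace;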
SQL> !cat control_9_to_10.sql
CREATE CONTROLFILE SET DATABASE "TEN_DB" RESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 '/home/oracle/dev/log1a.rdo' SIZE 200M,
GROUP 2 '/home/oracle/dev/log2a.rdo' SIZE 200M,
GROUP 3 '/home/oracle/dev/log3a.rdo' SIZE 200M,
GROUP 4 '/home/oracle/dev/log4a.rdo' SIZE 200M
DATAFILE
'/home/oracle/dev/data01.dbf',
'/home/oracle/dev/data02.dbf',
'/home/oracle/dev/data03.dbf',
'/home/oracle/dev/users.dbf',
'/home/oracle/dev/tools.dbf',
'/home/oracle/dev/system01.dbf',
'/home/oracle/dev/undotbs01.dbf',
'/home/oracle/dev/undotbs02.dbf',
'/home/oracle/dev/undotbs03.dbf',
'/home/oracle/dev/index01.dbf',
'/home/oracle/dev/index02.dbf',
'/home/oracle/dev/index04.dbf',
'/home/oracle/dev/index05.dbf',
'/home/oracle/dev/index06.dbf',
'/home/oracle/dev/index03.dbf',
'/home/oracle/dev/index07.dbf',
'/home/oracle/dev/data04.dbf',
'/home/oracle/dev/data05.dbf',
'/home/oracle/dev/data06.dbf',
'/home/oracle/dev/data07.dbf',
'/home/oracle/dev/data08.dbf',
'/home/oracle/dev/data09.dbf',
'/home/oracle/dev/data10.dbf',
'/home/oracle/dev/data11.dbf',
'/home/oracle/dev/data12.dbf',
'/home/oracle/dev/data13.dbf',
'/home/oracle/dev/data14.dbf',
'/home/oracle/dev/data15.dbf',
'/home/oracle/dev/data16.dbf',
'/home/oracle/dev/data17.dbf'
CHARACTER SET US7ASCII;
SQL> @control_9_to_10.sql
Control file created.
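As a quick sanity check (not part of the original procedure), the datafiles registered in the new controlfile can be listed while the database is still mounted:
SQL> select name, status from v$datafile;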
3. The database was then recovered to the most recent point in time (up to the last available archived log). Because the controlfile was recreated, it was necessary to use the keywords "using backup controlfile" during recovery. The archived log file names entered at the "Specify log" prompts below are the archives that were applied for recovery.
SQL> recover database using backup controlfile until cancel;
ORA-00279: change 79118 generated at 08/06/2007 15:43:45 needed for thread 1
ORA-00289: suggestion :
/home/oracle/dev/arch/1_43_629467995.dbf
ORA-00280: change 79118 for thread 1 is in sequence #43
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/home/oracle/dev/arch/1_43_629467995.dbf
ORA-00279: change 79120 generated at 08/06/2007 15:43:51 needed for thread 1
ORA-00289: suggestion :
/home/oracle/dev/arch/1_44_629467995.dbf
ORA-00280: change 79120 for thread 1 is in sequence #44
ORA-00278: log file '/home/oracle/dev/arch/1_43.dbf' no
longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/home/oracle/dev/arch/1_44_629467995.dbf
ORA-00279: change 79122 generated at 08/06/2007 15:43:51 needed for thread 1
ORA-00289: suggestion :
/home/oracle/dev/arch/1_45_629467995.dbf
ORA-00280: change 79122 for thread 1 is in sequence #45
ORA-00278: log file '/home/oracle/dev/arch/1_44.dbf' no
longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.
SQL>
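Before opening the database, the consistency of the datafiles can also be cross-checked from their headers (a supplementary check, not part of the original procedure; FUZZY should be NO for every file once sufficient redo has been applied):
SQL> select file#, checkpoint_change#, fuzzy from v$datafile_header;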
4. This was the most daunting step, because a normal "alter database open resetlogs" reported "ORA-39700: database must be opened with UPGRADE option" in the alert log and "ORA-01092: ORACLE instance terminated. Disconnection forced" at the SQL prompt. It was realized at this point that "startup upgrade" can also be used to start the instance. To address the ORA-39700 error in the alert log, the statement shown below was executed and, somewhat surprisingly, it worked.
SQL> alter database open resetlogs upgrade;
Database altered.
5. Since 9i has no concept of a SYSAUX tablespace, while it is one of the essential tablespaces in 10g, a SYSAUX tablespace was created manually as shown below.
SQL> create tablespace SYSAUX datafile '/home/oracle/dev/sysaux.dbf' size 500M online permanent extent management local segment space management auto;
Tablespace created.
6. A temporary tablespace was created, since it was not included in the controlfile recreation script, and was made the default temporary tablespace.
SQL> create temporary tablespace temp tempfile
'/home/oracle/dev/temp01.dbf' size 8000M;
Tablespace created.
SQL> alter database default temporary tablespace temp;
Database altered.
7. Now, when trying to query the DBA or V$ views, the database would not respond. When the session was interrupted with the traditional UNIX interrupt keys (Ctrl+C), an ORA-600 error was reported at the SQL prompt, as shown.
SQL> select file_name from dba_data_files;
ORA-00600: internal error code, arguments: [17069], [0xC00000002B8EDDA0], [], [], [], [], [], []
To start over, the database was bounced, but it would not come up with a normal "startup" and only accepted "startup upgrade". At this point it was learned from the Metalink discussion threads that, when upgrading from 9i to 10g, the data dictionary needs to be synchronized with the version of the database. The Oracle server compares the database version with the data dictionary every time it opens the database, and until they match it allows only a "startup upgrade" rather than a normal "startup".
Another Metalink discussion thread suggested that the catupgrd.sql script needed to be executed from the 10gR2 $ORACLE_HOME/rdbms/admin directory to achieve this synchronization. The script ran for a while and completed successfully.
SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql
The following post-upgrade status script was then executed, and it too completed successfully.
SQL> @$ORACLE_HOME/rdbms/admin/utlu102s.sql
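As an additional check (not part of the original procedure), the versions and status of the upgraded components can also be confirmed from the component registry:
SQL> select comp_name, version, status from dba_registry;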
The database was bounced yet again, and this time it did come up with a normal "startup". The script utlrp.sql was executed a couple of times to recompile all the invalid objects, as sketched below. Because creating a database by recreating the controlfile leaves the global_name of the old database unchanged, the global name was then changed to reflect the new name; the existing value is shown in the query following the sketch.
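A minimal sketch of the recompilation and a follow-up check on the remaining invalid objects (the invalid-object count will vary from database to database):
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
SQL> select count(*) from dba_objects where status = 'INVALID';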
SQL> select * from global_name;
GLOBAL_NAME
-----------------------------------------------------------
NINE_DB.TEST.COM
The global_name was changed by running the following statement, which marked the successful completion of the upgrade from Oracle 9iR2 to 10gR2.
SQL> alter database rename global_name to TEN_DB.TEST.COM;
Note: If the upgrade is from Oracle 9iR2 to 10gR1, the synchronization is done by executing u0902000.sql, and the upgrade status is checked by executing utlu101s.sql, both from the 10gR1 $ORACLE_HOME/rdbms/admin directory. (Metalink Note: 263809.1)
10gR1 $ORACLE_HOME/rdbms/admin/u0902000.sql --- to upgrade to 10.1.0 (its 10gR2 counterpart is catupgrd.sql)
10gR1 $ORACLE_HOME/rdbms/admin/utlu101s.sql --- to check the upgrade status for 10.1.0 (its 10gR2 counterpart is utlu102s.sql)
Conclusion
The simple concept of cloning a database proved to be a vital step in upgrading the Oracle server version: it not only reduces the downtime for the upgrade but is also achievable with minimal overhead.
Author: Nawaz Ahmed M.A.S., System Engineer, GAVS Information Services, India. (Metalink Note id: 455990.1)