Wednesday, May 24, 2006

Oracle Expert Tuning Secrets
by Donald K. Burleson

The Oracle database currently dominates the market for database software. Through the releases of Oracle8, Oracle8i, and Oracle9i, Oracle has evolved into one of the world's most sophisticated database solutions. The challenge for IT professionals is to ensure that they can leverage Oracle's powerful features to improve the productivity of their organizations. One of the most effective ways to do this is through Oracle tuning.
Over the past ten years Oracle has evolved into one of the world's most sophisticated databases. As such, there is a mind-boggling array of tuning parameters and techniques, all designed to improve the performance of your Oracle database.
Oracle tuning is a phenomenally complex subject. Entire books have been written about the nuances of Oracle tuning. However, there are some general goals that every Oracle DBA should follow in order to improve the performance of their systems.
During this overview we will briefly cover the following Oracle topics:
-- External tuning - We must remember that Oracle does not run in a vacuum. Here we will look at tuning the Oracle server for high performance.
-- Row re-sequencing to reduce disk I/O - We must understand how reducing I/O is the most important goal of Oracle tuning.
-- Oracle SQL tuning - SQL tuning is one of the most important areas of Oracle tuning, and it is not uncommon to dramatically improve the performance of a SQL statement by using a few simple SQL tuning rules.
-- Tuning Oracle sorting - Sorting is a small but very important component of Oracle performance.
-- Tuning Oracle contention - The settings for table and index parameters have a huge impact on UPDATE and INSERT performance.
We always start by tuning the Oracle external environment. No amount of Oracle tuning is going to help if the server has a shortage of RAM or CPU resources. Our WISE tool is the easiest way to analyze Oracle performance, and WISE allows you to spot hidden performance trends.
External performance issues
Oracle does not run in a vacuum. The performance of your Oracle database depends heavily on external considerations. These external considerations include:
CPU - A shortage of available CPU cycles can slow down SQL. Whenever the run queue exceeds the number of CPUs on your Oracle server, you are CPU bound.
RAM - The amount of RAM available to Oracle can affect the performance of SQL, especially in the data buffers and in-memory sorts.
Network - Large amounts of Net8 traffic contribute to slow SQL performance.
Many tuning beginners make the mistake of attempting to tune the Oracle database before ensuring that the external environment is not stressed. No amount of Oracle tuning is going to help the performance of the database when an external bottleneck exists.
There are two simple things to monitor when you are checking the external Oracle environment:
1 - Run queue waits - When the number of tasks waiting in the run queue exceeds the number of CPUs on your server, the server is experiencing a CPU shortage. The remedy is to add CPUs to the server or turn off high-processing components such as Oracle Parallel Query.
2 - RAM page-ins - When you see RAM page-in operations, your existing RAM has been exceeded, and memory pages are moving in from the swap space on disk. The remedy is to add more RAM, reduce the size of the Oracle SGA, or turn on Oracle's multi-threaded server.
You can view server stats in a variety of ways using standard server tools such as vmstat, glance, top and sar. Your goal is to ensure that your database server always has enough CPU and RAM resources to manage the Oracle requests.
Next let's look at how Oracle row-resequencing can dramatically reduce disk I/O.
Row re-sequencing
As we noted, experienced Oracle DBAs know that I/O is the single greatest component of response time and regularly work to reduce it. Disk I/O is expensive because when Oracle retrieves a block from a data file on disk, the reading process must wait for the physical I/O operation to complete. Disk operations are about 10,000 times slower than accessing a row in the data buffers. Consequently, anything you can do to minimize I/O, or reduce bottlenecks caused by contention for files on disk, can greatly improve the performance of any Oracle database.
If response times are lagging in your high-transaction system, reducing disk I/O is the best way to bring about quick improvement. And when you access tables in a transaction system exclusively through range scans in primary-key indexes, reorganizing the tables with the CTAS method should be one of the first strategies you use to reduce I/O. By physically sequencing the rows in the same order as the primary-key index, this method can considerably speed up data retrieval.
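For example, here is a minimal sketch of the CTAS approach, assuming a hypothetical employee table accessed by range scans on last_name (all table and index names are illustrative):

-- Rebuild the table with rows physically ordered by the index key
create table employee_reorg
as
   select *
     from employee
    order by last_name;

-- Swap the reorganized copy in for the original
rename employee to employee_old;
rename employee_reorg to employee;

-- Re-create the indexes, constraints, and grants on the new table here

Because CTAS writes a brand-new segment, the swap also compacts the table and resets the high-water mark as a side effect.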
Like disk load balancing, row resequencing is easy, inexpensive, and relatively quick. With both techniques in your DBA bag of tricks, you'll be well equipped to shorten response times, often dramatically, in high-I/O systems.
In high-volume online transaction processing (OLTP) environments in which data is accessed via a primary index, resequencing table rows so that contiguous blocks follow the same order as their primary index can actually reduce physical I/O and improve response time during index-driven table queries. This technique is useful only when the application selects multiple rows, when using index range scans, or if the application issues multiple requests for consecutive keys. Databases with random primary-key unique accesses won't benefit from row resequencing.
Let's explore how this works. Consider a SQL query that retrieves 100 rows using an index:

select salary
  from employee
 where last_name like 'B%';
This query will traverse the last_name_index and fetch each employee row it points to. The query will incur at least 100 physical disk reads because the employee rows reside on different data blocks.
Now let's examine the same query where the rows are re-sequenced into the same order as the last_name_index. We see that the query can read all 100 employees with only three disk I/Os (one for the index, and two for the data blocks), resulting in a saving of over 97 block reads.
The degree to which resequencing improves performance depends on how far out of sequence the rows are when you begin and how many rows you will be accessing in sequence. You can find out how well a table's rows match the index's sequence key by looking at the dba_indexes and dba_tables views in the data dictionary.
In the dba_indexes view, we look at the clustering_factor column. If the clustering factor (an integer) roughly matches the number of blocks in the table, your table is in sequence with the index order. However, if the clustering factor is close to the number of rows in the table, the rows in the table are out of sequence with the index.
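As a quick check, a query like the following sketch compares each index's clustering_factor against the table's block and row counts (the EMPLOYEE table name is illustrative; blocks and num_rows are only populated after statistics have been gathered):

select i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows
  from dba_indexes i,
       dba_tables  t
 where i.table_name  = t.table_name
   and i.table_owner = t.owner
   and t.table_name  = 'EMPLOYEE';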
The benefits of row resequencing should not be underestimated. In large, active tables with a large number of index scans, row resequencing can triple the performance of queries.
Once you have decided to re-sequence the rows in a table, you can use one of the following tools to reorganize the table.
-- Copy the table using Oracle's Create Table As Select (CTAS) syntax
-- Use the Oracle9i in-place table reorganization tool

Next, let's take a look at SQL tuning.

SQL tuning
Oracle SQL tuning is a phenomenally complex subject, and entire books have been devoted to its nuances. However, there are some general guidelines that every Oracle DBA should follow in order to improve the performance of their systems. The goals of SQL tuning are simple:
Remove unnecessary large-table full table scans - Unnecessary full table scans cause a huge amount of needless I/O and can drag down an entire database. The tuning expert first evaluates the SQL based on the number of rows returned by the query. If the query returns less than 40 percent of the table rows in an ordered table, or less than 7 percent of the rows in an unordered table, the query can be tuned to use an index in lieu of the full table scan. The most common tuning for unnecessary full table scans is adding indexes. Standard B-tree indexes can be added to tables, and bitmapped and function-based indexes can also eliminate full table scans. The decision to remove a full table scan should be based on a careful examination of the I/O costs of the index scan versus the costs of the full table scan, factoring in multiblock reads and possible parallel execution. In some cases an unnecessary full table scan can be forced to use an index by adding an index hint to the SQL statement (see the sketch after this list).

Cache small-table full table scans - In cases where a full table scan is the fastest access method, the tuning professional should ensure that a dedicated data buffer is available for the rows. In Oracle7 you can issue alter table xxx cache. In Oracle8 and beyond, the small table can be cached by forcing it into the KEEP pool.

Verify optimal index usage - This is especially important for improving the speed of queries. Oracle sometimes has a choice of indexes, and the tuning professional must examine each index and ensure that Oracle is using the proper one. This also includes the use of bitmapped and function-based indexes.

Verify optimal JOIN techniques - Some queries will perform faster with NESTED LOOP joins, others with HASH joins, while others favor sort-merge joins. These goals may seem deceptively simple, but these tasks comprise 90 percent of SQL tuning, and they don't require a thorough understanding of the internals of Oracle SQL.
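As an illustration, here is a brief sketch of the two most common fixes named above; the table, index, and hint names are hypothetical:

-- Force an index in place of a large-table full table scan
select /*+ index(e emp_lname_idx) */
       last_name, salary
  from employee e
 where last_name like 'B%';

-- Cache a small lookup table in the KEEP pool (Oracle8 and beyond)
alter table state_lookup storage (buffer_pool keep);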
Next, let's briefly explore Oracle sorting and see how sort operations affect performance.
Tuning Oracle sorting operations
As a small but very important component of SQL syntax, sorting is a frequently overlooked aspect of Oracle tuning. An Oracle database will automatically perform sorting operations on row data as requested by a create index statement or a SQL ORDER BY or GROUP BY clause. In general, Oracle sorting occurs under the following circumstances:
SQL using the ORDER BY clause
SQL using the GROUP BY clause
When an index is created
When a MERGE SORT is invoked by the SQL optimizer because inadequate indexes exist for a table join

At the time a session is established with Oracle, a private sort area is allocated in RAM for use by the session for sorting. If the connection is via a dedicated connection, a Program Global Area (PGA) is allocated according to the sort_area_size init.ora parameter. For connections via the multithreaded server, sort space is allocated in the large_pool. Unfortunately, the amount of memory used in sorting must be the same for all sessions, and it is not possible to add additional sort area for tasks that require large sort operations. Therefore, the designer must strike a balance between allocating enough sort area to avoid disk sorts for the large sorting tasks, while keeping in mind that the extra sort area will be allocated and not used by tasks that do not require intensive sorting. Of course, sorts that cannot fit into the sort_area_size will be paged out into the TEMP tablespace for a disk sort. Disk sorts are about 14,000 times slower than memory sorts.
As we noted, the size of the private sort area is determined by the sort_area_size init.ora parameter. The amount of sort memory retained after a sort completes is governed by the sort_area_retained_size init.ora parameter. Whenever a sort cannot be completed within the assigned space, a disk sort is invoked using the temporary tablespace for the Oracle instance.
Disk sorts are expensive for several reasons. First, they are extremely slow when compared to an in-memory sort. Also, a disk sort consumes resources in the temporary tablespace. Oracle must also allocate buffer pool blocks to hold the blocks in the temporary tablespace. In-memory sorts are always preferable to disk sorts, and disk sorts will surely slow down an individual task as well as impact concurrent tasks on the Oracle instance. Also, excessive disk sorting will cause a high value for free buffer waits, paging other tasks' data blocks out of the buffer.
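As a quick health check, a sketch like the following (assuming you can query v$sysstat) compares memory sorts with disk sorts since instance startup:

select name, value
  from v$sysstat
 where name in ('sorts (memory)', 'sorts (disk)', 'sorts (rows)');

-- sort_area_size can also be raised for a single session before a
-- sort-intensive job; the 10 MB value here is purely illustrative
alter session set sort_area_size = 10485760;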
Next, let's take a quick look at Oracle contention and see how the table storage settings affect the performance of SQL UPDATE and INSERT statements.

Tuning Oracle contention
One of the benefits of having Oracle is that it manages all of the free space within each tablespace. Oracle handles table and index space management for us and insulates us from the inner workings of the Oracle tables and indexes. However, experienced Oracle tuning professionals need to understand how Oracle manages table extents and free data blocks. This is a very important tuning consideration for systems that have high inserts or updates.
To be proficient at object tuning, you need to understand the behavior of freelists and freelist groups, and their relationship to the values of the pctfree and pctused parameters. This knowledge is especially imperative for enterprise resource planning (ERP) applications where poor DML table performance is often directly related to improper table settings.
The most common mistake for the beginner is assuming that the default Oracle parameters are optimal for all objects. Unless disk consumption is not a concern, you must consider the average row length and database block size when setting pctfree and pctused for a table, so that empty blocks are efficiently placed back onto the freelists. When these settings are wrong, Oracle may populate freelists with "dead" blocks that do not have enough room to store a row, causing significant processing delays.

Freelists are critical to the effective reuse of space within the Oracle tablespaces and are directly related to the pctfree and pctused storage parameters. When the database is directed to make blocks available as soon as possible (with a high setting of pctused), the reuse of free space is maximized. However, there is a direct trade-off between high performance and efficient reuse of table blocks. When tuning Oracle tables and indexes, you need to consciously decide whether you want high performance or efficient space reuse, and set the table parameters accordingly. Let's take a close look at how these freelists affect the performance of Oracle.
Whenever a request is made to insert a row into a table, Oracle goes to a freelist to find a block with enough space to accept a row. As you may know, the freelist chain is kept in the first block of the table or index, and this block is known as the segment header. The sole purpose of the pctfree and pctused table allocation parameters is to control the movement of blocks to and from the freelists. While the freelist link and unlink operations are simple Oracle functions, the settings for freelist link (pctused) and unlink (pctfree) operations can have a dramatic impact on the performance of Oracle.
As you may know from DBA basics, the pctfree parameter governs freelist un-links. Setting pctfree=10 means that every block reserves 10 percent of the space for row expansion. The pctused parameter governs freelist re-links. Setting pctused=40 means that a block must become less than 40-percent full before being re-linked on the table freelist.
Many neophytes misunderstand what happens when a block is re-added to the freelist. Once a block is re-linked onto the freelist after a delete, it will remain on the freelist even when the space usage again exceeds 60 percent. Only filling to the pctfree threshold will take the database block off of the freelist.
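For instance, here is a minimal sketch of setting and adjusting these parameters on a hypothetical customer table:

create table customer
( cust_id   number,
  cust_name varchar2(100)
)
pctfree 10   -- unlink from the freelist when only 10 percent free space remains
pctused 40;  -- relink to the freelist when the block drops below 40 percent full

-- both values can be changed at any time
alter table customer pctfree 5 pctused 60;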
Summary of Table and Index Storage Parameter Rules

The following rules govern the settings for the storage parameters freelists, freelist groups, pctfree, and pctused. As you know, the values of pctused and pctfree can easily be changed at any time with the alter table command, and the observant DBA should be able to develop a methodology for deciding the optimal settings for these parameters.
There is a direct trade-off between effective space utilization and high performance, and the table storage parameters control this trade-off:

For efficient space reuse - A high value for pctused will effectively reuse space on data blocks, but at the expense of additional I/O. A high pctused means that relatively full blocks are placed on the freelist. Hence, these blocks will be able to accept only a few rows before becoming full again, leading to more I/O.

For high performance - A low value for pctused means that Oracle will not place a data block onto the freelist until it is nearly empty. The block will be able to accept many rows until it becomes full, thereby reducing I/O at insert time. Remember that it is always faster for Oracle to extend into new blocks than to reuse existing blocks. It takes fewer resources for Oracle to extend a table than to manage freelists.

Let's review the general guidelines for setting of object storage parameters:
Always set pctused to allow enough room to accept a new row. We never want to have free blocks that do not have enough room to accept a row. If we do, this will cause a slowdown since Oracle will attempt to read five "dead" free blocks before extending the table to get an empty block.
The presence of migrated/chained rows in a table means that pctfree is too low or that db_block_size is too small. In most cases within Oracle, RAW and LONG RAW columns make huge rows that exceed the maximum block size for Oracle, making migrated/chained rows unavoidable.
If a table has simultaneous insert SQL processes, it needs to have simultaneous delete processes. Running a single purge job will place all of the free blocks on only one freelist, and none of the other freelists will contain any free blocks from the purge.
The freelists parameter should be set to the high-water mark of concurrent updates to a table. For example, if the customer table has up to 20 end users performing insert operations at any time, the customer table should have freelists=20. Note that the freelist groups parameter applies only to Oracle Parallel Server and Real Application Clusters. For these configurations, freelist groups should be set to the number of Oracle Parallel Server instances that access the table.
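A minimal sketch of applying this rule, again using the hypothetical customer table:

-- 20 concurrent inserting sessions at peak; freelists can be changed
-- with alter table in Oracle8i and later releases
alter table customer storage ( freelists 20 );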
Conclusion
In this brief paper there has not been enough room to completely cover all of the expert tips that relate to Oracle tuning. For more complete information, see my Oracle Press books:
Oracle High-performance SQL Tuning
Oracle9i High-performance Tuning with STATSPACK

Database Update 9.2.0.3 Linux

Setup Procedure for a Linux Installation.

Follow the procedure below to run the Oracle patch on the Linux platform:

1) Stop all components individually before the Oracle upgrade patch is applied.

2) Log in as the "oracle" user on the database server and check that the environment variables ORACLE_HOME and ORACLE_SID are set.

3) Insert and mount the operating system dependent CD. From the root shell, mount the CD:
root> mount /media/cdrom


4) Open two shells: one root shell (the "root shell") and another shell logged in as the oracle user (the "oracle shell").

5) In the oracle shell, make a directory named patch:
usr/oracle> mkdir patch

6) Check the CD from the root shell:
root> cd /media/cdrom
You are now inside the cdrom directory.
root/media/cdrom> ls (give this command to list the contents of the directory)
You will see two zip files. Copy these zip files from the CD to the patch folder in the oracle home by giving the following command:
root/media/cdrom> cp *.zip /usr/oracle/patch
Now the zip files are copied inside the patch directory.

7) Now go to the oracle shell and change to the patch directory:
usr/oracle> cd patch
usr/oracle/patch> ls (give this command and you will see the copied zip files)
Unzip the files:
usr/oracle/patch> unzip '*.zip' (quoting the pattern makes unzip extract each zip file in turn)
Then give the ls command and you will see one tar file and one jar file.

Untar the file:
usr/oracle/patch> tar -xvf *.tar

Unjar the file:
usr/oracle/patch> jar -xvf *.jar
Change permissions as follows:
usr/oracle/patch> chmod -R +x Disk1

After giving these commands, give the ls command and check the patch directory; you will find a Disk1 directory created inside it. Keep a note of it.

The above steps are very important, so perform them carefully or you will face problems during installation.


8) Shut down the Oracle Server instance with immediate priority from the oracle shell as follows:

usr/oracle> sqlplus /nolog
SQL> connect / as sysdba
SQL> shutdown immediate;
SQL> exit;

Stop the listener component from the root shell as follows:
root> lsnrctl stop

Important Note
There should be no Oracle processes left running. If this is a fresh Oracle
installation, the dbsnmp process may still be active even with the database
shut down. Terminate this process before proceeding to the next step.

9) Now, in the oracle shell, go into the patch folder:
/usr/oracle/patch> cd Disk1
/usr/oracle/patch/Disk1> cd install
/usr/oracle/patch/Disk1/install> cd linux
Now run the export command:
/usr/oracle/patch/Disk1/install/linux> export DISPLAY=root:0.0

Now run the installer
/usr/oracle/patch/Disk1/install/linux> runInstaller

Now you will see a GUI interface open on the screen.

10) Follow the steps given below within the installer:
a. On the Welcome screen, click the Next button.
»This will display the File Locations screen.

b. Click the Browse button for the Source... entry field and navigate to the stage directory where you unpacked the Patch Set tar file.
c. Select the products.jar file.
d. Click OK, then click the Next button
»The products to be loaded will be displayed.

For Solaris: select "Oracle 9i R2 Patch Set 9.2.0.3.0",
For Linux: select "Oracle 9i R2 Patch Set 2 9.2.0.3.0".

If the screen in front of you does not show the above options:
Then you don't have to worry; some additional steps apply to you. Click Next and then click the Install button, and follow the extra steps described at the end of this document (under "SOME MORE STEPS").

If your screen shows the above options, continue:
e. Click the Next button.
»The products file will be read and the installer will load the product definitions.
f. Verify the products listed and then click on the Install button.

11). Open a new session as a “root” user for the next step.

12) If your screen did not offer the option of installing 9.2.0.3, skip this step; the system is not going to ask you to run the root.sh command.
When the installer prompts you to run root.sh, run $ORACLE_HOME/root.sh from the above "root" session.
On a Linux system you will be asked to provide a path, take the default given:
/usr/oracle/bin.
The system may ask whether you want to overwrite the bin folder; answer yes (Y).

Once the installation has completed successfully, it will display "End of Installation".

13). Click on Exit and confirm to exit the installer.

14). Exit the root session.
15). To complete the install of this patch set, you should start up the database as an “oracle” user. Change directory to $ORACLE_HOME and run the following commands/scripts as follows:

usr/oracle> sqlplus /nolog

SQL> connect / as sysdba
SQL> startup migrate
SQL> spool patch.log
SQL> @rdbms/admin/catpatch.sql
SQL> spool off

16). Review the patch.log file for errors and re-run the catpatch script after correcting any problems:

SQL> shutdown
SQL> startup
SQL> @rdbms/admin/utlrp.sql
SQL> shutdown

Important: On a Linux system you will be asked to provide a path; take the default given:
/usr/local/bin

SQL> startup
17). Start the listener component as follows:

usr/oracle> lsnrctl start

18). Restart all components individually.



SOME MORE STEPS:

Here the installation is complete for some, but not for those who did not get the option of installing 9.2.0.3.

You have to shut down the database and the listener. Then log in as the oracle user and change to the install directory:
/usr/oracle> cd install
Then run the installer
/usr/oracle/install> runInstaller

Again you will see the GUI interface. Proceed as before, but when selecting the path, click the Browse button and select:
/usr/oracle/patch/stage/products.jar
There are two stage directories: the first is in the patch directory and the other is in the Disk1 directory. Previously you selected the Disk1 stage directory; now you have to select the patch stage directory.
Then follow the same procedure from Step 11.

To maintain a history of user logins

How can I maintain a history of user logins?


Is there a quick easy way of finding out who’s been logging on to the database and how much work they’ve done?

If you want to get some idea of which users spend most time and consume most resources on the system, you don’t necessarily have to do anything subtle and devious to find out what’s been happening. There has been a (simple) audit trail built into the database for as long as I can remember. (The 8.1 – 10.1 in the banner simply covers the fact that I’ve only recently checked the following comments against those versions)

The init.ora file contains an audit_trail parameter. This can take the values true, false, none, os, or db (the true/false options are for backwards compatibility). If you set the value to db (or true), then you have enabled auditing in the database. Once you have restarted the database (the parameter is not modifiable online), you can decide what events you want to audit. For a quick, cheap audit of connections, connect as a suitably privileged account (which typically means as a DBA), and issue the command:

audit create session;

If you need to turn this audit off, the corresponding command is:

noaudit create session;

The older syntax for the same level of audit is:

audit connect; noaudit connect;

With this level of audit turned on, every session that logs on (except the SYS sessions) will insert a row into the table sys.aud$ giving various details of who they are and what time they connected. When the session ends, its last action is to update this row with various session-related details, such as log-off time, and the amount of work done. To make the results more readable, Oracle has superimposed the view dba_audit_session on top of the aud$ table; the 9.2 version of this view is as follows:

Name            Null?     Type
--------------- --------- -------------
OS_USERNAME               VARCHAR2(255)  -- who
USERNAME                  VARCHAR2(30)
USERHOST                  VARCHAR2(128)  -- where
TERMINAL                  VARCHAR2(255)
TIMESTAMP       NOT NULL  DATE           -- logon date/time
ACTION_NAME               VARCHAR2(27)
LOGOFF_TIME               DATE           -- logoff date/time
LOGOFF_LREAD              NUMBER         -- v$sess_io.consistent_gets
LOGOFF_PREAD              NUMBER         -- v$sess_io.physical_reads
LOGOFF_LWRITE             NUMBER         -- v$sess_io.block_changes
LOGOFF_DLOCK              VARCHAR2(40)   -- number of deadlocks
SESSIONID       NOT NULL  NUMBER
RETURNCODE      NOT NULL  NUMBER
CLIENT_ID                 VARCHAR2(64)
SESSION_CPU               NUMBER         -- session statistic: CPU used by this session

As you can see, there is quite a lot of helpful information here – perhaps good enough for most monitoring purposes. It is also a very light-weight tool, as it requires just one insert on logon, and an index access to update one row on log off.
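For example, a quick sketch of a report against this view (the column choices and the seven-day window are illustrative):

select username,
       os_username,
       to_char(timestamp,   'DD-MON-YYYY HH24:MI:SS') logon_time,
       to_char(logoff_time, 'DD-MON-YYYY HH24:MI:SS') logoff_time,
       logoff_lread,
       session_cpu
  from dba_audit_session
 where timestamp > sysdate - 7
 order by timestamp;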

There are a couple of administrative points. The aud$ table is in the system tablespace and is the one table in the sys schema that you are told you can delete data from. You may want to run a regular purge job to delete data that is more than N days old from this table.
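A minimal purge sketch, assuming the 9.2 layout of aud$ where the timestamp# column holds the logon time (the 30-day window is illustrative):

delete from sys.aud$
 where timestamp# < sysdate - 30;

commit;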

You might consider moving this table to a separate tablespace, but there have been reports of problems with media recovery if you do this (possibly because the recovery process tries to insert its own audit records and can't because the tablespace needs recovery), and it is not supported by Oracle.

Finally, if you are running a physical standby database and open it in read-only mode, you may find that you can't connect to it as anyone other than sys. Auditing requires an insert/update on aud$, which can't be allowed on a read-only database. You will have to remember to change the audit_trail parameter to none on your standby before you start it up.

Monitoring Alert file

I would like to monitor ORA- messages on a DB without having access to the filesystem (to the alert file...). Is there a view which allows that, or do you have an idea?

There is no view, but if you have an account with:

o select on v_$parameter
o select on v_$thread
o create any directory

(you could remove the dependencies on v$parameter and v$thread, but you would have to supply the alert log name to this routine) you could use a setup like the following:
drop table alert_log;

create global temporary table alert_log
( line int primary key,
  text varchar2(4000)
)
on commit preserve rows
/

create or replace procedure load_alert
as
    l_background_dump_dest v$parameter.value%type;
    l_filename             varchar2(255);
    l_bfile                bfile;
    l_last                 number;
    l_current              number;
    l_start                number := dbms_utility.get_time;
begin
    -- find the alert log directory and build the file name
    select a.value, 'alert_' || b.instance || '.log'
      into l_background_dump_dest, l_filename
      from v$parameter a, v$thread b
     where a.name = 'background_dump_dest';

    execute immediate
        'create or replace directory x$alert_log$x as ''' ||
        l_background_dump_dest || '''';

    dbms_output.put_line( l_background_dump_dest );
    dbms_output.put_line( l_filename );

    delete from alert_log;

    l_bfile := bfilename( 'X$ALERT_LOG$X', l_filename );
    dbms_lob.fileopen( l_bfile );

    l_last := 1;
    for l_line in 1 .. 50000
    loop
        -- progress indicator, visible in v$session.client_info
        dbms_application_info.set_client_info
        ( l_line || ', ' ||
          to_char( round( (dbms_utility.get_time-l_start)/100, 2 ) ) || ', ' ||
          to_char( (dbms_utility.get_time-l_start)/l_line ) );

        -- find the next newline (chr(10) = hex 0A)
        l_current := dbms_lob.instr( l_bfile, '0A', l_last, 1 );
        exit when (nvl(l_current,0) = 0);

        insert into alert_log( line, text )
        values ( l_line,
                 utl_raw.cast_to_varchar2(
                     dbms_lob.substr( l_bfile, l_current-l_last+1, l_last ) ) );

        l_last := l_current+1;
    end loop;

    dbms_lob.fileclose( l_bfile );
end;
/
It'll use a bfile to load up a temporary table with each line of your alert log. On my particular system, this loaded 50,000 lines in about 1.5 minutes, so it's not the speediest, but it works.
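A usage sketch (the ORA- filter is just one example of what you might look for):

exec load_alert

select line, text
  from alert_log
 where text like '%ORA-%'
 order by line;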
Enhancements:
- This should be a package. The package would remember where I left off byte-wise, so that I could call a refresh that would load just NEW lines instead of the entire file again. That way, if I kept a session open for a while, I could load it once and just refresh during the day.
- This should let me pass in some percentage of the file to load, e.g. load the last 10% of the alert log. That would make this apparently faster as well. You would just use dbms_lob.getlength to figure out what 10% of the file represents and start at that offset (l_last := 1 at the top of the loop would be changed).

Duplicating a Database using RMAN

By Michael Ritacco

Run the script below on the database you plan to duplicate to generate the new paths for the data files. This example assumes you have enough disk space on one mount point. However, you can easily change some of the mount points in vi to point to another location, or modify the script to suit your specific needs.

select 'set auxname for datafile ' || file# || ' to ' ||
       '''/ora/oradata/<SID>' || substr(name, instr(name, '/', -1)) || ''';'
  from v$datafile
/

(Replace <SID> with the name of the duplicate database.)

Create the RMAN script to perform the duplicate. The number of log groups must match the source database. You may increase or decrease the number of log members as needed. You must also insert your auxname commands before the RMAN run block. If you are not doing a point-in-time duplicate, you can remove the optional line "set until time ...".


(Angle-bracket placeholders such as <password>, <device>, and <dup_db> must be replaced with your own values; <device> is typically disk or 'sbt_tape'.)

connect target sys/<password>@<target_db>
connect auxiliary /
connect catalog rman/<password>@<catalog_db>

run {
-- Optional Line -- set until time '2001-12-18:08:30:00';
allocate auxiliary channel a0 type <device>;
allocate auxiliary channel a1 type <device>;
allocate auxiliary channel a2 type <device>;
allocate auxiliary channel a3 type <device>;
allocate auxiliary channel a4 type <device>;
allocate auxiliary channel a5 type <device>;
allocate auxiliary channel a6 type <device>;
allocate auxiliary channel a7 type <device>;
allocate auxiliary channel a8 type <device>;
allocate auxiliary channel a9 type <device>;
allocate auxiliary channel a10 type <device>;
allocate auxiliary channel a11 type <device>;
allocate auxiliary channel a12 type <device>;
allocate auxiliary channel a13 type <device>;
allocate auxiliary channel a14 type <device>;
allocate auxiliary channel a15 type <device>;
duplicate target database to <dup_db> logfile
group 1 ('/ora/oradata/<dup_db>/redo01_01.rdo') size 200M reuse,
group 2 ('/ora/oradata/<dup_db>/redo02_01.rdo') size 200M reuse,
group 3 ('/ora/oradata/<dup_db>/redo03_01.rdo') size 200M reuse,
group 4 ('/ora/oradata/<dup_db>/redo04_01.rdo') size 200M reuse;
}


Log into the server where you will be duplicating the database.

If you are using RMAN duplicate to refresh a TEST or development database, shut down the instance first if it is already running.

You will need to create or recreate the password file for the target database. You do not have to delete the data files of the existing database if one exists (applicable only to a refresh). It would be a good idea to do so, however, if you are not sure that the source and the target you are duplicating to have the same names and number of data files. If not, you will have unused data files on the duplicate database, wasting server resources.

STARTUP NOMOUNT the Target database.

Note: You must have an adequately sized SORT_AREA_SIZE for RMAN to do its querying against the auxiliary database, since it will not have any temp space available in NOMOUNT mode.

For the recovery part of the duplicate you will need the archive logs of the source database available. The archive logs can be copied over from the production machine to the target machine as long as they are placed in the exact same path as on the production machine. Otherwise, you must do an archive log backup via RMAN to either disk or tape. Remember that if you use a combination of disk and tape backups, allocate channels for both tape and disk in your script.
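As a sketch, an archive log backup to disk might look like this (the format string and backup path are illustrative):

run {
allocate channel d1 type disk;
backup archivelog all format '/ora/backup/arch_%U';
}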

If you are doing a duplicate from a point in time, you must set the NLS environment variables. If not, you can skip this step.

C Shell:
setenv NLS_LANG american
setenv NLS_DATE_FORMAT YYYY-MM-DD:HH24:MI:SS

Korn Shell:
export NLS_LANG=american
export NLS_DATE_FORMAT=YYYY-MM-DD:HH24:MI:SS

Run the completed duplicate script

$ rman @runscript

Note: If the duplicate fails due to a missing archive log file or any other reason, it is still possible to finish the recovery manually; however, the DBID will not be reset. Also, if you are duplicating a multimaster replicated production database, be sure to set job_queue_processes=0 to avoid pushing replication transactions to your production databases.

Make sure your global name is set correctly for your target database, because RMAN does not set this for you. It is also important to note that if you change the domain, you must rebuild any database links.

SQL> select * from global_name;
SQL> alter database rename global_name to <new_global_name>;

Remember to add any temporary files for any tablespaces that are using them. Change any passwords for the new database.
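For example, a sketch of re-adding a tempfile (the path and size are illustrative):

SQL> alter tablespace temp add tempfile '/ora/oradata/<SID>/temp01.dbf' size 500M;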

Sunday, May 07, 2006

Creation of Controlfile

CREATE CONTROLFILE command

PURPOSE:
To recreate a control file in one of these cases:

* All copies of your existing control files have been lost through
media failure.
* You want to change the name of the database.
* You want to change the maximum number of redo log file groups,
redo log file members, archived redo log files, data files, or
instances that can concurrently have the database mounted and
open.

Warning: Oracle Corporation recommends that you perform a full
backup of all files in the database before using this command.

SYNTAX:

CREATE CONTROLFILE [REUSE]
[SET] DATABASE database
LOGFILE [GROUP integer] filespec [, [GROUP integer] filespec] ...
{RESETLOGS | NORESETLOGS}
DATAFILE filespec [, filespec] ...
[MAXLOGFILES integer]
[MAXLOGMEMBERS integer]
[MAXLOGHISTORY integer]
[MAXDATAFILES integer]
[MAXINSTANCES integer]
[ARCHIVELOG | NOARCHIVELOG]

where:

REUSE
specifies that existing control files identified by the
initialization parameter CONTROL_FILES can be reused, thus ignoring
and overwriting any and all information they may currently contain.
If you omit this option and any of these control files already
exist, Oracle returns an error.

SET DATABASE
changes the name of the database. The name of a database can be as
long as eight bytes.

DATABASE
specifies the name of the database. The value of this parameter
must be the existing database name established by the previous
CREATE DATABASE statement or CREATE CONTROLFILE statement.

LOGFILE
specifies the redo log file groups for your database. You must list
all members of all redo log file groups. These files must all
exist.

RESETLOGS
ignores the contents of the files listed in the LOGFILE clause.
Each filespec in the LOGFILE clause must specify the SIZE parameter.
Oracle assigns all redo log file groups to thread 1 and enables this
thread for public use by any instance. After using this option, you
must open the database using the RESETLOGS option of the ALTER
DATABASE command.

NORESETLOGS
specifies that all files in the LOGFILE clause should be used as
they were when the database was last open. These files must be the
current redo log files rather than restored backups. Oracle
reassigns the redo log file groups to the threads to which they were
previously assigned and re-enables the threads as they were
previously enabled. If you specify GROUP values, Oracle verifies
these values with the GROUP values when the database was last open.

DATAFILE
specifies the data files of the database. You must list all data
files. These files must all exist, although they may be restored
backups that require media recovery.

MAXLOGFILES
specifies the maximum number of redo log file groups that can ever
be created for the database. Oracle uses this value to determine
how much space in the control file to allocate for the names of redo
log files. The default and maximum values depend on your operating
system. The value that you specify should not be less than the
greatest GROUP value for any redo log file group.

Note that the number of redo log file groups accessible to your
instance is also limited by the initialization parameter LOG_FILES.

MAXLOGMEMBERS
specifies the maximum number of members, or copies, for a redo log
file group. Oracle uses this value to determine how much space in
the control file to allocate for the names of redo log files. The
minimum value is 1. The maximum and default values depend on your
operating system.

MAXLOGHISTORY
specifies the maximum number of archived redo log file groups for
automatic media recovery of the Oracle Parallel Server. Oracle uses
this value to determine how much space in the control file to
allocate for the names of archived redo log files. The minimum
value is 0. The default value is a multiple of the MAXINSTANCES
value and varies depending on your operating system. The maximum
value is limited only by the maximum size of the control file. Note
that this parameter is only useful if you are using Oracle with the
Parallel Server option in both parallel mode and archivelog mode.

MAXDATAFILES
specifies the maximum number of data files that can ever be created
for the database. The minimum value is 1. The maximum and default
values depend on your operating system. The value you specify
should not be less than the total number of data files ever in the
database, including those for tablespaces that have been dropped.

Note that the number of data files accessible to your instance is
also limited by the initialization parameter DB_FILES.

MAXINSTANCES
specifies the maximum number of instances that can simultaneously
have the database mounted and open. This value takes precedence
over the value of the initialization parameter INSTANCES. The
minimum value is 1. The maximum and default values depend on your
operating system.

ARCHIVELOG
establishes the mode of archiving the contents of redo log files
before reusing them. This option prepares for the possibility of
media recovery as well as instance recovery.

NOARCHIVELOG
establishes the initial mode of reusing redo log files without
archiving their contents. This option prepares for the possibility
of instance recovery but not media recovery.

If you omit both the ARCHIVELOG and NOARCHIVELOG options, Oracle
chooses noarchivelog mode by default. After creating the control
file, you can change between archivelog mode and noarchivelog mode
with the ALTER DATABASE command.
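For illustration, here is a minimal sketch of the command following the syntax above, assuming a hypothetical database named PROD with two log groups and two data files (all names, sizes, and paths are illustrative):

CREATE CONTROLFILE REUSE DATABASE "PROD"
    LOGFILE GROUP 1 '/ora/oradata/PROD/redo01.log',
            GROUP 2 '/ora/oradata/PROD/redo02.log'
    NORESETLOGS
    DATAFILE '/ora/oradata/PROD/system01.dbf',
             '/ora/oradata/PROD/users01.dbf'
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 1
    NOARCHIVELOG;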

PREREQUISITES:
You must have the OSDBA role enabled. The database must not be
mounted by any instance.

If you are using Trusted Oracle in DBMS MAC mode, your operating
system label must be the equivalent of DBHIGH.