Thursday, January 29, 2009

Monitoring Temp Tablespace Use

This query shows the current usage of the temporary tablespace.

This is very useful if you are getting ORA-1652 (unable to extend temp segment) errors, as well as for routine monitoring of space use in your temp tablespace.

select sess.USERNAME, sess.sid, sess.OSUSER, su.segtype,
su.segfile#, su.segblk#, su.extents, su.blocks
from v$sort_usage su, v$session sess
where sess.sql_address=su.sqladdr and sess.sql_hash_value=su.sqlhash ;
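A variation (just a sketch, using the SESSION_ADDR column of V$SORT_USAGE) joins on the session address instead of the SQL address; this also picks up sessions whose currently executing statement is not the one that allocated the sort segment:

select sess.username, sess.sid, sess.serial#, sess.osuser,
       su.tablespace, su.segtype, su.extents, su.blocks
from v$sort_usage su, v$session sess
where su.session_addr = sess.saddr ;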

You can compare the EXTENTS and BLOCKS columns above to the total available extents and blocks by querying V$TEMP_EXTENT_POOL.

select * from v$temp_extent_pool;

EXTENTS_CACHED is the total number of extents cached -- following an ORA-1652, this will be the maximum number of extents available. The EXTENTS_USED column shows the number of extents currently in use.
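For a quick per-tablespace summary, the same view can simply be aggregated (a minimal variation for reference):

select tablespace_name,
       sum(extents_cached) extents_cached,
       sum(extents_used) extents_used
from v$temp_extent_pool
group by tablespace_name;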

In a true temporary tablespace (i.e. one created with a tempfile rather than a permanent datafile), all the space in the file is reserved for temporary sort segments. (In a well-managed database, this is of course true of temp tablespaces built on permanent datafiles as well.)

Extent management should be set to LOCAL UNIFORM with a size corresponding to SORT_AREA_SIZE. In that case it is easy to see how many temp extents will fit in the temporary tablespace (called TEMPORARY_DATA in this example):

SELECT dtf.file_id, dtf.bytes/dt.INITIAL_EXTENT max_extents_allowed
from dba_temp_files dtf, dba_tablespaces dt
where dtf.tablespace_name='TEMPORARY_DATA' and
dtf.tablespace_name=dt.TABLESPACE_NAME;
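For reference, a minimal sketch of creating such a tablespace (the file path, file size and uniform extent size below are placeholders to adapt):

CREATE TEMPORARY TABLESPACE temporary_data
  TEMPFILE '/u01/oradata/ORCL/temporary_data01.dbf' SIZE 1024M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;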

To see the temp tablespace's tempfile names and the space allocated to each (in MB):

select name,bytes/1024/1024 from v$tempfile;
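The dictionary view DBA_TEMP_FILES shows the same tempfiles together with their autoextend settings (a simple variation for reference):

select file_name, bytes/1024/1024 mb, autoextensible, maxbytes/1024/1024 max_mb
from dba_temp_files;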

To see the exact temp tablespace details -- name, plus total, used and free space in MB (this query assumes an 8 KB block size):

select tablespace_name,
       (total_blocks*8)/1024 total_mb,
       (used_blocks*8)/1024 used_mb,
       (free_blocks*8)/1024 free_mb
from v$sort_segment;
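If your block size is not 8 KB, the same figures can be derived from the tablespace's actual block size (a sketch joining to DBA_TABLESPACES):

select ss.tablespace_name,
       ss.total_blocks*ts.block_size/1024/1024 total_mb,
       ss.used_blocks*ts.block_size/1024/1024 used_mb,
       ss.free_blocks*ts.block_size/1024/1024 free_mb
from v$sort_segment ss, dba_tablespaces ts
where ss.tablespace_name = ts.tablespace_name;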

Resizing Logfiles

The best way to resize online redo logfiles is to create new logfile groups at the new size, then drop the old groups once they are no longer CURRENT or ACTIVE. (Dropping a group does not remove its files from disk; delete them at the OS level afterwards.)


Example:

SYS> select group#,member from v$logfile;

GROUP# MEMBER
---------- ----------------------------------------

1 D:\DATABASE\TRN3\LOGTRN3_1A.LGF
1 F:\DATABASE\TRN3\LOGTRN3_1A.LGF
2 D:\DATABASE\TRN3\LOGTRN3_2A.LGF
2 F:\DATABASE\TRN3\LOGTRN3_2A.LGF
3 D:\DATABASE\TRN3\LOGTRN3_3A.LGF
3 F:\DATABASE\TRN3\LOGTRN3_3A.LGF

6 rows selected.

SYS> alter database add logfile group 4
2 ('d:\database\trn3\logtrn3_4a.lgf','f:\database\trn3\logtrn3_4b.lgf')
3* size 10240K
SYS> /

Database altered.

SYS> alter database add logfile group 5
2 ('d:\database\trn3\logtrn3_5a.lgf','f:\database\trn3\logtrn3_5b.lgf')
3* size 10240K
SYS> /

Database altered.

SYS> alter database add logfile group 6
2 ('d:\database\trn3\logtrn3_6a.lgf','f:\database\trn3\logtrn3_6b.lgf')
3* size 10240K
SYS> /

Database altered.

SYS> select group#, status from v$log;

GROUP# STATUS
---------- ----------------
1 CURRENT
2 INACTIVE
3 INACTIVE
4 UNUSED
5 UNUSED
6 UNUSED

6 rows selected.

SYS> alter database drop logfile group 2;

Database altered.

SYS> alter database drop logfile group 3;

Database altered.

SYS> alter system switch logfile;

System altered.

SYS> select group#, status from v$log;

GROUP# STATUS
---------- ----------------
1 ACTIVE
4 CURRENT
5 UNUSED
6 UNUSED

SYS> alter system switch logfile;

System altered.

SYS> select group#, status from v$log;

GROUP# STATUS
---------- ----------------
1 INACTIVE
4 ACTIVE
5 CURRENT
6 UNUSED

SYS> alter database drop logfile group 1;

Database altered.

SYS> select group#, status from v$log;

GROUP# STATUS
---------- ----------------
4 INACTIVE
5 CURRENT
6 UNUSED
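Note that a group still shown as ACTIVE (as group 1 is right after the first switch above) cannot be dropped yet; you can wait, switch again, or force a checkpoint, for example:

SYS> alter system checkpoint;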

Friday, January 16, 2009

Tarballs

To collect a bunch of files and directories together, use tar.

For example, to tar up your entire home directory and put the tarball into /tmp, do this

$ cd ~srees
$ cd .. # go one dir above dir you want to tar
$ tar cvf /tmp/srees.backup.tar srees

By convention, use .tar as the extension. To untar this file use

$ cd /tmp
$ tar xvf srees.backup.tar

tar untars things in the current directory!

After running the untar, you will find a new directory, /tmp/srees, that is a copy of your home directory.
Note that the way you tar things up dictates the directory structure when untarred.
The fact that I mentioned srees in the tar creation means that I'll have that dir when untarred.

In contrast, the following will also make a copy of my home directory, but without having a srees root dir:

$ cd ~srees
$ tar cvf /tmp/srees.backup.tar *

It is a good idea to tar things up with a root directory so that
when you untar you don't generate a million files in the current directory.
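If you do end up with a tarball that has no root directory, you can still keep things tidy by extracting into a fresh directory with tar's -C option (GNU tar shown; the target directory must exist first):

$ mkdir /tmp/restore
$ tar xvf /tmp/srees.backup.tar -C /tmp/restore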

To see what's in a tarball, use

$ tar tvf /tmp/srees.backup.tar

Most of the time you can save space by using the z argument.

The tarball will then be gzip'd and you should use file extension .tar.gz:

$ cd ~srees
$ cd .. # go one dir above dir you want to tar
$ tar cvfz /tmp/srees.backup.tar.gz srees

Unzipping requires the z argument also:

$ cd /tmp
$ tar xvfz srees.backup.tar.gz

If you have a big file to compress, use gzip:

$ gzip bigfile

After execution, your file will have been renamed bigfile.gz.

To uncompress, use

$ gzip -d bigfile.gz


To display a text file that is currently gzip'd, use zcat:

$ zcat bigfile.gz

nohup: Execute Commands After You Exit From a Shell Prompt

Most of the time you log in to a remote server via ssh. If you start a shell script or command and then exit (or the remote connection drops), the process will be killed. Sometimes a job or command takes a long time, and if you are not sure when it will finish it is better to leave it running in the background. However, if you log out of the system, the job will be stopped. What do you do?

nohup command

The answer is simple: use the nohup utility, which lets a command, process, or shell script continue running in the background after you log out from the shell:


nohup Syntax:
nohup command-name &

Where,

command-name : the name of the command or shell script; you can pass arguments to the command or script as usual.

& : nohup does not automatically put the command it runs in the background; you must do that explicitly, by ending the command line with an & symbol.

nohup command example

Eg.
# nohup exp system/password@dbname file=exp_dbname_date.dmp log=exp_dbname_date.log full=y &

or

# vi exp_script.sh
exp system/password@dbname file=exp_dbname_date.dmp log=exp_dbname_date.log full=y
:wq

# ls -l exp_script.sh
Check that the script has execute permission; if not, grant it:
# chmod 755 exp_script.sh

# nohup ./exp_script.sh &

This ensures the job runs to completion even if the session terminates... :-)

You can check the job's progress with:

tail -f nohup.out
Press Ctrl+C to stop following the output.
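To confirm the background process is still running, a quick process listing works (the grep pattern is just the script name used above):

# ps -ef | grep exp_script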

Some more options for making sure a job completes even when the session terminates:

Option 1:

You can keep your session alive with a simple infinite while loop:

# while true
> do
>   echo " Exp going on"
>   sleep 200
> done

You will not get your prompt back until you press Ctrl+C (^C); in the meantime, the loop keeps the session alive.


Option 2:

You can queue exp_script.sh for execution one minute later with at:

$ echo "exp_script.sh" | at now + 1 minute
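You can check that the job has been queued with atq, and remove it with atrm followed by the job number if you change your mind:

$ atq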


Hope this helps.

To calculate the export dump file size without actually exporting in Oracle 10G and above

DBAs often need to know how big the export dump file will be before starting the actual export. This has become easy with the Data Pump utility, available from Oracle version 10g onwards.

Just give the following command:

expdp system full=y ESTIMATE_ONLY=Y NOLOGFILE=Y
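Data Pump can also base the estimate on optimizer statistics rather than block counts; for example, for a single schema (SCOTT is only a placeholder schema name here):

expdp system schemas=SCOTT estimate=statistics estimate_only=y nologfile=y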

For more options, run the following from your command prompt:

expdp help=y