Tuesday, September 08, 2009

TNS-00516: Permission denied, Solaris Error: 13: Permission denied

Last week our Unix administrators moved the data for our Oracle database from a UFS filesystem to a ZFS filesystem on Solaris.
Once the move was complete, we tried to start the databases and listeners on the server. The databases started normally, but we ran into problems starting the listeners.


The Problem:
DBA TEST ZONE> lsnrctl start LISTENER_ORA8

LSNRCTL for Solaris: Version 8.0.6.0.0 - Production on 07-SEP-09 10:52:35

(c) Copyright 1999 Oracle Corporation. All rights reserved.

Starting /usr/app/oracle/product/8.0.6/bin/tnslsnr: please wait...

TNSLSNR for Solaris: Version 8.0.6.0.0 - Production
System parameter file is /usr/app/oracle/product/8.0.6/network/admin/listener.ora
Log messages written to /usr/app/oracle/product/8.0.6/network/log/listener_ora8.log
Attempted to listen on: (DESCRIPTION=(CONNECT_TIMEOUT=10)(ADDRESS=(PROTOCOL=IPC)(KEY=test8)))
TNS-12546: TNS:permission denied
TNS-12560: TNS:protocol adapter error
TNS-00516: Permission denied
Solaris Error: 13: Permission denied


The Fix:
The listener creates its IPC socket files in the hidden directory /tmp/.oracle (on this Solaris box, /var/tmp/.oracle), and the Unix permissions on that directory should be:
drwxrwxrwx

First check the current permissions on the .oracle directory:


DBA TEST ZONE> ls -lad /tmp/.oracle /var/tmp/.oracle
/tmp/.oracle: No such file or directory
drwxr-xr-x 2 root root 2 Aug 21 02:05 /var/tmp/.oracle

Give full permissions to this directory:
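
A minimal sketch of the change, assuming root access (the directory here is owned by root). In this case fixing /var/tmp/.oracle was enough; /tmp/.oracle would only need to be recreated and opened up the same way if the listener actually used that path:

# chmod 777 /var/tmp/.oracle

Then verify: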

DBA TEST ZONE> ls -lad /var/tmp/.oracle
drwxrwxrwx 2 root root 2 Aug 21 02:05 /var/tmp/.oracle


Now retry starting the listener:

DBA TEST ZONE> lsnrctl start LISTENER_ORA8

LSNRCTL for Solaris: Version 8.0.6.0.0 - Production on 07-SEP-09 10:57:49

(c) Copyright 1999 Oracle Corporation. All rights reserved.

Starting /usr/app/oracle/product/8.0.6/bin/tnslsnr: please wait...

TNSLSNR for Solaris: Version 8.0.6.0.0 - Production
System parameter file is /usr/app/oracle/product/8.0.6/network/admin/listener.ora
Log messages written to /usr/app/oracle/product/8.0.6/network/log/listener_ora8.log
Listening on: (ADDRESS=(PROTOCOL=ipc)(DEV=8)(KEY=test8))
Listening on: (ADDRESS=(PROTOCOL=tcp)(DEV=13)(HOST=10.52.51.61)(PORT=1526))

Connecting to (ADDRESS=(PROTOCOL=IPC)(KEY=test8))
STATUS of the LISTENER
------------------------
Alias LISTENER_ORA8
Version TNSLSNR for Solaris: Version 8.0.6.0.0 - Production
Start Date 07-SEP-09 10:57:52
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security OFF
SNMP OFF
Listener Parameter File /usr/app/oracle/product/8.0.6/network/admin/listener.ora
Listener Log File /usr/app/oracle/product/8.0.6/network/log/listener_ora8.log
Services Summary...
test8 has 1 service handler(s)
The command completed successfully

Monday, June 22, 2009

Delete files of a particular date.

Quick and efficient.


First, preview the rm commands that will be generated (run this from the archive directory):

$ cd /disk1/oradata/arch
$ ls -ltr | grep "May 23" | awk '{print "rm "$9}' | more

The above pipeline lists an rm command for every file dated May 23:

rm orcl_R676045126_T1_S29396.arc.gz
rm orcl_R676045126_T1_S29397.arc.gz
rm orcl_R676045126_T1_S29398.arc.gz
rm orcl_R676045126_T1_S29399.arc.gz
rm orcl_R676045126_T1_S29400.arc.gz
rm orcl_R676045126_T1_S29401.arc.gz
rm orcl_R676045126_T1_S29402.arc.gz
rm orcl_R676045126_T1_S29403.arc.gz
rm orcl_R676045126_T1_S29404.arc.gz

To actually delete the files, pipe the generated commands to the shell:

$ ls -ltr | grep "May 23" | awk '{print "rm "$9}' | sh -x

Use this carefully and check the preview twice before running it: sh -x executes the generated output, i.e. it deletes all the matching files without any prompt.
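
As an aside, GNU find can express the same idea without parsing ls output. A sketch, assuming GNU find is available (-newermt and -delete are GNU extensions and may not exist on older Solaris systems):

$ find /disk1/oradata/arch -type f -name '*.arc.gz' \
      -newermt '2009-05-23' ! -newermt '2009-05-24' -print

Replace -print with -delete once the listing looks right.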

Wednesday, April 22, 2009

Changing the archivelog destination online.

A couple of days back I received a high-priority ticket from the developers saying they were not able to connect to the database.

Hard-working people: they work on Saturdays and Sundays, and make us do the hard work too :(

They were getting the error ORA-00257: archiver error. Connect internal only, until freed.

I found the following error in the alert log file:

Sat Apr 11 05:20:00 2009
ARC0: Archiving not possible: No primary destinations
ARC0: Failed to archive thread 1 sequence 2029 (4)

When I checked the database for details, I saw the following:

SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
1 devdb2
sun-boxdb1
10.2.0.2.0 10-APR-09 OPEN NO 1 FAILED ARCHIVE LOG
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO


SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /mounts/devdb2_arch/devdb2
Oldest online log sequence 2029
Next log sequence to archive 2029
Current log sequence 2031

I thought the archive destination mount might be full, but was shocked to find that I could not even see the mount point.
Then I realized the mistake that, not I, but "they" had made the previous day when we moved the database to the new Sun box.

Our storage guys had forgotten to mount the archive mount point.
(I prefer separate mount points for Data, Arch and Temp :))

Don't ask me why I did not check this when I was bringing the database up... it was a Friday evening, guys...
What puzzled me was how the database came up at all without the archive destination: the instance opens normally, and only the archiver fails, once it actually tries to write an archived log to the missing path.
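
To see at a glance which destinations the archiver considers usable, v$archive_dest can be queried. A quick sketch, run from the shell assuming OS authentication as the oracle user (the ERROR column shows why a destination is failing):

$ sqlplus -s "/ as sysdba" <<'EOF'
select dest_name, status, destination, error
from   v$archive_dest
where  destination is not null;
EOF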

So for immediate resolution I had to change the archive destination myself (the day was Saturday, and storage guys available on a Saturday? No way, don't even look for them).

I did the following:

SQL> alter system set log_archive_dest_1='LOCATION=/mounts/devdb2_temp/devdb2/arch/';

System altered.

SQL> alter system archive log all;

System altered.

SQL> alter system switch logfile;

System altered.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /mounts/devdb2_temp/devdb2/arch/
Oldest online log sequence 2031
Next log sequence to archive 2033
Current log sequence 2033
SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
1 devdb2
sun-boxdb1
10.2.0.2.0 10-APR-09 OPEN NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO


SQL>

And you can see in the alert log file that the errors have cleared:

Sat Apr 11 05:20:03 2009
ARC1: Archiving not possible: No primary destinations
ARC1: Failed to archive thread 1 sequence 2029 (4)
Sat Apr 11 05:20:06 2009
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/mounts/devdb2_temp/devdb2/arch/' SCOPE=BOTH;
Sat Apr 11 05:20:07 2009
Archiver process freed from errors. No longer stopped
Sat Apr 11 05:20:09 2009
Thread 1 advanced to log sequence 2032
Current log# 3 seq# 2032 mem# 0: /mounts/devdb2_data/devdb2/redo03.log
Sat Apr 11 05:20:29 2009
Thread 1 advanced to log sequence 2033
Current log# 1 seq# 2033 mem# 0: /mounts/devdb2_data/devdb2/redo01.log


Happy endings!

Thursday, January 29, 2009

Monitoring Temp Tablespace Use

This query shows the current usage of the temporary tablespace.

This is very useful for monitoring when you are getting ORA-1652: unable to extend temp segment errors, as well as for keeping an eye on the space in your temp tablespace.

select sess.USERNAME, sess.sid, sess.OSUSER, su.segtype,
su.segfile#, su.segblk#, su.extents, su.blocks
from v$sort_usage su, v$session sess
where sess.sql_address=su.sqladdr and sess.sql_hash_value=su.sqlhash ;

You can compare the EXTENTS and BLOCKS columns above to the total available segments and blocks by querying V$TEMP_EXTENT_POOL.

select * from v$temp_extent_pool;

EXTENTS_CACHED is the total number of extents cached; following an ORA-1652, this is the maximum number of extents available. The EXTENTS_USED column shows the number of extents currently in use.

In a TEMPORARY temporary tablespace (i.e. one with a TEMPORARY datafile), all the space in the datafile is reserved for temporary sort segments. (In a well-managed database, this is of course true for non-TEMPORARY temp tablespaces as well.)

Extent management should be set to LOCAL UNIFORM with a size corresponding to SORT_AREA_SIZE. In that case, it's easy to see how many temp extents will fit in the temporary tablespace (replace TEMPORARY_DATA below with the name of your temp tablespace):

SELECT dtf.file_id, dtf.bytes/dt.INITIAL_EXTENT max_extents_allowed
from dba_temp_files dtf, dba_tablespaces dt
where dtf.tablespace_name='TEMPORARY_DATA' and
dtf.tablespace_name=dt.TABLESPACE_NAME;

To find the corresponding temp tablespace datafile names and the total bytes allocated:

select name,bytes/1024/1024 from v$tempfile;

To see the temp tablespace usage details (name, total, used and free blocks, converted to MB here assuming an 8 KB block size):

select tablespace_name,(total_blocks*8)/1024,(used_blocks*8)/1024,(free_blocks*8)/1024 from v$sort_segment;
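
To watch overall temp usage as a single percentage, the same view can be rolled up. A small sketch, run from the shell assuming OS authentication as the oracle user (nullif guards against a tablespace with no allocated blocks):

$ sqlplus -s "/ as sysdba" <<'EOF'
select tablespace_name,
       round(used_blocks / nullif(total_blocks, 0) * 100, 1) pct_used
from   v$sort_segment;
EOF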

Resizing Logfiles

Online redo log files cannot be resized in place. The best way to resize them is to create new logfile groups at the new size and then drop the old groups. Note that a group can only be dropped once it is INACTIVE (neither CURRENT nor ACTIVE), which is why log switches are used below to cycle the old groups out; the dropped members' files also remain on disk and must be removed at the OS level afterwards.


Example:

SYS> select group#,member from v$logfile;

GROUP# MEMBER
---------- ----------------------------------------

1 D:\DATABASE\TRN3\LOGTRN3_1A.LGF
1 F:\DATABASE\TRN3\LOGTRN3_1A.LGF
2 D:\DATABASE\TRN3\LOGTRN3_2A.LGF
2 F:\DATABASE\TRN3\LOGTRN3_2A.LGF
3 D:\DATABASE\TRN3\LOGTRN3_3A.LGF
3 F:\DATABASE\TRN3\LOGTRN3_3A.LGF

6 rows selected.

SYS> alter database add logfile group 4
2 ('d:\database\trn3\logtrn3_4a.lgf','f:\database\trn3\logtrn3_4b.lgf')
3* size 10240K
SYS> /

Database altered.

SYS> alter database add logfile group 5
2 ('d:\database\trn3\logtrn3_5a.lgf','f:\database\trn3\logtrn3_5b.lgf')
3* size 10240K
SYS> /

Database altered.

SYS> alter database add logfile group 6
2 ('d:\database\trn3\logtrn3_6a.lgf','f:\database\trn3\logtrn3_6b.lgf')
3* size 10240K
SYS> /

Database altered.

SYS> select group#, status from v$log;

GROUP# STATUS
---------- ----------------
1 CURRENT
2 INACTIVE
3 INACTIVE
4 UNUSED
5 UNUSED
6 UNUSED

6 rows selected.

SYS> alter database drop logfile group 2;

Database altered.

SYS> alter database drop logfile group 3;

Database altered.

SYS> alter system switch logfile;

System altered.

SYS> select group#, status from v$log;

GROUP# STATUS
---------- ----------------
1 ACTIVE
4 CURRENT
5 UNUSED
6 UNUSED

SYS> alter system switch logfile;

System altered.

SYS> select group#, status from v$log;

GROUP# STATUS
---------- ----------------
1 INACTIVE
4 ACTIVE
5 CURRENT
6 UNUSED

SYS> alter database drop logfile group 1;

Database altered.

SYS> select group#, status from v$log;

GROUP# STATUS
---------- ----------------
4 INACTIVE
5 CURRENT
6 UNUSED

Friday, January 16, 2009

Tarballs

To collect a bunch of files and directories together, use tar.

For example, to tar up your entire home directory and put the tarball into /tmp, do this

$ cd ~srees
$ cd .. # go one dir above dir you want to tar
$ tar cvf /tmp/srees.backup.tar srees

By convention, use .tar as the extension. To untar this file use

$ cd /tmp
$ tar xvf srees.backup.tar

tar untars things in the current directory!

After running the untar, you will find a new directory, /tmp/srees, that is a copy of your home directory.
Note that the way you tar things up dictates the directory structure when untarred.
The fact that I mentioned srees in the tar creation means that I'll have that dir when untarred.

In contrast, the following will also make a copy of my home directory, but without the srees root dir (note that * will not match hidden dot files):

$ cd ~srees
$ tar cvf /tmp/srees.backup.tar *

It is a good idea to tar things up with a root directory so that when you untar you don't generate a million files in the current directory.

To see what's in a tarball, use

$ tar tvf /tmp/srees.backup.tar

Most of the time you can save space by using the z argument.

The tarball will then be gzip'd and you should use file extension .tar.gz:

$ cd ~srees
$ cd .. # go one dir above dir you want to tar
$ tar cvfz /tmp/srees.backup.tar.gz srees

Unzipping requires the z argument also:

$ cd /tmp
$ tar xvfz srees.backup.tar.gz

If you have a big file to compress, use gzip:

$ gzip bigfile

After execution, your file will have been renamed bigfile.gz.

To uncompress, use

$ gzip -d bigfile.gz


To display a text file that is currently gzip'd, use zcat:

$ zcat bigfile.gz

nohup: Execute Commands After You Exit From a Shell Prompt

Most of the time you log in to a remote server via ssh. If you start a shell script or command and then exit (or the remote connection aborts), the process gets killed. Sometimes a job or command takes a long time; if you are not sure when it will finish, it is better to leave it running in the background. However, if you log out of the system, the job will be stopped. What do you do?

nohup command

The answer is simple: use the nohup utility, which lets a command, process, or shell script continue running in the background after you log out of the shell:


nohup Syntax:
nohup command-name &

Where,

command-name : the name of the command or shell script. You can pass arguments to the command or script as usual.

& : nohup does not automatically put the command it runs in the background; you must do that explicitly, by ending the command line with an & symbol.

nohup command example

Eg.
# nohup exp system/password@dbname file=exp_dbname_date.dmp log=exp_dbname_date.log full=y &

or

# vi exp_script.sh
exp system/password@dbname file=exp_dbname_date.dmp log=exp_dbname_date.log full=y
:wq

# ls -l exp_script.sh
Check whether the script has execute permissions; if not:
# chmod 755 exp_script.sh

# nohup ./exp_script.sh &

This ensures the job completes even if the session terminates... :-)

Checking the job's progress can be done using:

$ tail -f nohup.out

Press Ctrl+C to stop watching.
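
By default nohup appends the command's output to nohup.out in the current directory. A small sketch of redirecting it somewhere explicit instead (the log file name here is just an example):

# nohup ./exp_script.sh > exp_script.log 2>&1 &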

Some more good options for making sure jobs complete when sessions terminate:

Option 1:

You can keep your session alive with a simple infinite while loop:

#while true
>do
>echo " Exp going on"
>sleep 200
>done

Here you will not get the prompt back until you press Ctrl+C (^C); the loop keeps your session alive in the meantime.


Option 2:

You can queue exp_script.sh for execution one minute from now:

$ echo "exp_script.sh" | at now + 1 minute
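
A quick way to confirm the job was queued; atq lists the pending at jobs:

$ atq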


Hope this helps.

Calculate the export dump file size without actually exporting (Oracle 10g and above)

Many times DBAs need to know the size of the export dump file before starting the actual export of the database. This became easy with the Data Pump utility introduced in Oracle 10g.

Just give the following command:

expdp system full=y ESTIMATE_ONLY=Y NOLOGFILE=Y
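
By default the estimate is computed from block counts. If the table statistics are up to date, ESTIMATE=STATISTICS can base the figure on them instead; a sketch, with the connection and remaining parameters as above:

expdp system full=y ESTIMATE=STATISTICS ESTIMATE_ONLY=Y NOLOGFILE=Y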

For more options, run the following from your command prompt:

expdp help=y