Tuesday, July 3, 2012

Identify locks

------/* Identify locks */------

select owner||'.'||object_name obj,
       oracle_username||' ('||s.status||')' oruser,
       os_user_name osuser,
       machine computer,
       l.process unix,
       s.sid||','||s.serial# ss,
       r.name rs,
       to_char(s.logon_time,'yyyy/mm/dd hh24:mi:ss') time
from   v$locked_object l, dba_objects o, v$session s, v$transaction t, v$rollname r
where  l.object_id = o.object_id
and    s.sid       = l.session_id
and    s.taddr     = t.addr
and    t.xidusn    = r.usn
order by osuser, ss, obj;


--------------------

--------------------/* Count blocking locks */--------------------

select decode(count(*), 0, 'There are no blocking locks.',
                        1, 'There is 1 blocking lock:',
                           'There are '||count(*)||' blocking locks:')
from   all_objects o, v$session sw, v$lock lw, v$session sh, v$lock lh
where  lh.id1 = o.object_id
and    lh.id1 = lw.id1
and    sh.sid = lh.sid
and    sw.sid = lw.sid
and    sh.lockwait is null
and    sw.lockwait is not null
and    lh.type = 'TM'
and    lw.type = 'TM';

------------------------------------

SELECT username, gv$lock.sid,
       TRUNC(id1/POWER(2,16)) rbs,
       BITAND(id1, TO_NUMBER('ffff','xxxx')) + 0 slot,
       id2 seq, lmode, request
FROM   gv$lock, gv$session
WHERE  gv$lock.type = 'TX'
AND    gv$lock.sid  = gv$session.sid;
----AND gv$session.username = 'CLEANSER';

SELECT xidusn, xidslot, xidsqn
FROM   gv$transaction;
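The rbs/slot/seq values decoded from the TX lock id1/id2 are the same values exposed as XIDUSN/XIDSLOT/XIDSQN in gv$transaction, so the two queries above can be combined into one join. A sketch (on RAC you would normally also match inst_id, which is omitted here for brevity):

SELECT s.username, l.sid, t.xidusn, t.xidslot, t.xidsqn, l.lmode, l.request
FROM   gv$lock l, gv$session s, gv$transaction t
WHERE  l.type = 'TX'
AND    l.sid  = s.sid
AND    TRUNC(l.id1/POWER(2,16))                = t.xidusn
AND    BITAND(l.id1, TO_NUMBER('ffff','xxxx')) = t.xidslot
AND    l.id2                                   = t.xidsqn;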

--------------------------------/********* ACTIVE TABLE LOCKS *********/
SELECT SUBSTR(a.object,1,25) tablename,
       SUBSTR(s.username,1,15) username,
       SUBSTR(p.pid,1,5) pid,
       SUBSTR(p.spid,1,10) system_id,
       DECODE(l.type, 'RT','Redo Log Buffer', 'TD','Dictionary', 'TM','DML',
                      'TS','Temp Segments', 'TX','Transaction', 'UL','User',
                      'RW','Row Wait', l.type) lock_type
FROM   gv$access a, gv$process p, gv$session s, gv$lock l
WHERE  s.sid   = a.sid
AND    s.paddr = p.addr
AND    l.sid   = s.sid   -- join the lock to the session id, not the process id
GROUP BY a.object, s.username, p.pid, l.type, p.spid
ORDER BY a.object, s.username;

----------------------------------------/****************** ACTIVE LOCKS ******************/
SELECT s.username, s.sid, s.serial#, s.osuser, k.ctime, o.object_name object, k.kaddr,
       DECODE(l.locked_mode, 1,'No Lock', 2,'Row Share', 3,'Row Exclusive',
                             4,'Shared Table', 5,'Shared Row Exclusive', 6,'Exclusive') locked_mode,
       DECODE(k.type,
              'BL','Buffer Cache Management (PCM lock)',
              'CF','Controlfile Transaction',
              'CI','Cross Instance Call',
              'CU','Bind Enqueue',
              'DF','Data File',
              'DL','Direct Loader',
              'DM','Database Mount',
              'DR','Distributed Recovery',
              'DX','Distributed Transaction',
              'FS','File Set',
              'IN','Instance Number',
              'IR','Instance Recovery',
              'IS','Instance State',
              'IV','Library Cache Invalidation',
              'JQ','Job Queue',
              'KK','Redo Log Kick',
              'LA','Library Cache Lock', 'LB','Library Cache Lock', 'LC','Library Cache Lock',
              'LD','Library Cache Lock', 'LE','Library Cache Lock', 'LF','Library Cache Lock',
              'LG','Library Cache Lock', 'LH','Library Cache Lock', 'LI','Library Cache Lock',
              'LJ','Library Cache Lock', 'LK','Library Cache Lock', 'LL','Library Cache Lock',
              'LM','Library Cache Lock', 'LN','Library Cache Lock', 'LO','Library Cache Lock',
              'LP','Library Cache Lock',
              'MM','Mount Definition',
              'MR','Media Recovery',
              'NA','Library Cache Pin', 'NB','Library Cache Pin', 'NC','Library Cache Pin',
              'ND','Library Cache Pin', 'NE','Library Cache Pin', 'NF','Library Cache Pin',
              'NG','Library Cache Pin', 'NH','Library Cache Pin', 'NI','Library Cache Pin',
              'NJ','Library Cache Pin', 'NK','Library Cache Pin', 'NL','Library Cache Pin',
              'NM','Library Cache Pin', 'NN','Library Cache Pin', 'NO','Library Cache Pin',
              'NP','Library Cache Pin', 'NQ','Library Cache Pin', 'NR','Library Cache Pin',
              'NS','Library Cache Pin', 'NT','Library Cache Pin', 'NU','Library Cache Pin',
              'NV','Library Cache Pin', 'NW','Library Cache Pin', 'NX','Library Cache Pin',
              'NY','Library Cache Pin', 'NZ','Library Cache Pin',
              'PF','Password File',
              'PI','Parallel Slaves',
              'PR','Process Startup',
              'PS','Parallel Slave Synchronization',
              'QA','Row Cache Lock', 'QB','Row Cache Lock', 'QC','Row Cache Lock',
              'QD','Row Cache Lock', 'QE','Row Cache Lock', 'QF','Row Cache Lock',
              'QG','Row Cache Lock', 'QH','Row Cache Lock', 'QI','Row Cache Lock',
              'QJ','Row Cache Lock', 'QK','Row Cache Lock', 'QL','Row Cache Lock',
              'QM','Row Cache Lock', 'QN','Row Cache Lock', 'QO','Row Cache Lock',
              'QP','Row Cache Lock', 'QQ','Row Cache Lock', 'QR','Row Cache Lock',
              'QS','Row Cache Lock', 'QT','Row Cache Lock', 'QU','Row Cache Lock',
              'QV','Row Cache Lock', 'QW','Row Cache Lock', 'QX','Row Cache Lock',
              'QY','Row Cache Lock', 'QZ','Row Cache Lock',
              'RT','Redo Thread',
              'SC','System Commit Number',
              'SM','SMON Synchronization',
              'SN','Sequence Number',
              'SQ','Sequence Enqueue',
              'SR','Synchronous Replication',
              'SS','Sort Segment',
              'ST','Space Management Transaction',
              'SV','Sequence Number Value',
              'TA','Transaction Recovery',
              'TM','DML Enqueue',
              'TS','Table Space (or Temporary Segment)',
              'TT','Temporary Table',
              'TX','Transaction',
              'UL','User-defined Locks',
              'UN','User Name',
              'US','Undo Segment Serialization',
              'WL','Writing Redo Log',
              'XA','Instance Attribute Lock',
              'XI','Instance Registration Lock') type
FROM   gv$session s, sys.gv$lock c, sys.gv$locked_object l,
       dba_objects o, sys.gv$lock k, gv$lock v
WHERE  o.object_id  = l.object_id
AND    l.session_id = s.sid
AND    k.sid    = s.sid
AND    s.saddr  = c.addr
AND    k.kaddr  = c.kaddr
AND    k.kaddr  = v.kaddr
AND    v.addr   = s.saddr
AND    k.lmode  = l.locked_mode
AND    k.lmode  = c.lmode
AND    k.request = c.request
ORDER BY object;

--------------------/******** LIST LOCKS ********/
SELECT session_id, lock_type, mode_held, mode_requested, blocking_others, lock_id1
FROM   dba_lock l
WHERE  lock_type NOT IN ('Media Recovery', 'Redo Thread');

--------------------------------/************* LOCKED OBJECTS *************/
SELECT oracle_username username, owner object_owner, object_name, object_type, s.osuser,
       DECODE(l.block, 0,'Not Blocking', 1,'Blocking', 2,'Global') status,
       DECODE(v.locked_mode, 0,'None', 1,'Null', 2,'Row-S (SS)', 3,'Row-X (SX)',
                             4,'Share', 5,'S/Row-X (SSX)', 6,'Exclusive', TO_CHAR(lmode)) mode_held
FROM   gv$locked_object v, dba_objects d, gv$lock l, gv$session s
WHERE  v.object_id  = d.object_id
AND    v.object_id  = l.id1
AND    v.session_id = s.sid
ORDER BY oracle_username, session_id;

------------(OR)

SELECT o.owner, o.object_name, o.object_type, o.last_ddl_time, o.status,
       l.session_id, l.oracle_username, l.locked_mode
FROM   dba_objects o, gv$locked_object l
WHERE  o.object_id = l.object_id;

----------(OR)

SELECT DECODE(request, 0, 'Holder: ', 'Waiter: ') || sid sess,
       id1, id2, lmode, request, type
FROM   gv$lock
WHERE  (id1, id2, type) IN (SELECT id1, id2, type FROM gv$lock WHERE request > 0)
ORDER BY id1, request;

--------------------------------/************* OBJECTS THAT HAVE BEEN LOCKED FOR 2 MINUTES OR MORE *************/
SELECT SUBSTR(TO_CHAR(w.session_id),1,5) wsid, p1.spid wpid,
       SUBSTR(s1.username,1,12) "WAITING User", SUBSTR(s1.osuser,1,8) "OS User",
       SUBSTR(s1.program,1,20) "WAITING Program", s1.client_info "WAITING Client",
       SUBSTR(TO_CHAR(h.session_id),1,5) hsid, p2.spid hpid,
       SUBSTR(s2.username,1,12) "HOLDING User", SUBSTR(s2.osuser,1,8) "OS User",
       SUBSTR(s2.program,1,20) "HOLDING Program", s2.client_info "HOLDING Client",
       o.object_name "HOLDING Object"
FROM   gv$process p1, gv$process p2, gv$session s1, gv$session s2,
       dba_locks w, dba_locks h, dba_objects o
WHERE  w.last_convert > 120          -- waiting for 120 seconds or more
AND    h.mode_held      != 'None'
AND    h.mode_held      != 'Null'
AND    w.mode_requested != 'None'
AND    s1.row_wait_obj# = o.object_id
AND    w.lock_type(+) = h.lock_type
AND    w.lock_id1(+)  = h.lock_id1
AND    w.lock_id2(+)  = h.lock_id2
AND    w.session_id = s1.sid(+)
AND    h.session_id = s2.sid(+)
AND    s1.paddr = p1.addr(+)
AND    s2.paddr = p2.addr(+)
ORDER BY w.last_convert DESC;

DISPLAYS ALL CONNECTED SESSIONS

----------/************    DISPLAYS ALL CONNECTED SESSIONS
select rpad(c.name||':',11)
       ||rpad(' current logons='||to_number(b.sessions_current),20)
       ||'cumulative logons='||rpad(substr(a.value,1,10),10)
       ||'highwater mark='||b.sessions_highwater information
from   v$sysstat a, v$license b, v$database c
where  a.name = 'logons cumulative';

select 'Sessions on database '||substr(name,1,8) from v$database;
--------------/************* DBNAME DATABASE | UNIX/ORACLE SESSIONS *************/
select a.spid pid, b.sid sid, b.serial# ser#, b.machine box,
       b.username username, b.osuser os_user, b.program program
------substr(a.spid,1,9) pid, substr(b.sid,1,5) sid, substr(b.serial#,1,5) ser#, substr(b.machine,1,6) box, substr(b.username,1,10) username, substr(b.osuser,1,8) os_user, substr(b.program,1,30) program
from   v$session b, v$process a
where  b.paddr = a.addr
and    type = 'USER'
and    b.username = 'CLEANSER'
order by spid;

----select 'To kill, enter SQLPLUS>  ALTER SYSTEM KILL SESSION '''||'SID, SER#'||''''||';' from dual;

------------/******** The following script can be used to quickly identify all locked objects in your Oracle system ********/
select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from   v$locked_object a, v$session b, dba_objects c
where  b.sid = a.session_id
and    a.object_id = c.object_id;

----------------------/*************** RELEASE LOCKS ***************/----
select distinct 'ALTER SYSTEM KILL SESSION '''||b.sid||','||b.serial#||''';'
from   v$locked_object a, v$session b, dba_objects c
where  b.sid = a.session_id
and    a.object_id = c.object_id;

Tracking Oracle database growth

----------------------/* Tracking Oracle database growth  */-----------------------
The following script can be executed once each week to collect table and index statistics. The ${host} and ${ORACLE_SID} substitutions assume the script is run from a shell wrapper that sets the oratab file location and passes the proper ORACLE_SID. On the first run, create the snapshot table:
create table stats$idx_stats as
   select
      SYSDATE                snap_time,
      lower('${host}')       host_name,
      lower('${ORACLE_SID}') db_name,
      i.tablespace_name,
      t.table_name,
      i.owner,
      i.index_name,
      i.clustering_factor,
      i.leaf_blocks,
      i.blevel,
      s.next_extent,
      s.extents,
      s.bytes
   from dba_indexes  i,
        dba_segments s,
        dba_tables   t
   where
      i.table_name = t.table_name
   and
      s.segment_name = i.index_name
   and
      s.tablespace_name = i.tablespace_name
   and
      s.owner = i.owner
   and
      i.owner not in ('SYS','SYSTEM')
   and
      t.num_rows > 1000;
----------------------/* ON EACH SUBSEQUENT RUN, INSERT A NEW SNAPSHOT */-----------------
insert into stats$idx_stats
   select
      SYSDATE,
      lower('${host}'),
      lower('${ORACLE_SID}'),
      i.tablespace_name,
      t.table_name,
      i.owner,
      i.index_name,
      i.clustering_factor,
      i.leaf_blocks,
      i.blevel,
      s.next_extent,
      s.extents,
      s.bytes
   from dba_indexes  i,
        dba_segments s,
        dba_tables   t
   where
      i.table_name = t.table_name
   and
      s.segment_name = i.index_name
   and
      s.tablespace_name = i.tablespace_name
   and
      s.owner = i.owner
   and
      i.owner not in ('SYS','SYSTEM')
   and
      t.num_rows > 1000;

select
   new.index_name,
   old.bytes                old_bytes,
   new.bytes                new_bytes,
   new.bytes - old.bytes    change
from
   stats$idx_stats old,
   stats$idx_stats new
where
   old.index_name = new.index_name
and
   new.bytes > old.bytes
and
   new.bytes - old.bytes > 10000
and
   to_char(new.snap_time, 'YYYY-MM-DD') =
      (select max(to_char(snap_time,'YYYY-MM-DD')) from stats$idx_stats)
and
   to_char(old.snap_time, 'YYYY-MM-DD') =
      (select max(to_char(snap_time,'YYYY-MM-DD'))    -- previous snapshot date
       from   stats$idx_stats
       where  to_char(snap_time,'YYYY-MM-DD') <
                 (select max(to_char(snap_time,'YYYY-MM-DD')) from stats$idx_stats))
and
   new.index_name not like 'STATS$%'
order by
   new.bytes - old.bytes desc;
----List object growth over the last N days, sorted by growth desc

select * from (select c.segment_name "Object Name", b.owner, b.created,
                      sum(space_used_delta) / 1024 / 1024 "Growth (MB)"
                from  dba_hist_snapshot sn,
                      dba_hist_seg_stat a,
                      dba_objects b,
                      dba_segments c
               where begin_interval_time > trunc(sysdate) - &days_back
                 and sn.snap_id = a.snap_id
                 and b.object_id = a.obj#
                 and b.owner = c.owner
                 and b.object_name = c.segment_name
               group by c.segment_name, b.owner, b.created)
 where owner not in ('SYSMAN','SYS')
 order by "Growth (MB)" desc;
------------Oracle Database Growth Report by Month and Year
SELECT TO_CHAR(creation_time, 'RRRR Month') "Month",
       SUM(bytes)/1024/1024/1024 "Growth in GB"
  FROM sys.v_$datafile
 WHERE creation_time > SYSDATE - 365
 GROUP BY TO_CHAR(creation_time, 'RRRR Month')
 ORDER BY MIN(creation_time);
Procedure for calculating database growth and scheduling it in DBMS_JOB:

1. Create a table named db_growth with the following columns:

Name                 Null?    Type
-------------------- -------- ------
DAY                           DATE
DATABASE_SIZE_MB              NUMBER
DAILY_GROWTH_MB               NUMBER

2. Create the procedure:

create or replace procedure database_growth
as
   today_size     number;
   yesterday_size number;
   growth_size    number;
   cnt            number;
begin
   select sum(bytes)/(1024*1024) into today_size from sm$ts_used;
   select count(1) into cnt from db_growth;
   if cnt > 0 then
      select database_size_mb into yesterday_size
      from   db_growth
      where  to_date(day,'dd-mon-yy') = to_date(sysdate - 1,'dd-mon-yy');
   else
      yesterday_size := today_size;
   end if;
   growth_size := today_size - yesterday_size;
   insert into db_growth values (sysdate, today_size, growth_size);
exception
   when no_data_found then
      insert into db_growth values (sysdate, today_size, 0);
      dbms_output.put_line(sqlerrm);
end;
/

3. Submit it in DBMS_JOB:

variable jobno number;
begin
   dbms_job.submit(
      :jobno,
      'database_growth;',
      trunc(sysdate + 1) + 4/24,
      'trunc(sysdate + 1) + 4/24'
   );
   commit;
end;
/

DATAPUMP IN ORACLE



For using DATAPUMP through DB CONSOLE

There are two new concepts in Oracle Data Pump that are different from original Export and Import.

Directory Objects

Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.

Interactive Command-Line Mode

Besides the regular operating system command-line mode, there is now a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.

Changing from Original Export/Import to Oracle Data Pump

Creating Directory Objects

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.
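To see which directory objects already exist (and the operating system paths they map to), a privileged user can query the data dictionary; a quick sketch:

    SQL> SELECT directory_name, directory_path FROM dba_directories;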

For example, the following SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

Create a directory.

    SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user scott in the directory named by dpump_dir1, you must execute the following command:

    SQL> GRANT READ,WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories. Once the directory access is granted, the user scott can export his database objects with command arguments:

    >expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

Comparison of Command-Line Parameters from Original Export and Import to Data Pump

Data Pump commands have a similar look and feel to the original Export and Import commands, but are different. Below are a few examples that demonstrate some of these differences.

1) Example import of tables from scott's account to jim's account

Original Import:

> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim TABLES=(*)

Data Pump Import:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES, and data

Original Export:

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

Data Pump Export:

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job.

Both parameters work with Data Pump Import as well, and you can use different INCLUDE and

EXCLUDE options for different operations on the same dump file.
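For instance (the dump file and filter choices here are illustrative, not from the original), the same dump file could be imported twice with different filters, first loading only the tables and then everything except optimizer statistics:

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp INCLUDE=TABLE

> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp EXCLUDE=STATISTICS TABLE_EXISTS_ACTION=SKIP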

3) Tuning Parameters

Unlike original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. The default initialization parameters should be sufficient.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:

> expdp username/password TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. (PARALLEL is only available in the Enterprise Edition of Oracle Database.)

The number of parallel processes can be changed on the fly using Data Pump’s interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren’t enough dump Files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

Example:

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

Example:

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

Example:

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par

Even More Advanced Features of Oracle Data Pump

Beyond the command-line and performance features of Oracle Data Pump are new capabilities that DBAs will find invaluable. A couple of prominent features are described here.

Interactive Command-Line Mode

You have much more control in monitoring and controlling Data Pump jobs with interactive command-line mode. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Here are some of the things you can do while in this mode:

    See the status of the job. All of the information needed to monitor the job's execution is available.
    Add more dump files if there is insufficient disk space for an export file.
    Change the default size of the dump files.
    Stop the job (perhaps it is consuming too many resources) and later restart it (when more resources become available).
    Restart the job. If a job was stopped for any reason (system failure, power outage), you can attach to the job and then restart it.
    Increase or decrease the number of active worker processes for the job. (Enterprise Edition only.)
    Attach to a job from a remote site (such as from home) to monitor status.
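A typical monitoring session might look like the following sketch (the job name hr is borrowed from the parallelism example above; it is an assumption, not a fixed name):

> expdp username/password ATTACH=hr

Export> STATUS          -- show the current job status and progress
Export> STOP_JOB        -- stop the job, keeping its master table for a later restart
> expdp username/password ATTACH=hr
Export> START_JOB       -- restart the stopped job
Export> PARALLEL=8      -- change the degree of parallelism on the fly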

Network Mode

Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk. This is very useful if you’re moving data between databases, like data marts to data warehouses, and disk space is not readily available. Note that if you are moving large volumes of data, Network mode is probably going to be slower than file mode. Network export creates the dump file set on the instance where the Data Pump job is running and extracts the metadata and data from the remote instance. Network export gives you the ability to export read-only databases. (Data Pump Export cannot run locally on a read-only instance because the job requires write operations on the instance.) This is useful when there is a need to export data from a standby database.
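As a sketch of network mode (the database link name source_db and its connect details are assumptions), an import that pulls a schema directly from a remote database without an intermediate dump file could look like:

SQL> CREATE DATABASE LINK source_db CONNECT TO scott IDENTIFIED BY password USING 'source_tns';

> impdp username/password NETWORK_LINK=source_db SCHEMAS=scott

Note that no DUMPFILE parameter is given: the data travels over the link and is loaded directly into the target database.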

Generating SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the SQL commands necessary to recreate tables and indexes that you could then edit to get a workable DDL script. With Data Pump, it’s a lot easier to get a workable DDL script. When you run Data Pump Import and specify the SQLFILE parameter, a text file is generated that has the necessary DDL (Data Definition Language) in it to recreate all object types, not just tables and indexes. Although this output file is ready for execution, the DDL statements are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring sqlfile output. For example, if you want to create a database that contains all the tables and indexes of the source database, but that does not include the same constraints, grants,and other metadata, you would issue a command as follows:

>impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp

SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL that could be executed in another database to create the tables and indexes as desired.

Encryption with Oracle Data Pump


Introduction

The security and compliance requirements in today's business world present manifold challenges. As incidents of data theft increase, protecting data privacy continues to be of paramount importance. Now a de facto solution in meeting regulatory compliance requirements, data encryption is one of a number of security tools in use. The Oracle Advanced Security features built into Oracle Data Pump assist customers in safeguarding sensitive data stored in dump files from unauthorized access.

Oracle Database 10g release 2 introduced the Oracle Advanced Security Transparent Data Encryption (TDE) feature, which enables column data to be encrypted while stored in the database. Customers who take advantage of this feature can use Oracle Data Pump to encrypt this TDE column data as it is written to the export dump file set. The purpose of this section is to explain how the Oracle Data Pump TDE Encrypted Column feature works. Please note that this does not apply to the original Export/Import utilities. For information regarding the Oracle Data Pump Encrypted Dump File feature that was released with Oracle Database 11g release 1, and that provides the ability to encrypt all exported data as it is written to the export dump file set, refer to the Oracle Data Pump Encrypted Dump File Support whitepaper.

The TDE column encryption feature transparently encrypts and decrypts data written to and read from application table columns for any columns that are marked with the ENCRYPT keyword. Once a table column is marked with this keyword, encryption and decryption are performed automatically, without the need for any further user or application intervention. The column encryption key used by TDE is taken from randomly generated data or is derived from a password provided during the creation of the table containing the encrypted column. When an authorized user inserts new data into such a column, TDE column encryption encrypts this data prior to storing it in the database. Conversely, when the user selects the column from the database, TDE column encryption transparently decrypts this data back to its original clear text format.

Column data encrypted using TDE remains protected while it resides in the database. However, the protection offered by TDE does not extend beyond the database, and so this protection is lost if the sensitive column data is extracted in clear text format and stored outside of the database. Oracle Data Pump export extends the protection that TDE offers by taking the extracted clear text column data and re-encrypting it, using a dump file encryption key derived from a user-provided password, before it is written to the export dump file set. Column data encrypted using the Oracle Data Pump encrypted column feature thus remains protected outside of the database while it resides in the export dump file set.

Whenever Oracle Data Pump unloads or loads tables containing encrypted columns, it uses the external tables mechanism instead of the direct path mechanism. The use of external tables creates a correspondence between the database table data and the export dump file while using the SQL engine to perform the data transfer.

 The steps involved in exporting a table with encrypted columns are as follows:

1. Data Pump performs a SELECT operation on the table that contains the encrypted columns from the database.

2. As part of the SELECT operation, TDE automatically decrypts the encrypted column data back to clear text format using the column encryption key.

3. Data Pump re-encrypts the clear text column data using the dump file encryption key and then writes this encrypted data to the export dump file set. To load an export dump file set containing encrypted column data into a target database, the same encryption password used at export time must be provided to Oracle Data Pump import. After verifying that the correct password has been given, the corresponding dump file decryption key is derived from this password.

 The steps involved in importing a table with encrypted columns are as follows:

1. Data Pump reads the encrypted column data from the export dump file set and decrypts the data back to clear text format using the dump file encryption key.

2. Data Pump performs an INSERT operation of the clear text column data into the table that contains the encrypted column.

3. As part of the INSERT operation, TDE automatically encrypts the column data using the column encryption key and then writes it to the database.

Exporting and importing encrypted columns may have a slightly negative impact on the overall performance of the Data Pump job. Although the data being processed is stored in memory buffers, encryption and decryption are typically CPU intensive operations. Furthermore, additional disk I/O is incurred due to space overhead added to the encrypted data in order to perform data integrity checks and to safeguard against brute force attacks.

Keep in mind that in Oracle Data Pump 10g release 2, the ENCRYPTION_PASSWORD

parameter applies only to TDE encrypted columns. Support for the encryption of the entire dump file is an Oracle Data Pump 11g release 1 feature and is discussed separately in a different section.

Creating a Table with Encrypted Columns

Before using TDE to create and export encrypted columns, it is first necessary to create an Oracle Encryption Wallet, which is a repository for holding entities like authentication and signing credentials as well as database master encryption keys. The SQL ALTER SYSTEM statement is used to create a new encryption wallet and set the database master key. In the following example, the password used in the IDENTIFIED BY clause is required and is used solely for gaining access to the wallet.

Next, create a table with an encrypted column. The password used below in the IDENTIFIED BY clause is optional, and TDE uses it to derive the table's column encryption key. If the IDENTIFIED BY clause is omitted, then TDE creates the table's column encryption key based on random data.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

SQL> CREATE TABLE DP.EMP (
       empid   NUMBER(6),
       empname VARCHAR2(100),
       salary  NUMBER(8,2) ENCRYPT IDENTIFIED BY "column_pwd"
     );

Using Oracle Data Pump to Export Encrypted Columns

Oracle Data Pump can now be used to export the table. In the following example, the password provided in the ENCRYPTION_PASSWORD parameter is used to derive the dump file's encryption key. Oracle Data Pump re-encrypts the column data in the dump file using this dump file key. When re-encrypting encrypted column data, Oracle Data Pump uses the Advanced Encryption Standard (AES) cryptographic algorithm with a key length of 128 bits (AES128). Note that there is absolutely no connection between the password specified by the Oracle Data Pump ENCRYPTION_PASSWORD parameter and the passwords used in the SQL ALTER SYSTEM and CREATE TABLE statements.

Attempting to use the ENCRYPTION_PASSWORD parameter when the Oracle Encryption Wallet is closed results in an error. This is shown in the following example, in which the Oracle Wallet is manually closed and then the export command is re-issued.

Although the ENCRYPTION_PASSWORD is an optional parameter, it is always prudent to export encrypted columns using a password. In the event that the password is not specified, Oracle Data Pump writes the encrypted column data as clear text in the dump file. In such a case, a warning message (ORA-39173) is displayed, as shown in the following example.

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

SQL> ALTER SYSTEM SET WALLET CLOSE;

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp \

ENCRYPTION_PASSWORD=dump_pwd

Export: Release 10.2.0.4.0 – Production on Monday, 09 July, 2009

8:21:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release

10.2.0.4.0 – Production

With the Partitioning, Data Mining and Real Application Testing

options

ORA-39001: invalid argument value

ORA-39180: unable to encrypt ENCRYPTION_PASSWORD

ORA-28365: wallet is not open

Restriction with Transportable Tablespace Export Mode

Exporting encrypted columns is not limited to table mode exports, as used in the previous examples. If a schema, tablespace, or full mode export is performed, then all encrypted columns in any of the exported tables selected for that mode are re-encrypted before being written to the export dump file set. This is true even when these export modes are used in network mode via the Oracle Data Pump NETWORK_LINK parameter.

There is, however, one exception; transportable tablespace export mode does not support encrypted columns. An attempt to perform an export using this mode when the tablespace contains tables with encrypted columns yields the following error:

 $ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp

Export: Release 10.2.0.4.0 – Production on Wednesday, 09 July, 2009

8:48:43

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release

10.2.0.4.0 – Production

With the Partitioning, Data Mining and Real Application Testing

options

Starting "DP"."SYS_EXPORT_TABLE_01": dp/******** directory=dpump_dir

dumpfile=emp tables=emp

Estimate in progress using BLOCKS method…

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 16 KB

Processing object type TABLE_EXPORT/TABLE/TABLE

. . exported "DP"."EMP" 6.25 KB 3 rows

ORA-39173: Encrypted data has been stored unencrypted in dump file set.

Master table "DP"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

*********************************************************************

Dump file set for DP.SYS_EXPORT_TABLE_01 is:

/ade/jkaloger_lx9/oracle/work/emp.dmp

Job "DP"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 08:48:57

$ expdp system/password DIRECTORY=dpump_dir DUMPFILE=dp.dmp \

TRANSPORT_TABLESPACES=dp

Export: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009

8:55:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release

10.2.0.4.0 – Production

With the Partitioning, Data Mining and Real Application Testing

options

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted

ORA-29341: The transportable set is not self-contained

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error

at 08:55:25

The ORA-29341 error in the previous example is not very informative. If the same transportable tablespace export is executed using Oracle Database 11g release 1, that version does a better job of pinpointing the problem via the information in the ORA-39929 error:

Using Oracle Data Pump to Import Encrypted Columns

Just as when exporting encrypted column data, an Oracle Encryption Wallet must be created and open on the target database before attempting to import a dump file set containing encrypted column data. Otherwise, an "ORA-28365: wallet is not open" error is returned. Note that the wallet on the target database does not require that the same master key be present as the one used on the source database where the export originally took place. Of course, the same password must be provided in the import ENCRYPTION_PASSWORD parameter that was used during the export.

If the encryption attributes for all columns do not exactly match between the source and target tables, then an ORA-26033 exception is raised when you try to import the export dump file set. In the case of the DP.EMP table, the SALARY column must have the ENCRYPT attribute on both the source and target tables between the time that the export dump file is created and the import of that file is performed. For example, assume that the DP.EMP table on the target system has been created exactly as it is on the source system, except that the ENCRYPT attribute has not been assigned to the SALARY column. The output and resulting error messages would look as follows:

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd

$ expdp system/password DIRECTORY=dpump_dir dumpfile=dp.dmp \

TRANSPORT_TABLESPACES=dp

Export: Release 11.1.0.7.0 – Production on Thursday, 09 July, 2009

9:09:00

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release

11.1.0.7.0 – Production

With the Partitioning, Data Mining and Real Application Testing

options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********

directory=dpump_dir dumpfile=dp transport_tablespaces=dp

ORA-39123: Data Pump transportable tablespace job aborted

ORA-39187: The transportable set is not self-contained, violation list is
ORA-39929: Table DP.EMP in tablespace DP has encrypted columns which are not supported.

Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error

at 09:09:21

 Restriction Using Import Network Mode

A network mode import uses a database link to extract data from a remote database and load it into the connected database instance. There are no export dump files involved in a network mode import, and therefore there is no re-encrypting of TDE column data. Thus the use of the ENCRYPTION_PASSWORD parameter is prohibited in network mode imports, as shown in the following example:

$ impdp dp/dp TABLES=dp.emp DIRECTORY=dpump_dir NETWORK_LINK=remote \

TABLE_EXISTS_ACTION=APPEND ENCRYPTION_PASSWORD=dump_pwd

Import: Release 10.2.0.4.0 – Production on Friday, 09 July, 2009

11:00:57

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release

10.2.0.4.0 – Production

With the Partitioning, Data Mining and Real Application Testing

options

ORA-39005: inconsistent arguments

ORA-39115: ENCRYPTION_PASSWORD is not supported over a network link

$ impdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp \

ENCRYPTION_PASSWORD=dump_pwd TABLE_EXISTS_ACTION=APPEND

Import: Release 10.2.0.4.0 – Production on Thursday, 09 July, 2009

10:55:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 -

Production

With the Partitioning, Data Mining and Real Application Testing options

Master table "DP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded

Starting "DP"."SYS_IMPORT_TABLE_01": dp/******** directory=dpump_dir

dumpfile=emp.dmp tables=emp encryption_password=********

table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "DP"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31693: Table data object "DP"."EMP" failed to load/unload and is being skipped due to error:

ORA-02354: error in exporting/importing data

ORA-26033: column "EMP".SALARY encryption properties differ for source or target table

Job "DP"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 10:55:48

Oracle White Paper Encryption with Oracle Data Pump

By removing the ENCRYPTION_PASSWORD parameter you can perform the network mode import. However, it is important to understand that any TDE column data will be transmitted in clear-text format. If you are concerned about the security of the information being transmitted, then consider using Oracle Net Services to configure Oracle Advanced Security Network Data Encryption.

When the ENCRYPTION_PASSWORD Parameter Is Not Needed

It should be pointed out that when importing from an export dump file set that includes encrypted column data, the encryption password and the Oracle Wallet are required only when the encrypted column data is being accessed. The following are cases in which the encryption password and Oracle Wallet are not needed:

    A full metadata-only import
    A schema-mode import in which the referenced schemas do not include tables with encrypted columns
    A table-mode import in which the referenced tables do not include encrypted columns

Encrypted Columns and External Tables

The external tables feature allows you to access data in an external operating system file as if it were inside a table residing in the database. An external table definition is created using the SQL syntax CREATE TABLE ORGANIZATION EXTERNAL and specifying the ORACLE_DATAPUMP access driver in the TYPE clause.

The ORACLE_DATAPUMP access driver uses an export dump file to hold the external data.

Note that this external table export dump file is not the same export dump file as produced by the Oracle Data Pump export utility (expdp). As is always the case when dealing with TDE columns, the Oracle Wallet must first be open before creating the external table. The following example creates an external table called DP.XEMP and populates it using the data in the DP.EMP table. Notice that datatypes for the columns are not specified. This is because they are determined by the column datatypes in the source table in the SELECT subquery.

SQL> CREATE TABLE DP.XEMP (

empid,

empname,

salary ENCRYPT IDENTIFIED BY "column_pwd")

ORGANIZATION EXTERNAL

(

TYPE ORACLE_DATAPUMP

DEFAULT DIRECTORY dpump_dir

LOCATION ('xemp.dmp')

)

REJECT LIMIT UNLIMITED

AS SELECT * FROM DP.EMP;

 The steps involved in creating an external table with encrypted columns are as follows:

1. The SQL engine selects the data for the table DP.EMP from the database. If any columns in the table are marked as encrypted, as the salary column is for DP.EMP, then TDE decrypts the column data as part of the select operation.

2. The SQL engine then inserts the data, which is in clear text format, into the DP.XEMP table. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE encrypts this column data as part of the insert operation.

3. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is used to write the data to the external export dump file. The data in an external table can be written only once, when the CREATE TABLE ORGANIZATION EXTERNAL statement is executed. However, the data in the external table can be selected any number of times using a simple SQL SELECT statement:

SQL> SELECT * FROM DP.XEMP;

The steps involved in selecting data with encrypted columns from an external table are as follows:

1. The SQL engine initiates a select operation. Because DP.XEMP is an external table, the ORACLE_DATAPUMP access driver is called to read the data from the external export file.

2. The data is passed back to the SQL engine. If any columns in the external table are marked as encrypted, as one of its columns is, then TDE decrypts the data as part of the select operation.

The use of the encryption password in the IDENTIFIED BY clause is optional, unless you plan to move the dump file to another database. In that case, the same encryption password must be used for the encrypted columns in the dump file in the table definition on both the source and target database in order to read the data in the dump file.

Encryption Parameter Change in 11g Release 1

As previously discussed, in Oracle Database 10g release 2 only TDE encrypted columns could be encrypted by Oracle Data Pump, and the only encryption-related parameter available was ENCRYPTION_PASSWORD. So, by default, if ENCRYPTION_PASSWORD is present on the command line, then it applies only to TDE encrypted columns (if there are no such columns being exported, then the parameter is ignored).

Beginning in Oracle Database 11g release 1, the ability to encrypt the entire export dump file set is introduced and, with it, several new encryption-related parameters. A new ENCRYPTION parameter supplies options for encrypting part or all of the data written to an export dump file set. Oracle Database 11g release 1 also brings about a change in the default behavior with respect to encryption. The presence of only the ENCRYPTION_PASSWORD parameter no longer means that TDE columns will be encrypted by Oracle Data Pump but instead means that the entire export dump file set will be encrypted. To encrypt only TDE columns using Oracle Data Pump 11g, it is now necessary to include the new ENCRYPTION parameter with the keyword ENCRYPTED_COLUMNS_ONLY. So, the 10g example previously shown becomes the following in 11g:

$ expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp \

TABLES=emp ENCRYPTION_PASSWORD=dump_pwd \

ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
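This release-dependent parameter handling can be sketched as a small shell helper. It only prints the expdp command line rather than invoking expdp; the credentials, directory, and file names are the hypothetical ones from the examples above.

```shell
#!/bin/sh
# Sketch: assemble the expdp command for exporting TDE columns.
# On 11g, ENCRYPTION=ENCRYPTED_COLUMNS_ONLY must be appended so that only
# the TDE columns (not the whole dump file set) are encrypted.
build_expdp() {
  version=$1
  cmd="expdp dp/dp DIRECTORY=dpump_dir DUMPFILE=emp.dmp TABLES=emp ENCRYPTION_PASSWORD=dump_pwd"
  case $version in
    11*) cmd="$cmd ENCRYPTION=ENCRYPTED_COLUMNS_ONLY" ;;
  esac
  echo "$cmd"
}
build_expdp 10.2
build_expdp 11.1
```

The same helper shape works for any parameter that changed meaning between releases; only the case branch needs adjusting.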

Steps to switch over standby to primary

SWITCHOVER PRIMARY TO STANDBY DATABASE
Primary =PRIM
Standby = STAN
I. Before Switchover:
1. As I always recommend, test the Switchover first on your testing systems before working on Production.
2. Verify the primary database instance is open and the standby database instance is mounted.
3. Verify there are no active users connected to the databases.
4. Make sure the last redo data transmitted from the Primary database was applied on the standby database. Issue the following commands on Primary database and Standby database to find out:
SQL>select sequence#, applied from v$archived_log;
Perform SWITCH LOGFILE if necessary.
In order to apply redo data to the standby database as soon as it is received, use Real-time apply.
II. Quick Switchover Steps
1. Initiate the switchover on the primary database PRIM:
SQL>connect /@PRIM as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

2. After step 1 finishes, Switch the original physical standby db STAN to primary role;
Open another prompt and connect to SQLPLUS:
SQL>connect /@STAN as sysdba
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. Immediately after issuing command in step 2, shut down and restart the former primary instance PRIM:
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP MOUNT;

4. After step 3 completes:
- If you are using Oracle Database 10g release 1, you will have to shut down and restart the new primary database STAN.
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP;

- If you are using Oracle Database 10g release 2, you can open the new Primary database STAN:
SQL>ALTER DATABASE OPEN;
STAN is now transitioned to the primary database role.
5. On the new primary database STAN, perform a SWITCH LOGFILE to start sending redo data to the standby database PRIM.
SQL>ALTER SYSTEM SWITCH LOGFILE;

JOB SCHEDULING

CRON JOB SCHEDULING – IN UNIX

Cron is used in two ways:

    To run system jobs on a daily/weekly/monthly basis
    To allow users to set up their own schedules

The system schedules are setup when the package is installed, via the creation of some special directories:

/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

Except for the first one, which is special, these directories allow scheduling of system-wide jobs in a coarse manner. Any script which is executable and placed inside them will run at the frequency its name suggests.

For example if you place a script inside /etc/cron.daily it will be executed once per day, every day.

The time that the scripts run in those system-wide directories is not something that an administrator typically changes, but the times can be adjusted by editing the file /etc/crontab. The format of this file will be explained shortly.

The normal manner which people use cron is via the crontab command. This allows you to view or edit your crontab file, which is a per-user file containing entries describing commands to execute and the time(s) to execute them.

To display your file you run the following command:

crontab -l

root can view any user's crontab file by adding “-u username“, for example:

crontab -u skx -l  # List skx's crontab file.

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

    The number of minutes after the hour (0 to 59)
    The hour in military time (24 hour) format (0 to 23)
    The day of the month (1 to 31)
    The month (1 to 12)
    The day of the week(0 or 7 is Sun, or use name)
    The command to run

More graphically they would look like this:

*     *     *     *     *  Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +------- Month (1 - 12)
|     |     +--------- Day of month (1 - 31)
|     +----------- Hour (0 - 23)
+------------- Min (0 - 59)

(Each of the first five fields contains only numbers; however, they can be left as ‘*’ characters to signify that any value is acceptable.)
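The field layout can be made concrete with a minimal POSIX shell sketch (the entry itself is a made-up example) that splits a crontab line into its five time fields and the command:

```shell
#!/bin/sh
# Split a crontab entry into its five time fields and the command.
# set -f disables globbing so the '*' fields survive word splitting.
set -f
entry='0 * * * * /bin/ls'
set -- $entry
min=$1 hour=$2 dom=$3 mon=$4 dow=$5
shift 5
cmd=$*
echo "min=$min hour=$hour dom=$dom mon=$mon dow=$dow cmd=$cmd"
```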

Now that we’ve seen the structure we should try to run a couple of examples.

To edit your crontab file run:

crontab -e

This will launch your default editor upon your crontab file (creating it if necessary). When you save the file and quit your editor it will be installed into the system unless it is found to contain errors.

If you wish to change the editor used to edit the file, set the EDITOR environment variable like this:

export EDITOR=/usr/bin/emacs
crontab -e

Now enter the following:

0   *   *   *   * /bin/ls

When you’ve saved the file and quit your editor you will see a message such as:

crontab: installing new crontab

You can verify that the file contains what you expect with :

crontab -l

Here we’ve told the cron system to execute the command “/bin/ls” every time the minute equals 0, i.e. we’re running the command on the hour, every hour.

Any output of the command you run will be sent to you by email; if you wish to stop this then you should cause it to be redirected, as follows:

0   *   *   *   *  /bin/ls   >/dev/null 2>&1

This causes all output to be redirected to /dev/null – meaning you won’t see it.

Now we’ll finish with some more examples:

# Run the `something` command every hour on the hour
0   *   *   *   * /sbin/something

# Run the `nightly` command at ten minutes past midnight every day
10  0   *   *   * /bin/nightly

# Run the `monday` command every monday at 2 AM
0   2   *   *   1  /usr/local/bin/monday

One last tip: If you want to run something very regularly you can use an alternate syntax: Instead of using only single numbers you can use ranges or sets.

A range of numbers indicates that every item in that range will be matched; if you use the following line you’ll run a command on the hour at 1AM, 2AM, 3AM, and 4AM:

# Use a range of hours matching 1, 2, 3 and 4AM
0   1-4   *   *   * /bin/some-hourly

A set is similar, consisting of a collection of numbers separated by commas; each item in the list will be matched. The previous example would look like this using sets:

# Use a set of hours matching 1, 2, 3 and 4AM
0   1,2,3,4   *   *   * /bin/some-hourly
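The way ranges and sets are matched against an hour value can be sketched in shell. This is an illustration of the matching rule, not cron's actual implementation:

```shell
#!/bin/sh
# Check whether a given hour matches a cron hour field that may be
# '*', a range such as 1-4, or a set such as 1,2,3,4.
hour_matches() {
  field=$1 h=$2
  [ "$field" = '*' ] && return 0
  oldifs=$IFS
  IFS=,                       # split sets on commas
  for part in $field; do
    case $part in
      *-*) lo=${part%-*} hi=${part#*-}
           if [ "$h" -ge "$lo" ] && [ "$h" -le "$hi" ]; then
             IFS=$oldifs; return 0
           fi ;;
      *)   if [ "$part" -eq "$h" ]; then IFS=$oldifs; return 0; fi ;;
    esac
  done
  IFS=$oldifs
  return 1
}
hour_matches 1-4 3 && echo "hour 3 matches 1-4"
hour_matches 1,2,3,4 5 || echo "hour 5 does not match 1,2,3,4"
```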



JOB SCHEDULING IN WINDOWS

Cold backup – scheduling in windows environment

 Create a batch file as cold_bkp.bat

@echo off

net stop OracleServiceDBNAME

net stop OracleOraHome92TNSListener

xcopy /E /Y E:\oracle\oradata\HRMS D:\daily_bkp_\coldbackup\hrms

xcopy /E /Y E:\oracle\ora92\database D:\daily_bkp\registry\database

net start OracleServiceDBNAME

net start OracleOraHome92TNSListener

Save the file as cold_bkp.bat. Go to Start -> Control Panel -> Scheduled Tasks.

    Click on Add a scheduled task.
    Click next and browse your cold_bkp.bat file.
    Give a name for the backup and schedule the timings.
    It will ask for o/s user name and password.
    Click next and finish the scheduling.



Note:

Whenever the O/S user name and password are changed, reschedule the scheduled tasks. If you don’t reschedule, the job won’t run. So edit the scheduled task and enter the new password.

Steps for schema refresh

Schema refresh in oracle 9i
Now we are going to refresh the SH schema.
  Steps for schema refresh – before exporting
 Spool the output of the roles and privileges assigned to the user. Use the queries below to view the roles and privileges, and spool the output as a .sql file.
  1. SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;
  2. Verify the total number of objects from the above query.
  3. Write dynamic queries as below:
  4. select 'grant ' || privilege || ' to sh;' from session_privs;
  5. select 'grant ' || role || ' to sh;' from session_roles;
  6. Query the default tablespace and its size:
  7. select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='SH' group by tablespace_name;
  Export the ‘SH’ schema
 exp username/password file=/location/sh_bkp.dmp log=/location/sh_exp.log owner=SH direct=y
Steps to drop and recreate the schema
Drop the SH schema.
  1. Create the SH schema with the default tablespace and allocate quota on that tablespace.
  2. Now run the spooled roles and privileges scripts.
  3. Connect as SH and verify the tablespace, roles and privileges.
  4. Then start importing.
  Importing the ‘SH’ schema
 imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH
 SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;
 Compiling and analyzing SH Schema
exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);
 Now connect the SH user and check for the import data.
 Schema refresh by dropping objects and truncating objects
 Export the ‘sh’ schema
 Take the schema full export as show above
 Drop all the objects in ‘SH’ schema
To drop the all the objects in the Schema
 Connect the schema
Spool the output
 SQL>set head off
SQL>spool drop_tables.sql
SQL>select 'drop table '||table_name||' cascade constraints purge;' from user_tables;
SQL>spool off
 SQL>set head off
SQL>spool drop_other_objects.sql
SQL>select 'drop '||object_type||' '||object_name||';' from user_objects;
SQL>spool off
Now run the script; all the objects will be dropped.
 Importing the ‘SH’ schema
imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH
SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;
 Compiling and analyzing SH Schema
exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);
 Now connect the SH user and check for the import data.
To enable constraints use the query below:
SELECT 'ALTER TABLE '||TABLE_NAME||' ENABLE CONSTRAINT '||CONSTRAINT_NAME||';' FROM USER_CONSTRAINTS
WHERE STATUS='DISABLED';
 Truncate all the objects in ‘SH’ schema
 To truncate the all the objects in the Schema
 Connect the schema
Spool the output
 SQL>set head off 
SQL>spool truncate_tables.sql
SQL>select 'truncate table '||table_name||';' from user_tables;
SQL>spool off
 SQL>set head off 
SQL>spool truncate_other_objects.sql
SQL>select 'truncate '||object_type||' '||object_name||';' from user_objects;
SQL>spool off
Now run the script; all the objects will be truncated.
Disabling the reference constraints
 If there is any constraint violation while truncating, use the query below to find the referencing key constraints and disable them. Spool the output of the query and run the script.
 select constraint_name, constraint_type, table_name FROM ALL_CONSTRAINTS
 where constraint_type='R'
 and r_constraint_name in (select constraint_name from all_constraints
 where table_name='TABLE_NAME');
 Importing the ‘SH’ schema
imp username/password file=/location/sh_bkp.dmp log=/location/sh_imp.log fromuser=SH touser=SH
SQL> SELECT object_type, count(*) from dba_objects where owner='SH' group by object_type;
Compiling and analyzing SH Schema
exec dbms_utility.compile_schema('SH');
exec dbms_utility.analyze_schema('SH','ESTIMATE',estimate_percent=>20);
Now connect the SH user and check for the import data.
 Schema refresh in oracle 10g
Here we can use Datapump
 Exporting the SH schema through Datapump
 expdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh
Dropping  the ‘SH’ user
  Query the default tablespace and verify the space in the tablespace and drop  the user.
 SQL>Drop user SH cascade;
Importing the SH schema through datapump
impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=sh
 If you are importing to different schema use remap_schema option.
 Check for the imported objects and compile the invalid objects.
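The choice between a plain import and a REMAP_SCHEMA import can be sketched as a small shell helper. It only prints the impdp command; the credentials and file names are the hypothetical ones used above.

```shell
#!/bin/sh
# Sketch: build the impdp command, adding REMAP_SCHEMA only when the
# target schema differs from the source schema.
build_impdp() {
  src=$1 tgt=$2
  cmd="impdp username/password dumpfile=sh_exp.dmp directory=data_pump_dir schemas=$src"
  if [ "$src" != "$tgt" ]; then
    cmd="$cmd remap_schema=$src:$tgt"
  fi
  echo "$cmd"
}
build_impdp sh sh        # same schema: no remap needed
build_impdp sh sh_test   # different target: remap_schema is added
```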

Duplicate Database With RMAN Without Connecting To Target Database

Duplicate Database With RMAN Without Connecting To Target Database – from metalink Id 732624.1
How to do duplicate database without connecting to target database using backups taken from RMAN on alternate host.
Solution
Follow the below steps
1) Export ORACLE_SID=<SID>
Create an init.ora file and give db_name=<db_name> and control_files=<location where you want the controlfile to be restored>
2) Startup nomount pfile=<path to init.ora>;
3) Connect to RMAN and issue the command:
RMAN>restore controlfile from '<backuppiece location>';
The controlfile should be restored.
4) Issue “alter database mount”
Make sure that the backup pieces are in the same location where they were on the production db. If you
don't have the same location, then make RMAN aware of the changed location using the "catalog" command:
RMAN>catalog backuppiece '<piece name and location>';
If there are more backup pieces, they can be cataloged using the command:
RMAN>catalog start with '<location>';
5) After cataloging the backup pieces, issue the "restore database" command. If you need to restore datafiles to a location different from the one recorded in the controlfile, use the SET NEWNAME command as below:
run {
set newname for datafile 1 to '/newLocation/system.dbf';
set newname for datafile 2 to '/newLocation/undotbs.dbf';

restore database;
switch datafile all;
}
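For a database with many datafiles, the run block above can be generated rather than typed by hand. A minimal sketch, assuming hypothetical datafile names and a /newLocation target (it only prints the RMAN commands):

```shell
#!/bin/sh
# Sketch: generate an RMAN run block that restores each datafile
# to a new location, numbering the files in list order.
gen_run_block() {
  loc=$1; shift
  printf 'run {\n'
  i=1
  for f in "$@"; do
    printf "set newname for datafile %d to '%s/%s';\n" "$i" "$loc" "$f"
    i=$((i+1))
  done
  printf 'restore database;\nswitch datafile all;\n}\n'
}
gen_run_block /newLocation system.dbf undotbs.dbf
```

The generated text can be pasted into an RMAN session or saved as a command file and run with `rman @file`.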

Monday, May 7, 2012

Shared Memory Realm Does Not Exist

 ORA-01034 & ORA-27101: Shared Memory Realm Does Not Exist

ORA-27101 may occur for various reasons. According to the Oracle docs,

ORA-27101 :  shared memory realm does not exist

Cause  : Unable to locate shared memory realm .

Action  : Verify that the realm is accessible .

ORA-27101 and ORA-01034 combined may occur in various scenarios. The main reason for this error is that the database is not up. When we try to connect with the Oracle database, or sometimes during installation, we may get this error. We may face it in some other scenarios too. Here are a few possible scenarios (some are from the Windows platform and some are from the Linux platform):

Case 1 :  When try to connect with normal user we get the error as

C:\>sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Mon Feb 6 11:22:08 2012

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: scott

Enter password:

ERROR:

ORA-01034: ORACLE not available

ORA-27101:  shared memory realm does not exist

Process ID: 0

Session ID: 0 Serial number: 0

To solve this issue, set the ORACLE_SID, connect as the "sys" user as sysdba, and start the database:

C:\>set ORACLE_SID=noida

C:\>sqlplus sys/xxxx@noida as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Mon Feb 6 11:25:13 2012

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup

ORACLE instance started.

Total System Global Area  535662592 bytes

Fixed Size                  1375792 bytes

Variable Size             331350480 bytes

Database Buffers          197132288 bytes

Redo Buffers                5804032 bytes

Database mounted.

Database opened.

Case 2 : On a system with multiple IP addresses, when we connect with sqlplus locally on the server (i.e. "sqlplus user/password") everything is OK, but connecting through a TNS alias, either from the network or locally on the server, we get the following errors:

$ sqlplus user/password@db_alias

ERROR:

ORA-01034: ORACLE not available

ORA-27101: shared memory realm does not exist

SVR4 Error: 2: No such file or directory

We need to manually start the database, or check whether the listener is started or not. So either connect as

$export ORACLE_SID=noida

$ sqlplus sys/password as sysdba

SQL>startup

or

check the status of the listener; if it is not started, then start the listener as

$lsnrctl

lsnrctl> stop

lsnrctl>start

lsnrctl>exit

$export ORACLE_SID=noida

$sqlplus sys/password@noida as sysdba

SQL> startup

Case 3 : Make sure while connecting that ORACLE_SID and ORACLE_HOME are correctly set. There should not be a trailing slash in the ORACLE_HOME path. So, set ORACLE_SID before connecting as:

Linux  :

$export ORACLE_SID=noida

$sqlplus sys/xxxx@noida as sysdba

SQL> startup

Window  :

C:\> set ORACLE_SID=noida

C:\> sqlplus sys/xxxx@noida as sysdba

SQL> startup
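On Linux, the pre-connection checks suggested in Case 3 can be sketched in shell (the SID value and the trailing-slash check are assumptions used for illustration, not Oracle-mandated steps):

```shell
#!/bin/sh
# Sketch: sanity-check the environment before invoking sqlplus.
ORACLE_SID=noida
export ORACLE_SID
if [ -z "$ORACLE_SID" ]; then
  echo "ORACLE_SID is not set" >&2
  exit 1
fi
# A trailing slash on ORACLE_HOME is a common gotcha worth flagging.
case ${ORACLE_HOME:-} in
  */) echo "warning: ORACLE_HOME ends with a slash: $ORACLE_HOME" ;;
esac
echo "environment looks OK for SID $ORACLE_SID"
```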

Case 4 : Sometimes on Windows, if the event log is full, we get this error. So delete the event logs and try to connect again.

Enjoy     :-)

Tuesday, March 13, 2012

Cannot perform a DML operation inside a Query in Functions

ORA-14551: cannot perform a DML operation inside a query

CREATE OR REPLACE FUNCTION vamsi_f(ename IN VARCHAR2) RETURN INTEGER
AS empid INTEGER;
BEGIN
INSERT INTO EMP_BK (empno, ename) VALUES (5, ename);
SELECT empno INTO empid FROM EMP_BK WHERE EMP_BK.ENAME = ename;
RETURN empid;
END;
/

I called the function using the command:

sql> select vamsi_f('vamsi') from dual;

which causes the error below:

ORA-14551: cannot perform a DML operation inside a query
ORA-06512: at "TEST.VAMSI_F", line 4

DML (insert, delete, update) operations should be called using the 'call' command as below.

sql> var myvar NUMBER;
sql> call vamsi_f('vamsi') into :myvar;
sql> print myvar

MYVAR STRING = 5

Wednesday, February 22, 2012

Blocks, Extents, Segments in Oracle Database

Data Blocks Extents Segments in Database
Logical storage component of Oracle Database i.e. Data Blocks, Extents and Segment
A. Data Block is the smallest logical unit used to store Oracle data.
ii) One data block represents a specific number of bytes on the physical hard disk.
iii) A data block in Oracle is also called a logical block.
iv) The data block size is usually a multiple of the operating system block size.
v) You can have multiple block sizes within a single database (max. five).
vi) The block size is specified by the initialization parameter DB_BLOCK_SIZE.
vii) The format of a data block is:
(Figure: data block format in an Oracle database)

a) Header : contains generic information like block address and type of segment (index, data, ...)
b) Table Directory : contains information about table having rows in that block
c) Row Directory : contains information about actual row contained in that block
d) Free Space : available space in data block for additional row or update of row which require more space.
e) Row Data : contains table or index data. The first three components of a data block (header, table directory and row directory) are collectively known as overhead.

B. Extent is a collection of contiguous data blocks.
ii) One or more extents make up a segment.

C. Segment is a set of extents allocated for a specific data structure (like a table or index).
ii) Various kinds of segments are table, index, cluster, rollback, temporary …
iii) Important views for segments are dba_segments, user_segments, all_segments
iv) In a segment, the first block of the first extent contains the segment header information

Things to note w.r.t. segments, extents & data blocks
i) A segment and its associated extents are stored in one tablespace.
ii) Extents of a segment may not be contiguous on disk.
iii) A segment can span multiple datafiles of a particular tablespace (information on tablespaces & datafiles coming soon), but an extent can contain data from only one datafile.





What are different normalization forms?

1NF: Eliminate Repeating Groups

Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.

2NF: Eliminate Redundant Data

If an attribute depends on only part of a multi-valued key, remove it to a separate table.

3NF: Eliminate Columns Not Dependent On Key

If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key.

BCNF: Boyce-Codd Normal Form

If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.

4NF: Isolate Independent Multiple Relationships

No table may contain two or more 1:n or n:m relationships that are not directly related.

5NF: Isolate Semantically Related Multiple Relationships

There may be practical constraints on information that justify separating logically related many-to-many relationships.

ONF: Optimal Normal Form

A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.

DKNF: Domain-Key Normal Form

A model free from all modification anomalies.