CAUSE:
GoldenGate DDL replication is enabled on the database. The trigger sys.GGS_DDL_TRIGGER_BEFORE fires during utlrp, but the trigger fails because some GoldenGate objects are invalid.
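A quick way to confirm this cause is to list the invalid objects in the GoldenGate schema (a minimal sketch; GGUSER is assumed here as the GoldenGate schema name):
select owner, object_name, object_type, status
from dba_objects
where owner = 'GGUSER' and status = 'INVALID';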
Solution:
1. disable the trigger as sysdba
alter trigger sys.GGS_DDL_TRIGGER_BEFORE disable ;
2. run utlrp
@?/rdbms/admin/utlrp.sql
3. enable the trigger as sysdba
alter trigger sys.GGS_DDL_TRIGGER_BEFORE enable ;
reference:
http://harikrishnakm.wordpress.com/2013/08/19/catuppst-sql-fails-with-gguser-ddlreplication-does-not-exist-after-upgrading-the-database/
Tuesday, November 5, 2013
Monday, October 14, 2013
Duplicate Database using RMAN backup without connecting to target for oracle 10g version on windows
PURPOSE:
Duplicate an Oracle 10g database on a new server for testing, without connecting to the target database.
STEPS:
STEP 1. make a backup
1.1 backup database
rman target /
run
{
allocate channel c1 type disk format 'F:\rman\rman_db_%d_%T_%U.rman';
backup as compressed backupset full database;
release channel c1;
}
1.2 backup controlfile
rman target /
run
{
allocate channel c1 type disk format 'F:\rman\rman_ctl_%d_%T_%U.rman';
backup as compressed backupset current controlfile;
release channel c1;
}
1.3 backup required archive logs
run
{
allocate channel c1 type disk format 'F:\rman_test\rman_arch_%d_%T_%U.rman';
backup archivelog sequence between 100 and 200 ;
release channel c1;
}
STEP 2. Create the required directories for the new database on the new host
2.1 data file, log file and tempfile location
select name from v$datafile;
select member from v$logfile;
select name from v$tempfile;
mkdir D:\ORACLE\ORADATA\TEST
2.2 other locations ( dump dest and archivelog dest)
show parameter dest
STEP 3. copy the pfile and modify it (see the parameter sketch below)
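A minimal sketch of the pfile entries that typically need attention for the new host; the paths and values below are assumptions based on the locations used elsewhere in this post, not a complete parameter file:
# initTEST.ora on the new host (illustrative values)
db_name=TEST
control_files='D:\ORACLE\ORADATA\TEST1\CONTROL01.CTL'
background_dump_dest='D:\ORACLE\ADMIN\TEST\BDUMP'
user_dump_dest='D:\ORACLE\ADMIN\TEST\UDUMP'
log_archive_dest_1='LOCATION=D:\ORACLE\ORADATA\TEST1\ARCHIVE'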
STEP 4. create windows service
oradim -new -sid test -intpwd test
STEP 5. start the instance in nomount (the control file does not exist yet)
startup nomount
STEP 6. restore control file and mount database
rman target /
set dbid=123456789;
restore controlfile from 'F:\rman\RMAN_ctl_test_20131011_95OM4JNU_1_1.RMAN';
alter database mount;
STEP 7. restore database from backup
The new datafile locations need to be specified using "set newname". Catalog the backup pieces first if they are not already known to the control file.
rman target /
catalog start with 'F:\rman';
run
{
set newname for datafile 1 to 'D:\ORACLE\ORADATA\TEST1\SYSTEM01.DBF';
set newname for datafile 2 to 'D:\ORACLE\ORADATA\TEST1\UNDOTBS01.DBF';
set newname for datafile 3 to 'D:\ORACLE\ORADATA\TEST1\SYSAUX01.DBF';
set newname for datafile 4 to 'D:\ORACLE\ORADATA\TEST1\USERS01.DBF';
allocate channel c1 type disk format 'F:\rman\RMAN_DB_%d_%T_%U.RMAN';
restore database ;
release channel c1;
}
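As an alternative to the manual datafile renames in step 8, RMAN can record the new datafile names itself right after the restore; a hedged sketch (same SET NEWNAME and ALLOCATE CHANNEL commands as above):
run
{
# same set newname and allocate channel commands as above
restore database;
switch datafile all;  # records the new datafile names in the control file
}
The online redo log files would still need the ALTER DATABASE RENAME FILE statements shown in step 8.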
STEP 8. rename datafile and logfile
alter database rename file 'D:\ORACLE\ORADATA\TEST\SYSTEM01.DBF' to 'D:\ORACLE\ORADATA\TEST1\SYSTEM01.DBF';
alter database rename file 'D:\ORACLE\ORADATA\TEST\UNDOTBS01.DBF' to 'D:\ORACLE\ORADATA\TEST1\UNDOTBS01.DBF';
alter database rename file 'D:\ORACLE\ORADATA\TEST\SYSAUX01.DBF' to 'D:\ORACLE\ORADATA\TEST1\SYSAUX01.DBF';
alter database rename file 'D:\ORACLE\ORADATA\TEST\USERS01.DBF' to 'D:\ORACLE\ORADATA\TEST1\USERS01.DBF';
alter database rename file 'D:\ORACLE\ORADATA\TEST\REDO01.LOG' to 'D:\ORACLE\ORADATA\TEST1\REDO01.LOG';
alter database rename file 'D:\ORACLE\ORADATA\TEST\REDO02.LOG' to 'D:\ORACLE\ORADATA\TEST1\REDO02.LOG';
alter database rename file 'D:\ORACLE\ORADATA\TEST\REDO03.LOG' to 'D:\ORACLE\ORADATA\TEST1\REDO03.LOG';
alter database rename file 'E:\ORACLE\ORADATA\TEST\REDO01B.LOG' to 'E:\ORACLE\ORADATA\TEST1\REDO01B.LOG';
alter database rename file 'E:\ORACLE\ORADATA\TEST\REDO02B.LOG' to 'E:\ORACLE\ORADATA\TEST1\REDO02B.LOG';
alter database rename file 'E:\ORACLE\ORADATA\TEST\REDO03B.LOG' to 'E:\ORACLE\ORADATA\TEST1\REDO03B.LOG';
STEP 9. recover database to consistent state
rman target /
recover database until sequence 201;
STEP 10. open database with resetlogs and drop, recreate temp file
alter database open resetlogs;
alter tablespace temp add tempfile 'D:\ORACLE\ORADATA\TEST1\TEMP01.DBF' size 3G ;
ALTER DATABASE TEMPFILE 'D:\ORACLE\ORADATA\TEST\TEMP01.DBF' DROP INCLUDING DATAFILES;
STEP 11. set up network connection
sqlplus sys/test@test as sysdba
STEP 12. copy pfile of test to pfile of test1
change db_name from test to test1
STEP 13. rename database
orapwd FILE=pwdtest.ora PASSWORD=test
startup mount
nid TARGET=SYS/test@test DBNAME=test1
STEP 14. set oracle_sid as test1 and open database with resetlogs
startup mount
alter database open resetlogs;
REFERENCE:
http://dbaregistry.blogspot.com/2011/04/rman-duplicate-without-connecting-to.html
http://docs.oracle.com/cd/B28359_01/server.111/b28310/dfiles005.htm
http://docs.oracle.com/cd/B14117_01/server.101/b10825/dbnewid.htm
http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmdupad.htm
http://docs.oracle.com/cd/B19306_01/backup.102/b14191/rcmdupdb.htm
https://forums.oracle.com/thread/2421213
http://www.dba-oracle.com/t_rename_database_oracle_sid.htm
http://docs.oracle.com/html/B13831_01/admin.htm
http://docs.oracle.com/cd/B19306_01/server.102/b14231/dfiles.htm
http://ss64.com/ora/rman_format_string.html
http://docs.oracle.com/cd/B19306_01/server.102/b14237/initparams112.htm
http://linuxtechres.blogspot.com/2011/10/how-to-create-oracle-database-service.html
Thursday, October 10, 2013
TNS-12518, TNS-12540, TNS-12582, TNS-12615 Listener ERROR caused by swingbench OLTP test
SYMPTOM:
In the listener.log file, the following error messages appear.
TNS-12518: TNS:listener could not hand off client connection
TNS-12540: TNS:internal limit restriction exceeded
09-OCT-2013 13:46:31 * establish * 12540
TNS-12540: TNS:internal limit restriction exceeded
09-OCT-2013 13:46:31 * 12582
TNS-12582: TNS:invalid operation
TNS-12615: TNS:preempt error
CAUSE:
Too many concurrent connection requests are received by the listener.
Solution:
Increase the QUEUESIZE for the listener. After the change, no more TNS errors appeared in the log file.
listener_name=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=hr-server)(PORT=1521)(QUEUESIZE=1000)))
TRACE LISTENER in 11g:
TRACE_LEVEL_listener_name=16 (0 = no trace; 4 = user; 10 = admin; 16 = support)
OTHER THOUGHT:
There is an article (1399677.1) on My Oracle Support for the same set of errors; that one is a bug caused by an ADR check.
The workaround there is to set _diag_adr_enabled=FALSE and/or apply Patch 9700115.
It looks related to unpublished Bug 9700115.
Reference:
http://docs.oracle.com/cd/B28359_01/network.111/b28317/listener.htm
https://forums.oracle.com/thread/2520109
Thursday, October 3, 2013
create table with only data using data pump
REQUIREMENT:
A snapshot of the data in a schema needs to be stored for comparison. The data snapshot is refreshed from time to time.
EASY SOLUTION:
1. create new schema
create user test1 identified by test;
grant connect to test1;
2. export the schema with Data Pump, including only tables
expdp userid=system/test@test schemas=test include=table dumpfile=test_table_2013_10_01.dump logfile=test_table_2013_10_01.log
3. import the schema back with Data Pump, excluding indexes, triggers and constraints
impdp userid=system/test@test schemas=test remap_schema=test:test1 exclude=trigger,constraint,index dumpfile=test_table_2013_10_01.dump logfile=test_table_2013_10_01_imp1.log
4. refresh by truncate and insert if the data volume is small.
select ' truncate table test1.' || table_name || ';' from user_tables;
select ' insert into test1.' || table_name || ' select * from test.' || table_name || ';' from user_tables;
5. refresh with a data-only Data Pump run if the data volume is large.
expdp userid=system/test@test schemas=test include=table dumpfile=test_table_2013_10_03.dump logfile=test_table_2013_10_03.log
impdp userid=system/test@test schemas=test remap_schema=test:test1 exclude=trigger,constraint,index dumpfile=test_table_2013_10_03.dump logfile=test_table_2013_10_03_imp1.log content=data_only
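If the copy tables in test1 already hold data, the data-only import appends by default; a hedged variation using the standard table_exists_action parameter (the log file name here is illustrative) lets impdp truncate each table before loading:
impdp userid=system/test@test schemas=test remap_schema=test:test1 content=data_only table_exists_action=truncate dumpfile=test_table_2013_10_03.dump logfile=test_table_2013_10_03_imp2.log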
reference:
http://arjudba.blogspot.com/2008/04/datapump-parameters-include-and-exclude.html
Rebuild a standby database through RMAN restore after it has been opened
1. remove archivelogs from default archivelog location
2. make rman backup files accessible to standby database
3. make RMAN aware of the backup files
sqlplus> shutdown immediate;
sqlplus> startup mount;
rman target /
rman> catalog start with 'file location directory';
4. restore database
rman > run {
sql 'alter session set optimizer_mode=RULE';
sql 'ALTER SESSION SET NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"';
ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT 'file location directory\DBFULL_%d_%U.bkp';
restore database from TAG "backup tag" ;
RELEASE CHANNEL d1;
}
5. recover database
5.1 use controlfile before open database
rman target /
rman> recover database until sequence ?????;
5.2 use controlfile after open database
rman target /
rman> list incarnation;
rman> reset database to incarnation 1; (the previous incarnation number)
rman> recover database until sequence ????;
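A hedged way to pick the UNTIL SEQUENCE value is to check the highest archived log sequence that is actually available, either from the cataloged backups or from the database itself, for example:
rman> list backup of archivelog all;
sqlplus> select thread#, max(sequence#) from v$archived_log group by thread#;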
5.3 test standby database
sqlplus / as sysdba
sqlplus> alter database open;
sqlplus> select name, host_name, open_mode, database_role from v$database, v$instance;
reference:
http://oracledb-admin.blogspot.com/2011/10/ora-19909-datafile-1-belongs-to-orphan.html
http://www.dba-oracle.com/bk_recover_database_until_cancel_tips.htm
Wednesday, September 25, 2013
GoldenGate (GG) replicat abended process troubleshooting.
1. Try to start the replicat
ggsci> start replicat TEST
2. check the replicat
ggsci> info all
It should show as ABENDED.
3. check the process report
cd $GG_DIR/bin/dirrpt
less test.rpt
4. find the trail file name and RBA from the report (a grep sketch follows)
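A minimal grep sketch for pulling the last error and the reported trail file/RBA out of the report (assuming the report name test.rpt from step 3):
grep -iE 'error|abend|rba' test.rpt | tail -20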
5. get sql statement using logdump
/*
configure below setting on logdump
fileheader detail
ghdr on
detail on
usertoken detail
reclen 128
*/
cd $GG_DIR/bin
logdump
logdump> open app/oracle/gg/trail/test/dirdat/pt0001
logdump> pos 100000
logdump> fileheader detail
logdump> ghdr on
logdump> detail on
logdump> usertoken detail
logdump> reclen 128
logdump> n
___________________________________________________________________
Hdr-Ind : E (x45) Partition : . (x00)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 1577 (x0629) IO Time : 2013/09/24 23:15:09.334.111
IOType : 160 (xa0) OrigNode : 0 (x00)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 0 AuditPos : 0
Continued : N (x00) RecCount : 1 (x01)
2013/09/24 23:15:09.334.111 DDLOP Len 1000 RBA 100000
Name:
After Image: Partition 0 G s
2c43 353d 2733 3735 3139 272c 2c42 373d 2733 3735 | ,C5='37519',,B7='375
3139 272c 2c42 323d 2727 2c2c 4233 3d27 4d4f 4e41 | 19',,B2='',,B3='TTT
444d 494e 272c 2c42 343d 2744 4841 5f54 4d50 5f23 | TTT',,B4='TTT_TMP_#
5447 5439 5f33 3138 3237 3133 3227 2c2c 4331 323d | TGT9_000001',,C12=
2727 2c2c 4331 333d 2727 2c2c 4235 3d27 5441 424c | '',,C13='',,B5='TABL
4527 2c2c 4236 3d27 4352 4541 5445 272c 2c42 383d | E',,B6='CREATE',,B8=
2747 4755 5345 522e 4747 535f 4444 4c5f 4849 5354 | 'GGUSER.GGS_DDL_HIST
272c 2c42 393d 274d 4f4e 4144 4d49 4e27 2c2c 4337 | ',,B9='TTTTTTT',,C7
3d27 3130 2e32 2e30 2e34 2e30 272c 2c43 383d 2731 | ='10.2.0.4.0',,C8='1
302e 322e 302e 332e 3027 2c2c 4339 3d27 272c 2c43 | 0.2.0.3.0',,C9='',,C
3130 3d27 3127 2c2c 4331 313d 2764 7264 6976 6572 | 10='1',,C11='test
7327 2c2c 4733 3d27 4e4f 4e55 4e49 5155 4527 2c2c | t',,G3='NONUNIQUE',,
4331 343d 274e 4f27 2c2c 4331 353d 274e 4f27 2c2c | C14='NO',,C15='NO',,
4331 393d 2731 3727 2c2c 4331 3728 2731 2729 3d27 | C19='17',,C17('1')='
4e4c 535f 4c41 4e47 5541 4745 272c 2c43 3138 2827 | NLS_LANGUAGE',,C18('
3127 293d 2741 4d45 5249 4341 4e27 2c2c 4331 3728 | 1')='AMERICAN',,C17(
2732 2729 3d27 4e4c 535f 5445 5252 4954 4f52 5927 | '2')='NLS_TERRITORY'
/*
reset reclen to record length according to output
*/
logdump> reclen 1000
/*
position back to 100000
*/
logdump> pos 100000
logdump> n
/*
you will get output for the whole record. The C1 variable in the left panel is the SQL statement.
You can run that statement manually to find out the real error.
*/
6. fix the issue and restart the replicat
How to fix it depends on the error, the database settings, and even the data.
7. skip the transaction and restart the replicat
If you find that the transaction can be skipped,
find the RBA position of the next record using logdump:
logdump> pos 100000
logdump> n
logdump> n
2013/09/24 23:15:09.334.111 DDLOP Len 115 RBA 101000
Name:
After Image: Partition 0 G s
2c43 353d 2733 3735 3139 272c 2c42 373d 2733 3735 | ,C5='37519',,B7='375
3139 272c 2c42 323d 2727 2c2c 4233 3d27 4d4f 4e41 | 19',,B2='',,B3='TTT
444d 494e 272c 2c42 343d 2744 4841 5f54 4d50 5f23 | TTT',,B4='TTT_TMP_#
5447 5439 5f33 3138 3237 3133 3227 2c2c 4331 323d | TGT9_000001',,C12=
2727 2c2c 4331 333d 2727 2c2c 4235 3d27 5441 424c | '',,C13='',,B5='TABL
4527 2c2c 4236 3d27 4352 4541 5445 272c 2c42 383d | E',,B6='CREATE',,B8=
2747 4755 5345 522e 4747 535f 4444 4c5f 4849 5354 | 'GGUSER.GGS_DDL_HIST
272c 2c42 393d 274d 4f4e 4144 4d49 4e27 2c2c 4337 | ',,B9='TTTTTTT',,C7
3d27 3130 2e32 2e30 2e34 2e30 272c 2c43 383d 2731 | ='10.2.0.4.0',,C8='1
302e 322e 302e 332e 3027 2c2c 4339 3d27 272c 2c43 | 0.2.0.3.0',,C9='',,C
3130 3d27 3127 2c2c 4331 313d 2764 7264 6976 6572 | 10='1',,C11='test
7327 2c2c 4733 3d27 4e4f 4e55 4e49 5155 4527 2c2c | t',,G3='NONUNIQUE',,
4331 343d 274e 4f27 2c2c 4331 353d 274e 4f27 2c2c | C14='NO',,C15='NO',,
4331 393d 2731 3727 2c2c 4331 3728 2731 2729 3d27 | C19='17',,C17('1')='
4e4c 535f 4c41 4e47 5541 4745 272c 2c43 3138 2827 | NLS_LANGUAGE',,C18('
3127 293d 2741 4d45 5249 4341 4e27 2c2c 4331 3728 | 1')='AMERICAN',,C17(
2732 2729 3d27 4e4c 535f 5445 5252 4954 4f52 5927 | '2')='NLS_TERRITORY'
ggsci> alter replicat test, extrba 101000
ggsci>info all
Program Status Group Lag Time Since Chkpt
MANAGER RUNNING
REPLICAT STOPPED TEST 16:34:27 00:45:14
ggsci> start replicat test
ggsci> info all
Program Status Group Lag Time Since Chkpt
MANAGER RUNNING
REPLICAT RUNNING TEST 16:34:27 00:45:14
reference:
http://blog.flimatech.com/2012/03/24/how-to-find-the-transaction-that-abended-oracle-goldengate/
http://satya-dba.blogspot.com/2012/02/ggsci-goldengate-command-interpreter.html
Tuesday, August 27, 2013
create directory for a mapped drive in an Oracle database on Windows
ISSUE:
Data Pump cannot create files in a directory object created on a mapped drive like S:\test_dir on Windows.
CAUSE:
A mapped drive is specific to each account. A mapped drive like S:\test_dir must be recognized by the account running the Oracle service.
SOLUTION:
1. make sure the mapped drive you specified in the "create directory" statement is available as the same mapped drive for the account running the Oracle service. You can check Services -> Properties -> Log On to find the account name.
2. use the UNC path directly, like \\192.168.1.3\shearplace\testdir
SQL: create directory test_dir as '\\192.168.1.3\shearplace\testdir';
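A small follow-up sketch to verify that the directory object is usable by Data Pump; the grant and the dump file name are illustrative:
SQL: grant read, write on directory test_dir to system;
expdp userid=system/test@test schemas=test directory=test_dir dumpfile=dir_check.dump logfile=dir_check.log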
reference:
https://forums.oracle.com/message/9981090
Wednesday, July 17, 2013
statistics manipulation commands
1. create local stats table
exec DBMS_STATS.CREATE_STAT_TABLE (ownname => 'TUSER', stattab => 'TEST_STAT_TABLE');
2. export current table stats to local table
exec DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'TUSER',tabname => 'TTABLE',stattab => 'TEST_STAT_TABLE', statid => 'T1', cascade => true);
3. delete current table stats
exec DBMS_STATS.DELETE_TABLE_STATS(ownname => 'TUSER',tabname => 'TTABLE', cascade_columns => true, cascade_indexes => true);
4. import local stats to table
exec DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'TUSER',tabname => 'TTABLE',stattab => 'TEST_STAT_TABLE', statid => 'T1', cascade => true);
5. check statistics
select statid, count(*) from TEST_STAT_TABLE group by statid;
6. statistics stored in the stats table can be moved to another database and imported there.
7. Columns "C5" and "C1" of the statistics table represent the schema owner and the table name respectively. If the owner or table name is different in the target database, update these two columns before importing (see the sketch below).
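A hedged example of that update, assuming the statistics were exported with statid 'T1' into TEST_STAT_TABLE and the target schema is TUSER2 (an illustrative name):
update TUSER.TEST_STAT_TABLE
set c5 = 'TUSER2'
where c5 = 'TUSER' and statid = 'T1';
commit;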
reference:
http://momendba.blogspot.com/2011/01/import-table-statistics-to-another.html
Swingbench installation
1. download the file
2. unzip the file to the location you want it to be.
3. include java_home/bin into path variable
4. run oewizard to install the SOE schema for the order entry test
5. check table record change for order entry test
select count(*) from CUSTOMERS ;
select count(*) from LOGON ;
select count(*) from ORDERS ;
select count(*) from ORDER_ITEMS ;
6. the tables below can be truncated and will be reloaded during testing.
truncate table logon;
truncate table order_ITEMS;
reference:
http://www.dominicgiles.com/installation.html
Tuesday, May 21, 2013
create standby database from standby database
Task: create a new standby database from another standby database on Standard Edition One.
Since Data Guard is not available, archived logs need to be applied manually.
Method:
1. put the standby database into read-only mode
alter database open read only;
2. copy all data files, the control file, the password file and the parameter file to the new location.
You need to rename the files if the location has changed.
alter database rename file 'J:\ORACLE\ORADATA\TEST\TEST_01.DBF' to 'O:\ORACLE\ORADATA\TEST\TEST_01.DBF' ;
3. start the database in mount mode and apply logs
sqlplus / as sysdba
startup mount
recover standby database;
Points.
1. the control file from a standby database can be copied directly and used for the new standby database. You don't need to create a standby control file from the primary.
2. if the control file is not copied during the read-only window, you can copy it later. However, this causes a problem if a new datafile was added after the datafile copy.
3. if the latest control file is used and it includes newly added datafiles, the new datafiles can be copied from the source standby again after it is put back into read-only mode. Even though the timestamps differ between the first copied datafiles and the later ones, the archived logs can be applied to both and bring them into sync.
4. if you use an old control file and a new datafile has been added, you will have to add the new datafile manually in 10g:
alter database create datafile 'D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\UNNAMED00160' as 'I:\ORACLE\ORADATA\TEST_INDX38.DBF' ;
Friday, May 10, 2013
ORA-16014 and All Archive destinations made inactive due to error 333 on windows
ISSUE:
The database hung and no external connections are allowed.
CAUSE:
The only member of one of the inactive online redo log groups is corrupted. The archiver cannot archive it to disk completely and stops partway through.
ERROR MESSAGES:
ERROR:
ORA-00257: archiver error. Connect internal only, until freed.
SQL> alter system switch logfile; # it hung
SQL> alter system archive log all;
alter system archive log all
*
ERROR at line 1:
ORA-16020: fewer destinations available than specified by
LOG_ARCHIVE_MIN_SUCCEED_DEST
SQL> alter system archive log current;
alter system archive log current
*
ERROR at line 1:
ORA-16038: log 1 sequence# 6484 cannot be archived
ORA-00333: redo log read error block count
ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\TEST\REDO01.LOG'
ALERT LOG MESSAGE:
ARC2: All Archive destinations made inactive due to error 333
Committing creation of archivelog 'D:\ORACLE\ORADATA\TEST\ARCHIVE\TEST_ARC0000047371_07852571900001.ARC' (error 333)
SOLUTION:
Clear the corrupted log group and take a full backup; older backups cannot bring the database forward past the corruption point.
Identify the problem log group:
Find the sequence# in the alert log and map it to the output of "select * from v$log" (see the sketch below).
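A minimal sketch of that mapping query (all columns are standard in v$log):
select group#, thread#, sequence#, status, archived from v$log;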
Command:
alter database clear unarchived logfile group 5;
RCA:
Need to run the command below to check whether the redo log is corrupted.
alter system dump logfile '<logfile_filename>';
Reference:
http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10739/onlineredo.htm#i1006568
http://www.dba-oracle.com/t_ora_16014_log_sequence_number_not_archived_no_available_destinations.htm
https://forums.oracle.com/forums/thread.jspa?threadID=839923&tstart=0
Wednesday, May 8, 2013
Bulk insert/update with Cursor in oracle PL/SQL
ISSUE:
The table is locked for a long time when inserting/updating millions of records and committing only at the end; the run time becomes very long if you commit after every single record change.
SOLUTION:
Oracle supports bulk insert/update: fetch a batch of records first, perform the insert/update on the whole batch, then commit.
example:
CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
IS
TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
l_data ARRAY;
CURSOR c IS SELECT * FROM all_objects;
BEGIN
OPEN c;
LOOP
-- fetch up to p_array_size rows at a time
FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
-- bulk-insert the whole batch in a single operation
FORALL i IN 1..l_data.COUNT
INSERT INTO t1 VALUES l_data(i);
COMMIT; -- commit per batch rather than per row or only once at the end
EXIT WHEN c%NOTFOUND; -- true after the last, possibly partial, batch has been processed
END LOOP;
CLOSE c;
END test_proc;
/
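A minimal usage sketch (the batch size is only an example, and t1 must already exist with the same structure as all_objects):
exec test_proc(p_array_size => 500);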
reference:
http://www.akadia.com/services/ora_bulk_insert.html
Search text stored in long column through like operation in Oracle PL/SQL
ISSUE:
Text stored in a LONG column cannot be used directly in a LIKE clause of a SQL statement.
WORKAROUND:
use an anonymous PL/SQL block with a cursor to do the LIKE search; the LONG value is fetched into a PL/SQL variable, where LIKE works.
Example:
check trigger bodies for the string "TEST"
declare
l_text long;
tname varchar2(100);
cursor c_tri is select trigger_body, trigger_name from all_triggers where owner='TEST';
begin
open c_tri;
loop
fetch c_tri into l_text, tname;
exit when c_tri%NOTFOUND;
if ( Upper(l_text) like '%TEST%' )
then
dbms_output.put_line(tname);
end if;
end loop;
close c_tri;
end;
/
reference:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:315118200346261192
http://www.dbanotes.com/database-development/introduction-to-oracle-11g-cursors/
Wednesday, April 24, 2013
Find (Search) in CMD (DOS command prompt)
The old way to search for files containing a string pattern in the DOS command prompt:
find /i "test" *.sql
find /i "test" *.sql
Tuesday, April 9, 2013
Spool file from oracle database for microsoft EXCEL
ISSUE:
Business users would like to view reports generated from the database in Microsoft Excel, so tab-separated text files with an .xls extension are generated automatically. However, some cells are displayed in an unanticipated format, such as 1.2E+12 (for 12345678912) or 123456789198000000000 (for 123456789198,12345567788).
CAUSE:
Excel auto-converts numeric-looking values from tab-separated text files.
SOLUTION:
I didn't find a way to turn off this auto-conversion so that the number is kept as the stored text.
WORKAROUND:
You can append something the end user can tolerate to defeat the auto-conversion; for example, add a comma ',' at the end of the column in the query (see the sketch below).
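A minimal sketch of that workaround in the spool query; account_no and accounts are hypothetical names:
select account_no || ',' as account_no from accounts;  -- the trailing comma stops Excel from converting the value to a number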
Reference:
http://office.microsoft.com/en-us/excel-help/three-ways-to-convert-numbers-to-text-HA001136619.aspx
http://office.microsoft.com/en-us/excel-help/format-numbers-as-text-HP001216512.aspx
Thursday, February 28, 2013
oracle resource manager
1. check current running resource plan
select * from v$rsrc_plan;
2. create a simple plan
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN(SIMPLE_PLAN => 'BEN_SIMPLE_PLAN',
CONSUMER_GROUP1 => 'BEN_GROUP1', GROUP1_PERCENT => 80,
CONSUMER_GROUP2 => 'BEN_GROUP2', GROUP2_PERCENT => 20);
END;
/
3. switch plans (INTERNAL_PLAN is equivalent to having no plan)
alter system set resource_manager_plan='BEN_SIMPLE_PLAN';
alter system set resource_manager_plan='DEFAULT_MAINTENANCE_PLAN';
alter system set resource_manager_plan='';
4. check available plans
select plan_id, plan from dba_rsrc_plans;
5. check session consumer groups
select username, sid, serial#, resource_consumer_group from v$session;
6. check session state under the current plan
select sid, CURRENT_CONSUMER_GROUP_ID, state from V$RSRC_SESSION_INFO;
7. check plan directives
SELECT group_or_subplan, mgmt_p1, mgmt_p2, mgmt_p3, mgmt_p4
FROM dba_rsrc_plan_directives WHERE plan = 'BEN_SIMPLE_PLAN';
8. change a session's consumer group
BEGIN
DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_SESS ('100', '64491',
'SYS_GROUP');
END;
/
9. change the consumer group of all sessions of a user (see the grant sketch below)
BEGIN
DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_USER ('BEN',
'BEN_GROUP1');
END;
/
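For the switch to take effect, the user generally also needs the privilege to use the target consumer group; a hedged sketch using the standard DBMS_RESOURCE_MANAGER_PRIVS package (user and group names follow the examples above):
BEGIN
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP (
grantee_name => 'BEN',
consumer_group => 'BEN_GROUP1',
grant_option => FALSE);
END;
/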
10. resource manager views, each listed with its description:
DBA_RSRC_CONSUMER_GROUP_PRIVS
USER_RSRC_CONSUMER_GROUP_PRIVS
DBA view lists all resource consumer groups and the users and roles to which they have been granted. USER view lists all resource consumer groups granted to the user.
DBA_RSRC_CONSUMER_GROUPS
Lists all resource consumer groups that exist in the database.
DBA_RSRC_MANAGER_SYSTEM_PRIVS
USER_RSRC_MANAGER_SYSTEM_PRIVS
DBA view lists all users and roles that have been granted Resource Manager system privileges. USER view lists all the users that are granted system privileges for the DBMS_RESOURCE_MANAGER package.
DBA_RSRC_PLAN_DIRECTIVES
Lists all resource plan directives that exist in the database.
DBA_RSRC_PLANS
Lists all resource plans that exist in the database.
DBA_RSRC_GROUP_MAPPINGS
Lists all of the various mapping pairs for all of the session attributes.
DBA_RSRC_MAPPING_PRIORITY
Lists the current mapping priority of each attribute.
DBA_HIST_RSRC_PLAN
Displays historical information about resource plan activation. This view contains AWR snapshots of V$RSRC_PLAN_HISTORY.
DBA_HIST_RSRC_CONSUMER_GROUP
Displays historical statistical information about consumer groups. This view contains AWR snapshots of V$RSRC_CONS_GROUP_HISTORY.
DBA_USERS
USER_USERS
DBA view contains information about all users of the database. It contains the initial resource consumer group for each user. USER view contains information about the current user. It contains the current user's initial resource consumer group.
V$RSRC_CONS_GROUP_HISTORY
For each entry in the view V$RSRC_PLAN_HISTORY, contains an entry for each consumer group in the plan showing the cumulative statistics for the consumer group.
V$RSRC_CONSUMER_GROUP
Displays information about active resource consumer groups. This view can be used for tuning.
V$RSRCMGRMETRIC
Displays a history of resources consumed and cumulative CPU wait time (due to resource management) per consumer group for the past minute.
V$RSRCMGRMETRIC_HISTORY
Displays a history of resources consumed and cumulative CPU wait time (due to resource management) per consumer group for the past hour on a minute-by-minute basis. If a new resource plan is enabled, the history is cleared.
V$RSRC_PLAN
Displays the names of all currently active resource plans.
V$RSRC_PLAN_HISTORY
Shows when Resource Manager plans were enabled or disabled on the instance. It helps you understand how resources were shared among the consumer groups over time.
V$RSRC_SESSION_INFO
Displays Resource Manager statistics for each session. Shows how the session has been affected by the Resource Manager. Can be used for tuning.
V$SESSION
Lists session information for each current session. Specifically, lists the name of the resource consumer group of each current session.
Reference:
http://docs.oracle.com/cd/E11882_01/server.112/e25494/dbrm013.htm#CHDJAJHE
http://docs.oracle.com/cd/E11882_01/server.112/e25494/dbrm003.htm
http://docs.oracle.com/cd/B19306_01/server.102/b14231/dbrm.htm
Friday, February 22, 2013
NirCmd: turn off monitor in windows by shortcut
ISSUE:
The shortcut to lock the screen is pretty convenient. A shortcut to turn off the monitor would be nice to save power.
SOLUTION:
1. download freeware NirCmd from http://www.nirsoft.net/utils/nircmd.html
2. create a shortcut for nircmd
3. in the shortcut properties, set the target command to "nircmd monitor off" and the shortcut key to "Ctrl + Alt + M"
4. press "Ctrl + Alt + M" to test it.
Reference:
http://www.nirsoft.net/utils/nircmd.html
Monday, February 18, 2013
Oracle Account Locked by multiple failed login
ISSUE:
An account was locked because of multiple failed logins. We need to find out where those failed logins came from.
SOLUTION:
If auditing is turned on for sessions, the failed login information can be extracted from the audit trail.
EXAMPLE:
Auditing is turned on for sessions and saved to XML files outside the database.
Go to audit directory $ORACLE_BASE/admin/SID/adump
ls -ltr | tail -10 | awk '{print $9}' | xargs grep -i SCHEMA_NAME | grep "<Returncode>1017</Returncode>"
ATTN: return code 1017 -- invalid username/password
return code 28000 -- account is locked
return code 0 -- login successful
action 100/101 -- logon
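If the audit trail were written to the database instead of XML files (AUDIT_TRAIL=DB), a hedged equivalent query would be:
select username, userhost, terminal, timestamp, returncode
from dba_audit_session
where username = 'SCHEMA_NAME' and returncode = 1017
order by timestamp;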
Thursday, February 14, 2013
microsoft msconfig
msconfig:
The System Configuration utility can be used to configure services and startup programs.
mstsc /console:
remote desktop login command.
Wednesday, February 13, 2013
enq: TM - contention
PROBLEM:
A user session has been running for a long time without returning.
CAUSE:
The running session is waiting on the event "enq: TM - contention"; the table is locked by another session.
SOLUTION:
1. Log the blocking session out if its task is completed.
2. create an index on the foreign key column if the lock is caused by an unindexed foreign key on the child table (see the query sketch below).
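A minimal sketch to spot the blocker from the waiting side (blocking_session and event are standard v$session columns in 10g and later):
select sid, serial#, username, event, blocking_session
from v$session
where event like 'enq: TM%';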
reference:
http://www.confio.com/logicalread/solving-oracle-enq-tm-contention-wait-events/
Wednesday, February 6, 2013
Drop all objects of a schema
Purpose:
log in as the schema owner and clean up all objects.
SQL:
set pagesize 0
spool ydropobj.sql
select 'drop table ' || table_name || ' cascade constraints purge;' from user_tables
;
spool off
--@ydropobj.sql
spool ydropobj.sql
select 'drop ' || object_type ||' ' || object_name || ' ;' from user_objects
;
spool off
--@ydropobj.sql
set pagesize 9999
select count(*) from user_objects;
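Dropped tables can linger in the recycle bin as BIN$ objects and keep the count above zero; a hedged final step, assuming 10g or later with the recycle bin enabled:
purge recyclebin;
select count(*) from user_objects;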
ORA-12520: TNS:listener could not find available handler for requested type of server
ISSUE:
Cannot connect to the database through the listener even though tnsping works fine.
SYMPTOM:
lsnrctl services
Service "TEST" has 1 instance(s).
Instance "TEST", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:704169 refused:0 state:blocked
LOCAL SERVER
SOLUTION:
1. reload the listener
lsnrctl reload
2. register the database instance with the listener
sqlplus / as sysdba
alter system register;
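The blocked DEDICATED handler state often means the instance has hit its PROCESSES (or SESSIONS) limit; a hedged check before and after the fix:
show parameter processes
select count(*) from v$process;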
reference:
http://www.orafaq.com/forum/t/60019/2/
https://forums.oracle.com/forums/thread.jspa?threadID=722511
ORA-39726: unsupported add/drop column operation on compressed tables
ISSUE:
ORA-39726 is hit when adding a column with a default value to a basic compressed table.
CAUSE:
A basic compressed table does not allow some operations that are allowed on an uncompressed table.
FAILED STATEMENT:
alter table yt add new_column number(3) default 0;
WORKAROUND:
Case 1: old records do not need to have the default value:
alter table yt add new_column number(3);
alter table yt modify new_column default 0;
Case 2: old records need to have the default value;
this has exactly the same effect on a compressed table as on an uncompressed table:
alter table yt add new_column number(3) default 0 not null;
alter table yt modify new_column null;
DROP A COLUMN FROM A COMPRESSED TABLE:
alter table yt_compress set unused column new_column;
If you want to get rid of the data and reclaim the space, do a table move, and then:
alter table yt_compress drop unused columns;
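A minimal sketch of that move step (the move rewrites the segment, so any indexes on the table become unusable and need a rebuild; the index name is a placeholder):
alter table yt_compress move;
-- alter index <index_name> rebuild;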
Example:
test@TEST.test> create table yt_compress compress as select owner, table_name from dba_tables where owner='SYS';
Table created.
test@TEST.test> create table yt_uncompress as select owner, table_name from dba_tables where owner='SYS';
Table created.
test@TEST.test> select count(*) from yt_compress;
COUNT(*)
----------
985
test@TEST.test> select count(*) from yt_uncompress;
COUNT(*)
----------
985
test@TEST.test> alter table yt_compress add new_column number(3) default 0;
alter table yt_compress add new_column number(3) default 0
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables
test@TEST.test> alter table yt_uncompress add new_column number(3) default 0;
Table altered.
test@TEST.test> select count(*) from yt_uncompress where new_column is null;
COUNT(*)
----------
0
test@TEST.test> select count(*) from yt_uncompress where new_column =0;
COUNT(*)
----------
985
test@TEST.test> alter table yt_compress add new_column number(3);
Table altered.
test@TEST.test> alter table yt_compress modify new_column default 0;
Table altered.
test@TEST.test> select count(*) from yt_compress where new_column =0;
COUNT(*)
----------
0
test@TEST.test> select count(*) from yt_compress where new_column is null;
COUNT(*)
----------
985
test@TEST.test> alter table yt_compress add new_column_1 number(3) default 0 not null;
Table altered.
test@TEST.test> alter table yt_compress modify new_column_1 null;
Table altered.
test@TEST.test> select count(*) from yt_compress where new_column is null;
COUNT(*)
----------
985
test@TEST.test> select count(*) from yt_compress where new_column_1 is null;
COUNT(*)
----------
0
test@TEST.test> select count(*) from yt_compress where new_column_1 =0;
COUNT(*)
----------
985
test@TEST.test> alter table yt_compress drop column new_column;
alter table yt_compress drop column new_column
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables
test@TEST.test> alter table yt_compress set unused column new_column;
Table altered.
test@TEST.test> desc yt_compress
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
OWNER NOT NULL VARCHAR2(30)
TABLE_NAME NOT NULL VARCHAR2(30)
NEW_COLUMN_1 NUMBER(3)
test@TEST.test>
reference:
http://www.dba-oracle.com/oracle11g/sf_Oracle_11g_Data_Compression_Tips_for_the_DBA.html
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:69076630635645
Monday, January 28, 2013
JOINs in ORACLE
WHAT IS JOIN:
A JOIN combines two tables into one result set in a relational database.
CONDITION CLAUSE:
1. using clause:
using (department_ID)
using ( department_ID, employee_ID)
the column names listed in the parentheses of the USING clause must be the same in both tables
2. on clause:
on (a.department_ID = b.department_ID)
on (a.department_ID = b.department_ID and a.employee_ID = b.employee_ID)
on (a.department_ID = b.department_ID and a.employee_ID > b.employee_ID)
on (a.department_ID = b.department_ID and a.employee_ID is not NULL)
on (a.department_ID = b.department_ID and a.employee_ID = 12345)
the column names are not required to be the same in the two joined tables, and conditions other than equality can be used in the ON clause.
3. where clause
where a.department_ID = b.department_ID
where a.department_ID = b.department_ID and a.employee_ID = b.employee_ID
The WHERE clause is the implicit condition clause for a join. It restricts the rows of the final joined result; however, its conditions can be pushed into the join operation to restrict rows early and save time and space.
JOIN TYPE in Oracle
1. JOIN
this is the general join operation; it must be used with a USING or ON clause.
2. INNER JOIN
an INNER JOIN must be used with a USING or ON clause.
3. NATURAL JOIN
this join does not need a USING clause and cannot have an ON clause; it uses the common column names and a default equality condition.
4. NATURAL LEFT JOIN
the same as NATURAL JOIN except that it also displays unmatched rows from the left-hand table.
5. NATURAL RIGHT JOIN
the same as NATURAL JOIN except that it also displays unmatched rows from the right-hand table.
6. NATURAL FULL JOIN
the same as NATURAL JOIN except that it also displays unmatched rows from both tables.
7. OUTER JOIN
an OUTER JOIN must be used with a USING clause and acts like a NATURAL JOIN.
yli@TEST.corvus> select * from department outer join department_head using (department_ID);
DEPARTMENT_ID DEPARTMENT_NAM EMPLOYEE_ID FIRST_NAME LAST_NAME
------------- -------------- ----------- --------------- ---------------
1 SALES 2 Peter Bank
2 PRODUCT 3 Scott Levy
4 CORP SERVICE 1 James Smith
select * from department d outer join department_head dh on (d.department_id =dh.department_id );
select * from department d outer join department_head dh on (d.department_id =dh.department_id )
*
ERROR at line 1:
ORA-00933: SQL command not properly ended
8. FULL OUTER JOIN (FULL JOIN)
like an INNER JOIN except that it also displays unmatched rows from both tables.
9. LEFT OUTER JOIN (LEFT JOIN)
like an INNER JOIN except that it also displays unmatched rows from the left-hand table.
10. RIGHT OUTER JOIN (RIGHT JOIN)
like an INNER JOIN except that it also displays unmatched rows from the right-hand table.
11. CROSS JOIN
this is the Cartesian join; it cannot have a USING or ON clause. A worked example of the explicit join forms follows below.
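A minimal sketch contrasting explicit INNER and LEFT OUTER joins on the department/department_head tables used in the output above:
select d.department_id, d.department_name, dh.first_name, dh.last_name
from department d
inner join department_head dh on (d.department_id = dh.department_id);
select d.department_id, d.department_name, dh.first_name, dh.last_name
from department d
left outer join department_head dh on (d.department_id = dh.department_id);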
THOUGHTS:
1. Write SQL statements with explicit join conditions.
2. Avoid the bare JOIN and OUTER JOIN forms, because their results depend on implicit conditions and can be unpredictable.
reference:
http://en.wikipedia.org/wiki/Join_%28SQL%29
http://docs.oracle.com/cd/B28359_01/server.111/b28286/queries006.htm
http://psoug.org/definition/JOIN.htm
http://tedytirta.com/oracle/to-using-natural-join-in-oracle/
Thursday, January 24, 2013
Merge Statement on Oracle database
ISSUE:
A MERGE statement failed with "ORA-38104: Columns referenced in the ON Clause cannot be updated" when updating a common column of the target and source tables.
CAUSE:
The updated column is included in the ON clause of the merge condition.
SOLUTION:
Remove the ON clause condition that includes the updated column and move that condition into a WHERE clause appended to the UPDATE part.
EXAMPLE:
Failed statement:
MERGE INTO TEST1 target USING
(SELECT * FROM TEST2 WHERE created = inserted ) source
ON (target.ID = source.Id AND target.secondId is null AND target.created = source.created)
WHEN MATCHED THEN
UPDATE SET target.secondId = 'secondid value';
ORA-38104: Columns referenced in the ON Clause cannot be updated: "TARGET"."SECONDID"
Succeeded statement:
MERGE INTO TEST1 target USING
(SELECT * FROM TEST2 WHERE created = inserted ) source
ON (target.ID = source.Id AND target.created = source.created)
WHEN MATCHED THEN
UPDATE SET target.secondId = 'secondid value' where target.secondId is null ;
MERGE STATEMENT SYNTAX (a generic template; note that Oracle's MERGE does not support the WHEN NOT MATCHED BY SOURCE branch):
SQL> MERGE
INTO dbo.TargetTable tgt -- Target Table
USING dbo.SourceTable src -- Source Table
ON tgt.ID = src.ID -- Main comparison criteria
WHEN MATCHED -- When ID's exist in both tables
THEN
-- DO SOMETHING
WHEN NOT MATCHED -- When ID's from Source do not exist in Target
THEN
-- DO SOMETHING
WHEN NOT MATCHED BY SOURCE -- When ID's from Target do not exist in Source
THEN
-- DO SOMETHING
REFERENCE:
http://www.c-sharpcorner.com/UploadFile/0e9eba/oracle-merge-statement-%E2%80%93-quick-catch-up/
http://stackoverflow.com/questions/10600120/oracle-merge-statement-with-conditional-insert
http://www.idevelopment.info/data/Oracle/DBA_tips/SQL/SQL_14.shtml
http://psoug.org/reference/merge.html
http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9016.htm