Wednesday, March 28, 2012

s3cmd configure

s3cmd is an open-source command-line tool that provides an FTP-like interface to Amazon S3 cloud storage.

1. download
http://s3tools.org/s3cmd

2. installation
rpm -i s3cmd-<version>.rpm

3. configure
s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key:
Secret Key:

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]: Yes

New settings:
Access Key:
Secret Key:
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: True
HTTP Proxy server name:
HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] Y
Configuration saved to '/home/test/.s3cfg'

4. make a bucket
s3cmd mb s3://test

5. list a bucket
s3cmd ls s3://test

6. put a file
s3cmd put test.txt s3://test

7. get a file
s3cmd get s3://test/test.txt

8. delete a file
s3cmd del s3://test/test.txt

9. remove a bucket
s3cmd rb s3://test
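
Putting the steps together, a typical round trip looks like this (the bucket name test is only an example; bucket names are global across S3, so yours must be unique):

s3cmd mb s3://test                            # create the bucket
s3cmd put test.txt s3://test                  # upload a file
s3cmd ls s3://test                            # verify it is there
s3cmd get s3://test/test.txt test-copy.txt    # download to a new local name
s3cmd del s3://test/test.txt                  # delete the object
s3cmd rb s3://test                            # remove the now-empty bucket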

reference:
http://s3tools.org/s3cmd

Tuesday, March 27, 2012

unaccessible to integration service -- informatica powercenter 9.1.3

Issue: When running a workflow task, it gives the error "unaccessible to integration service". However, the task ran fine a week ago and no change was made to the server.

Cause: the port the Integration Service is running on is not open on the server.

Check the Integration Service port:
netstat -ap | grep pmserv

Root cause: the port range configured on the node is 6013-6113, while the ports opened on the server are 6001-6015, per the installation guide. When an Integration Service starts, it randomly picks a port from the node's port range. Since two Integration Services are running, they took 6014 and 6016 (it is unclear why 6015 was not used). The Integration Service we mainly use may therefore land on 6014 or 6016 depending on how the ports were picked at startup; if it lands on 6016, which is outside the opened range, we hit the problem. Nothing changed on the server; the services had merely been restarted.

You can check the Repository Service port the same way:
netstat -ap | grep pmrepagent
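
To see just the listening ports for both services at once, a sketch (process names taken from the commands above):

netstat -tlnp 2>/dev/null | egrep 'pmserv|pmrepagent'   # listening TCP ports with owning process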

Monday, March 26, 2012

ORA-18008: cannot find OUTLN schema

If you dropped the OUTLN schema, normal users cannot log in, failing with the error below:

ERROR:
ORA-18008: cannot find OUTLN schema


Error accessing PRODUCT_USER_PROFILE
Warning: Product user profile information not loaded!
You may need to run PUPBLD.SQL as SYSTEM
ERROR:
ORA-18008: cannot find OUTLN schema


Error accessing package DBMS_APPLICATION_INFO
SP2-0575: Use of Oracle SQL feature not in SQL92 Entry Level.


There are two notes on Metalink for this issue:

855104.1 - Unable To Start The Database With OUTLN Schema Dropped ORA-01092 & ORA-18008
790116.1 - Recreating Dropped Outln Schema Errors With Ora-00604 And Ora-18008


There are two options:
1. restore the database to a point in time before the schema was dropped
2. work around it by recreating OUTLN (a SQL*Plus sketch follows this list):
2.1 alter system set replication_dependency_tracking=false scope=spfile;
2.2 shutdown immediate
2.3 startup upgrade
2.4 run $ORACLE_HOME/rdbms/admin/doptim.bsq
2.5 alter system set replication_dependency_tracking=true scope=spfile;
2.6 shutdown immediate
2.7 startup
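
As a sketch, the whole workaround in one SQL*Plus session (assuming SYSDBA can still connect and the instance uses an spfile):

sqlplus / as sysdba

alter system set replication_dependency_tracking=false scope=spfile;
shutdown immediate
startup upgrade
@?/rdbms/admin/doptim.bsq
-- doptim.bsq recreates the OUTLN schema; the errors listed below may appear here
alter system set replication_dependency_tracking=true scope=spfile;
shutdown immediate
startup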


You might get errors when running the recreation script. The ORA-00910 errors appear to come from the "M_IDEN" size placeholders in the .bsq file, which the server normally expands but SQL*Plus does not:

create index i_obj#_intcol# on cluster c_obj#_intcol#
*
ERROR at line 1:
ORA-02033: a cluster index for this cluster already exists

sname varchar2("M_IDEN") not null, /* Name of set */
*
ERROR at line 2:
ORA-00910: specified length too long for its datatype

statistics raw("M_CSIZ"), /* uninterpreted statistics */
*
ERROR at line 7:
ORA-00910: specified length too long for its datatype

ol_name varchar2("M_IDEN"), /* named is potentially generated */
*
ERROR at line 3:
ORA-00910: specified length too long for its datatype


create unique index outln.ol$name on outln.ol$(ol_name)
*
ERROR at line 1:
ORA-00942: table or view does not exist


create unique index outln.ol$signature on outln.ol$(signature,category)
*
ERROR at line 1:
ORA-00942: table or view does not exist


ol_name varchar2("M_IDEN"), /* outline name */
*
ERROR at line 3:
ORA-00910: specified length too long for its datatype


create unique index outln.ol$hnt_num on outln.ol$hints(ol_name, hint#)
*
ERROR at line 1:
ORA-00942: table or view does not exist


ol_name varchar2("M_IDEN"), /* outline name */
*
ERROR at line 3:
ORA-00910: specified length too long for its datatype


create index outln.ol$node_ol_name on outln.ol$nodes(ol_name)
*
ERROR at line 1:
ORA-00942: table or view does not exist

sname varchar2("M_IDEN"),
*
ERROR at line 3:
ORA-00910: specified length too long for its datatype


pname varchar2("M_IDEN"),
*
ERROR at line 4:
ORA-00910: specified length too long for its datatype


reference:
https://forums.oracle.com/forums/thread.jspa?threadID=2257491

Wednesday, March 21, 2012

compile in oracle

$ORACLE_HOME/rdbms/admin/utlrp.sql
$ORACLE_HOME/rdbms/admin/utlprp.sql

The above scripts recompile all invalid objects in an Oracle database; utlrp.sql is a wrapper that calls utlprp.sql.
Both scripts must be run as the SYS user, or another user with SYSDBA, to work correctly.
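
A minimal sketch of a typical run, with a before-and-after check of invalid objects:

sqlplus / as sysdba

select owner, object_type, count(*)
from dba_objects
where status = 'INVALID'
group by owner, object_type;

@?/rdbms/admin/utlrp.sql

-- re-run the count above; anything still INVALID has a genuine compilation error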


reference:
http://www.oracle-base.com/articles/misc/RecompilingInvalidSchemaObjects.php

Tuesday, March 20, 2012

flashback oracle 11g

Flashback is a feature of the Oracle database which uses undo data to retrieve old versions of your data.

It is very useful for recovering from human errors such as accidental deletes.

1. You can get an old version of your data with a flashback query:

select * from tablename as of timestamp sysdate - 1/24/60*20;   -- data as of twenty minutes ago

2. You can restore the deleted rows with:

insert into tablename
select * from tablename as of timestamp sysdate -1/24/60*20;

commit;

3. You might get the error ORA-01466: unable to read data - table definition has changed.

The problem is that the timestamp predates the last DDL on the table. Changing 20 to 10 or 15 may get rid of the error.
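
If the table itself is intact, FLASHBACK TABLE (the subject of the first reference) rewinds the whole table in place; a sketch, noting that row movement must be enabled first:

alter table tablename enable row movement;
flashback table tablename to timestamp sysdate - 1/24/60*20;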


reference:
http://it.toolbox.com/blogs/surachart/rewinding-a-table-using-oracle-flashback-table-33812
http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_flashback.htm
http://snandan.blogspot.com/2007/09/flashback-query-error-ora-01466-unable.html

Friday, March 16, 2012

The requested operation could not be performed due to the following error: [LDAP_10003] LDAP group [{0}] in security domain [{1}] queried from LDAP

The LDAP server is Sun Java System Directory Server.

The above error was displayed when an LDAP security domain was set up in Informatica PowerCenter 9.1.0 HF3.

The cause is that ldap://hostname was used to identify the LDAP server. Even though the test connection succeeds with that value, synchronizing the security domain fails with the above error.

Changing ldap://hostname to just hostname fixes it.

Thursday, March 15, 2012

login failed due to ORA-06502

The errors below were displayed when using SQL*Plus to log in to the database.

SQL> conn informatica_abc_efgh@test
Enter password:
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 10


However, some users could log in from the same machine, and the affected user could log in locally on the database server and from certain other Windows laptops.

The cause was the implementation of an AFTER LOGON trigger. When you set up a logon trigger, be careful how you size its local variables; a sketch of the failure mode follows.
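
A minimal sketch of how such a trigger can break logins (this trigger body is hypothetical, not the actual one from the incident):

create or replace trigger logon_audit_trg
after logon on database
declare
  v_user varchar2(10);   -- buffer too small for long usernames
begin
  -- a 20-character name such as INFORMATICA_ABC_EFGH overflows the
  -- 10-byte buffer, raising ORA-06502 inside recursive SQL (ORA-00604)
  -- and blocking the login
  v_user := sys_context('userenv', 'session_user');
end;
/

Users with the ADMINISTER DATABASE TRIGGER privilege are exempt from logon trigger failures, which explains why some users could still log in.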

ldap

configuration file:
/etc/ldap.conf or /etc/nslcd.conf

check groups
getent group | grep "pattern"

Wednesday, March 14, 2012

merge statement in oracle

The MERGE statement is especially useful for copying data into reference tables, since a truncate-and-insert approach does not work for a reference table whose rows are referenced by data in child tables.

MERGE provides a clean way to manipulate data in a parent reference table; a small setup sketch and the statement follow.
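
As a setup sketch for the example below, the target table test1 just needs columns matching the MERGE; an empty copy is one way to create it:

create table test1 as
select object_id, status
from all_objects
where 1 = 0;   -- empty table with matching columns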

MERGE INTO test1 a
USING all_objects b
ON (a.object_id = b.object_id)
WHEN MATCHED THEN
UPDATE SET a.status = b.status
WHEN NOT MATCHED THEN
INSERT (object_id, status)
VALUES (b.object_id, b.status);


reference:
http://www.oracle-base.com/articles/10g/MergeEnhancements10g.php

Thursday, March 8, 2012

missing index after data pump import oracle

Missing indexes were found while validating imported schemas by querying the dba_objects view.

However, the index count from dba_indexes is the same for the original schema and the imported one. The difference is in LOB indexes: one database counts them as indexes in dba_objects and the other does not.

The only difference between the two databases is the edition: one is Standard Edition and the other is Enterprise Edition.
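
A sketch of the validation queries (run them in both databases and compare the counts):

select owner, count(*) from dba_objects
where object_type = 'INDEX'
group by owner;

select owner, count(*) from dba_indexes
group by owner;

-- the LOB indexes specifically:
select owner, index_name from dba_indexes
where index_type = 'LOB';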

Monday, March 5, 2012

SQL*Loader-605: Non-data dependent ORACLE error

This error might be caused by a full tablespace.

Check the tablespace and add more space; a sketch follows.
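
A sketch for checking free space and extending a tablespace (the tablespace name and datafile path below are placeholders, not from the incident):

select tablespace_name, round(sum(bytes)/1024/1024) as free_mb
from dba_free_space
group by tablespace_name;

alter tablespace users add datafile '/u01/oradata/ORCL/users02.dbf' size 1g;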

reference:
https://forums.oracle.com/forums/thread.jspa?threadID=1096491

Thursday, March 1, 2012

drop materialized view

When you drop a materialized view, you may get ORA-03113 or other errors.

Here is a workaround.

1. find the bad object
select object_name, object_id, object_type, status
from dba_objects
where object_name = '<mview_name>';

The materialized view shows as INVALID while its container table is VALID.

The problem is that there is still a summary remaining that has the same OBJECT_ID.
2. find the summary

select owner, summary_name, container_owner, last_refresh_date, contains_views, query
from dba_summaries;

3. drop the summary
drop summary <owner>.<summary_name>;

4. drop the mview

SQL> col owner for a22
SQL> col query for a24 wrapped
SQL> select owner,summary_name,container_owner,last_refresh_date,contains_views,query
  2  from dba_summaries;

no rows selected

SQL> alter system flush buffer_cache;

System altered.

SQL> alter system flush buffer_cache;

System altered.

SQL> alter system flush buffer_cache;

System altered.

SQL> alter system flush shared_pool;

System altered.

SQL> alter system flush shared_pool;

System altered.

SQL> drop materialized view <owner>.<mview_name>;

Materialized view dropped.

5. If step 4 fails, drop the materialized view's indexes first (the owner and table names below are from this incident):
SET serveroutput ON
SET echo ON

DECLARE
  -- build DROP INDEX statements for every index on the MV's container table
  CURSOR mv_indexes IS
    SELECT 'DROP INDEX ' || index_name AS stmt
    FROM user_indexes
    WHERE table_name = 'INSTRUMENT'
      AND table_owner = 'P1_OLAP_LEVELONEBANK';
BEGIN
  FOR ix IN mv_indexes LOOP
    dbms_output.put_line('Executing: ' || ix.stmt);
    EXECUTE IMMEDIATE ix.stmt;
  END LOOP;
END;
/

alter system flush buffer_cache;
alter system flush shared_pool;

Then drop the materialized view again.

clean up datapump jobs

Relevant views: dba_datapump_jobs, dba_datapump_sessions

1. check jobs in dba_datapump_jobs.

SQL> desc dba_datapump_jobs;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER_NAME                                         VARCHAR2(30)
 JOB_NAME                                           VARCHAR2(30)
 OPERATION                                          VARCHAR2(30)
 JOB_MODE                                           VARCHAR2(30)
 STATE                                              VARCHAR2(30)
 DEGREE                                             NUMBER
 ATTACHED_SESSIONS                                  NUMBER
 DATAPUMP_SESSIONS                                  NUMBER

2. find datapump job state

select owner_name, job_name, state from dba_datapump_jobs;

For cleanup candidates, the state should be NOT RUNNING.

3. You might otherwise kill jobs that are merely stopped.

Test a job first by attaching to it: expdp attach=<job_name> (or impdp for import jobs). If it is just a stopped job, it resumes automatically on attach, so only clean up jobs that are truly defunct.

For a defunct job, drop its master table, which has the same name as the job:

drop table <owner_name>.<job_name>;

4. Check that the job is gone from dba_datapump_jobs.
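
A sketch of the whole sequence (the owner and job name here are hypothetical examples of Data Pump's generated names):

select owner_name, job_name, state
from dba_datapump_jobs
where state = 'NOT RUNNING';

drop table system.sys_export_schema_01 purge;   -- purge skips the recycle bin

select owner_name, job_name, state from dba_datapump_jobs;   -- verify it is gone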


Interactive commands (from the expdp/impdp built-in help):
ADD_FILE Add dumpfile to dumpfile set.
ADD_FILE=dumpfile-name
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>
START_JOB Start/resume current job.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS=[interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.


reference:
http://hi.baidu.com/nixsql/blog/item/e5415336e0bd2987a61e12b3.html

ORA-03113: end-of-file on communication channel

Symptom: I can log in to the database, but CREATE TABLE fails with this error.
The same CREATE TABLE statement runs fine in another database.

Cause: the mounted filesystem where the Oracle software resides is full.

Solution: clean up the mounted filesystem.
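
A quick check, as a sketch (assumes ORACLE_HOME is set in the environment):

df -h "$ORACLE_HOME"    # free space on the filesystem holding the Oracle software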