A case where the STORAGE clause NEXT value changes on its own...

This is a table that is synced every day with the data from January 1, 2008 through yesterday.
The sync job runs every night at midnight.
The sync (copy) works by deleting all of the existing data and pulling the whole data set again.
Because the sync was taking a long time, and the STORAGE clause NEXT value of each partition of the FM_ARR_PCL table was set to 16K,
I changed them all to 5M. (In fact, I never set them to 16K in the first place...)
The change to 5M was done by dropping the table and recreating it.
After that, the job did get faster...
The strange thing is that
ever since changing the STORAGE clause NEXT value, the NEXT value keeps changing every day.
On the first day the NEXT value changed on three or four partitions,
but as the days go by, the number of partitions whose NEXT value changes keeps growing,
and the NEXT values themselves keep shrinking...
Nothing in the whole job runs an ALTER against the table.
No errors are raised either...
Why is this happening????
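(For reference, the per-partition NEXT values in question can be seen in DBA_TAB_PARTITIONS; a query along these lines, with the owner filter omitted, shows them:)
SELECT partition_name, next_extent
FROM dba_tab_partitions
WHERE table_name = 'FM_ARR_PCL'
ORDER BY partition_position;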
Below are the creation scripts for the tablespace and the partitioned table.
Oracle is version 8.1.7.4.0,
and the OS is AIX 5.1.
/* TABLESPACE Script */
CREATE TABLESPACE TS_DATA DATAFILE
'/DATA/df_ts_data_01.dbf' SIZE 30720M AUTOEXTEND OFF,
'/DATA/df_ts_data_02.dbf' SIZE 15360M AUTOEXTEND OFF
NOLOGGING
DEFAULT STORAGE (
INITIAL 10M
NEXT 10M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0 )
ONLINE
PERMANENT
EXTENT MANAGEMENT DICTIONARY;
/* Table Script */
CREATE TABLE FM_ARR_PCL (
PS_CD VARCHAR2(5) NOT NULL,
WM_NO VARCHAR2(10) NOT NULL,
CKCAT_DTL_CD VARCHAR2(5),
APV_WEEK INTEGER,
APV_DT VARCHAR2(8)
)
TABLESPACE TS_DATA
PCTUSED 99
PCTFREE 0
INITRANS 81
MAXTRANS 255
NOLOGGING
PARTITION BY RANGE (APV_WEEK)
(
PARTITION PT_ARR_01 VALUES LESS THAN (2)
NOLOGGING
TABLESPACE TS_DATA
PCTUSED 99
PCTFREE 0
INITRANS 81
MAXTRANS 255
STORAGE (
INITIAL 10M
NEXT 5M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 9
FREELIST GROUPS 9
BUFFER_POOL DEFAULT
),
PARTITION PT_ARR_02 VALUES LESS THAN (3)
NOLOGGING
TABLESPACE TS_DATA
PCTUSED 99
PCTFREE 0
INITRANS 81
MAXTRANS 255
STORAGE (
INITIAL 10M
NEXT 5M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 9
FREELIST GROUPS 9
BUFFER_POOL DEFAULT
),
. . . (The partition definitions continue in the same pattern; the table consists of 54 partitions in total, from 01 to MAX.)
PARTITION PT_ARR_53 VALUES LESS THAN (54)
NOLOGGING
TABLESPACE TS_DATA
PCTUSED 99
PCTFREE 0
INITRANS 81
MAXTRANS 255
STORAGE (
INITIAL 10M
NEXT 3224K /* <-- I changed this to 5M, but it has changed again on its own.. */
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 9
FREELIST GROUPS 9
BUFFER_POOL DEFAULT
),
PARTITION PT_ARR_MAX VALUES LESS THAN (MAXVALUE)
NOLOGGING
TABLESPACE TS_DATA
PCTUSED 99
PCTFREE 0
INITRANS 81
MAXTRANS 255
STORAGE (
INITIAL 10M
NEXT 5M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 9
FREELIST GROUPS 9
BUFFER_POOL DEFAULT
)
)
NOCACHE
NOPARALLEL;
Message was edited by: ifeelyou

Similar Messages

  • Change next clause (alter index)

    Hello,
We have an index created with too large a NEXT value and we want to decrease it.
Is there any way to do it other than a rebuild?
    Thanks in advance for your help.
    Regards,
    Carles

    Yes you can do with rebuild:
    ALTER INDEX indexname REBUILD storage(next 1m);
Note: if the index resides in an LMT with auto or uniform extent allocation, it will take the tablespace default next extent size rather than the specified one.
    Jaffar
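(To confirm the result afterwards, the stored NEXT value can be checked with a query along these lines; the index name is illustrative:)
SELECT index_name, tablespace_name, next_extent
FROM user_indexes
WHERE index_name = 'INDEXNAME';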

  • Error while altering a table in Oracle Portal

i have a table with a primary key. i realize that the primary key is not required, and when i get rid of the primary key thru Oracle Portal and say OK ... i am presented with the following error -
    Error:
    ORA-25150: ALTERING of extent parameters not
    permitted (WWV-11230)
    Failed to parse as PORTAL30 - alter table
    BPSITEST.JEN_TEST_PRIMARY_KEYS
    drop PRIMARY KEY
    modify(
    EMP_ID NUMBER(10),
    LAST_NAME VARCHAR2(10),
    FIRST_NAME VARCHAR2(10))
    PCTFREE 10
    PCTUSED 40
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    NEXT 256K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    ENABLE CONSTRAINT SYS_C003038 ENABLE CONSTRAINT
    SYS_C003039 ENABLE CONSTRAINT SYS_C003040
    (WWV-08300)
    Any Ideas .... anyone ... ??
    thanx
    null

ORA-00054: resource busy and acquire with NOWAIT specified
The table is currently in use by something else.
    Werner

  • How to create a primary key by 2 columns in sql??

    as title
    thx a lot.

    Chris,
Well you have two basic ways. One is to issue an alter table command and define a PRIMARY KEY (also called a PK). The other is to create a UNIQUE index. The PRIMARY KEY creates an index anyway, though I prefer to just use unique indexes as you can specify more options in this manner. A table can only have 1 PK, whereas you can have multiple UNIQUE indexes. I also do not combine the PK definition directly into the CREATE TABLE command, as normally one would keep a separate PK script, even if it contained only unique index commands and not PK alter table commands.
    Anyway, below are the two methods of making a PK/Unique object for your table. The table name used here is T1 ...
    Method #1 (Primary Key)
    ALTER TABLE T1 DROP PRIMARY KEY;
    ALTER TABLE T1 ADD PRIMARY KEY
    (Column_1,Column_2, ...)
USING INDEX STORAGE (
     NEXT 512K
     MINEXTENTS 1
     MAXEXTENTS UNLIMITED
     PCTINCREASE 0
);
    Method #2 (preferred) : Unique Index
    DROP INDEX PK_T1;
    CREATE UNIQUE INDEX PK_T1 ON T1
    (Column_1,Column_2, ...)
    TABLESPACE INDX
    PCTFREE 10
STORAGE (
     INITIAL 1M
     NEXT 512K
     MINEXTENTS 1
     MAXEXTENTS UNLIMITED
     PCTINCREASE 0
);
It's always best to define the storage clause for both the create table and its indexes.
    Hope this helps ya,
    Tyler

  • Abcdef

    to_date('03-23-2010','mm-dd-yyyy')
    to_date('2008-06-08','yyyy-mm-dd')
    DBMS_OUTPUT.PUT_LINE(' 4th Where clause: ' || WHERE_CLAUSE);
    HKey_Local Machine -> Software -> Microsoft -> MSLicensing
    topas
    Removing batch of Files in linux:
    =====================================
    find . -name "*.arc" -mtime +20 -exec rm -f {} \;
    find . -name "*.dbf" -mtime +60 -exec mv {} /backup/Arch_Bkp_02May11/ \;
    ALTER DATABASE
    SET STANDBY DATABASE TO MAXIMIZE {AVAILABILITY | PERFORMANCE | PROTECTION};
    ================================================================================
    Find top N records:
    ===================
    select * from (select ename from emp order by sal)
    where rownum <=n;
    Find top Nth record: (n=0 for 1st highest)
    =========================================
    select * from emp a
    where (n =
    (select count(distinct b.sal) from emp b
    where b.sal > a.sal));
    Query for Listing last n records from the table
    =================================================
    select * from (select * from emp order by rownum desc) where rownum<4
    HOW TO tablespace wise and file wise info
    ============================
    col file_name for a45
    col tablespace_name for a15
    set linesize 132
    select a.tablespace_name,a.file_name,a.AUTOEXTENSIBLE,----a.status,
    round(a.bytes/1024/1024,2) Total_MB,
    round(sum(b.bytes)/1024/1024,2) Free_MB,
    round((a.bytes/1024/1024 - sum(b.bytes)/1024/1024),2) Used_MB
    from dba_data_files a,dba_free_space b
    where a.file_id=b.file_id
    and a.tablespace_name=b.tablespace_name
    group by a.tablespace_name,b.file_id,a.file_name,a.bytes,a.AUTOEXTENSIBLE--,a.status
    order by tablespace_name;
    col tablespace_name for a15
    SELECT tablespace_name,ts_#,num_files,sum_free_mbytes,count_blocks,max_mbytes,
    sum_alloc_mbytes,DECODE(sum_alloc_mbytes,0,0,100 * sum_free_mbytes /sum_alloc_mbytes ) AS pct_free
    FROM (SELECT v.name AS tablespace_name,ts# AS ts_#,
    NVL(SUM(bytes)/1048576,0) AS sum_alloc_mbytes,
    NVL(COUNT(file_name),0) AS num_files
    FROM dba_data_files f,v$tablespace v
    WHERE v.name = f.tablespace_name (+)
    GROUP BY v.name,ts#),
    (SELECT v.name AS fs_ts_name,ts#,NVL(MAX(bytes)/1048576,0) AS max_mbytes,
    NVL(COUNT(BLOCKS) ,0) AS count_blocks,
    NVL(SUM(bytes)/1048576,0) AS sum_free_mbytes
    FROM dba_free_space f,v$tablespace v
    WHERE v.name = f.tablespace_name(+)
    GROUP BY v.name,ts#)
    WHERE tablespace_name = fs_ts_name
    ORDER BY tablespace_name;
    ==================================
    col file_name for a45
    col tablespace_name for a15
    set linesize 132
    select a.tablespace_name,a.file_name,a.AUTOEXTENSIBLE,----a.status,
    round(a.bytes/1024/1024,2) Total_MB,
    round(sum(b.bytes)/1024/1024,2) Free_MB,
    round((a.bytes/1024/1024 - sum(b.bytes)/1024/1024),2) Used_MB
    from dba_data_files a,dba_free_space b
    where a.file_id=b.file_id
    and a.tablespace_name=b.tablespace_name
    group by a.tablespace_name,b.file_id,a.file_name,a.bytes,a.AUTOEXTENSIBLE--,a.status
    order by file_name;
    =============================================================
    HOW TO FIND CHILD TABLES
    ===========================================
    col column_name for a30
    col owner for a10
    set linesize 132
    select --a.table_name parent_table,
    b.owner,
    b.table_name child_table
    , a.constraint_name , b.constraint_name
    from dba_constraints a ,dba_constraints b
    where a.owner='LEIQA20091118'
    and a.constraint_name = b.r_constraint_name
    --and b.constraint_type = 'R'
    and a.constraint_type IN ('P','U')
    and a.table_name =upper('&tabname');
    List foreign keys and referenced table and columns:
    ======================================================
    SELECT DECODE(c.status,'ENABLED','C','c') t,
    SUBSTR(c.constraint_name,1,31) relation,
    SUBSTR(cc.column_name,1,24) columnname,
    SUBSTR(p.table_name,1,20) tablename
    FROM user_cons_columns cc, user_constraints p,
    user_constraints c
    WHERE c.table_name = upper('&table_name')
    AND c.constraint_type = 'R'
    AND p.constraint_name = c.r_constraint_name
    AND cc.constraint_name = c.constraint_name
    AND cc.table_name = c.table_name
    UNION ALL
    SELECT DECODE(c.status,'ENABLED','P','p') t,
    SUBSTR(c.constraint_name,1,31) relation,
    SUBSTR(cc.column_name,1,24) columnname,
    SUBSTR(c.table_name,1,20) tablename
    FROM user_cons_columns cc, user_constraints p,
    user_constraints c
    WHERE p.table_name = upper('PERSON')
    AND p.constraint_type in ('P','U')
    AND c.r_constraint_name = p.constraint_name
    AND c.constraint_type = 'R'
    AND cc.constraint_name = c.constraint_name
    AND cc.table_name = c.table_name
    ORDER BY 1, 4, 2, 3
    List a child table's referential constraints and their associated parent table:
    ==============================================================
    SELECT t.owner CHILD_OWNER,
    t.table_name CHILD_TABLE,
    t.constraint_name FOREIGN_KEY_NAME,
    r.owner PARENT_OWNER,
    r.table_name PARENT_TABLE,
    r.constraint_name PARENT_CONSTRAINT
    FROM user_constraints t, user_constraints r
    WHERE t.r_constraint_name = r.constraint_name
    AND t.r_owner = r.owner
    AND t.constraint_type='R'
    AND t.table_name = <child_table_name>;
    parent tables:
    ================
    select constraint_name,constraint_type,r_constraint_name
    from dba_constraints
    where table_name ='TM_PAY_BILL'
    and constraint_type in ('R');
    select CONSTRAINT_NAME,TABLE_NAME,COLUMN_NAME from user_cons_columns where table_name='FS_FR_TERMINALLOCATION';
    select a.OWNER,a.TABLE_NAME,a.CONSTRAINT_NAME,a.CONSTRAINT_TYPE
    ,b.COLUMN_NAME,b.POSITION
    from dba_constraints a,dba_cons_columns b
    where a.CONSTRAINT_NAME=b.CONSTRAINT_NAME
    and a.TABLE_NAME=b.TABLE_NAME
    and a.table_name=upper('TM_GEN_INSTRUCTION')
    and a.constraint_type in ('P','U');
    select constraint_name,constraint_type,r_constraint_name
    from dba_constraints
    where table_name ='TM_PAY_BILL'
    and constraint_type in ('R');
    ===============================================
    HOW TO FIND INDEXES
    =====================================
    col column_name for a30
    col owner for a25
    select a.owner,a.index_name, --a.table_name,a.tablespace_name,
    b.column_name,b.column_position
    from dba_indexes a,dba_ind_columns b
    where a.owner='SCE'
    and a.index_name=b.index_name
    and a.table_name = upper('&tabname')
    order by a.index_name,b.column_position;
    col column_name for a40
    col index_owner for a15
    select index_owner,index_name,column_name,
    column_position from dba_ind_columns
    where table_owner= upper('VISILOGQA19') and table_name ='TBLTRANSACTIONGROUPMAIN';
    -- check for index on FK
    ===============================
    set linesize 121
    col status format a6
    col columns format a30 word_wrapped
    col table_name format a30 word_wrapped
    SELECT DECODE(b.table_name, NULL, 'Not Indexed', 'Indexed' ) STATUS, a.table_name, a.columns, b.columns from (
    SELECT SUBSTR(a.table_name,1,30) table_name,
    SUBSTR(a.constraint_name,1,30) constraint_name, MAX(DECODE(position, 1,
    SUBSTR(column_name,1,30),NULL)) || MAX(DECODE(position, 2,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position, 3,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position, 4,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position, 5,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position, 6,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position, 7,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position, 8,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position, 9,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position,10,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position,11,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position,12,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position,13,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position,14,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position,15,', '|| SUBSTR(column_name,1,30),NULL)) || max(DECODE(position,16,', '|| SUBSTR(column_name,1,30),NULL)) columns
    from user_cons_columns a, user_constraints b
    WHERE a.constraint_name = b.constraint_name
    AND constraint_type = 'R'
    GROUP BY SUBSTR(a.table_name,1,30), SUBSTR(a.constraint_name,1,30) ) a, (
    SELECT SUBSTR(table_name,1,30) table_name,
    SUBSTR(index_name,1,30) index_name, MAX(DECODE(column_position, 1,
    SUBSTR(column_name,1,30),NULL)) || MAX(DECODE(column_position, 2,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position, 3,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position, 4,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position, 5,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position, 6,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position, 7,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position, 8,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position, 9,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position,10,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position,11,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position,12,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position,13,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position,14,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position,15,', '||SUBSTR(column_name,1,30),NULL)) || max(DECODE(column_position,16,', '||SUBSTR(column_name,1,30),NULL)) columns
    from user_ind_columns group by SUBSTR(table_name,1,30), SUBSTR(index_name,1,30) ) b
    where a.table_name = b.table_name (+) and b.columns (+) like a.columns || '%';
    ==================================================
    HOW TO FIND unique keys
    ===========================
    col column_name for a30
    col owner for a10
    set linesize 132
    select a.owner , --a.table_name,
    a.constraint_name,a.constraint_type,
    b.column_name,b.position
    from dba_constraints a, dba_cons_columns b
    where a.table_name = upper('&tabname')
    and a.constraint_name = b.constraint_name
    and a.constraint_type in ('P','U')
    and a.owner=b.owner
    order by a.owner,a.constraint_name,b.position;
    ==================================
    HOW TO FIND ROWlocks
    ======================
    col object_name for a30
    col terminal for a20
    set linesize 1000
    col spid for a10
    col osuser for a15
    select to_char(logon_time,'DD-MON-YYYY HH24:MI:SS'),OSUSER,--owner,
    s.sid, s.serial#,p.spid,
    s.terminal,l.locked_mode,o.object_name,l.ORACLE_USERNAME --,o.object_type
    from v$session s, dba_objects o,v$locked_object l, V$process p
    where o.object_id=l.object_id
    and s.sid=l.session_id
    and s.paddr=p.addr
    order by logon_time;
    SELECT OWNER||'.'||OBJECT_NAME AS Object, OS_USER_NAME, ORACLE_USERNAME,
    PROGRAM, NVL(lockwait,'ACTIVE') AS Lockwait,DECODE(LOCKED_MODE, 2,
    'ROW SHARE', 3, 'ROW EXCLUSIVE', 4, 'SHARE', 5,'SHARE ROW EXCLUSIVE',
    6, 'EXCLUSIVE', 'UNKNOWN') AS Locked_mode, OBJECT_TYPE, SESSION_ID, SERIAL#, c.SID
    FROM SYS.V_$LOCKED_OBJECT A, SYS.ALL_OBJECTS B, SYS.V_$SESSION c
    WHERE A.OBJECT_ID = B.OBJECT_ID AND C.SID = A.SESSION_ID
    ORDER BY Object ASC, lockwait DESC;
    SELECT DECODE(request,0,'Holder: ','Waiter: ')||sid sess,
    id1, id2, lmode, request, type
    FROM V$LOCK
    WHERE (id1, id2, type) IN
    (SELECT id1, id2, type FROM V$LOCK WHERE request>0)
    ORDER BY id1, request;
    find locks
    =====================
    set linesize 1000
    SELECT --osuser,
    a.username,a.serial#,a.sid,--a.terminal,
    sql_text
    from v$session a, v$sqltext b, V$process p
    where a.sql_address =b.address
    and a.paddr = p.addr
    and p.spid = '&os_pid'
    order by address, piece;
    select sql_text
    from V$sqltext_with_newlines
    where address =
    (select prev_sql_addr
    from V$session
    where username = :uname and sid = :snum) ORDER BY piece
    set pagesize 50000
    set linesize 30000
    set long 500000
    set head off
    select s.username su,s.sid,s.serial#,substr(sa.sql_text,1,540) txt
    from v$process p,v$session s,v$sqlarea sa
    where p.addr=s.paddr
    and s.username is not null
    and s.sql_address=sa.address(+)
    and s.sql_hash_value=sa.hash_value(+)
    and spid=&SPID;
    privileges
    ===========
    select * from dba_sys_privs where grantee = 'SCE';
    select * from dba_role_privs where grantee = 'SCE'
    select * from dba_sys_privs where grantee in ('CONNECT','APPL_CONNECT');
    Check high_water_mark_statistics
    ===================================
    select * from DBA_HIGH_WATER_MARK_STATISTICS;
    Multiple Blocksizes:
    =========================
    alter system set db_16k_cache_size=64m;
    create tablespace index_ts datafile '/data1/index_ts01.dbf' size 10240m blocksize 16384;
    11g default profiles:
    ========================
    alter profile default limit password_lock_time unlimited;
    alter profile default limit password_life_time unlimited;
    alter profile default limit password_grace_time unlimited;
    logfile switch over:
    select GROUP#,THREAD#,SEQUENCE#,BYTES,MEMBERS,ARCHIVED,
    STATUS,to_char(FIRST_TIME,'DD-MON-YYYY HH24:MI:SS') switch_time
    from v$log;
    Temporary tablespace usage:
    ============================
    SELECT b.tablespace,
    ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
    a.sid||','||a.serial# SID_SERIAL,
    a.username,
    a.program
    FROM sys.v_$session a,
    sys.v_$sort_usage b,
    sys.v_$parameter p
    WHERE p.name = 'db_block_size'
    AND a.saddr = b.session_addr
    ORDER BY b.tablespace, b.blocks;
    SELECT A2.TABLESPACE, A2.SEGFILE#, A2.SEGBLK#, A2.BLOCKS,
    A1.SID, A1.SERIAL#, A1.USERNAME, A1.OSUSER, A1.STATUS
    FROM V$SESSION A1,V$SORT_USAGE A2 WHERE A1.SADDR = A2.SESSION_ADDR;
    ========================================
    ALTER SYSTEM KILL SESSION 'SID,SERIAL#';
    Inactive sessions killing:
    SELECT 'ALTER SYSTEM KILL SESSION ' || '''' || SID || ',' ||
    serial# || '''' || ' immediate;' text
    FROM v$session
    WHERE status = 'INACTIVE'
    AND last_call_et > 86400
    AND username IN (SELECT username FROM DBA_USERS WHERE user_id>56);
    Procedure:
    CREATE OR REPLACE PROCEDURE Inactive_Session_Cleanup AS
    BEGIN
    FOR rec_session IN (SELECT 'ALTER SYSTEM KILL SESSION ' || '''' || SID || ',' ||
    serial# || '''' || ' immediate' text
    FROM v$session
    WHERE status = 'INACTIVE'
    AND last_call_et > 43200
    AND username IN (SELECT username FROM DBA_USERS WHERE user_id>60)) LOOP
    EXECUTE IMMEDIATE rec_session.text;
    END LOOP;
    END Inactive_Session_Cleanup;
    sequence using plsql
    =========================
    Declare
    v_next NUMBER;
    script varchar2(5000);
    BEGIN
    SELECT (MAX(et.dcs_code) + 1) INTO v_next FROM et_document_request et;
    script:= 'CREATE SEQUENCE et_document_request_seq
    MINVALUE 1 MAXVALUE 999999999999999999999999999 START WITH '||
         v_next || ' INCREMENT BY 1 CACHE 20';
    execute immediate script;
    end;
    ===========================
    Terminal wise session
    select TERMINAL,count(*) from v$session
    group by TERMINAL;
    total sessions
    select count(*) from v$session
    where TERMINAL not like '%UNKNOWN%'
    and TERMINAL is not null;
    HOW TO FIND DUPLICATE TOKEN NUMBERS
    ===========================================
    select count(distinct a.token_number) dup
    from tm_pen_bill a,tm_pen_bill b
    where a.token_number = b.token_number
    and a.bill_number <> b.bill_number
    and a.token_number is not null;
    when Block Corruption occurs:
    select * from DBA_EXTENTS
    WHERE file_id = '13' AND block_id BETWEEN '44157' and '50649';
    select BLOCK_ID,SEGMENT_NAME,BLOCKS from dba_extents where FILE_ID='14'
    and BLOCK_ID like '%171%';
    select BLOCK_ID,SEGMENT_NAME,BLOCKS from dba_extents where FILE_ID='14'
    and SEGMENT_NAME = 'TEMP_TD_PAY_ALLOTMENT_NMC';
    DBVERIFY:
    dbv blocksize=8192 file=users01.dbf log=dbv_users01.log
    ==============================================================
    DBMS_REPAIR:(Block Corruption)
    exec dbms_repair.admin_tables(table_name=>'REPAIR_TABLE',table_type=>dbms_repair.repair_table,action=>dbms_repair.create_action,tablespace=>'USERS');
    variable v_corrupt_count number;
    exec dbms_repair.check_object('scott','emp',corrupt_count=>:v_corrupt_count);
    print v_corrupt_count;
    ==============================================================
    Password:
    select login,substr(utl_raw.cast_to_varchar2(utl_raw.cast_to_varchar2(password)),1,30) password
    from mm_gen_user where active_flag = 'Y' and user_id=64 and LOGIN='GOPAL' ;
    CHARACTERSET
    select * from NLS_DATABASE_PARAMETERS;
    SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET' ;
    select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    ==========================================================
    EXPLAIN PLAN TABLE QUERY
    ========================
    EXPLAIN PLAN SET STATEMENT_ID='5'
    FOR
    "DML STATEMENT"
    PLAN TABLE QUERY
    ===============================
    set linesize 1000
    set arraysize 1000
    col OBJECT_TYPE for a20
    col OPTIMIZER for a20
    col object_name for a30
    col options for a25
    select COST,OPERATION,OPTIONS,OBJECT_TYPE,
    OBJECT_NAME,OPTIMIZER
    --,ID,PARENT_ID,POSITION,CARDINALITY
    from plan_table
    where statement_id='&statement_id';
    Rman settings: disk formats
    %t represents a timestamp
    %s represents the backup set number
    %p represents the piece number
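A sketch of how these specifiers might appear in a channel configuration (the path is illustrative; %d is the database name):
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/%d_%t_%s_%p.bkp';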
    The dbms_workload_repository.create_snapshot procedure creates a manual snapshot in the AWR as seen in this example:
    EXEC dbms_workload_repository.create_snapshot;
Calculate the size of the space occupied by a table
    ========================================================
    select owner, table_name,
    NUM_ROWS,
    BLOCKS * AAA/1024/1024 "Size M",
    EMPTY_BLOCKS,
    LAST_ANALYZED
    from dba_tables
    where table_name = 'XXX';
    Finding statement/s which use lots of shared pool memory:
    ==========================================================
    SELECT substr(sql_text,1,40) "SQL", count(*) , sum(executions) "TotExecs"
    FROM v$sqlarea
    WHERE executions < 5
    GROUP BY substr(sql_text,1,40)
    HAVING count(*) > 30
    ORDER BY 2;
Check a table's size
=========================================
select sum(bytes)/(1024*1024) as "size (M)" from user_segments
where segment_name = upper('&table_name');
Check an index's size
=========================================
select sum(bytes)/(1024*1024) as "size (M)" from user_segments
where segment_name = upper('&index_name');
Monitor tablespace I/O ratio
====================================
select B.tablespace_name name, B.file_name "file", A.phyrds pyr,
A.phyblkrd pbr, A.phywrts pyw, A.phyblkwrt pbw
from v$filestat A, dba_data_files B
where A.file# = B.file_id
order by B.tablespace_name;
Monitor datafile I/O ratio
=====================================
select substr(C.file#,1,2) "#", substr(C.name,1,30) "Name",
C.status, C.bytes, D.phyrds, D.phywrts
from v$datafile C, v$filestat D
where C.file# = D.file#;
Monitor the SGA buffer cache hit ratio
=========================
select a.value + b.value "logical_reads", c.value "phys_reads",
round(100 * ((a.value + b.value) - c.value) / (a.value + b.value)) "BUFFER HIT RATIO"
from v$sysstat a, v$sysstat b, v$sysstat c
where a.statistic# = 38 and b.statistic# = 39
and c.statistic# = 40;
Monitor the SGA dictionary cache hit ratio
==================================================
select parameter, gets, getmisses, getmisses/(gets + getmisses)*100 "miss ratio",
(1 - (sum(getmisses)/(sum(gets) + sum(getmisses))))*100 "Hit ratio"
from v$rowcache
where gets + getmisses <> 0
group by parameter, gets, getmisses;
Monitor the SGA library (shared) cache reload ratio; it should be less than 1%
=============================================================
select sum(pins) "Total Pins", sum(reloads) "Total Reloads",
sum(reloads)/sum(pins)*100 libcache
from v$librarycache;
select sum(pinhits - reloads)/sum(pins) "hit ratio", sum(reloads)/sum(pins) "reload percent"
from v$librarycache;
Monitor the redo allocation/copy latch miss ratios; they should be less than 1%
=========================================================================
SELECT name, gets, misses, immediate_gets, immediate_misses,
Decode(gets, 0, 0, misses/gets*100) ratio1,
Decode(immediate_gets + immediate_misses, 0, 0,
immediate_misses/(immediate_gets + immediate_misses)*100) ratio2
FROM v$latch WHERE name IN ('redo allocation', 'redo copy');
Monitor the disk-to-memory sort ratio; it is best to keep it below 0.10, otherwise increase SORT_AREA_SIZE
    =============================================================================================================
    SELECT name, value FROM v$sysstat WHERE name IN ('sorts (memory)', 'sorts (disk)');
Monitor which SQL statements are currently being run, and by whom
===================================================================
SELECT osuser, username, sql_text from v$session a, v$sqltext b
where a.sql_address = b.address order by address, piece;
Monitor the dictionary and library caches
=====================================
SELECT (SUM(PINS - RELOADS)) / SUM(PINS) "LIB CACHE" FROM V$LIBRARYCACHE;
SELECT (SUM(GETS - GETMISSES - USAGE - FIXED)) / SUM(GETS) "ROW CACHE" FROM V$ROWCACHE;
SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING" FROM V$LIBRARYCACHE;
The latter divided by the former should be less than 1%, as close to 0% as possible.
SELECT SUM(GETS) "DICTIONARY GETS", SUM(GETMISSES) "DICTIONARY CACHE GET MISSES"
FROM V$ROWCACHE;
Find tables with a high degree of fragmentation (most extents)
=================================================
SELECT owner, segment_name table_name, COUNT(*) extents
FROM dba_segments WHERE owner NOT IN ('SYS', 'SYSTEM') GROUP BY owner, segment_name
HAVING COUNT(*) = (SELECT MAX(COUNT(*)) FROM dba_segments GROUP BY segment_name);
    =======================================================================
    Fragmentation:
    =================
    select table_name,round((blocks*8),2)||'kb' "size"
    from user_tables
    where table_name = 'BIG1';
    Actual Data:
    =============
    select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
    from user_tables
    where table_name = 'BIG1';
Create the base data dictionary views (8i example)
=======================================================
$ORACLE_HOME/rdbms/admin/catalog.sql
Create the audit data dictionary views (8i example)
======================================================
$ORACLE_HOME/rdbms/admin/cataudit.sql
Create the snapshot data dictionary views (8i example)
=====================================================
$ORACLE_HOME/rdbms/admin/catsnap.sql
Move a table / rebuild an index into another tablespace
=======================================
ALTER TABLE TABLE_NAME MOVE TABLESPACE TABLESPACE_NAME;
    ALTER INDEX INDEX_NAME REBUILD TABLESPACE TABLESPACE_NAME;
    How can I know the system's current SCN number?
    =================================================
    select max (ktuxescnw * power (2, 32) + ktuxescnb) from x$ktuxe;
Keep a small table in the KEEP buffer pool
    ======================================
    alter table xxx storage (buffer_pool keep);
    Check the permissions for each user
    ===================================
    SELECT * FROM DBA_SYS_PRIVS;
    =====================================================================
    Tablespace auto extend check:
    =================================
    col file_name for a50
    select FILE_NAME,TABLESPACE_NAME,AUTOEXTENSIBLE from dba_data_files
    order by TABLESPACE_NAME;
    COL SEGMENT_NAME FOR A30
    select SEGMENT_NAME,TABLESPACE_NAME,BYTES,EXTENTS,INITIAL_EXTENT,
    NEXT_EXTENT,MAX_EXTENTS,PCT_INCREASE
    from user_segments
    where segment_name in ('TD_PAY_CHEQUE_PREPARED','TM_PAY_BILL','TD_PAY_PAYORDER');
    select TABLESPACE_NAME,INITIAL_EXTENT,NEXT_EXTENT,MAX_EXTENTS,PCT_INCREASE
    from dba_tablespaces;
    alter tablespace temp default storage(next 5m maxextents 20480 pctincrease 0);
ALTER TABLE TD_PAY_CHEQUE_PREPARED
STORAGE (NEXT 10M MAXEXTENTS 20480 PCTINCREASE 0);
    Moving table from one tablespace to another
    ===============================================
    alter table KHAJANE.TEMP_TM_PAY_ALLOTMENT_NMC move tablespace khajane_ts;
    ==============================================
    for moving datafiles location:
    ========================================
    alter database rename file a to b;
    ======================================================================
    for logfile Clearence:
    select * from global_name;
    col member for a50
    set linesize 132
    set trimspool on
    select 'alter database clear logfile ' || '''' || member || '''' || ';'
    from v$logfile where status ='STALE';
    logfile switch over:
    select GROUP#,THREAD#,SEQUENCE#,BYTES,MEMBERS,ARCHIVED,
    STATUS,to_char(FIRST_TIME,'DD-MON-YYYY HH24:MI:SS') switch_time
    from v$log;

    Answered

  • Import of Folder with RON failed with Ora-25150

The import within RON of a dmp file created with export from RON failed straight away with the Oracle error message ORA-25150 on "alter table xt_sdd_files modify lob(contents blob) storage(next 3M pctincrease 50))". The import stops.
Is this a bug of an earlier version? I am using Designer 6i, RON 6.5.52.20, Configuration 4.0.12. The error even occurs if I export a folder where no row is exported from the table xt_sdd_files. Thanking you in advance for your help. Best regards, Udo

    Hello Udo,
    Yes, there was a bug in Designer 6i that caused such
    an error to occur. Basically, when you are trying to
    import a workarea that the user is not the owner of,
    the error occurs because the compilation at the end
    of an import requires that the owner should have the
    select access right. If the user doing the import is
    not the owner, you do not have the select access and
    so the error occurs.
    The latest download of Designer 6i available from here
    on OTN works OK (fixed a couple of releases ago).
    However, please note this error may occur for several
    other reasons not necessarily due to Designer. If you
    exported the file from an 8i database for example and
    then try to import it into a 9i database, you will get
    this error if the 9i tablespaces are locally managed,
but the 8i exporting tablespaces were dictionary
    managed.
    Hope this helps.
    Regards,
    Dominic
    Designer Product Management
    Oracle

  • Oracle date parameter query not working?

    http://stackoverflow.com/questions/14539489/oracle-date-parameter-query-not-working
Trying to run the below query, but it always fails even though the parameter values match. I'm thinking there is a precision issue for the :xRowVersion_prev parameter. I want to keep as much precision as possible.
    Delete
    from CONCURRENCYTESTITEMS
    where ITEMID = :xItemId
    and ROWVERSION = :xRowVersion_prev
    The Oracle Rowversion is a TimestampLTZ and so is the oracle parameter type.
    The same code & query works in Sql Server, but not Oracle.
    Public Function CreateConnection() As IDbConnection
    Dim sl As New SettingsLoader
    Dim cs As String = sl.ObtainConnectionString
    Dim cn As OracleConnection = New OracleConnection(cs)
    cn.Open()
    Return cn
    End Function
    Public Function CreateCommand(connection As IDbConnection) As IDbCommand
    Dim cmd As OracleCommand = DirectCast(connection.CreateCommand, OracleCommand)
    cmd.BindByName = True
    Return cmd
    End Function
    <TestMethod()>
    <TestCategory("Oracle")> _
    Public Sub Test_POC_Delete()
    Dim connection As IDbConnection = CreateConnection()
    Dim rowver As DateTime = DateTime.Now
    Dim id As Decimal
    Using cmd As IDbCommand = CreateCommand(connection)
    cmd.CommandText = "insert into CONCURRENCYTESTITEMS values(SEQ_CONCURRENCYTESTITEMS.nextval,'bla bla bla',:xRowVersion) returning ITEMID into :myOutputParameter"
    Dim p As OracleParameter = New OracleParameter
    p.Direction = ParameterDirection.ReturnValue
    p.DbType = DbType.Decimal
    p.ParameterName = "myOutputParameter"
    cmd.Parameters.Add(p)
    Dim v As OracleParameter = New OracleParameter
    v.Direction = ParameterDirection.Input
    v.OracleDbType = OracleDbType.TimeStampLTZ
    v.ParameterName = "xRowVersion"
    v.Value = rowver
    cmd.Parameters.Add(v)
    cmd.ExecuteNonQuery()
    id = CType(p.Value, Decimal)
    End Using
    Using cmd As IDbCommand = m_DBTypesFactory.CreateCommand(connection)
    cmd.CommandText = " Delete from CONCURRENCYTESTITEMS where ITEMID = :xItemId and ROWVERSION = :xRowVersion_prev"
    Dim p As OracleParameter = New OracleParameter
    p.Direction = ParameterDirection.Input
    p.DbType = DbType.Decimal
    p.ParameterName = "xItemId"
    p.Value = id
    cmd.Parameters.Add(p)
    Dim v As OracleParameter = New OracleParameter
    v.Direction = ParameterDirection.Input
    v.OracleDbType = OracleDbType.TimeStampLTZ
    v.ParameterName = "xRowVersion_prev"
    v.Value = rowver
    v.Precision = 6 '????
    cmd.Parameters.Add(v)
    Dim cnt As Integer = cmd.ExecuteNonQuery()
    If cnt = 0 Then Assert.Fail() 'should delete
    End Using
    connection.Close()
    End Sub
    Schema:
    -- ****** Object: Table SYSTEM.CONCURRENCYTESTITEMS Script Date: 1/26/2013 11:56:50 AM ******
    CREATE TABLE "CONCURRENCYTESTITEMS" (
    "ITEMID" NUMBER(19,0) NOT NULL,
    "NOTES" NCHAR(200) NOT NULL,
    "ROWVERSION" TIMESTAMP(6) WITH LOCAL TIME ZONE NOT NULL)
    STORAGE (
    NEXT 1048576 )
    Sequence:
    -- ****** Object: Sequence SYSTEM.SEQ_CONCURRENCYTESTITEMS Script Date: 1/26/2013 12:12:48 PM ******
    CREATE SEQUENCE "SEQ_CONCURRENCYTESTITEMS"
    START WITH 1
    CACHE 20
    MAXVALUE 9999999999999999999999999999

still not coming...
i have one table; each entry has only one fromdate and one todate
i am running the below in SQL and it is showing two rows. ok.
      select t1.U_frmdate,t1.U_todate  ,ISNULL(t2.firstName,'')+ ',' +ISNULL(t2.middleName ,'')+','+ISNULL(t2.lastName,'') AS NAME, T2.empID  AS EMPID, T2.U_emp AS Empticket,t2.U_PFAcc,t0.U_pf 
       from  [@PR_PRCSAL1] t0 inner join [@PR_OPRCSAL] t1
       on t0.DocEntry = t1.DocEntry
       inner join ohem t2
       on t2.empID = t0.U_empid  where  t0.U_empid between  '830' and  '850'  and t1.U_frmdate ='20160801'  and  t1.u_todate='20160830'
in command prompt
      select t1.U_frmdate,t1.U_todate  ,ISNULL(t2.firstName,'')+ ',' +ISNULL(t2.middleName ,'')+','+ISNULL(t2.lastName,'') AS NAME, T2.empID  AS EMPID, T2.U_emp AS Empticket,t2.U_PFAcc,t0.U_pf 
       from  [@PR_PRCSAL1] t0 inner join [@PR_OPRCSAL] t1
       on t0.DocEntry = t1.DocEntry
       inner join ohem t2
       on t2.empID = t0.U_empid  where  t0.U_empid between  {?FromEmid} and  {?ToEmid} and t1.U_frmdate ={?FDate} and  t1.u_todate={?TDate}
    still not showing any results..

  • ORA-1653: unable to extend table PERFSTAT.STATS

    Hi there,
    I know it's Friday and by the end of the week we normally are not that alert anymore.
    However now we have a very puzzling problem, one that leaves two DBA's very amazed.
    This morning our alert-log of a 9.2.0.8 database on AIX 5.3 showed:
ORA-1653: unable to extend table PERFSTAT.STATS in tablespace TOOLS
Easy, one would say. Extend the tablespace and you're done.
    However the tablespace is on autoextend, not even mentioned that it has 2.5Gb of free space.
    It is also "Locally Managed", with uniform extent size of 16Kb and manual "segment space management"
    The index of this table is in the same tablespace.
    The storage parameters are set to "unlimited" possibilities.
A manual
exec statspack.snap
results in the same error, whereas a
create table statstest as select * from stats$sqltext;
works fine. The source table mentioned here is the one which seems unable to extend due to the "tablespace restrictions"
    Some storage parameters:
    CREATE TABLE "PERFSTAT"."STATS$SQLTEXT" (
    "HASH_VALUE" NUMBER NOT NULL ENABLE,
    "TEXT_SUBSET" VARCHAR2 (31) NOT NULL ENABLE,
    "PIECE" NUMBER NOT NULL ENABLE,
    "SQL_TEXT" VARCHAR2 (64),
    "ADDRESS" RAW (8),
    "COMMAND_TYPE" NUMBER,
    "LAST_SNAP_ID" NUMBER,
    CONSTRAINT "STATS$SQLTEXT_PK" PRIMARY KEY
    ("HASH_VALUE", "TEXT_SUBSET", "PIECE
    USING INDEX
    PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE (
    INITIAL 1048576
    NEXT 1048576
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT
    ) TABLESPACE "TOOLS"
    ENABLE
    PCTFREE 5
    PCTUSED 40
    INITRANS 1
    MAXTRANS 255
    NOCOMPRESS
    LOGGING
    STORAGE (INITIAL 5242880
    NEXT 5242880
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT)
    TABLESPACE "TOOLS"Can this be some kind of Data Dictionairy corruption ??

    virendra.k wrote:
    The next extent clause in creation script says that it is required to have at least 1G of contiguous memory. But the satement fails which means that a chunk of this size cannot be allocated. The situation may arise due to fragmentation of tablespace. See metalink doc id [1020182.6|https://metalink2.oracle.com/metalink/plsql/f?p=130:14:9000433346754441541::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,1020182.6,1,0,1,helvetica] if the largest free chunk >= 1G. Other wise increase the size of tablespace. It may help you.
    I don't understand the result of 1G you calculated.
    I only see: NEXT 1048576 of the primary key, which is 1M and NEXT 5242880 ( 5M) of the table itself.
However, the Note led me to the solution.
    The largest piece of contiguous free space in the tablespace is, according to this Note:
    TABLESPACE NAME CONTIGUOUS BYTES
    TOOLS                                 3,407,872 ==> 3Mb
    TOOLS 3,407,872
    TOOLS 3,407,872
    TOOLS 3,301,376
    TOOLS 3,194,880
    TOOLS 3,194,880
    TOOLS 3,194,880
    TOOLS 3,194,880
    TOOLS 3,088,384
    So I executed the following:
SQL> alter table stats$sqltext storage (next 1m);
And subsequently:
SQL> exec statspack.snap;
Which now succeeds !!
    Conclusion: Tablespace REORG needs to be planned.
    One more strange thing however:
    I altered the NEXT_EXTENT size back to 5M, and again the statspack.snap now works OK.
It must be either a background COALESCE that solved the problem, or the (possibly pre-existing) corruption in the dictionary is now fixed/gone
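(If it was a coalesce, the manual equivalent would be something along these lines:)
ALTER TABLESPACE tools COALESCE;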
    Thanks for the assistance

  • ORA-1653: unable to extend table (but tablespace is not full!)

    Hi folks,
    I was navigating through the Alert Log file and I'm strangely noticing the error:
    ORA-1653: unable to extend table PROMO.DETAILS by 40973 in tablespace PROMO
    I'm defining it as strange because:
    (1) The tablespace is only 65% full (there are 750MB of freespace), and
    (2) There is ample space on the harddisk
I then used TOAD to try to debug the problem and there is a tool which allows me to view a map of the tablespace. I could see that the tablespace "PROMO" did indeed have free space, but the table "DETAILS" looked like it had no space to extend into (there was a table both before and after it in the map). Is there a way to solve this problem, or is it not a problem at all?

    this problem occurs because
your table can't find one contiguous free chunk large enough for its next extent in the tablespace.
    solutions:-
    1st solution
    * alter tablespace <tablespace_name> add datafile ' path';
    OR
    2nd solution
    - coalesce your tablespace 'alter tablespace <tablespace name> coalesce'.
    OR
    3rd solution
check your PCTINCREASE parameter; if it is 50 then
    ALTER TABLE <tablename> STORAGE (NEXT 1M PCTINCREASE 0);
    kuljeet pal singh

  • Redefinition For Move Tablespace.

    Hi,
   I am using Release 11.2.0.3.0 of Oracle. I am supposed to move a big table, big_tab1 (~700GB), plus its indexes to a different tablespace which is encrypted. I am trying to follow the redefinition approach instead of ALTER TABLE MOVE, because we won't be able to have much downtime on prod. Below are my difficulties.
   I have a foreign key constraint (con_1) on table tab1 which refers to the primary key of the big table big_tab1. If I use COPY_TABLE_DEPENDENTS to copy all the constraints and indexes as they are, then my indexes are not shifted to the encrypted tablespace; they are created in the existing unencrypted tablespace only.
                                                                             else
   If I create the indexes/constraints manually (specifying the encrypted tablespace) on the interim table just after START_REDEF_TABLE, then my indexes will be created in the new encrypted tablespace, but I have to drop the existing foreign key (con_1) on table tab1 and re-create it so that it refers to the redefined table big_tab1. That will hold a lock for the duration of the ALTER statement, and the purpose of online redefinition will not be fulfilled.
       Please suggest?
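   For context, the flow I am attempting corresponds roughly to this sequence (schema and interim table names are illustrative; the interim table is pre-created in the encrypted tablespace):
   EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TAB1');
   EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_TAB1', 'BIG_TAB1_INT');
   -- either copy dependents as-is (indexes keep their original tablespaces) ...
   DECLARE
     num_errors PLS_INTEGER;
   BEGIN
     DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'BIG_TAB1', 'BIG_TAB1_INT',
       copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS, num_errors => num_errors);
   END;
   /
   -- ... or create the indexes/constraints manually on the interim table at this point, then
   EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TAB1', 'BIG_TAB1_INT');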

    You are confused.
    The data for a table is stored in segments. For a partitioned table the data is contained in partitions: the table HAS NO DATA. Each partitions data is stored in its own segment which can be in its own tablespace.
    For a subpartitioned table the data is contained in subpartitions: the table HAS NO DATA, the partitions HAVE NO DATA. ALL DATA IS IN THE SUBPARTITIONS. Each subpartitions data can be in its own tablespace. There is NO index data to be moved. There is NO index partition data to be moved. There is NO data except the data in the index subpartitions.
    If you want to modify the tablespace the NEW partitions or subpartitions will use (as opposed to moving existing data to a new tablespace) then you need to use the 'modify_index_default_attrs' clause of the ALTER INDEX statement.
    See ALTER INDEX in the SQL Reference
    http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_1008.htm#BGECEJHE
    >
    modify_index_default_attrs
    Specify new values for the default attributes of a partitioned index.
    Restriction on Modifying Partition Default Attributes The only attribute you can specify for a hash-partitioned global index or for an index on a hash-partitioned or composite-partitioned table is TABLESPACE.
    TABLESPACE Specify the default tablespace for new partitions of an index or subpartitions of an index partition.
    >
    See the example in the doc
    >
    Modifying Default Attributes: Example The following statement alters the default attributes of local partitioned index prod_idx, which was created in "Creating an Index on a Hash-Partitioned Table: Example". Partitions added in the future will use 5 initial transaction entries and an incremental extent of 100K:
    ALTER INDEX prod_idx
    MODIFY DEFAULT ATTRIBUTES INITRANS 5 STORAGE (NEXT 100K);

  • Caching on DB level

    Hi,
Our scenario is like this: one frequently accessed table which seldom changes is to be cached, so that instead of retrieving the data from the filers each time a user requests data, it is retrieved from cache, for a performance improvement.
so instead of using the following method
    create table md_cube_64
    storage (next . . . )
    cache;
    Alter table md_cube_64 cache;
    what other methods are there for table caching?
    is there any database level caching ?
    Regards
    Asif
    Message was edited by:
    Mohammad Asif

    1) Assigning table to keep pool
    2) Using different table type (not heap table but for example index organized table, single table hash cluster) to reduce necessary i/o amount to get table data
    3) if table data never changes and you are reusing table data much in the same session then reading them into a pl/sql collection structure in the first run and after that using that collection instead of querying actual table
    I'm sure there are other possibilities...
    Gints Plivna
    http://www.gplivna.eu
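A minimal sketch of option 1 (keep pool), assuming 9i or later where the KEEP pool is sized with db_keep_cache_size; the size is illustrative:
ALTER SYSTEM SET db_keep_cache_size = 64M;
ALTER TABLE md_cube_64 STORAGE (BUFFER_POOL KEEP);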

ORA-1547: how to resolve it

Product : ORACLE SERVER
Date written : 1995-11-02
The ORA-1547 error occurs when the TABLESPACE does not have enough FREE SPACE to allocate the requested EXTENT of as many ORACLE blocks as stated in the error.
The current EXTENT requires contiguous ORACLE blocks that can be allocated out of the FREE SPACE actually present in the data files.
< Cases in which the ORA-1547 error occurs >
1. It occurs on data INSERT or UPDATE when the contiguous ORACLE blocks that the DATA SEGMENT will occupy cannot be allocated.
2. It occurs when creating an index.
- It can occur because the RBS or USER TABLESPACE used by the
ROLLBACK SEGMENT runs out of space.
- It can occur due to lack of SPACE in the TEMPORARY TABLESPACE
used as the SORT area during index creation.
3. It occurs when [SAVE]-ing programs such as SQL*FORMS30 or
SQL*REPORTWRITER into the database, if the SYSTEM or TOOLS
TABLESPACE containing the related tables runs out of space.
< Checking the ORA-1547 error >
In such cases, query the DATA DICTIONARY VIEWs related to EXTENTs, namely USER_TABLES,
USER_EXTENTS, USER_SEGMENTS and DBA_FREE_SPACE, and check the
relevant information.
For example, consider the case where "ORA-1547 : Failed to allocate extent of size
'num' in tablespace 'TOOLS'" is raised during a data INSERT.
1. In [USER_TABLES], check the NEXT_EXTENT size of the table involved in the INSERT.
SQL> SELECT * FROM USER_TABLES
WHERE TABLE_NAME = 'EMP';
INITIAL_EXTENT NEXT_EXTENT PCT_INCREASE MIN_EXTENTS MAX_EXTENTS
10240 190464 50 1 121
(A)
(A) : The size of the EXTENT to be allocated next, in BYTES.
2. In [DBA_FREE_SPACE], check the largest contiguous area among the FREE SPACE
currently existing in the TABLESPACE.
[DBA_FREE_SPACE] is checked from SQLDBA.
SQLDBA> SELECT MAX(BYTES) MAX_CONTIGUOUS_SPACE
FROM DBA_FREE_SPACE
WHERE TABLESPACE_NAME = 'TOOLS';
MAX_CONTIGUOUS_BYTES
19730432
(B)
(B) : The largest contiguous area among the FREE SPACE remaining in the TABLESPACE,
shown in BYTES.
3. As seen above, the MAX(BYTES) size of 2)-(B) must be larger than the NEXT_EXTENT
size of 1)-(A), so add a data file of at least the NEXT_EXTENT size of 1)-(A)
to the "TOOLS" TABLESPACE using the "ALTER TABLESPACE tablespace_name ADD
DATAFILE.... " command.
< How to handle ORA-1547 >
1. Add a data file so that a contiguous area of ORACLE blocks can be allocated
within the TABLESPACE. Following 3) above, space is added to the "TOOLS" TABLESPACE
as follows.
$ sqlplus system/manager
SQL> ALTER TABLESPACE tools ADD DATAFILE
'/usr/../tools1ORA7.dbf' SIZE 50M;
<Note> The DATAFILE name, i.e. '/usr/../tools1ORA7.dbf', must be different from
the DATAFILE names that already exist.
2. The TABLE can be rebuilt after adjusting the INITIAL EXTENT and NEXT EXTENT sizes
in the TABLE's STORAGE PARAMETERs. That is, among the TABLE's STORAGE PARAMETERs,
NEXT can be changed to something smaller than the largest contiguous area among the
FREE SPACE remaining in the current TABLESPACE (MAX(BYTES) of DBA_FREE_SPACE).
For example, readjust it as follows using the "ALTER TABLE ..." command.
SQL> ALTER TABLE EMP STORAGE ( NEXT 100K );
3. Another method is to reorganize the related TABLESPACE.
Here, by rebuilding the TABLESPACE and then re-creating the TABLEs, DISK
FRAGMENTATION can be removed.
    See MOSC notes on the ORA-12545 error:
    Note 284909.1 - Intermittent ORA-12545 When Trying To Connect To RAC Database
    Note 364855.1 - RAC Connection Redirected To Wrong Host/IP ORA-12545
    Note 291175.1 - Clients Failing to Connect Due to Intermittent ORA-12545 in RAC Environment
    Note 333159.1 - ORA-12545 Frequent Client Connection Failure - 10g Standard RAC

  • Best way of reading clob and loadng into table

    Hi,
I'm loading data from a CLOB into one of our tables. This task is taking 10 minutes for 8,000 records. Down the road we are expecting more than 20,000 records.
Is there any faster way to load the data with this approach? Please help me out.
The source table is lob_effect1 and the target table can be any table.
CREATE TABLE lob_effect1 (
  id  INTEGER NULL,
  loc CLOB    NULL
)
STORAGE (
  NEXT 1024K
);
    CREATE OR REPLACE FUNCTION f_convert(p_list IN VARCHAR2)
        RETURN varchar2
      AS
        l_string       VARCHAR2(32767) := p_list || ',';
        l_comma_index  PLS_INTEGER;
        l_index        PLS_INTEGER := 1;
       -- l_tab          test_type := test_type();
          v_col_val                         varchar2(32767);
       v_col_val_str            varchar2(32767);
      BEGIN
        LOOP
         --     dbms_output.put_line(l_string);
          l_comma_index := INSTR(l_string, ',', l_index);
                   EXIT WHEN l_comma_index = 0;
          v_col_val := SUBSTR(l_string, l_index, l_comma_index - l_index);
                   v_col_val_str :=v_col_val_str ||','||chr(39)||v_col_val|| chr(39);
                   v_col_val_str :=ltrim(v_col_val_str,',');
              --     dbms_output.put_line(v_col_val_str);
          l_index := l_comma_index + 1;
        END LOOP;
        RETURN v_col_val_str;
      END f_convert;
    CREATE OR REPLACE
    PROCEDURE p_load_clob1(
        p_date     IN DATE DEFAULT NULL,
        p_tab_name IN VARCHAR2,
        p_clob     IN CLOB DEFAULT NULL)
    IS
      var_clob CLOB;
      var_clob_line            VARCHAR2(4000);
      var_clob_line_count      NUMBER;
      var_clob_line_word_count NUMBER;
      v_col_val                VARCHAR2(32767);
      v_col_val_str            VARCHAR2(32767);
      v_tab_name               VARCHAR2(200):='coe_emea_fi_fails_new_tmp';
      v_sql                    VARCHAR2(32767);
      n_id                     NUMBER;
      CURSOR cur_col_val(p_str VARCHAR2)
      IS
        SELECT * FROM TABLE(fn_split_str(p_str));
    BEGIN
      INSERT
INTO lob_effect VALUES (
    seq_lob_effect.nextval,
    p_clob)
      RETURNING id
      INTO n_id;
      COMMIT;
      SELECT loc INTO var_clob FROM lob_effect1 WHERE id =n_id;
      var_clob_line_count := LENGTH(var_clob) - NVL(LENGTH(REPLACE(var_clob,chr(10))),0) + 1;
      FOR i                                  IN 1..var_clob_line_count
      LOOP
        var_clob_line           := regexp_substr(var_clob,'^.*$',1,i,'m');
        var_clob_line_word_count:=LENGTH(var_clob_line) - NVL(LENGTH(REPLACE(var_clob_line,',')),0) + 1;
        v_col_val_str           :=NULL;
        v_col_val               :=NULL;
        FOR rec_col_val         IN cur_col_val(var_clob_line)
        LOOP
          v_col_val     :=rec_col_val.column_value;
          v_col_val_str :=v_col_val_str ||','||chr(39)||v_col_val|| chr(39);
          v_col_val_str :=ltrim(v_col_val_str,',');
        END LOOP;
        v_sql :='insert into '||p_tab_name||' values ('||v_col_val_str||')';
        EXECUTE immediate v_sql;
      END LOOP;
      COMMIT;
    EXCEPTION
    WHEN OTHERS THEN
      dbms_output.put_line('Error:' || SQLERRM);
    END;
/
Thanks & Regards,
    Ramana.

    Thread: HOW TO: Post a SQL statement tuning request - template posting
    HOW TO: Post a SQL statement tuning request - template posting

  • CLOB column resize

    Hi,
    Below question is for oracle 11g database on RHEL.
    we are having a CLOB column of 8K. Now the developer is asking to get that increase to 32k how can i make that?
    As i know we cannot change "chunk" size once after it is created. I thought we need to recreate a new table with chunk 32k and then export/import the data from old table and drop that old table.
But the point where I'm stuck is:
the DB_BLOCK_SIZE is 8k. So, can we create a new table with a 32k chunk size? Does it conflict?
If that's not possible, I thought of the alternative of recreating the tablespace with a 32k block size. I heard that we can create a new tablespace with a different block size; I'm not sure if I'm right. And if you don't mind, can you also explain LOB in-line versus out-of-line storage? I read the documents but didn't understand that part well.
    Can you please suggest me the better way and correct me if i'm wrong..
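(For what it's worth, a tablespace with a non-default block size is created roughly like this, assuming the platform supports a 32K block size and the matching buffer cache is configured first; names and sizes are illustrative:)
ALTER SYSTEM SET db_32k_cache_size = 64M;
CREATE TABLESPACE lob_32k DATAFILE '/data1/lob_32k01.dbf' SIZE 1024M BLOCKSIZE 32K;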

    These steps might help:
    How to alter LOB storage definition.
    You may use the following to alter the table and the associated LOB allowing for more extents:
    alter table owner.table_name
    modify LOB(lob_column_name)
    (storage (next 100M MAXEXTENTS 1000));
    Example:
    User is getting the following error:
    APP-FND-00565: The export failed in APP_EXPORT.DUMP_DATA
    PL/SQL ERROR: ORA-01691: unable to extend lob segment APPLSYS.SYS_LOB0000033586C00004$$ by 311075 in tablespace APPLSYSD
Looking at tablespace APPLSYSD I can see that it does have room to extend at the datafile level for another 800 MB. And TEMP has plenty of room as well.
    Using the SQL provided in this page: Looking at the LOB:
    SYS_LOB0000033586C00004$$
It looks like column FILE_DATA of datatype LOBSEGMENT in table APPLSYS.FND_LOBS has grown to 5 GB in size and was trying to extend itself, grabbing another 2.3 GB (NEXT_EXTENT is 2548301824), which of course it could not do. I modified it so it can extend only by another 100 MB.
    SQL> alter table APPLSYS.FND_LOBS modify lob (FILE_DATA)
    (storage (maxextents 1000 next 100M));
    Table altered.
    For more detail see: http://www.idbasolutions.com/database/find-lob.html

  • How to Remove the appropriate extent parameters from the command

    Hi friends
    I am altering one B-tree index
    alter index employees_last_name_idx storage(next 400k maxextents 100);
    I am getting error
    Ora-25150:-Remove the appropriate extent parameters from the command
How can I remove it?
    Plz help me
    Best regards
    Raza
    Edited by: user8021439 on Dec 8, 2009 1:35 AM

    If the tablespace has been created with EXTENT MANAGEMENT LOCAL UNIFORM or EXTENT MANAGEMENT LOCAL AUTOALLOCATE, you cannot change the NEXT and MAXEXTENTS parameters of segments (tables or indexes) after having created them.
    Check the tablespace parameters in DBA_TABLESPACES for the tablespace that index has been created in.
Why do you want to change those parameters anyway? Such changes were sometimes used in tablespaces that are Dictionary Managed -- pre 8i/9i.
    Hemant K Chitale
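A query along these lines shows the relevant tablespace settings (the substitution variable is illustrative):
SELECT tablespace_name, extent_management, allocation_type
FROM dba_tablespaces
WHERE tablespace_name = UPPER('&tablespace_name');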
