Fragmentation and uncontrolled growth of the 'PJID' tablespace.

Hi,
We are running Oracle Applications on HP-UX with database 9.2.0.5.0, and we got an issue while running the UPDATE PROJECT AND RESOURCE BASE SUMMARIES job; it failed with:
Cause: FDPSTP failed due to ORA-12801: error signaled in parallel query server P006
ORA-01658: unable to create INITIAL extent for segment in tablespace PJID
ORA-06512: at "APPS.PJI_FM_SUM_MAIN", line 1637
We have added 4 GB of space to the PJID tablespace and the job is still failing.
Below is the output of the SQL:
SELECT tablespace_name,COUNT(*) AS fragments,
SUM(bytes) AS total,
MAX(bytes) AS largest
FROM dba_free_space
where TABLESPACE_NAME='PJID'
GROUP BY tablespace_name
TABLESPACE_NAME  FRAGMENTS       TOTAL    LARGEST
PJID                   456  5264588800   20111360
I would like to know how to reorganize the tablespace.
Do you have any steps or a Metalink note ID? I need to do it in Prod and need immediate help.
thx

There is a well-known Oracle Metalink Note 303709.1, "How to Reclaim Unused Space in APPLSYSD Tablespace". You can use the same methodology for your tablespace.
>> I need to do it in Prod and need immediate help.
Generally, you should not change anything directly on your production instance unless it has been tested and approved by your management. Moreover, on Oracle Apps instances, we must apply the solution only after it has succeeded on a test/dev/UAT instance. I would suggest you do this first on one of your Oracle Apps Dev/Test/UAT instances, then move it to production.
Regards,
Sabdar Syed.
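
A minimal sketch of the checks typically run before such a reorganization (the COALESCE step only applies to dictionary-managed tablespaces, and the segment filter below is an illustration, not something taken from this thread):

SELECT MAX(bytes) AS largest_free_chunk
FROM dba_free_space
WHERE tablespace_name = 'PJID';

-- Segments whose next extent no longer fits in any single free chunk
SELECT segment_name, segment_type, next_extent
FROM dba_segments
WHERE tablespace_name = 'PJID'
AND next_extent > (SELECT MAX(bytes)
                   FROM dba_free_space
                   WHERE tablespace_name = 'PJID');

-- For a dictionary-managed tablespace, merge adjacent free extents first
ALTER TABLESPACE PJID COALESCE;

If the largest contiguous chunk (about 20 MB in the output above) is still smaller than the INITIAL extent the job requests, only the full reorganization described in the note (or rebuilding the affected objects with ALTER TABLE ... MOVE / ALTER INDEX ... REBUILD) will release a large enough contiguous area.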

Similar Messages

  • Can I change the temporary tablespace for schemas while transactions are running?

    In my Prod database, some schemas have SYSTEM assigned as their temporary tablespace. I want to change the temporary tablespace for these schemas, and also the default temporary tablespace of the database.
    Can I make this change while users are accessing the database? Is there any impact if I make this change while transactions are running?
    Below is the change I want to do:
    1. Change the users' temporary tablespace from SYSTEM to TEMP by executing the following SQL statements:
    alter user SYSTEM temporary tablespace TEMP;
    alter user SYS temporary tablespace TEMP;
    alter user AD_MONITOR temporary tablespace TEMP;
    alter user SI_INFORMTN_SCHEMA temporary tablespace TEMP;
    alter user EM_MONITOR temporary tablespace TEMP;
    alter user ORDPLUGINS temporary tablespace TEMP;
    alter user TSMSYS temporary tablespace TEMP;
    alter user XDB     temporary tablespace TEMP;
    alter user SCOTT temporary tablespace TEMP;
    alter user DBSNMP temporary tablespace TEMP;
    alter user DIP     temporary tablespace TEMP;
    alter user OUTLN temporary tablespace TEMP;
    alter user ANONYMOUS temporary tablespace TEMP;
    alter user ORDSYS temporary tablespace TEMP;
    alter user MDDATA temporary tablespace TEMP;
    2. Set the default temporary tablespace to TEMP by executing the following SQL statement:
    alter DATABASE default temporary tablespace TEMP;

    user11829256 wrote:
    But if one transaction is using the old temporary tablespace and I change the temporary tablespace of the user, will the transactions of that user which are using the old temp segment fail?
    It will continue to use the old one until the end of the transaction.
    Here is a quick test (hopefully readable):
    -- session 1 as a dba user
    SQL> grant create session to nga identified by nga;
    Grant succeeded.
    SQL> alter user nga temporary tablespace tmp;
    User altered.
    SQL> select temporary_tablespace from dba_users where username='NGA';
    TEMPORARY_TABLESPACE
    TMP
    -- session 2 as NGA
    SQL> insert into gtt
      2  select * from (select * from all_objects union all select * from all_objects union all select * from all_objects);
    80559 rows created.
    -- session 1 as a dba user
    SQL> select tablespace from v$tempseg_usage where username='NGA';
    TABLESPACE
    TMP
    SQL> alter user nga temporary tablespace pstemp;
    User altered.
    SQL> select tablespace from v$tempseg_usage where username='NGA';
    TABLESPACE
    TMP
    SQL> select temporary_tablespace from dba_users where username='NGA';
    TEMPORARY_TABLESPACE
    PSTEMP
    -- session 2 as NGA
    80559 rows created.
    SQL> roll;
    Rollback complete.
    -- session 1 as a dba user
    SQL> select tablespace from v$tempseg_usage where username='NGA';
    no rows selected
    -- session 2 as NGA
    SQL> insert into gtt
      2  select * from (select * from all_objects union all select * from all_objects union all select * from all_objects);
    80559 rows created.
    -- session 1 as a dba user
    SQL> select tablespace from v$tempseg_usage where username='NGA';
    TABLESPACE
    PSTEMP
    Nicolas.
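    For reference, the gtt table used in the test is never defined in the thread; a hedged sketch of a global temporary table that would reproduce it (the ON COMMIT clause is an assumption, and either setting works for this demonstration because the temp usage is checked before the rollback):
    CREATE GLOBAL TEMPORARY TABLE gtt
    ON COMMIT PRESERVE ROWS
    AS SELECT * FROM all_objects WHERE 1 = 0;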

  • Rapid and huge growth of used space in the temporary tablespace

    Hi,
    I have a query (a SELECT) that runs quickly (no more than 10 seconds).
    As soon as I insert the data into a temporary table, or even into a physical table, the used space of the temporary tablespace starts to grow very fast. It becomes fully used and the query crashes when we reach the limit (65 GB), or even more if I add more tempfiles to the temporary tablespace!
    The problem also only happens when the period (dates) is a full year (2013). If the period is the first quarter of 2013 (same amount of data), the problem does not happen!
    I also confirm that on another instance (a test one), even with fewer resources, this Oracle behavior does not happen. I confirmed the execution plans differ between the two instances.
    What I really do not understand is this huge and rapid growth!
    Has anyone experienced a similar situation?
    Thanks in advance, Rui
    Plan
    INSERT STATEMENT ALL_ROWSCost: 15.776 Bytes: 269 Cardinality: 1
    28 LOAD TABLE CONVENTIONAL MIDIALOG_OLAP.MED_INVCOMP_FACTTMP_BEFGROUPBY
    27 FILTER
    26 NESTED LOOPS
    24 NESTED LOOPS Cost: 15.776 Bytes: 269 Cardinality: 1
    22 NESTED LOOPS Cost: 15.775 Bytes: 255 Cardinality: 1
    19 NESTED LOOPS Cost: 15.774 Bytes: 205 Cardinality: 1
    17 NESTED LOOPS Cost: 15.773 Bytes: 197 Cardinality: 1
    14 NESTED LOOPS Cost: 15.770 Bytes: 180 Cardinality: 1
    11 NESTED LOOPS Cost: 15.767 Bytes: 108 Cardinality: 1
    9 HASH JOIN Cost: 15.757 Bytes: 8.346.500 Cardinality: 83.465
    7 HASH JOIN Cost: 13.407 Bytes: 6.345.012 Cardinality: 83.487
    5 HASH JOIN Cost: 11.163 Bytes: 5.010.550 Cardinality: 100.211
    3 HASH JOIN Cost: 5.642 Bytes: 801.288 Cardinality: 22.258
    1 INDEX RANGE SCAN INDEX MIDIALOG.IX_INSCOMP_DTCEIDICIDLCPECIDOP Cost: 120 Bytes: 489.676 Cardinality: 22.258
    2 INDEX FAST FULL SCAN INDEX (UNIQUE) MIDIALOG.IX_LINHACOMPRADA_IDLCIDOPSEQ Cost: 5.463 Bytes: 123.975.530 Cardinality: 8.855.395
    4 INDEX FAST FULL SCAN INDEX (UNIQUE) MIDIALOG.IX_LINHACOMPRADA_IDLCIDOPSEQ Cost: 5.463 Bytes: 123.975.530 Cardinality: 8.855.395
    6 TABLE ACCESS FULL TABLE MIDIALOG.ITEM_AV Cost: 1.569 Bytes: 6.963.736 Cardinality: 267.836
    8 TABLE ACCESS FULL TABLE MIDIALOG.ITEM_AV Cost: 1.572 Bytes: 7.713.672 Cardinality: 321.403
    10 INDEX UNIQUE SCAN INDEX (UNIQUE) MIDIALOG.IX_BOFINALBO_IDBOIDFINALBO Cost: 0 Bytes: 8 Cardinality: 1
    13 TABLE ACCESS BY INDEX ROWID TABLE MIDIALOG.INSERCAO_COMPRADA Cost: 3 Bytes: 72 Cardinality: 1
    12 INDEX RANGE SCAN INDEX (UNIQUE) MIDIALOG.IX_INSCOMPRADA_IDLCDATAPECAINS Cost: 2 Cardinality: 1
    16 TABLE ACCESS BY INDEX ROWID TABLE MIDIALOG.INSERCAO_ITEMFACTURA Cost: 3 Bytes: 17 Cardinality: 1
    15 INDEX RANGE SCAN INDEX MIDIALOG.IX_INSITFACT_INSCOMPRADA Cost: 2 Cardinality: 1
    18 INDEX RANGE SCAN INDEX (UNIQUE) MIDIALOG.UQ_ITEMFACTURA_IDITF_IDFACT Cost: 1 Bytes: 8 Cardinality: 1
    21 TABLE ACCESS BY INDEX ROWID TABLE MIDIALOG.FATURA Cost: 1 Bytes: 50 Cardinality: 1
    20 INDEX UNIQUE SCAN INDEX (UNIQUE) MIDIALOG.PK_FATURA Cost: 0 Cardinality: 1
    23 INDEX UNIQUE SCAN INDEX (UNIQUE) MIDIALOG.PK_TIPO_ESTADO Cost: 0 Cardinality: 1
    25 TABLE ACCESS BY INDEX ROWID TABLE MIDIALOG.TIPO_ESTADO Cost: 1 Bytes: 14 Cardinality: 1

    I run the SELECT with success, in no more than 1 minute for one year of data, with little temporary space used.
    As soon as I plug in the INSERT (into a global temporary table; I also experimented with a physical table), the used space of the temporary tablespace starts to grow like crazy!!
    insert into midialog_olap.med_invcomp_facttmp_befgroupby
    select fac.numefatura,
    fac.codpessoa,
    fac.dtemiss,
    tef.nome as estado_factura,
    opsorig.demid,
    opsorig.anoplano,
    opsorig.numplano,
    opsorig.numplanilha,
    ops.nome as ordem_publicidade,
    ops.external_number as numero_externo,
    ops.estado,
    lic.seq,
    inc.data,
    inc.peca,
    fac.id_versao_plano,
    fac.ano_proforma || '.' || fac.numrf as num_proforma,
    iif.tipo_facturacao,
    opsorig.codveiculo as id_veiculo,
    opsorig.codfm as id_fornecedor_media,
    icorig.chkestado as id_estado_checking,
    0 as percentagem_comissao_agencia,
    0 as valor_pbv,
    0 as valor_stxtv,
    0 as valor_ptv,
    0 as valor_odbv,
    0 as valor_pbbv,
    0 as valor_dnv,
    0 as valor_pbnv,
    0 as valor_stxv,
    0 as valor_pbtv,
    0 as valor_dav,
    0 as valor_plv,
    0 as valor_odlv,
    0 as valor_pllv,
    0 as valor_ca,
    0 as valor_trv,
    0 as valor_txv,
    0 as valor_base_facturacao,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_pb_compra * fac.percentagem_facturada / 100))
    as valor_pbc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_stxt_compra * fac.percentagem_facturada / 100))
    as valor_stxtc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_pt_compra * fac.percentagem_facturada / 100))
    as valor_ptc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_odb_compra * fac.percentagem_facturada / 100))
    as valor_odbc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_pbb_compra * fac.percentagem_facturada / 100))
    as valor_pbbc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_dn_compra * fac.percentagem_facturada / 100))
    as valor_dnc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_pbn_compra * fac.percentagem_facturada / 100))
    as valor_pbnc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_stx_compra * fac.percentagem_facturada / 100))
    as valor_stxc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_pbt_compra * fac.percentagem_facturada / 100))
    as valor_pbtc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_da_compra * fac.percentagem_facturada / 100))
    as valor_dac,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_pl_compra * fac.percentagem_facturada / 100))
    as valor_plc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_odl_compra * fac.percentagem_facturada / 100))
    as valor_odlc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_pll_compra * fac.percentagem_facturada / 100))
    as valor_pllc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_transcricoes * fac.percentagem_facturada / 100))
    as valor_trc,
    decode(ops.estado, :WKFOPR_BOOKINGORDER_CANCELED, 0,
    decode(iif.tipo_facturacao, :BILLING_TYPE_ONLYCOMISSION, 0,
    inc.total_tx_compra * fac.percentagem_facturada / 100))
    as valor_txc,
    --nvl((select cfm.total_comprado
    -- from fin_custos_facturados_media cfm
    -- where cfm.id_factura = fac.id_factura and
    -- cfm.id_op = ops.id_op
    -- ), 0) / opsorig.number_of_bought_insertions as custos_associados,
    0 as custos_associados,
    fac.iss as percentagem_iva,
    fac.percentagem_facturada,
    fac.currency_exchange as taxa_cambio,
    iif.associated_code as insertions_associated_code
    from fatura fac, item_fatura itf, insercao_itemfactura iif,
    insercao_comprada icorig, linha_comprada lcorig, item_av opsorig,
    med_bookingorder_finalbo opfin,
    insercao_comprada inc,
    linha_comprada lic, item_av ops,
    --veiculo vei,
    tipo_estado tef
    where fac.id_factura = itf.id_factura and
    itf.id_itemfactura = iif.id_itemfactura and
    iif.id_ic = icorig.id_ic and
    icorig.id_lc = lcorig.id_lc and
    lcorig.id_op = opsorig.id_op and
    opsorig.id_op = opfin.id_booking_order and
    opsorig.number_of_bought_insertions > 0 and
    opfin.id_final_booking_order = ops.id_op and
    -- ops.id_op = (
    -- select max(ops.id_op)
    -- from item_av ops
    -- start with ops.id_op = opsorig.id_op
    -- connect by prior ops.id_opsubstituicao = ops.id_op) and
    ops.id_op = lic.id_op and
    lic.seq = lcorig.seq and
    lic.id_op = inc.id_op and
    lic.id_lc = inc.id_lc and
    inc.data = icorig.data and
    inc.peca = icorig.peca and
    --opsorig.codveiculo = vei.codveiculo and
    fac.estado = tef.estado and
    fac.estado != 305 and
    ops.estado != 223 and
    iif.tipo_facturacao != 'SO_CA' and
    icorig.data between :dtBeginDate and :dtEndDate and
    (fac.codagenciafat = :iIdAgency or :iIdAgency is null);
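    A hedged sketch of a query to watch which sessions are consuming temporary space while the insert runs (v$tempseg_usage and v$session are standard views; the 8192 in the size calculation assumes an 8K block size, adjust to your db_block_size):
    SELECT s.sid, s.serial#, s.username, s.sql_id,
           u.tablespace, u.segtype,
           ROUND(u.blocks * 8192 / 1024 / 1024) AS used_mb
    FROM   v$tempseg_usage u
    JOIN   v$session s ON s.saddr = u.session_addr
    ORDER  BY u.blocks DESC;
    If the consumption maps to the hash joins in the plan above, comparing the two instances' execution plans (for example with DBMS_XPLAN.DISPLAY_CURSOR) is usually more productive than adding tempfiles.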

  • How do we find tablespace fragmentation and resolve this problem

    Hi,
    How do we find tablespace fragmentation and resolve it in 10g (R2) on Windows XP?
    Regards
    Faheem

    Hi,
    >>Are you using Dictionary Managed or Locally Managed Tablespaces...??
    In fact, there is no way to create a DMT if the SYSTEM tablespace is LMT. So what I said in my previous post ("Unless the OP is using DMT ...") is impossible in Oracle 10g ...
    SQL> create tablespace tbs_test
      2      datafile '/u01/oradata/BDRPS/test01.dbf' size 5m
      3      extent management dictionary;
    create tablespace tbs_test
    ERROR at line 1:
    ORA-12913: Cannot create dictionary managed tablespace
    SQL> select extent_management from dba_tablespaces
      2  where tablespace_name='SYSTEM';
    EXTENT_MAN
    LOCAL
    Cheers
    Legatti
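    As a quick first check, a hedged sketch that lists how each tablespace manages its extents; with locally managed tablespaces (AUTOALLOCATE or UNIFORM), the classic honeycomb fragmentation of dictionary-managed tablespaces largely disappears:
    SELECT tablespace_name, extent_management, allocation_type,
           segment_space_management
    FROM   dba_tablespaces
    ORDER  BY tablespace_name;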

  • What are the differences between the target tablespace and the source tablespace?

    The IMPDP command creates so many errors, but the EXAMPLE tablespace is transported to the target database successfully. It seems that the transported tablespace is no different from the source tablespace.
    Why does it create so many errors?
    How can these errors be avoided?
    What are the differences between the target tablespace and the source tablespace?
    Is this Data Pump action really successful?
    The following is the log output:
    [oracle@hostp ~]$ impdp system/oracle dumpfile=user_dir:demo02.dmp tablespaces=example remap_tablespace=example:example
    Import: Release 10.2.0.1.0 - Production on Sunday, 28 September, 2008 18:08:31
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Master table "SYSTEM"."SYS_IMPORT_TABLESPACE_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_TABLESPACE_01": system/******** dumpfile=user_dir:demo02.dmp tablespaces=example remap_tablespace=example:example
    Processing object type TABLE_EXPORT/TABLE/TABLE
    ORA-39117: Type needed to create table is not included in this operation. Failing sql is:
    CREATE TABLE "OE"."CUSTOMERS" ("CUSTOMER_ID" NUMBER(6,0), "CUST_FIRST_NAME" VARCHAR2(20) CONSTRAINT "CUST_FNAME_NN" NOT NULL ENABLE, "CUST_LAST_NAME" VARCHAR2(20) CONSTRAINT "CUST_LNAME_NN" NOT NULL ENABLE, "CUST_ADDRESS" "OE"."CUST_ADDRESS_TYP" , "PHONE_NUMBERS" "OE"."PHONE_LIST_TYP" , "NLS_LANGUAGE" VARCHAR2(3), "NLS_TERRITORY" VARCHAR2(30), "CREDIT_LIMIT" NUMBER(9,2), "CUST_EMAIL" VARCHAR2(30), "ACCOUNT_MGR_ID" NU
    ORA-39117: Type needed to create table is not included in this operation. Failing sql is:
    ORA-39117: Type needed to create table is not included in this operation. Failing sql is:
    CREATE TABLE "IX"."ORDERS_QUEUETABLE" ("Q_NAME" VARCHAR2(30), "MSGID" RAW(16), "CORRID" VARCHAR2(128), "PRIORITY" NUMBER, "STATE" NUMBER, "DELAY" TIMESTAMP (6), "EXPIRATION" NUMBER, "TIME_MANAGER_INFO" TIMESTAMP (6), "LOCAL_ORDER_NO" NUMBER, "CHAIN_NO" NUMBER, "CSCN" NUMBER, "DSCN" NUMBER, "ENQ_TIME" TIMESTAMP (6), "ENQ_UID" VARCHAR2(30), "ENQ_TID" VARCHAR2(30), "DEQ_TIME" TIMESTAMP (6), "DEQ_UID" VARCHAR2(30), "DEQ_
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "SH"."CUSTOMERS" 9.850 MB 55500 rows
    . . imported "SH"."SUPPLEMENTARY_DEMOGRAPHICS" 695.9 KB 4500 rows
    . . imported "OE"."PRODUCT_DESCRIPTIONS" 2.379 MB 8640 rows
    . . imported "SH"."SALES":"SALES_Q4_2001" 2.257 MB 69749 rows
    . . imported "SH"."SALES":"SALES_Q1_1999" 2.070 MB 64186 rows
    . . imported "SH"."SALES":"SALES_Q3_2001" 2.129 MB 65769 rows
    . . imported "SH"."SALES":"SALES_Q1_2000" 2.011 MB 62197 rows
    . . imported "SH"."SALES":"SALES_Q1_2001" 1.964 MB 60608 rows
    . . imported "SH"."SALES":"SALES_Q2_2001" 2.050 MB 63292 rows
    . . imported "SH"."SALES":"SALES_Q3_1999" 2.166 MB 67138 rows
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."REGIONS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."REGIONS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."COUNTRIES" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."COUNTRIES" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."LOCATIONS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."LOCATIONS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."DEPARTMENTS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."DEPARTMENTS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOBS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOBS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."EMPLOYEES" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."EMPLOYEES" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOB_HISTORY" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOB_HISTORY" TO "EXAM_03"
    ORA-39112: Dependent object type OBJECT_GRANT:"OE" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"OE" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    ORA-39112: Dependent object type INDEX:"OE"."CUSTOMERS_PK" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"OE"."CUST_ACCOUNT_MANAGER_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"OE"."CUST_LNAME_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"OE"."CUST_EMAIL_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"PM"."PRINTMEDIA_PK" skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ORA-39112: Dependent object type CONSTRAINT:"OE"."CUSTOMER_CREDIT_LIMIT_MAX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"OE"."CUSTOMER_ID_MIN" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"OE"."CUSTOMERS_PK" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"PM"."PRINTMEDIA__PK" skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"IX"."SYS_C005192" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUSTOMERS_PK" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_ACCOUNT_MANAGER_IX" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_LNAME_IX" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_EMAIL_IX" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"PM"."PRINTMEDIA_PK" creation failed
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    ORA-39112: Dependent object type REF_CONSTRAINT:"OE"."CUSTOMERS_ACCOUNT_MANAGER_FK" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39083: Object type REF_CONSTRAINT failed to create with error:
    ORA-00942: table or view does not exist
    Failing sql is:
    ALTER TABLE "OE"."ORDERS" ADD CONSTRAINT "ORDERS_CUSTOMER_ID_FK" FOREIGN KEY ("CUSTOMER_ID") REFERENCES "OE"."CUSTOMERS" ("CUSTOMER_ID") ON DELETE SET NULL ENABLE
    ORA-39112: Dependent object type REF_CONSTRAINT:"PM"."PRINTMEDIA_FK" skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings
    ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings
    ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings
    ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    ORA-39112: Dependent object type INDEX:"OE"."CUST_UPPER_NAME_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_UPPER_NAME_IX" creation failed
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCACT_INSTANCE
    ORA-39112: Dependent object type PROCACT_INSTANCE skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39083: Object type PROCACT_INSTANCE failed to create with error:
    ORA-01403: no data found
    ORA-01403: no data found
    Failing sql is:
    BEGIN
    SYS.DBMS_AQ_IMP_INTERNAL.IMPORT_SIGNATURE_TABLE('AQ$_ORDERS_QUEUETABLE_G');COMMIT; END;
    Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCDEPOBJ
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."AQ$_ORDERS_QUEUETABLE_V" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."ORDERS_QUEUE_N" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."ORDERS_QUEUE_R" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."AQ$_ORDERS_QUEUETABLE_E" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."ORDERS_QUEUE" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    Job "SYSTEM"."SYS_IMPORT_TABLESPACE_01" completed with 63 error(s) at 18:09:14

    Short of trying to then reverse-engineer the objects that are in the dump file (I believe Data Pump export files contain some XML representations of DDL in addition to various binary bits, making it potentially possible to try to scan the dump file for the object definitions), I would tend to assume that the export didn't include those type definitions.
    Since it looks like you're trying to set up the sample schemas, is there a reason that you wouldn't just run the sample schema setup scripts on the destination database? Why are you using Data Pump in the first place?
    Justin
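    If the goal really is to move the sample schemas with Data Pump, one hedged alternative to the tablespace-mode job above is a schema-mode export/import, which carries the object type definitions (OE.CUST_ADDRESS_TYP and friends) along with the tables; the directory and file names below are placeholders:
    expdp system/oracle directory=user_dir dumpfile=demo_schemas.dmp schemas=OE,SH,HR,IX,PM
    impdp system/oracle directory=user_dir dumpfile=demo_schemas.dmp schemas=OE,SH,HR,IX,PM
    The ORA-01917 grant errors would still appear unless the USER1 and EXAM_03 accounts exist on the target first.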

  • Is there an update to iOS 6? I've been having several technical issues since the upgrade. Uncontrolled zoom of the entire screen, Siri taking 25 seconds to respond and not recognizing names, locking up of the phone. All requiring a reset.

    Is there an update to iOS 6? I've been having several technical issues since the upgrade. Uncontrolled zoom of the entire screen, Siri taking 25 seconds to respond and not recognizing names, locking up of the phone. All requiring a reset.

    I would restore from a backup, and if that doesn't fix it, try setting it up as new
    http://support.apple.com/kb/HT4137

  • Table and index in the same tablespace

    I have a table and its associated indexes all in one tablespace. Now, will creating a separate tablespace just for the indexes, then dropping and recreating the table's indexes in the new tablespace, help?
    Thanks

    In all probability, no. This may have some housekeeping/ recovery benefits but it is very unlikely to have performance benefits. Most modern disk arrays are configured so that data access is spread pretty evenly across all the spindles. If your disk is configured such that you have "hot" and "cold" disks, and if you can move the index to a different tablespace that can be placed on one of the "cold" disks, this will tend to balance out disk I/O and may improve performance.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
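    If you do decide to relocate the indexes anyway, there is no need to drop and recreate them; a rebuild can move them in place (a hedged sketch with placeholder schema, table, and tablespace names):
    -- Move a single index into a dedicated index tablespace
    ALTER INDEX my_schema.my_index REBUILD TABLESPACE indx_ts;
    -- Generate the statements for every index on the table
    SELECT 'ALTER INDEX ' || owner || '.' || index_name ||
           ' REBUILD TABLESPACE INDX_TS;' AS ddl
    FROM   dba_indexes
    WHERE  table_owner = 'MY_SCHEMA'
    AND    table_name  = 'MY_TABLE';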

  • How does Index fragmentation and statistics affect the sql query performance

    Hi,
    How does Index fragmentation and statistics affect the sql query performance
    Thanks
    Shashikala

    How does Index fragmentation and statistics affect the sql query performance
    Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn will require more resources, and this will impact performance. If an index is fragmented (mainly the clustered index, though this holds true for non-clustered indexes as well), the time spent finding a value will be higher, because the query has to search the fragmented index to look for the data, and the additional empty space increases search time.
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • There is no data in the partitioned tablespace?

    I am creating a new partitioned table like this:
    create table PART_TABLE
    (
      eid      number(10),
      ename    varchar2(30),
      hiredate date,
      jdate    date,
      pdate    date
    )
    nologging
    parallel
    partition by range (hiredate)
    (
      partition par_new1 values less than
        (to_date('01-JAN-2009','DD-MON-YYYY'))
        tablespace PART_MOHAN,
      partition par_new2 values less than
        (to_date('01-JAN-2010','DD-MON-YYYY'))
        tablespace PART_MOHAN2
    );
    Now, there is no data in the partitioned table.
    Now I am inserting rows into the table.
    But the rows seem to go into the table and not into the partition tablespaces.
    I checked the used space and free space of the tablespaces, and there is no change in size after inserting the rows.
    But when I check the table, the rows have been inserted.
    I don't know in which tablespace this data is stored. How can I find that out?
    SQL> INSERT INTO part_table values(1,'MohanaKr','01-JAN-2008','01-JAN-2009','01-JAN-2009');
    1 row created.
    SQL> /
    1 row created.
    SQL> /
    1 row created.
    SQL> /
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from part_mohan;
    COUNT(*)
    4
    SQL> select * from part_table;
    EID ENAME HIREDATE JDATE PDATE
    1 MohanaKr 01-JAN-08 01-JAN-09 01-JAN-09
    1 MohanaKr 01-JAN-08 01-JAN-09 01-JAN-09
    1 MohanaKr 01-JAN-08 01-JAN-09 01-JAN-09
    1 MohanaKr 01-JAN-08 01-JAN-09 01-JAN-09
    But,
    SQL> select PARTITION_NAME,TABLESPACE_NAME,num_rows from dba_tab_partitions where table_name='PART_MOHAN';
    PARTITION_NAME TABLESPACE_NAME NUM_ROWS
    PAR_NEW1 PART_MOHAN
    PAR_NEW2 PART_MOHAN2

    Firstly, the NUM_ROWS column is populated by statistics gathering. It is not a true real-time count of the rows in the table, only a count as of a point in time. Secondly, try something like the following if you want to see the row count per partition:
    1* select partition_name from dba_tab_partitions where table_name = 'SALES' and table_owner = 'SH'
    PARTITION_NAME
    SALES_1995
    SALES_1996
    SALES_H1_1997
    SALES_H2_1997
    SALES_Q1_1998
    SALES_Q1_1999
    SALES_Q1_2000
    SALES_Q2_1998
    SALES_Q2_1999
    SALES_Q2_2000
    SALES_Q3_1998
    SALES_Q3_1999
    SALES_Q3_2000
    SALES_Q4_1998
    SALES_Q4_1999
    SALES_Q4_2000
    16 rows selected.
    SQL> select count(*) from sh.sales;
    COUNT(*)
    1016271
    SQL> select count(*) from sh.sales partition(sales_q4_2000);
    COUNT(*)
    74736
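    To answer the original "in which tablespace is this data stored" question directly, a hedged sketch using dba_segments and the partition-extended syntax (PART_TABLE is the table from the example; segment names are stored in upper case):
    SELECT segment_name, partition_name, tablespace_name, bytes
    FROM   dba_segments
    WHERE  segment_name = 'PART_TABLE'
    ORDER  BY partition_name;
    SELECT COUNT(*) FROM part_table PARTITION (par_new1);
    Note that four small rows fit comfortably inside the extents allocated when the partitions were created, which is why the used/free space figures for PART_MOHAN and PART_MOHAN2 showed no visible change.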

  • The growth of the internet routing table + LISP

    I have been challenged with the task of putting together a 15-20 minute presentation of how I might go about solving the problem of the growth of the internet routing table.
    If I am understanding the question correctly: as the internet grows, so will the number of IPv4 and (more so) IPv6 prefixes that appear in the global internet routing table. This will mean that the PE and P routers in a Service Provider network will find themselves having to deal with more prefixes, which will in turn increase the possibility of bogons interfering with proper routing and increase the load on the routers themselves.
    Besides being a very open ended question, I am trying to look at it from the perspective of a Service Provider (whom I work for) and have come up with the following options (admittedly this is only after a quick google on the topic):
    > Improvise in the short term by adjusting CAM table allocation
    > Use selective hearing by filtering prefixes that are not important
    > Use external assistance like LISP and DNS
    > Spend Money and upgrading existing routers to handle the load
    Obviously with only a 20 minute window I cannot talk about much, and I would like the options to be innovative and interesting. LISP seems like an interesting option and I would like to learn about it; however, I am having trouble tracking down resources that give a basic introduction to exactly what LISP is and how it works (every time I try to search for it, I get pushed to sites talking about the programming language).
    So this leads me to two questions:
    1. Is there anything important, vital, or interesting that I have not included in my quickly assembled list above?
    2. Is anyone aware of a good site/resource that explains LISP from a beginner's/tutorial-type perspective?

  • Logical corruption found in the sysaux tablespace

    Dear All:
    We have lately been seeing a logical corruption error when running the dbverify command, which reports block corruption. It is always in the SYSAUX tablespace. The database is 11g and the platform is Linux.
    We get an error like "error backing up file 2 block xxxx: logical corruption", and it reaches the alert.log from the automated maintenance jobs (such as the SQL Tuning Advisor) that run during the maintenance window.
    Now, as far as I know, we can't drop or rename the SYSAUX tablespace. There is a startup migrate option to drop SYSAUX, but it does not work due to the presence of domain indexes. You can run RMAN block media recovery, but it may end up not fixing things, since RMAN backups are more about physical consistency than maintaining logical integrity.
    Any help, advice, or suggestions will be highly appreciated.

    If you leave this corruption there, you are likely to face a big issue that will compromise database availability sooner or later. SYSAUX is a critical tablespace, so you must proceed with caution.
    Make sure you have a valid backup, and don't do anything unless you are sure about what you are doing and you have a fallback procedure.
    If you still have a valid backup, you can use RMAN to perform block-level recovery, which may fix the block. Otherwise, try to restore and recover the SYSAUX tablespace. If you cannot fix the block by refreshing the SYSAUX tablespace, then I suggest you create a new database and use the Transportable Tablespace technique to migrate all tablespaces from your current database to the new one, and get rid of this database.
    ~ Madrid
    http://hrivera99.blogspot.com
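    A hedged sketch of the RMAN side of this (the block number is a placeholder for whatever the error message reports; on 11g RECOVER ... BLOCK replaces the older BLOCKRECOVER syntax, and both need a usable backup plus archived logs):
    RMAN> BACKUP VALIDATE CHECK LOGICAL DATAFILE 2;
    SQL>  SELECT file#, block#, blocks, corruption_type
          FROM v$database_block_corruption;
    RMAN> RECOVER DATAFILE 2 BLOCK 12345;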

  • Unable to describe the table and unable to drop the table

    Hi,
    I have a temp table that we use as a staging table to import data into the main table through some scheduled procedures. It is dropped every day and re-created through a script.
    Somehow, while I was trying to drop the table manually, the session hung. After that, I could not find the table in dba_objects, dba_tables, or anywhere else.
    But now I am unable to create the table manually (the CREATE command keeps running without giving any error), and I also get no error (it just keeps running) if I issue a DROP or DESC on the table.
    Can anyone please help with this? Is the table still stored somewhere in the DB, and do we have any option to repair it?
    SQL> select OWNER,OBJECT_NAME,OBJECT_TYPE,STATUS from dba_objects where OBJECT_NAME like 'TEMP%';
    no rows selected
    SQL> desc temp
    Thanks in advance.

    Hi,
    If the table was dropped, it was moved to the recycle bin (visible in DBA_RECYCLEBIN), and its original name was changed automatically by Oracle.
    For example :
    SQL> create table tst (col varchar2(10), row_chng_dt date);
    Table created.
    SQL> insert into tst values ('Version1', sysdate);
    1 row created.
    SQL> select * from tst ;
    COL        ROW_CHNG
    Version1   16:10:03
    If the RECYCLEBIN initialization parameter is set to ON (the default in 10g), then dropping this table will place it in the recyclebin:
    SQL> drop table tst;
    Table dropped.
    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
      2  from recyclebin
    SQL> /
    OBJECT_NAME                    ORIGINAL_NAME TYPE  UND PUR DROPTIME
    BIN$HGnc55/7rRPgQPeM/qQoRw==$0 TST           TABLE YES YES 2013-10-08:16:10:12
    All that happened to the table when we dropped it was that it got renamed. The table data is still there and can be queried just like a normal table:
    SQL> alter session set nls_date_format='HH24:MI:SS' ;
    Session altered.
    SQL> select * from "BIN$HGnc55/7rRPgQPeM/qQoRw==$0" ;
    COL        ROW_CHNG
    Version1   16:10:03
    Since the table data is still there, it's very easy to "undrop" the table. This operation is known as a "flashback drop". The command is FLASHBACK TABLE... TO BEFORE DROP, and it simply renames the BIN$... table to its original name:
    SQL> flashback table tst to before drop;
    Flashback complete.
    SQL> select * from tst ;
    COL        ROW_CHNG
    Version1   16:10:03
    SQL> select * from recyclebin ;
    no rows selected
    It's important to know that after you've dropped a table, it has only been renamed; the table segments are still sitting there in your tablespace, unchanged, taking up space. This space still counts against your user tablespace quotas, as well as filling up the tablespace. It will not be reclaimed until you get the table out of the recyclebin. You can remove an object from the recyclebin by restoring it, or by purging it from the recyclebin.
    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
      2  from recyclebin
    SQL> /
    OBJECT_NAME                    ORIGINAL_NAME TYPE                      UND PUR DROPTIME
    BIN$HGnc55/7rRPgQPeM/qQoRw==$0 TST           TABLE                     YES YES 2006-09-01:16:10:12
    SQL> purge table "BIN$HGnc55/7rRPgQPeM/qQoRw==$0" ;
    Table purged.
    SQL> select * from recyclebin ;
    no rows selected
    Thank you
    And check this link:
    http://www.orafaq.com/node/968
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables011.htm
    Thank you

  • What is the current schedule for 6.1.2 and will it fix the Exchange transaction log issue in 6.1?

    Just spent the entire night with virtually no sleep with our firm's group of IT engineers trying to keep our Exchange system online due to massive transaction log growth. We confirmed the issue is related to the 6.1 calendar bug with ActiveSync. The workarounds are not practical for large groups of users who depend on their mobile devices for work. Our users have no way of knowing that they are causing an issue, so the Apple guidance isn't terribly useful to communicate to 1000 users. When can we expect a resolution? The problem is only going to get worse as more and more users hit the bug. Does anyone know if the issue will resolve as soon as someone installs the 6.1.2 update, assuming that has the fix? I'm not trying to bash anyone, but this is a very serious problem in enterprise deployments.

    The update was released some time today. 6.1.2 appears to specifically fix the Exchange issue causing the excess communication and logging problems. However, although the update is available, I do not see the notification badge on the Settings icon. Is this controlled by Apple, or is there a user setting I am missing somewhere? I would prefer that all users see the badge to expedite user action.

  • Dbms_redefinition package and COMPRESS attribute of the target table

    Hi experts,
    we have an already partitioned and compressed table under Oracle 10g R2 which we want to move to another tablespace using online redefinition.
    The table should keep its partitions and compressed data after the move.
    My question is: how much storage must we have in place for the move of the table?
    Example:
    tab (compressed size) : 1000 MB
    tab (uncompressed size) : 4000 MB
    Seems it depends on how redefinition handles the move of the compressed data.
    So if redefinition uses INSERT /*+ APPEND */ ..., it should be roughly 1000 MB ("compress during write").
    Is this assumption correct?
    Can anybody shed some light on which kind of compression-preserving strategy redefinition uses?
    bye
    BB

    From the 11.2 admin guide:
    Create an empty interim table (in the same schema as the table to be redefined) with all of the desired logical and physical attributes. If columns are to be dropped, do not include them in the definition of the interim table. If a column is to be added, then add the column definition to the interim table. If a column is to be modified, create it in the interim table with the properties that you want. The table being redefined remains available for queries and DML during the entire process.
    Execute the FINISH_REDEF_TABLE procedure to complete the redefinition of the table. During this procedure, the original table is locked in exclusive mode for a very short time, independent of the amount of data in the original table. However, FINISH_REDEF_TABLE will wait for all pending DML to commit before completing the redefinition.
    If you did not want to create an interim table, then this approach is not going to work for you. There is no requirement for you to create anything other than the interim table, and any dependent objects can be done automatically, including materialized views. Where did you see that you have to create mview logs?
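    A hedged sketch of the redefinition sequence for this case (schema, table, and tablespace names are placeholders; the interim table must be created beforehand with the desired partitioning, COMPRESS, and the new tablespace):
    DECLARE
      num_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TAB',
                                        DBMS_REDEFINITION.CONS_USE_PK);
      -- populates SCOTT.BIG_TAB_INT; with a direct-path style load the rows
      -- land compressed because the interim table is defined COMPRESS
      DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_INT');
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'BIG_TAB', 'BIG_TAB_INT',
          DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, FALSE, num_errors);
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_INT');
    END;
    /
    Under that assumption, the transient extra space needed should be closer to the compressed 1000 MB figure (plus indexes) than to the 4000 MB uncompressed size, but verify on a test system before relying on it.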

  • Staging area and base are in different schema/tablespace

    Can someone please give me the advantages and disadvantages of keeping staging tables and base tables in the same schema? For example, we have a staging area where we truncate the staging table daily, load into it, and run the transformation process. Once done, we move the staging table data to the base tables. The base tables are huge, around 6 to 8 million rows, as there is no purging done.
    I want to suggest to my team that we keep them in separate schemas, as I understand it would be good from an I/O point of view.
    Is there any other reason to keep staging and base tables in separate schemas/tablespaces?

    Hi,
    I definitely agree with the previous answers. You wrote about a staging area, transformations, etc., so I think it's a data warehouse. Staging and base tables should be stored in different datafiles, and also on different hard drives. During the ETL task there can be a high load on the disk subsystem. Storing them in different schemas is a separate subject, not related to disk performance.
    When it comes to the block size of the datafiles, there is an approach like this:
    If the database is used for OLTP systems, it is a good idea to set up smaller block sizes. But when we're talking about a data warehouse, it's highly recommended to use larger block sizes like 16 KB or 32 KB. This setting has advantages for reading and accessing data blocks.
    Regards,
    Cuneyt
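    One practical detail if you follow the larger-block-size suggestion: a tablespace with a non-default block size needs a matching buffer cache configured first, and 32K blocks are not available on every platform. A hedged sketch with placeholder sizes and paths:
    -- A buffer cache for the non-default block size must exist first
    ALTER SYSTEM SET db_32k_cache_size = 128M;
    -- Then the staging tablespace can be created with that block size
    CREATE TABLESPACE stage_data
      DATAFILE '/u02/oradata/stage_data01.dbf' SIZE 4G
      BLOCKSIZE 32K
      EXTENT MANAGEMENT LOCAL AUTOALLOCATE
      SEGMENT SPACE MANAGEMENT AUTO;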
