Issue with Building Warehouse Tables

Hi all,
I am trying to create the DWH tables through DAC, but the run ends with an error: it reports "done", yet some tasks fail. We are using a Siebel 8.1.1 system.
When I checked my log file, I found the following:
createwtables.log
Siebel Enterprise Applications ODBC DDL Import Utility, Version 7.7 [18030] ENU
Copyright (c) 2001 Siebel Systems, Inc. All rights reserved.
This software is the property of Siebel Systems, Inc., 2207 Bridgepointe Parkway,
San Mateo, CA 94404.
User agrees that any use of this software is governed by: (1) the applicable
user limitations and other terms and conditions of the license agreement which
has been entered into with Siebel Systems or its authorized distributors; and
(2) the proprietary and restricted rights notices included in this software.
WARNING: THIS COMPUTER PROGRAM IS PROTECTED BY U.S. AND INTERNATIONAL LAW.
UNAUTHORIZED REPRODUCTION, DISTRIBUTION OR USE OF THIS PROGRAM, OR ANY PORTION
OF IT, MAY RESULT IN SEVERE CIVIL AND CRIMINAL PENALTIES, AND WILL BE
PROSECUTED TO THE MAXIMUM EXTENT POSSIBLE UNDER THE LAW.
If you have received this software in error, please notify Siebel Systems
immediately at (650) 295-5000.
C:\orahome\10gR3_1\bifoundation\dac\UTILITIES\BIN\DDLIMP /I N /s N /u OLAPUSER /p ***** /c OLAP_USER /G SSE_ROLE /f C:\orahome\10gR3_1\bifoundation\dac/conf/sqlgen/ctl-file/oracle_bi_dw.ctl /b /K /X /W Y
Connecting to the database...
Connected.
Reading tables and indexes from DDL file...
Read 1110 tables and 0 indexes from DDL file...
Reading existing schema...
Read 0 tablespaces, 0 tables and 0 indexes from existing schema...
Running SQL statements against the database...
S1000: [DataDirect][ODBC Oracle driver][Oracle]ORA-01031: insufficient privileges
create table M_10A_ORG_D (
DATASOURCE_NUM_ID number(10, 0) not null,
ETL_PROC_WID number(10, 0) default 0 not null,
GEO_WID number(10, 0) default 0 not null,
INTEGRATION_ID varchar2(30 char) not null,
ROW_WID number(10, 0) default 0 not null,
ACCNT_FLG char(1 char),
ACCNT_REVN number(22, 7),
ACTIVE_FLG char(1 char),
CHANNEL_FLG char(1 char),
CHNL_ANNL_SALES number(22, 7),
COMPETITOR_FLG char(1 char),
CREATED_DT date,
DIVN_FLG char(1 char),
EMP_COUNT number(22, 7),
FORMED_DT date,
HIST_SLS_VOL number(22, 7),
ORG_FLG char(1 char),
ORG_PRTNR_FLG char(1 char),
PROSPECT_FLG char(1 char),
PRTNRSHP_START_DT date,
PRTNR_FLG char(1 char),
PRTNR_SALES_RANK number(22, 7),
PTNTL_SLS_VOL number(22, 7),
PTSHP_END_DT date,
PTSHP_FEE_PAID_FLG char(1 char),
PTSHP_RENEWAL_DT date,
PTSHP_SAT_INDEX number(22, 7),
PUBLIC_LISTING_FLG char(1 char),
SALES_EMP_CNT number(10, 0),
SERVICE_EMP_CNT number(10, 0),
U_ACCNT_RVN number(22, 7),
U_ACNTRVN_EXCH_DT date,
U_CHNL_ANNL_SLS number(22, 7),
U_CH_ASLS_EXCH_DT date,
U_HIST_SLS_VOL number(22, 7),
U_HST_SLS_EXCH_DT date,
U_PTL_SLS_EXCH_DT date,
U_PTL_SLS_VOL number(22, 7),
ACCNT_LOC varchar2(50 char),
ACCNT_STATUS varchar2(30 char),
ACCNT_STATUS_I varchar2(50 char),
ACCNT_TYPE_CD varchar2(30 char),
ACCNT_TYPE_CD_I varchar2(50 char),
ANNUAL_REVN_CAT varchar2(30 char),
ANNUAL_REVN_CAT_I varchar2(50 char),
BASE_CURCY_CD varchar2(20 char),
BU_NAME varchar2(100 char),
CHNL_SALES_GRWTH varchar2(30 char),
CHNL_SALES_GRWTH_I varchar2(50 char),
CITY varchar2(50 char),
COUNTRY varchar2(30 char),
DIVN_TYPE_CD varchar2(30 char),
DIVN_TYPE_CD_I varchar2(50 char),
DOM_ULT_DUNS_NUM varchar2(15 char),
DUNS_NUM varchar2(15 char),
EXPERTISE varchar2(30 char),
EXPERTISE_I varchar2(50 char),
FREQUENCY_CAT varchar2(30 char),
FREQUENCY_CAT_I varchar2(50 char),
FRGHT_TERMS_CD varchar2(30 char),
FRGHT_TERMS_CD_I varchar2(50 char),
GLBLULT_DUNS_NUM varchar2(15 char),
KEY_COMPETITOR varchar2(100 char),
LINE_OF_BUSINESS varchar2(30 char),
MAIN_PH_NUM varchar2(40 char),
MGR_NAME varchar2(160 char),
MONETARY_CAT varchar2(30 char),
MONETARY_CAT_I varchar2(50 char),
NAME varchar2(100 char),
NUM_EMPLOY_CAT varchar2(30 char),
NUM_EMPLOY_CAT_I varchar2(50 char),
ORG_CITY varchar2(50 char),
ORG_COUNTRY varchar2(30 char),
ORG_MAIN_PH_NUM varchar2(40 char),
ORG_MGR_NAME varchar2(160 char),
ORG_NAME varchar2(100 char),
ORG_PRTNR_TIER varchar2(30 char),
ORG_PRTNR_TIER_I varchar2(50 char),
ORG_PRTNR_TYPE varchar2(30 char),
ORG_PRTNR_TYPE_I varchar2(50 char),
ORG_STATE varchar2(50 char),
ORG_ST_ADDRESS varchar2(200 char),
ORG_TERR_NAME varchar2(75 char),
ORG_ZIPCODE varchar2(30 char),
PAR_DUNS_NUM varchar2(15 char),
PAR_INTEGRATION_ID varchar2(30 char),
PAR_ORG_NAME varchar2(100 char),
PRI_LST_NAME varchar2(50 char),
PRTNR_NAME varchar2(100 char),
PR_COMPETITOR varchar2(100 char),
PR_INDUST_NAME varchar2(50 char),
PR_ORG_TRGT_MKT varchar2(50 char),
PR_PTSHP_MKTSEG varchar2(50 char),
PTSHP_PRTNR_ACCNT varchar2(100 char),
PTSHP_STAGE varchar2(30 char),
PTSHP_STAGE_I varchar2(50 char),
RECENCY_CAT varchar2(30 char),
RECENCY_CAT_I varchar2(50 char),
REGION varchar2(30 char),
REGION_I varchar2(50 char),
REVN_GROWTH_CAT varchar2(30 char),
REVN_GROWTH_CAT_I varchar2(50 char),
STATE varchar2(50 char),
ST_ADDRESS varchar2(200 char),
U_ACNTRVN_CURCY_CD varchar2(20 char),
U_CH_ASLS_CURCY_CD varchar2(20 char),
U_HST_SLS_CURCY_CD varchar2(20 char),
U_PTL_SLS_CURCY_CD varchar2(20 char),
VIS_PR_BU_ID varchar2(15 char),
VIS_PR_POS_ID varchar2(15 char),
ZIPCODE varchar2(30 char))
writeExecDDL error (UTLOdbcExecDirectDDL pDDLSql).
writeExecDDL error (pOperCallback UTLDbDdlOperTblCreate).
Error in MainFunction (UTLDbDdlDbMerge).
Error in Main function...
Please let me know how to resolve this error.
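(For reference: ORA-01031 from DDLIMP usually means the connecting database user lacks the privileges needed to create the warehouse tables. A minimal sketch of the grants a DBA might apply is below; the user and role names are taken from the DDLIMP command line in the log, "USERS" is a placeholder tablespace, and the exact privilege list should come from your installation guide.)
-- Run as a DBA; adjust names and tablespace to your environment.
CREATE ROLE SSE_ROLE;
GRANT CONNECT, RESOURCE TO OLAPUSER;
GRANT CREATE TABLE, CREATE VIEW, CREATE SEQUENCE TO OLAPUSER;
GRANT SSE_ROLE TO OLAPUSER;
ALTER USER OLAPUSER QUOTA UNLIMITED ON USERS;  -- placeholder tablespace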

Hi,
I checked the DAC user and it is working fine. Actually, we copied the DAC client folder from the server and pasted it onto my local machine; the DAC client and BIAPPS are not installed there. Because of that I could not find the Oracle Merant ODBC driver, so I created a DSN using the standard Oracle driver instead, and it works fine. However, when I try to drop the DWH tables I get an error.
Please let me know how to overcome this.
Thanks,
Avinash

Similar Messages

  • Issue with Update of Table VARINUM

    Hi,
    I am getting wait issues with updates of table VARINUM. Has anybody faced such an issue?
    I have a lot of jobs running in the background, submitted through a report. What could be the issue?
    Regards,
    Abhishek jolly

    This is quite old, but not answered properly yet, so here you go:
    SAP generates a new job and temporary variant on report RSDBSPJS for each HTTP call, which creates database locks on table VARINUM.
    This causes any heavyweight BSP application to hang and give timeout errors.
    The problem is fixed by applying OSS note 1791958, which is not included in any service pack.

  • Issue with data dictionary - table maintenance generator

    Hi all,
    I have an issue with the data dictionary table maintenance generator. I entered some records in a custom table (ZBCSECROLETOGRP) and changed the delivery class from C to A. When I create the table maintenance generator, I encounter the following errors:
    1) Field ZBCSECROLETOGRP-PORTALGROUP shortened (new visible length: 000032)
    2) 0012 could not be generated
    3) In TCTRL_ZBCSECROLETOGRP field LENGTH has the invalid value 01
    My main goal is to create the table maintenance generator and transport it to the downstream systems.
    Please help.
    Thanks in advance,
    Vishal..

    Hi,
    Regenerate the table maintenance by selecting the "Modified field structure" checkbox => new entry & then save.
    Also ensure that the new changes do not affect old data because of data type changes. If they do, delete the old records, regenerate the table maintenance, and re-enter the records you deleted.
    Thanks,
    Best regards,
    Prashant

  • CVC creation - Strange issue with Master data table of 9AMATNR

    Hi Experts,
    We have encountered a strange issue with the master data table (/BI0/9APMATNR) of info object 9AMATNR.
    We have a BADI implemented for checking the valid characteristic before creation of the CVC using transaction /SAPAPO/MC62. This BADI puts a select on the master data table of material, /BI0/9APMATNR, and returns no value. But the material actually exists in the table (checked through SE16).
    Now we go inside the info object 9AMATNR and go to the Master Data tab. There we go inside the master table
    /BI0/9APMATNR and activate it. After activating the table, it is read by the select statement inside the BADI (strange) and the CVC can be created.
    Ideally it should not allow us to activate the SAP standard table /BI0/9APMATNR. I observed that in the technical settings of this table, single record buffering is switched on. (But as per my knowledge the buffer gets refreshed every 2 to 4 minutes, not every 2 days or so.)
    Your expert comment is valuable to us. Thanks.
    Best Regards,
    Chandan Dubey

    Hi Chandan,
                 Try using a WAIT statement of 5 seconds before your select statement.
    I'm not sure whether this will work. Anyway, check it and let me know the result.
    Regards,
    Siva.

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse ETL load process. I have run ANALYZE and DBMS_STATS and checked the database environment. What else can I do to improve performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    you should analyze the database after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache values (ALTER SEQUENCE s CACHE 10000);
    drop all unneeded indexes while loading, and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already use direct load?
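    For illustration, the steps above in SQL (all object names are placeholders; check that each step fits your load design):
    ALTER SEQUENCE fact_pk_seq CACHE 10000;        -- reduce sequence maintenance overhead
    ALTER TABLE sales_fact DISABLE ALL TRIGGERS;   -- re-enable after the load
    DROP INDEX sales_fact_ix1;                     -- recreate after the load
    INSERT /*+ APPEND */ INTO sales_fact SELECT * FROM staging_fact;  -- direct-path load
    COMMIT;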
    Dim

  • Oracle 10g - issue with "DELETE from TABLE WHERE ID in (1,2,3)" (cfqueryparam used)

    Hello, everyone.
    I am having issues running a DELETE statement on an Oracle 10g database.
    DELETE
    FROM tableA
    WHERE ID in (1,2,3)
    If there is only one ID in the IN clause, it works.  But if more than one ID is supplied, I get an "SQL command not properly ended" error message.  Here is the query in CF:
    DELETE
    FROM TRAINING
    WHERE userID = <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#trim(form.userID)#">
         AND TRAINING_ID in <cfqueryparam value="#form.trainingIDs#" cfsqltype="CF_SQL_INTEGER" list="yes">
    Anyone work with Oracle that can help me with this?  I'm an experienced MS-SQL developer; Oracle is new to me.
    Thanks,
    ^_^

    Never mind: a co-worker just told me that I still have to use parentheses around the values for the IN clause.
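    That is, the IN clause from the query above becomes:
    AND TRAINING_ID in (<cfqueryparam value="#form.trainingIDs#" cfsqltype="CF_SQL_INTEGER" list="yes">)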

  • Issue with Period Control Table after copying Essbase adapter

    Hi Experts,
    I'm working on version 11.1.1.3 and have copied the adapters (Essbase, Pull + EPRi) in the workbench so I can add an additional target for the FDM application. However, I have an issue with the import process; it returns an error on the Time & Periods (I guess it's something to do with the Periods category).
    I have reimported the Period Control Table and updated the new application's Target Period & Year (whilst changing the system code in the Application Settings to the new adapter) and still receive the same error message.
    Any direction or thoughts would be welcome.
    Thanks
    Mark

    The time periods do not copy. You need to maintain them through the UI or upload them from Excel. There is a KM article on this if you need additional detail.

  • Issue with updating partitioned table

    Hi,
    Has anyone seen this bug with updating partitioned tables?
    It is very esoteric: it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), the join has multiple join conditions, we are updating the partition column, that column is not the first column in the primary key, and the table contains a bit field. If we change just one of these features, the bug disappears.
    We have tested this on 15.5 and 15.7 SP122 and the error occurs in both.
    Here is the test case. It performs the same operation on a partitioned table and a non-partitioned table, but the partitioned table fails with "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
    I'd be interested if anyone has seen this and has a version of Sybase without the issue.
    Unfortunately, when it happens on a replicated table, it takes down Rep Server.
    CREATE TABLE #table1
        (   PK          char(8) null,
            FileDate        date,
            changed         bit
        )
    CREATE TABLE partitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
      )
    LOCK DATAROWS
      PARTITION BY RANGE (ValidTo)
      ( p2014 VALUES <= ('20141231') ON [default],
      p2015 VALUES <= ('20151231') ON [default],
      pMAX VALUES <= (MAX) ON [default]
      )
    CREATE UNIQUE CLUSTERED INDEX pk
      ON partitioned(PK, ValidFrom, ValidTo)
      LOCAL INDEX
    CREATE TABLE unpartitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
      )
    LOCK DATAROWS
    CREATE UNIQUE CLUSTERED INDEX pk
      ON unpartitioned(PK, ValidFrom, ValidTo)
    insert partitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert unpartitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert #table1
    select "ET00jPzh", "Jan 15 2015", 1
    union all
    select "ET00jPzh", "Jan 15 2015", 1
    go
    update partitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join partitioned p on (p.PK = t.PK)
    where  p.ValidTo = '99991231'
    and    t.changed = 1
    go
    update unpartitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join unpartitioned u on (u.PK = t.PK)
    where  u.ValidTo = '99991231'
    and    t.changed = 1
    go
    drop table #table1
    go
    drop table partitioned
    drop table unpartitioned
    go

    Regarding replication: it is a bit unclear, as not enough information has been given to work out what happened. I am also not sure your DBAs are accurately telling you what happened; they may have made the problem worse by not knowing what to do themselves. "Losing" the log points to the fact that someone doesn't know what they should do: you can *always* disable the replication secondary truncation point and resync a standby system, so claims about "losing" the log are strange to be making.
    Regarding ASE versions: if there are any differences, I suspect they may have to do with endian-ness rather than the version of ASE itself. There may be other factors, but I would suggest opening a separate message/case on it.
    Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
    -- testing with tinyint
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed tinyint
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned
    -- duplicating with 'int'
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed int
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned

  • Issue with Data Load Table

    Hi All,
           I am facing an issue with APEX 4.2.4, using the Data Load Table feature. In a lookup I used the Where Clause option, but the where clause does not seem to be applied. Please help me with this.

    Hi all,
        It looks like the where clause does not filter out the 'N' data. Please help me solve this.

  • Scalability issue with global temporary table.

    Hi All,
    Does CREATE GLOBAL TEMPORARY TABLE lock the data dictionary like CREATE TABLE does? If yes, wouldn't that be a scalability issue in a multi-user environment?
    Thanks and Regards,
    Rudra

    Billy Verreynne wrote:
    acadet wrote:
    "Am I correct in interpreting your response that we should be using GTTs in favour of bulk operations and collections and in-memory operations?"
    No. I said collections cannot scale. Because collections reside in expensive PGA memory, you cannot stuff large data volumes into them, so they do not make an ideal storage bin for temporary data (e.g. data loaded from a file or a web service). GTTs, on the other hand, do not suffer from the same restrictions; they can be indexed, offer vastly better scalability, and so on.
    Multiple passes, or filtering to find specific data, are often needed over such a data structure. As a GTT is SQL-native, it offers a lot more flexibility and performance in this regard.
    And this makes sense: where do we put our persistent data? Also in tables, just of a persistent rather than a temporary kind like a GTT.
    Collections are pretty useful, but limited in size and capability.
    Rudra states:
    "I want to pull out a few metrics from different tables and process them."
    If this can't be achieved in a SQL statement then, unless Rudra is a master of understatement, I would see GTTs as a waste of IO and programming effort.
    My comments, however, were about choices for a temporary data storage bin in PL/SQL.
    I agree with your general comments regarding temporary storage bins in Oracle, but to say that collections don't scale puts too narrow a definition on scaling. True, collections can be resource-intensive in terms of memory and CPU, but their persistence will generally be much shorter than that of other types of temporary storage. Given the right characteristics collections will scale; given the wrong characteristics GTTs won't scale.
    As you say, it is all about choice. Getting back to the theme of this thread, though, the original poster should be made aware that well-designed and well-coded applications are the most likely to scale. Creating tables on the fly is generally considered bad practice, and letting the database do what it does best, joining tables in queries at the SQL level, is considered good practice. The rest lies somewhere in between, and knowing when to do which is why we get paid the big bucks (not). ;-)
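    To the original question: a GTT is created once, up front, like any other table; each session then sees only its own rows, so there is no per-session DDL at runtime. A minimal sketch, with placeholder names:
    CREATE GLOBAL TEMPORARY TABLE staging_gtt (
      id   NUMBER,
      val  VARCHAR2(100)
    ) ON COMMIT PRESERVE ROWS;  -- rows last for the session; DELETE ROWS would clear them at commit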
    Regards
    Andre

  • Export Issues with Compressed Partition Tables?

    We recently partitioned and compressed some large tables. It appears, though I'm not sure yet, that this is causing the export to run extremely slowly. The database is at 10.2.0.2 and we are using the exp utility, not Data Pump. Does anyone know of any known issues with using exp to export compressed, partitioned tables?

    Can you give more details of the table structure (with dbms_metadata if possible), and how you are taking the export, please?
    Did you try taking a SQL trace of the export process to see what is going on behind the scenes? Here is an introduction if you need it:
    http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/
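    For example, one way to pull the DDL for a table with dbms_metadata (owner and table name are placeholders):
    SELECT dbms_metadata.get_ddl('TABLE', 'BIG_TABLE', 'SCOTT') FROM dual;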

  • Issue with DWH DB tables creation

    Hi,
    While generating the data warehouse tables (section 4.10.1, How to Create Data Warehouse Tables), I ended up with an error that states "Creating Datawarehouse tables Failure".
    But when I checked the log file 'generate_ctl.log', it has the below message:
    "Schema will be created from the following containers:
    Oracle 11.5.10
    Oracle R12
    Universal
    Conflict(s) between containers:
    Table Name : W_BOM_ITEM_FS
    Column Name: INTEGRATION_ID.
    The column properties that are different :[keyTypeCode]
    Success!"
    When I checked the DWH database, I could find DWH tables, but I am not sure whether all tables were created.
    Can anyone tell me whether my DWH tables were all created? How many tables would be created for the above EBS containers?
    Also, do I need to drop any of the EBS containers to create the DWH tables successfully?
    The installation guide states that when DWH table creation fails, 'createtables.log' won't be created. But in my case this log file got created!
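    (A quick sanity check, assuming the standard W_ prefix for warehouse tables, is to connect as the warehouse schema owner and count them:
    SELECT COUNT(*) FROM user_tables WHERE table_name LIKE 'W\_%' ESCAPE '\';)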

    I saw the same message. I also noticed I am unable to load any BOM items into that fact table. It looks like the BOM_EXPLODER package call is not keeping any rows in BOM_EXPLOSION_TEMP, so no rows are loaded into the fact table. Someone needs to log an SR for this.
    *****START LOAD SESSION*****
    Load Start Time: Wed Nov 19 17:13:42 2008
    Target tables:
    W_BOM_ITEM_FS
    READER_2_1_1> BLKR_16019 Read [0] rows, read [0] error rows for source table [BOM_EXPLOSION_TEMP] instance name [mplt_BC_ORA_BOMItemFact.BOM_EXPLOSION_TEMP]
    READER_2_1_1> BLKR_16008 Reader run completed.
    TRANSF_2_1_1> DBG_21216 Finished transformations for Source Qualifier [mplt_BC_ORA_BOMItemFact.SQ_BOM_EXPLOSION_TEMP]. Total errors [0]
    WRITER_2_*_1> WRT_8167 Start loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
    WRITER_2_*_1> WRT_8168 End loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
    WRITER_2_*_1> WRT_8035 Load complete time: Wed Nov 19 17:13:42 2008
    LOAD SUMMARY
    ============
    WRT_8036 Target: W_BOM_ITEM_FS (Instance Name: [W_BOM_ITEM_FS])
    WRT_8044 No data loaded for this target
    WRITER_2_*_1> WRT_8043 *****END LOAD SESSION*****
    WRITER_2_*_1> WRT_8006 Writer run completed.
    I now see it is covered in the release notes:
    http://download.oracle.com/docs/cd/E12127_01/doc/bia.795/e12087/chapter.htm#CHDFJHHB
    1.3.31 No Data Is Loaded Into W_BOM_ITEM_F And W_BOM_ITEM_FS
    The mapping SDE_ORA_BOMItemFact needs to call a Stored Procedure (SP) in the Oracle EBS instance, which inserts rows into a global temporary table (duration SYS$SESSION, that is, the data will be lost if the session is closed). This Stored Procedure does not have an explicit commit. The Stored Procedure then needs to read the rows in the temporary table into the warehouse.
    In order for the mapping to work, Informatica needs to share the same connection for the SP and the SQL qualifier during ETL. This feature was available in the Informatica 7.x release, but it is not available in Informatica release 8.1.1 (SP4). As a result, W_BOM_ITEM_FS and W_BOM_ITEM_F are not loaded properly.
    Workaround
    For all Oracle EBS customers:
    Open package body bompexpl.
    Look for the text "END exploder_userexit;", scroll a few lines up, and add a "commit;" command before "EXCEPTION".
    Save and compile the package.
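    Illustratively, after the edit the end of the procedure would look like this (the surrounding lines are placeholders from the description above; only the commit is new):
        ...
        commit;  -- added per the workaround
    EXCEPTION
        ...
    END exploder_userexit;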

  • Issue with the shadow table

    I am in the process of understanding the shadow table and the error log table.
    I have a table created with a shadow table in place.
    1. E.g. table emp(empno, ename) with shadow table emp_err(...., empno, ename);
    it contains the value (1,'A').
    Now I place a data rule on empno (unique, not null, ...) and configure the operator
    as MOVE TO ERROR.
    When I try to insert a row with (1,'A'), it moves not only the new row being inserted but also the existing row in the table emp, i.e. two rows end up in the error table emp_err.
    2. I have a scenario where I want to update a row in the fact table from an incoming row.
    If the incoming row has no match in the fact table, how do I put it into the error table?
    Any ideas or tricks appreciated.
    Thanks
    Narayana.

    Hi,
      Free the internal tables' memory by using the FREE statement after processing each internal table.
       Also, you can ask your Basis person to increase the page area.
    Reward if helpful.
    Regards,
    Umasankar.

  • Direct Path Loading Issues with Global Temporary Tables - OCI & OCILib

    I am writing some code to import data into a warehouse from a CPU grid which computes risk data. Because a computing grid is used, there will be many clients that can load data concurrently and at any point in time.
    Currently the import uses binding in OCCI and chunking with a prepared statement to import the data into a global temporary table in a staging area, after which a stored procedure is called within the same session to process the data and load it into a star schema.
    The GTT has the advantage that if any client has issues, no dirty data is left behind, and each client only sees its own instance of the data.
    I have been looking at direct path loading to increase the performance of the load and have written some OCI code to perform the same task. I have managed to import the data into a regular heap table using the OCI direct path APIs. However, when I try to use the same code against a global temporary table, I get an OCI error (ORA-00600: internal error code, arguments: [6979], [16], [1], [1318528], [], [], [], [], [], [], [], [])
    I get the error when the function OCIDirPathPrepare executes. The same issue occurs in both OCI and OCILib.
    Is it not possible to use direct path loading against a global temporary table? You can use the /*+ APPEND */ hint to load global temporary tables from tools like SQL Developer / Toad, which surely tells the SQL engine to use direct path?
    Looking at the view USER_OBJECTS, I can see that for a global temporary table the DATA_OBJECT_ID is null. Does this mean that it is impossible to use a direct path load into global temporary tables?
    Any ideas / suggestions would be appreciated. If this means redesigning the application, then I would welcome suggestions that would allow many clients to write quickly in parallel. If that means creating a new partition in a heap table for each writer and direct path loading into it, then so be it.
    Thanks
    H

    Replying to my own message in case anyone else is interested.
    I have now managed to successfully load data using direct path into a global temporary table with OCI. There appears to be no reason why this approach will not work.
    I loaded data into the temporary table and then issued a SELECT COUNT(*) on the table from within the session and from a new session. The results were as expected.
    The reason for the ORA-00600 error was that I had enabled table-level parallel loading, i.e.
    OCIAttrSet((dvoid *) context, (ub4) OCI_HTYPE_DIRPATH_CTX, (ub1) 1, (ub4) 0, (ub4) OCI_ATTR_DIRPATH_PARALLEL, errhp)
    When loading a global temporary table, the OCI_ATTR_DIRPATH_PARALLEL attribute needs to be zero.
    This makes sense, since the temp table does not have any partitions, so it would not be possible to write in parallel to multiple partitions.
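    For reference, the serial SQL direct-path style load into a GTT mentioned above looks like this (table and source names are placeholders):
    INSERT /*+ APPEND */ INTO staging_gtt SELECT id, val FROM external_source;
    COMMIT;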

  • Issue with ADF Tree Table

    Hi,
    I have a requirement to display a tree table. Here is the initial implementation:
    I created read-only views ManagersVO > PoolsVO > MachinesVO, where MachinesVO is the destination view, and created view links between ManagersVO & PoolsVO using ManagerId and between PoolsVO & MachinesVO using PoolId.
    With this implementation, the tree table was successfully created on the UI. Now we have an enhancement:
    MachinesVO should return the list of machines according to the logged-in user. We have 4 different roles: 'Super Admin', 'Sys Admin', 'App Admin', 'End User'. The default query of MachinesVO is for 'Super Admin'; the queries for the other roles differ in everything except the SELECT statement.
    The requirement is to dynamically change the query of MachinesVO based on the logged-in user and display the tree table accordingly. To implement this I tried the setQuery() operation on MachinesVO, which fails with the following error:
    JBO-26016: InvalidOperException
    Cause: You cannot set customer query (calling setQuery()) on a view object if it is the detail view object in a master detail view link.
    Action: Do not call setQuery() if the view object is a detail.
    Can someone suggest the best way to implement this?
    Thanks & Regards,
    Kiran

    Hi Navaneetha Krishnan,
    Here is how I implemented it based on your comments. As the tree table is based on 3 different VOs, I created the following method at the middle view (i.e., PoolsVO).
    1. Tree model hierarchy:
    ManagersVO > PoolsVO > MachinesVO
    I actually want to filter the data at the Machines level, hence I wrote a method in PoolsVOImpl and exposed it in the PoolsVO client interface. Here is the code I placed in PoolsVOImpl:
    public class PoolsVOImpl extends ViewObjectImpl implements PoolsVO {
        // This is the default constructor (do not remove).
        public PoolsVOImpl() {
        }

        public void filterMachinesDataByUserRole(String userRole, String vzId) {
            Row row = getCurrentRow();
            String query = "";
            if (row != null) {
                RowSet rowSet = (RowSet) row.getAttribute("MachinesVO");
                if (rowSet != null) {
                    MachinesVOImpl machinesVOImpl = (MachinesVOImpl) rowSet.getViewObject();
                    if (userRole.equalsIgnoreCase("SYS ADMIN")) {
                        query = "..."; // where clause related to sysadmin
                        machinesVOImpl.setWhereClause(query);
                    }
                    // Similarly for other user roles.
                    machinesVOImpl.executeQuery();
                }
            }
        }
    }
    This piece of code needs to be executed before the jsff (which has the tree table) renders, hence I made this methodAction the default activity in the task flow where the jsff is placed. Once this method executes, the page should render the machines specific to the user.
    Here is the issue: the getCurrentRow() method call always returns NULL.
    Please correct me if I'm doing something wrong. I also tried the above approach by creating the method at the ManagersVOImpl level. Still the same issue.
    Thanks & Regards,
    Kiran
