Staging area and base tables in different schemas/tablespaces

Can someone please give me some advantages and disadvantages of keeping staging tables and base tables in the same schema? For example, we have a staging area where we truncate the staging table daily, load it, and run the transformation process. Once that is done, we move the staging table data to the base tables. The base tables are huge, around 6 to 8 million rows, as no purging is done.
I want to suggest to my team that we keep them in separate schemas, as I understand this is good from an I/O point of view.
Is there any other reason to keep staging and base tables in separate schemas/tablespaces?

Hi,
Definitely I agree with the previous answers. You wrote about a staging area, transformations, etc., so I take it this is a data warehouse. Staging and base tables should be stored in different data files, ideally on different hard drives: during the ETL run there can be a high load on the disk subsystem. Storing them in different schemas is a separate subject that has nothing to do with disk performance.
When it comes to the block size of the data files, a common approach is this:
if the database serves an OLTP system, smaller blocks are a good idea, but for a data warehouse larger block sizes such as 16KB or 32KB are usually recommended, since they favor the large multi-block reads typical of DWH access patterns.
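As a minimal sketch of that layout (the paths, sizes, and cache setting are assumptions to adapt), note that a non-default block size needs its own buffer cache before the tablespace can be created:

    -- a 32K buffer cache must exist before a 32K tablespace can be created
    ALTER SYSTEM SET db_32k_cache_size = 64M;

    -- base tables: large blocks, on their own disk/path
    CREATE TABLESPACE dwh_base_data
      DATAFILE '/u02/oradata/DWH/dwh_base_data01.dbf' SIZE 10G
      BLOCKSIZE 32K;

    -- staging: default block size, on a different disk/path
    CREATE TABLESPACE stg_data
      DATAFILE '/u03/oradata/DWH/stg_data01.dbf' SIZE 2G;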
Regards,
Cuneyt

Similar Messages

  • Code and core tables in different schemas

    Hi,
    My db version : Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    I would like to understand the pros and cons of the following situation:
    We have a single application.
    The db design for this application has 44 tables, out of which 28 are core tables related by PK and FK relationships. The remaining 16 are lookup and reference tables which are not related to any other table.
    The team decided to place the 28 core tables in one schema A and the remaining 16 in another schema B within the same database. (The reason for this is: since it was done in other projects, let's do it here too.)
    Now coming to the code (stored procs, functions, packages etc.): the team wants to place most of the code in schema B, the one with the 16 reference tables (the reason again being the same).
    What are the pros and cons of doing this?
    Please advise.
    PS:
    I have googled and found something along these lines:
    cons: 
    o harder to manage
    o harder to upgrade
    o harder to patch
    o harder to maintain
    o causes your shared pool size to increase 1,000 times (shared sql goes down the tubes)
    o takes more space
    o queries against the dictionary will be impacted
    o latching on the shared pool goes WAY up (latching = locks = serialization device =
    slows you down)
    pros:
    o none that I can think of.

    > I would like to understand, the pros and cons of the following situation:
    > Yes I am straining to find more points (was not good at it though).
    You just want to understand? Are you sure? Your thread reads more like you just want to do things your way and are looking for support.
    > The team decided to place the 28 core tables in one schema A and the remaining 16 in another schema B within the same database. (The reason for this is: since it was done in other projects, let's do it here too.)
    > Now coming to the code (stored procs, functions, packages etc.): the team wants to place most of the code in schema B that has the 16 ref tables (the reason again being the same).
    My question to you is: what PROBLEM are you trying to solve? If the 'team' already uses this approach and there haven't been any substantive problems, then why try to change things now? Why have you chosen to fight this battle?
    Your 'team' has already decided and now, after that decision, you want to argue about it with them? The time to present arguments for/against a given plan is BEFORE the decisions are made, not after. Once a decision is made you need to be a team player and implement that decision to the best of your ability.
    One thing I'm certain of: if you try to support your argument using things like that 'AskTom' link you posted, any credibility you had will go out the window. That link, as you already hinted yourself, does not match your use case at all. All it takes is for one of your 'team' members to point that out, and everyone will pretty much stop listening to any other arguments you make.
    People are generally not going to 'change their ways' unless you can show them:
    a) there is something seriously wrong with the way they are now doing things or
    b) a new way of doing things provides some substantial benefits
    Choice 'a' above is where you need to start, but you haven't provided ANY information in this post showing that you have identified any serious issues with the status quo.
    The main task for Oracle is to be able to FIND the objects being referenced. So, in my opinion, that is what you should focus on when looking for PRO/CON arguments.
    That is: What issues are there if an object being referenced is in a different schema than the session that needs to use the object?
    1. objects may need to be prefixed with the schema name
    2. public or private synonyms may need to be created/maintained to avoid having to deal with item #1 above
    3. new grants may be needed to implement/maintain the proper security
    4. new roles may need to be created to maintain proper security (see item #3 above)
    5. additional work will be needed to maintain the new roles in item #4 above
    6. PL/SQL code may not be able to reference the object or may reference the wrong object
    7. Roles are disabled in PL/SQL (see item #6 above) - this means that the new grants (see item #3) may need to be granted directly to the schema users that need access instead of to roles. That can make it harder to create and maintain a role-based security scheme.
    If I were you I would spend my time on other, more important things. But if I chose to fight this particular battle, I would make a list of problems that occurred in the past with the current method of doing things, along with problems related to the above list of items, and then show how many of those problems would 'disappear' if the new method were used.
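    As a hedged illustration of items 1-3 above (schema names A and B per the question; CORE_T is a hypothetical table):

        -- run as A (or a DBA): give B's code direct DML on the core table
        GRANT SELECT, INSERT, UPDATE, DELETE ON a.core_t TO b;

        -- run as B: a synonym avoids prefixing every reference with A.
        CREATE SYNONYM core_t FOR a.core_t;

    The direct grant matters for item #7: privileges received only through a role are invisible to definer's-rights PL/SQL, so B's stored code relies on the direct grant, not on any role.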

  • Using the Import utility from other users and going to a different schema

    I had a user today with rights:
    Insert into XXXX.TABLE values(); works just fine against another schema's table on which the user has select, insert, update and delete.
    We tried to use the import utility under OTHER USERS:
    Insert failed: ORA-00942
    Do you want to ignore all errors?
    This happens even though we have rights on the schema under OTHER USERS.
    When connected as the owner, it works fine.

    I've been trying to find a workaround to this issue and found this old post -- I'm having the same problem.
    I'm using Oracle 11g and SQL Developer 2.1.1.64.
    I have user A with a table that grants select, insert, update, and delete privileges to user B. Logged in as user B I can, of course, do inserts, deletes, etc. on the table in user A's schema.
    When I use the import data feature to load data from a CSV file, I can't get it to work while logged in as user B. It does work fine when logged in as user A. It looks like the issue may be that it doesn't put the schema prefix of "A." on the insert statements.
    Has anyone found a way around this issue yet?
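    One hedged workaround, assuming the grants are already in place: switch the session's default name resolution to schema A before running the generated inserts, since the tool omits the schema prefix.

        -- run while connected as B; unqualified names now resolve to A's objects
        ALTER SESSION SET CURRENT_SCHEMA = A;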

  • Staging area and Target area in ODI

    Hi experts,
    How is the staging area different from the target area, and when is it the same as the target area? What are the benefits and drawbacks in each case?
    Please suggest, ideally with some pictorial representation.
    Thanks

    Hi SRK
    I think you'll find your answer in this old white paper from Oracle. The new version of the best practices is much shorter, unfortunately.
    http://www.oracle.com/technetwork/middleware/data-integrator/overview/odi-bestpractices-datawarehouse-whi-129686.pdf
    Pages 87-95.
    Hope it helps.
    Regards,
    Jerome
    PS: pages 79-84 are interesting too

  • Moving Subpartitions to a duplicate table in a different schema.

    NOTE: I asked this question on the PL/SQL and SQL forum, but have moved it here as I think it's more appropriate to this forum. I've placed a pointer to this post on the original post.
    Hello Ladies and Gentlemen.
    We're currently involved in an exercise at my workplace where we are in the process of attempting to logically organise our data by global region. For information, our production database is currently at version 10.2.0.3 and will shortly be upgraded to 10.2.0.5.
    At the moment, all our data 'lives' in the same schema. We are in the process of producing a proof of concept to migrate this data to identically structured (and named) tables in separate database schemas; each schema to represent a global region.
    In our current schema, our data is range-partitioned on date, and then list-partitioned on a column named OFFICE. I want to move the OFFICE subpartitions from one schema into an identically named and structured table in a new schema. The tablespace will remain the same for both identically-named tables across both schemas.
    Do any of you have an opinion on the best way to do this? Ideally in the new schema, I'd like to create each new table as an empty table with the appropriate range and list partitions defined. I have been doing some testing in our development environment with the EXCHANGE PARTITION statement, but this requires the destination table to be non-partitioned.
    I just wondered if, for partition migration across schemas with the table name and tablespace remaining constant, there is an official "best practice" method of accomplishing such a subpartition move neatly, quickly and elegantly?
    Any helpful replies welcome.
    Cheers.
    James

    You CAN exchange a subpartition into another table using a "temporary" (staging) table as an intermediary.
    See :
    SQL> drop table part_subpart purge;
    Table dropped.
    SQL> drop table NEW_part_subpart purge;
    Table dropped.
    SQL> drop table STG_part_subpart purge;
    Table dropped.
    SQL>
    SQL> create table part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition p_1 values less than (10) (subpartition p_1_s_1 values ('A'), subpartition p_1_s_2 values ('B'), subpartition p_1_s_3 values ('C'))
      5  ,
      6  partition p_2 values less than (20) (subpartition p_2_s_1 values ('A'), subpartition p_2_s_2 values ('B'), subpartition p_2_s_3 values ('C'))
      7  )
      8  /
    Table created.
    SQL>
    SQL> create index part_subpart_ndx on part_subpart(col_1) local;
    Index created.
    SQL>
    SQL>
    SQL> insert into part_subpart values (1,'A');
    1 row created.
    SQL> insert into part_subpart values (2,'A');
    1 row created.
    SQL> insert into part_subpart values (2,'B');
    1 row created.
    SQL> insert into part_subpart values (2,'B');
    1 row created.
    SQL> insert into part_subpart values (2,'C');
    1 row created.
    SQL> insert into part_subpart values (11,'A');
    1 row created.
    SQL> insert into part_subpart values (11,'C');
    1 row created.
    SQL>
    SQL> commit;
    Commit complete.
    SQL>
    SQL> create table NEW_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition n_p_1 values less than (10) (subpartition n_p_1_s_1 values ('A'), subpartition n_p_1_s_2 values ('B'), subpartition n_p_1_s_3 values ('C'))
      5  ,
      6  partition n_p_2 values less than (20) (subpartition n_p_2_s_1 values ('A'), subpartition n_p_2_s_2 values ('B'), subpartition n_p_2_s_3 values ('C'))
      7  )
      8  /
    Table created.
    SQL>
    SQL> create table STG_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  /
    Table created.
    SQL>
    SQL> -- ensure that the Staging table is empty
    SQL> truncate table STG_part_subpart;
    Table truncated.
    SQL> -- exchanging a subpart out of part_subpart
    SQL> alter table part_subpart exchange subpartition
      2  p_2_s_1 with table STG_part_subpart;
    Table altered.
    SQL> -- exchanging the subpart into NEW_part_subpart
    SQL> alter table NEW_part_subpart exchange subpartition
      2  n_p_2_s_1 with table STG_part_subpart;
    Table altered.
    SQL>
    SQL>
    SQL> select * from NEW_part_subpart subpartition (n_p_2_s_1);
         COL_1 COL_2
            11 A
    SQL>
    SQL> select * from part_subpart subpartition (p_2_s_1);
    no rows selected
    SQL>
    I have exchanged subpartition p_2_s_1 out of the table part_subpart into the table NEW_part_subpart -- even with a different name for the subpartition (n_p_2_s_1) if so desired.
    NOTE : Since your source and target tables are in different schemas, you will have to move (or copy) the staging table STG_part_subpart from the first schema to the second schema after the first "exchange subpartition" is done. You will have to do this for every subpartition to be exchanged.
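    A hedged sketch of that copy step, assuming an identically structured staging table has already been created in the second schema (schema names are placeholders):

        -- after the first exchange, ship the rows to the second schema's
        -- staging table, then run the second exchange against that copy
        INSERT /*+ APPEND */ INTO schema2.STG_part_subpart
        SELECT * FROM schema1.STG_part_subpart;
        COMMIT;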
    Hemant K Chitale
    Edited by: Hemant K Chitale on Apr 4, 2011 10:19 AM
    Added clarification for cross-schema exchange.

  • How to Compare Data length of staging table with base table definition

    Hi,
    I have two tables: a staging table and a base table.
    I'm loading data from flat files into the staging table. As per the requirement, the structures of the staging table and the base table differ: every column in the staging table is 25% longer so the data can be loaded without errors. For example, a CITY column that is VARCHAR2(25) in the base table is VARCHAR2(40) in the staging table. Once the data is in the staging table, I want to compare the actual data length of every column against the base table's definition (DATA_LENGTH for each column from ALL_TAB_COLUMNS), and whenever a column's data is too long, update the corresponding row in the staging table, which has a flag column called ERR_LENGTH.
    For this I'm using:
    cursor c1 is select length(a.id), length(a.name)... from staging_table;
    cursor c2(name varchar2) is select data_length from all_tab_columns where table_name = 'BASE_TABLE' and column_name = name;
    But the first query returns all the lengths at once, whereas with the second cursor I have to fetch each column separately and then compare it with the first.
    Can anyone tell me how to get the desired results?
    Thanks,
    Mahender.

    This is a shot in the dark, but take a look at the example below:
    SQL> DROP TABLE STAGING;
    Table dropped.
    SQL> DROP TABLE BASE;
    Table dropped.
    SQL> CREATE TABLE STAGING
      2  (
      3          ID              NUMBER
      4  ,       A               VARCHAR2(40)
      5  ,       B               VARCHAR2(40)
      6  ,       ERR_LENGTH      VARCHAR2(1)
      7  );
    Table created.
    SQL> CREATE TABLE BASE
      2  (
      3          ID      NUMBER
      4  ,       A       VARCHAR2(25)
      5  ,       B       VARCHAR2(25)
      6  );
    Table created.
    SQL> INSERT INTO STAGING VALUES (1,RPAD('X',26,'X'),RPAD('X',25,'X'),NULL);
    1 row created.
    SQL> INSERT INTO STAGING VALUES (2,RPAD('X',25,'X'),RPAD('X',26,'X'),NULL);
    1 row created.
    SQL> INSERT INTO STAGING VALUES (3,RPAD('X',25,'X'),RPAD('X',25,'X'),NULL);
    1 row created.
    SQL> COMMIT;
    Commit complete.
    SQL> SELECT * FROM STAGING;
            ID A                                        B                                        E
             1 XXXXXXXXXXXXXXXXXXXXXXXXXX               XXXXXXXXXXXXXXXXXXXXXXXXX
             2 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXXX
             3 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXX
    SQL> UPDATE  STAGING ST
      2  SET     ERR_LENGTH = 'Y'
      3  WHERE   EXISTS
      4          (
      5                  WITH    columns_in_staging AS
      6                  (
      7                          /* Retrieve all the columns names for the staging table with the exception of the primary key column
      8                           * and order them alphabetically.
      9                           */
    10                          SELECT  COLUMN_NAME
    11                          ,       ROW_NUMBER() OVER (ORDER BY COLUMN_NAME) RN
    12                          FROM    ALL_TAB_COLUMNS
    13                          WHERE   TABLE_NAME='STAGING'
    14                          AND     COLUMN_NAME != 'ID'
    15                          ORDER BY 1
    16                  ),      staging_unpivot AS
    17                  (
    18                          /* Using the columns_in_staging above UNPIVOT the result set so you get a record for each COLUMN value
    19                           * for each record. The DECODE performs the unpivot and it works if the decode specifies the columns
    20                           * in the same order as the ROW_NUMBER() function in columns_in_staging
    21                           */
    22                          SELECT  ID
    23                          ,       COLUMN_NAME
    24                          ,       DECODE
    25                                  (
    26                                          RN
    27                                  ,       1,A
    28                                  ,       2,B
    29                                  )  AS VAL
    30                          FROM            STAGING
    31                          CROSS JOIN      COLUMNS_IN_STAGING
    32                  )
    33                  /*      Only return IDs for records that have at least one column value that exceeds the length. */
    34                  SELECT  ID
    35                  FROM
    36                  (
    37                          /* Join the unpivoted staging table to the ALL_TAB_COLUMNS table on the column names. Here we perform
    38                           * the check to see if there are any differences in the length if so set a flag.
    39                           */
    40                          SELECT  STAGING_UNPIVOT.ID
    41                          ,       (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_A
    42                          ,       (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_B
    43                          FROM    STAGING_UNPIVOT
    44                          JOIN    ALL_TAB_COLUMNS ATC     ON ATC.COLUMN_NAME = STAGING_UNPIVOT.COLUMN_NAME
    45                          WHERE   ATC.TABLE_NAME='BASE'
    46                  )       A
    47                  WHERE   COALESCE(ERR_LENGTH_A,ERR_LENGTH_B) IS NOT NULL
    48                  AND     ST.ID = A.ID
    49          )
    50  /
    2 rows updated.
    SQL> SELECT * FROM STAGING;
            ID A                                        B                                        E
             1 XXXXXXXXXXXXXXXXXXXXXXXXXX               XXXXXXXXXXXXXXXXXXXXXXXXX                Y
             2 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXXX               Y
             3 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXX
    Hopefully the comments make sense. If you have any questions please let me know.
    This assumes the column names are the same between the staging and base tables. In addition, as you add more columns to this table you'll have to add more CASE statements to check the length, and update the COALESCE check as necessary.
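    If maintaining those CASE branches becomes a burden, a hedged alternative sketch (assuming the same STAGING/BASE tables as above) is to drive one dynamic UPDATE per base-table column straight from the dictionary, so nothing needs editing as columns are added:

        -- flag any staging row whose value exceeds the base column's length
        BEGIN
            FOR c IN (SELECT column_name, data_length
                      FROM   all_tab_columns
                      WHERE  table_name = 'BASE'
                      AND    column_name <> 'ID')
            LOOP
                EXECUTE IMMEDIATE
                    'UPDATE staging SET err_length = ''Y''' ||
                    ' WHERE LENGTH(' || c.column_name || ') > :len'
                    USING c.data_length;
            END LOOP;
            COMMIT;
        END;
        /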
    Thanks!

  • How do I write into multiple target tables in DIFFERENT schemas?

    It is easy to have a mapping that writes its results into two or more tables. I now need all these tables to be in different schemas!
    When I create a second warehouse target with a second location and configure this location to be a different schema in the database, validation tells me that everything is okay.
    When I generate it, there are several warnings; when I execute it, it doesn't work :( It complains that it cannot find <something>.
    I'm sorry, I don't have the error message at hand :(
    If you have an idea how I could have my tables in different schemas, please let me know!

    Art,
    Could it be that the target schema into which you install the runtime components does not have privileges on the tables in the other schemas? You need at least the right privileges (INSERT, UPDATE, DELETE) on the target tables in the other schemas for this to work. Beyond that there should be no problem, assuming your tables are in different modules tied to different locations.
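    A hedged way to verify this, assuming the runtime schema is called RT_USER (a placeholder): list the DML privileges it actually holds on other schemas' tables.

        SELECT table_schema, table_name, privilege
        FROM   all_tab_privs
        WHERE  grantee = 'RT_USER'
        AND    privilege IN ('INSERT', 'UPDATE', 'DELETE')
        ORDER  BY table_schema, table_name;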
    Thanks,
    Mark.

  • Application Processes unique across different schemas?

    I have a test and a dev schema on the same server. Both sets of code are identical in every way. However, when I try to call an on-demand application process in dev, nothing happens. If I call an identical on-demand application process in test, I get the results I was expecting.
    I changed the name of the application process in dev and I can now call it OK. I was just wondering: can two on-demand application processes have the same name if they are in different schemas? What else could be causing this issue?
    Thanks
    Tom

    Sorry,
    I copied the wrong code:
    function get_select_list_xml(pThis, pSelect){
        var l_Return = null;
        var l_Select = html_GetElement(pSelect);
        var get = new htmldb_Get(null, html_GetElement('pFlowId').value,
            'APPLICATION_PROCESS=CASCADING_SELECT_LIST_D', 0);
        get.add('AJAX_ORACLE_JOB_NUM', pThis.value);
        gReturn = get.get('XML');
        if (gReturn && l_Select) {
            var l_Count = gReturn.getElementsByTagName("option").length;
            l_Select.length = 0;
            // rebuild the select list from the options in the returned XML
            for (var i = 0; i < l_Count; i++) {
                var l_Opt_Xml = gReturn.getElementsByTagName("option")[i];
                appendToSelect(l_Select, l_Opt_Xml.getAttribute('value'),
                    l_Opt_Xml.firstChild.nodeValue);
            }
        }
        get = null;
    }
    This application process:
    BEGIN
        OWA_UTIL.mime_header ('text/xml', FALSE);
        HTP.p ('Cache-Control: no-cache');
        HTP.p ('Pragma: no-cache');
        OWA_UTIL.http_header_close;
        HTP.prn ('<select>');
        HTP.prn ('<option value="' || 1 || '">' || '- Stream ID -' || '</option>');
        -- streams whose batches match the selected site, via CHANGE_SITES or SITE_VIEW
        FOR c IN (
            SELECT DISTINCT b1.STREAM_ID empno, b1.STREAM_ID ename
            FROM   BILLINGS b1
            WHERE  b1.ORACLE_JOB_NUM = TO_NUMBER(:AJAX_ORACLE_JOB_NUM)
            AND    b1.STREAM_ID NOT LIKE ('TWS%E')
            AND    b1.STREAM_ID NOT LIKE ('%ERR')
            AND    (SELECT COUNT(S.BATCH_ID)
                    FROM   CHANGE_SITES S
                    WHERE  (regexp_like(UPPER(nvl(:ajax_site, '%')),
                                (SELECT site_expression
                                 FROM   print_sites
                                 WHERE  upper(print_site_desc) = upper(S.SITE)))
                            OR S.SITE IS NULL)
                    AND    S.ORACLE_JOB_NUM = b1.oracle_job_num
                    AND    S.STREAM_ID = b1.stream_id
                    AND    S.BATCH_ID = b1.batch_id) > 0
            UNION
            SELECT DISTINCT STREAM_ID empno, STREAM_ID ename
            FROM   BILLINGS b2
            WHERE  b2.ORACLE_JOB_NUM = TO_NUMBER(:AJAX_ORACLE_JOB_NUM)
            AND    b2.STREAM_ID NOT LIKE ('TWS%E')
            AND    b2.STREAM_ID NOT LIKE ('%ERR')
            AND    (SELECT COUNT(s2.batch_id)
                    FROM   change_sites s2
                    WHERE  s2.ORACLE_JOB_NUM = b2.oracle_job_num
                    AND    s2.STREAM_ID = b2.stream_id
                    AND    s2.BATCH_ID = b2.batch_id) = 0
            AND    (SELECT COUNT(sv.BATCH_ID)
                    FROM   SITE_VIEW SV
                    WHERE  (regexp_like(UPPER(nvl(:ajax_site, '%')),
                                (SELECT site_expression
                                 FROM   print_sites
                                 WHERE  upper(print_site_desc) = upper(SV.SITE)))
                            OR SV.SITE IS NULL)
                    AND    sv.oracle_job_num = b2.oracle_job_num
                    AND    sv.stream_id = b2.stream_id
                    AND    sv.batch_id = b2.batch_id) > 0)
        LOOP
            HTP.prn ('<option value="' || c.empno || '">' || c.ename || '</option>');
        END LOOP;
        HTP.prn ('</select>');
    END;

  • Staging area and target area

    Hi friends,
    Should the staging area and the target area be the same or different?
    If the same, what are the advantages and disadvantages?
    And if different, what are the advantages and disadvantages?
    I would appreciate your help.
    Thanks.

    The staging area is the place where the transformation takes place.
    Basically, the C$ and I$ tables get created there, and once processing is done they are dropped.
    By default the staging area and the target area are the same, but most DBAs do not like having the temp tables created in the target, as this causes performance and maintenance issues in the long term.
    Instead, you can have a dedicated schema created just for staging and use it in your interfaces.
    Also, when your target is not an RDBMS, you need to go for a dedicated staging area, which can be a separate schema in your database or the in-memory engine. You will run into performance issues with the in-memory engine as your staging area if the data volume increases.
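    A minimal sketch of such a dedicated staging schema (the name, password, and tablespace are placeholders to adapt):

        CREATE USER odi_stg IDENTIFIED BY change_me
          DEFAULT TABLESPACE stg_data
          QUOTA UNLIMITED ON stg_data;
        GRANT CREATE SESSION, CREATE TABLE TO odi_stg;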
    Thanks,
    Sutirtha

  • Staging area different from target - which KM???

    Hi,
    I need to transfer data from CSV to a DB (inserts of new rows only).
    I am using the following KMs:
    LKM File to SQL
    IKM SQL Control Append
    It's working fine. However, now I need to keep the staging area (where the C$, I$ etc. tables are created) in a separate schema.
    I have created a different schema for the staging area and selected "Staging area different from target" in the interface.
    However, I am not sure which KMs to use.
    Please let me know how to achieve this.
    Thanks,
    Rosh

    Hi
    First, you have to specify the work schema for temp tables when creating the physical schema in the data server for the target.
    Then select "staging different from target" in the overview of the interface.
    After doing this, the predefined KM you use in the interface will create its temp tables in the work schema.
    Now suppose you do not specify the work schema when creating the physical schema, but you have selected "staging different than target" in the interface: your C$ table will be created in your work schema, but the I$ table used by the IKM will be created in the target schema. You would then have to change where the IKM creates the I$ table (the work schema) by selecting the corresponding logical schema.
    So it is better to specify the work schema when creating the physical schema.
    Here is the privilege to be granted by the DBA:
    GRANT CREATE ANY TABLE TO odi_temp;
    Hope you got it.
    Thanks

  • Staging Area Schema

    My source and target table are the same, and I am trying to update certain columns in table XXX. I have set the staging area to be different from the target and specified a staging schema where I do have the CREATE TABLE privilege. But when I run this interface, it tries to create its temp table in the target schema, where I do not have the privilege to create tables.
    In the interface I have checked that all my mappings execute in the staging area, not the source or target.
    Any clue?
    Thanks

    Hi,
    It depends on the KM that you are using... If you are using IKM Incremental Update, then the process will create a temp table in the target because it needs it for the PK comparison.
    My suggestions to work around it:
    1) if your ETL allows it, work with IKM Control Append
    2) if you need an Incremental Update ELT, just point the "Work Schema" of the target physical schema to the staging-area schema, change the database connection user to the staging-area user, and give that user the necessary grants on the tables of the target schema (INSERT and UPDATE, I imagine), as sketched below.
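    A hedged sketch of the grants in suggestion 2, to be run as the target schema owner (XXX is the table named in the question; STG_USER is a placeholder for the staging-area user):

        GRANT SELECT, INSERT, UPDATE ON xxx TO stg_user;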
    Just to be sure, are you using Oracle DB as source, staging and target?

  • Work Schema Vs Staging Area

    Can anybody explain the difference between the work schema and the staging area in ODI?
    The work schema is what we specify while creating a physical schema in Topology Manager.
    Is the staging area the same as the work schema?
    Thanks

    Hi,
    you have 3 steps:
    1 - source
    2 - staging area
    3 - target
    In any mapping (in Designer) you may choose in which step an operation executes (according to the step's technology). The work schema is the place where the operations of that step are executed. So if you define:
    LOGICAL: MY_SRC_DB
    schema: ora_schema
    work_schema: ora_w_schema
    LOGICAL: MY_TARGET_DB
    schema: ora_t_schema
    work_schema: ora_wt_schema
    and you tell ODI to use a staging area different from the target (for example the SUNOPSIS MEMORY ENGINE), then:
    the operations on the source are executed in ora_w_schema,
    the operations on the staging area are executed in the SUNOPSIS MEMORY ENGINE's work schema,
    the operations on the target are executed in ora_wt_schema.
    At Oracle University the teacher told me to set work schema = schema. You might make them different only for security (or performance) reasons.
    I hope this is useful; my English is not so good ;P
    Decaxd

  • Staging area different from target

    I am using Oracle 10g as the target database. I want to upload a flat data file into an Oracle table.
    I uploaded it successfully using the LKM Oracle (SQL Loader) in an interface. In this case I am using the Oracle database as the staging area, so all the load is transferred to my database at processing time.
    I want to create the staging area on my file server instead. Please help me resolve this problem.

    A staging area must accept SQL syntax.
    A file server won't accept SQL syntax.
    So unless there is an RDBMS installed on the physical server where your file lives, I don't really see how you can.
    You can specify another data server for your staging area, or use the SUNOPSIS_MEMORY_ENGINE, which works in memory, but I don't think it is optimized.
    For this, in the Definition tab of your interface, choose "Staging Area different from the Target" and then select SUNOPSIS_MEMORY_ENGINE in the list box.

  • Maintaining the structure of a SQL Server staging area and dwh aligned with the Oracle data source

    Hi,
    I'm working in a context where the data source system, in Oracle, is a continuous work in progress. Every 1-3 weeks the data source system in the prod environment is updated with new tables or changed tables (new or altered columns). So it is important to apply the related changes to the data structures of the SQL Server staging area and dwh quickly.
    The issues to solve are:
    a. keeping the SQL Server data structure of the staging area aligned with the data structure of the Oracle data source;
    b. keeping the SQL Server data structure of the dwh aligned with the data structure of the SQL Server staging area.
    To solve these issues, it would be useful to have an automatic way to be alerted when a data structure change occurs, and a simple way to apply the changes to the SQL Server data structures.
    Any suggestions, please? Many thanks

    We use the Oracle CDC service in SQL Server. It has a flag that indicates schema changes happening at the Oracle end. We track the schema changes with it and then apply them on the SQL Server side. As regards automation, you can use Biml as suggested, or .NET scripts inside a script task.
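    If CDC is not an option, a hedged low-tech alternative on the Oracle side is to snapshot the source dictionary and diff it on a schedule; SRC_COLUMNS_SNAPSHOT here is a hypothetical copy of ALL_TAB_COLUMNS refreshed after each comparison, and the owner name is a placeholder.

        -- columns that are new or altered since the last snapshot
        SELECT c.table_name, c.column_name, c.data_type, c.data_length
        FROM   all_tab_columns c
        WHERE  c.owner = 'SRC'
        MINUS
        SELECT s.table_name, s.column_name, s.data_type, s.data_length
        FROM   src_columns_snapshot s;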
    Visakh

  • ETD and Base line are not matching

    Hi All,
    My requirement: on the production server, for some invoices the estimated time of departure (ETD) and the baseline date do not match.
    What could be the reason behind this? Can you please advise how to approach this problem?
    Thanks in advance
    jaya.G

    Hi,
    could you please elaborate on the issue?
    Thanks,
    Srinu
