Warehouse table in sync with OLTP table

I have a quick question. I have a table called orders and another table called orders_warehouse. We warehouse every day's data from orders to orders_warehouse. When we add a new column to the orders table, is there a magic way to add that column to the warehouse table as well? The problem we have is that developers tend to add new columns to the orders table but not to the warehouse table, so our warehouse process fails with a column mismatch between the warehouse and non-warehouse tables. Is there a way for all new DDL on the orders table to be applied automatically to the orders_warehouse table as well?
Thanks for your help

I think you need some kind of change management process implemented in your organization. In warehousing I have also run into cases where the transaction system changed a table structure but the warehouse was not changed, because the transaction system's developers were not fully aware of the impact of their change. I am not sure how your organization manages metadata and data profiling, but if both are well managed you can streamline this kind of process with the tools you are using.
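
As a rough, hedged sketch (assuming Oracle and that both tables sit in the same schema, which the post does not state), you can at least catch the drift before the load fails by diffing the data dictionary:
-- Columns present in ORDERS but missing (or differently typed) in ORDERS_WAREHOUSE
SELECT column_name, data_type
FROM   user_tab_columns
WHERE  table_name = 'ORDERS'
MINUS
SELECT column_name, data_type
FROM   user_tab_columns
WHERE  table_name = 'ORDERS_WAREHOUSE';
Any rows returned are columns that were added to orders but not to orders_warehouse; you could run this as a pre-check in the warehouse job, or have a DDL event trigger raise an alert whenever orders is altered.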

Similar Messages

  • Copy the structure of a table to another with another table name.

    How do I copy the structure of a table to another table with a different name?
    I.e., I want an emp table with the same values/structure copied to another table called my_employee.
    How can this be done?

    create table my_emp as select * from emp;
    If you do not want the data to be copied then do the following:
    create table my_emp as select * from emp
    where 1=2;
    Avanti.
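
    One caveat (my addition, not part of the original reply): CREATE TABLE ... AS SELECT copies only the column definitions and NOT NULL constraints, not indexes, other constraints or triggers. If you need the complete definition, DBMS_METADATA is the usual route, for example:
    -- Sketch: extract the full DDL for EMP, then edit the table name before running it
    SET LONG 100000
    SET PAGESIZE 0
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', USER) FROM dual;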

  • BI - ABAP To pick the field from One table and link with 2nd table

    Hello, I am stuck writing the correct ABAP code. The requirement from my client, BMW California, is:
    1. A list of invoices (BELNR) that have tcode = 'FB01' or 'FBVB' in table BKPF. The Account Number field (BELNR) is available in the 0FI_AP_4 DataSource.
    2. Link the above invoice numbers (BELNR with tcode = 'FB01' or 'FBVB' in table BKPF) to the BMW custom table I_BMW_WI via the field "PAT ID", which is the concatenation of OBJNR (Object Number), Invoice Number and Purchase Year, picking only those PAT IDs where the "TYPEID" value = 'BMW' in table I_BMW_WI.
    To solve number 1, please fix my ABAP code, and help me write the code to solve number 2 and link #1 and #2.
    Please help me soon.
    Code # 1
    case I_datasource.
    WHEN '0FI_AP_4'.
    loop at C_t_data into l_s_DTFIAP_3.
    l_tabix = sy-tabix.
    clear I_BMW_WI.
    if sy-subrc = 0.
    select single * from BKPF into I_BKPF where BELNR = l_s_DTFIAP_3-BELNR.
    LOOP AT I_BKPF into I_FINAL WHERE
    Code below is not working
    (I_BKPF-TCODE = 'FV50') OR (I_BKPF-TCODE = 'FB01')
    I_FINAL-BELNR = I_BKPF-BELNR
    modify I_FINAL.
    endloop
    modify C_t_data from l_s_DTFIAP_3  index l_tabix.
    endif.
    endloop.
    endcase
    Code # 2 -
    Please help with templates.
    Thanks
    Soniya Kapoor

    Hi,
    Regarding Code #1:
    First of all, the key of BKPF consists of BUKRS, BELNR and GJAHR, so I think you should use all key fields in your SELECT statement:
    select single * from BKPF into I_BKPF
    where BUKRS = l_s_DTFIAP_3-BUKRS and
          BELNR = l_s_DTFIAP_3-BELNR and
          GJAHR = l_s_DTFIAP_3-GJAHR.
    And why do you test SY-SUBRC right after CLEAR?
    Your IF statement should come after the SELECT...
    And last but not least, SELECT SINGLE always returns at most one record, so what is the purpose of the LOOP statement?
    Try following code:
    case I_datasource.
    WHEN '0FI_AP_4'.
      loop at C_t_data into l_s_DTFIAP_3.
        l_tabix = sy-tabix.
        select single * from BKPF into I_BKPF
          where BUKRS = l_s_DTFIAP_3-BUKRS and
                BELNR = l_s_DTFIAP_3-BELNR and
                GJAHR = l_s_DTFIAP_3-GJAHR.
        if sy-subrc = 0.
          if ( I_BKPF-TCODE = 'FV50' ) OR
             ( I_BKPF-TCODE = 'FB01' ).
            I_FINAL-BELNR = I_BKPF-BELNR.
            modify I_FINAL.
          endif.
          modify C_t_data from l_s_DTFIAP_3 index l_tabix.
        endif.
      endloop.
    endcase.
    regards
    Krzys

  • Removal of data from 1 table after comparing with other table

    Hi,
    I have 2 tables that share the same primary key, WO_ID. The first table, wrk_ord, has no redundant data; the second table, wo_audit, has some redundant data. The tables are related through WO_ID. I want to remove the redundant data from wo_audit so that the set of unique WO_IDs is the same in both tables. Both tables are very large; there are about 31 million records keyed by WO_ID. I ran this query:
    delete from wo_audit where wo_id not in (select wo_id from wrk_ord);
    This query throws ORA-01555. I just want to know how I can optimize it.
    Thanks.

    Hi,
    delete from wo_audit where wo_id not in (select wo_id from wrk_ord);
    AFAIK, you are not removing redundant data; you are removing the data that does not exist in wrk_ord. Check that this is really what the business wants.
    Tune the value of the UNDO_RETENTION parameter; to size it, check the output of the query below:
    select max(maxquerylen) from v$undostat;
    - Pavan Kumar N
    Edited by: Pavan Kumar on Apr 8, 2011 1:36 PM
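
    As an additional, hedged sketch (not from the original reply): if tuning undo retention is not enough, a common workaround for ORA-01555 on very large deletes is to do the work in smaller committed batches, for example:
    -- The batch size of 10,000 is arbitrary; each transaction then uses a bounded amount of undo.
    BEGIN
      LOOP
        DELETE FROM wo_audit wa
         WHERE NOT EXISTS (SELECT 1 FROM wrk_ord wo WHERE wo.wo_id = wa.wo_id)
           AND ROWNUM <= 10000;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
      END LOOP;
      COMMIT;
    END;
    /
    Also make sure wrk_ord.wo_id is indexed, otherwise every batch re-scans the whole driving table.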

  • Query to determine what tables are associated with a parent table

    Hello -
    How would I query the data dictionary to determine which tables are referenced by a parent table via a PK/FK relationship?
    Thanks in advance!
    Mike

    Hi,
    Try this:
    UNDEFINE table
    UNDEFINE owner
    ACCEPT owner CHAR PROMPT 'Enter Owner: '
    ACCEPT table CHAR PROMPT 'Enter Table: '
    COLUMN y new_value sid NOPRINT
    SELECT name||'_'||TO_CHAR(sysdate, 'ddmonyy_hh24miss') y FROM v$database;
    SPOOL constraints_&owner..&table..&sid..txt
    SELECT a.constraint_name cons_name,
           DECODE(a.constraint_type,
                  'C', 'Check',
                  'P', 'Primary Key',
                  'U', 'Unique Key',
                  'R', 'Referential Integrity',
                  'V', 'With Check Option',
                  'O', 'With Read Only') cons_type,
           a.index_name, a.owner||'.'||a.table_name tab_name, a.status,
           DECODE(a.r_owner||'.'||a.r_constraint_name, '.', NULL, a.r_owner||'.'||a.r_constraint_name) rconstraint,
           f.constraint_name fconstraint, DECODE(f.owner||'.'||f.table_name, '.', NULL, f.owner||'.'||f.table_name) ftable,
           f.status fstatus
    FROM dba_constraints a, dba_constraints f
    WHERE a.owner = f.r_owner(+) AND
          a.constraint_name = f.r_constraint_name(+) AND
          a.owner LIKE UPPER('&owner') AND
          a.table_name LIKE UPPER('&table')
    ORDER BY 3, 1;
    SELECT constraint_name cons_name, owner||'.'||table_name||'.'||column_name col_name, position
    FROM dba_cons_columns
    WHERE owner LIKE UPPER('&owner') AND
          table_name LIKE UPPER('&table')
    ORDER BY 1, 2;
    SPOOL OFF
    UNDEFINE table
    UNDEFINE owner
    PROMPT
    PROMPT ******************************************** DEPENDENCIES ************************************************************
    PROMPT
    UNDEFINE object
    UNDEFINE owner
    UNDEFINE type
    ACCEPT owner CHAR PROMPT 'Enter Owner: '
    ACCEPT object CHAR PROMPT 'Enter Object: '
    ACCEPT type CHAR PROMPT 'Enter Type: '
    COLUMN REFERENCED_LINK_NAME FORMAT a10
    PROMPT ******************************************** OBJECTS WITH DIRECT REFERENCE
    SELECT owner||'.'||name object, type , referenced_owner||'.'||referenced_name robject, referenced_type rtype,                    dependency_type, referenced_link_name
    FROM dba_dependencies
    WHERE owner LIKE UPPER('&owner') AND
          name LIKE UPPER('&object');
    execute deptree_fill('&type','&owner','&object');
    PROMPT ********************************************  DEPENDENCIES TREE
    SELECT nested_level, schema||'.'||name object, type, seq#
    FROM deptree
    ORDER BY seq#;
    UNDEFINE object
    UNDEFINE owner
    UNDEFINE type
    Cheers,
    Francisco Munoz Alvarez
    http://www.oraclenz.com
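
    If you only need a direct answer to the question, here is a minimal sketch of my own (not part of Francisco's script; &owner and &parent_table are substitution variables you would be prompted for): the child tables that reference a given parent table can be listed by joining dba_constraints to itself on the referenced constraint.
    -- Every foreign key that points at the parent table's primary/unique key
    SELECT c.owner, c.table_name AS child_table, c.constraint_name AS fk_name,
           p.constraint_name AS referenced_constraint
    FROM   dba_constraints c
    JOIN   dba_constraints p
           ON  p.owner = c.r_owner
           AND p.constraint_name = c.r_constraint_name
    WHERE  c.constraint_type = 'R'
    AND    p.owner = UPPER('&owner')
    AND    p.table_name = UPPER('&parent_table')
    ORDER  BY c.owner, c.table_name;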

  • Fact table is joining with Other table having values for measures

    I have one fact table named Fact1, for which I am going to create some calculated measures based on the values in Table2.
    How should the physical join and the logical join between them be defined?
    Earlier we had those values in Fact1 only.
    We created Table2 because there was some confusion about the aggregation in Fact1.
    We are unsure whether to model Table2 as a dimension or as a fact, since it has some measures but no foreign keys.
    How should we proceed? Please help us!

  • Project Server 2013 : Report table not updated with Published tables

    Hello Team,
    We have found an issue where the Published and Reporting tables differ in the Work attribute. Is there any way to refresh the Reporting table in one go?
    Thanks.

    If this issue is specific to one or a few projects, using Save for Sharing would be much easier.
    If the issue affects all projects then, as NicoOosthuysen recommended, an RDB refresh would be the option. Having said that, take extra care before you start the RDB refresh, since it is a very resource-intensive and time-consuming activity.
    Steps for Save for sharing
    1. Open MS Project Pro 2013.
    2. Open the project plan from the server.
    3. Save it locally without changing the File Name, using File > Share > Save For Sharing. Provide a local location. Do not close Project Professional.
    4. Save the project back to the server using File > Save As (the Project Name should be grayed out).
    5. Save and then publish the project plan.
    Hrishi Deshpande Senior Consultant

  • How to check Index is in sync with Table

    Hi DBAS,
    OS = RHEL 4
    DB = 10.2.0.4
    How do we sync an index with a table? Can we sync an index if it does not have the same number of records as the table?
    Thanks,
    Hari

    The question does not make sense.
    An index is always "in sync" with a table.
    The only context that talking about "synching" an index makes sense is with a "special" Oracle Text index.
    if an index is not having the same number of records as the table?
    If you're talking about NUM_ROWS in ALL/USER/DBA_TABLES and ALL/USER/DBA_INDEXES, then these are just statistics: not necessarily 100% accurate, and possibly gathered at different times.
    See http://docs.oracle.com/cd/E11882_01/server.112/e25789/indexiot.htm#CNCPT811:
    The database automatically maintains and uses indexes after they are created.
    The database also automatically reflects changes to data, such as adding, updating,
    and deleting rows, in all relevant indexes with no additional actions required by users.
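
    To illustrate the statistics point, here is a small sketch of my own (MY_TABLE is a placeholder): NUM_ROWS for the table and its indexes can differ simply because the statistics were gathered at different times.
    SELECT t.table_name, t.num_rows AS table_num_rows, t.last_analyzed AS table_analyzed,
           i.index_name, i.num_rows AS index_num_rows, i.last_analyzed AS index_analyzed
    FROM   user_tables t
    JOIN   user_indexes i ON i.table_name = t.table_name
    WHERE  t.table_name = 'MY_TABLE';
    Re-gathering statistics (DBMS_STATS.GATHER_TABLE_STATS with CASCADE => TRUE) brings the numbers back in line; the index itself never needs "syncing".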

  • MTL Table name mapped with IC_ITEM_MST_B in oracle apps r12

    Hi Experts,
    Can anyone suggest the MTL table name mapped to the IC_ITEM_MST_B table in Oracle Apps, with all the columns?
    thanks,

    Response to: I don't see this option "Periodic Sequences in Format" under "Payment Instruction Format" table. I can see only Payment File Information.
    You are maybe missing this (from Implementation Guide):
    "Note: If no payment system is selected or entered for the Payment
    System field in the Payment System subtab of the Update Payment
    Process Profile page, then the Periodic Sequences in Format region is
    not displayed."
    The payment system must be selected at the time you create the profile. It does not seem to allow adding afterwards.
    Edited by: user11974306 on Jan 25, 2013 1:49 PM

  • TimesTen synchronize with OLTP-compressed table

    Hi,
    We are looking at potentially using TimesTen as a front-end data store for our Oracle database. The database is at version 11.2.0.3, and many tables in it are OLTP-compressed (Advanced Compression). Is synchronizing with compressed tables supported? Are there any known issues with doing that?
    Thanks,
    George

    Thanks Chris, for your quick reply.
    George

  • Sync with manually created tables or selected publication items

    Hi,
    I have the following situation:
    Server side:
    Table1, Table2,Table3,Table4,Table5,Table6
    Client type1 :
    Table1, Table2,Table3,Table4
    Client type2:
    Table3,Table4,Table5,Table6
    so I can't create two different publications for the clients.
    I have one publication with Table1, Table2, Table3, Table4, Table5, Table6.
    In this situation, is it possible to synchronize with a manually created Lite database and tables, or does the Mobile Server only know how to sync with snapshots it created itself?
    The sync process reports "sync ok" but nothing happens.
    And one more question:
    Can I use some sync API to create a database with snapshots 1,2,3,4 for client type 1 and snapshots 3,4,5,6 for client type 2?
    Today it works in the following way:
    1) install the mobile client on the mobile devices
    2) install mobile applications type 1 and 2 on the mobile devices
    3) call msync to create a database with snapshots 1-6 on all devices
    4) client type 1 does not use tables 5 and 6, and syncs only tables 1,2,3,4
    5) client type 2 does not use tables 1 and 2, and syncs only tables 3,4,5,6

    Only tables defined in the application and synchronised down to the client will be synchronised; manually created tables on the client are ignored (the list of tables to be synchronised is controlled by the table c$table_list in the concli database).
    For your requirement, either:
    use one application for both client types, but ignore the 'unused' tables in the client software - advantage: easy; disadvantage: overhead in synchronising and composing data that is not needed;
    or
    create two separate applications. There is no problem with using the same table in multiple applications (we do that for reference data all the time; sequences cannot be shared, but there is a workaround for that), and then associate each client user with one or the other application - advantage: better meets the requirement; disadvantage: maintenance, and different database names in the client configuration.

  • Function Modue with Dynamic Table as output parameter

    Hi experts,
    I have a function (below) that dynamically reads data from the table specified as the input parameter; the data is stored in <ft>.
    How can I return <ft> as an output (table) parameter of the function module?
    function
    import parameter  - > IC_TABLE
    source code
    data :   lt_OPTIONS   type standard table of RFC_DB_OPT,
             lt_fields    type standard table of rfc_db_fld,
             lt_data      type standard table of tab512,
             la_rfcdata   type tab512,
             la_rfcfields type rfc_db_fld,
             lr_dref      type ref to data.
    field-symbols: <ft>         type table.
    field-symbols: <structure>  type any.
    field-symbols: <field_to>   type any.
    field-symbols: <field_from> type any.
    CALL FUNCTION 'RFC_READ_TABLE'
      EXPORTING
        query_table          = IC_TABLE
        DELIMITER            = ' '
        NO_DATA              = ' '
        ROWSKIPS             = 0
        ROWCOUNT             = 0
      TABLES
        OPTIONS              = lt_OPTIONS
        fields               = lt_fields
        data                 = lt_data
      EXCEPTIONS
        TABLE_NOT_AVAILABLE  = 1
        TABLE_WITHOUT_DATA   = 2
        OPTION_NOT_VALID     = 3
        FIELD_NOT_VALID      = 4
        NOT_AUTHORIZED       = 5
        DATA_BUFFER_EXCEEDED = 6
        OTHERS               = 7.
      create data lr_dref type table of (ic_table).
      assign lr_dref->* to <ft>.
    " Fill the data read via RFC_READ_TABLE into the dynamically typed table <ft>.
      assign local copy of initial line of <ft> to <structure>.
      loop at lt_data into la_rfcdata.
        loop at lt_fields into la_rfcfields.
          assign component sy-tabix of structure <structure> to <field_to>.
          if sy-subrc is initial.
            assign la_rfcdata+la_rfcfields-offset(la_rfcfields-length)
              to <field_from>.
            <field_to> = <field_from>.
          endif.
        endloop.
        append <structure> to <ft>.
      endloop.
    Thanks in advance
    Martin

    Hi Martin,
    parameters with generic types are not allowed, so TYPE ANY TABLE etc. will not work. As Alex already said, you could return a reference to your table.
    Your parameter should be typed like:
    re_table type ref to data.
    At the end of your function module get a reference of your table into your parameter:
    get reference of <ft> into re_table.
    After the call of your function module you can now handle and work with your table as wished:
    * Declaration
    data: re_table type ref to data.
    field-symbols: <my_table> type standard table.
    * Assign reference to fieldsymbol
    assign re_table->* to <my_table>.
    if sy-subrc NE 0.
    " Error: Could not assign reference
    endif.
    Best regards,
    Fabian

  • Best way to update an OLTP table ?

    Hi,
    We have an OLTP table with a huge amount of data.
    We need to update a status column from 'N' to 'Y' for almost 70% of the rows, based on some condition.
    This table may be accessed by hundreds of sessions at a time.
    So what is the best way to do this?
    Rgds,
    Rup

    If someone is using the table, DDL cannot be done (or at least you might have to wait a long time).
    A quick test...
    SQL> create table bank
      2  (id number primary key
      3  ,acc number
      4  ,ind varchar2(1)
      5  )
      6  /
    Table created.
    SQL> insert into bank
      2  select rownum
      3       , rownum * 10
      4       , 'N'
      5    from all_objects
      6   where rownum <= 10
      7  /
    10 rows created.
    SQL> commit;
    Commit complete.
    SQL> update bank
      2     set acc = -10
      3   where id = 10
      4  /
    1 row updated.
    Now, in a new session:
    SQL> alter table bank
      2  add new_ind varchar2(1)
      3  /
    alter table bank
    ERROR at line 1:
    ORA-00054: resource busy and acquire with NOWAIT specified
    Well, not a long time... but anyway, you can't do DDL while someone is working on the table.
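
    Coming back to the original question, here is a hedged sketch of my own (not from the thread; the table name, condition, chunk size and parallel level are all placeholders): for a one-off update of roughly 70% of a big, busy table, one common approach on 11.2 or later is to break the update into rowid chunks with DBMS_PARALLEL_EXECUTE, so each transaction stays small and locks are held only briefly.
    BEGIN
      DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'upd_status');
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
        task_name   => 'upd_status',
        table_owner => USER,
        table_name  => 'BIG_OLTP_TABLE',
        by_row      => TRUE,
        chunk_size  => 10000);
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => 'upd_status',
        sql_stmt       => 'UPDATE big_oltp_table SET status = ''Y''
                            WHERE status = ''N'' AND rowid BETWEEN :start_id AND :end_id',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 4);
      DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'upd_status');
    END;
    /
    If DBMS_PARALLEL_EXECUTE is not an option, a plain PL/SQL loop that updates and commits in batches gives much the same effect, just serially.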

  • Weight factors in a many-to-many relationship with bridge table

    Hi, I have the same N:N relationship schema of this link:
    http://www.rittmanmead.com/2008/08/28/the-mystery-of-obiee-bridge-tables/
    In my bridge table I have a weight factor for every couple (admission,diagnosis). If I aggregate and query in Answers these columns:
    DIAGNOSIS | ADMISSIONS_COSTS
    every single diagnosis gets the sum of the WHOLE admission cost it refers to, not its contribution to it (for example 0.30 as the weight factor). The result is an ADMISSION_COSTS total larger than the ADMISSION_COSTS total at the lowest level of detail, because the same cost is summed many times.
    How could I use my weight factor to calculate each diagnosis's correct contribution to its admission? In BI Admin I tried to build a calculated logical column based on a physical column, but in the Expression Builder I can only select the ADMISSION_COST measure physical column; it doesn't let me pick the weight factor from the bridge table.
    Thanks in advance!

    I'm developing a CS degree project with two professors, Matteo Golfarelli and Stefano Rizzi, who developed the Dimensional Fact Model for data warehouses and have written many books about it.
    They followed the Kimball approach to N:N relationships and its bridge table concept, so when I told them that OBIEE has this construct they were very happy.
    That happiness stopped when I said that bridge tables only connect fact tables to dimension tables, and that to create an N:N relationship between levels at a higher aggregation we should use logical joins, as you said in your blog. I need to extract metadata concepts from the UDML export language, and for N:N I can only do that through bridge table analysis; I can't extract and identify an N:N level relationship from a multiple-join schema as in your blog... that is the only limitation of your solution for our project!
    PS: sorry for my english, I'm italian!
    thanks for the replies!

  • Can IdM use TimeStamp files in its Active Sync for Database table ?

    I have an IdM 7.1 implementation that I inherited
    and have a Database Table resource adapter with Active Sync.
    Here are a few ways to set up Active Sync, but I want to explore the latter:
    -Static Search Predicate (clause)
    You can use a flag (column) in your data table. Does not require any mapping, and presumably whatever process you're kicking off would turn off the flag, so the record is not picked up subsequently.
    -Last Fetched Predicate (documented in the Resource Reference, under Database Table),
    Normally you'd be doing a comparison based on timestamps, with a mapping between a timestamp User Extended Attribute and a timestamp database column.
    In this implementation I do not see a User Extended Attribute (UXA), but I do see a 0-byte timestamp file on the server. I did not see anything like this discussed in the docs, but my hypothesis is that this file is being used, or has been configured somehow. I wonder if I am right?
    Let's call it 'MyTS'
    I see MyTS both in the Active Sync logs and in the 'XML Data' object, resource_SYNC, that A/S creates. Maybe this is a hidden feature, mostly undocumented, or left over from an earlier version. Does anyone care to offer a suggestion or explanation? Your thoughts would be welcome.
    thanks

    'MyTS' would be a resource attribute. Have a look in the schema map for your resource.
    It doesn't need to be, and would not normally be, a user extended attribute.
