Copy from staging table to multiple tables

We are using an SSIS package to fast-load our data source into a staging table for processing.
The reason we are using a staging table is that we need to copy the data from staging into our actual DB tables (4 in total), and the insertion order matters because we have foreign key relationships.
When copying the data from our staging table, should we enumerate through all the records and run an insert-select for each row, or is there a more efficient way to do this?

Our raw data source is an .mdb file that we fast-load into SQL Server with SSIS, and we are looking to transform the data set into three tables (using a stored procedure):
Site (SiteID, Name)
Gas (ID, Date, Time, GasField1, GasField2, ..., SiteID)
GenSet (ID, Date, Time, GenSetField1, GenSetField2, ..., SiteID)
Each record in our raw data source contains a Name field which identifies the Site. We only need to add a new site to the Site table if it does not already exist; this is already coded and working using insert-select and NOT EXISTS.
We now need to work through all records, extract one subset of columns for the Gas table and another subset for the GenSet table, and link each row to the associated SiteID using the Name field.
The insertion order must be the Site table first, then the remaining tables.
Are you saying it would be better to transform this data using SSIS and not to use a staging table and stored procedure?
I would prefer the staging table + stored procedure approach here, as it allows set-based logic and will be faster performance-wise.
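A minimal T-SQL sketch of that set-based approach (the staging table name dbo.Staging and the raw column names are assumptions, the Gas/GenSet column lists are abbreviated, and SiteID is assumed to be an IDENTITY column):

-- 1) Add any sites that do not exist yet (the part already working)
INSERT INTO dbo.Site (Name)
SELECT DISTINCT s.Name
FROM   dbo.Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Site AS t WHERE t.Name = s.Name);

-- 2) Gas rows: join staging to Site on Name to pick up the SiteID
INSERT INTO dbo.Gas ([Date], [Time], GasField1, GasField2, SiteID)
SELECT s.[Date], s.[Time], s.GasField1, s.GasField2, t.SiteID
FROM   dbo.Staging AS s
JOIN   dbo.Site    AS t ON t.Name = s.Name;

-- 3) GenSet rows: same pattern with the GenSet column subset
INSERT INTO dbo.GenSet ([Date], [Time], GenSetField1, GenSetField2, SiteID)
SELECT s.[Date], s.[Time], s.GenSetField1, s.GenSetField2, t.SiteID
FROM   dbo.Staging AS s
JOIN   dbo.Site    AS t ON t.Name = s.Name;

Running the three statements in that order inside one stored procedure satisfies the foreign key dependency without any row-by-row looping.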

Similar Messages

  • STO Delivery Date is not copied from Allocation Table Site Date

    Hi,
    I have a problem: when a Stock Transfer Order is created based on the Allocation Table, the delivery date is not copied from the Allocation Table. E.g. in the Allocation Table the delivery date is 26.09.2009, but in the STO the delivery date is 06.11.2009.
    I have checked the configuration and also checked the stock; there is stock available to be delivered on 26.09.2009.
    Can someone please give me a hint to find out what is missing or wrong, and how to fix this?
    I would greatly appreciate your suggestions.
    Kind Regards,
    2tea

    I found the problem finally.

  • Why has data not been copied from the staging DB to the reporting DB even though the timer job status is success?

    Hello,
    I am facing the following issue with the Web Analytics service for the specific date 2014-08-18:
    I verified that the following services ran and their status was success on that date (2014-08-18):
    Web Analytics Service
    Microsoft Usage Data Import
    Microsoft SharePoint Foundation Usage Data Processing
    The latest Web Analytics settings were as follows:
    LastAggregationDateId: 20140817 (the date id of the last successfully completed data aggregation)
    LastAggregationTime: 2014-08-18T00:00:13.210 (the time of the last successfully completed data aggregation)
    LastBestBetSuggestionAggregationDateId: 20140818 (the date id of the last successfully completed best bet suggestion data aggregation; this date id should not be reset to an earlier date manually)
    LastDataCopyTime: 2014-08-17T23:59:09.693 (the last time data were copied from the staging databases to this reporting database for aggregation)
    The above settings indicate that data was not copied from the staging DB to the reporting DB on 2014-08-18.
    To verify this, I ran the following check:
    SELECT COUNT (*) FROM [dbo].[WATrafficAggregationByDate] WITH (NOLOCK)
    WHERE  [DateId] = 20140818
    The above SQL query returned a count of zero.
    Can anyone please let me know why data has not been copied from the staging DB to the reporting DB even though the timer job status is success?
    Your help will be much appreciated.
    Thanks and Regards,
    Dipti Chhatrapati

    The image itself has the answer
    The user requests a page, and the action gets picked up by the Web Analytics service that runs on SharePoint.
    The Web Analytics service logs this in the “.usage” files.
    A Timer job called “Microsoft SharePoint Foundation Usage Data Import” by default runs every 30 minutes. It imports the logs into the staging database.
    Each night the “Microsoft SharePoint Foundation Usage Data Processing” Timer job runs and transforms the data into the reporting database.
    The last run time of the import from staging and the aggregation is logged in the Settings table in the Reporting database.
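    If you want to cross-check those timestamps yourself, you can query that Settings table in the Reporting database directly. A minimal sketch (the table name comes from the description above, but the schema and exact column layout are assumptions, so SELECT * is used):

    SELECT *
    FROM   [dbo].[Settings] WITH (NOLOCK)

    Compare the values returned with the LastAggregationDateId and LastDataCopyTime values listed in the question.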
    The Usage and Health Data Collection Service Application collects data about the usage and health of your farm.
    This information is used for health monitoring and is also required for running the Web Analytics service. If you do not have a Usage and Health Data Collection Service Application, or your Usage and Health Data Collection Proxy is stopped, you will not see any data in the Web Analytics reports.
    Regards Chen V [MCTS SharePoint 2010]

  • Problem during  Data Warehouse Loading (from staging table to Cube)

    Hi All,
    I have created a staging module in OWB to load my flat files into my staging tables, and a warehouse module to load the staging tables into the dimension and cube I have created.
    My scenario:
    I have a temp_table_transaction into which my flat files are loaded. This table was loaded with 168,271,269 records from the flat files.
    I have created a mapping in OWB that takes temp_table_transaction, joins it with other tables, applies some expressions and convert functions, and fills a new table called stg_tbl_transaction in my staging module. Running this mapping takes 3 hours and 45 minutes with this mapping configuration:
    Default operating mode in the mapping configuration's runtime parameters = Set based
    My dimension filled correctly, but I have two problems when I want to transfer my staging table to my cube:
    Problem #1:
    I have created a cube called transaction_cube with OWB and it generated and deployed correctly.
    I have created a map to fill my cube from the 168,271,268 records in the staging table stg_tbl_transaction and deployed it to the server (my cube map's operating mode is set based),
    but after running this map it had not completed after 9 hours and I was forced to cancel it by killing its sessions. I want to know whether this amount of time is acceptable for loading this volume of data, or whether we should expect to spend even longer. Please let me know if anybody has seen a similar issue.
    Problem #2:
    To test, I created a map configured with set-based operating mode, selected stg_tbl_transaction (168,271,268 records) as the source, and created another table to load the data into. I wanted to measure the time this simple map should take, but after 5 hours the data had still not loaded into the new table. I want to know where my problem is. Should I have set something in the map configuration or somewhere else? Please guide me on these problems.
    CONFIGURATION OF MY SERVER:
    I run OWB on a two-socket Xeon 5500 series server with 192 GB RAM and disks in a RAID 10 array.
    Regards,
    Sahar

    For all of you:
    It is possible to load from an InfoSet to a cube; we did it and it was OK.
    Data really are loaded from the InfoSet (cube + master data) into the cube.
    When you create a transformation under a cube, the InfoSet is proposed as a source, and it works fine.
    Now the process is no longer operational and I don't understand why.
    Loading from an InfoSet to a cube is possible; I can send you screenshots if you want.
    Christophe

  • How can I INSERT INTO from Staging Table to Production Table

    I’ve got a Bulk Load process which works fine, but I’m having major problems downstream.
    Almost everything is Varchar(100), and this works fine. 
    Except for these fields:
    INDEX SHARES, INDEX MARKET CAP, INDEX WEIGHT, DAILY PRICE RETURN, and DAILY TOTAL RETURN
    These fields must be some kind of numeric, because I need to perform sums on them.
    Here’s my SQL:
    CREATE TABLE [dbo].[S&P_Global_BMI_(US_Dollar)] (
        [CHANGE]              VARCHAR(100),
        [EFFECTIVE DATE]      VARCHAR(100),
        [COMPANY]             VARCHAR(100),
        [RIC]                 VARCHAR(100),
        -- Etc.
        [INDEX SHARES]        NUMERIC(18, 12),
        [INDEX MARKET CAP]    NUMERIC(18, 12),
        [INDEX WEIGHT]        NUMERIC(18, 12),
        [DAILY PRICE RETURN]  NUMERIC(18, 12),
        [DAILY TOTAL RETURN]  NUMERIC(18, 12)
    )
    From the main staging table, I’m writing data to 4 production tables.
    CREATE TABLE [dbo].[S&P_Global_Ex-U.S._LargeMidCap_(US_Dollar)] (
        [CHANGE]              VARCHAR(100),
        [EFFECTIVE DATE]      VARCHAR(100),
        [COMPANY]             VARCHAR(100),
        [RIC]                 VARCHAR(100),
        -- Etc.
        [INDEX SHARES]        FLOAT(20),
        [INDEX MARKET CAP]    FLOAT(20),
        [INDEX WEIGHT]        FLOAT(20),
        [DAILY PRICE RETURN]  FLOAT(20),
        [DAILY TOTAL RETURN]  FLOAT(20)
    )
    INSERT INTO [dbo].[S&P_Global_Ex-U.S._LargeMidCap_(US_Dollar)]
    SELECT [CHANGE],
           -- Etc.
           [DAILY TOTAL RETURN]
    FROM   [dbo].[S&P_Global_BMI_(US_Dollar)]
    WHERE  ISNUMERIC([Effective Date]) = 1
      AND  [CHANGE] IS NULL
      AND  [COUNTRY] <> 'US'
      AND  ([SIZE] = 'L' OR [SIZE] = 'M')
    The Bulk Load is throwing errors like this (unless I make everything Varchar):
    Bulk load data conversion error (truncation) for row 7, column 23 (INDEX SHARES).
    Msg 4863, Level 16, State 1, Line 1
    When I try to load data from the staging table to the production table, I get this.
    Msg 8115, Level 16, State 8, Line 1
    Arithmetic overflow error converting varchar to data type numeric.
    The statement has been terminated.
    There must be an easy way to overcome this, right?
    Please advise!
    Thanks!!
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    Nothing is returned. Everything is VARCHAR(100). The problem is this:
    If I use FLOAT(18) or REAL, I get exponential numbers, which is useless to me.
    If I use DECIMAL(18,12) or NUMERIC(18,12), I get errors. 
    Msg 4863, Level 16, State 1, Line 41
    Bulk load data conversion error (truncation) for row 7, column 23 (INDEX SHARES).
    Msg 4863, Level 16, State 1, Line 41
    Bulk load data conversion error (truncation) for row 8, column 23 (INDEX SHARES).
    Msg 4863, Level 16, State 1, Line 41
    Bulk load data conversion error (truncation) for row 9, column 23 (INDEX SHARES).
    There must be some data type that fits this!
    Here's a sample of what I'm dealing with.
    -0.900900901
    9.302325581
    -2.648171501
    -1.402805723
    -2.911830584
    -2.220960866
    2.897762349
    -0.219640074
    -5.458448607
    -0.076626094
    6.710940231
    0.287200186
    0.131682908
    0.124276221
    0.790818723
    0.420505119
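    Assuming SQL Server 2012 or later, one way through is to keep the staging columns as VARCHAR(100) (which bulk loads cleanly) and convert on the way to the production tables with TRY_CONVERT, which returns NULL instead of raising error 8115 for any value that does not fit. A sketch, with DECIMAL(28, 12) as an assumed precision that comfortably holds the sample values above (tighten it once the real ranges are known):

    INSERT INTO [dbo].[S&P_Global_Ex-U.S._LargeMidCap_(US_Dollar)]
    SELECT [CHANGE],
           -- Etc.
           TRY_CONVERT(DECIMAL(28, 12), [DAILY PRICE RETURN]),
           TRY_CONVERT(DECIMAL(28, 12), [DAILY TOTAL RETURN])
    FROM   [dbo].[S&P_Global_BMI_(US_Dollar)]
    WHERE  ISNUMERIC([Effective Date]) = 1;

    -- find the rows that would otherwise overflow or truncate
    SELECT [INDEX SHARES]
    FROM   [dbo].[S&P_Global_BMI_(US_Dollar)]
    WHERE  [INDEX SHARES] IS NOT NULL
      AND  TRY_CONVERT(DECIMAL(28, 12), [INDEX SHARES]) IS NULL;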
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

  • Importing data from an old backup AppSet (copying from FactFinance tables)

    Hello - we made a mistake a few weeks ago that wiped out some of our old data. Without realizing the implications, I made a base-level member of our DEPARTMENT dimension a parent, so BPC wiped out all the data for that base-level member. We know not to do that now, and we've changed the member back to base-level. But, our data is still gone.
    But, our system automatically makes a backup every day, so we have the data saved. I am now trying to get the old data into our current AppSet. One of our DBAs queried the FactFinance table and found 30,000 records where the department = the one we accidentally erased. This sounded like the correct number of records, so I had him send me the data in a .csv file. I have now verified that all of the data is there in the backup. The DBA copied the records from the FactFinance table to a test environment of our current AppSet. But, when I go in to look at it, very few records are there. For example, he says he copied 30,000 records, but I only see about 2,000 when I query the AppSet. If it matters, the records that we see only relate to the last three fiscal years, and none of the older data is there.
    Can someone tell me how we can get all of these records to appear in our AppSet by copying them to the tables? Or, is there a way to import the data using the Data Manager? I tried saving the records as a .txt file and importing them, but that did not work, probably due to my lack of knowledge of how to manipulate transformation files. Although the file did validate with a transformation file, when I went to import it, it said "Convert Data: Success", but then said "The number of failing rows exceeds the maximum specified".

    If you did a backup of your AppSet then you will be able to recover all data for that member of the dimension.
    1. First, extract the data from the backup from the following tables:
    tblfact"yourapplication"
    tblfac2"yourapplication"
    tblfactwb"yourapplication"
    2. Make sure the member is now a base member in your application.
    3. Delete all the existing records for the Department member from all three tables (fact, fac2, wb) of your current database. By the way, if this dimension is shared then you will have this problem in every application, not just in one.
    4. Copy the extracted records into tblfac2"yourapplication".
    5. Process the fac2 partition of your application from SSAS.
    After that you should be able to see all the data except the data inserted after your change from base member to parent.
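    As a very rough illustration of steps 3 and 4 only (a sketch: the application name Finance, the DEPARTMENT column, the member ID, and the BackupDB database name are all placeholders to replace, and you should take a fresh backup before touching the fact tables):

    -- 3) delete the existing records for the erased Department member
    --    (repeat for tblfactFinance and tblfactwbFinance as described above)
    DELETE FROM dbo.tblfac2Finance
    WHERE  [DEPARTMENT] = 'ERASED_MEMBER_ID';

    -- 4) copy the extracted backup records into the fac2 table
    INSERT INTO dbo.tblfac2Finance
    SELECT *
    FROM   BackupDB.dbo.tblfac2Finance
    WHERE  [DEPARTMENT] = 'ERASED_MEMBER_ID';

    Then process the fac2 partition in SSAS as described in step 5.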
    Kind Regards
    Sorin Radulescu

  • Issue while importing customer records from Staging tables

    Hi,
    We are trying to insert customer records from another ERP system into Oracle EBS.
    We are able to push the data into the staging table and call the procedure hz_cust_account_v2pub.create_cust_account.
    We can now see the data being entered in HZ_PARTIES,
    but we cannot see the records from the front end.
    On the other hand, in the case of vendors and employees we can see the data from the front end as well.
    Is there any particular flag or field we are missing?
    Thanks for the help in advance.
    Edited by: sam on Sep 13, 2012 1:15 AM

    Has the "DQM Serial Sync Index Program" run after you ran the import? If not, try submitting this job manually and see if you can then view your changes.

  • Copy from 1 table to another

    Hi
    It is very urgent.
    I have more than 1 million records in one table
    and I need to copy all the data from that table to another table.
    So how should I do it?
    If I use an internal table it ends in a short dump.
    Please help,
    Praff

    Hi there,
    If you are talking about using ABAP, don't use an internal table. Just use a simple SELECT ... ENDSELECT loop, along these lines (source_table, target_table, and the work area wa are placeholders for your own objects):
    SELECT * FROM source_table INTO wa.
      INSERT target_table FROM wa.
    ENDSELECT.
    Or are you talking about an ODS table? If that is the case, use this table as the source to load it into the other (new) ODS.
    Hope this helps.
    Weidong

  • Not able to select and copy from adf table in IE and chrome if we enable row selection

    Hi All,
    We have an ADF table and users want to select and copy table cell values.
    We enabled row selection on the ADF table. If row selection is in place, IE and Chrome do not allow the user to select and copy data, but Firefox does.
    Is there a solution to this? For our customer IE is the standard browser, and they test the app on IE.
    Regards
    PavanKumar

    Hi Timo,
    Sorry, I forgot to mention the versions.
    We are using 11.1.1.7 and IE 9.
    I searched Google but could not find a solution.
    Kindly let me know the solution for this.
    PavanKumar

  • How to increase array size while copying from a remote table

    Hi everyone,
    I am using a remote copy to copy 9,000,000 (9 million) records to the local database, but it is obviously taking a long time. The message I see while copying is:
    Array fetch/bind size is 15 (array size is 15)
    Can the array size be changed to make the copy faster, or is there any other way to copy 9 million records quickly?
    regards,
    Karna

    I guess that you are using the 'copy' command in sql*plus.
    1. You could export the remote table and import the data into your local table. That would perhaps be more efficient.
    2. You could increase the array size, for example:
    SQL> show arraysize
    arraysize 15
    SQL> set arraysize 5000
    SQL> show arraysize
    arraysize 5000
    What is the value of the SQL*Plus option 'copycommit'?
    SQL> show copycommit
    copycommit 0
    SQL>
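    Putting it together for the COPY command itself, something along these lines (a sketch: the connect string and table names are placeholders; COPY commits after every COPYCOMMIT batches of ARRAYSIZE rows, so the example below commits every 500,000 rows):

    SQL> SET ARRAYSIZE 5000
    SQL> SET COPYCOMMIT 100
    SQL> COPY FROM scott/tiger@remote_db -
         INSERT local_table -
         USING SELECT * FROM remote_table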

  • Synchronizing Updates on a Staging Table

    Please help me out with resolving the following issue:
    A load script is running to move records from a data file into a staging table.
    After this script completes, there is code to update two fields of the staging table.
    To do this, the shell script runs a script (generate_ranges.sql). It takes a parameter of 5000000 and creates ranges based on this passed-in number, up to the total number of rows in the staging table. So say the staging table has 65,000,000 rows.
    This script will create a file that looks like the following (when 5000000 is passed in):
    1 | 5000000
    5000001 | 10000000
    10000001 | 15000000
    15000001 | 20000000
    20000001 | 25000000
    25000001 | 30000000
    30000001 | 35000000
    35000001 | 40000000
    40000001 | 45000000
    45000001 | 50000000
    50000001 | 55000000
    55000001 | 60000000
    60000001 | 65000000
    The script then reads that file row by row and, for each range, calls a shell script and passes the range in. So in this case there are 13 ranges, which means 13 separate updates on the staging table running in the background.
    The first updates rows 1 - 5000000, the second rows 5000001 - 10000000, and so on.
    So there are 13 updates happening behind the scenes.
    The problem is that the script has no way of knowing that all the updates have completed successfully before proceeding. Right now I manually check that all updates have completed and then restart the script at the next step. However, we want the code to confirm automatically that all the updates are done and only then move on. So we need a way to count the number of candidate updates (right now 13, but it could be 14 or more in the future) and to know when all "x" updates have completed. One update (1-5000000) may take 30 minutes and the next (5000001-10000000) 35 minutes; all updates run in parallel, and only after all 13 parallel updates are complete can the script proceed with the subsequent steps.
    So please help me out with fixing this problem programmatically.
    Thanks for your cooperation in advance.
    Regards,
    Ayan.

    Ayan,
    Are you really sure you want to update 65 million rows?
    Alternative: CREATE TABLE ... AS SELECT <the columns, with the 2 columns already updated> FROM the staging table.
    With this approach, you probably don't need to split the update at all.
    Regards,
    Rob.
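    A minimal sketch of that alternative (the column names are placeholders, and the two expressions stand for whatever the 13 parallel updates would have computed):

    CREATE TABLE staging_table_new
    PARALLEL NOLOGGING
    AS
    SELECT id,
           other_col,
           UPPER(old_col1)  AS col1,   -- expression the update would have applied
           TRUNC(SYSDATE)   AS col2,   -- expression the update would have applied
           remaining_col
    FROM   staging_table;

    -- once verified, swap the new table in
    -- DROP TABLE staging_table;
    -- RENAME staging_table_new TO staging_table;

    Because the whole rewrite is a single statement, there is nothing to synchronize: the calling script simply continues when the CREATE TABLE finishes.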

  • My requirement is to update 3 valuesets daily based on data coming to my staging table. What is the API used for this and how to map any API to our staging table? I am totally new to oracle and apps. Please help. Thanks!

    My requirement is to update 3 valuesets daily based on data coming to my staging table. What is the API used for this and how to map any API to our staging table? I am totally new to oracle and apps. Please help. Thanks!

    Hi,
    You could use FND_FLEX_LOADER_APIS.UP_VALUE_SET_VALUE to upload them from the staging table (I suppose you mean value set values...).
    You can find sample scripts if you google around.
    What do you mean "how to map any API to our staging table" ?
    You should do at least the following mapping (i.e. decide which column(s) in the staging table will provide this information):
    - the 3 value set names which you're going to update/upload (I suppose these are existing value sets, or ones which have already been created)
    - the value set values and descriptions
    Try to start with something, and if there are any issues the community can then help... but for the time being, with the description of the problem you have provided, that's the best I can do...

  • Copy from Fact: "Invalid selection passed"

    Using OS 5.0 SP2, has anyone received the error "invalid selection passed" while running a standard copy-from-fact-table package? The odd thing is that the copy package will work with the exact same parameters.
    I didn't find anything about this in the SAP Notes.

    Can you list the source and destination settings you're using? I cannot remember the last time I ran that particular package, but I can try running it with similar settings in ApShell.

  • Copying large amount of data from one table to another getting slower

    I have a process that copies data from one table (big_tbl) into a very big archive table (vb_archive_tbl, a partitioned table with about 30 million records). If there are fewer than 1 million records in big_tbl, the copy to vb_archive_tbl is fast (about 10 minutes) and, more importantly, consistent. However, if there are more than 1 million records in big_tbl, copying the data into vb_archive_tbl is very slow (30 minutes to 4 hours) and very inconsistent. Every few days the time it takes to copy the same amount of data grows significantly.
    Here's an example of the code I'm using, which uses BULK COLLECT and FORALL INSERT to copy the data.
    I occasionally change 'LIMIT 5000' to see performance differences.
    DECLARE
        TYPE t_rec_type IS RECORD (
            fact_id    NUMBER(12,0),
            store_id   VARCHAR2(10),
            product_id VARCHAR2(20));
        TYPE cff_type IS TABLE OF t_rec_type
            INDEX BY BINARY_INTEGER;
        t_cff cff_type;
        CURSOR c_cff IS
            SELECT * FROM big_tbl;
    BEGIN
        OPEN c_cff;
        LOOP
            FETCH c_cff BULK COLLECT INTO t_cff LIMIT 5000;
            FORALL i IN t_cff.FIRST .. t_cff.LAST
                INSERT INTO vb_archive_tbl
                VALUES t_cff(i);
            COMMIT;
            EXIT WHEN c_cff%NOTFOUND;
        END LOOP;
        CLOSE c_cff;
    END;
    Thank you very much for any advice.
    Edited by: reid on Sep 11, 2008 5:23 PM

    Assuming that there is nothing else in the code that forces you to use PL/SQL for processing, I'll second Tubby's comment that this would be better done in SQL. Depending on the logic and partitioning approach for the archive table, you may be better off doing a direct-path load into a staging table and then doing a partition exchange to load the staging table into the partitioned table. Ideally, you could just move big_tbl into the vb_archive_tbl with a single partition exchange operation.
    That said, if there is a need for PL/SQL, have you traced the session to see what is causing the slowness? Is the query plan different? If the number of rows in the table is really a trigger, I would tend to suspect that the number of rows is causing the optimizer to choose a different plan (with your sample code, the plan is obvious, but perhaps you omitted some where clauses to simplify things down) which may be rather poor.
    Justin
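    A sketch of the direct-path / partition-exchange route Justin describes (the partition name is a placeholder, big_tbl is assumed to map to exactly one partition of vb_archive_tbl, and the exchange table must have the same column structure as the partitioned table):

    -- direct-path load the rows into a heap table shaped like one partition
    INSERT /*+ APPEND */ INTO stg_exchange_tbl
    SELECT * FROM big_tbl;
    COMMIT;

    -- swap the loaded segment into the target partition as a near-instant dictionary operation
    ALTER TABLE vb_archive_tbl
      EXCHANGE PARTITION p_2008_09
      WITH TABLE stg_exchange_tbl
      INCLUDING INDEXES
      WITHOUT VALIDATION;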

  • Copy selected values from a table control into another table control

    hi there,
    as seen in the subject, I need to copy selected values from a table control into another table control on the same screen. As I don't know much about table controls, I made 2 table controls with the wizard and started to change the code... right now I'm totally messed up. Nothing works anymore and I don't know where to start over.
    I looked through the forums and Google, but there is nothing to help me with this problem (or I am bad at searching the internet for solutions).
    I have 2 buttons: one to push the selected data from the top table control into the bottom TC, and the other to push selected data from the bottom TC into the top TC. Does somebody have sample code to do this?

    you're funny.
    I still don't get it... I can't believe there is no tutorial or sample code around showing how to copy multiple selected rows from a TC.
    here's my code; maybe you can tell me exactly where I have to change it:
    tc1 = upper table control
    tc2 = lower table control
    SCREEN 0100:
    PROCESS BEFORE OUTPUT.
      MODULE status_0100.
      MODULE get_nfo. --> gets data from the dictionary table
      MODULE tc1_change_tc_attr.
      LOOP AT   it_roles_tc1
           INTO wa_roles_tc1
           WITH CONTROL tc1
           CURSOR tc1-current_line.
      ENDLOOP.
      MODULE tc2_change_tc_attr.
      LOOP AT   it_roles_tc2
           INTO wa_roles_tc2
           WITH CONTROL tc2
           CURSOR tc2-current_line.
      ENDLOOP.
    PROCESS AFTER INPUT.
      LOOP AT it_roles_tc1.
        CHAIN.
          FIELD wa_roles_tc1-agr_name.
          FIELD wa_roles_tc1-text.
        ENDCHAIN.
        FIELD wa_roles_tc1-mark
          MODULE tc1_mark ON REQUEST.
      ENDLOOP.
      LOOP AT it_roles_tc2.
        CHAIN.
          FIELD wa_roles_tc2-agr_name.
          FIELD wa_roles_tc2-text.
        ENDCHAIN.
        FIELD wa_roles_tc2-mark
          MODULE tc2_mark ON REQUEST.
      ENDLOOP.
      MODULE ok_code.
      MODULE user_command_0100.
    INCLUDE PAI:
    MODULE tc1_mark INPUT.
      IF tc1-line_sel_mode = 2
      AND wa_roles_tc1-mark = 'X'.
    LOOP AT it_roles_tc1 INTO g_tc1_wa2
      WHERE mark = 'X'.   "--> big problem here is that no entry has an 'X' there
          g_tc1_wa2-mark = ''.
          MODIFY it_roles_tc1
            FROM g_tc1_wa2
            TRANSPORTING mark.
        ENDLOOP.
      ENDIF.
      MODIFY it_roles_tc1
        FROM wa_roles_tc1
        INDEX tc1-current_line
        TRANSPORTING mark.
    ENDMODULE.                    "TC1_MARK INPUT
    MODULE tc2_mark INPUT.
      IF tc2-line_sel_mode = 2
      AND wa_roles_tc2-mark = 'X'.
    LOOP AT it_roles_tc2 INTO g_tc2_wa2
      WHERE mark = 'X'.   "--> same here, it doesn't get any data
          g_tc2_wa2-mark = ''.
          MODIFY it_roles_tc2
            FROM g_tc2_wa2
            TRANSPORTING mark.
        ENDLOOP.
      ENDIF.
      MODIFY it_roles_tc2
        FROM wa_roles_tc2
        INDEX tc2-current_line
        TRANSPORTING mark.
    ENDMODULE. 
    Thanks to anybody who can help with this!
