Performance updating an extra-huge table

Hi guys, just looking for some advice. I'm handling tables with more than 300 million rows, sometimes even 800 million, and so far I've come up with some good solutions, but now I really need to be concerned about performance. I've got a table with:
FlyID int, FlyNumber int, SettlDate datetime2, SettlPeriod double, Consumpt dec, Ixl dec, Aunit int
300 million rows. SettlDate is a date, and SettlPeriod is a half-hour period (so 48 periods each day).
The other table is:
BMUnit int,  SettlDate datetime2, SettlPeriod double, Chargefact dec
I'm going to join the two tables on BMUnit = BMUnit, SettlDate = SettlDate, SettlPeriod = SettlPeriod, and fill a new table with an INSERT.
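In sketch form, something like this (just a sketch; I'm assuming FlyID is the unit key that matches BMUnit, and that the target table already exists with matching columns):
INSERT INTO NewTable (FlyID, FlyNumber, SettlDate, SettlPeriod, Consumpt, Ixl, Aunit, Chargefact)
SELECT t1.FlyID, t1.FlyNumber, t1.SettlDate, t1.SettlPeriod,
       t1.Consumpt, t1.Ixl, t1.Aunit, t2.Chargefact
FROM   Table1 t1
JOIN   Table2 t2
  ON   t2.BMUnit = t1.FlyID          -- assumption: FlyID corresponds to BMUnit
 AND   t2.SettlDate = t1.SettlDate
 AND   t2.SettlPeriod = t1.SettlPeriod;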
Fingers crossed, I hope it works within a reasonable time (3 hours... more?).
The real concern is:
I got another table with
FlyID int, Company varchar, CompanyID int, FromDate datetime, ToDate datetime
The logic should be something like this:
UPDATE t1
SET    t1.CompanyID = c.CompanyID, t1.Company = c.Company
FROM   table1 t1
JOIN   companytbl c ON t1.FlyID = c.FlyID   -- companytbl = the FlyID/Company/FromDate/ToDate table
WHERE  t1.SettlDate >= c.FromDate AND t1.SettlDate <= c.ToDate
But just yesterday I tried something without the date filter, and the query ran for more than seven hours, so I had to kill it. I'm wondering if there is a better way... All this is because I'm going to build several cubes that take one big table as their source. That
is going to make retrieval really fast; so far I've practically cut entire hours off, but now I need to add this one more element, and before I start writing code I'd like to hear some of your advice.
Thanks

Tables that large are always a problem for major maintenance.
I would do your update in batches:
DECLARE @cnt int;
SET @cnt = 1;
WHILE @cnt > 0
BEGIN
    UPDATE TOP (1000000) t1
    SET    t1.CompanyID = c.CompanyID, t1.Company = c.Company
    FROM   table1 t1
    JOIN   companytbl c ON t1.FlyID = c.FlyID
    WHERE  t1.SettlDate >= c.FromDate AND t1.SettlDate <= c.ToDate
      AND  t1.CompanyID IS NULL;  -- only rows not yet updated, so the loop terminates
    SET @cnt = @@ROWCOUNT;
END
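(The IS NULL guard is my assumption: without some predicate that excludes already-updated rows, TOP would keep matching the same rows and the loop would never end. Batching like this also keeps each transaction, and the log growth, bounded.)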

Similar Messages

  • Status control table - 'Perform update for status control table for cube '

    Hello Experts,
While loading data to a DSO, I am getting this message: 'Perform update for status control table for cube Z3MLQUA'.
What is the status control table? And can I have some inputs on how to solve this issue?
    Regards,
    arjun

    Hi,
This message is not an issue in itself. It is just an SAP message which means that the system is checking/updating the status control tables to determine whether the operation can be carried out.
If your loads are failing at this step or after this message, there can be many reasons; check some of the possible reasons below:
    Performing check and potential update for status control table
    Update from PSA error in Process Chain
    problem in deleting request
Reporting not available, even if data is successfully loaded into DT
    Hope this helps,
    Kush kashyap

  • Update records in huge table

    Hi,
I need to update two fields in a huge table (> 200,000,000 records). I've created two basic update scripts with a WHERE clause. The problem is that there isn't an index on the fields in the WHERE clause. How can I solve this? Creating a new index is not an option.
Another solution is to update the whole table (so without a WHERE clause), but I don't know if it takes a lot of time, locks records, ...
    Any suggestions?
    Thanks.
    Ken

    Ken,
You may be better off reading the Metalink documents. PDML stands for Parallel DML. You can use parallel slaves to get the update done quickly. Obviously this depends on the number of parallel slaves you have and the degree you set.
Type PDML on Metalink.
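A minimal sketch of the idea (Oracle syntax; table and column names are placeholders):
ALTER SESSION ENABLE PARALLEL DML;
UPDATE /*+ PARALLEL(t, 8) */ big_table t
SET    t.field1 = 'X'
WHERE  t.field2 IS NULL;
COMMIT;  -- parallel DML requires a commit before the table can be queried again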
    G

  • Need help with performance for very very huge tables...

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production.
    My DB has many tables and out of which I am interested in getting data from product and sales.
select /*+ parallel(32) */ count(1) from (
select /*+ parallel(32) */ distinct prod_code from product pd, sales s
where pd.prod_opt_cd is NULL
and s.sales_id = pd.sales_id
and s.creation_dts between to_date('2012-07-01','YYYY-MM-DD')
and to_date('2012-07-31','YYYY-MM-DD')
);
    More information -
    Total Rows in sales table - 18001217
    Total rows in product table - 411800392
creation_dts doesn't have an index on it.
I started the query in the background, but after 30 hours I saw the error saying:
    ORA-01555: snapshot too old: rollback segment number 153 with name
Is there any other way to get the above data in an optimized way?

    Formatting your query a bit (and removing the hints), it evaluates to:
    SELECT COUNT(1)
    FROM  (SELECT DISTINCT prod_code
           FROM   product pd
                  INNER JOIN sales s
                  ON s.sales_id = pd.sales_id 
           WHERE  pd.prod_opt_cd is NULL
           AND    s.creation_dts BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD')
                                     AND TO_DATE('2012-07-31','YYYY-MM-DD')
      );
This should be equivalent to:
    SELECT COUNT(DISTINCT prod_code)
    FROM   product pd
           INNER JOIN sales s
           ON s.sales_id = pd.sales_id 
    WHERE  pd.prod_opt_cd is NULL
    AND    s.creation_dts BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD')
                              AND TO_DATE('2012-07-31','YYYY-MM-DD');
On the face of it, that's a ridiculously simple query. If s.sales_id and pd.sales_id are both indexed, then I don't see why it would take a huge amount of time. Even having to perform a FTS on the sales table because creation_dts isn't indexed shouldn't make it a 30-hour query. If either of those two is not indexed, then it's a much uglier prospect joining the two tables. However, if you often join the product and sales tables (which seems likely), then not having those fields indexed would be contraindicated.
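To verify, a quick look at the data dictionary should show whether the join columns are indexed (a sketch; table names taken from your post):
SELECT table_name, index_name, column_name
FROM   user_ind_columns
WHERE  table_name IN ('PRODUCT', 'SALES')
AND    column_name = 'SALES_ID';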

  • Performing check and potential update for status control table

    Cheers Experts,
    I searched in the forum, but I didn't find an answer.
    I'm working on a 3.0B BW system.
    After updating from PSA to an ODS, in the log I see the following line:
    "Performing check and potential update for status control table" and double clicking on it, it says:
If data is loaded into an InfoCube, or existing data is edited (aggregated/compressed/deleted/read from a DataMart), then there is a change in the potential reportability of the data, or in the possibility of deleting data by request, or of aggregating or compressing.
The status of each of these cubes is stored in a status table, which is updated whenever the status of a request in the cube changes.
    The system now analyzes the requests in the Cube and compares the calculated status with the status table.
    If deviations from the status table arise then you are given the option of adjusting the status table.                                         
    I have 2 questions:
1) What is BW actually doing, especially since I'm working on an ODS and not an InfoCube?
    2) What can I do to reduce its execution time, since it lasts several minutes?
    Thanks and greetings.
    Roberto

Roberto, did you ever find an answer to your question? We are also experiencing long runtimes during this particular process when running 3.x deltas from ODS to ODS and to cubes.

  • On submit perform an insert on one table and an update on another table

I am trying to perform an insert on one table (the wizard created my form, and the insert goes against the table that I created using the wizard), and on the form there is one field that is also in another table. Therefore, I am trying to perform an update on one attribute of one table and an insert into another table. How do I do this in APEX?

If you have used the wizard to create the form, then you should see a process of type 'Automatic Row Processing (DML)' in your page which performs the INSERT/UPDATE/DELETE on your form table. Here you can see APEX performs the INSERT only when REQUEST is in 'INSERT, CREATE, CREATE_AGAIN, CREATEAGAIN'.
So create one more PL/SQL page process which will execute at 'on Submit after validations' and write the update process as follows:
begin
   -- pseudo table/columns
   update tbl_second
   set    col1 = :p1_item
   where  pk_col = :p1_pk_item;
end;
Make this process conditional so that it performs the UPDATE only when the request value is in 'INSERT, CREATE, CREATE_AGAIN, CREATEAGAIN' (i.e. only when you are inserting into your form table).
    Cheers,
    Hari
P.S. I think you may also need to update the second table when someone updates your form table.
    Edited by: Hari_639 on Oct 26, 2009 9:46 AM

  • Trigger format to perform updates on another table

    Hello there,
I am struggling to write a trigger that performs updates on table B when rows in my table A are updated. Can anybody give me any general pointers in that direction?

    Duplicate thread:
    Will this trigger code work?
    (this one is earlier, but at least the other one has some code).
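For reference, the general shape of such a trigger is something like this (a sketch with placeholder names, Oracle syntax):
CREATE OR REPLACE TRIGGER trg_sync_b
AFTER UPDATE ON table_a
FOR EACH ROW
BEGIN
   -- propagate the changed value to the matching row in table B
   UPDATE table_b
   SET    some_col = :NEW.some_col
   WHERE  a_id = :NEW.id;
END;
/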

  • Can XSU not be used to update more than one table at a time?

I would like to update two or more tables using the same XML package. However, when I use XML that contains tags using column names from both tables, I get an error that the second table's column names cannot be found in the first table, even if I use setUpdateColumnList and build an array that contains only columns from the first table.
XSU does not do me a lot of good if only one table at a time can be updated.

Assuming you are using Oracle 8i, you may want to bone up on 'Object Views' and 'instead of' triggers. This is the functionality commonly used to perform multiple-table updates from a single data source such as XML.
    An object view can be based on a query such as 'select a.col1, a.col2, b.col1, b.col2 from a, b where a.key1 = b.key1'.
    The object view is then used by the XSU to perform the XML query/update rather than using tables a and b.
    You would then write an 'instead of' trigger on the object view. The trigger would update the appropriate table (a or b) based on the columns provided in the update list.
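In outline, it looks something like this (a sketch with illustrative names):
CREATE OR REPLACE VIEW ab_view AS
  SELECT a.key1, a.col1, a.col2, b.col3, b.col4
  FROM   a, b
  WHERE  a.key1 = b.key1;
CREATE OR REPLACE TRIGGER ab_view_upd
INSTEAD OF UPDATE ON ab_view
FOR EACH ROW
BEGIN
   -- route each column to the table it belongs to
   UPDATE a SET col1 = :NEW.col1, col2 = :NEW.col2 WHERE key1 = :NEW.key1;
   UPDATE b SET col3 = :NEW.col3, col4 = :NEW.col4 WHERE key1 = :NEW.key1;
END;
/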
    Steve's book 'Building Oracle XML Applications' in Chapter 12 explains how to do it and has sample code that demonstrates the process.
The book is available everywhere, and you can download the sample code from the book at O'Reilly's web site.
    Good luck!

  • Insert, delete, or update an entry in a custom table when a KONV entry changes

    Hi All,
I have a custom table ZKONV with only a few required columns, and it should have the same number of records as KONV at any point in time.
KONV is a cluster table, so it is not readable at the ORACLE level; that is why ZKONV was created. But I don't know how to keep the two tables in sync.
I need to perform an insert, delete, or update on the custom table whenever an insert, delete, or update happens on the cluster table KONV from any transaction.
As KONV is a cluster table and does not have a change timestamp, I am not able to tell how many records changed in a particular time period.
    Thanks,

Thanks for the reply.
There is a system outside SAP which needs to read KONV data to feed into their system, but as KONV is a cluster table, they are not able to read it at the ORACLE level.
To solve this we are thinking of creating a transparent Z-table, filling it from KONV, and catching every UPDATE, DELETE, or INSERT statement to do the same on ZKONV.
Is this possible somehow? By some database event or something?

  • Bad performance updating purchase order (ME22N)

    Hello!
Recently, we have been facing bad performance updating purchase orders using transaction ME22N. The problem has occurred since we implemented change documents for a custom table T. T is used to store additional data for purchase order items, using the BAdIs ME_PROCESS_PO_CUST and ME_GUI_PO_CUST.
I've created a change document C_T for T using transaction SCDO. The update module of the change document is triggered in the method POST of BAdI ME_PROCESS_PO_CUST.
Checking transaction SM13, I noticed that the update requests of ME22N have status INIT for several minutes before they are processed. I also tried excluding the call of the update module for change document C_T (in method POST) - the performance problem still occurs!
The problem only occurs with transaction ME22N, so I assume that the reason is the new change document C_T.
    Thanks for your help!
    Greetings,
    Wolfgang

I agree with Vikram: we don't have enough information, not even a small hint on the usage of this field, so which answer do you expect? (The quality of an answer depends ...) This analysis must be executed on your system...
From a technical point of view, BAPI_PO_CHANGE has an EXTENSIONIN table parameter; fill it using structure BAPI_TE_MEPOITEM[X], already containing CI_EKPODB (*) and CI_EKPODBX (**).
    Regards,
    Raymond
    (*) I guess you have used this include
(**) I guess you forgot this one (same field names, but the data element is always BAPIUPDATE)

  • Updating model for pivot table in af:iterator

Using a master-detail relationship, I am trying to create multiple forms at runtime. I want to show all the forms at once. Each form has some header fields and a pivot table. The header fields come from the master table, and the pivot table should be built from the child.
I am using af:iterator to iterate over the master table rows and show the header:
    <af:iterator id="i1"
                value="#{bindings.MasterTableVO.collectionModel}" var="row"
                varStatus="index"> For pivot table, i dragged Child View Object from Data Controls to make pivot table.
    <dvt:pivotTable id="pivotTable1"
              value="#{bindings.ChildTableVO.pivotTableModel}"
              summary="pivot table" sizing="auto"
              pivotEnabled="false" drillingEnabled="false"/>                                    Now while iterating, i am not able to get updated model for pivot table. Values from Master table are being updated, but pivot table from child table showing details corresponding to the first row of master table everywhere.
    So can you please help me, how to get the updated pivot table model corresponding to the current row in parent iterator ?
    I am using JDeveloper 11.1.1.6.0
    Thank You.
    VS

Hi Aswini,
It did not work for me; I needed to change the use case a bit.
However, you can easily display the af:table inside an iterator in read-only mode, but when you perform CRUD operations inline, it gives problems.
Can you explain your use case?
    Regards,
    Santosh.

  • Efficient way of updating data in a database table

What is the efficient way of updating data in a database table?
I have a huge amount of data in my internal table; how do I use the UPDATE statement in this case?
1. The database table has 20 fields.
2. One is a key field, and suppose I want to change the 20th field.
3. I have data for only 2 fields, i.e. the 1st (key) field and the 20th field.
I can't use an UPDATE statement in a loop, as that is not good practice (it hits the database several times).
Does it affect all 20 fields for the particular record?

    Hi,
Use the UPDATE statement; check the description from SAP help below.
UPDATE dbtab FROM TABLE itab. or UPDATE (dbtabname) FROM TABLE itab.
Effect
Mass update of several lines in a database table. Here, the primary key for identifying the lines to be updated and the values to be changed are taken from the lines of the internal table itab.
The system field SY-DBCNT contains the number of updated lines, i.e. the number of lines in the internal table itab which have key values corresponding to lines in the database table.
    Regards
    L Appana

  • Bitmap index or Composite index better on a huge table

    Hi All,
I've got a question regarding bitmap indexes and composite indexes.
I have a table CUSTOMER which has only two columns (group_no NUMBER, order_no NUMBER).
This is a 100-million+ record table, with 100K group_nos and 100 million unique order numbers, i.e. each group should have 1000 order numbers.
I tested by creating a GLOBAL bitmap index on this huge table (more than 1.5 GB in size); the bitmap index that got created is under 50 MB, and when I query for a group number, say SELECT * FROM CUSTOMER WHERE group_no=67677; --> 0.5 seconds to retrieve all the 1000 rows. I checked different groups and it is the same.
Then I dropped the bitmap index and re-created a composite index on (group_no, order_no). The index is larger than the table, around 2 GB in size, and when I query using the same SELECT statement, SELECT * FROM CUSTOMER WHERE group_no=67677; --> 0.5 seconds to retrieve all the 1000 rows.
My question is: which one is BETTER, B-tree or bitmap index, and WHY?
Appreciate your valuable inputs on this one.
Regards,
    Madhu K.

    Dear,
First of all, bitmap indexes are not recommended for write-intensive OLTP applications due to the locking threat they can produce in that kind of application.
You told us that this table is never updated; I suppose it is never deleted from either.
Second, bitmap indexes are suitable for columns having low cardinality. The question is how we define "low cardinality": you said that you have 100,000 distinct group_nos in a table of 100,000,000 rows.
You have a cardinality of 100,000/100,000,000 = 0.001. The group_no column might be a good candidate for a bitmap index.
You said that order_no is unique, so you have a very high cardinality on this column, and it might not be a candidate for your bitmap index.
Third, your query's WHERE clause involves only the group_no column, so why are you including both columns when testing the bitmap and the B-tree index?
Are you designing such an index in order to avoid visiting the table? But in your case the table is made up of only those two columns, so why not follow Hermant's advice and use an Index-Organized Table?
Finally, you can find more details about bitmap indexes in the following Richard Foote blog article:
    http://richardfoote.wordpress.com/2008/02/01/bitmap-indexes-with-many-distinct-column-values-wotsuh-the-deal/
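For reference, the alternatives discussed above look like this (a sketch using the names from the thread):
CREATE BITMAP INDEX customer_grp_bix ON customer (group_no);
CREATE INDEX customer_grp_ord_ix ON customer (group_no, order_no);
-- Hermant's suggestion: with only these two columns, an index-organized
-- table avoids the separate table visit entirely.
CREATE TABLE customer_iot (
  group_no NUMBER,
  order_no NUMBER,
  CONSTRAINT customer_iot_pk PRIMARY KEY (group_no, order_no)
) ORGANIZATION INDEX;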
    Best Regards
    Mohamed Houri

  • Function module not updating the database table

    Hi,
This post is regarding a function module not updating the database table.
I am calling the FM SD_SHIPMENT_HEADER_CHANGE inside a Z function module. It returns success and changes the table c_xvttk_new with the new TDLNR value. But it is not updating the shipment table VTTK-TDLBR or the forwarding agent in VT02N.
When I directly update the forwarding agent in VT02N, it updates fine.
Please let me know what extra step needs to be included to update, or to COMMIT the FM for updating, the Forwarding Agent (VTTK-TDLNR).
    Best Regards,
    Mahesh

Hi friend,
this link won't give you the complete help, but if you analyse it, it might provide you some ideas:
BAPI change shipment doc
Regards,
Kanishak

  • Database performance side effects of huge inserts

    All,
I know this might not be the right place to raise this question; I just thought some of you might have come across this in DB programming.
I have a table in a database into which there will be a huge number of insertions throughout the day, around 20000 rows per day. There is no process other than this which will be querying this table.
My question is whether so many INSERTs on one table would affect the performance of the whole database in any way.
Would this affect the response time of processes working with other tables in the same database?

My question is whether so many INSERTs on one table would affect in any way the performance of the whole database? Would this affect the response time of the processes working with other tables in the same database?
Yeah, since it's running on the same PC.
The volume of insertions per day would be around 20000 rows. And there is no process other than this which would be querying this table.
But 20000 rows is a very small insert number for a database, so you needn't worry too much about performance. If you wish to insert quicker, you should use PreparedStatement to insert your rows, or use a batch insert.
One exception is if you're inserting 20000 rows into a very big table with many index expressions on that table, or imported/exported keys on other big tables. That will be slower.
Well, this table won't have any foreign keys on any big table... though the table size will be huge - it would gross up to 1 GB of data in a month. I am not concerned about the performance of queries on this table at any stage, but would continuous insertions... 10 INSERTs a second into this table affect the performance of the rest of the tables in the DB in any way?
