I need to improve the performance of a migration

Hello!
I need to migrate a lot of data and I want to improve the performance.
I can either open a cursor and insert every row inside a FOR ... LOOP, or I can use a single INSERT INTO ... SELECT statement.
I would like to know which one is better.
Thanks.

If you need to do a lot of per-row processing, the CURSOR FOR LOOP might be better. But almost certainly, if you can code it as straight SQL statements, then that's the way to go. PL/SQL comes with tremendous overhead and is usually a lot slower than plain SQL.
However, don't take my word for it: run some tests for yourself.
Cheers, APC
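
To make the comparison concrete, here is a minimal sketch of both approaches, assuming hypothetical tables src_table and dest_table with matching columns (the names are illustrative, not from the original post):

-- Row-by-row: a cursor FOR loop. Each iteration is a separate INSERT,
-- with a context switch between the PL/SQL and SQL engines per row.
BEGIN
  FOR r IN (SELECT id, payload FROM src_table) LOOP
    INSERT INTO dest_table (id, payload) VALUES (r.id, r.payload);
  END LOOP;
  COMMIT;
END;
/

-- Set-based: one SQL statement moves everything in a single pass,
-- letting the database optimize the whole data movement.
INSERT INTO dest_table (id, payload)
SELECT id, payload FROM src_table;
COMMIT;

As APC says, benchmark both on your own data; the set-based version usually wins precisely because it avoids the per-row engine context switch.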

Similar Messages

  • Need to improve the performance

    Hi,
    New fields were added to a standard table. Previously the program used SELECT SINGLE * FROM the table; now, because of those new fields, performance has dropped. How can we improve the performance of the SELECT query?
    Thanks in advance

    Hi,
    follow these rules:
    When a base table has multiple indexes, the WHERE clause should list fields in the order of an index, either the primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields appear in the same order. In certain scenarios it is advisable to check whether a new index can speed up a program. This comes in handy in programs that access data from the finance tables.
    For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is better optimized than the code below for testing the existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    If all primary key fields are supplied in the WHERE condition, use SELECT SINGLE. SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two.

  • Improving the performance of Stored Procedure

    I need to improve the performance of this SP, which hits two tables holding about 24000 rows.
    USE [trouble_database]
    GO
    SET ANSI_NULLS OFF
    GO
    SET QUOTED_IDENTIFIER OFF
    GO
    ALTER PROCEDURE [dbo].[the_trouble_StoredProcedure]
    @troubleMedicatiotrouble_durationl int
    as
    declare @trouble_refil table
    ( troubleMedicatiotrouble_durationl int,
      trouble_durationl int )
    declare @nRefillDispense int
    if exists (select ml.lid from trouble_MedicaticDirect md
    inner join trouble_Medicatication ml on ml.troubleMedicatiotrouble_durationl = md.lID
    where isnull(md.trouble_bRefillDisp, 0) = 1
    and ml.lID <> (select min(lID) from trouble_Medicatication where troubleMedicatiotrouble_durationl = @troubleMedicatiotrouble_durationl))
    begin
    select @nRefillDispense = isnull(nRefillDispense, 9) from trouble_MedicaticDirect where lID = @troubleMedicatiotrouble_durationl
    insert @trouble_refil (trouble_durationl)
    select
    case szIncrement
    when 'Day(s)' then isnull(trouble_durationl, 0)
    when 'Week(s)' then isnull(trouble_durationl, 0) * 7
    when 'Month(s)' then isnull(trouble_durationl, 0) * 30
    when 'Year(s)' then isnull(trouble_durationl, 0) * 365
    else 0
    end as trouble_durationl
    from trouble_Medicatication
    where troubleMedicatiotrouble_durationl = @troubleMedicatiotrouble_durationl
    and lID <> (select min(lID) from trouble_Medicatication where troubleMedicatiotrouble_durationl = @troubleMedicatiotrouble_durationl)
    select @nRefillDispense as nRefillDispense, sum(trouble_durationl) as trouble_durationl from @trouble_refil
    end
    else
    select 0 as nRefillDispense, 0 as trouble_durationl

    You can use an execution plan:
    http://stackoverflow.com/questions/7359702/how-do-i-obtain-a-query-execution-plan
    and add indexes.
    Indexing according to the fields your queries filter on can improve performance greatly, especially on larger tables!
    You can use the index statistics when deciding which indexes to build:
    http://www.mssqltips.com/sqlservertip/2979/querying-sql-server-index-statistics/
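    For example, here is a minimal sketch of an index that might support the lookups in the procedure above; the column choice is an assumption based on the JOIN and WHERE clauses, so verify it against the actual execution plan before creating it:

    -- Hypothetical index for the repeated lookups by
    -- troubleMedicatiotrouble_durationl (with lID available for the MIN checks).
    CREATE INDEX IX_trouble_Medicatication_lookup
        ON trouble_Medicatication (troubleMedicatiotrouble_durationl, lID);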

  • Need help in improving the performance for the sql query

    Thanks in advance for helping me.
    I was trying to improve the performance of the query below. I tried the following methods: used MERGE instead of UPDATE, used BULK COLLECT / FORALL updates, used the ORDERED hint, and created a temp table and updated the target table from it. None of these methods improved performance. The data count which is updated in the target table is 2 million records and the target table has 15 million records.
    Any suggestions or solutions for improving performance are appreciated
    SQL query:
    update targettable tt
    set mnop = 'G'
    where (x, y, z) in
          ( select a.x, a.y, a.z
            from table1 a
            where (a.x, a.y, a.z) not in
                  ( select b.x, b.y, b.z
                    from table2 b
                    where 'O' = b.defg
                    and mnop = 'P'
                    and hijkl = 'UVW' ) );

    987981 wrote:
    I tried the following methods: used MERGE instead of UPDATE, used BULK COLLECT / FORALL updates, used the ORDERED hint, and created a temp table and updated the target table from it. None of these methods improved performance.
    And that meant what? Surely if you spend all that time and effort trying various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
    The data count which is updated in the target table is 2 million records and the target table has 15 million records.
    Tables have rows btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
    The failure to find a single faster method among the approaches you tried points to the fact that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
    The very first step in dealing with any software engineering problem is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
    Part of identifying the performance problem is understanding the workload. Just what does the code ask the database to do?
    From your comments, it needs to find 2 million rows out of 15 million, change those rows, and then write 2 million rows back to disk.
    That is not a small workload. Simple example: say finding the 2 million rows costs 1 ms/row and writing the 2 million rows also costs 1 ms/row. That is a 66-minute workload. Due to the number of rows, any increase in time per row, either way, is multiplied 2 million times.
    So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers need to be fired and indexes maintained)? Both?
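
    One way to answer that question (a sketch of standard Oracle tracing, not something from the original reply) is to let the database itself attribute the time:

    -- Enable SQL trace with wait events for the current session,
    -- run the slow UPDATE, then turn tracing off again.
    exec dbms_monitor.session_trace_enable(waits => true, binds => false)

    -- ... run the UPDATE under investigation here ...

    exec dbms_monitor.session_trace_disable

    Formatting the resulting trace file with tkprof splits the elapsed time across parse/execute/fetch and individual wait events, which directly answers whether the find or the write dominates.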

  • Need suggestions on improving the performance end to end

    I referred following links and as many as 25 previous posts before posting this question.
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/10b54994-f569-2a10-ad8f-cf5c68a9447c
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/402fae48-0601-0010-3088-85c46a236f50
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e0068bc1-6f8c-2a10-52bb-c6ee3562feb2
    /people/boris.zarske/blog/2007/06/13/sizing-a-system-for-the-system-landscape-directory-of-sap-netweaver
    I have some queries related to improving the performance of a scenario in my landscape.
    Scenario : IDOC-SOAP (Synchronous with BPM)
    The mapping is simple: a direct mapping of 3 fields.
    It works fine, with no problem, in normal cases for a single message transfer.
    But on initial load (when we put it into a new system), the scenario generates errors if we send more than 23 IDOCs at a time.
    I have decided to follow this to solve it:
    /people/william.li/blog/2008/03/07/step-by-step-guide-in-processing-high-volume-messages-using-pi-71s-message-packaging
    Will it help me, given that this is a sync IDOC scenario?
    What can I do to improve the performance of this scenario?
    Please list the points that I have to work through (for a sync scenario with BPM), as I am already confused after reading lots of blog posts.
    Nikhil.

    Do you think the performance tuning that I mentioned in the link will hold good for sync scenarios?
    I don't think so. In this scenario the async message processing is made to wait.
    From the blog:
    As you can see, for the 1st minute, all the messages are waiting to be processed. After 60 seconds, the packages will be created and processed. There is no change to the monitoring of each of the individual messages in SXI_MONITOR.
    But if you make a sync scenario wait, then you probably run the risk of blocking the queues.
    If this is going to be in production I would be even more careful, because firstly it is BPM, then synchronous BPM, then a processing wait. Normally, do not try to make BPM processing wait. My small suggestion.

  • Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube

    Hi BW Guru's,
    I have an unresolved issue and our team is still working on it.
    I have already posted several questions on this but am still not clear on how to reduce the time of the Rollup of Aggregates process.
    I have requested an OSS note and have been searching myself, but still could not find one.
    Finally I executed one of the cubes in RSRV with the database check
    "Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the errors and executed it again, but I still get warnings. The messages are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
    ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
    ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated     
    ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
    ORACLE: Index /BIC/D1001072~010 has possibly degenerated
    ORACLE: Index /BIC/D1001132~010 has possibly degenerated
    ORACLE: Index /BIC/D1001212~010 has possibly degenerated
    ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
    ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
    ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
    I don't know how to proceed from here. Can anyone tell me how to tackle this problem and increase the performance of Rollup of Aggregates (PCA InfoCubes)?
    I regularly create indexes and statistics to improve the performance; it works for a couple of days, and then the performance of the rollup of aggregates gradually comes down again.
    Thanks and Regards,
    Venkat

    hi,
    Check in a SQL client the SQL created by BI against the query you run directly on your physical layer.
    The time difference between the two should be 2-3 seconds, otherwise you have problems (those seconds are for the scripts that BI needs).
    If you use "like" in your SQL, then forget indexes.
    For more information about indexes, check Google or your DBA.
    Last, I mentioned that a materialized view is not perfect, but it helps a lot, so why not try to split it into smaller ones?
    For example, with logical dimensions
    year - half - day
    company - department
    and a fact
    quantity
    instead of making one, make 3:
    year - department - quantity
    half - department - quantity
    day - department - quantity
    and add them as data sources and assign them the appropriate logical level in the business layer in Administrator.
    Do you use partitioning functionality?
    I hope I helped.
    http://greekoraclebi.blogspot.com/

  • How to improve the performance of adobe forms

    Hi,
    Please give me some suggestions on how to improve the performance of an Adobe form.
    Right now, when I trigger user events, it works fine for the first 6 or 7 of them. From the next one on, it hangs.
    I read about the wizard form design approach; how can I use it here?
    Thanks,
    Aravind

    Hi Otto,
    The form is created using HCM Forms and Processes. I am performing user events in the form.
    A user event does a round trip in which the form data is sent to the backend SAP system; processing happens on the ABAP side and the result appears on the form. The first 6 or 7 user events work correctly and the result appears on the form. Around the 8th or 9th one, the wait symbol appears and the form is not re-rendered. The form is 6 pages long; the issue does not occur with a 1-page form.
    I was reading about ways to improve performance during re-rendering, given here:
    http://www.adobe.com/devnet/livecycle/articles/DynamicInteractiveFormPerformance.pdf
    It talks about the wizard form design approach, but in the SFP transaction I do not see any kind of wizard.
    Let me know if you need further details.
    Thanks,
    Aravind

  • Help to improve the performance of a procedure.

    Hello everybody,
    First to introduce myself. My name is Ivan and I recently started learning SQL and PL/SQL. So don't go hard on me. :)
    Now let's jump to the problem. We have a table (a big one, but we'll need only a few fields) with information about calls; it is called table1. There is also another table with exactly the same structure, which is empty; we have to transfer the records from the first one into it.
    The shorter calls (less than 30 minutes) have segmentID = 'C1'.
    The longer calls (more than 30 minutes) are recorded as more than one record (1 for every 30 minutes). The first record (first 30 minutes of the call) has segmentID = 'C21'. It is the first so we have only one of these for every different call. Then we have the next (middle) parts of the call, which have segmentID = 'C22'. We can have more than 1 middle part and again the maximum minutes in each is 30 minutes. Then we have the last part (again max 30 minutes) with segmentID = 'C23'. As with the first one we can have only one last part.
    So far, so good. Now we need to insert these call records into the second table. The C1 are easy - one record = one call. But the partial ones we need to combine so they become one whole call. This means that we have to take one of the first parts (C21), find if there is a middle part (C22) with the same calling/called numbers and with 30 minutes difference in date/time, then search again if there is another C22 and so on. And last we have to search for the last part of the call (C23). In the course of these searches we sum the duration of each part so we can have the duration of the whole call at the end. Then we are ready to insert it in the new table as a single record, just with new duration.
    But here comes the problem with my code... The table has A LOT of records, and this solution, despite the fact that it works (at least in the tests I've made so far), is REALLY slow.
    As I said I'm new to PL/SQL and I know that this solution is really newbish, but I can't find another way of doing this.
    So I decided to come here and ask you for some tips on how to improve the performance of this.
    I think you are getting confused already, so I'm just going to put some comments in the code.
    I know it's not a procedure as it stands now, but it will be once I create a better code. I don't think it matters for now.
    DECLARE
    CURSOR cur_c21 IS
        select * from table1
        where segmentID = 'C21'
        order by start_date_of_call;     -- start_date_of_call holds the beginning of a specific part of the call. It's a DATE.
    CURSOR cur_c22 IS
        select * from table1
        where segmentID = 'C22'
        order by start_date_of_call;
    CURSOR cur_c22_2 IS
        select * from table1
        where segmentID = 'C22'
        order by start_date_of_call;
    cursor cur_c23 is
        select * from table1
        where segmentID = 'C23'
        order by start_date_of_call;
    v_temp_rec_c22 cur_c22%ROWTYPE;
    v_dur table1.duration%TYPE;           -- used to store the duration of the call. It's a NUMBER.
    BEGIN
    insert into table2
    select * from table1 where segmentID = 'C1';     -- inserting the calls which are less than 30 minutes long
    -- and here starts the mess
    FOR rec_c21 IN cur_c21 LOOP        -- taking the first part of the call
       v_dur := rec_c21.duration;      -- recording its duration
       FOR rec_c22 IN cur_c22 LOOP     -- starting to check if there is a middle part for the call
          IF rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
            (rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48)
    /* if the numbers are the same and the date difference is 30 minutes then we have a middle part and we start searching for the next middle. */
          THEN
             v_dur := v_dur + rec_c22.duration;     -- updating the new duration
             v_temp_rec_c22 := rec_c22;             -- recording the current record in another variable because I use it for the next check
             FOR rec_c22_2 in cur_c22_2 LOOP
                IF rec_c22_2.callingnumber = v_temp_rec_c22.callingnumber AND rec_c22_2.callednumber = v_temp_rec_c22.callednumber AND
                  (rec_c22_2.start_date_of_call - v_temp_rec_c22.start_date_of_call) = (1/48)
    /* logic is the same as before but comparing with the last value in v_temp...
    And because the data in the cursors is ordered by date in ascending order it's easy to search for other middle parts. */
                THEN
                   v_dur := v_dur + rec_c22_2.duration;
                   v_temp_rec_c22 := rec_c22_2;
                END IF;
             END LOOP;
          END IF;
          EXIT WHEN rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
                   (rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48);
    /* exiting the loop if we have at least one middle part.
    (I couldn't find a way to write this more cleanly, like "exit when (the above IF is true)".) */
       END LOOP;
       FOR rec_c23 IN cur_c23 LOOP
          IF (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
             (rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration
    /* we should always have one last part, so we need this check.
    If we don't have the "v_dur != rec_c21.duration" part it will execute the code inside only if we don't have middle parts
    (yes, we can have these situations in calls longer than 30 and less than 60 minutes). */
          THEN
             v_dur := v_dur + rec_c23.duration;
             rec_c21.duration := v_dur;             -- updating the duration
             rec_c21.segmentID := 'C1';
             INSERT INTO table2 VALUES rec_c21;     -- inserting the whole call into table2
          END IF;
          EXIT WHEN (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
                    (rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration;
                    -- exit the loop when the last part has been found.
       END LOOP;
    END LOOP;
    END;
    I'm using Oracle 11g and version 1.5.5 of SQL Developer.
    It's my first post here so hope this is the right sub-forum.
    I tried to explain everything as deeply as possible (sorry if it's too long), and I think the code got somewhat hard to read with all these comments. If you want, I can remove them.
    I know I'm still missing a lot of knowledge so every help is really appreciated.
    Thank you very much in advance!

    Atiel wrote:
    Thanks for the suggestion, but the thing is that segmentID must stay the same for all. The data in this field just tells us whether this is a record of a complete call (C1) or a partial record of a call (C21, C22, C23). So in table2, as every record will be a complete call, the segmentID must be C1 for all.
    Well that's not a problem. You just hard code 'C1' instead of applying the row number as I was doing:
    SQL> ed
    Wrote file afiedt.buf
      1  select 'C1' as segmentid
      2        ,start_date_of_call, duration, callingnumber, callednumber
      3  from (
      4        select distinct
      5               min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
      6              ,sum(duration) over (partition by callingnumber, callednumber) as duration
      7              ,callingnumber
      8              ,callednumber
      9        from table1
    10*      )
    SQL> /
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER
    C1         11-MAY-2012 12:13:10 8020557824 1982032041      0631432831624
    C1         15-MAR-2012 09:07:26  269352960 5581790386      0113496771567
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349
    Another thing is that, as I said above, the actual table has 120 fields. Do I have to list them all manually if I use something similar?
    If that's what you need, then yes, you would have to list them. You only get data if you tell it you want it. ;)
    Of course if you are taking the start_date_of_call, callingnumber and callednumber as the 'key' to the record, then you could join the results of the above back to the original table1 and pull out the rest of the columns that way...
    SQL> select * from table1;
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER          COL1       COL2       COL3
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349          556         40       5.32
    C21        15-MAR-2012 09:07:26  134676480 5581790386      0113496771567          219        100      10.16
    C23        11-MAY-2012 09:37:26  134676480 5581790386      0113496771567          321         73       2.71
    C21        11-MAY-2012 12:13:10 3892379648 1982032041      0631432831624          959         80       2.87
    C22        11-MAY-2012 12:43:10 3892379648 1982032041      0631432831624          375         57       8.91
    C22        11-MAY-2012 13:13:10  117899264 1982032041      0631432831624          778         27       1.42
    C23        11-MAY-2012 13:43:10  117899264 1982032041      0631432831624          308         97       3.26
    7 rows selected.
    SQL> ed
    Wrote file afiedt.buf
      1  with t2 as (
      2  select 'C1' as segmentid
      3        ,start_date_of_call, duration, callingnumber, callednumber
      4  from (
      5        select distinct
      6               min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
      7              ,sum(duration) over (partition by callingnumber, callednumber) as duration
      8              ,callingnumber
      9              ,callednumber
    10        from table1
    11       )
    12  )
    13  --
    14  select t2.segmentid, t2.start_date_of_call, t2.duration, t2.callingnumber, t2.callednumber
    15        ,t1.col1, t1.col2, t1.col3
    16  from   t2
    17         join table1 t1 on (   t1.start_date_of_call = t2.start_date_of_call
    18                           and t1.callingnumber = t2.callingnumber
    19                           and t1.callednumber = t2.callednumber
    20*                          )
    SQL> /
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER          COL1       COL2       COL3
    C1         11-MAY-2012 12:13:10 8020557824 1982032041      0631432831624          959         80       2.87
    C1         15-MAR-2012 09:07:26  269352960 5581790386      0113496771567          219        100      10.16
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349          556         40       5.32
    SQL>Of course this is pulling back the additional columns for the record that matches the start_date_of_call for that calling/called number pair, so if the values differed from row to row within the calling/called number pair you may need to aggregate those (take the minimum/maximum etc. as required) as part of the first query. If the values are known to be the same across all records in the group then you can just pick them up from the join to the original table as I coded in the above example (only in my example the data was different across all rows).

  • Improve the performance in stored procedure using sql server 2008 - esp where clause in very big table - Urgent

    Hi,
    I am looking for inputs on tuning a stored procedure in SQL Server 2008. I am new to performance tuning in SQL, PL/SQL and Oracle. I am currently facing an issue in a stored procedure and need to increase its performance by code optimization / filtering the records using a WHERE clause on a larger table. The stored procedure generates an audit report which is accessed by approx. 10 admin users, typically 2-3 times a day each.
    It has a CTE (common table expression) which is referenced 2 times within the SP. This CTE is very big and fetches records from several tables without a WHERE clause. This causes many records to be fetched from the DB and then processed. The stored procedure runs on a pre-prod server which has 6 GB of memory and is built on a virtual server; the same proc ran well (40 sec) on the prod server, which has 64 GB of RAM on a physical server. The execution time in pre-prod is 1 min 9 sec, which needs to be reduced to around 10 sec. The execution time also varies from run to run: sometimes it is 50 sec and sometimes 1 min 9 sec.
    Please advise on the best option/practice for using a WHERE clause to filter the records, and on which tool to use to tune the procedure (execution plan, SQL Profiler?). I am using Toad for SQL Server 5.7. I see an execution plan tab available while running the SP, but when I run it, it throws an error. Please help and provide inputs.
    Thanks,
    Viji

    You've asked a SQL Server question in an Oracle forum.  I'm expecting that this will get locked momentarily when a moderator drops by.
    Microsoft has its own forums for SQL Server; you'll have more luck over there.  When you do go there, however, you'll almost certainly get more help if you can pare down the problem (or at least better explain what your code is doing).  Very few people want to read hundreds of lines of code, guess what it's supposed to do, guess what is slow, and then guess at how to improve things.  Posting query plans, the results of profiling, cutting out any code that is unnecessary to the performance problem, etc. will get you much better answers.
    Justin
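
    As a starting point, here is a minimal way to capture the kind of numbers Justin mentions (a sketch using standard SQL Server session options; the procedure name is a placeholder, not from the thread):

    -- In the session running the slow procedure, turn on I/O and timing statistics.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    EXEC dbo.the_audit_report_procedure;  -- hypothetical name for the SP under test

    -- The Messages tab then shows logical reads and CPU/elapsed time per statement,
    -- which is exactly the detail worth posting alongside the query plan.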

  • Is There any way to improve the performance on this code

    Hi all, can anyone tell me how to improve the performance of the code below?
    I need to calculate the opening balance of a G/L account, so instead of using BSEG I am using BSIS.
    Is there any way to improve this code's performance?
    Any help would be appreciated.
    REPORT  ZTEMP5 NO STANDARD PAGE HEADING LINE-SIZE 190.
    data: begin of collect occurs 0,
           MONAT TYPE MONAT,
           HKONT TYPE HKONT,
           BELNR TYPE BELNR_D,
           BUDAT TYPE BUDAT,
           WRBTR TYPE WRBTR,
           SHKZG TYPE SHKZG,
           SGTXT TYPE SGTXT,
           AUFNR TYPE AUFNR_NEU,
           TOT   LIKE BSIS-WRBTR,
    end of collect.
    TYPES: BEGIN OF TY_BSIS,
           MONAT TYPE MONAT,
           HKONT TYPE HKONT,
           BELNR TYPE BELNR_D,
           BUDAT TYPE BUDAT,
           WRBTR TYPE WRBTR,
           SHKZG TYPE SHKZG,
           SGTXT TYPE SGTXT,
           AUFNR TYPE AUFNR_NEU,
    END OF TY_BSIS.
    DATA: IT_BSIS TYPE TABLE OF TY_BSIS,
          WA_BSIS TYPE TY_BSIS.
    DATA: TOT TYPE WRBTR,
          SUMA TYPE WRBTR,
          VALUE TYPE WRBTR,
          VALUE1 TYPE WRBTR.
    SELECTION-SCREEN: BEGIN OF BLOCK B1.
    PARAMETERS:  S_HKONT LIKE WA_BSIS-HKONT DEFAULT '0001460002' .
    SELECT-OPTIONS: S_BUDAT FOR WA_BSIS-BUDAT,
                    S_AUFNR FOR WA_BSIS-AUFNR DEFAULT '200020',
                    S_BELNR FOR WA_BSIS-BELNR.
    SELECTION-SCREEN: END OF BLOCK B1.
    AT SELECTION-SCREEN OUTPUT.
      LOOP AT SCREEN.
        IF SCREEN-NAME = 'S_HKONT'.
          SCREEN-INPUT = 0.
          MODIFY SCREEN.
        ENDIF.
      ENDLOOP.
    START-OF-SELECTION.
      SELECT MONAT
             HKONT
             BELNR
             BUDAT
             WRBTR
             SHKZG
             SGTXT
             AUFNR
             FROM BSIS
             INTO TABLE IT_BSIS
             WHERE HKONT EQ S_HKONT
             AND   BELNR IN S_BELNR
             AND   BUDAT IN S_BUDAT
             AND   AUFNR IN S_AUFNR.
    *  if sy-subrc <> 0.
    *    message 'No Data' type 'I'.
    *  endif.
      SELECT SUM( WRBTR )
             FROM BSIS
             INTO COLLECT-TOT
             WHERE HKONT EQ S_HKONT
             AND BUDAT < S_BUDAT-LOW
             AND AUFNR IN S_AUFNR.
    END-OF-SELECTION.
      CLEAR: S_BUDAT, S_AUFNR, S_BELNR, S_HKONT.
      LOOP AT IT_BSIS INTO WA_BSIS.
    IF wa_bsis-SHKZG = 'H'.
       wa_bsis-WRBTR = 0 - wa_bsis-WRBTR.
    ENDIF.
        collect-MONAT  = wa_bsis-monat.
        collect-HKONT  = wa_bsis-hkont.
        collect-BELNR  = wa_bsis-belnr.
        collect-BUDAT  = wa_bsis-budat.
        collect-WRBTR  = wa_bsis-wrbtr.
        collect-SHKZG  = wa_bsis-shkzg.
        collect-SGTXT  = wa_bsis-sgtxt.
        collect-AUFNR  = wa_bsis-aufnr.
        COLLECT collect.
        CLEAR: COLLECT, WA_BSIS.
      ENDLOOP.
      LOOP AT COLLECT.
        AT end of HKONT.
          WRITE:/65 'OpeningBalance',
                 85  collect-tot.
          skip 1.
        ENDAT.
        WRITE:/06 COLLECT-BELNR,
               22 COLLECT-BUDAT,
               32 COLLECT-WRBTR,
               54 COLLECT-SGTXT.
        AT end of MONAT.
          SUM.
          WRITE:/ COLLECT-MONAT COLOR 1.
          WRITE:32 COLLECT-WRBTR COLOR 1.
          VALUE = COLLECT-WRBTR.
          SKIP 1.
        ENDAT.
        VALUE1 = COLLECT-TOT +  VALUE.
        AT end of MONAT.
          WRITE:85 VALUE1.
        ENDAT.
      endloop.
      CLEAR: COLLECT, SUMA, VALUE, VALUE1.
    TOP-OF-PAGE.
      WRITE:/06 'Doc No',
             22 'Post Date',
             39 'Amount',
             54 'Text'.
    Moderator message : See the Sticky threads (related for performance tuning) in this forum. Thread locked.

    Hi Ben,
    both BSIS selects would become faster if you can add company code BUKRS as the 1st field of the WHERE clause, because it's the 1st field of the primary key and HKONT is the 2nd.
    If you have no table index with HKONT as its 1st field, the access is a full table scan.
    If possible, add BUKRS as the 1st field of the WHERE clause; otherwise ask your Basis team for an additional BSIS index.
    Regards,
    Klaus

  • How to improve the performance of one program in one select query

    Hi,
    I am facing a performance issue in one program; I have included part of its code below.
    It is spending a lot of time in the select query below. How can I improve its performance?
    A quick response is highly appreciated.
    Program code
    DATA: BEGIN OF t_dels_tvpod OCCURS 100,
    vbeln LIKE tvpod-vbeln,
    posnr LIKE tvpod-posnr,
    lfimg_diff LIKE tvpod-lfimg_diff,
    calcu LIKE tvpod-calcu,
    podmg LIKE tvpod-podmg,
    uecha LIKE lips-uecha,
    pstyv LIKE lips-pstyv,
    xchar LIKE lips-xchar,
    grund LIKE tvpod-grund,
    END OF t_dels_tvpod.
    DATA: l_tabix LIKE sy-tabix,
    lt_dels_tvpod LIKE t_dels_tvpod OCCURS 10 WITH HEADER LINE,
    ls_dels_tvpod LIKE t_dels_tvpod.
    SELECT vbeln INTO TABLE lt_dels_tvpod FROM likp
    FOR ALL ENTRIES IN t_dels_tvpod
    WHERE vbeln = t_dels_tvpod-vbeln
    AND erdat IN s_erdat
    AND bldat IN s_bldat
    AND podat IN s_podat
    AND ernam IN s_ernam
    AND kunnr IN s_kunnr
    AND vkorg IN s_vkorg
    AND vstel IN s_vstel
    AND lfart NOT IN r_del_types_exclude.
    Waiting for quick response.
    Best regards,
    BDP

    Bansidhar,
    1) You need to add a check to make sure that internal table t_dels_tvpod (used in the FOR ALL ENTRIES clause) is not blank. If it is blank, skip the SELECT statement.
    2) Check the performance with and without the clause 'AND lfart NOT IN r_del_types_exclude'. Sometimes NOT causes the select statement to not use an index. Instead of 'lfart NOT IN r_del_types_exclude', use 'lfart IN r_del_types_exclude' and build r_del_types_exclude using r_del_types_exclude-sign = 'E' instead of 'I'.
    3) Make sure that the table used in the FOR ALL ENTRIES clause has unique delivery numbers.
    Try doing something like this.
    TYPES: BEGIN OF ty_del_types_exclude,
             sign(1)   TYPE c,
             option(2) TYPE c,
             low       TYPE likp-lfart,
             high      TYPE likp-lfart,
           END OF ty_del_types_exclude.
    DATA: w_del_types_exclude TYPE          ty_del_types_exclude,
          t_del_types_exclude TYPE TABLE OF ty_del_types_exclude,
          t_dels_tvpod_tmp    LIKE TABLE OF t_dels_tvpod        .
    IF NOT t_dels_tvpod[] IS INITIAL.
    * Assuming that I would like to exclude delivery types 'LP' and 'LPP'
      CLEAR w_del_types_exclude.
      REFRESH t_del_types_exclude.
      w_del_types_exclude-sign = 'E'.
      w_del_types_exclude-option = 'EQ'.
      w_del_types_exclude-low = 'LP'.
      APPEND w_del_types_exclude TO t_del_types_exclude.
      w_del_types_exclude-low = 'LPP'.
      APPEND w_del_types_exclude TO t_del_types_exclude.
      t_dels_tvpod_tmp[] = t_dels_tvpod[].
      SORT t_dels_tvpod_tmp BY vbeln.
      DELETE ADJACENT DUPLICATES FROM t_dels_tvpod_tmp
        COMPARING
          vbeln.
      SELECT vbeln
        FROM likp
        INTO TABLE lt_dels_tvpod
        FOR ALL ENTRIES IN t_dels_tvpod_tmp
        WHERE vbeln EQ t_dels_tvpod_tmp-vbeln
        AND erdat IN s_erdat
        AND bldat IN s_bldat
        AND podat IN s_podat
        AND ernam IN s_ernam
        AND kunnr IN s_kunnr
        AND vkorg IN s_vkorg
        AND vstel IN s_vstel
        AND lfart IN t_del_types_exclude.
    ENDIF.

  • How to improve the performance of serialization/deserialization?

    Hi, Friends,
    I have a question about how to improve the performance of serialization/deserialization.
    When an object is serialized, the entire tree of objects rooted at the object is also serialized. When it is deserialized, the tree is reconstructed. For example, suppose a serializable Father object contains (a serializable field of) an array of Child objects. When a Father object is serialized, so is the array of Child objects.
    For the sake of performance consideration, when I need to deserialize a Father object, I don't want to deserialize any Child object. However, I should be able to know that Father object has children. I should also be able to deserialize any child of that Father object when necessary.
    Could you tell me how to achieve the above idea?
    Thanks.
    Youbin

    You could try something like this...
    import java.io.*;
    import java.util.*;

    class Child implements Serializable {
        int id;
        Child(int _id) { id = _id; }
        public String toString() { return String.valueOf(id); }
    }

    class Father implements Serializable {
        Child[] children = new Child[10];

        public Father() {
            // fills every slot with the same Child instance; fine for a demo
            Arrays.fill(children, new Child(1001));
        }

        // must be private for the serialization machinery to call it
        private void readObject(ObjectInputStream stream)
                throws IOException, ClassNotFoundException {
            int numchildren = stream.readInt();
            children = new Child[numchildren];
            for (int i = 0; i < numchildren; i++)
                children[i] = (Child) stream.readObject();
            // note: do not close the stream here -- the caller owns it
        }

        // must be private; mirrors readObject: child count first, then children
        private void writeObject(ObjectOutputStream stream) throws IOException {
            stream.writeInt(children.length);
            for (int i = 0; i < children.length; i++)
                stream.writeObject(children[i]);
        }

        Child[] getChildren() { return children; }
    }

    class FatherProxy {
        int numchildren;
        String filename;

        public FatherProxy(String _filename) throws IOException {
            filename = _filename;
            ObjectInputStream ois =
                new ObjectInputStream(new FileInputStream(filename));
            // reads only the child count written ahead of the Father object
            numchildren = ois.readInt();
            ois.close();
        }

        int getNumChildren() { return numchildren; }

        Child[] getChildren() throws IOException, ClassNotFoundException {
            ObjectInputStream ois =
                new ObjectInputStream(new FileInputStream(filename));
            ois.readInt();                      // skip the count header
            Father f = (Father) ois.readObject();
            ois.close();
            return f.getChildren();
        }
    }

    public class fatherref {
        public static void main(String[] args) throws Exception {
            // create the serialized file, writing the child count first so
            // the proxy can read it without deserializing the whole Father
            Father f = new Father();
            ObjectOutputStream oos =
                new ObjectOutputStream(new FileOutputStream("father.ser"));
            oos.writeInt(f.getChildren().length);
            oos.writeObject(f);
            oos.close();

            // read in just what is needed -- numchildren
            FatherProxy fp = new FatherProxy("father.ser");
            System.out.println("numchildren: " + fp.getNumChildren());
            // do some processing

            // now you need the rest -- the children
            Child[] c = fp.getChildren();
            System.out.println("children:");
            for (int i = 0; i < c.length; i++)
                System.out.println("i " + i + ": " + c[i]);
        }
    }

  • How to improve the performance in Xcelsius

    Hi ,
    I have nearly 5000 records in Excel, and when I try to import this data into Xcelsius it takes nearly 3 hrs; after that, when I try to select the source data again, it takes more than 3 hrs. I think this is happening because I increased the max. no. of rows to 5000, but is there any way to improve the performance?
    Regards,
    Sany

    hi sany
    5000 records is much too much. Xcelsius is designed for highly aggregated, small data sets.
    Although 5000 records may work, consider that the default maxrows value is about 512. I think that is a kind of sign.
    Try to reduce your data set, or dynamically fetch only the data you really need at runtime.
    Best regards
    Ulrich
    http://www.xcelsius-insight.com
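
    If the 5000 rows come from a relational source, a pre-aggregating query (a sketch with hypothetical table and column names, not from the original post) can shrink the data set before it ever reaches Xcelsius:

    -- Collapse detail rows to one row per region and month before import.
    SELECT region,
           sales_month,
           SUM(revenue) AS total_revenue,
           COUNT(*)     AS order_count
    FROM   sales_detail
    GROUP BY region, sales_month;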

  • Inner Join. How to improve the performance of inner join query

    How can I improve the performance of this inner join query?
    The query is:
    select f1~ablbelnr
             f1~gernr
             f1~equnr
             f1~zwnummer
             f1~adat
             f1~atim
             f1~v_zwstand
             f1~n_zwstand
             f1~aktiv
             f1~adatsoll
             f1~pruefzahl
             f1~ablstat
             f1~pruefpkt
             f1~popcode
             f1~erdat
             f1~istablart
             f2~anlage
             f2~ablesgr
             f2~abrdats
             f2~ableinh
                from eabl as f1
                inner join eablg as f2
            on f1~ablbelnr = f2~ablbelnr
                into corresponding fields of table it_list
                where f1~ablstat in s_mrstat
                %_HINTS ORACLE 'USE_NL (T_00 T_01) index(T_01 "EABLG~0")'.
    I want to modify the query, since it's taking a lot of time to load the data.
    Please suggest.
    Treat this as very urgent.

    Hi Shyamal,
    In your program you are using "INTO CORRESPONDING FIELDS OF".
    Try not to use this addition in your select query.
    Instead, just use "INTO TABLE it_list".
    As an example:
    Run a normal query using "INTO CORRESPONDING FIELDS OF" in a program. Now go to SE30 (Runtime Analysis), give the program name and execute it.
    If you click on the Analyze button, you can see the analysis for the query. The one shown as a "red" line tells you that you need to look for alternate methods.
    On the other hand, if you use "INTO TABLE itab", it will give you an entirely different analysis.
    So try not to use "INTO CORRESPONDING FIELDS" in your query.
    Regards,
    SP.

  • Please help me how to improve the performance of this query further.

    Hi All,
    Please help me how to improve the performance of this query further.
    Thanks.

    Hi,
    this is not your first SQL tuning request in this community -- you really should learn how to obtain performance diagnostics.
    The information you posted is not nearly enough to even start troubleshooting the query -- you haven't specified elapsed time, I/O, or the actual number of rows the query returns.
    The only piece of information we have is saying that your query executes within a second. If we believe this, then your query doesn't need tuning. If we don't, then we throw it away
    and we're left with nothing.
    Start by reading this blog post: Kyle Hailey » Power of DISPLAY_CURSOR
    and applying this knowledge to your case.
    Best regards,
      Nikolay
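
    For reference, the basic recipe from that post looks roughly like this (the GATHER_PLAN_STATISTICS hint and DBMS_XPLAN.DISPLAY_CURSOR are standard Oracle features; the query shown is a placeholder, not the poster's):

    -- Run the statement with row-source statistics collected...
    SELECT /*+ GATHER_PLAN_STATISTICS */ *
    FROM   some_table
    WHERE  some_col = :b1;

    -- ...then pull the actual plan, with actual row counts, buffer gets and
    -- timings, for the last execution in this session.
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

    Comparing the estimated rows (E-Rows) against the actual rows (A-Rows) in that output is usually what pinpoints where the plan goes wrong.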
