Execution time is too high.

Hi,
I have several XSQL files that each take approximately one minute to execute on their own. I want to run one XSQL page that includes them all, but it times out and I get a server error. Is there any way I can get the execution time down, or change the timeout setting?
I am running the XSQL version that comes with 8.1.7.
Terje K.
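For reference, one way to combine pages like this is the <xsql:include-xsql> action; a minimal sketch is below, where the page names and the "demo" connection alias are made up (and the overall request is still subject to the servlet engine's own timeout, which is configured in the web/servlet container rather than in the page itself):

<?xml version="1.0"?>
<!-- Sketch only: the file names and the "demo" connection alias are assumptions, not from the original post. -->
<page connection="demo" xmlns:xsql="urn:oracle-xsql">
  <xsql:include-xsql href="section1.xsql"/>
  <xsql:include-xsql href="section2.xsql"/>
  <xsql:include-xsql href="section3.xsql"/>
</page>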

If Oracle8i JServer is included in the Oracle 8i package, then yes. The database itself is not large (approx. 50 MB with data), but the results of the queries can get somewhat large. Here is an example:
1. First I made a view:
create view view_section2_issue as
SELECT SPVSPC.OPN, OPE, PCA, PCS, PCSR, OSC, PWTT, OVSM, PLWD, PCSOD, PDC,
       TO_CHAR(ISOD,'dd.MM.YY') AS ISOD, TO_CHAR(PCSOD,'dd.MM.YY') AS OD, PSCN,
       PMDP1, PMDP2, PMDP3, PMDP4, PMDP5, PMDP6, PMDP7, PMDP8, PMDP9, PMDP10, PMDP11, PMDP12,
       PDT1, PDT2, PDT3, PDT4, PDT5, PDT6, PDT7, PDT8, PDT9, PDT10, PDT11, PDT12,
       PMDC, PMCA, PMMDP, PMDT, PMLWD, PMWTT, PMSCN, PMNS, PMWTH, PMSCH, PMOD
from SPVSISSU, SPVSISS2, SPVSPCS2, SPVSPC
where SPVSISSU.OPN = SPVSPC.OPN
and SPVSISSU.ISS is not null
and SPVSISS2.OPN = SPVSISSU.OPN
and SPVSISS2.ISSUE = SPVSISSU.ISR2
and SPVSPCS2.OPCS = SPVSISS2.IOPCS
and SPVSPCS2.PCSR = SPVSISS2.IPCS_REV
Then I made the query (with some cursors):
SELECT OPE, PCA, PCS, PCSR, OSC, PWTT, OVSM, PLWD, PCSOD, PDC, OD, PSCN, PMDP1, PMDP2, PMDP3, PMDP4, PMDP5, PMDP6, PMDP7, PMDP8, PMDP9, PMDP10, PMDP11, PMDP12, PDT1, PDT2, PDT3, PDT4, PDT5, PDT6, PDT7, PDT8, PDT9, PDT10, PDT11, PDT12, PMDC, PMCA, PMMDP, PMDT, PMLWD, PMWTT, PMSCN, PMNS, PMWTH, PMSCH, PMOD,
CURSOR( SELECT PNS, POD, PWTH, PSCH
FROM spvspcs4
WHERE spvspcs4.opn = view_section2_issue.opn
and spvspcs4.pcs = view_section2_issue.pcs
and spvspcs4.pcsr = view_section2_issue.pcsr ) as wallThickness,
CURSOR( SELECT PELM, SST, PDSTD, PFS, PTS, PTY, PMN, MDS, ESK, PRM, PAGEBREAK, PMELL, page, start_remark(opn,pcs,pcsr,pel,pell) starten,
end_remark(opn,pcs,pcsr,pel,pell,start_remark(opn,pcs,pcsr,pel,pell)) as slutt
FROM spvspcs6
WHERE spvspcs6.opn = view_section2_issue.opn
and spvspcs6.pcs = view_section2_issue.pcs
and spvspcs6.pcsr = view_section2_issue.pcsr ) as elements,
CURSOR( SELECT PVELM, VDS, PVFS, PVTS, PVRM, PMVELL
FROM spvspcs7
WHERE spvspcs7.opn = view_section2_issue.opn
and spvspcs7.pcs = view_section2_issue.pcs
and spvspcs7.pcsr = view_section2_issue.pcsr ) as vType,
CURSOR( SELECT PBLP, PAGEBREAK, LTXT
FROM spvspcs5
WHERE spvspcs5.opn = view_section2_issue.opn
and spvspcs5.pcs = view_section2_issue.pcs
and spvspcs5.pcsr = view_section2_issue.pcsr ) as kommentar,
CURSOR( SELECT count(*) as tot
FROM spvspcs5
WHERE pagebreak = 'P'
and spvspcs5.opn = view_section2_issue.opn
and spvspcs5.pcs = view_section2_issue.pcs
and spvspcs5.pcsr = view_section2_issue.pcsr ) as kpages,
CURSOR( SELECT count(*) as tot
FROM spvspcs6
WHERE pagebreak = 'P'
and spvspcs6.opn = view_section2_issue.opn
and spvspcs6.pcs = view_section2_issue.pcs
and spvspcs6.pcsr = view_section2_issue.pcsr ) as tpages
from view_section2_issue
where OPN = {@opn}
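A common first step for cutting the execution time of a page like this is to make sure the columns each nested CURSOR() subquery filters on are indexed. A minimal sketch, assuming such indexes do not already exist (the index names are invented):

-- Hypothetical indexes on the correlation columns used by the CURSOR() subqueries above;
-- the names are made up, and existing indexes may already cover these columns.
CREATE INDEX spvspcs4_opn_pcs_pcsr ON spvspcs4 (opn, pcs, pcsr);
CREATE INDEX spvspcs5_opn_pcs_pcsr ON spvspcs5 (opn, pcs, pcsr);
CREATE INDEX spvspcs6_opn_pcs_pcsr ON spvspcs6 (opn, pcs, pcsr);
CREATE INDEX spvspcs7_opn_pcs_pcsr ON spvspcs7 (opn, pcs, pcsr);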

Similar Messages

  • SCOM2012 Alert: SQL 2008 DB Average Wait Time & Recompilation is too high

    We have SCOM 2012 SP1 CU3 installed.
    I continuously receive the critical alerts below from SQL servers; please help me to resolve this issue.
    SQL 2008 DB Average Wait Time is too high
    SQL DB 2008 SQL Recompilation is too high

    I don't know about anyone else, but overriding those monitors and rules didn't work for me. I had to override:
    SQL Re-Compilation monitor for SQL 2012 DB Engine
    SQL Re-Compilation monitor for SQL 2008 DB Engine
    Average Wait Time monitor for SQL 2012 DB
    Average Wait Time monitor for SQL 2008 DB
    Now I am wondering if other monitors are valid as well; in particular, I have multiple alerts for:
    Buffer Cache Hit Ratio monitor for SQL 2008 DB Engine is too low
    Page Life Expectancy (s) for 2008 DB Engine is too low
    Is anyone else seeing these issues as well?

  • EP Load time is too High - and Browser crashes

    Hi,
      We have customized the Enterprise Portal for our company. Everything is going fine, but some users who are on the same network as others are facing a long load-time issue. It takes a lot of time to load the home page, and sometimes it even crashes.
    Is there any other reason for such occurrences besides network errors, such as a browser setting or anything else? Please help me out with this.
    Best Regards,
    -Shabir Rahim.

    Shabir,
    You might want to look at these threads.
    Performance tuning problem
    How to guide for Fine tuning the performance of SAP EP
    Good Luck!
    Sandeep Tudumu

  • The Average Wait Time of SQL instance "CONFIGMGRSEC" on computer "SEC_SITE_SERVER" is too high

    I have a SCCM 2012 SP1 CU4 environment with SCOM monitoring installed.
    I also have 4 secondary sites installed below my primary. The secondaries use the SQL 2012 Express version deployed by default by the secondary site installation.
    My SCOM monitoring is generating tickets with the following message:
    The Average Wait Time of SQL instance "CONFIGMGRSEC" on computer "<SEC_SITE_SERVER>" is too high
    How can I solve this? Or do I need to ignore it?

    Never ignore messages, but tune them.
    In this specific case you might want to take a look at this:
    http://social.technet.microsoft.com/Forums/en-US/ffeefe0d-0ef7-49a3-862e-9be27989dc5d/scom2012-alert-sql-2008-db-average-wait-time-recompilationis-too-high?forum=operationsmanagergeneral
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • Why does the execution time increase with a while loop, but not with "Run continuously"?

    Hi all,
    I have a serious time problem that I don't know how to solve because I don't know exactly where it comes from.
    I command two RF switches via a DAQ card (NI USB-6008). Only one position at a time can be selected on each switch. Basically, the VI created for this functionality (by a co-worker) resets all the DAQ outputs and then activates the desired ones. It has three inputs: two simple string controls and an array of clusters, which contains the list of all the outputs and some information about what is connected (specific to my application).
    I use this VI in a complex application, and I get some problems with the execution time, which increased each time I called the VI, so I made a test VI (TimeTesting.vi) to figure out where the problem came from. In this special VI I record the execution time in a csv file to analyze afterwards in Excel.
    After several tests, I found that if I run this test VI with a while loop, the execution time increases at each cycle, but if I remove the while loop and use the "Run continuously" functionality, the execution time remains the same. In my top-level application I have while loops and events, so the execution time increases there too.
    Could someone explain to me why the execution time increases, and how I can avoid that? I attached my test VI and the necessary subVIs, as well as a picture of a graph which shows the execution time with a while loop and with "Run continuously".
    Thanks a lot for your help!
    Solved!
    Go to Solution.
    Attachments:
    TimeTesting.zip ‏70 KB
    Graph.PNG ‏20 KB

    jul7290 wrote:
    Thank you very much for your help! I added the "Clear task" vi and now it works properly.
    If you are still using Run Continuously you should stop. That is meant strictly for debugging. In fact, I can't even tell you the last time I ever used it. If you want your code to repeat, you should use loops and control the behavior of the code.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot

  • Execution time of a simple vi too long

    I'm working with LabVIEW 6.0.2 on a computer (AMD ~700 MHz) under Windows 2000. The computer is connected to the instruments (e.g. a Keithley 2400 SourceMeter) via GPIB (NI PCI-GPIB card). When trying to read the output of the K2400 with a very simple VI (sending the string READ? to the instrument with GPIBWrite (mode 2) and subsequently reading 100 bytes with GPIBRead (mode 2) from the instrument), the execution time mostly exceeds 1 s (execution highlighting disabled). Sometimes it can be much faster, but this is very irreproducible. I played around with the GPIBRead and GPIBWrite modes and with the number of bytes to be read from the device, as well as with the hardware settings of the Keithley 2400, but nothing seemed to work. The API calls captured by NI Spy (lines 8 - 160) mainly consist of ThreadIberr() and ibwait(UD0, 0x0000).
    As this problem is the main factor limiting our measurement speed, I would be grateful for any help.
    Thanks a lot
    Bettina Welter

    Hello,
    Thanks for contacting National Instruments. It seems like the 1 second delay that is occurring is due to the operation being called. ThreadIberr returns the value of iberr, while ibwait simply implements a wait. These two get called repeatedly while the GPIB device waits for the instrument (the K2400, in your case) to finish its operation and respond back. It is quite possible that when you query the Keithley to send back 100 bytes of data, it has to gather them from its buffer (if they have already been generated). And if there aren't 100 bytes of data in the buffer, the Keithley will keep the NRFD line asserted while it gathers 100 bytes of data. After the data has been gathered, the NRFD line is deasserted, at which point ThreadIberr will detect the change in the ibsta status bit and read the 100 bytes.
    So make sure that the 100 bytes of data that you are requesting don't take too long to be gathered, since this is where the main delay lies. Hope this information helps. Please let us know if you have further questions.
    A Saha
    Applications Engineering
    National Instruments
    Anu Saha
    Academic Product Marketing Engineer
    National Instruments

  • Execution time of query with high variance

    I have an Oracle Database 11.2 R2 which is set up just for testing purposes, so there is no activity other than mine. Now I have a query which I ran 10 times in a row. Between the executions I always flushed the BUFFER_CACHE and SHARED_POOL. The strange thing is that the execution time of the query varies strongly, from 13 seconds up to 207 seconds. Of the 10 executions, 4 took less than 25 seconds and 4 took more than 120 seconds.
    What could be the reason for this? As I've said, there is no other activity on the database and it is always the same query with the same parameters running on the same set of data.
    The background to this is that I would like to compare the execution time of exactly the same query with different database settings. So I thought I could just run the query ten times and use the average, but I didn't expect such a high variance.
    Kind regards
    Peter

    Hi,
    for each execution, look at:
    1) plan hash value
    2) total logical I/O
    3) physical I/O
    That should give you some clues as to what's going on.
    Best regards,
    Nikolay
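    If it helps, those three figures can be pulled back from V$SQL after each run; a rough sketch follows (the sql_text filter is a placeholder, not the actual query, and the counters are cumulative per cursor, so flushing the shared pool between runs keeps them effectively per-execution):
    -- Sketch: plan and I/O figures for the test statement after a run.
    SELECT sql_id,
           plan_hash_value,
           buffer_gets,                              -- total logical I/O
           disk_reads,                               -- total physical I/O
           ROUND(elapsed_time / 1000000, 1) AS elapsed_seconds
      FROM v$sql
     WHERE sql_text LIKE 'SELECT /* my_test_query */%';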

  • BluRay error message "code 6, audio buffer underflows. Total bitrate is too high near time = 000000 seconds."

    Hello,
    I'm trying to create Blu-rays using Encore and I keep getting the following error message: "code 6, audio buffer underflows. Total bitrate is too high near time = 000000 seconds."
    For info, I created an H.264 Blu-ray NTSC 24fps master from an Apple ProRes HQ file in Adobe Media Encoder. Duration: 52 minutes, size: 11 GB. I'm on Mac OS X 10.9.5, the Blu-ray burner is a Samsung SE-506CB/RSWD and the BR discs are TDK Blu-ray Disc 50 Spindle - 25GB 4X BD-R - Printable.
    I looked around in forums and tried the following without success: replacing the disc name with a shorter name without spaces, and creating the Blu-ray without a menu frame; I also tried with an MPEG-2 Blu-ray master. I tried to export a new master, but I can't seem to be able to change the audio bitrate.
    Can anyone please help?
    Thanks.

    Hi Stan, thanks for getting back.
    I tried to create a new master from Media Encoder but I can only export audio as PCM; I don't get a Dolby option. See pic below.
    I tried a new project in Encore and chose PCM instead of Dolby in the preferences menu, but I still got the same error. Should I try again limiting the bitrate to 15? I was on 30 before.
    Please help, I've already wasted 10 Blu-rays and this is getting really frustrating!

  • When burning a bluray I get this error message: Total bitrate is too high near time = 4.760000

    When burning a bluray I get this error message: Total bitrate is too high near time = 4.760000
    The video is encoded with Sonic CineVision as an AVCHD file plus one AC3 file.

    What can I do to avoid it?
    Well, about the only thing you can do to solve "bitrate too high" errors is to lower the bitrate.

  • Execution time too low

    I was trying to measure the execution time. The rate is 1 kS/s per channel and the number of samples to read is 100 per channel, inside a for loop of 10 iterations. The time should be around 1000 ms, but it's only 500-600 ms. And when I change the rate/number of samples, the execution time doesn't change... how could this happen?
    Solved!
    Go to Solution.
    Attachments:
    trial 6.vi ‏19 KB

    JudeLi wrote:
    I've tried to drag the Clear Task out of the loop, but every time I did it, it ended up in a broken wire saying that the source is a 1-D array of DAQmx events and the type of the sink is DAQmx event...
    You can right-click on the output tunnel and tell it to not auto index.  But even better would be to use shift registers just in case you tell that FOR loop to run 0 times.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    trial 6.vi ‏13 KB

  • Queries taking high execution time for zero count

    Hi,
    I have procedures executing as jobs.
    The procedures take a lot of time to execute when the cursor count is zero.
    What might be the reason for this?

    GreenHorn wrote:
    cursor 1 - select a.col1, b.col1, decode(c.col1,1,c.col1,2,c.col2,null) col3 from a,b,c
    where <join conditions>
    and nvl(c.col3,c.col4) = b.col3
    and c.col5 is null
    cursor 2 - select a.col1, b.col1, decode(c.col1,1,c.col1,2,c.col2,null) col3 from a,b,c
    where <join conditions>
    and a.timestamp > sysdate-1
    and nvl(c.col3,c.col4) = b.col3
    and c.col5 is not null
    cursor 2 first updates the values of col5 to null
    cursor 1 recalculates the value of col5 and updates it.
    c is a partitioned table and the partition code is also present in the where condition.
    One question: Since you say that the cursor is "updating", but the cursor is a query, does this mean that you're performing row-by-row processing in a loop?
    If yes, you might be better off with doing this in one or two plain SQL statements, which is probably much faster.
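    As a rough illustration (using only the generic table and column names from the posted cursors; "a.key = c.key" stands in for the real join conditions, which were not posted), the first step of that processing could become a single set-based statement along these lines:
    -- Sketch only: set c.col5 to NULL in one pass instead of row-by-row,
    -- roughly corresponding to what "cursor 2" drives in the loop.
    UPDATE c
       SET c.col5 = NULL
     WHERE c.col5 IS NOT NULL
       AND EXISTS (SELECT 1
                     FROM a, b
                    WHERE a.key = c.key              -- placeholder join condition
                      AND nvl(c.col3, c.col4) = b.col3
                      AND a.timestamp > sysdate - 1);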
    Another question: You say that after taking the described measures the performance was significantly better but became worse again after a couple of days, is this right?
    Can you provide more details, what "good" and "bad" performance means, e.g. in terms of execution time?
    You might want to check if the execution plans change between the "good" performance and the "bad" performance.
    If your table continuously gets data deleted and for some reason the deleted rows are not re-used, e.g. by using direct-path inserts to add new data, then your segment might become larger and larger and you would need to re-organize the table if you use regularly full table scans against it.
    The execution plan posted is not really helpful. Try to use DBMS_XPLAN.DISPLAY to get a proper output including the "Predicate Information" section below the plan and specify to which of the two statements the plan corresponds.
    Use the {noformat}{noformat} tags to format the plan output properly here in mono-space fonts.
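    For reference, the usual pattern looks like this (the statement shown is only a stand-in for the actual cursor query):
    -- Stand-in statement; substitute the real cursor 1 / cursor 2 query here.
    EXPLAIN PLAN FOR
      SELECT a.col1, b.col1
        FROM a, b, c
       WHERE nvl(c.col3, c.col4) = b.col3
         AND c.col5 IS NULL;
    -- Displays the plan, including the "Predicate Information" section.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);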
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
    Edited by: Randolf Geist on Dec 12, 2008 9:57 AM
    Note regarding execution plan added

  • How to improve the execution time of my VI?

    My VI does data processing for hundreds of files and takes more than 20 minutes to complete. The setup is: first I use the directory LIST function to list all the files in a directory into a string array. Then I index this string array into a for loop, in which each file is opened one at a time, and some other subVIs are called to do data analysis. Is there a way to improve my execution time? Maybe loading all files into memory at once? It would also be nice to be able to know which section of my VI takes the longest time. Thanks for any help.

    Bryan,
    If "read from spreadsheet file" is the main time hog, consider dropping it! It is a high-level, very multipurpose VI and thus carries a lot of baggage around with it. (you can double-click it and look at the "guts" )
    If the files come from a just executed "list files", you can assume the files all exist and you want to read them in one single swoop. All that extra detailed error checking for valid filenames is not needed and you never e.g. want it to popup a file dialog if a file goes missing, but simply skip it silently. If open generates an error, just skip to the next in line. Case closed.
    I would do a streamlined low level "open->read->close" for each and do the "spreadsheet string to array" in your own code, optimized to the exact format of your files. For example, notice that "read from spreadsheet file" converts everything to SGL, a waste of CPU if you later need to convert it to DBL for some signal processing anyway.
    Anything involving formatted text is not very efficient. Consider a direct binary file format for your data files, it will read MUCH faster and take up less disk space.
    LabVIEW Champion . Do more with less code and in less time .

  • When I print business cards, they sometimes print 1/4 too high but not always.

    I have my business cards in a Word document using an Avery template. I have used the same file for years. I never had a problem with my Canon printer, but after buying an HP Photosmart D110 I have had nothing but problems. When I print them, they print about 1/4 inch too high, ruining the cardstock and wasting ink. This doesn't happen all the time. Sometimes I print and they are perfect, but when I go back and print another, the problem returns. This is the same card stock I have used for years, and when it is inserted in the printer it doesn't matter if it is a single page or a stack of them. I have tried previewing the page before printing and the alignment is correct. I've tried printing to a PDF and then printing the PDF: same problem. I try it on the Canon and there is no problem. The only reason I bought this in the first place was that the cartridges were cheaper and I do a lot of color printing and scanning! I have just replaced the cartridges and did a head alignment, so that is not the problem. I have tried it with regular paper and the problem persists. I'm ready to drop-kick this thing and buy another Canon!
    This question was solved.
    View Solution.

    Did you ever find a solution for this problem? I have no problem printing the business cards on a Canon printer, or on a different HP printer I used to have, but the D110 is nothing but a pain in the you-know-what! I'm using Publisher and the Avery 8371 card stock. I searched for solutions for days and did everything possible, but still they're not printing correctly! I'm disappointed in this printer; every time I have a problem with something it always seems to be an HP product! Very frustrating, because I never had to mess with any settings in the other printers I had; I would just put the file in, change the quality and click the print button with no problems. I'm ready to throw this printer out the window. Anyway, thanks in advance and I hope somebody can help me with this issue!

  • iMac hard drive expectations, are mine too high?

    Hey everyone.
    I have a 27" iMac i5. It's around 22 months old now and it's being used as a personal PC at home.
    Lately I have been experiencing HDD-related issues: it hangs, load times are poor, and the OS takes ages to boot (Windows and OS X).
    I backed up all my data to external drives and did a clean OS restore to Lion. No help; the first boot with the fresh OS was just as bad as a boot full of clutter. Loading Windows on the Windows partition also sometimes gives me the "cannot find OS boot disk" error, which is scary.
    Anyway, doing research, these are all sure tell-tale signs of a failing HDD, and ironically last night on my Win7 partition I got the Windows error message saying "Windows detected a hard disk problem, back up all data and contact your manufacturer".
    Now, back on topic. Is this reasonable? A $2000+ computer suffering from a hardware fault like this when the HDD has never had more than 30% of its 1 TB used in its life? (Everything I keep backed up on externals.)
    I was honestly expecting a few more years out of this; **** even my 5-year-old MacBook has never had an issue in its life and it's been bagged around more places than I can think of in its lifetime. How does a computer that sits on a desk suffer a hardware failure such as this?
    From day one I had a faulty power board inside; it caused a loud electronic buzzing noise. I did not get this fixed for around 9 months as, to be honest, it sounded normal, but in the last month leading up to the replacement it got much louder over a few days. This was replaced by Apple and I was sent on my way, happy it was fixed. Could this have caused something to overload?
    I am a little bit bitter, which I am sure you can understand, but was my expectation of this hard drive just too high? Or am I within my rights to be a little upset by this?
    And before anyone asks, I was OS when my warranty ran out; my MacBook, iPad, 3x iPhones and Mac Mini are all covered by their own AppleCare. The only thing that isn't is the iMac... just my luck!
    Edit: the old link for the power board.
    https://discussions.apple.com/thread/2737002

    Thanks guys.
    Just got off the phone with AppleCare.
    They confirmed my drive is faulty. I'm just a little upset it died like this. I have never had a drive fail before. I'd rather my externals go than the internal.
    I spoke to Apple; they said the rep spoke to his Tech Specialist and they would not offer a free replacement. I agree it is 10 months out of warranty, but even the Apple rep I was speaking to said he would not be impressed either if it was his personal computer.
    Oh well. Time to look at HDD upgrade options! Not that I am totally impressed about it.

  • How to reduce execution time ?

    Hi friends...
    I have created a report to display vendor opening balances,
    total debit, total credit, total balance & closing balance for the given date range. It is working fine, but it takes too long to execute. How can I reduce the execution time?
    Please help me. It's a very urgent report...
    The coding is as below.....
    report  yfiin_rep_vendordetail no standard page heading.
    tables : bsik,bsak,lfb1,lfa1.
    type-pools : slis .
    *--TABLE STRUCTURE--
    types : begin of tt_bsik,
            bukrs type bukrs,
            lifnr type lifnr,
            budat type budat,
            augdt type augdt,
            dmbtr type dmbtr,
            wrbtr type wrbtr,
            shkzg type shkzg,
            hkont type hkont,
            bstat type bstat_d ,
            prctr type prctr,
            name1 type name1,
         end of tt_bsik,
         begin of tt_lfb1,
             lifnr type lifnr,
             mindk type mindk,
         end of tt_lfb1,
        begin of tt_lfa1,
            lifnr type lifnr,
            name1 type name1,
            ktokk type ktokk,
        end of tt_lfa1,
        begin of tt_opbal,
            bukrs type bukrs,
            lifnr type lifnr,
            gjahr type gjahr,
            belnr type belnr_d,
            budat type budat,
            bldat type bldat,
            waers type waers,
            dmbtr type dmbtr,
            wrbtr type wrbtr,
            shkzg type shkzg,
            blart type blart,
            monat type monat,
            hkont type hkont,
            bstat type bstat_d ,
            prctr type prctr,
            name1 type name1,
            tdr type  dmbtr,
            tcr type  dmbtr,
            tbal type  dmbtr,
          end of tt_opbal,
         begin of tt_bs ,
            bukrs type bukrs,
            lifnr type lifnr,
            name1 type name1,
            prctr type prctr,
            tbal type dmbtr,
            bala type dmbtr,
            balb type dmbtr,
            balc type dmbtr,
            bald type dmbtr,
            bale type dmbtr,
            gbal type dmbtr,
        end of tt_bs.
    ************WORK AREA DECLARATION *********************
    data :  gs_bsik type tt_bsik,
            gs_bsak type tt_bsik,
            gs_lfb1 type tt_lfb1,
            gs_lfa1 type tt_lfa1,
            gs_ageing  type tt_ageing,
            gs_bs type tt_bs,
            gs_opdisp type tt_bs,
            gs_final type tt_bsik,
            gs_opbal type tt_opbal,
            gs_opfinal type tt_opbal.
    ************INTERNAL TABLE DECLARATION*************
    data :  gt_bsik type standard table of tt_bsik,
            gt_bsak type standard table of tt_bsik,
            gt_lfb1 type standard table of tt_lfb1,
            gt_lfa1 type standard table of tt_lfa1,
            gt_ageing type standard table of tt_ageing,
            gt_bs type standard table of tt_bs,
            gt_opdisp type standard table of tt_bs,
            gt_final type standard table of tt_bsik,
            gt_opbal type standard table of tt_opbal,
            gt_opfinal type standard table of tt_opbal.
    *ALV DECLARATIONS *******************
    data : gs_fcat type slis_fieldcat_alv ,
           gt_fcat type slis_t_fieldcat_alv ,
           gs_sort type slis_sortinfo_alv,
           gs_fcats type slis_fieldcat_alv ,
           gt_fcats type slis_t_fieldcat_alv.
    **********global data declration***************
    data :   kb type dmbtr ,
              return like  bapireturn ,
              balancespgli like  bapi3008-bal_sglind,
              noteditems like  bapi3008-ntditms_rq,
              keybalance type table of  bapi3008_3 with header line,
             opbalance type p.
    *SELECTION SCREEN DECLARATIONS *********************
    selection-screen begin of block b1 with frame .
    select-options : so_bukrs for bsik-bukrs obligatory,
                     so_lifnr for bsik-lifnr,
                     so_hkont for bsik-hkont,
                     so_prctr for bsik-prctr ,
                     so_mindk for lfb1-mindk,
                     so_ktokk for lfa1-ktokk.
    selection-screen end of block b1.
    selection-screen : begin of block b1 with frame.
    parameters       : p_rb1 radiobutton group rad1 .
    select-options   : so_date for sy-datum .
    selection-screen : end of block b1.
    ********************************ASSIGNING ALV GRID
    ****field catalog for balance report
    gs_fcats-col_pos = 1.
    gs_fcats-fieldname = 'BUKRS'.
    gs_fcats-seltext_m =  text-001.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 2 .
    gs_fcats-fieldname = 'LIFNR'.
    gs_fcats-seltext_m = text-002.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 3.
    gs_fcats-fieldname = 'NAME1'.
    gs_fcats-seltext_m =  text-003.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 4.
    gs_fcats-fieldname = 'BALC'.
    gs_fcats-seltext_m =  text-016.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 5.
    gs_fcats-fieldname = 'BALA'.
    gs_fcats-seltext_m =  text-012.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 6.
    gs_fcats-fieldname = 'BALB'.
    gs_fcats-seltext_m =  text-013.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 7.
    gs_fcats-fieldname = 'TBAL'.
    gs_fcats-seltext_m =  text-014.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 8.
    gs_fcats-fieldname = 'GBAL'.
    gs_fcats-seltext_m =  text-015.
    append gs_fcats to gt_fcats .
    data : repid1 type sy-repid.
    repid1 = sy-repid.
    *INITIALIZATION EVENTS ******************************
    initialization.
    *Clearing the work area.
    clear gs_bsik.
    *Refreshing the internal tables.
    refresh gt_bsik.
    ******************START OF  SELECTION EVENTS **************************
    start-of-selection.
    *get data for balance report.
      perform sub_openbal.
      perform sub_openbal_display.
    *&      Form  sub_openbal
    *       text
    *  -->  p1        text
    *  <--  p2        text
    form sub_openbal .
      if   so_date-low > sy-datum or so_date-high > sy-datum .
          message i005(yfi02).
         leave screen.
    endif.
         select bukrs lifnr gjahr belnr budat bldat
           waers dmbtr wrbtr shkzg blart monat hkont prctr
           from bsik into table gt_opbal
           where bukrs in so_bukrs and lifnr in so_lifnr
           and hkont in so_hkont and prctr in so_prctr
           and budat in so_date .
        select bukrs lifnr gjahr belnr budat bldat
           waers dmbtr wrbtr shkzg blart monat hkont prctr
           from bsak appending table gt_opbal
           for all entries in gt_opbal
           where lifnr = gt_opbal-lifnr
           and budat in so_date .
    if sy-subrc <> 0.
      message i007(yfi02).
      leave screen.
      endif.
    select lifnr mindk from lfb1 into table gt_lfb1
      for all entries in gt_opbal
        where lifnr = gt_opbal-lifnr and mindk in so_mindk.
    select lifnr name1 ktokk from lfa1 into table gt_lfa1
      for all entries in gt_opbal
       where lifnr = gt_opbal-lifnr and ktokk in so_ktokk.
       loop at gt_opbal into gs_opbal .
         loop at gt_lfb1 into gs_lfb1 where lifnr = gs_opbal-lifnr.
           loop at gt_lfa1 into gs_lfa1 where lifnr = gs_opbal-lifnr.
            gs_opfinal-bukrs = gs_opbal-bukrs.
            gs_opfinal-lifnr = gs_opbal-lifnr.
            gs_opfinal-gjahr = gs_opbal-gjahr.
            gs_opfinal-belnr = gs_opbal-belnr.
            gs_opfinal-budat = gs_opbal-budat.
            gs_opfinal-bldat = gs_opbal-bldat.
            gs_opfinal-waers = gs_opbal-waers.
            gs_opfinal-dmbtr = gs_opbal-dmbtr.
            gs_opfinal-wrbtr = gs_opbal-wrbtr.
            gs_opfinal-shkzg = gs_opbal-shkzg.
            gs_opfinal-blart = gs_opbal-blart.
            gs_opfinal-monat = gs_opbal-monat.
            gs_opfinal-hkont = gs_opbal-hkont.
            gs_opfinal-prctr = gs_opbal-prctr.
            gs_opfinal-name1 = gs_lfa1-name1.
        if gs_opbal-shkzg    = 'H'.
            gs_opfinal-tcr   =  gs_opbal-dmbtr * -1.
            gs_opfinal-tdr   =  '000000'.
        else.
            gs_opfinal-tdr   =  gs_opbal-dmbtr.
            gs_opfinal-tcr   =  '000000'.
        endif.
           append gs_opfinal to gt_opfinal.
           endloop.
           endloop.
           endloop.
    sort gt_opfinal by bukrs lifnr prctr .
    so_date-low = so_date-low - 1 .
    loop at gt_opfinal into gs_opfinal.
    call function 'BAPI_AP_ACC_GETKEYDATEBALANCE'
      exporting
        companycode        = gs_opfinal-bukrs
        vendor             =  gs_opfinal-lifnr
        keydate            = so_date-low
       balancespgli        = ' '
       noteditems          = ' '
      importing
        return             = return
      tables
        keybalance         = keybalance.
    clear kb .
    loop at keybalance .
       kb = keybalance-lc_bal + kb .
    endloop.
          gs_opdisp-balc = kb.
          gs_opdisp-bukrs =  gs_opfinal-bukrs.
          gs_opdisp-lifnr =  gs_opfinal-lifnr.
          gs_opdisp-name1 =  gs_opfinal-name1.
        at new lifnr .
          sum .
          gs_opfinal-tbal =  gs_opfinal-tdr + gs_opfinal-tcr  .
          gs_opdisp-tbal = gs_opfinal-tbal.
          gs_opdisp-bala = gs_opfinal-tdr .
          gs_opdisp-balb = gs_opfinal-tcr .
          gs_opdisp-gbal = keybalance-lc_bal + gs_opfinal-tbal .
          append gs_opdisp to gt_opdisp.
        endat.
        clear gs_opdisp.
        clear keybalance .
      endloop.
      delete adjacent duplicates from gt_opdisp.
    endform.                    " sub_openbal
    *&      Form  sub_openbal_display
    *       text
    *  -->  p1        text
    *  <--  p2        text
    form sub_openbal_display .
    call function 'REUSE_ALV_GRID_DISPLAY'
        exporting
    *     I_INTERFACE_CHECK                 = ' '
    *     I_BYPASSING_BUFFER                = ' '
    *     I_BUFFER_ACTIVE                   = ' '
          i_callback_program              =  repid1
    *     I_CALLBACK_PF_STATUS_SET          = ' '
    *     I_CALLBACK_USER_COMMAND           = ' '
    *     I_CALLBACK_TOP_OF_PAGE            = ' '
    *     I_CALLBACK_HTML_TOP_OF_PAGE       = ' '
    *     I_CALLBACK_HTML_END_OF_LIST       = ' '
    *     I_STRUCTURE_NAME                  =
    *     I_BACKGROUND_ID                   = ' '
    *     I_GRID_TITLE                      =
    *     I_GRID_SETTINGS                   =
    *     IS_LAYOUT                         =
          it_fieldcat                     = gt_fcats
    *     IT_EXCLUDING                      =
    *     IT_SPECIAL_GROUPS                 =
    *     IT_SORT                           =
    *     IT_FILTER                         =
    *     IS_SEL_HIDE                       =
    *     I_DEFAULT                         = 'X'
    *     I_SAVE                            = 'X'
    *     IS_VARIANT                        =
    *     it_events                         =
    *     IT_EVENT_EXIT                     =
    *     IS_PRINT                          =
    *     IS_REPREP_ID                      =
    *     I_SCREEN_START_COLUMN             = 0
    *     I_SCREEN_START_LINE               = 0
    *     I_SCREEN_END_COLUMN               = 0
    *     I_SCREEN_END_LINE                 = 0
    *     IT_ALV_GRAPHICS                   =
    *     IT_HYPERLINK                      =
    *     IT_ADD_FIELDCAT                   =
    *     IT_EXCEPT_QINFO                   =
    *     I_HTML_HEIGHT_TOP                 =
    *     I_HTML_HEIGHT_END                 =
    *   IMPORTING
    *     E_EXIT_CAUSED_BY_CALLER           =
    *     ES_EXIT_CAUSED_BY_USER            =
         tables
           t_outtab                       = gt_opdisp
       exceptions
         program_error                     = 1
         others                            = 2.
      if sy-subrc <> 0.
        message id sy-msgid type sy-msgty number sy-msgno
                with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      endif.
    endform.                    " sub_openbal_display

    I think you are using the FOR ALL ENTRIES addition in almost all of your SELECT statements, but I didn't see any check before you use it.
    If you are using FOR ALL ENTRIES IN gt_opbal, make sure that gt_opbal has some records; otherwise the SELECT will read all records from the database tables.
    Check before using FOR ALL ENTRIES in the SELECT statement, something like this (the field and table names are just placeholders):
    if gt_opbal[] is not initial.
      select adfda adfadf afdadf from abcd into table itab
        for all entries in gt_opbal
        where a = gt_opbal-a.
    else.
      select abdf afad from abcd into table itab
        where a = 1
          and b = 2.
    endif.
    I didn't see anything wrong in your report, but this is a major time consumer when you don't have any records in the table you are using FOR ALL ENTRIES with.
