Exceedingly long time for a Procedure to return a Cursor

Hi All,
I have a package with a procedure that returns a REF CURSOR. The cursor is defined with dynamic SQL, and I recently changed the SQL to use bind variables to improve performance. When I debug, the OPEN runs quickly and the cursor variable looks fine, but when the "End" statement of the procedure is reached it waits a good 30 seconds before execution finishes. If I tell TOAD not to load the results into a grid, it finishes quickly. However, my .NET app using an OracleDataAdapter also takes 30-40 seconds to get the data. For some reason there is a delay in transferring the cursor variable back to the caller.
Main difference between the previous proc and the new one:
Old call: OPEN shoppingBagContents FOR v_sql;
New call: OPEN shoppingBagContents FOR v_sql USING a, b; (a and b are bound in the SQL as :a and :b)
shoppingBagContents is defined as an OUT ref cursor.
Any help / ideas are greatly appreciated.
Thanks
Running Oracle 10g
App: .NET 2.0 on XP

Opening a cursor does just that: it opens the cursor. The query behind the cursor is not executed until you attempt to fetch from it; that is where the processing time comes in.
Are you saying that processing the cursor (fetching it) takes longer after your changes than before?
Can you post an example of the code you changed?
For example, if you have dynamic SQL where a predicate never changes, you'd actually want to keep it as a literal (say ProcessFlag = 'Y', which will ALWAYS be part of the query).
The only things you want to replace with binds are the values that change (or are included / not included) across repeated executions, as in the sketch below.
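For example, a common pattern would look something like the following sketch. The table and column names here are made up for illustration; only the structure (constant predicate kept as a literal, changing values bound) is the point.

CREATE OR REPLACE PROCEDURE get_bag_contents (
    a                   IN  NUMBER,
    b                   IN  NUMBER,
    shoppingBagContents OUT SYS_REFCURSOR
) AS
    v_sql VARCHAR2(4000);
BEGIN
    -- The constant predicate stays a literal; only the changing values are bound.
    v_sql := 'SELECT * FROM shopping_bag_items
               WHERE process_flag = ''Y''
                 AND bag_id   = :a
                 AND owner_id = :b';

    -- OPEN parses the statement and binds the values; it normally returns quickly.
    OPEN shoppingBagContents FOR v_sql USING a, b;

    -- The real work (and the 30-second wait) happens when the client
    -- (TOAD's grid or the .NET OracleDataAdapter) actually fetches the rows.
END get_bag_contents;
/

If the fetch itself really did get slower after switching to binds, comparing the execution plans of the literal and bound versions would be a sensible next step, since the optimizer can choose a different plan once the values are no longer visible at parse time.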

Similar Messages

  • Takes long time for shutdown after adding to domain

    Hi,
    Workstation OS - Window 7
    Domain Controller OS - Window server 2008 R2 standard
    I have measured the following with a stopwatch:
    1) When the laptop is in a workgroup, it takes just 17 seconds to shut down.
    2) When I add the same laptop to the domain (corp.abc.com), it takes 1 minute and 22 seconds to shut down (it just shows the "shutting down" screen).
    Why has the shutdown time increased so much?
    Do you have any idea?
    Note: I have not made any changes to the laptop, nor added any software. I did the above testing because users started complaining that shutdown takes a long time after the laptop is joined to the domain. It is happening on all laptops running Windows 7.
    There are no logoff scripts / GPOs, and we don't have roaming profiles.
    Please advise.
    Thanks & Regards,
    Param
    www.paramgupta.blogspot.com

    Hi,
    To troubleshoot this issue, please install the Windows Performance Tools (WPT) Kit. The WPT Kit contains performance analysis tools and is designed for analysis of a wide range of performance problems, including application start
    times, boot issues, deferred procedure calls and interrupt activity (DPCs and ISRs), system responsiveness issues, application resource usage, and interrupt storms.
    To get the installer, you have to install the Windows 7 SDK.
    Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 SP1
    http://www.microsoft.com/en-us/download/details.aspx?id=3138
    For shutdown tracing:
    Run command:
    xbootmgr -trace shutdown -noPrepReboot -traceFlags BASE+CSWITCH+DRIVERS+POWER -resultPath C:\TEMP
    Collect logs and post them for further troubleshooting.
    For more information please refer to following MS articles:
    Long Shutdown Time on Windows 7 Ultimate x64
    http://social.technet.microsoft.com/Forums/en/w7itproperf/thread/11a42a93-efd2-4184-9ce8-bbc1438b7ea6
    Long shutdown time on Windows 7 64 bit laptop
    http://social.technet.microsoft.com/Forums/en-US/w7itproperf/thread/4440fc6e-c81e-440c-9183-9b7e176729d2
    Lawrence
    TechNet Community Support

  • Take long time for loading

    Hi Experts,
    One of my data targets (data coming from another ODS) is taking a long time to load. It normally takes less than 10 minutes, but today it has been running for the last 40 minutes...
    In the Status tab it shows:
    Job termination in source system
    Diagnosis
    The background job for data selection in the source system has been terminated. It is very likely that a short dump has been logged in the source system
    Procedure
    Read the job log in the source system. Additional information is displayed here.
    To access the job log, use the monitor wizard (step-by-step analysis)  or the menu path <LS>Environment -> Job Overview -> In Source System
    Error correction:
    Follow the instructions in the job log messages.
    Can anyone please help me solve this problem?
    Thanks in advance
    David

    Hi Experts,
    Thanks for your answers. My load failed while the data was going from one ODS to another ODS. Please find the job log below:
    Job started
    Step 001 started (program SBIE0001, variant &0000000007169, user ID RCREMOTE)
    Asynchronous transmission of info IDoc 2 in task 0001 (0 parallel tasks)
    DATASOURCE = 8ZPP_OP3
             Current Values for Selected Profile Parameters               *
    abap/heap_area_nondia......... 20006838008                             *
    abap/heap_area_total.......... 20006838008                             *
    abap/heaplimit................ 83886080                                *
    zcsa/installed_languages...... ED                                      *
    zcsa/system_language.......... E                                       *
    ztta/max_memreq_MB............ 2047                                    *
    ztta/roll_area................ 5000000                                 *
    ztta/roll_extension........... 4294967295                              *
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 1 in task 0002 (1 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:31:55, End = 06.09.2010 01:31:55
    Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 2 in task 0004 (2 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:00, End = 06.09.2010 01:32:00
    Asynchronous transmission of info IDoc 4 in task 0005 (2 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 3 in task 0006 (3 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:04, End = 06.09.2010 01:32:04
    Asynchronous transmission of info IDoc 5 in task 0007 (3 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 4 in task 0008 (4 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:08, End = 06.09.2010 01:32:08
    Asynchronous transmission of info IDoc 6 in task 0009 (4 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 5 in task 0010 (5 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:11, End = 06.09.2010 01:32:11
    Asynchronous transmission of info IDoc 7 in task 0011 (5 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Asynchronous send of data package 13 in task 0026 (6 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:01, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:44, End = 06.09.2010 01:32:45
    tRFC: Data Package = 8, TID = 0AEB465C00AE4C847CEA0070, Duration = 00:00:17,
    tRFC: Start = 06.09.2010 01:32:29, End = 06.09.2010 01:32:46
    Asynchronous transmission of info IDoc 15 in task 0027 (5 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 14 in task 0028 (6 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:48, End = 06.09.2010 01:32:48
    tRFC: Data Package = 9, TID = 0AEB465C00AE4C847CEF0071, Duration = 00:00:18,
    tRFC: Start = 06.09.2010 01:32:33, End = 06.09.2010 01:32:51
    Asynchronous transmission of info IDoc 16 in task 0029 (5 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 15 in task 0030 (6 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:52, End = 06.09.2010 01:32:52
    Asynchronous transmission of info IDoc 17 in task 0031 (6 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 16 in task 0032 (7 parallel tasks)
    tRFC: Data Package = 10, TID = 0AEB465C00684C847CF30070, Duration = 00:00:18,
    tRFC: Start = 06.09.2010 01:32:37, End = 06.09.2010 01:32:55
    tRFC: Data Package = 11, TID = 0AEB465C02E14C847CF70083, Duration = 00:00:17,
    tRFC: Start = 06.09.2010 01:32:42, End = 06.09.2010 01:32:59
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:32:56, End = 06.09.2010 01:32:56
    Asynchronous transmission of info IDoc 18 in task 0033 (5 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 17 in task 0034 (6 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:33:00, End = 06.09.2010 01:33:00
    tRFC: Data Package = 12, TID = 0AEB465C00AE4C847CFB0072, Duration = 00:00:16,
    tRFC: Start = 06.09.2010 01:32:46, End = 06.09.2010 01:33:02
    Asynchronous transmission of info IDoc 19 in task 0035 (5 parallel tasks)
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 100,000 records
    Result of customer enhancement: 100,000 records
    Asynchronous send of data package 18 in task 0036 (6 parallel tasks)
    tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    tRFC: Start = 06.09.2010 01:33:04, End = 06.09.2010 01:33:04
    Asynchronous transmission of info IDoc 20 in task 0037 (6 parallel tasks)
    ABAP/4 processor: DBIF_RSQL_SQL_ERROR
    Job cancelled
    Thanks
    David
    Edited by: david Rathod on Sep 6, 2010 12:04 PM

  • Impdp taking long time for only few MBs data...

    Hi All,
    I have a question about impdp. I have an expdp dump file of 47 MB. When I restore this dump using impdp it takes a very long time. Initially the table data finishes loading very fast, but the later ALTER FUNCTION/PROCEDURE/VIEW steps take almost 4 to 5 hours.
    I have no idea why it is taking so long. Earlier I could see that a DB link had failed with a "TNS name could not be resolved" error, so I created the DB link as well before running impdp, but the result is the same. Can anyone suggest what could cause it to take so long for only 47 MB of data?
    Note - Both the expdp and impdp database versions are 11.2.0.3.0. If I import the same dmp file into 11.2.0.1.0 it finishes in a few minutes.
    Thanks...

    Also Read
    Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) [ID 453895.1]
    DataPump Import (IMPDP) is Very Slow at Object/System/Role Grants, Default Roles [ID 1267951.1]

  • The simplest way for plsql procedure to return multiple rows

    Hi,
    What is the simplest way for a PL/SQL procedure to return multiple rows (records)? There are many ways to do it, but I am looking for a solution that is appropriate for PL/SQL beginners. Many solutions use cursors, cursor variables, collections and other constructs that look complex on the face of it. Is it somehow possible to achieve the same with less effort?
    A sample query would be: SELECT * FROM EMPLOYEES;
    I want to use the returned rows in APEX to build an APEX SQL (in that context PL/SQL) report.
    It would be enough to use just the SELECT * FROM EMPLOYEES query in APEX, but I want to use a PL/SQL procedure for that.
    Thank you!

    Hi,
    It depends :-).
    With "...that is appropriate for plsql beginners..." in mind... it still depends!
    The list of techniques (ref cursors, cursor variables, collections, arrays, explicit SQL) you referenced in your post can all be made to work. But...
    "Is it somehow possible to achieve the same with less effort?" Less effort: that needs to be defined (measured), especially in the context of PL/SQL beginners (who is a beginner?).
    What is the level of "programming experience"?
    What is the level of understanding of a "relational result set" as processed in Oracle?
    If you are looking for a Process_the_set_of_rows_in_APEX() kind of capability which "abstracts/hides" the relational database from developers working on a relational database, it may not be the best approach (at least strategically), because I believe it already is abstracted enough.
    I find a REF CURSOR most effective for such use, once the "beginner" has a basic understanding of processing a SQL result set.
    So in a nutshell, the techniques you already are familiar with are the tools available. I am not aware of any alternative tools (in pure Oracle) that will simplify / hide the basics from developers.
    vr,
    Sudhakar B.
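    As a minimal sketch of the REF CURSOR approach described above, assuming the EMPLOYEES table from the sample query (the procedure and parameter names are made up):
    CREATE OR REPLACE PROCEDURE get_employees (
        p_result OUT SYS_REFCURSOR  -- weakly typed cursor variable
    ) AS
    BEGIN
        -- Only opens the cursor; the caller (APEX or any other client) fetches the rows.
        OPEN p_result FOR
            SELECT * FROM employees;
    END get_employees;
    /
    For a classic APEX report region, though, it is usually simpler to put the SELECT itself (or a PL/SQL function body returning a SQL query) straight into the region source.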

  • Longer time for Material Availability check while creation of prd order.

    Hi guys,
    I am facing a weird problem while creating production orders through CO01.
    I enter the component and plant, and I am also using the forward scheduling option.
    For some reason, SAP is taking a long time over the material availability check when I hit the release button.
    Sometimes it takes more than an hour. It is happening with a few specific BOMs, and I have checked the master data but could hardly find a problem in it.
    Can someone suggest some tips?
    Thanks & Regards,
    Sashivardhan

    Hi,
    Please check the availability check control maintained for the components; it should be 01 or 02. Also check whether the issue storage location is maintained. You can maintain the issue storage location in the BOM on the Status/Lng text tab, under Production Storage Location.
    Hope this helps.
    Regards,
    Navin

  • Taking too long time for booting

    Hi there,
    I just bought myself an iPhone 5s. I am trying to get it started for the first time, but it keeps booting/loading for endless hours. What may have gone wrong? I would appreciate it if you could help me with this.


  • Takes a long time for Apple logo to show up on boot up

    My 13" MBP (early 2011) takes a long time for the Apple logo to show up... Is there a way to fix this? (It should show up within 2 seconds of the initial chime.)
    I have Win7 installed; could it be that? It might be detecting which partition to boot from.

    Check login items ...
    Remove all items from System Preferences > Users & Groups > Login Items
    Same for HD > Library > StartupItems
    Then restart your Mac to test.
    Not necessarily a Windows issue.
    ***   When you post for help, please state which OS X is installed.
    If you aren't sure, click About this Mac from your Apple menu 
    Troubleshooting advice can depend on that information.

  • Query taking long time for EXTRACTING the data more than 24 hours

    Hi ,
    The query is taking a very long time to EXTRACT the data - more than 24 hours. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please advise.
    SQL> explain plan for
      select a.account_id, round(a.account_balance,2) account_balance,
             nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
             to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
             to_char(nvl(i.payment_due_date, to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
             ah.current_balance - ah.previous_balance amount,
             decode(ah.invoice_id, null, 'A', 'I') transaction_type
        from account a, account_history ah, invoice i
       where a.account_id = ah.account_id
         and a.account_type_id = 1000002
         and round(a.account_balance,2) > 0
         and (ah.invoice_id is not null or ah.adjustment_id is not null)
         and ah.CURRENT_BALANCE > ah.previous_balance
         and ah.invoice_id = i.invoice_id(+)
         and a.account_balance > 0
       order by a.account_id, ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index Details:
    SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
    2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
    order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so that we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;
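    After the composite indexes are in place, it would be worth re-checking which plan the optimizer picks before running the full extract. A sketch using the same tools as above (the select list is abbreviated here; the predicates are what drive the plan):
    explain plan for
      select a.account_id, round(a.account_balance, 2) account_balance
        from account a, account_history ah, invoice i
       where a.account_id = ah.account_id
         and a.account_type_id = 1000002
         and (ah.invoice_id is not null or ah.adjustment_id is not null)
         and ah.current_balance > ah.previous_balance
         and ah.invoice_id = i.invoice_id(+)
         and a.account_balance >= 0.005
       order by a.account_id, ah.effective_start_date desc;
    select * from table(dbms_xplan.display);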

  • My folders take forever to open, and then the docs within take a long time for the icon to pop up; the same thing happens when moving items from the desktop to a folder, and also when emptying the trash

    My folders take forever to open, and then the docs within take a long time for the icon to pop up. The same thing happens when moving items from the desktop to a folder, and also when emptying the trash.

    Don't know if that would be a failing hard drive, but it may be that you are out of available space. How much hard drive space do you have available? Please highlight the Macintosh HD icon and then press Command and I for a get info window. Once open, please copy and post the following:
    Capacity:
    Used:
    Available:
    Mac OS requires a minimum of 10 - 15% of total hard drive space available and empty at all times in order to operate properly.

  • My laptop is extremely slow. It stutters. It kind of sounds like there is a fan inside that is constantly stopping and starting. And it takes a really long time for it to start up. What's wrong with it?

    My laptop is extremely slow. It stutters. It kind of sounds like there is a fan inside that is constantly stopping and starting. And it takes a really long time for it to start up. What's wrong with it?

    Do you have current backups?
    Those symptoms could indicate a failing Hard drive. If it dies, all your documents go with it unless you have Backups.
    I have had very good luck with physical and battery problem diagnosis at the Genius Bar. Those guys put their hands (and their ears) to these machines, all day every day, and they know immediately what all those sounds mean.
    Your appointment for an evaluation is FREE, in warranty or out.

  • HT4759 Hello.. I've been subscribed to iCloud for $20 per year and I found it useless for many reasons: I cannot disconnect my mobile during the upload process and it takes a long time to upload my data.. It's not a reliable system, that's why

    Hello.. I've been subscribed to iCloud for $20 per year and I found it useless for many reasons: I cannot disconnect my mobile during the upload process and it takes a long time to upload my data.. It's not a reliable system, and that's why I need to deactivate the storage service and take my money back.. Thanks

    The "issues" you've raised are nothing to do with the iCloud service.
    No service that uploads data allows you to disconnect the device you are uploading from while uploading data. Doing so would prevent the upload from completing. It is a basic requirement for any uploading service that you remain connected to it for uploading to be possible.
    The time it takes to upload data to iCloud is entirely dependent on how fast your Internet connection is, and how much data you are uploading. Both of these things are completely out of Apple's control. Whichever upload service you use will be affected by the speed of your Internet connection.

  • Bex Reports takes long time for filtering

    Hi,
    We went live last December, and already our inventory cube contains some 15 million records and our sales cube contains 12 million records.
    Is there any specific limit to the number of records? While filtering in the inventory or sales reports it takes a very long time.
    Is there any alternative, or should we delete some of the data from the cube?
    Filtering on any value takes longer than running the query itself.
    Please help...
    Regards,
    viren.

    Hi Viren,
    A cube can perform well even at 100 million records with some performance tuning, so I really doubt that 10-15 million records alone is why your cube is taking so long.
    Do a performance analysis and check whether aggregates will be helpful or not.
    Check the below link for how to do a performance analysis.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/d9fd84ad-0701-0010-d9a5-ba726caa585d
    Hope it helps.
    Thx,
    Soumya

  • Middleware- Taking long time for generation of Runtime objects- SMOGTOTAL

    Hi Experts,
    I am doing middleware settings for connecting CRM 2007 with R/3 4.7.
    When I generate all the required objects (replication objects, publications, ...) using transaction code SMOGTOTAL, the system takes a very long time to generate them. Generally it takes 4 to 6 hours, but in our case it has already taken more than 36 hours and is still running.
    Can anybody tell me what I need to do to make the generation process faster?
    Regards
    Nadh

    What I read in the best practice:
    It is not required for a new installation.
    Typically this activity has already been executed during the system installation or upgrade.
    Use transaction SMOGLASTLOG  to check if an initial generation has already been executed. In this case you can skip this activity.
    I checked transaction SMOGLASTLOG, and in our case the initial generation had not yet been executed, so I couldn't continue with the next steps either.
    That's why I started the job; it finally finished after 104 hours.
    Thanks for your fast reply.
    Jasper.

  • Program SAPLSBAL_DB taking long time for BALHDR table entries

    Hi Guys,
    I am running a Z program in both the Quality and Production systems which uploads data from the desktop.
    In the Quality system the Z program uploads the data successfully, but in the Production system it takes a very long time and sometimes even times out.
    As per the trace analysis, program SAPLSBAL_DB is taking a long time on BALHDR table entries.
    Can anybody provide me any suggestion.
    Regards,
    Shyamal.

    These are QA screenshots where there is no issue, but we are seeing very long times in CRP.
    Regards,
    Shyamal
