SQL takes different times to run on different terminals

My SQL query runs fine on my system but takes too long to execute on
another terminal at the client location.
Can anybody please tell me the possible causes, or where I should
look to resolve this problem?
Thank you.

My SQL query runs fine on my system but takes too long to execute on another terminal at the client location.
So it runs fast on the server and slow on the client machine, right?
Check whether there is a network bottleneck.
Nicolas.
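Nicolas's point can be quantified: when the query itself is fast, per-fetch network round trips often dominate on a remote terminal. A rough sketch of the arithmetic (all latency and row-count figures below are invented for illustration; measure your own with ping or tnsping, and check the client's fetch/array size setting):

```python
# Estimate how fetch array size interacts with network latency.
# All figures are hypothetical; measure your own environment.

def transfer_time_s(rows, arraysize, latency_s):
    """Seconds spent on round-trip latency alone when fetching
    `rows` rows in batches of `arraysize`."""
    round_trips = -(-rows // arraysize)  # ceiling division
    return round_trips * latency_s

rows = 10_000
lan_latency = 0.0005   # 0.5 ms on the server's LAN
wan_latency = 0.050    # 50 ms to a remote client site

for arraysize in (1, 10, 100):
    lan = transfer_time_s(rows, arraysize, lan_latency)
    wan = transfer_time_s(rows, arraysize, wan_latency)
    print(f"arraysize={arraysize:>3}: LAN {lan:7.2f}s vs WAN {wan:7.2f}s")
```

With arraysize=1, fetching 10,000 rows over a 50 ms WAN link spends 500 seconds in round-trip latency alone, while the same fetch on a 0.5 ms LAN spends only 5 seconds; raising the array size to 100 divides both by 100. This is why identical SQL can feel instant on the server and crawl at a client site.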

Similar Messages

  • The 0CO_OM_OPA_6 IP in the process chains takes a long time to run

    Hi experts,
    The 0CO_OM_OPA_6 IP in the process chains takes a long time to run, around 5 hours, in production.
    I have checked note 382329:
    -> Indexes 1 and 4 are active.
    -> Index 4 did not exist in the database ("Index does not exist in database system ORACLE"). I assigned it to "Indexes on all database systems" and ran the delta load in the development system, but I guess there is not much data in dev; it took 2.5 hours to run, as it did earlier, so I didn't find much difference in performance.
    As per Note 549552 - CO line item extractors: performance, I have checked the table BWOM_SETTINGS; these are the settings in the ECC system:
    -> OLTPSOURCE - is blank
       PARAM_NAME - OBJSELSIZE
       PARAM_VALUE - is blank
    -> OLTPSOURCE - is blank
       PARAM_NAME - NOTSSELECT
       PARAM_VALUE - is blank
    -> OLTPSOURCE - 0CO_OM_OPA_6
       PARAM_NAME - NOBLOCKING
       PARAM_VALUE - is blank
    Could you please check whether any other settings need to be made?
    Also, the IP has a selection criterion on FISCALYEAR/PERIOD from 2004-2099, and an init was done for the same period; as a result it is difficult for me to load a single year.
    Please suggest.

    The problem was that index 4 was not active at the database level. The SAP team recommended activating it in SE14; however, while doing so we faced a few issues. SE14 is a very sensitive transaction and should be handled carefully: the index should be activated, not created.
    OBJSELSIZE in the table BWOM_SETTINGS has to be marked 'X' to improve performance, and index 4 should also be activated at the ABAP level, i.e. in table COEP -> INDEXES -> INDEX 4 -> select "Index on all database systems" in place of "No database index". Once it is activated at the ABAP level, you can activate the same index at the database level.
    Be very careful when you execute this in SE14; it is best to use DB02 for the same task, as Basis tends to make fewer mistakes there.
    Thanks, hope this helps.

  • Auto message restart job takes a long time to run

    Dear all,
    I have configured the auto message restart job in SDOE_BG_JOB_MONITOR, but it takes a long time to run.
    I executed the report and found that it is fetching records from SMMW_MSG_HDR.
    The table currently holds about 6.7 million (67 lakh) records.
    It is taking a lot of time to read data from this table.
    Is there any report or transaction code for clearing data from the table?
    I need your valuable help to resolve this issue.
    Regards
    Lakshman Balanagu

    Hi,
    If you are using an Oracle database, you may need to run the table statistics report (RSOANARA) to update the table and index statistics. The system administrator should be able to do this.
    Regards,
    Vikas
    Edited by: Vikas Lamba on Aug 3, 2010 1:20 PM
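On the question of clearing old data: whatever archiving or housekeeping report applies to SMMW_MSG_HDR (check the relevant SAP notes; no specific one is named here), purging a very large table is usually done in batches, so each delete commits a bounded amount of work and does not blow out undo/rollback. A minimal sketch of the batching pattern, using SQLite in place of the real database (table and column names are invented):

```python
import sqlite3

# Demonstrate batched deletion of old rows, committing per batch,
# so no single transaction has to cover millions of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msg_hdr (id INTEGER PRIMARY KEY, created INTEGER)")
conn.executemany("INSERT INTO msg_hdr VALUES (?, ?)",
                 [(i, i % 100) for i in range(10_000)])
conn.commit()

CUTOFF = 50     # purge rows "older" than this
BATCH = 1_000   # rows per delete/commit cycle

while True:
    cur = conn.execute(
        "DELETE FROM msg_hdr WHERE rowid IN "
        "(SELECT rowid FROM msg_hdr WHERE created < ? LIMIT ?)",
        (CUTOFF, BATCH))
    conn.commit()
    if cur.rowcount == 0:
        break

remaining = conn.execute("SELECT COUNT(*) FROM msg_hdr").fetchone()[0]
print(remaining)  # only rows with created >= CUTOFF survive
```

Each cycle deletes at most BATCH rows and commits, so locks and undo stay small; the loop exits when a pass deletes nothing.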

  • SQL takes a long time

    Dear all,
    The following SQL takes more than 2 minutes to execute:
    select *
    from (select historyent0_.TRANSACTION_ID as col_0_0_,
                 historyent0_.TRANSACTION_HISTORY_ID as col_1_0_
          from TRANSACTION_HISTORY historyent0_
          where historyent0_.TRANSACTION_HISTORY_ID in
                  (select max(historyent1_.TRANSACTION_HISTORY_ID)
                   from TRANSACTION_HISTORY historyent1_
                   group by historyent1_.TRANSACTION_ID)
            and historyent0_.TRANSACTION_STATE_ID < 4
            and historyent0_.USER_NAME <> :1)
    where rownum <= :2;
    I have created an index on (TRANSACTION_HISTORY_ID, TRANSACTION_ID).
    The Plan is :
    SELECT STATEMENT  ALL_ROWSCost: 107,479  Bytes: 227.286.444  Cardinality: 5.411.582
      6 COUNT STOPKEY
        5 HASH JOIN  Cost: 107,479  Bytes: 227.286.444  Cardinality: 5.411.582
          3 VIEW VIEW SYS.VW_NSO_1 Cost: 45,87  Bytes: 76.148.033  Cardinality: 5.857.541
            2 HASH GROUP BY  Cost: 45,87  Bytes: 70.290.492  Cardinality: 5.857.541
              1 INDEX FAST FULL SCAN INDEX NIBC_FOP_AED.TEST_TRAN_HIST_IDX Cost: 8,756  Bytes: 140.464.128  Cardinality: 11.705.344
          4 TABLE ACCESS FULL TABLE NIBC_FOP_AED.TRANSACTION_HISTORY Cost: 43,995  Bytes: 156.935.878  Cardinality: 5.411.582
    Can someone please suggest?

    You can try this and see if it helps.
    The issue is that you are getting millions of rows in return and only then applying WHERE ROWNUM < ..., so the query has to run in full in any case.
    Try the FIRST_ROWS hint if you care about getting immediate results.
    Try forcing a full table scan.
    Convert your ROWNUM < :xyz into an actual column filter on the refined rows.
    Cheers
    www.oraclefusions.com
    Please visit my site for free Oracle performance tuning tools.
    The only real-time server-side SQL sniffer tool developed: http://www.oraclefusions.com/applications.html#sniffer
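Following up on the FIRST_ROWS suggestion: the "newest history row per transaction" shape can also be written as a single window-function pass (ROW_NUMBER per TRANSACTION_ID) instead of a GROUP BY subquery joined back to the table, which in some cases lets the database stop earlier. A runnable sketch of the rewrite on SQLite with invented sample data (in Oracle the same SELECT would be paired with ROWNUM or FETCH FIRST and the hint):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transaction_history (
    transaction_id INTEGER, transaction_history_id INTEGER,
    transaction_state_id INTEGER, user_name TEXT)""")
conn.executemany(
    "INSERT INTO transaction_history VALUES (?, ?, ?, ?)",
    [(1, 10, 1, 'alice'), (1, 11, 2, 'bob'),
     (2, 20, 3, 'carol'), (2, 21, 5, 'bob'),
     (3, 30, 1, 'dave')])

# One pass: rank rows per transaction, keep the newest, then filter.
rows = conn.execute("""
    SELECT transaction_id, transaction_history_id
    FROM (SELECT transaction_id, transaction_history_id,
                 transaction_state_id, user_name,
                 ROW_NUMBER() OVER (PARTITION BY transaction_id
                                    ORDER BY transaction_history_id DESC) rn
          FROM transaction_history)
    WHERE rn = 1 AND transaction_state_id < 4 AND user_name <> :u
    LIMIT :n""", {"u": "system", "n": 10}).fetchall()
print(rows)
```

Here `rows` holds the latest history row for each transaction that passes the state and user filters; transaction 2 is dropped because its newest row has state 5.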

  • Process chain takes a long time to run

    Hello All,
    We have a process chain which loads data for texts, attributes etc.
    On certain days the process chain runs extremely fast and maybe takes a couple of hours; on other days the same process chain may take 24 hours to run. It appears to get stuck when extracting the data from our source system.
    Is there any reason for these strange performance issues, and does anybody else have the same problem? Is there anything that can be done about it?
    Thanks,
    Nick.

    Hi Nick,
    This error normally occurs whenever BW encounters errors it is not able to classify. There could be multiple reasons for it:
    Whenever we load master data for the first time, the system creates SIDs. If the system is unable to create SIDs for the records in the data packet, we can get this error message.
    If the indexes of the cube are not deleted, it may happen that the system gives the caller 70 error.
    Whenever we try to load transactional data that has master data as one of its characteristics and the value does not exist in the master data table, we get this error: the system can have difficulty creating SIDs for the master data while also loading the transactional data.
    If an ODS activation is taking place while another ODS activation is running in parallel, the system may classify the error as caller 70, since no processes were free for that ODS activation.
    It also occurs whenever there is a read/write on the active data table of an ODS. For example, if activation is happening for an ODS and data is being loaded to the same ODS at the same time, the system may classify the error as caller 70.
    It is a system error which can be seen under the "Status" tab in the job overview.
    Cheers
    Raj

  • How to tune this SQL (takes a long time to come up with results)

    Dear all,
    I have some SQL which takes a long time; can anyone help me to tune it? Thank you.
    SELECT SUM (n_amount)
    FROM (SELECT DECODE (v_payment_type,
                         'D', n_amount,
                         'C', -n_amount) n_amount,
                 v_vou_no
          FROM vouch_det a, temp_global_temp b
          WHERE a.v_vou_no = TO_CHAR (b.n_column2)
            AND b.n_column1 = :b5
            AND b.v_column1 IN (:b4, :b3)
            AND v_desc IN (SELECT v_trans_source_code
                           FROM benefit_trans_source
                           WHERE v_income_tax_app = :b6)
            AND v_lob_code = DECODE (:b1, :b2, v_lob_code, :b1)
          UNION ALL
          SELECT DECODE (v_payment_type,
                         'D', n_amount,
                         'C', -n_amount) * -1 AS n_amount,
                 v_vou_no
          FROM vouch_details a, temp_global_temp b
          WHERE a.v_vou_no = TO_CHAR (b.n_column2)
            AND b.n_column1 = :b5
            AND b.v_column1 IN (:b12, :b11, :b10, :b9, :b8, :b7)
            AND v_desc IN (SELECT v_trans_source_code
                           FROM benefit_trans_source
                           WHERE income_tax_app = :b6)
            AND v_lob_code = DECODE (:b1, :b2, v_lob_code, :b1));
    Thank you.

    Thanks a lot.
    I changed the SQL and it works fine, but it slows down my main query. Actually, my main query calls a function which does the sum.
    Here is the query:
    select A.*
    from (SELECT a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no, a.n_cust_ref_no,
                 a.v_agent_type, a.v_company_code, a.v_company_branch, a.v_it_no,
                 bfn_get_agent_name(a.n_agent_no) agentname,
                 PKG_AGE__TAX.GET_TAX_AMT(:P_FROM_DATE, :P_TO_DATE, :P_LOB_CODE, A.N_AGENT_NO) comm,
                 c.v_ird_region
          FROM agent_master a, agent_lob b, agency_region c
          WHERE a.n_agent_no = b.n_agent_no
            AND a.v_agency_region = c.v_agency_region
            AND :p_lob_code = DECODE(:p_lob_code, 'ALL', 'ALL', b.v_line_of_business)
            AND :p_channel_no = DECODE(:p_channel_no, 1000, 1000, a.n_channel_no)
            AND :p_agency_group = DECODE(:p_agency_group, 'ALL', 'ALL', c.v_ird_region)
          GROUP BY a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no, a.n_cust_ref_no,
                   a.v_agent_type, a.v_company_code, a.v_company_branch, a.v_it_no,
                   bfn_get_agent_name(a.n_agent_no),
                   BPG_AGENCY_GEN_ACL_TAX.BFN_GET_TAX_AMOUNT(:P_FROM_DATE, :P_TO_DATE, :P_LOB_CODE, A.N_AGENT_NO),
                   c.v_ird_region
          ORDER BY c.v_ird_region, a.v_agent_code DESC) A
    WHERE (COMM < :P_VAL_IND OR COMM >= :P_VAL_IND1);
    Any idea how to make this faster?
    Thank you.
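One likely reason the main query is slow is that the tax function is executed once per candidate row (and it appears again in the GROUP BY, possibly doubling the calls). If many rows repeat the same arguments, caching the result helps; in Oracle that is what DETERMINISTIC or RESULT_CACHE on the function is for. The effect can be illustrated with a generic memoization sketch in Python (the function body, argument values, and row counts are invented):

```python
from functools import lru_cache

calls = 0

def get_tax_amt(from_date, to_date, lob_code, agent_no):
    """Stand-in for an expensive per-row PL/SQL function."""
    global calls
    calls += 1
    return agent_no * 2  # dummy computation

# Memoize: repeated argument tuples hit the cache instead of the body.
cached_get_tax_amt = lru_cache(maxsize=None)(get_tax_amt)

# 10 agents, each appearing on 1,000 rows (like a join fan-out):
rows = [("2008-01-01", "2008-12-31", "ALL", agent)
        for agent in range(10) for _ in range(1000)]

results = [cached_get_tax_amt(*r) for r in rows]
print(calls)         # 10 distinct argument tuples -> 10 real calls
print(len(results))  # 10000 rows served
```

Only 10 distinct argument tuples exist, so the expensive body runs 10 times instead of 10,000; the same idea applies when Oracle caches function results per input.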

  • NAC agent takes a long time to run

    The Cisco NAC agent takes a long time to pop up or run on a Windows 7 machine.
    The client machine is Windows 7, running NAC agent 4.9.0.42, against ISE 1.1.1.
    Any ideas how to reduce the NAC agent timing?

    Hi Tariq,
    I'm facing the same issue with ISE 1.1.1 (268) and agent 4.9.0.47 for Windows XP clients. I have already configured "yes" to disable the L3 Swiss delay and reduced the HTTP discovery timer from 30 to 05 seconds, but clients still take approximately 2.5 minutes for the agent to pop up and finish posture discovery.
    Can you please advise whether this is the minimum time, or what the minimum time is and which parameters to set to minimize the time for agent popup and posture discovery?
    Is there any option to run this in the background?
    Thanks in advance.

  • Oracle9i reports take longer when running on the web

    Hi,
    I have developed a few reports in Oracle9i and I am trying to run them on the web. Running a report through Reports Builder takes less time compared to running the same report on the web using web.show_document. This also depends on the file size. If my report file size (.jsp file) is less than 100KB, it takes 1 minute to show the parameter form and another 1 minute to show the report output. If my file size is around 190KB, the system takes at least 15 minutes to show the parameter form, and another 10 to 15 minutes to show the report output. I don't understand why the system takes so long to show the parameter form.
    I have a similar problem when opening the file in Reports Builder: if my file size is more than 150KB, it takes more than 15 minutes to open the file.
    Could anyone please help me with this?
    Thanks, Radha

    This problem exists only with .jsp reports. I saved the reports in .rdf format and they now run faster on the web. Opening a .jsp report takes a long time (a 600KB file takes at least 2 hours), but the same report in .rdf format opens in Reports Builder in a few seconds.

  • Using a different TIME range that is different from the default TIME dim

    Hi all - this is my first post - so please be gentle!
    I am busy writing an EVDRE report on an application for which I need to use a different time range than the current overall time dimension for all our other applications.
    Currently the overall time dimension is set for MAR to FEB (financial year). I now need to build a report for OCT to SEP reflecting the YTD values.
    The report also calls for comparing variable A against variable B over the extent of this period (OCT to SEP, YTD), so in essence I will not be displaying any PERIODIC values, but rather just the YTD values.
    How can I do this without having to resort to manual entry of each month's TIME dimension details, while still ensuring the report changes dynamically when I do an EXPAND?
    I am using BPC 5.1 SP3 on Microsoft Office 2003.
    Kind regards
    Pieter

    Well, without seeing the report and the setup, my suggestion is to leverage the EVTIM function. If your columns are static but the user selects a period for the report, you should be able to set up the function to build the 2 sets of 3 months. The EVTIM function simply references another time value and adds up or down at a specific level as a member for use in the column (or that is where I would use it).
    Does that make sense?

  • Report takes more time when running on live

    The report takes more time to open on live. When running it manually, connected to live, the first page generates within 4 seconds, but when I click to the next page it takes 3-4 seconds.
    There are only two formula columns.
    Please advise me how to make it run smoothly. Waiting for your response.
    Regards

    May I know which version you are using,
    and how many records it has to fetch?

  • Script takes a long time to run

    Hi Friends,
    We have a leave record in the following manner:
    name date leaves
    ABC 01-oct-08 1
    ABC 02-oct-08 1
    ABC 03-oct-08 1
    ABC 04-oct-08 1
    ABC 05-oct-08 1
    ABC 10-oct-08 1
    ABC 18-oct-08 1
    ABC 19-oct-08 1
    ABC 20-oct-08 1
    ABC 25-oct-08 1
    ABC 26-oct-08 1
    ABC 27-oct-08 1
    ABC 28-oct-08 1
    and we need an output in the following manner:
    output
    Name FromDate ToDate Leaves
    ABC 01-oct-08 05-oct-08 5
    ABC 10-oct-08 10-oct-08 1
    ABC 18-oct-08 20-oct-08 3
    ABC 25-oct-08 28-oct-08 4
    The code for the above mentioned logic is as follows:
    =======================================================
    =======================================================
    procedure KPM_AB_LWP_PROC(p_err_buf OUT VARCHAR2, p_ret_code OUT NUMBER) is
      diff          number;
      prev_date     date;
      p_prev_date   date;
      first_date    date;
      last_date     date;
      dt            date;
      total_leaves  number := 0;
      ecode         varchar2(20);
      ename         varchar2(240);

      cursor ab_lwp_cur(process_from_date date, process_to_date date, e_code varchar2) is
        select ab.EMPCODE,
               ab.RECORD_DATE,
               ab.FIRST_HALF,
               ab.SECOND_HALF,
               (em.last_name || ', ' || em.TITLE || ' ' || em.FIRST_NAME || em.MIDDLE_NAME) EMP_NAME
        from kpm_hr_absent_record ab, kpm_hr_emp_mst em
        where ab.EMPCODE = em.EMPCODE
          and (ab.FIRST_HALF in ('AB','LWP') or ab.SECOND_HALF in ('AB','LWP'))
          and ab.EMPCODE like e_code
          and ab.record_date between process_from_date and process_to_date;

      cursor active_emp is
        select empcode
        from kpm_hr_emp_mst
        where status = 'Active'
        order by empcode;
    begin
      for emp in active_emp loop
        begin
          select min(ab.RECORD_DATE), max(ab.RECORD_DATE)
          into prev_date, last_date
          from kpm_hr_absent_record ab
          where (ab.FIRST_HALF in ('AB','LWP') or ab.SECOND_HALF in ('AB','LWP'))
            and ab.EMPCODE like emp.empcode;
        exception
          when others then
            prev_date := null;
            last_date := null;
        end;
        dt := prev_date;
        FND_FILE.PUT_LINE(FND_FILE.OUTPUT, 'Employee' || chr(9) || chr(9) || 'Name' || chr(9) || chr(9) || 'From Date' || chr(9) || chr(9) || 'To Date' || chr(9) || chr(9) || 'Total Leaves');
        FND_FILE.PUT_LINE(FND_FILE.OUTPUT, '---------' || chr(9) || chr(9) || '-----' || chr(9) || chr(9) || '----------' || chr(9) || chr(9) || '---------' || chr(9) || chr(9) || '-------------');
        while dt <= last_date loop
          first_date := dt;
          total_leaves := 0;
          for m in ab_lwp_cur(prev_date, last_date, emp.empcode) loop
            ecode := m.empcode;
            ename := m.emp_name;
            diff := m.record_date - prev_date;
            if diff = 0 then
              if m.first_half in ('AB','LWP') then
                total_leaves := total_leaves + 0.5;
              end if;
              if m.second_half in ('AB','LWP') then
                total_leaves := total_leaves + 0.5;
              end if;
              prev_date := prev_date + 1;
            else
              prev_date := m.record_date;
              goto print_leave;
            end if;
            p_prev_date := prev_date - 1;
          end loop;
          <<print_leave>>
          if total_leaves > 0 then
            FND_FILE.PUT_LINE(FND_FILE.OUTPUT, ecode || chr(9) || chr(9) || ename || chr(9) || chr(9) || first_date || chr(9) || chr(9) || p_prev_date || chr(9) || chr(9) || total_leaves);
          end if;
          dt := prev_date;
        end loop;
      end loop;
    exception
      when others then
        FND_FILE.PUT_LINE(FND_FILE.LOG, 'Error: ' || sqlerrm);
    end KPM_AB_LWP_PROC;
    =======================================================
    =======================================================
    The problem is that this code takes about 24 hours to run, which is not acceptable.
    Kindly suggest some other technique to implement the same logic.
    For your reference, the KPM_HR_ABSENT_RECORD table has about 375,000 records in it.
    We have also created some indexes on certain columns, as recommended by the explain plans.
    Thanks in advance
    Ankur

    with t as (select 'ABC' as nm, to_date('01-oct-08','dd-mon-yy') as dt, 1 as leaves from dual union all
               select 'ABC', to_date('02-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('03-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('04-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('05-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('10-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('18-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('19-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('20-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('25-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('26-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('27-oct-08','dd-mon-yy'), 1 from dual union all
               select 'ABC', to_date('28-oct-08','dd-mon-yy'), 1 from dual)
    -- end of sample data
    select nm, min(dt), max(dt), sum(leaves)
    from (select nm, dt, leaves,
                 max(group_id) over (partition by nm order by dt) group_id
          from (select nm, dt, leaves,
                       case when trunc(dt) - 1 != lag(trunc(dt), 1, trunc(dt))
                                                    over (partition by nm order by dt)
                            then rownum
                       end group_id
                from t))
    group by nm, group_id
    order by 1, 2;
    Hope this helps.
    @Blushadow, thanks very much for the test data.
    Regards
    Raj
    P.S : For more information check this link
    http://www.oracle.com/technology/oramag/oracle/04-mar/o24asktom.html
    and search for "Analytics to the Rescue (Again)" in that page.
    Edited by: R.Subramanian on Oct 22, 2008 4:52 AM
    Changed count function to Sum function
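The start-of-group technique in Raj's query is portable to any database with window functions. Here is a runnable sketch of the same consecutive-date grouping on SQLite (same sample data, ISO date strings instead of Oracle DATEs): LAG marks each row whose date is not one day after its predecessor as a group start, and a running MAX propagates that marker to the rest of the group.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leave_rec (nm TEXT, dt TEXT, leaves INTEGER)")
days = ["01","02","03","04","05","10","18","19","20","25","26","27","28"]
conn.executemany("INSERT INTO leave_rec VALUES ('ABC', ?, 1)",
                 [(f"2008-10-{d}",) for d in days])

rows = conn.execute("""
    SELECT nm, MIN(dt), MAX(dt), SUM(leaves)
    FROM (SELECT nm, dt, leaves,
                 MAX(grp_start) OVER (PARTITION BY nm ORDER BY dt) AS grp
          FROM (SELECT nm, dt, leaves,
                       -- new group when this date is not prev date + 1
                       CASE WHEN julianday(dt) - 1 !=
                                 julianday(LAG(dt, 1, dt)
                                           OVER (PARTITION BY nm ORDER BY dt))
                            THEN rowid END AS grp_start
                FROM leave_rec))
    GROUP BY nm, grp
    ORDER BY 2""").fetchall()
for r in rows:
    print(r)
```

This prints the four ranges from the expected output above: 01-05 Oct (5 leaves), 10 Oct (1), 18-20 Oct (3), and 25-28 Oct (4).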

  • SQL worksheet takes a long time to run

    Hi,
    I am using SQL Developer to run a script in the SQL worksheet. It takes an endless time to finish running the command, and the command is just a simple block (e.g. begin xxxx end;).
    This problem didn't happen before on my PC. I have tried reinstalling the software, but it is still the same. Can anyone help me?
    I am using Oracle SQL Developer v1.5.3
    Windows 2000 SP4, 512 MB RAM, Intel Pentium 1.8 GHz
    Java SE Development Kit 1.6.0_03
    Thanks

    Are you sure it's as simple as you think? How about a SELECT * FROM DUAL?
    What kind of database is this?
    Can you see in Enterprise Manager (or similar) whether the database is very busy?
    Can you verify with another tool whether it's related to SQL Developer or not?
    K.

  • One query taking different time to execute on different environments

    I am working on Oracle 10g. We have setup of two different environments - Development and Alpha.
    I have written a query which gets some records from a table. This table contains around 1,000,000 records in both environments.
    The query takes 5 seconds to execute in the Development environment to get 200 records, but the same query takes around 50 seconds to execute in the Alpha environment and retrieve the same number of records.
    Data and indexes on the table are the same in both environments. There are no joins in the query.
    Please let me know all the possible reasons for this.
    Edited by: 956610 on Sep 3, 2012 2:37 AM

    Below are the session statistics from the two environments:
    ----------------------- Development -----------------------
    CPU used by this session     1741
    CPU used when call started     1741
    Cached Commit SCN referenced     15634
    DB time     1752
    Effective IO time     7236
    Number of read IOs issued     173
    SQL*Net roundtrips to/from client     14
    buffer is not pinned count     90474
    buffer is pinned count     264554
    bytes received via SQL*Net from client     4507
    bytes sent via SQL*Net to client     28859
    calls to get snapshot scn: kcmgss     6
    calls to kcmgcs     13
    cell physical IO interconnect bytes     165330944
    cleanout - number of ktugct calls     5273
    cleanouts only - consistent read gets     5273
    commit txn count during cleanout     5273
    consistent gets     202533
    consistent gets - examination     101456
    consistent gets direct     19686
    consistent gets from cache     182847
    consistent gets from cache (fastpath)     81013
    enqueue releases     3
    enqueue requests     3
    execute count     6
    file io wait time     1582
    immediate (CR) block cleanout applications     5273
    index fetch by key     36608
    index scans kdiixs1     36582
    no buffer to keep pinned count     8
    no work - consistent read gets     95791
    non-idle wait count     42
    non-idle wait time     2
    opened cursors cumulative     6
    parse count (hard)     1
    parse count (total)     6
    parse time cpu     1
    parse time elapsed     2
    physical read IO requests     181
    physical read bytes     163299328
    physical read total IO requests     181
    physical read total bytes     163299328
    physical read total multi block requests     162
    physical reads     19934
    physical reads direct     19934
    physical reads direct temporary tablespace     248
    physical write IO requests     8
    physical write bytes     2031616
    physical write total IO requests     8
    physical write total bytes     2031616
    physical write total multi block requests     8
    physical writes     248
    physical writes direct     248
    physical writes direct temporary tablespace     248
    physical writes non checkpoint     248
    recursive calls     31
    recursive cpu usage     1
    rows fetched via callback     23018
    session cursor cache hits     4
    session logical reads     202533
    session uga memory max     65488
    sorts (memory)     3
    sorts (rows)     19516
    sql area evicted     2
    table fetch by rowid     140921
    table scan blocks gotten     19686
    table scan rows gotten     2012896
    table scans (direct read)     2
    table scans (long tables)     2
    user I/O wait time     2
    user calls     16
    workarea executions - onepass     4
    workarea executions - optimal     7
    workarea memory allocated     17
    ----------------------- Alpha -----------------------
    CCursor + sql area evicted     1
    CPU used by this session     5763
    CPU used when call started     5775
    Cached Commit SCN referenced     9264
    Commit SCN cached     1
    DB time     6999
    Effective IO time     4262103
    Number of read IOs issued     2155
    OS All other sleep time     10397
    OS Chars read and written     340383180
    OS Involuntary context switches     18766
    OS Other system trap CPU time     27
    OS Output blocks     12445
    OS Process stack size     24576
    OS System call CPU time     223
    OS System calls     20542
    OS User level CPU time     5526
    OS User lock wait sleep time     86045
    OS Voluntary context switches     15739
    OS Wait-cpu (latency) time     273
    SQL*Net roundtrips to/from client     14
    buffer is not pinned count     2111
    buffer is pinned count     334
    bytes received via SQL*Net from client     4486
    bytes sent via SQL*Net to client     28989
    calls to get snapshot scn: kcmgss     510
    calls to kcmgas     4
    calls to kcmgcs     119
    cell physical IO interconnect bytes     340041728
    cleanout - number of ktugct calls     1
    cleanouts only - consistent read gets     1
    cluster key scan block gets     179
    cluster key scans     168
    commit txn count during cleanout     1
    consistent gets     41298
    consistent gets - examination     722
    consistent gets direct     30509
    consistent gets from cache     10789
    consistent gets from cache (fastpath)     9038
    cursor authentications     2
    db block gets     7
    db block gets from cache     7
    dirty buffers inspected     1
    enqueue releases     58
    enqueue requests     58
    execute count     510
    file io wait time     6841235
    free buffer inspected     8772
    free buffer requested     8499
    hot buffers moved to head of LRU     27
    immediate (CR) block cleanout applications     1
    index fast full scans (full)     1
    index fetch by key     196
    index scans kdiixs1     331
    no work - consistent read gets     40450
    non-idle wait count     1524
    non-idle wait time     1208
    opened cursors cumulative     511
    parse count (hard)     39
    parse count (total)     44
    parse time cpu     78
    parse time elapsed     343
    physical read IO requests     3293
    physical read bytes     329277440
    physical read total IO requests     3293
    physical read total bytes     329277440
    physical read total multi block requests     1951
    physical reads     40195
    physical reads cache     8498
    physical reads cache prefetch     7467
    physical reads direct     31697
    physical reads direct temporary tablespace     1188
    physical write IO requests     126
    physical write bytes     10764288
    physical write total IO requests     126
    physical write total bytes     10764288
    physical writes     1314
    physical writes direct     1314
    physical writes direct temporary tablespace     1314
    physical writes non checkpoint     1314
    prefetched blocks aged out before use     183
    recursive calls     1329
    recursive cpu usage     76
    rows fetched via callback     7
    session cursor cache count     8
    session cursor cache hits     491
    session logical reads     41305
    session pga memory max     851968
    session uga memory     -660696
    session uga memory max     3315160
    shared hash latch upgrades - no wait     14
    sorts (disk)     1
    sorts (memory)     177
    sorts (rows)     21371
    sql area evicted     10
    table fetch by rowid     613
    table scan blocks gotten     30859
    table scan rows gotten     3738599
    table scans (direct read)     4
    table scans (long tables)     8
    table scans (short tables)     3
    user I/O wait time     1208
    user calls     16
    workarea executions - onepass     7
    workarea executions - optimal     113
    workarea memory allocated     -617

  • Function returning query takes more time to run in APEX 4.0

    Hi All,
    I created a report using a function returning a query. The function returns a query based on the parameters, and the query returns dynamic columns. When I run the query in SQL Developer it generates and returns the result in 3 minutes, but in APEX it takes up to 35 minutes to return.
    The query returns around 10,000 rows.
    Is it a performance issue in the query or in APEX? Can anyone please help?
    Regards
    Raj

    RajEndiran wrote:
    Hi Roel,
    Thanks very much for your suggestion. I ran it in TOAD and got the result "Row 1 of 500 fetched so far in 3.31 minutes". Does that mean it only queried 500 records? Is that not the actual time taken to run the full query? Please suggest.
    That reflects the time to return the first 500 records...
    With all the best will in the world, if I were your user and I had to wait 3 minutes for the page to refresh, I'd steadily lose the will to live!
    As this is primarily a SQL tuning question, have a look at this message in the FAQ thread in the {forum:id=75} forum:
    {message:id=9360003}
    That should give you some pointers on the right approach.

  • Threaded program takes more time than running serially!

    Hello All
    I've converted my program into a threaded application to improve speed. However, I found that after converting, the execution time is longer than when the program was non-threaded. I'm not using any synchronized methods. Any idea what could be the reason?
    Thanks in advance.

    First, if you are doing I/O, then maybe that's what's taking the time, not the threads. One question that hasn't been asked about your problem:
    How big is the time difference? If it takes 10 seconds to run the serial version and 10 minutes to run the threaded version, that's a big difference. But if it is 10 seconds vs. 11 seconds, you should reconsider whether it matters so much.
    One analogy that comes to mind about multiple threads vs. sequential code is this:
    With sequentially run code, all the code segments are lined up in order and they go through the door one after the other. As each one goes through, they all move up closer, so they know who's going first.
    With multi-threaded code, all the code segments sort of pile up around the door in a big crowd. Some push through one at a time while others let them (priority), while at other times two go for the door at the same time and there may be a few moments of "oh, after you", "no, after you", "oh no, I insist, after you" before one goes through. So that can introduce some delay.

Maybe you are looking for

  • Help needed in Idoc To File Scenario

    Hi Experts, My scenario is Idoc to File. Here my job is to convert the purchase order idoc into xml file. In this there are three conditions.                    converting the standard PO into xml file (ii)                 converting the PO with seri

  • Differentiate foreground and background processing

    Hi All, I want to execute a certain report in background. If the user tries to execute the program in foreground, the system should prompt him not to run it in foreground. In the same way, the message should not come if the program is run in backgrou

  • Change NLS_LENGTH_SEMANTICS to CHAR

    I need to change the length semantics of all the tables in an existing application schema from BYTE to CHAR. I have explored two methods 1- Datapump export/import Due to large tables with numerous CLOB columns, the performance of the export/import is

  • Safari stops loading after opening my RSS feeds

    When I want to open all my RSS feeds in Safari, it starts to load, but after about 1 to 2 min, all my pages stop loadind. The address bar is then "half blue" ... In FF everything works fine, so network (airport) works fine. Anybody out there who can

  • Scrap control

    Dear SAP Gurus, Following is the scenario: the scrap at the client's side is of three types. 1) PVC 2)Copper 3)Combination of PVC and Copper. The client has created material codes for PVC and Copper because they are individual items and so he posts t