Query taking a long time to execute

Hi all,
My table has more than 1.6 million records, and the following query takes a long time to execute.
select DISTINCT
invoice_id,
invoice_number,
invoice_dis_id,
dis_line,
batch_id,
invoice_date,
cancelled_date,
accounting_date,
invoice_desc,
dist_desc,
invoice_id || '!' || invoice_dis_id || '!' || batch_id as unique_string
FROM test.ORA_test_INVOICE_T
I tried the following workarounds to increase performance, but the query is still retrieving only about 30 rows per second:
analyzed the table, indexes, and schema
tried optimizer hints
Can someone propose a solution to improve the performance? I am using Oracle 11.2.
Thanks,
krish

As fifranken pointed out, you have no WHERE clause and are therefore probably doing a full table scan to get all of the rows.
You are selecting too many columns for an index-only access path to be practical (the idea being that more index "rows" than table rows fit in a block, requiring fewer read operations) unless the table has a LOT of columns.
If you have the parallel query option licensed you can try PQO, but I'm doubtful it will improve performance much for 1.6M rows on 11g. You did say "more than 1.6M rows", which could be anything; 1.6M rows is a lot of data, but 11g should handle it well. DB_FILE_MULTIBLOCK_READ_COUNT affects how efficient full table scans are, and tablespaces can be configured with larger block sizes to make full table scans more efficient.
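For example, to check the multiblock read setting and to experiment with parallelism on your query (a sketch only - the FULL and PARALLEL hints and the degree of 4 are just illustrations to try, not a recommendation):
SHOW PARAMETER db_file_multiblock_read_count

SELECT /*+ FULL(t) PARALLEL(t 4) */ DISTINCT
       invoice_id, invoice_number, invoice_dis_id, dis_line, batch_id,
       invoice_date, cancelled_date, accounting_date, invoice_desc, dist_desc,
       invoice_id || '!' || invoice_dis_id || '!' || batch_id AS unique_string
FROM   test.ORA_test_INVOICE_T t;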
How long is your query taking? Are you concerned about actual execution time or the time it takes to display the results on-screen? Is the length of time happening in SQL*Plus, PL/SQL, a GUI tool like SQL Developer, or something else?
You will need to get execution statistics (AUTOTRACE in SQL*Plus and SQL Developer is an easy way to do this) to see where the time is going. An execution plan to confirm how the query is being executed will also be helpful.
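For example, in SQL*Plus (a minimal sketch):
SET AUTOTRACE TRACEONLY STATISTICS
-- run the query here to get its execution statistics, then:
SET AUTOTRACE OFF
EXPLAIN PLAN FOR
SELECT DISTINCT invoice_id /* ... rest of the query above ... */ FROM test.ORA_test_INVOICE_T;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);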
Edited by: riedelme on Dec 28, 2009 7:12 AM

Similar Messages

  • Taking a long time to execute a select count(*) statement

    Hi all,
    My table has 40 columns and no primary key column; it contains more than 5M records. Simple SQL statements take a long time to execute:
    select count(*) takes 1 min 30 sec, but select count(index_column) finishes within 3 s. I did the following workarounds:
    Analyzed the table.
    Created the required indexes.
    I am still getting the same performance. Please help me solve this issue.
    Thanks

    BlueDiamond wrote:
    COUNT(*) counts the number of rows produced by the query, whereas COUNT(1) counts the number of 1 values.
    Would you care to show details that prove that?
    In fact, if you use count(1), the optimizer rewrites it internally as count(*). count(*) and count(1) have identical execution plans.
    Re: Count(*)/Count(1)
    http://asktom.oracle.com/pls/asktom/f?p=100:11:6346014113972638::::P11_QUESTION_ID:1156159920245
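    You can verify this in a couple of minutes (a sketch - substitute any table you like for DUAL):
    EXPLAIN PLAN FOR SELECT COUNT(*) FROM dual;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    EXPLAIN PLAN FOR SELECT COUNT(1) FROM dual;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- both statements display the same plan, with the same plan hash value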

  • Function taking a long time to execute

    Hi,
    I have a scenario where I am using a TABLE FUNCTION in a join condition with a normal table, but it is taking a long time to execute.
    The function is given below:
    CREATE OR REPLACE FUNCTION GET_ACCOUNT_TYPE(
        SUBNO VARCHAR2 DEFAULT NULL
    ) RETURN ACCOUNT_TYP_KEY_1 PIPELINED AS
        V_SUBNO             VARCHAR2(20);
        V_SUBS_TYP          VARCHAR2(10);
        V_ACCOUNT_TYP_KEY   VARCHAR2(10);
        V_ACCOUNT_TYP_KEY_1 VARCHAR2(10);
        CURSOR C1_REC IS
            SELECT SUBNO, SUBSCR_TYPE
            FROM CTVA_ETL.RA_CRM_USER_INFO
            GROUP BY SUBNO, SUBSCR_TYPE;
    BEGIN
        OPEN C1_REC;
        LOOP
            FETCH C1_REC INTO V_SUBNO, V_SUBS_TYP;
            EXIT WHEN C1_REC%NOTFOUND; -- without an exit the loop never terminates
            IF V_SUBS_TYP IS NOT NULL THEN
                BEGIN
                    SELECT ACCOUNT_TYPE_KEY
                    INTO   V_ACCOUNT_TYP_KEY
                    FROM   DIM_RA_MAST_ACCOUNT_TYPE,
                           RA_CRM_USER_INFO
                    WHERE  ACCOUNT_TYPE_KEY = V_SUBS_TYP
                    AND    ACCOUNT_TYPE_KEY = RA_CRM_USER_INFO.SUBSCR_TYPE
                    AND    SUBNO = V_SUBNO;
                    -- note: V_ACCOUNT_TYP_KEY_1 is never assigned on this
                    -- success path, only in the exception handler below
                EXCEPTION
                    WHEN NO_DATA_FOUND THEN
                        V_ACCOUNT_TYP_KEY   := '-99';
                        V_ACCOUNT_TYP_KEY_1 := V_ACCOUNT_TYP_KEY;
                END;
            ELSE
                V_ACCOUNT_TYP_KEY_1 := '-99';
            END IF;
            PIPE ROW (ACCOUNT_TYP_KEY(V_SUBNO, V_ACCOUNT_TYP_KEY_1));
        END LOOP;
        CLOSE C1_REC; -- must run before RETURN, or the cursor is left open
        RETURN;
    END;
    /
    The above function is supposed to return rows according to SUBSCRIBER TYPE (if it is not null, return the ACCOUNT KEY and SUBNO; else '-99').
    But the lookup is not returning any rows, so every row comes out as
    SUBNO ACCOUNT_TYP
    21 -99
    22 -99
    23 -99
    24 -99
    25 -99
    Thanks and Regards

    Hi LMLobo,
    In addition to Sebastian's answer, you can refer to the document Server Memory Server Configuration Options to check whether the maximum server memory setting of SQL Server was changed on the new server. You can also compare the network packet size setting of SQL Server, as well as the network connectivity, on both servers. Finally, you can refer to the following link on troubleshooting SSIS package performance issues:
    http://technet.microsoft.com/en-us/library/dd795223(v=sql.100).aspx
    Regards,
    Mike Yin
    TechNet Community Support

  • Rank Function taking a long time to execute in SAP HANA

    Hi All,
    I have a couple of reports using the rank function which are timing out or taking a really long time to execute. Is there any way to get the result in less time when rank functions are involved?
    The following is a sample of how the queries look.
    SQL 1:
    select   a.column1,
             b.column1,
             rank() over (partition by a.column1 order by sum(b.column2) asc)
    from     "_SYS_BIC"."Analyticview1" b
             join "Table1" a
               on (a.column2 = b.column3)
    group by a.column1,
             b.column1;
    SQL 2:
    select   a.column1,
             b.column1,
             rank() over (order by min(b.column1) asc) WJXBFS1
    from     "_SYS_BIC"."Analytic view2" b
             cross join "Table 2" a
    where    a.column2 like '%a%'
    and      b.column1 between 100 and 200
    group by a.column1,
             b.column1;
    When I visualize the execution plan, the rank function is the step taking up most of the time. I also executed the same SQL without the rank()/partition/order by (only with sum() in SQL 1 and min() in SQL 2), and even that took around an hour to return.
    1. Does anyone have any idea how to make these queries execute faster?
    2. Does the latency have anything to do with the rank function, or could it be the size of the result set?
    3. Is there any workaround to implement the rank function/partition inside the analytic view itself? If yes, will that give the result faster?
    Thank you for your help!!
    -Gayathri

    Krishna,
    I tried both of them, graphical and CE function.
    Both also take a long time to execute.
    The graphical view gave me the following error after 2 hours and 36 minutes:
    Could not execute 'SELECT ORDER_ID,ITEM_ID,RANK from "_SYS_BIC"."EMMAPERF/ORDER_FACT_HANA_CV" group by ...' in 2:36:23.411 hours .
    SAP DBTech JDBC: [2048]: column store error: search table error:  [2620] executor: plan operation failed
    CE function - I aborted it after 40 minutes.
    Do you know the syntax to declare a local variable to use in a CE function?
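    One rewrite worth trying against SQL 1 above (a sketch only - it may or may not help, but it finishes the aggregation in a derived table before the window function runs over the much smaller result):
    select column1_a,
           column1_b,
           rank() over (partition by column1_a order by sum_col asc) as rnk
    from (
        select a.column1 as column1_a,
               b.column1 as column1_b,
               sum(b.column2) as sum_col
        from   "_SYS_BIC"."Analyticview1" b
               join "Table1" a on (a.column2 = b.column3)
        group by a.column1, b.column1
    ) t;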

  • Query is taking a long time to execute after migrating to 10gR2

    Hi
    We recently migrated the database from 9i to 10gR2 (10.2.0.2.0). This query ran in acceptable time before the upgrade; now it takes a very long time to execute. Can you please let me know what I should do to improve the performance? We gather stats every day.
    Thanks for your help,
    Shree
    ======================================================================================
    SELECT cr.cash_receipt_id
    ,cr.pay_from_customer
    ,cr.receipt_number
    ,cr.receipt_date
    ,cr.amount
    ,cust.account_number
    ,crh.gl_date
    ,cr.set_of_books_id
    ,sum(ra.amount_applied) amount_applied
    FROM AR_CASH_RECEIPTS_ALL cr
    ,AR_RECEIVABLE_APPLICATIONS_ALL ra
    ,hz_cust_accounts cust
    ,AR_CASH_RECEIPT_HISTORY_ALL crh
    ,GL_PERIOD_STATUSES gps
    ,FND_APPLICATION app
    WHERE cr.cash_receipt_id = ra.cash_receipt_id
    AND ra.status = 'UNAPP'
    AND cr.status <> 'REV'
    AND cust.cust_account_id = cr.pay_from_customer
    AND substr(cust.account_number,1,2) <> 'SI' -- Don't allocate Unapplied receipts FOR SI customers
    AND crh.cash_receipt_id = cr.cash_receipt_id
    AND app.application_id = gps.application_id
    AND app.application_short_name = 'AR'
    AND gps.period_name = 'May-07'
    AND crh.gl_date <= gps.end_date
    AND cr.receipt_number not like 'WH%'
    -- AND cust.customer_number = '0000079260001'
    GROUP BY cr.cash_receipt_id
    ,cr.pay_from_customer
    ,cr.receipt_number
    ,cr.receipt_date
    ,cr.amount
    ,cust.account_number
    ,crh.gl_date
    ,cr.set_of_books_id
    HAVING sum(ra.amount_applied) > 0;
    =========================================================================================
    Here is the explain plan in 10g r2 (10.2.0.2.0)
    PLAN_TABLE_OUTPUT
    Plan hash value: 2617075047
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 92340 | 10M| | 513K (1)|
    |* 1 | FILTER | | | | | |
    | 2 | HASH GROUP BY | | 92340 | 10M| 35M| 513K (1)|
    | 3 | TABLE ACCESS BY INDEX ROWID | AR_RECEIVABLE_APPLICATIONS_ALL | 2 | 34 |
    | 4 | NESTED LOOPS | | 184K| 21M| | 510K (1)|
    |* 5 | HASH JOIN | | 99281 | 9M| 3296K| 176K (1)|
    |* 6 | TABLE ACCESS FULL | HZ_CUST_ACCOUNTS | 112K| 1976K| | 22563 (1)|
    |* 7 | HASH JOIN | | 412K| 33M| 25M| 151K (1)|
    | 8 | TABLE ACCESS BY INDEX ROWID | AR_CASH_RECEIPT_HISTORY_ALL | 332K| 4546K|
    | 9 | NESTED LOOPS | | 498K| 19M| | 26891 (1)|
    | 10 | NESTED LOOPS | | 2 | 54 | | 4 (0)|
    | 11 | TABLE ACCESS BY INDEX ROWID| FND_APPLICATION | 1 | 8 | | 1 (0)|
    |* 12 | INDEX UNIQUE SCAN | FND_APPLICATION_U3 | 1 | | | 0 (0)|
    | 13 | TABLE ACCESS BY INDEX ROWID| GL_PERIOD_STATUSES | 2 | 38 | | 3 (0)
    |* 14 | INDEX RANGE SCAN | GL_PERIOD_STATUSES_U1 | 1 | | | 2 (0)|
    |* 15 | INDEX RANGE SCAN | AR_CASH_RECEIPT_HISTORY_N2 | 332K| | | 1011 (1)
    PLAN_TABLE_OUTPUT
    |* 16 | TABLE ACCESS FULL | AR_CASH_RECEIPTS_ALL | 5492K| 235M| | 108K
    |* 17 | INDEX RANGE SCAN | AR_RECEIVABLE_APPLICATIONS_N1 | 4 | | | 2
    Predicate Information (identified by operation id):
    1 - filter(SUM("RA"."AMOUNT_APPLIED")>0)
    5 - access("CUST"."CUST_ACCOUNT_ID"="CR"."PAY_FROM_CUSTOMER")
    6 - filter(SUBSTR("CUST"."ACCOUNT_NUMBER",1,2)<>'SI')
    7 - access("CRH"."CASH_RECEIPT_ID"="CR"."CASH_RECEIPT_ID")
    12 - access("APP"."APPLICATION_SHORT_NAME"='AR')
    14 - access("APP"."APPLICATION_ID"="GPS"."APPLICATION_ID" AND "GPS"."PERIOD_NAME"='May-07')
    filter("GPS"."PERIOD_NAME"='May-07')
    15 - access("CRH"."GL_DATE"<="GPS"."END_DATE")
    16 - filter("CR"."STATUS"<>'REV' AND "CR"."RECEIPT_NUMBER" NOT LIKE 'WH%')
    17 - access("CR"."CASH_RECEIPT_ID"="RA"."CASH_RECEIPT_ID" AND "RA"."STATUS"='UNAPP')
    filter("RA"."CASH_RECEIPT_ID" IS NOT NULL)
    Here is the explain plan in 9i
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=445977 Card=78530 Bytes=9423600)
    1 0 FILTER
    2 1 SORT (GROUP BY) (Cost=445977 Card=78530 Bytes=9423600)
    3 2 HASH JOIN (Cost=443717 Card=157060 Bytes=18847200)
    4 3 HASH JOIN (Cost=99563 Card=94747 Bytes=9758941)
    5 4 TABLE ACCESS (FULL) OF 'HZ_CUST_ACCOUNTS' (Cost=12286 Card=110061 Bytes=1981098)
    6 4 HASH JOIN (Cost=86232 Card=674761 Bytes=57354685)
    7 6 TABLE ACCESS (BY INDEX ROWID) OF 'AR_CASH_RECEIPT_HISTORY_ALL' (Cost=17532 Card=542304 Bytes=7592256)
    8 7 NESTED LOOPS (Cost=17536 Card=809791 Bytes=33201431)
    9 8 NESTED LOOPS (Cost=4 Card=1 Bytes=27)
    10 9 TABLE ACCESS (BY INDEX ROWID) OF 'FND_APPLICATION' (Cost=1 Card=1 Bytes=8)
    11 10 INDEX (UNIQUE SCAN) OF 'FND_APPLICATION_U3' (UNIQUE)
    12 9 TABLE ACCESS (BY INDEX ROWID) OF 'GL_PERIOD_STATUSES' (Cost=3 Card=1 Bytes=19)
    13 12 INDEX (RANGE SCAN) OF 'GL_PERIOD_STATUSES_U1' (UNIQUE) (Cost=2 Card=1)
    14 8 INDEX (RANGE SCAN) OF 'AR_CASH_RECEIPT_HISTORY_N2' (NON-UNIQUE) (Cost=1740 Card=542304)
    15 6 TABLE ACCESS (FULL) OF 'AR_CASH_RECEIPTS_ALL' (Cost=60412 Card=8969141 Bytes=394642204)
    16 3 TABLE ACCESS (FULL) OF 'AR_RECEIVABLE_APPLICATIONS_ALL' (Cost=337109 Card=15613237 Bytes=265425029)

    Hi,
    The plans in 9i and 10g are pretty much the same, but the amount of data fetched has considerably increased. I suspect the query was performing slowly even in 9i.
    Note that the 10g plan estimates 332K rows from AR_CASH_RECEIPT_HISTORY_ALL where the 9i plan estimated 542K, and the range scan of AR_CASH_RECEIPT_HISTORY_N2 is costed at 1011 in 10g versus 1740 in 9i.
    Try creating some indexes on
    AR_CASH_RECEIPTS_ALL
    hz_cust_accounts

  • Taking a long time to execute views

    Hi All,
    my query is taking a long time to execute (I am using standard views in my query).
    XLA_INV_AEL_GL_V and XLA_WIP_AEL_GL_V - these standard views themselves take a long time to execute, but I need the information from them. The relevant predicates are:
    WHERE gjh.je_batch_id = gjb.je_batch_id AND
    gjh.je_header_id = gjl.je_header_id AND
    gjh.je_header_id = xlawip.je_header_id AND
    gjl.je_header_id = xlawip.je_header_id AND
    gjl.je_line_num = xlawip.je_line_num AND
    gcc.code_combination_id = gjl.code_combination_id AND
    gjl.code_combination_id = xlawip.code_combination_id AND
    gjb.set_of_books_id = xlawip.set_of_books_id AND
    gjh.je_source = 'Inventory' AND
    gjh.je_category = 'WIP' AND
    gp.period_set_name = 'Accounting' AND
    gp.period_name = gjl.period_name AND
    gp.period_name = gjh.period_name AND
    gp.start_date +1 between to_date(startdate,'DD-MON-YY') AND
    to_date(enddate,'DD-MON-YY') AND
    gjh.status =nvl(lstatus,gjh.status)
    Could anyone help me make it execute faster?
    Thanks
    Madhu

    [url http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0]When your query takes too long...

  • Procedure takes a long time to execute...

    Hi all
    I wrote the procedure below, but it takes a long time to execute.
    The INterdata table contains 300 records.
    Here is the procedure:
    create or replace procedure inter_filter
    is
        v_sessionid filter.sessionid%type;
        v_filterid  filter.filterid%type;
        cursor c1 is
            select sessionid, clientip, browsertype, userid, logindate, createddate, sourceurl, destinationurl
            from interdata;
        cursor c2 is
            select sessionid, filterid
            from filter;
    begin
        open c2;
        loop
            fetch c2 into v_sessionid, v_filterid;
            exit when c2%notfound; -- without an exit the loop never terminates
            for i in c1 loop
                if i.sessionid = v_sessionid then
                    insert into filterdetail (filterdetailid, filterid, sourceurl, destinationurl, createddate)
                    values (filterdetail_seq.nextval, v_filterid, i.sourceurl, i.destinationurl, i.createddate);
                else
                    insert into filter (filterid, sessionid, clientip, browsertype, userid, logindate, createddate)
                    values (filter_seq.nextval, i.sessionid, i.clientip, i.browsertype, i.userid, i.logindate, i.createddate);
                    insert into filterdetail (filterdetailid, filterid, sourceurl, destinationurl, createddate)
                    values (filterdetail_seq.nextval, filter_seq.currval, i.sourceurl, i.destinationurl, i.createddate);
                end if;
            end loop;
        end loop;
        close c2;
        commit;
    end;
    /
    Please Help!
    Prathamesh

    i wrote the procedure but it takes long time to execute.
    Please define "long time". How long does it take? What were you expecting it to take?
    The INterdata table contains 300 records.
    But how many records are there in the FILTER table? As that is the one you are driving off, it is going to determine the length of time this takes to complete. Also, this solution inserts every row of the INTERDATA table for each row in the FILTER table - in other words, if the FILTER table has twenty rows to start with, you are going to end up with 6000 rows in FILTERDETAIL. No wonder it takes a long time. Is that what you want?
    Also, of course, you are using PL/SQL cursors when you ought to be using set operations. Did you try the solution I posted in Re: Confusion in this  scenario>>>>>>> on this topic?
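    For illustration, a minimal set-based sketch of that approach (assuming the tables and sequences shown in the procedure above, and at most one INTERDATA row per sessionid):
    -- create any missing FILTER rows in one statement
    insert into filter (filterid, sessionid, clientip, browsertype, userid, logindate, createddate)
    select filter_seq.nextval, i.sessionid, i.clientip, i.browsertype, i.userid, i.logindate, i.createddate
    from   interdata i
    where  not exists (select null from filter f where f.sessionid = i.sessionid);
    -- then create the detail rows with a single join
    insert into filterdetail (filterdetailid, filterid, sourceurl, destinationurl, createddate)
    select filterdetail_seq.nextval, f.filterid, i.sourceurl, i.destinationurl, i.createddate
    from   interdata i
           join filter f on f.sessionid = i.sessionid;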
    Cheers, APC

  • File Dialog Box often takes a long time to execute

    Hi all
    The LabVIEW File Dialog Box Express VI sometimes takes 5 to 10 seconds to execute. On other occasions, it executes in the blink of an eye. Why is that so?
    I have tried using the older Open/Create/Replace File VI, as suggested in one of the forum's posts... but it behaves exactly the same way. Another problem is the trimming of the filename, which has also been reported and has NOT YET been solved.
    (By the way... recently I updated to LV 2012 SP2 at the suggestion of the NI Update Manager, only to find out that I was not entitled to this SP (which apparently only includes bug fixes) and I will have to roll back to LV 2012. Still, in the evaluation version I can see that the bug is not fixed. LabVIEW must have a lot of serious bugs for NI to charge for bug fixes, while some bugs remain unfixed long after having been reported... well...)
    What worries me now is that intermittent lag in the execution of the File Dialog (such a basic function of a program...). Does anyone have any idea about this? The support menu said "Ask an engineer"... do any of the NI engineers have a suggestion?
    Regards
    Helder
    Attachments:
    FileDialogTest.vi ‏26 KB

    First of all: thank you very much for the objective answer. I suppose that you actually ARE an NI engineer, as I had hoped.
    Please find my comments inline (helder---------------------)
    helder wrote:
    The Labview File Dialog Box Express VI sometimes takes 5 to 10 seconds to execute. On other occasions, it executes in a blink of an eye. Why is that so?
    What worries me now is that intermittent lag in the execution of the File Dialog (such a basic basic function of a program...). Anyone have any idea about this?
    The file dialog is provided by the OS, so if it is slow it is not LabVIEW's fault.
    What kind of computer do you have? Is this a laptop that spins down the HD to save power? In this case it needs to wait for the HD to spin up. (You can change the windows power profile or you can spend more money and get an SSD).
    Are you very low on memory?
    Does your LabVIEW program waste 100% CPU in parallel doing nothing due to bad coding?
    Are you pointing to a location that has millions of files?
    Are you pointing to a network location?
    helder:---------------------------------------
    I understand that the File Dialog is provided by the OS. But I have other applications on my PC where this doesn't happen - just in LabVIEW. Sorry! And sometimes it doesn't happen: every 10 times I open the File Dialog Box, 5 times it takes long and 5 times it is immediate.
    (I am not low on memory, and my coding is not bad - look at the VI I sent; it contains just the fragment which uses the File Dialog Box and it behaves the same way. I am pointing to the same location that the other applications point to, I am not pointing to a network location, the PC is not spinning the disk down, etc.)
    But yes, I will try it on another PC.
    But yes, I will try it on another PC
    (There was another post on the forum about this issue, but I can't seem to find it now, just my own post.)
    I attach a video demo of what happens.
    helder wrote:
    Another problem is the trimming of the filename, which has also been reported and has NOT YET been solved.
    What problem is that? Can you elaborate? There is a known bug in the windows dialog box that sometimes the pre-filled file names are shifted and not fully visible. This is a windows bug documented by Microsoft and LabVIEW has no control over it. It is not NIs job to fix OS problems.
    helder:-------------------------------------------
    Of course you can't fix the OS.
    If it is a windows bug, we will have to live with it.
    A previous post on your forum did not provide me with that information (http://forums.ni.com/t5/LabVIEW/File-dialog-trimming-the-default-name/td-p/2189874). The fact that it is a Windows bug was not mentioned there.
    helder wrote:
    Hi all
    (By the way... recently I updated to LV 2012 SP2 at the suggestion of the NI Update Manager, only to find out that I was not entitled to this SP (which apparently only includes bug fixes) and I will have to roll back to LV 2012. Still, in the evaluation version I can see that the bug is not fixed.
    Patches are included, service packs are not included. I think this is well documented. Most users are on SSP, this way you are always entitled to the newest version.
    As with anything else, software has bugs. It is impossible to test a huge, complex system like LabVIEW under all scenarios. The possible number of combinations of hardware and other installed software on any particular computer probably exceeds the number of atoms in the universe. It is actually amazing how few bugs there are and how well it works. What "bug" are you talking about? Bugs are prioritized according to rules. Critical bugs are typically fixed very quickly. A cosmetic issue that only affects you and nobody else will take longer or might not even get fixed. Are you talking about a confirmed bug that could be reproduced by NI? Do you have a CAR#?
    helder:----------------------------------------
    I was referring to the file-name-shifting cosmetic bug that the October 2012 post had indicated. But if it is a Windows bug, NI does not have any responsibility, of course.
    Regarding the bugs, of course they are unavoidable.
    helder wrote:
    Labview must have a lot of serious bugs for NI to ask for a paid bug fix, in which some bugs remain after having been reported a long time ago..well...)
    That sentence makes no sense! Why would the number of serious bugs depend on the cost of upgrading?? You are just rambling here....
    In an ideal world, NI would have an unlimited number of programmers that can work 24/7 to immediately fix any discovered bug. This would only be possible if all users are willing to pay an unlimited amount of money for the software. As I mentioned, bug fixes are free in the form of patches, and they come out regularly. Service packs include new features and are not free.
    helder:---------------------------------------
    "....include new features"..????
    ---->>From http://www.ni.com/labview/release-details/
    "LabVIEW 2012 Service Pack 1 is an exclusive update to LabVIEW 2012 for NI Standard Service Program (SSP) customers. There are no new product features introduced in service pack releases; instead, these releases provide bug fixes and improved stability for LabVIEW 2012. For a list of these bug fixes, click here."
    No new features, just bug fixes!"
    Moreover: If it isn't free, it shouldn' have let me update if I am not a SSP costumer.
    Now I have a 45 days evaluation version. After that, I have to uninstall and reinstall everything.That takes some time and effort.
    If there is a more direct way of rolling back to the previous version, please let me know.
    Thank you

  • Extremely long time when executing an export transaction data package link

    Hi,
    I am working with a package link to export transaction data to the application server. The package I am currently using is /CPMB/EXPORT_TD_TO_APPL. I use it to generate an output file which I later use in a different application to register modifications applied in BPC.
    It had been working correctly for some time, but suddenly it stopped working - maybe because of some changes in the dimension library (is that feasible?). I defined and scheduled the package link again and it began working correctly, but the time it takes to execute is now extremely long: above 10 hours, where in the beginning the package link executed in an hour and a half. Could anybody give me an idea what the reason for this problem could be?
    Any help will be much appreciated.

    > Database error text........: "POS(1) System error: BD Index not accessible"
    > Database error code........: "-602"
    > Triggering SQL statement...: "INSERT INTO "/BIC/SZD_PROD" ( "/BIC/ZD_PROD", "SID", "CHCKFL", "DATAFL", "INCFL" ) VALUES ( ? , ? , ? , ? , ? )"
    Hi Hari,
    looks like you are hitting a BAD index.
    Check the [DB50|http://help.sap.com/saphelp_nw04s/helpdata/en/9c/ca5bb3d729034aaf6f4cea2627c2f2/frameset.htm] or the DBMGUI - there should be warnings about this.
    To fix this issue, either use the DBMGUI -> Recover -> [Indexes|http://maxdb.sap.com/doc/7_6/30/5ada38596211d4aa83006094b92fad/frameset.htm] function or logon to dbmcli, get an SQL session ([sql_connect|http://maxdb.sap.com/doc/7_6/11/8af4411cf5c417e10000000a155106/content.htm]) and use the [sql_recreateindex|http://maxdb.sap.com/doc/7_6/30/f7c7f25be311d4aa1500a0c9430730/content.htm] command.
    Regards,
    Lars

  • IP10 and IP30 are taking a very long time to execute

    HI gurus,
    I am facing a problem with counter-based maintenance order scheduling: when I integrate a production order with a maintenance order via a PRT, the order is generated after the required number of usages of the PRT, but deadline monitoring and scheduling of an individual maintenance plan take more than 7 hours to execute.
    I cannot tell where the time is going in the program, since the required maintenance order is eventually generated after a long time.
    Please help me.
    Thanks in advance,
    Praveen Kumar

    There is an SAP note available for this... just check in service.sap.com

  • Cost center query takes a long time when executed with the user's ID

    Hi Experts,
    We have a cost-center query which takes a long time to display its output when run with the user's ID.
    I tried running the report with the same selections and was able to get the values within seconds.
    We have also maintained aggregates on the cube.
    When the user tries it for a single cost center, the performance is OK.
    Any help on this wil be highly appreciated.
    Thanks,
    Amit

    Hi,
    while the user runs the query, capture a trace in ST05: activate the trace for that user ID before running the query in RSRT, and deactivate it after the report displays.
    Go through the logs to find which object is taking a long time, then create aggregates on the cube accordingly.
    While creating the aggregates, give the fixed value.
    Please see the document "How to find the SQL traces in SAP BI".
    Thanks,
    Phani.

  • How to get the time for executing an SQL statement?

    hi all...
    How can I get the total execution time for a given SQL statement in a ViewObject created dynamically for this SQL, so that the end user knows it beforehand and can decide whether to stop it or go ahead?
    Please post any ideas if you have them!
    Thanks in advance.
    Regards,
    K°vi

    This is not really a question for the JDeveloper forum, but rather for the DB forum.
    Since Oracle 9i there has been a way to estimate how long a query will take before executing it.
    Check it out in the Oracle DB documentation or ask on the DB forum.
    There is no built-in functionality for this in JDeveloper, but you should be able to call out to the DB from your Java code to get the estimate.
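    For instance, the optimizer's own estimate is visible through EXPLAIN PLAN (a sketch; "some_table" is a placeholder, and the TIME column of the plan output appears in 10g and later):
    EXPLAIN PLAN FOR
    SELECT * FROM some_table /* the dynamically built query */;
    -- the COST and (on 10g+) TIME columns carry the optimizer's estimate
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);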

  • Stored procedure is taking too long to execute

    Hi all,
    I have a stored procedure which executes in 2 hours in one database, but the same stored procedure takes more than 6 hours in another database.
    Both databases are on Oracle 11.2.
    Can you please suggest what the reasons might be?
    Thanks.

    In most sites I've worked at it's almost impossible to trace sessions, because you don't have read permissions on the tracefile directory (or access to the server at all). My first check would therefore be to look in my session browser to see what the session is actually doing. What is the current SQL statement? What is the current wait event? What cursors has the session spent time on? If the procedure just slogs through one cursor or one INSERT statement etc then you have a straightforward SQL tuning problem. If it's more complex then it will help to know which part is taking the time.
    If you have a licence for the diagnostic pack you can query v$active_session_history, e.g. (developed for 10.2.0.3, could maybe do more in 11.2):
    SELECT CAST(ash.started AS DATE) started
         , ash.elapsed
         , s.sql_text
         , CASE WHEN ash.sql_id = :sql_id AND :status = 'ACTIVE' THEN 'Y' END AS executing
         , s.executions
         , CAST(NUMTODSINTERVAL(elapsed_time/NULLIF(executions,0)/1e6,'SECOND') AS INTERVAL DAY(0) TO SECOND(1)) AS avg_time
         , CAST(NUMTODSINTERVAL(elapsed_time/1e6,'SECOND') AS INTERVAL DAY(0) TO SECOND(1)) AS total_time
         , ROUND(s.parse_calls/NULLIF(s.executions,0),1) avg_parses
         , ROUND(s.fetches/NULLIF(s.executions,0),1) avg_fetches
         , ROUND(s.rows_processed/NULLIF(s.executions,0),1) avg_rows_processed
         , s.module, s.action
         , ash.sql_id
         , ash.sql_child_number
         , ash.sql_plan_hash_value
         , ash.started
    FROM   ( SELECT MIN(sample_time) AS started
                  , CAST(MAX(sample_time) - MIN(sample_time) AS INTERVAL DAY(0) TO SECOND(0)) AS elapsed
                  , sql_id
                  , sql_child_number
                  , sql_plan_hash_value
             FROM   v$active_session_history
             WHERE  session_id = :sid
             AND    session_serial# = :serial#
             GROUP BY sql_id, sql_child_number, sql_plan_hash_value ) ash
           LEFT JOIN
           ( SELECT sql_id, plan_hash_value
                  , sql_text, SUM(executions) OVER (PARTITION BY sql_id) AS executions, module, action, rows_processed, fetches, parse_calls, elapsed_time
                  , ROW_NUMBER() OVER (PARTITION BY sql_id ORDER BY last_load_time DESC) AS seq
             FROM   v$sql ) s
           ON s.sql_id = ash.sql_id AND s.plan_hash_value = ash.sql_plan_hash_value
    WHERE  s.seq = 1
    ORDER BY 1 DESC;
    :sid and :serial# come from v$session. In PL/SQL Developer I defined this as a tab named 'Session queries' in the session browser.
    I have another tab named 'Object wait totals this query' containing:
    SELECT LTRIM(ep.owner || '.' || ep.object_name || '.' || ep.procedure_name,'.') AS plsql_entry_procedure
         , LTRIM(cp.owner || '.' || cp.object_name || '.' || cp.procedure_name,'.') AS plsql_procedure
         , session_state
         , CASE WHEN blocking_session_status IN ('NOT IN WAIT','NO HOLDER','UNKNOWN') THEN NULL ELSE blocking_session_status END AS blocking_session_status
         , event
         , wait_class
         , ROUND(SUM(wait_time)/100,1) as wait_time_secs
         , ROUND(SUM(time_waited)/100,1) as time_waited_secs
         , LTRIM(o.owner || '.' || o.object_name,'.') AS wait_object
    FROM   v$active_session_history h
           LEFT JOIN dba_procedures ep
           ON   ep.object_id = h.plsql_entry_object_id AND ep.subprogram_id = h.plsql_entry_subprogram_id
           LEFT JOIN dba_procedures cp
           ON   cp.object_id = h.plsql_object_id AND cp.subprogram_id = h.plsql_subprogram_id
           LEFT JOIN dba_objects o ON o.object_id = h.current_obj#
    WHERE  h.session_id = :sid
    AND    h.session_serial# = :serial#
    AND    h.user_id = :user#
    AND    h.sql_id = :sql_id
    AND    h.sql_child_number = :sql_child_number
    GROUP BY
           ep.owner, ep.object_name, ep.procedure_name
         , cp.owner, cp.object_name, cp.procedure_name
         , session_state
         , CASE WHEN blocking_session_status IN ('NOT IN WAIT','NO HOLDER','UNKNOWN') THEN NULL ELSE blocking_session_status END
         , event
         , wait_class
         , o.owner
         , o.object_name
    It's not perfect and the numbers aren't reliable, but it gives me an idea where the time might be going. While I'm at it, v$session_longops is worth a look, so I also have 'Longops' as:
    SELECT sid
         , CASE WHEN l.time_remaining> 0 OR l.sofar < l.totalwork THEN 'Yes' END AS "Active?"
         , l.opname AS operation
         , l.totalwork || ' ' || l.units AS totalwork
         , NVL(l.target,l.target_desc) AS target
         , ROUND(100 * l.sofar/GREATEST(l.totalwork,1),1) AS "Complete %"
         , NULLIF(RTRIM(RTRIM(LTRIM(LTRIM(numtodsinterval(l.elapsed_seconds,'SECOND'),'+0'),' '),'0'),'.'),'00:00:00') AS elapsed
         , l.start_time
         , CASE
               WHEN  l.time_remaining = 0 THEN l.last_update_time
               ELSE SYSDATE + l.time_remaining/86400
           END AS est_completion
         , l.sql_id
         , l.sql_address
         , l.sql_hash_value
    FROM v$session_longops l
    WHERE :sid IN (sid,qcsid)
    AND  l.start_time >= TO_DATE(:logon_time,'DD/MM/YYYY HH24:MI:SS')
    ORDER BY l.start_time desc
    and 'Longops this query' as:
    SELECT sid
         , CASE WHEN l.time_remaining> 0 OR l.sofar < l.totalwork THEN 'Yes' END AS "Active?"
         , l.opname AS operation
         , l.totalwork || ' ' || l.units AS totalwork
         , NVL(l.target,l.target_desc) AS target
         , ROUND(100 * l.sofar/GREATEST(l.totalwork,1),1) AS "Complete %"
         , NULLIF(RTRIM(RTRIM(LTRIM(LTRIM(numtodsinterval(l.elapsed_seconds,'SECOND'),'+0'),' '),'0'),'.'),'00:00:00') AS elapsed
         , l.start_time
         , CASE
               WHEN  l.time_remaining = 0 THEN l.last_update_time
               ELSE SYSDATE + l.time_remaining/86400
           END AS est_completion
         , l.sql_id
         , l.sql_address
         , l.sql_hash_value
    FROM v$session_longops l
    WHERE :sid IN (sid,qcsid)
    AND  l.start_time >= TO_DATE(:logon_time,'DD/MM/YYYY HH24:MI:SS')
    AND  l.sql_id = :sql_id
    ORDER BY l.start_time desc
    You can also get this sort of information out of OEM if you're lucky enough to have access to it - if not, ask for it!
    Apart from this type of monitoring, you might try using DBMS_PROFILER (point and click in most IDEs, but you can use it from the SQL*Plus prompt), and also instrument your code with calls to DBMS_APPLICATION_INFO.SET_CLIENT_INFO so you can easily tell from v$session which section of code is being executed.
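    As an illustration, the DBMS_APPLICATION_INFO instrumentation can be as simple as this (a sketch; the phase labels are made up):
    BEGIN
      DBMS_APPLICATION_INFO.SET_CLIENT_INFO('phase 1: loading staging rows');
      -- ... first section of the procedure ...
      DBMS_APPLICATION_INFO.SET_CLIENT_INFO('phase 2: merging into target');
      -- ... second section ...
      DBMS_APPLICATION_INFO.SET_CLIENT_INFO(NULL); -- clear the label when done
    END;
    /
    -- then, from another session:
    SELECT sid, serial#, client_info FROM v$session WHERE client_info IS NOT NULL;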

  • Why does it take longer to execute on production than on staging?

    Hi Experts,
    Any help appreciated on the issue below.
    I have one anonymous block which updates around 1 million records by joining 9 tables.
    It was promoted to production through the following environments, and all environments have exactly the same volume of data:
    development -> testing -> staging -> production.
    The funny thing is that while it takes 5 minutes to execute in every other environment, it takes 30 minutes in production.
    Why does this happen, and what can be the action points for the future?
    Thanks
    -J
    ==============
    If the performance is that different in the different environments, one or more statements must have different query plans in the different environments. The first step would be to get the query plans and compare them to figure out which statement(s) is/are running slowly.
    If there are different query plans, that implies that something is different between the environments. That could be any of
    - Oracle version
    - initialization parameters
    - data
    - object statistics
    - system statistics
    If you guarantee that the data is the same, I would tend to expect that the object statistics are different. How have you gathered statistics in the various environments? Can you move statistics from an environment where performance is acceptable to the environment where performance is unacceptable?
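    For example, statistics can be copied between environments with DBMS_STATS (a sketch; APP_OWNER and STAT_TAB are placeholder names):
    BEGIN
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP_OWNER', stattab => 'STAT_TAB');
      DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'STAT_TAB');
    END;
    /
    -- move STAT_TAB to the slow environment (export/import, database link, ...), then:
    BEGIN
      DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'STAT_TAB');
    END;
    /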
    I would also recommend following the advice others have given you. You don't want to commit in a loop and you want to do as much processing in SQL as possible.
    Justin
    ===============
    Thanks Steve for your inputs.
    My investigation resulted in the following 2 points.
    There are 2 main reasons why some scripts might take longer in live than on staging:
    1: Weekend backups were running on the live server, slowing the server down a lot.
    2: The tables are re-orged when they are imported into staging/dev - so the table and index layout is optimal; on live, the tables and indexes are not necessarily contiguous, so to do the same work the server needs to perform many more I/O operations.
    Can we have some action points to address these issues?
    I think if the data can be contiguous then it may help.
    Best Regards
    -J
    ===============
    But before that, can you raise this in a separate thread, as there is a different issue going on in this thread?
    Cheers
    Sarma.
    ===========
    Hey Sarma,
    Extreme apologies, but I don't know how to raise a new thread.
    Thanks in advance for your help.
    -J
    ===========
    Hi User 527345,
    Please follow these steps to raise a request in this forum:
    1. Register yourself.
    2. Go to the forum home and select the technology where you want to raise the request (e.g., if it is related to Oracle Database general, then select Oracle Database general).
    3. Click on "post new thread".
    4. Give a summary of your issue.
    5. Then submit the issue.
    Please let me know if you need more information.
    Thank you

    Jayashree Mohanty wrote:
    My investigation resulted in the following 2 points.
    There are 2 main reasons why some scripts might take longer in live than on staging:
    1: Weekend backups were running on the live server, slowing the server down a lot.
    2: The tables are re-orged when they are imported into staging/dev - so the table and index layout is optimal; on live, the tables and indexes are not necessarily contiguous, so to do the same work the server needs to perform many more I/O operations.
    Can we have some action points to address these issues?
    I think if the data can be contiguous then it may help.
    First, I didn't understand at all what that thing was when you copied part of I-don't-know-which post into your question. Please read this; it will help you post a proper question and get a proper answer:
    http://www.catb.org/~esr/faqs/smart-questions.html
    Now, how did you come to the conclusion that the backups are actually making your query slower? What benchmark led you to this? And what is the meaning of the 2nd point - can you please explain it?
    As others have also mentioned, please post the plan of the query on both staging and production; only that can tell us what's going on.
    HTH
    Aman....

  • Reports taking a long time to execute

    Hi,
    There are a few reports on SSRS which take almost 6-8 minutes to complete and display their data.
    I am using an Oracle database as the source.
    When I checked query performance, I found that the main dataset takes almost 2-3 minutes to execute in SQL Developer. There are also two parameter queries which each take almost a minute to execute.
    When I run the report without these two parameters, report execution time is reduced by more than 3 minutes.
    I am also using 3-4 groupings in the report, and 6 columns are aggregated at each grouping.
    The reports are tabular and contain headers, footers and 2-3 text boxes to display parameter values.
    Can you please suggest some ways to optimize the queries and reduce the time the report takes to complete?

    Hi sudipta,
    According to your description, it takes too long to render the report when accessing it, right?
    In this scenario, there are many possibilities which can reduce report performance. We suggest you check the report on the Report Server first: go to the Report Server ExecutionLog to see which part takes more time.
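    For example, something along these lines against the ReportServer catalog database (a sketch; ExecutionLog3 is the view name in SSRS 2008 R2 and later, and the catalog database name may differ on your install):
    SELECT TOP 20 ItemPath, TimeStart, TimeDataRetrieval, TimeProcessing, TimeRendering
    FROM   ReportServer.dbo.ExecutionLog3
    ORDER  BY TimeDataRetrieval + TimeProcessing + TimeRendering DESC;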
    Then we need to do some troubleshooting to improve the report performance. Here we have an article for your reference:
    Troubleshooting Reports: Report Performance
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
