Taking a long time to execute a select count(*) statement.

Hi all,
My table has 40 columns and no primary key, and it contains more than 5M records. Simple SQL statements take a long time to execute.
For example, select count(*) takes 1 min 30 sec, while select count(index_column) finishes within 3 s. I tried the following workarounds:
Analyzed the table.
Created the required indexes.
I am still getting the same performance. Please help me solve this issue.
Thanks
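One common reason for this particular pattern: COUNT(*) can be answered from an index only when the optimizer knows the index contains an entry for every row, such as an index on a NOT NULL column or the primary key; otherwise it falls back to a full table scan. A minimal sketch of that fix, with placeholder names (my_table and indexed_col are assumptions, not taken from the post):

    -- If the indexed column can legitimately be declared NOT NULL,
    -- COUNT(*) can be satisfied by an INDEX FAST FULL SCAN instead of a full table scan.
    ALTER TABLE my_table MODIFY (indexed_col NOT NULL);

    -- Verify that the plan now uses the index:
    EXPLAIN PLAN FOR SELECT COUNT(*) FROM my_table;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);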

BlueDiamond wrote:
COUNT(*) counts the number of rows produced by the query, whereas COUNT(1) counts the number of 1 values.
Would you care to show details that prove that?
In fact, if you use count(1), the optimizer rewrites it internally as count(*).
COUNT(*) and COUNT(1) have identical execution plans.
Re: Count(*)/Count(1)
http://asktom.oracle.com/pls/asktom/f?p=100:11:6346014113972638::::P11_QUESTION_ID:1156159920245
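A quick way to check that claim yourself (a sketch; T1 is a placeholder table name): generate both plans and compare, and the operations and costs come out identical.

    EXPLAIN PLAN FOR SELECT COUNT(*) FROM t1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    EXPLAIN PLAN FOR SELECT COUNT(1) FROM t1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);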

Similar Messages

  • Getting long time to execute

    hi all,
    The table has more than 1.6 million records, and the following query takes a long time to execute.
    select DISTINCT
    invoice_id,
    invoice_number,
    invoice_dis_id,
    dis_line,
    batch_id,
    invoice_date,
    cancelled_date,
    accounting_date,
    invoice_desc,
    dist_desc,
    invoice_id || '!' || invoice_dis_id || '!' || batch_id as unique_string
    FROM test.ORA_test_INVOICE_T
    I tried the following workarounds to improve performance, but it is still retrieving only about 30 rows per second:
    analyzed the table, indexes, and schema
    tried to use hints.
    Can someone propose a solution to improve the performance? I am using Oracle 11.2.
    Thanks,
    krish

    As fifranken pointed out, you have no WHERE clause, so you are probably doing a full table scan to get all of the rows.
    You are selecting too many columns for an index-only access path to be practical (the idea being that more index "rows" fit per block than table rows, requiring fewer read operations), unless the table has a LOT more columns than the ones you select.
    If you have the parallel query option license you can try PQO, but I'm doubtful it will improve performance for 1.6M rows on 11g. You did say "more than 1.6M rows", which could be anything; 1.6M rows is a lot of data, but 11g should handle it well. DB_FILE_MULTIBLOCK_READ_COUNT affects how efficient full table scans are, and tablespaces can be configured with larger block sizes to make full table scans more efficient.
    How long is your query taking? Are you concerned about actual execution time or the time it takes to display the results on screen? Is the slowness occurring in SQL*Plus or PL/SQL, a GUI tool like SQL Developer, or something else?
    You will need to gather execution statistics (AUTOTRACE in SQL*Plus and SQL Developer is an easy way to do this) to see where the time is going. An execution plan to confirm how the query is executed will also be helpful.
    Edited by: riedelme on Dec 28, 2009 7:12 AM
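    A minimal sketch of gathering those statistics in SQL*Plus (the table name comes from the original post; the column list is abbreviated):

        SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS
        SELECT DISTINCT invoice_id, invoice_number, invoice_dis_id, dis_line, batch_id
          FROM test.ORA_test_INVOICE_T;
        SET AUTOTRACE OFF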

  • How to get the time for executing an SQL statement?

    hi all...
    How can I get the estimated total execution time for the SQL statement behind a dynamically created ViewObject, so that the end user knows it beforehand and can decide whether to stop or go ahead?
    Please post any ideas if you have got..!!!
    thanx in advance..
    grüss
    K°vi

    This is not really a question for the JDeveloper forum, but rather for the DB forum.
    Since Oracle9i there is a way to estimate how long a query will take, before executing it.
    Check it out in the Oracle DB documentation or ask on the DB forum.
    There is no built-in functionality for this in JDeveloper, but you should be able to call out to the DB from your Java code to get the estimate.
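    One possible way to obtain such an estimate from the database side (a sketch; the assumption here is Oracle 10g or later, where PLAN_TABLE carries a TIME column with the optimizer's estimated elapsed time in seconds, and the query text is a placeholder):

        EXPLAIN PLAN FOR
          SELECT * FROM some_table WHERE some_column = :bind_value;

        -- Estimated elapsed time in seconds for the whole statement (plan line id 0).
        SELECT time FROM plan_table WHERE id = 0;

    From JDeveloper, the same two statements could be issued over the ViewObject's JDBC connection before executing the real query.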

  • Function taking longer time to execute

    Hi,
    I have a scenario where I am using a table function in a join condition with a normal table, but it is taking a long time to execute.
    The function is given below:
    CREATE OR REPLACE FUNCTION GET_ACCOUNT_TYPE(
        SUBNO VARCHAR2 DEFAULT NULL)
    RETURN ACCOUNT_TYP_KEY_1 PIPELINED AS
        V_SUBNO             VARCHAR2(20);
        V_SUBS_TYP          VARCHAR2(10);
        V_ACCOUNT_TYP_KEY   VARCHAR2(10);
        V_ACCOUNT_TYP_KEY_1 VARCHAR2(10);
        V_SUBS_TYP_KEY_1    VARCHAR2(10);
        V_VAL1              VARCHAR2(255);
        CURSOR C1_REC2 IS
            SELECT SUBNO, NULL
              FROM CTVA_ETL.RA_CRM_USER_INFO
             GROUP BY SUBNO, SUBSCR_TYPE;
        --CURSOR C1_REC IS SELECT SUBNO, SUBSCR_TYPE, ACCOUNT_TYPE_KEY
        --  FROM CTVA_ETL.RA_CRM_USER_INFO, DIM_RA_MAST_ACCOUNT_TYPE
        -- WHERE ACCOUNT_TYPE_KEY = RA_CRM_USER_INFO.SUBSCR_TYPE
        -- WHERE MSISDN = '8615025400109'
        -- WHERE MSISDN IN ('8615025400068','8615025400083','8615025400101','8615025400132','8615025400109')
        CURSOR C1_REC IS
            SELECT SUBNO, SUBSCR_TYPE --, ACCOUNT_TYPE_KEY
              FROM CTVA_ETL.RA_CRM_USER_INFO
             GROUP BY SUBNO, SUBSCR_TYPE;
    BEGIN
        OPEN C1_REC;
        LOOP
            FETCH C1_REC INTO V_SUBNO, V_SUBS_TYP;
            IF V_SUBS_TYP IS NOT NULL THEN
                BEGIN
                    SELECT ACCOUNT_TYPE_KEY
                      INTO V_ACCOUNT_TYP_KEY
                      FROM DIM_RA_MAST_ACCOUNT_TYPE,
                           RA_CRM_USER_INFO
                     WHERE ACCOUNT_TYPE_KEY = V_SUBS_TYP
                       AND ACCOUNT_TYPE_KEY = RA_CRM_USER_INFO.SUBSCR_TYPE
                       AND SUBNO = V_SUBNO;
                EXCEPTION
                    WHEN NO_DATA_FOUND THEN
                        V_ACCOUNT_TYP_KEY   := '-99';
                        V_ACCOUNT_TYP_KEY_1 := V_ACCOUNT_TYP_KEY;
                END;
            ELSE
                V_ACCOUNT_TYP_KEY_1 := '-99';
            END IF;
            FOR CUR IN (SELECT DISTINCT V_SUBNO SUBNO_TYP_2, V_ACCOUNT_TYP_KEY_1 ACCOUNT_TYP
                          FROM DUAL)
            LOOP
                PIPE ROW (ACCOUNT_TYP_KEY(CUR.SUBNO_TYP_2, CUR.ACCOUNT_TYP));
            END LOOP;
        END LOOP;
        RETURN;
        CLOSE C1_REC;
    END;
    The above function should return rows based on the subscriber type (if it is not null, it should return the ACCOUNT KEY and SUBNO; otherwise '-99').
    But the lookup is not finding any rows, so every row comes back as:
    SUBNO ACCOUNT_TYP
    21 -99
    22 -99
    23 -99
    24 -99
    25 -99
    Thanks and Regards

    Hi LMLobo,
    In addition to Sebastian's answer, you can refer to the document Server Memory Server Configuration Options to check whether the maximum server memory setting of the SQL Server was changed on the new server. You can also compare the network packet size setting of the SQL Server, as well as the network connectivity on both servers. In addition, the following link covers troubleshooting SSIS package performance issues:
    http://technet.microsoft.com/en-us/library/dd795223(v=sql.100).aspx.
    Regards,
    Mike Yin
    TechNet Community Support

  • Query is taking long time to execute after migrating to 10g r2

    Hi
    We recently migrated the database from 9i to 10gR2 (10.2.0.2.0). This query ran in acceptable time before the upgrade in 9i. Now it is taking a very long time to execute. Can you please let me know what I should do to improve the performance? We gather stats every day.
    Thanks for your help,
    Shree
    ======================================================================================
    SELECT cr.cash_receipt_id
    ,cr.pay_from_customer
    ,cr.receipt_number
    ,cr.receipt_date
    ,cr.amount
    ,cust.account_number
    ,crh.gl_date
    ,cr.set_of_books_id
    ,sum(ra.amount_applied) amount_applied
    FROM AR_CASH_RECEIPTS_ALL cr
    ,AR_RECEIVABLE_APPLICATIONS_ALL ra
    ,hz_cust_accounts cust
    ,AR_CASH_RECEIPT_HISTORY_ALL crh
    ,GL_PERIOD_STATUSES gps
    ,FND_APPLICATION app
    WHERE cr.cash_receipt_id = ra.cash_receipt_id
    AND ra.status = 'UNAPP'
    AND cr.status <> 'REV'
    AND cust.cust_account_id = cr.pay_from_customer
    AND substr(cust.account_number,1,2) <> 'SI' -- Don't allocate Unapplied receipts FOR SI customers
    AND crh.cash_receipt_id = cr.cash_receipt_id
    AND app.application_id = gps.application_id
    AND app.application_short_name = 'AR'
    AND gps.period_name = 'May-07'
    AND crh.gl_date <= gps.end_date
    AND cr.receipt_number not like 'WH%'
    -- AND cust.customer_number = '0000079260001'
    GROUP BY cr.cash_receipt_id
    ,cr.pay_from_customer
    ,cr.receipt_number
    ,cr.receipt_date
    ,cr.amount
    ,cust.account_number
    ,crh.gl_date
    ,cr.set_of_books_id
    HAVING sum(ra.amount_applied) > 0;
    =========================================================================================
    Here is the explain plan in 10g r2 (10.2.0.2.0)
    PLAN_TABLE_OUTPUT
    Plan hash value: 2617075047
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 92340 | 10M| | 513K (1)|
    |* 1 | FILTER | | | | | |
    | 2 | HASH GROUP BY | | 92340 | 10M| 35M| 513K (1)|
    | 3 | TABLE ACCESS BY INDEX ROWID | AR_RECEIVABLE_APPLICATIONS_ALL | 2 | 34 |
    | 4 | NESTED LOOPS | | 184K| 21M| | 510K (1)|
    |* 5 | HASH JOIN | | 99281 | 9M| 3296K| 176K (1)|
    |* 6 | TABLE ACCESS FULL | HZ_CUST_ACCOUNTS | 112K| 1976K| | 22563 (1)|
    |* 7 | HASH JOIN | | 412K| 33M| 25M| 151K (1)|
    | 8 | TABLE ACCESS BY INDEX ROWID | AR_CASH_RECEIPT_HISTORY_ALL | 332K| 4546K|
    | 9 | NESTED LOOPS | | 498K| 19M| | 26891 (1)|
    | 10 | NESTED LOOPS | | 2 | 54 | | 4 (0)|
    | 11 | TABLE ACCESS BY INDEX ROWID| FND_APPLICATION | 1 | 8 | | 1 (0)|
    |* 12 | INDEX UNIQUE SCAN | FND_APPLICATION_U3 | 1 | | | 0 (0)|
    | 13 | TABLE ACCESS BY INDEX ROWID| GL_PERIOD_STATUSES | 2 | 38 | | 3 (0)
    |* 14 | INDEX RANGE SCAN | GL_PERIOD_STATUSES_U1 | 1 | | | 2 (0)|
    |* 15 | INDEX RANGE SCAN | AR_CASH_RECEIPT_HISTORY_N2 | 332K| | | 1011 (1)
    |* 16 | TABLE ACCESS FULL | AR_CASH_RECEIPTS_ALL | 5492K| 235M| | 108K
    |* 17 | INDEX RANGE SCAN | AR_RECEIVABLE_APPLICATIONS_N1 | 4 | | | 2
    Predicate Information (identified by operation id):
    1 - filter(SUM("RA"."AMOUNT_APPLIED")>0)
    5 - access("CUST"."CUST_ACCOUNT_ID"="CR"."PAY_FROM_CUSTOMER")
    6 - filter(SUBSTR("CUST"."ACCOUNT_NUMBER",1,2)<>'SI')
    7 - access("CRH"."CASH_RECEIPT_ID"="CR"."CASH_RECEIPT_ID")
    12 - access("APP"."APPLICATION_SHORT_NAME"='AR')
    14 - access("APP"."APPLICATION_ID"="GPS"."APPLICATION_ID" AND "GPS"."PERIOD_NAME"='May-07')
    filter("GPS"."PERIOD_NAME"='May-07')
    15 - access("CRH"."GL_DATE"<="GPS"."END_DATE")
    16 - filter("CR"."STATUS"<>'REV' AND "CR"."RECEIPT_NUMBER" NOT LIKE 'WH%')
    17 - access("CR"."CASH_RECEIPT_ID"="RA"."CASH_RECEIPT_ID" AND "RA"."STATUS"='UNAPP')
    filter("RA"."CASH_RECEIPT_ID" IS NOT NULL)
    Here is the explain plan in 9i
    Execution Plan
     0        SELECT STATEMENT Optimizer=CHOOSE (Cost=445977 Card=78530 Bytes=9423600)
     1    0     FILTER
     2    1       SORT (GROUP BY) (Cost=445977 Card=78530 Bytes=9423600)
     3    2         HASH JOIN (Cost=443717 Card=157060 Bytes=18847200)
     4    3           HASH JOIN (Cost=99563 Card=94747 Bytes=9758941)
     5    4             TABLE ACCESS (FULL) OF 'HZ_CUST_ACCOUNTS' (Cost=12286 Card=110061 Bytes=1981098)
     6    4             HASH JOIN (Cost=86232 Card=674761 Bytes=57354685)
     7    6               TABLE ACCESS (BY INDEX ROWID) OF 'AR_CASH_RECEIPT_HISTORY_ALL' (Cost=17532 Card=542304 Bytes=7592256)
     8    7                 NESTED LOOPS (Cost=17536 Card=809791 Bytes=33201431)
     9    8                   NESTED LOOPS (Cost=4 Card=1 Bytes=27)
    10    9                     TABLE ACCESS (BY INDEX ROWID) OF 'FND_APPLICATION' (Cost=1 Card=1 Bytes=8)
    11   10                       INDEX (UNIQUE SCAN) OF 'FND_APPLICATION_U3' (UNIQUE)
    12    9                     TABLE ACCESS (BY INDEX ROWID) OF 'GL_PERIOD_STATUSES' (Cost=3 Card=1 Bytes=19)
    13   12                       INDEX (RANGE SCAN) OF 'GL_PERIOD_STATUSES_U1' (UNIQUE) (Cost=2 Card=1)
    14    8                   INDEX (RANGE SCAN) OF 'AR_CASH_RECEIPT_HISTORY_N2' (NON-UNIQUE) (Cost=1740 Card=542304)
    15    6               TABLE ACCESS (FULL) OF 'AR_CASH_RECEIPTS_ALL' (Cost=60412 Card=8969141 Bytes=394642204)
    16    3         TABLE ACCESS (FULL) OF 'AR_RECEIVABLE_APPLICATIONS_ALL' (Cost=337109 Card=15613237 Bytes=265425029)

    Hi,
    The plan in 9i and 10g is pretty much the same, but the amount of data fetched has increased considerably. I guess the query was performing slowly even in 9i.
    The AR_CASH_RECEIPT_HISTORY_ALL step now shows 332,000 rows in 10g, whereas it showed 17,532 in 9i.
    AR_CASH_RECEIPT_HISTORY_N2 now shows 332,000 rows in 10g, whereas in 9i it showed 1,740.
    Try creating some indexes on
    AR_CASH_RECEIPTS_ALL
    hz_cust_accounts
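    A hedged sketch of what that suggestion might translate into; the column choices are my assumptions based on the query's join and filter predicates, not something stated in the thread, and custom indexes on Oracle EBS seeded tables should be reviewed before creating them:

        -- Support the join from the receipt side.
        CREATE INDEX xx_ar_cash_receipts_n99
            ON ar_cash_receipts_all (pay_from_customer, status, receipt_number);
        -- Covering index so the account-number filter need not visit the table.
        CREATE INDEX xx_hz_cust_accounts_n99
            ON hz_cust_accounts (cust_account_id, account_number);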

  • Rank Function taking a long time to execute in SAP HANA

    Hi All,
    I have a couple of reports using the rank function which are timing out or taking a really long time to execute. Is there any way to get the result in less time when rank functions are involved?
    the following is a sample of how the Query looks,
    SQL 1:
    select      a.column1,
                    b.column1,
                    rank () over(partition by a.column1 order by sum(b.column2) asc)
    from         "_SYS_BIC"."Analyticview1"         b
                    join          "Table1"            a
                      on          (a.column2 = b.column3)
    group by  a.column1,
    b.column1;
    SQL 2:
    select    a.column1,
                    b.column1,
                    rank () over( order by min(b.column1) asc) WJXBFS1
    from         "_SYS_BIC"."Analytic view2"         b
                    cross join                "Table 2"               a
    where      (a.column2  like '%a%'
    and b.column1  between 100 and 200)
    group by  a.column1,
                    b.column1
    When I visualize the execution plan, the rank function is the step taking up most of the time, so I executed the same SQL without the rank(), partition, or order by (only with sum() in SQL 1 and min() in SQL 2); even that took around an hour to return the result.
    1. Does anyone have any idea how to make these queries execute faster?
    2. Does the latency have anything to do with the rank function, or could it be the size of the result set?
    3. Is there any workaround to implement the rank function/partition inside the analytic view itself? If yes, will it return the result faster?
    Thank you for your help!!
    -Gayathri

    Krishna,
    I tried both of them, graphical and CE function; both are taking a long time to execute.
    The graphical view gave me the following error after 2 hours and 36 minutes:
    Could not execute 'SELECT ORDER_ID,ITEM_ID,RANK from "_SYS_BIC"."EMMAPERF/ORDER_FACT_HANA_CV" group by ...' in 2:36:23.411 hours .
    SAP DBTech JDBC: [2048]: column store error: search table error:  [2620] executor: plan operation failed
    CE function - I aborted after 40 mins
    Do you know the syntax to declare a local variable for use in a CE function?

  • Procedure takes long time to execute...

    Hi all
    I wrote the procedure below, but it takes a long time to execute.
    The INTERDATA table contains 300 records.
    Here is the procedure:
    create or replace procedure inter_filter
    is
         /*v_sessionid interdata.sessionid%type;
         v_clientip interdata.clientip%type;
         v_userid interdata.userid%type;
         v_logindate interdata%type;
         v_createddate interdata%type;
         v_sourceurl interdata%type;
         v_destinationurl interdata%type;*/
         v_sessionid filter.sessionid%type;
         v_filterid filter.filterid%type;
         cursor c1 is
         select sessionid,clientip,browsertype,userid,logindate,createddate,sourceurl,destinationurl
         from interdata;
         cursor c2 is
         select sessionid,filterid
         from filter;
    begin
         open c2;
         loop
              fetch c2 into v_sessionid,v_filterid;
              for i in c1 loop
                   if i.sessionid = v_sessionid then
                        insert into filterdetail(filterdetailid,filterid,sourceurl,destinationurl,createddate)
                        values (filterdetail_seq.nextval,v_filterid,i.sourceurl,i.destinationurl,i.createddate);
                   else
                        insert into filter (filterid,sessionid,clientip,browsertype,userid,logindate,createddate)
                        values (filter_seq.nextval,i.sessionid,i.clientip,i.browsertype,i.userid,i.logindate,i.createddate);
                        insert into filterdetail(filterdetailid,filterid,sourceurl,destinationurl,createddate)
                        values (filterdetail_seq.nextval,filter_seq.currval,i.sourceurl,i.destinationurl,i.createddate);
                   end if;
              end loop;
         end loop;
         commit;
     end;
    Please Help!
    Prathamesh

    i wrote the procedure but it takes long time to execute.
    Please define "long time". How long does it take? What are you expecting it to take?
    The INTERDATA table contains 300 records.
    But how many records are there in the FILTER table? As that is the table you are driving off, it is going to determine how long the procedure takes to complete. Also, this solution inserts every row of the INTERDATA table for each row in the FILTER table; in other words, if the FILTER table has twenty rows to start with, you are going to end up with 6000 rows in FILTERDETAIL. No wonder it takes a long time. Is that what you want?
    Also, of course, you are using PL/SQL cursors when you ought to be using set operations. Did you try the solution I posted in Re: Confusion in this scenario>>>>>>> on this topic?
    Cheers, APC
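    A hedged sketch of the set-based direction APC points to, using the table and column names from the posted procedure. It covers only the sessions that already exist in FILTER; the branch that creates new FILTER rows would need a second statement (or a MERGE), so treat it as an illustration rather than a drop-in replacement:

        -- One INSERT ... SELECT replaces the nested cursor loops for matching sessions.
        insert into filterdetail (filterdetailid, filterid, sourceurl, destinationurl, createddate)
        select filterdetail_seq.nextval, f.filterid, i.sourceurl, i.destinationurl, i.createddate
          from interdata i
          join filter f
            on f.sessionid = i.sessionid;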

  • Taking long time to execute views

    Hi All,
    My query is taking a long time to execute (I am using standard views in the query).
    XLA_INV_AEL_GL_V and XLA_WIP_AEL_GL_V: these standard views themselves take a long time to execute, but I need the information from them.
    WHERE gjh.je_batch_id = gjb.je_batch_id AND
    gjh.je_header_id = gjl.je_header_id AND
    gjh.je_header_id = xlawip.je_header_id AND
    gjl.je_header_id = xlawip.je_header_id AND
    gjl.je_line_num = xlawip.je_line_num AND
    gcc.code_combination_id = gjl.code_combination_id AND
    gjl.code_combination_id = xlawip.code_combination_id AND
    gjb.set_of_books_id = xlawip.set_of_books_id AND
    gjh.je_source = 'Inventory' AND
    gjh.je_category = 'WIP' AND
    gp.period_set_name = 'Accounting' AND
    gp.period_name = gjl.period_name AND
    gp.period_name = gjh.period_name AND
    gp.start_date +1 between to_date(startdate,'DD-MON-YY') AND
    to_date(enddate,'DD-MON-YY') AND
    gjh.status =nvl(lstatus,gjh.status)
    Could anyone help me make it execute faster?
    Thanks
    Madhu

    When your query takes too long... http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
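    The linked thread essentially asks the poster to supply an execution plan and row-source execution statistics; a minimal sketch of how to capture them for an actual run (Oracle 10g or later, using the standard GATHER_PLAN_STATISTICS hint and DBMS_XPLAN.DISPLAY_CURSOR; shown here against DUAL, so substitute the real query):

        -- Run the slow statement once with row-source statistics enabled.
        SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM dual;

        -- Immediately afterwards, in the same session:
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));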

  • CV04N takes long time to process select query on DRAT table

    Hello Team,
    While using CV04N to display DIRs, it takes a long time to process the select query on the DRAT table. This query includes all the key fields. Any idea how to analyse this?
    Thanks and best regards,
    Bobby
    Moderator message: please read the sticky threads of this forum, there is a lot of information on what you can do.
    Edited by: Thomas Zloch on Feb 24, 2012

    Be aware that XP takes approx 1gb of your RAM leaving you with 1gb for whatever else is running. MS Outlook is also a memory hog.
    To check Virtual Memory Settings:
    Control Panel -> System
    System Properties -> Advanced Tab -> Performance Settings
    Performance Options -> Advanced Tab - Virtual Memory section
    Virtual Memory -
    what are
    * Initial Size
    * Maximum Size
    In a presentation at one of the Hyperion conferences years ago, Mark Ostroff suggested that the initial be set to the same as Max. (Max is typically 2x physical RAM)
    These changes may provide some improvement.

  • Getting run time error for multiple counter plan

    Hi Friends,
    It is a very critical issue: I am getting a run time error for a multiple counter plan.
    Explanation: While trying to change the call horizon or scheduling period of the counter-based plan through IP42 or IP15, the system allows me to change the data, but when saving I get the message "Maintenance plan has changed", and as soon as I step back out of the transaction I get a pop-up window saying "Express document: Update was terminated". However, we maintain a copy of this data in another client, where we do not get any run time error.
    Please find the screen shots for your reference and please let me know how to fix the issue.
    Regards,
    Srinika

    Hi,
         Please check the reason for the update termination in transaction code SM13.

  • How do I execute "Select count(*) from table " in OCI

    Hi,
    I am new to OCI, so this question may seem stupid. I would like to know how to execute the query "select count(*) from <table>" using OCI V8 functionality. Also, how do I get the result into an integer datatype afterwards? I have gone through most of the demo programs, but they are of little help to me.
    Thanks in advance...
    Regards,
    Shubhayan.

    Hi,
    Here is sample code to give you some idea how to do it. If you want some more info let me know.
    Pankaj
    ub4 count;
    // Prepare the statement (the asterisks were eaten by the forum markup).
    const char *szQry = "SELECT COUNT(*) FROM T1";
    if (OCI_ERROR == OCIStmtPrepare(pStmthp, pErrhp, (unsigned char *)szQry,
                                    strlen(szQry), OCI_NTV_SYNTAX, OCI_DEFAULT))
    {
        cout << "Error in OCIStmtPrepare" << endl;
        exit(1);
    }
    // Define the output variable for the single COUNT(*) column.
    OCIDefine *pDefnpp;
    if (OCI_ERROR == OCIDefineByPos(pStmthp, &pDefnpp, pErrhp, 1,
                                    &count, sizeof(ub4), SQLT_INT,
                                    (dvoid *)0, (ub2 *)0, (ub2 *)0, OCI_DEFAULT))
    {
        cout << "Error in OCIDefineByPos" << endl;
        exit(1);
    }
    // Execute and fetch the single result row in one call (iters = 1).
    if (OCI_ERROR == OCIStmtExecute(pSvchp, pStmthp, pErrhp, 1, 0, NULL, NULL, OCI_DEFAULT))
    {
        cout << "Error in OCIStmtExecute" << endl;
        exit(1);
    }

  • Extremely long time when executing an export transaction data package link

    Hi,
    I am working on a package link to export transaction data to the application server. The package I am currently using is /CPMB/EXPORT_TD_TO_APPL. I use it to generate an output file which I later use in a different application to register modifications applied in BPC.
    It had been working correctly without problems for a while. The problem is that it suddenly stopped working, maybe because of some changes in the dimension library (is that feasible?). I defined and scheduled the package link again and it began working correctly, but the time it takes to execute the process is now really long: over 10 hours, which is extremely long for this process. In the beginning, the package link executed in an hour and a half. Could anybody give me an idea of what could be the reason for this problem?
    Any help will be much appreciated.

    >      Database error text........: "POS(1) System error: BD Index not accessible"                    
    >      Database error code........: "-602"                                                            
    >      Triggering SQL statement...: "INSERT INTO "/BIC/SZD_PROD" ( "/BIC/ZD_PROD",                    
    >       "SID", "CHCKFL", "DATAFL", "INCFL" ) VALUES ( ? , ? , ? , ? , ? )"                            
    Hi Hari,
    looks like you are hitting a BAD index.
    Check the [DB50|http://help.sap.com/saphelp_nw04s/helpdata/en/9c/ca5bb3d729034aaf6f4cea2627c2f2/frameset.htm] or the DBMGUI - there should be warnings about this.
    To fix this issue, either use the DBMGUI -> Recover -> [Indexes|http://maxdb.sap.com/doc/7_6/30/5ada38596211d4aa83006094b92fad/frameset.htm] function or logon to dbmcli, get an SQL session ([sql_connect|http://maxdb.sap.com/doc/7_6/11/8af4411cf5c417e10000000a155106/content.htm]) and use the [sql_recreateindex|http://maxdb.sap.com/doc/7_6/30/f7c7f25be311d4aa1500a0c9430730/content.htm] command.
    Regards,
    Lars

  • File Dialog Box often takes a long time to execute

    Hi all
    The Labview File Dialog Box Express VI sometimes takes 5 to 10 seconds to execute. On other occasions, it executes in a blink of an eye. Why is that so?
    I have tried to use the older Open/Create/Replace File VI as suggested in one of the forum's posts...but it behaves exactly the same way. Another problem is the trimming of the filename, which has also been reported and has NOT YET been solved.
    (By the way...Recently, I updated to LV 2012 SP2 by suggestion of the NI Update Manager, only to find out that I was not entitled to this SP (which apparently only includes bug fixes) and I will have to roll back to LV 2012. Still,  in the evaluation version I can see that the bug is not fixed. Labview must have a lot of serious bugs for NI to ask for a paid bug fix, in which some bugs remain after having been reported a long time ago..well...)
    What worries me now is that intermittent lag in the execution of the File Dialog (such a basic basic function of a program...). Anyone have any idea about this? The support menu said "Ask an engineer"...any of the NI engineers has any suggestion?
    Regards
    Helder
    Attachments:
    FileDialogTest.vi 26 KB

    First of all: thank you very much for the objective answer. I suppose that you actually ARE an NI engineer, as I had hoped for.
    Please find my comments below (helder---------------------)
    helder wrote:
    The Labview File Dialog Box Express VI sometimes takes 5 to 10 seconds to execute. On other occasions, it executes in a blink of an eye. Why is that so?
    What worries me now is that intermittent lag in the execution of the File Dialog (such a basic basic function of a program...). Anyone have any idea about this?
    The file dialog is provided by the OS, so if it is slow it is not LabVIEW's fault.
    What kind of computer do you have? Is this a laptop that spins down the HD to save power? In this case it needs to wait for the HD to spin up. (You can change the windows power profile or you can spend more money and get an SSD).
    Are you very low on memory?
    Does your LabVIEW program waste 100% CPU in parallel doing nothing due to bad coding?
    Are you pointing to a location that has millions of files?
    Are you pointing to a network location?
    helder:---------------------------------------
    I understand that the File Dialog is provided by the OS. But I have other applications on my PC and this doesn't happen with them, just in LabVIEW. Sorry! And sometimes it doesn't happen at all: out of every 10 times I open the File Dialog Box, 5 times it takes long and 5 times it is immediate.
    (I am not low on memory, and my coding is not bad: look at the VI I sent, it contains just the fragment which uses the File Dialog Box and it behaves the same way. I am pointing to the same location that the other applications point to, I am not pointing to a network location, the PC is not spinning the disk down, etc.)
    But yes, I will try it on another PC
    (There was another post on the forum on this issue, but can't seem to find it now, just find my own post )
    I attach a video demo of what happens.
    helder wrote:
    Another problem is the trimming of the filename, which has also been reported and has NOT YET been solved.
    What problem is that? Can you elaborate? There is a known bug in the windows dialog box that sometimes the pre-filled file names are shifted and not fully visible. This is a windows bug documented by Microsoft and LabVIEW has no control over it. It is not NIs job to fix OS problems.
    helder:-------------------------------------------
    Of course you can't fix the OS.
    If it is a windows bug, we will have to live with it.
    A previous post on your forum did not provide me with that information (http://forums.ni.com/t5/LabVIEW/File-dialog-trimming-the-default-name/td-p/2189874). The fact that it is a Windows bug was not mentioned there.
    helder wrote:
    Hi all
    (By the way...Recently, I updated to LV 2012 SP2 by suggestion of the NI Update Manager, only to find out that I was not entitled to this SP (which apparently only includes bug fixes) and I will have to roll back to LV 2012. Still,  in the evaluation version I can see that the bug is not fixed.
    Patches are included, service packs are not included. I think this is well documented. Most users are on SSP, this way you are always entitled to the newest version.
    As with anything else, software has bugs. It is impossible to test a huge, complex system like LabVIEW under all scenarios. The possible number of combinations of hardware and other installed software on any particular computer probably exceeds the number of atoms in the universe. It is actually amazing how few bugs there are and how well it works. What "bug" are you talking about? Bugs are prioritized according to rules. Critical bugs are typically fixed very quickly. A cosmetic issue that only affects you and nobody else will take longer or might not even get fixed. Are you talking about a confirmed bug that could be reproduced by NI? Do you have a CAR#?
    helder:----------------------------------------
    I was referring to the file-name-shifting-cosmetic-bug that the October-2012 post had indicated. But if it is a Windows-bug, NI does not have any responsibility, of
    course.
    Regarding the bugs, of course they are unavoidable.
    helder wrote:
    Labview must have a lot of serious bugs for NI to ask for a paid bug fix, in which some bugs remain after having been reported a long time ago..well...)
    That sentence makes no sense! Why would the number of serious bugs depend on the cost of upgrading?? You are just rambling here....
    In an ideal world, NI would have an unlimited number of programmers that can work 24/7 to immediately fix any discovered bug. This would only be possible if all users are willing to pay an unlimited amount of money for the software. As I mentioned, bug fixes are free in the form of patches, and they come out regularly. Service packs include new features and are not free.
    helder:---------------------------------------
    "....include new features"..????
    ---->>From http://www.ni.com/labview/release-details/
    "LabVIEW 2012 Service Pack 1 is an exclusive update to LabVIEW 2012 for NI Standard Service Program (SSP) customers. There are no new product features introduced in service pack releases; instead, these releases provide bug fixes and improved stability for LabVIEW 2012. For a list of these bug fixes, click here."
    No new features, just bug fixes!
    Moreover: if it isn't free, it shouldn't have let me update if I am not an SSP customer.
    Now I have a 45-day evaluation version. After that, I have to uninstall and reinstall everything. That takes some time and effort.
    If there is a more direct way of rolling back to the previous version, please let me know.
    Thank you

  • Select count(*) statement in ABAP

    Hi,
    I'm writing the following statement in my function module:
    select count(*) into l_count
    from user_master
    where username = l_username and
          process_type = processtype and
          password = oldpassword.
    And there is one entry in table user_master.
    But still, I'm getting l_count as zero..
    Can somebody help me with this.
    Regards,
    Amey

    select count(*) into l_count
    from user_master
    where username = l_username and
    process_type = processtype and
    password = oldpassword.
    Use group by option...
    Like this....
    select count(*) into l_count
    from user_master
    where username = l_username and
    process_type = processtype and
    password = oldpassword group by username.

IP10 and IP30 are taking a very long time to execute.

    HI gurus,
    I am facing a problem. For counter-based maintenance order scheduling, when I integrate a production order with a maintenance order via a PRT, the order is generated after the required number of usages of the PRT, but deadline monitoring and scheduling of an individual maintenance plan take more than 7 hours to execute.
    I am not in a position to see where the program is going wrong, because the required maintenance order is eventually generated after this long time.
    Please Help me
    Thanx in advance,
    Praveen Kumar

    There is an SAP Note available for this; just check service.sap.com.
