9i index query problem SLOW

I have recently upgraded to Oracle 9i and have experienced poor performance on many queries. In 8i, selecting max(a_date) from a table took milliseconds. In 9i the same query never seems to return. The explain plan also does not seem to accurately reflect the behaviour of the query.
Any related experiences would be welcome.

Not much to go on. An open door, just in case you hadn't thought of it yourself:
Perhaps you are using cost-based optimization in 9i whereas in 8i you were using rule-based. Take a look at the init.ora and check whether the OPTIMIZER_MODE parameter is set the same in 9i as in 8i. Typically, if you go from RULE to COST but don't ANALYZE your objects, performance problems like these can happen.
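If that turns out to be the case, a minimal sketch of what to check and run (the schema name below is a placeholder):
-- In SQL*Plus: check which optimizer mode is in effect
SHOW PARAMETER optimizer_mode

-- If you stay with the cost-based optimizer, gather statistics so it has
-- something to work with (schema name is a placeholder):
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'YOUR_SCHEMA', cascade => TRUE);
END;
/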
Just a thought,
L.

Similar Messages

  • Bitmap index query is slow

    All,
    I have a data warehouse environment. I have bitmap indexes on all the dimension IDs in the fact table. Here is the query. It took 10 minutes, which is not an acceptable time.
    Any help?..
    SELECT count(*)
    FROM quote a, enterprise_sales_det_fact b, orders c
    WHERE a.quote_dim_sid = b.quote_dim_sid
    AND b.order_dim_sid = c.order_dim_sid
    We have bitmap indexes on quote_dim_sid and order_dim_sid.
    Here is the analyze statement I used.
    exec dbms_stats.gather_table_stats(ownname => 'QUOTEPRD', tabname => 'ORDERS',estimate_percent => 10, method_opt => 'FOR ALL COLUMNS SIZE 1',cascade => true);
    Step # Step Name
    12 SELECT STATEMENT
    11 SORT [AGGREGATE]
    10 NESTED LOOPS
    8 NESTED LOOPS
    6 . VIEW
    5 HASH JOIN
    2 BITMAP CONVERSION [TO ROWIDS]
    1 QUOTEPRD.ENT_FACT_ORDER_BIDX BITMAP INDEX [FULL SCAN]
    4 BITMAP CONVERSION [TO ROWIDS]
    3 QUOTEPRD.ENT_FACT_QUOTE_BIDX BITMAP INDEX [FULL SCAN]
    7 QUOTEPRD.QUOTE_UIDX INDEX [UNIQUE SCAN]
    9 QUOTEPRD.ORDERS_UIDX INDEX [UNIQUE SCAN]
    Step # Description Est. Cost Est. Rows Returned Est. KBytes Returned
    1 This plan step retrieves one or more ROWIDs by scanning all bits in the bitmap index ENT_FACT_ORDER_BIDX to find the rows which satisfy a condition specified in the query's WHERE clause. -- -- --
    2 This plan step accepts a bitmap representation of an index from its child node, and converts it to a ROWID that can be used to access the table.
    3 This plan step retrieves one or more ROWIDs by scanning all bits in the bitmap index ENT_FACT_QUOTE_BIDX to find the rows which satisfy a condition specified in the query's WHERE clause. -- -- --
    4 This plan step accepts a bitmap representation of an index from its child node, and converts it to a ROWID that can be used to access the table.
    5 This plan step accepts two sets of rows, each from a different table. A hash table is built using the rows returned by the first child. Each row returned by the second child is then used to probe the hash table to find row pairs which satisfy a condition specified in the query's WHERE clause. Note: The Oracle cost-based optimizer will build the hash table using what it thinks is the smaller of the two tables. It uses the statistics to determine which is smaller, so out of date statistics could cause the optimizer to make the wrong choice. -- 27,037,540 290,442.324
    6 This plan step represents the execution plan for the subquery defined by the view . 26,527 27,037,540 290,442.324
    7 This plan step retrieves a single ROWID from the B*-tree index QUOTE_UIDX. -- 1 0.007
    8 This plan step joins two sets of rows by iterating over the driving, or outer, row set (the first child of the join) and, for each row, carrying out the steps of the inner row set (the second child). Corresponding pairs of rows are tested against the join condition specified in the query's WHERE clause. 26,528 27,037,540 475,269.258
    9 This plan step retrieves a single ROWID from the B*-tree index ORDERS_UIDX. -- 1 0.007
    10 This plan step joins two sets of rows by iterating over the driving, or outer, row set (the first child of the join) and, for each row, carrying out the steps of the inner row set (the second child). Corresponding pairs of rows are tested against the join condition specified in the query's WHERE clause. 26,529 27,037,540 660,096.191
    11 This plan step accepts a row set (its only child) and returns a single row by applying an aggregation function. -- 1 0.024
    12 This plan step designates this statement as a SELECT statement. 26,529 -- --

    All, I forgot to give one more piece of information, about the table sizes:
    enterprise_sales_det_fact has 30 million rows, quote has 2 million, and orders has 3 million. FYI.
    Regards
    Govind
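    (Side note: the dbms_stats call above only gathers statistics on ORDERS. A hedged sketch of the equivalent calls for the other two tables in the query, assuming the same owner and options:)
    exec dbms_stats.gather_table_stats(ownname => 'QUOTEPRD', tabname => 'QUOTE', estimate_percent => 10, method_opt => 'FOR ALL COLUMNS SIZE 1', cascade => true);
    exec dbms_stats.gather_table_stats(ownname => 'QUOTEPRD', tabname => 'ENTERPRISE_SALES_DET_FACT', estimate_percent => 10, method_opt => 'FOR ALL COLUMNS SIZE 1', cascade => true);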

  • Update query is slow with merge replication

    Hello friend,
    I have a database with enabling merge replication.
    The problem is that an update query is taking a long time,
    but when I disable the merge triggers it updates quickly.
    I would really appreciate your quick response.
    Thanks.

    Hi Manjula,
    According to your description, the update query is slow after configuring merge replication. Here are some suggestions for troubleshooting this issue.
    1. Perform regular index maintenance (update statistics, re-index) on the following replication system tables (a T-SQL sketch follows after step 2).
        •MSmerge_contents
        •MSmerge_genhistory
        •MSmerge_tombstone
        •MSmerge_current_partition_mappings
        •MSmerge_past_partition_mappings
    2. Make sure that the tables involved in the query have suitable indexes. Also re-index and update the statistics for these tables. Additionally, you can use Database Engine Tuning Advisor to tune databases for better query performance.
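    A hedged T-SQL sketch of the maintenance in step 1, run in the published database (repeat for each MSmerge table listed above):
    -- Rebuild all indexes on a replication system table and refresh its statistics
    ALTER INDEX ALL ON dbo.MSmerge_contents REBUILD;
    UPDATE STATISTICS dbo.MSmerge_contents WITH FULLSCAN;
    ALTER INDEX ALL ON dbo.MSmerge_genhistory REBUILD;
    UPDATE STATISTICS dbo.MSmerge_genhistory WITH FULLSCAN;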
    Here are some related articles for your reference.
    http://blogs.msdn.com/b/chrissk/archive/2010/02/01/sql-server-merge-replication-best-practices.aspx
    http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
    Thanks,
    Lydia Zhang

  • Query is slow on staging

    Hi Friends ,
    I am using an 11.2.0.3.0 Oracle DB. We have a query which runs smoothly on Live, and the same query runs slowly on the staging environment. The data is pulled from Live to staging using GoldenGate, and not all columns are refreshed.
    Can you please help me tune this query, or let me know what can best be done to make it run like it does in the Live environment?
    Regards,
    DBApps

    Hi,
    This is a general type of question; please be specific. The golden rule of thumb is: don't use '*', use the column names instead. Analyze the table, take an execution plan, and check for index usage.
    Please give the problem statement also so that we can help you.
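    A minimal sketch of the "analyze and check the plan" step (schema, table, and column names are placeholders):
    exec dbms_stats.gather_table_stats(ownname => 'APP_OWNER', tabname => 'MY_TABLE', cascade => true);

    explain plan for
      select col1, col2 from app_owner.my_table where col1 = :b1;

    -- show the plan, including which indexes are used
    select * from table(dbms_xplan.display);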

  • Query is slow

    SELECT SUM(A.NO_MONTH_CONSUMPTION),SUM(A.BASE_CONSUMPTION),SUM(A.CURRENT_DOC_AMT),SUM(A.CUR_TAX),SUM(B.CURRENT_DOC_AMT)
    FROM VW_x A,(SELECT CURRENT_DOC_AMT,DOC_NO
    FROM VW_y B
    WHERE NVL(B.VOID_STATUS,0)=0 AND B.TR_TYPE_CODE='SW' AND B.BPREF_NO=:B4 AND B.SERVICE_CODE=:B3 AND B.BIZ_PART_CODE=:B2 AND B.CONS_CODE=:B1 ) B
    WHERE A.BPREF_NO=:B4 AND A.SERVICE_CODE=:B3 AND A.BIZ_PART_CODE=:B2 AND A.CONS_CODE=:B1 AND A.BILL_MONTH >:B5 AND NVL(A.VOID_STATUS,0)=0 AND NVL(A.AVG_IND,0)= 2 AND A.DOC_NO=B.DOC_NO(+)
    The above view "VW_x" has around 40 million records from two tables, and the avg_ind column has only the values 0 and 2. I created a function-based index on both tables, something like create index on x1 nvl(avg,0) (see the sketch below).
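    A hedged sketch of what such a function-based index statement would look like (the index name is a placeholder); note that the query generally has to use the same expression, nvl(avg_ind,0), for the index to be considered:
    create index x1_avg_ind_fbi on x1 (nvl(avg_ind, 0));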
    TRACE OUTPUT
    STATISTICS
             15  recursive calls
              0  db block gets
             18  consistent gets
              4  physical reads
              0  redo size
            357  bytes sent via SQL*Net to client
            252  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
              1  rows processed
    But the query is still slow... please suggest the best practice to make it fast.
    thanks

    Hi, sorry, I was out of the office for a while. Please check the execution plan for my query.
    The query below is what I call in a procedure, passing the parameters.
    When I execute the query separately it works fine, but when I call it in the procedure, which has a loop that checks around 400,000 records, that is where I get the problem.
    select sum(a.no_month_consumption),sum(a.base_consumption),sum(a.current_doc_amt),sum(a.cur_tax),sum(b.current_doc_amt)
    --into vnomonths,vcons,vconsamt,vtaxamt,vsewage
    from bill_View a,(select current_doc_amt,doc_no from dbcr_View b where nvl(b.void_status,0)=0 and b.tr_type_code='SWGDBG' and b.bpref_no='Q12345' and b.service_code='E' and b.biz_part_code='MHEW') b
    where a.bpref_no='Q12345' and a.service_code='E' and a.biz_part_code='MHEW'
    and a.bill_month >'30-aPR-2011' and nvl(a.void_status,0)=0 and decode(a.avg_ind,null,0,a.avg_ind)= 2
    and a.doc_no=b.doc_no(+);
    I created a function-based index on the avg_ind column (nvl(avg_ind,0)).
    Execution Plan
     0        SELECT STATEMENT Optimizer=ALL_ROWS (Cost=77 Card=1 Bytes=93)
     1    0     SORT (AGGREGATE)
     2    1       HASH JOIN (OUTER) (Cost=77 Card=4 Bytes=372)
     3    2         VIEW OF 'VW_IBS_BILL' (VIEW) (Cost=54 Card=3 Bytes=198)
     4    3           UNION-ALL
     5    4             TABLE ACCESS (BY INDEX ROWID) OF 'IBS_S_T_BILL' (TABLE) (Cost=8 Card=1 Bytes=50)
     6    5               INDEX (RANGE SCAN) OF 'STBILL_BPREF_NO' (INDEX) (Cost=3 Card=5)
     7    4             TABLE ACCESS (BY INDEX ROWID) OF 'IBS_X_T_BILL' (TABLE) (Cost=46 Card=2 Bytes=114)
     8    7               INDEX (RANGE SCAN) OF 'XTBILL' (INDEX) (Cost=3 Card=43)
     9    2         VIEW OF 'VW_IBS_DBCR' (VIEW) (Cost=22 Card=4 Bytes=108)
    10    9           UNION-ALL
    11   10             TABLE ACCESS (BY INDEX ROWID) OF 'IBS_T_DBCR' (TABLE) (Cost=2 Card=1 Bytes=54)
    12   11               INDEX (RANGE SCAN) OF 'TDBCR_BPREFNO' (INDEX) (Cost=1 Card=1)
    13   10             TABLE ACCESS (BY INDEX ROWID) OF 'IBS_S_T_DBCR' (TABLE) (Cost=7 Card=1 Bytes=43)
    14   13               INDEX (RANGE SCAN) OF 'STDBCR_BPREFNO' (INDEX) (Cost=3 Card=4)
    15   10             TABLE ACCESS (BY INDEX ROWID) OF 'IBS_X_T_DBCR' (TABLE) (Cost=13 Card=2 Bytes=88)
    16   15               INDEX (RANGE SCAN) OF 'XTDBCR' (INDEX) (Cost=3 Card=11)
    What are the Card and Cost attributes in the above output?

  • Indices configuration for XML document analysis (indexing time problems)

    Hi all,
    I'm currently developing a tool for XML Document analysis using XQuery. We have a need to analyse the content of a large CMS dump, so I am adding all documents to a berkeley DB xml to be able to run xqueries against it.
    In my last run I ran into indexing speed problems, with single documents (typically 10-20 K in size) taking around 20 seconds to be added to the database once about 6000 documents were in (I've got around 20000 in total). The time needed to add a document grows with the number of documents already in the database.
    I suspect my index configuration is the reason for this performance drop. Indeed, I've been very generous with indexes, as we have to analyse the data and don't know its structure in advance.
    Currently my index configuration includes:
    - 2 default indices: edge-element-presence-none and edge-attribute-presence-none, to be able to speed up every possible XQuery used to analyse data patterns, e.g. collection()//table//p[contains(.,'help')]
    - 8 edge-attribute-substring-string indices on attributes we use often (id, value, name, ...)
    - 1 edge-element-substring-string index on the root element of the xml documents to be able to speed up document searches: ex. collection()//page[contains(.,'help')]
    So here my questions:
    - Are there any possible performance optimisations in Database config (not index config)? I only set the following:
    setTransactional(false);
    envConf.setCacheSize(1024*64);
    envConf.setCacheMax(1024*256);
    - How can I test various index configurations on the fly? Are there any db tools that allow me to set/remove indexes?
    - Is my index config suspect? ;-)
    Greetings,
    Nils

    Hi Nils,
    The edge-element-substring-string index on the document element is almost certainly the cause of the slow document inserts - that's really not a good idea. Substring indexes are used to optimize "=", contains(), starts-with() and ends-with() when they are applied to the named element that has the substring index, so I don't think that index will do what you want it to.
    John

  • Why is the query so slow?

    Hi,
    I've got a query running fast (3 sec.).
    If I try to execute it on the test environment, it takes about 2 minutes (!)
    I see that in both environments the explain plan is the same, and so are the indexes used. I've also tried to rebuild the indexes and the tables that looked quite fragmented in test, but the result is always the same. Could it be that our test environment is simply slower, with lower performance? What else could I check? (Oracle version is 8.1.7)
    Thanks!

    812809 wrote:
    steps to follow:
    1. check whether the candidate columns have an index or not
    Sometimes an index can cause a query to slow down rather than speed up, especially if a person has created too many indexes on a table and the optimiser can't figure out the best one to use.
    2. go for explain plan and check that the query does not fall under the category of Full Table Scan
    Full table scans are not always a bad thing. Sometimes they are faster than using the index. It depends.

  • Query very slow!

    I have Oracle 9i and SUN OS 5.8
    I have a Java application that has a query against the Customer table. This table has 2,000,000 records and I have to show them in pages (20 records per page).
    The user queries, for example, the customers whose last name begins with "O". Then the application shows the first 20 records matching this condition, ordered by name.
    So I have to create two queries:
    1)
    SELECT id_customer,Name
    FROM Customers
    WHERE Name like 'O%'
    ORDER BY id_customer
    But when I tried this query in TOAD it took a long time (15 minutes).
    I have an index on the NAME field!!
    Besides, if the user wants to go to the second page, the query is executed again (the Java programmers told me that).
    What is your recommendation to optimize it? I need to obtain the information in a few seconds.
    2)
    SELECT count(*) FROM Customers WHERE NAME like 'O%'
    I have to run this query because I need to know how many pages (of 20 records) I need to show.
    For example, with 5000 records I would have 250 pages.
    But when I tried this query in TOAD it also took a long time (30 seconds).
    What is your recommendation to optimize it? I need to obtain the information in a few seconds.
    Thanks in advance!
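    A hedged sketch of the ROWNUM paging pattern often used on 9i for this kind of requirement (not necessarily the answer given in the original thread; :first_row and :last_row are placeholders for the page boundaries, e.g. 0 and 20 for the first page):
    SELECT id_customer, Name
      FROM (SELECT a.*, ROWNUM rnum
              FROM (SELECT id_customer, Name
                      FROM Customers
                     WHERE Name LIKE 'O%'
                     ORDER BY id_customer) a
             WHERE ROWNUM <= :last_row)
     WHERE rnum > :first_row;
    The inner ROWNUM filter means only the first :last_row rows of the sorted result have to be kept and returned, rather than the whole result set.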

    This appears to be a duplicate of a post in the Query very slow! forum.
    Claudio, since the same folks tend to read both forums, it is generally preferred that you post questions in only one forum. That way, multiple people don't spend time writing generally equivalent replies.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Why index creation is slower on the new server?

    Hi there,
    Here is a bit of background/env info:
    Existing prod server (physical): Oracle 10gR2 10.2.0.5.8 on ASM
    RAM: 96GB
    CPUs: 8
    RHEL 5.8 64bit
    Database size around 2TB
    New server:
    VMWare VM with Oracle 10gR2 10.2.0.5.8 on ASM
    RAM 128GB
    vCPUs: 16
    RHEL 5.8 64bit
    Copy of prod DB (from above server) - all init param are the same
    I noticed that index creation is slower on this server. I ran the following query:
    SELECT name c1, cnt c2, DECODE (total, 0, 0, ROUND (cnt * 100 / total)) c3
      FROM (SELECT name, VALUE cnt, (SUM (VALUE) OVER ()) total
              FROM v$sysstat
             WHERE name LIKE 'workarea exec%')
    C1                               C2           C3
    workarea executions - optimal    100427285    100
    workarea executions - onepass    2427         0
    workarea executions - multipass  0            0
    The following bitmap index takes around 40 minutes on the prod server, while it takes around 2 hours on the VM.
    CREATE BITMAP INDEX MY_IDX ON
    MY_HIST(PROD_YN)  TABLESPACE TS_IDX PCTFREE 10
    STORAGE(INITIAL 12582912 NEXT 12582912 PCTINCREASE 0 ) NOLOGGING
    This index is created during a batch process and the dev team is complaining about slowness of the batch on the new server. I have found this one statement responsible for some of the grief. There may be more, and I am investigating.
    I know that adding the "parallel" option may speed it up, but I want to find out why it is slow on the new server.
    I tried creating a simple index on a large table and it took 2 minutes in prod and 3.5 minutes on this VM. So I guess index creation is slower on this VM in general. DML (select/insert/delete/update) seems to run with better elapsed times.
    Any clues what might be causing this slowness in index creation?
    Best regards

    I have been told that the SAN in use by the VM has a capacity of 10K IOPS. Not sure if this info helps. I don't know more than this about the storage.
    What else do I need to find out? Please let me know and I'll check with my sysadmin and update the thread.
    Best regards

  • Non-indexed queries (9i)

    Hello everbody,
    How do I retrieve all the non-indexed queries used in the database? I want to find out which ones I need to optimize.
    thanks John

    Hello
    That's not as easy as it sounds. Creating indexes for every query that does not use one could very easily cause more problems than it fixes. Full table scans are not evil, they are often the fastest way to access a relatively large portion of data in a table.
    If you have issues with performance in your database you need to know exactly what they are, and you can get a very good idea by using statspack.
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/statspac.htm#21793
    Once you have the right metrics, you can make an informed decision as to what the correct course of action is to solve any performance problems you are having. Without these metrics, you are groping in the dark.
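    For example, a minimal sketch of bracketing the problem workload with Statspack snapshots and then reporting on the interval (this assumes Statspack is already installed, which creates the PERFSTAT schema):
    -- take a snapshot, run the workload you want to measure, then take another
    exec statspack.snap;
    -- ... run the workload ...
    exec statspack.snap;
    -- generate a report between the two snapshots (the script prompts for begin/end snap ids)
    @?/rdbms/admin/spreport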
    HTH
    David

  • Query Designer slows down after working some time with it

    Hi all,
    the new BEx Query Designer slows down after working with it for some time. The longer it remains open, the slower it gets. Formula editing, especially, slows down extremely.
    Has anyone of you encountered the same problem? Do you have an idea how to fix this? To me it seems as if the Designer allocates more and more RAM and does not free it up.
    My version: BI AddOn 7.X, Support Package 13, Revision 467
    Kind regards,
    Philipp

    I have seen a similar problem on one of my devices, the Samsung A-920. Every time the system would pop up the 'Will you allow Network Access' screen, the input from all keypresses from then on would be strangely delayed. It looked like the problem was connected with the switching between my app and the system dialog form. I tried for many long hours / days to fix this, but just ended up hacking my phone to remove the security questions. After removing the security questions my problem went away.
    I don't know if it's an option in your application, but is it possible to do everything using just one Canvas, and not switch between displayables? You may want to do an experiment using a single displayable Canvas, and just change how it draws. I know this will make user input much more complicated, but you may be able to avoid the input delays.
    In my case, I think the device wasn't properly releasing / un-registering the input handling from the previous dialogs, so all keypresses still went through the non-current network-security dialog before reaching my app.

  • Query performance slow

    Hi Experts,
    Please clarify my doubts.
    1. How can we know which particular query is performing slowly?
    2. How can we define a cell in BEx?
    3. An InfoCube is an InfoProvider; why is an InfoObject not an InfoProvider?
    Thanks in advance

    Hi,
    1. How can we know which particular query is performing slowly?
       When a query takes a long time to run, you can collect statistics to see where that time is spent:
       select your cube and set the BI Statistics check box, and it will then record all the statistics data for your query,
       such as DB time, front-end (query) time, aggregation time, etc. Based on that, you decide on performance measures such as aggregates, compression, indexes, etc.
    2. How can we define a cell in BEx?
       Cell definition is enabled in your BEx query when you use two structures. You use it when you want to create different formulas row by row.
    3. An InfoCube is an InfoProvider; why is an InfoObject not an InfoProvider?
       An InfoObject can also be an InfoProvider:
       you can convert an InfoObject into an InfoProvider using "Convert as data target".
    Thanks and Regards,
    Venkat.
    Edited by: venkatewara reddy on Jul 27, 2011 12:05 PM

  • Query runs slower when using variables & faster when using hard coded value

    Hi,
    My query runs slower when I use variables but faster when I use hard-coded values. Why is it behaving like this?
    The query is in a cursor definition in a procedure. The procedure runs faster when using hard-coded values and slower when using variables.
    Can anybody help me out there?
    Thanks in advance.

    Hi,
    Thanks for your reply.
    Here is my code with variables:
    Procedure populateCountryTrafficDetails(pWeekStartDate IN Date , pCountry IN d_geography.country_code%TYPE) is
    startdate date;
    AR_OrgId number(10);
    Cursor cTraffic is
    Select
              l.actual_date, nvl(o.city||o.zipcode,'Undefined') Site,
              g.country_code,d.customer_name, d.customer_number,t.contrno bcn,
              nvl(r.dest_level3,'Undefined'),
              Decode(p.Product_code,'820','821','821','821','801') Product_Code ,
              Decode(p.Product_code,'820','Colt Voice Connect','821','Colt Voice Connect','Colt Voice Line') DProduct,
              sum(f.duration),
              sum(f.debamount_eur)
              from d_calendar_date l,
              d_geography g,
              d_customer d, d_contract t, d_subscriber s,
              d_retail_dest r, d_product p,
              CPS_ORDER_DETAILS o,
              f_retail_revenue f
              where
              l.date_key = f.call_date_key and
              g.geography_key = f.geography_key and
              r.dest_key = f.dest_key and
              p.product_key = f.product_key and
              --c.customer_key = f.customer_key and
              d.customer_key = f.customer_key and
              t.contract_key = f.contract_key and
              s.SUBSCRIBER_KEY = f.SUBSCRIBER_KEY and
              o.org_id(+) = AR_OrgId and
              g.country_code = pCountry and
              l.actual_date >= startdate and
              l.actual_date <= (startdate + 90) and
              o.cli(+) = s.area_subno and
              p.product_code in ('800','801','802','804','820','821')
              group by
              l.actual_date,
              o.city||o.zipcode, g.country_code,d.customer_name, d.customer_number,t.contrno,r.dest_level3, p.product_code;
    Type CountryTabType is Table of country_traffic_details.Country%Type index by BINARY_INTEGER;
    Type CallDateTabType is Table of country_traffic_details.CALL_DATE%Type index by BINARY_INTEGER;
    Type CustomerNameTabType is Table of Country_traffic_details.Customer_name%Type index by BINARY_INTEGER;
    Type CustomerNumberTabType is Table of Country_traffic_details.Customer_number%Type index by BINARY_INTEGER;
    Type BcnTabType is Table of Country_traffic_details.Bcn%Type index by BINARY_INTEGER;
    Type DestinationTypeTabType is Table of Country_traffic_details.DESTINATION_TYPE%Type index by BINARY_INTEGER;
    Type ProductCodeTabType is Table of Country_traffic_details.Product_Code%Type index by BINARY_INTEGER;
    Type ProductTabType is Table of Country_traffic_details.Product%Type index by BINARY_INTEGER;
    Type DurationTabType is Table of Country_traffic_details.Duration%Type index by BINARY_INTEGER;
    Type DebamounteurTabType is Table of Country_traffic_details.DEBAMOUNTEUR%Type index by BINARY_INTEGER;
    Type SiteTabType is Table of Country_traffic_details.Site%Type index by BINARY_INTEGER;
    CountryArr CountryTabType;
    CallDateArr CallDateTabType;
    Customer_NameArr CustomerNameTabType;
    CustomerNumberArr CustomerNumberTabType;
    BCNArr BCNTabType;
    DESTINATION_TYPEArr DESTINATIONTYPETabType;
    PRODUCT_CODEArr PRODUCTCODETabType;
    PRODUCTArr PRODUCTTabType;
    DurationArr DurationTabType;
    DebamounteurArr DebamounteurTabType;
    SiteArr SiteTabType;
    Begin
         startdate := (trunc(pWeekStartDate) + 6) - 90;
         Exe_Pos := 1;
         Execute Immediate 'Truncate table country_traffic_details';
         dropIndexes('country_traffic_details');
         Exe_Pos := 2;
         /* Set org ID's as per AR */
         case (pCountry)
         when 'FR' then AR_OrgId := 81;
         when 'AT' then AR_OrgId := 125;
         when 'CH' then AR_OrgId := 126;
         when 'DE' then AR_OrgId := 127;
         when 'ES' then AR_OrgId := 123;
         when 'IT' then AR_OrgId := 122;
         when 'PT' then AR_OrgId := 124;
         when 'BE' then AR_OrgId := 132;
         when 'IE' then AR_OrgId := 128;
         when 'DK' then AR_OrgId := 133;
         when 'NL' then AR_OrgId := 129;
         when 'SE' then AR_OrgId := 130;
         when 'UK' then AR_OrgId := 131;
         else raise_application_error (-20003, 'No such Country Code Exists.');
         end case;
         Exe_Pos := 3;
    dbms_output.put_line('3: '||to_char(sysdate, 'HH24:MI:SS'));
         populateOrderDetails(AR_OrgId);
    dbms_output.put_line('4: '||to_char(sysdate, 'HH24:MI:SS'));
         Exe_Pos := 4;
         Open cTraffic;
         Loop
         Exe_Pos := 5;
         CallDateArr.delete;
    FETCH cTraffic BULK COLLECT
              INTO CallDateArr, SiteArr, CountryArr, Customer_NameArr,CustomerNumberArr,
              BCNArr,DESTINATION_TYPEArr,PRODUCT_CODEArr, PRODUCTArr, DurationArr, DebamounteurArr LIMIT arraySize;
              EXIT WHEN CallDateArr.first IS NULL;
                   Exe_pos := 6;
                        FORALL i IN 1..callDateArr.last
                        insert into country_traffic_details
                        values(CallDateArr(i), CountryArr(i), Customer_NameArr(i),CustomerNumberArr(i),
                        BCNArr(i),DESTINATION_TYPEArr(i),PRODUCT_CODEArr(i), PRODUCTArr(i), DurationArr(i),
                        DebamounteurArr(i), SiteArr(i));
                        Exe_pos := 7;
    dbms_output.put_line('7: '||to_char(sysdate, 'HH24:MI:SS'));
         EXIT WHEN ctraffic%NOTFOUND;
    END LOOP;
         commit;
    Exe_Pos := 8;
              commit;
    dbms_output.put_line('8: '||to_char(sysdate, 'HH24:MI:SS'));
              lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_CUSTNO ON country_traffic_details (CUSTOMER_NUMBER)';
              execDDl(lSql);
              lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_BCN ON country_traffic_details (BCN)';
              execDDl(lSql);
              lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_PRODCD ON country_traffic_details (PRODUCT_CODE)';
              execDDl(lSql);
              lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_SITE ON country_traffic_details (SITE)';
              execDDl(lSql);
              lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_DESTYP ON country_traffic_details (DESTINATION_TYPE)';
              execDDl(lSql);
              Exe_Pos:= 9;
    dbms_output.put_line('9: '||to_char(sysdate, 'HH24:MI:SS'));
    Exception
         When Others then
         raise_application_error(-20003, 'Error in populateCountryTrafficDetails at Position: '||Exe_Pos||' The Error is '||SQLERRM);
    End populateCountryTrafficDetails;
    In the above procedure, if I substitute hard-coded values, i.e. AR_OrgId = 123 and pCountry = 'Austria', then it runs faster.
    Please let me know why this is.
    Thanks in advance.

  • Query is slow to return a result...

    The following query is slow. I have an index on tzk, and the event_dtg column is a date column that allows null values.
    abc_zone has > 60 million records. Statistics on the table and index are current.
    Any idea how to improve the query performance?
    select count (*) tz6
    from abc_zone
    where tzk =6
    and event_dtg > to_date('09/05/2009 01:00:00' , 'MM/DD/YYYY HH24:MI:SS')
    and event_dtg < to_date('04/04/2010 00:00:00' , 'MM/DD/YYYY HH24:MI:SS')
    Oracle 10.2.0.3 on AIX
    Thanks in advance.

    Sorry, I do have an index on event_dtg...
    here is the EP:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 19 | 148 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 19 | | |
    | 2 | TABLE ACCESS BY INDEX ROWID| ABC_ZONE | 16 | 304 | 148 (0)| 00:00:01 |
    | 3 | INDEX RANGE SCAN | ABC_ZONE_EVENT_DTG | 3439 | | 1 (0)| 00:00:01 |
    Query Block Name / Object Alias (identified by operation id):
    1 - SEL$1
    2 - SEL$1 / ABC_ZONE@SEL$1
    3 - SEL$1 / ABC_ZONE@SEL$1
    17 rows selected.
    I suspect there is some kind of conversion (date to timestamp) that is costly.
    Thanks.
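    One thing that is sometimes tried for a count like this (a hedged sketch, not the answer from the original thread): a composite index covering both predicate columns, so the count can be answered from the index alone instead of visiting the table for every candidate row. The index name is a placeholder:
    create index abc_zone_tzk_dtg_idx on abc_zone (tzk, event_dtg);
    With both tzk and event_dtg in the index, the TABLE ACCESS BY INDEX ROWID step in the plan above is no longer needed for this query.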

  • Exchange Server Information Store has encountered an error while executing a full-text index query

    Hi Team, I need help.
    Exchange 2013 EAC doesn't show the databases and it gives the message
    "Your request couldn't be completed. Please try again in a few minutes."
    It's a 9-node DAG and I stopped and disabled Search and Search Host Controller on all of them.
    In the event viewer I see a lot of Event ID 1012:
    Log Name:      Application
    Source:        MSExchangeIS
    Date:          4/1/2013 9:23:48 AM
    Event ID:      1012
    Task Category: General
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      ex1301.dagdc.com
    Description:
    Exchange Server Information Store has encountered an error while executing a full-text index query ("and(or(itemclass:string("IPM.Note*", mode="and"), itemclass:string("IPM.Schedule.Meeting*", mode="and"), itemclass:string("IPM.OCTEL.VOICE*",
    mode="and"), itemclass:string("IPM.VOICENOTES*", mode="and")), subject:string("SearchQueryStxProbe*", mode="and"), folderid:string("48A300C7FBA4DA408B80EB019A1CE94900000000000E0000"))"). Error
    information: System.TimeoutException: Failed to open a channel.
       at Microsoft.Exchange.Search.OperatorSchema.PagingImsFlowExecutor.ExecuteServiceCall(IProcessingEngineChannel& serviceProxy, Action`1 call, Int32 retryCount)
       at Microsoft.Exchange.Search.OperatorSchema.PagingImsFlowExecutor.ExecuteSearchFlow(String flowName, Dictionary`2 inputData)
       at Microsoft.Exchange.Search.OperatorSchema.PagingImsFlowExecutor.<ExecuteInternal>d__18.MoveNext()
       at Microsoft.Exchange.Search.OperatorSchema.PagingImsFlowExecutor.<ExecuteSimple>d__a.MoveNext()
       at Microsoft.Exchange.Server.Storage.FullTextIndex.FullTextIndexQuery.ExecutePagedFullTextIndexQuery(Guid databaseGuid, Guid mailboxGuid, Int32 mailboxNumber, String query, CultureInfo culture, Guid correlationId, QueryLoggingContext loggingContext,
    PagedQueryResults pagedQueryResults)
       at Microsoft.Exchange.Server.Storage.StoreCommonServices.StoreFullTextIndexHelper.ExecuteFullTextIndexQuery(Context context, MailboxState mailboxState, QueryParameters queryParameters, PagedQueryResults pagedQueryResults, ExchangeId searchFolderId,
    SearchExecutionDiagnostics diagnostics)
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="MSExchangeIS" />
        <EventID Qualifiers="49156">1012</EventID>
        <Level>2</Level>
        <Task>1</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2013-04-01T16:23:48.000000000Z" />
        <EventRecordID>192599</EventRecordID>
        <Channel>Application</Channel>
        <Computer>ex1301.dagdc.com</Computer>
        <Security />
      </System>
      <EventData>
        <Data>and(or(itemclass:string("IPM.Note*", mode="and"), itemclass:string("IPM.Schedule.Meeting*", mode="and"), itemclass:string("IPM.OCTEL.VOICE*", mode="and"), itemclass:string("IPM.VOICENOTES*",
    mode="and")), subject:string("SearchQueryStxProbe*", mode="and"), folderid:string("48A300C7FBA4DA408B80EB019A1CE94900000000000E0000"))</Data>
        <Data>System.TimeoutException: Failed to open a channel.
       at Microsoft.Exchange.Search.OperatorSchema.PagingImsFlowExecutor.ExecuteServiceCall(IProcessingEngineChannel&amp; serviceProxy, Action`1 call, Int32 retryCount)
       at Microsoft.Exchange.Search.OperatorSchema.PagingImsFlowExecutor.ExecuteSearchFlow(String flowName, Dictionary`2 inputData)
       at Microsoft.Exchange.Search.OperatorSchema.PagingImsFlowExecutor.&lt;ExecuteInternal&gt;d__18.MoveNext()
       at Microsoft.Exchange.Search.OperatorSchema.PagingImsFlowExecutor.&lt;ExecuteSimple&gt;d__a.MoveNext()
       at Microsoft.Exchange.Server.Storage.FullTextIndex.FullTextIndexQuery.ExecutePagedFullTextIndexQuery(Guid databaseGuid, Guid mailboxGuid, Int32 mailboxNumber, String query, CultureInfo culture, Guid correlationId, QueryLoggingContext loggingContext,
    PagedQueryResults pagedQueryResults)
       at Microsoft.Exchange.Server.Storage.StoreCommonServices.StoreFullTextIndexHelper.ExecuteFullTextIndexQuery(Context context, MailboxState mailboxState, QueryParameters queryParameters, PagedQueryResults pagedQueryResults, ExchangeId searchFolderId,
    SearchExecutionDiagnostics diagnostics)</Data>
        <Binary>5B444941475F4354585D000084000000FF09000000000000000268000000808A00100000000080CA00100000000080B200100000000080D200100000000030FF001000000000309F00100000000030DF001000000000B09D001000000000B0DD001000000000B0ED001000000000B08D001000000000B095001000000000B0A5001000000000</Binary>
      </EventData>
    </Event>

    Hi,
    Please re-enable and start the search engine service and try again.
    A similar case for your reference:
    http://social.technet.microsoft.com/Forums/en-US/exchangesvrgeneral/thread/4f43ef50-b71f-4ab3-8ced-70f1c36c5509
    Hope it is helpful.
    Fiona Liao
    TechNet Community Support
