Query is slow

SELECT SUM(A.NO_MONTH_CONSUMPTION),SUM(A.BASE_CONSUMPTION),SUM(A.CURRENT_DOC_AMT),SUM(A.CUR_TAX),SUM(B.CURRENT_DOC_AMT)
FROM VW_x A,(SELECT CURRENT_DOC_AMT,DOC_NO
FROM VW_y B
WHERE NVL(B.VOID_STATUS,0)=0 AND B.TR_TYPE_CODE='SW' AND B.BPREF_NO=:B4 AND B.SERVICE_CODE=:B3 AND B.BIZ_PART_CODE=:B2 AND B.CONS_CODE=:B1 ) B
WHERE A.BPREF_NO=:B4 AND A.SERVICE_CODE=:B3 AND A.BIZ_PART_CODE=:B2 AND A.CONS_CODE=:B1 AND A.BILL_MONTH >:B5 AND NVL(A.VOID_STATUS,0)=0 AND NVL(A.AVG_IND,0)= 2 AND A.DOC_NO=B.DOC_NO(+)
The above view "VW_x" has around 40 million records coming from two tables, and the avg_ind column has only the values 0 and 2. I created a function-based index on both base tables, something like: create index ... on x1 (nvl(avg_ind, 0)).
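For reference, a minimal sketch of valid DDL for that kind of function-based index (x1 and x2 here are placeholders standing in for the two base tables behind VW_x; adjust the names to your schema):
create index x1_avgind_fbi on x1 (nvl(avg_ind, 0));
create index x2_avgind_fbi on x2 (nvl(avg_ind, 0));
-- refresh statistics so the optimizer can cost the new indexes
begin
  dbms_stats.gather_table_stats(user, 'X1', cascade => true);
  dbms_stats.gather_table_stats(user, 'X2', cascade => true);
end;
/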
TRACE OUTPUT
STATISTICS
         15  recursive calls
          0  db block gets
         18  consistent gets
          4  physical reads
          0  redo size
        357  bytes sent via SQL*Net to client
        252  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed
But the query is still slow. Please suggest best practices to make it faster.
thanks

Hi, sorry, I was out of office for a while. Please check the execution plan for my query.
I am calling the query below in a procedure, passing the parameters.
When I execute the query separately it works fine, but when I call it from the procedure, which has a loop that checks around 400,000 records, that is where I get the problem.
select sum(a.no_month_consumption),sum(a.base_consumption),sum(a.current_doc_amt),sum(a.cur_tax),sum(b.current_doc_amt)
--into vnomonths,vcons,vconsamt,vtaxamt,vsewage
from bill_View a,(select current_doc_amt,doc_no from dbcr_View b where nvl(b.void_status,0)=0 and b.tr_type_code='SWGDBG' and b.bpref_no='Q12345' and b.service_code='E' and b.biz_part_code='MHEW') b
where a.bpref_no='Q12345' and a.service_code='E' and a.biz_part_code='MHEW'
and a.bill_month >'30-aPR-2011' and nvl(a.void_status,0)=0 and decode(a.avg_ind,null,0,a.avg_ind)= 2
and a.doc_no=b.doc_no(+);
I created a function-based index on the avg_ind column (nvl(avg_ind,0)).
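One thing worth checking, as a hedged suggestion: the query above filters with decode(a.avg_ind,null,0,a.avg_ind) = 2, while the index was built on nvl(avg_ind,0). A function-based index is normally only considered when the predicate uses the same expression the index was created on, so writing the filter with the identical NVL expression may let the optimizer pick it up:
and nvl(a.avg_ind, 0) = 2
(Also check that the view itself does not wrap avg_ind in another expression, since the index lives on the base tables, not on the view.)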
Execution Plan
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=77 Card=1 Bytes=93)
   1    0   SORT (AGGREGATE)
   2    1     HASH JOIN (OUTER) (Cost=77 Card=4 Bytes=372)
   3    2       VIEW OF 'VW_IBS_BILL' (VIEW) (Cost=54 Card=3 Bytes=198)
   4    3         UNION-ALL
   5    4           TABLE ACCESS (BY INDEX ROWID) OF 'IBS_S_T_BILL' (TABLE) (Cost=8 Card=1 Bytes=50)
   6    5             INDEX (RANGE SCAN) OF 'STBILL_BPREF_NO' (INDEX) (Cost=3 Card=5)
   7    4           TABLE ACCESS (BY INDEX ROWID) OF 'IBS_X_T_BILL' (TABLE) (Cost=46 Card=2 Bytes=114)
   8    7             INDEX (RANGE SCAN) OF 'XTBILL' (INDEX) (Cost=3 Card=43)
   9    2       VIEW OF 'VW_IBS_DBCR' (VIEW) (Cost=22 Card=4 Bytes=108)
  10    9         UNION-ALL
  11   10           TABLE ACCESS (BY INDEX ROWID) OF 'IBS_T_DBCR' (TABLE) (Cost=2 Card=1 Bytes=54)
  12   11             INDEX (RANGE SCAN) OF 'TDBCR_BPREFNO' (INDEX) (Cost=1 Card=1)
  13   10           TABLE ACCESS (BY INDEX ROWID) OF 'IBS_S_T_DBCR' (TABLE) (Cost=7 Card=1 Bytes=43)
  14   13             INDEX (RANGE SCAN) OF 'STDBCR_BPREFNO' (INDEX) (Cost=3 Card=4)
  15   10           TABLE ACCESS (BY INDEX ROWID) OF 'IBS_X_T_DBCR' (TABLE) (Cost=13 Card=2 Bytes=88)
  16   15             INDEX (RANGE SCAN) OF 'XTDBCR' (INDEX) (Cost=3 Card=11)
What are the Card and Cost attributes in the above output?
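For what it is worth: Card is the optimizer's estimated row count (cardinality) for that plan step, and Cost is its estimated relative cost. You can see the same figures, labelled Rows and Cost, by displaying the plan with DBMS_XPLAN (available from 9.2 onwards); a small sketch reusing bill_view from the post, where the sample statement merely stands in for the full query above:
explain plan for
  select sum(a.current_doc_amt)
  from   bill_view a
  where  a.bpref_no = 'Q12345';   -- substitute the full statement from above
select * from table(dbms_xplan.display);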

Similar Messages

  • Why is the query so slow?

    Hi,
    I've got a query running fast (3 sec.)
    If I try to execute it on the test environment, it takes about 2 minutes (!)
    I see that in both environments the explain plan is the same, and so are the indexes used. I've also tried rebuilding the indexes and the tables that looked quite fragmented in test, but the result is always the same. Could it be that our test environment is simply slower, with lower performance? What else could I check? (Oracle version is 8.1.7)
    Thanks!

    812809 wrote:
    steps to follow:
    1. whether the candidate columns have an index or not
    Sometimes an index can cause a query to slow down rather than speed up, especially if a person has created too many indexes on a table and the optimiser can't figure out the best one to use.
    2. go for explain plan and check that the query does not fall under the category of Full Table Scan
    Full table scans are not always a bad thing. Sometimes they are faster than using the index. It depends.

  • Query very slow!

    I have Oracle 9i and SUN OS 5.8
    I have a Java application that runs a query against the Customer table. This table has 2,000,000 records and I have to show them in pages (20 records per page).
    The user queries, for example, the customers whose last name begins with "O". The application then shows the first 20 records matching this condition, ordered by name.
    So I have to create 2 queries:
    1)
    SELECT id_customer,Name
    FROM Customers
    WHERE Name like 'O%'
    ORDER BY id_customer
    But when I tried this query in TOAD it took a long time (about 15 minutes).
    I have an index on the NAME field!!
    Besides, if the user wants to go to the second page, the query is executed again (the Java programmers told me that).
    What is your recommendation to optimize it? I need to obtain the information in a few seconds.
    2)
    SELECT count(*) FROM Customers WHERE NAME like 'O%'
    I have to run this query because I need to know how many pages (of 20 records) I have to show.
    For example, with 5000 records I have 250 pages.
    But when I tried this query in TOAD it also took a long time (about 30 seconds).
    What is your recommendation to optimize it? I need to obtain the information in a few seconds.
    Thanks in advance!
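    A sketch of the classic nested-ROWNUM paging pattern, reusing the Customers table and columns from the post (adjust the ORDER BY to whatever the page is actually sorted on); it lets Oracle stop after the rows for the requested page instead of materialising the whole result:
    SELECT id_customer, name
    FROM  ( SELECT a.*, ROWNUM rnum
            FROM  ( SELECT /*+ FIRST_ROWS(20) */ id_customer, name
                    FROM   customers
                    WHERE  name LIKE 'O%'
                    ORDER  BY name ) a
            WHERE ROWNUM <= :max_row )   -- e.g. 40 for page 2
    WHERE rnum > :min_row;               -- e.g. 20 for page 2
    For the page count, COUNT(*) still has to visit every matching entry, so an index that covers the NAME filter is what keeps it reasonable; many paging front ends also cache or approximate the total instead of recounting on every page.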

    This appears to be a duplicate of a post in the Query very slow! forum.
    Claudio, since the same folks tend to read both forums, it is generally preferred that you post questions in only one forum. That way, multiple people don't spend time writing generally equivalent replies.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Update query is slow with merge replication

    Hello friend,
    I have a database with merge replication enabled.
    The problem is that an update query is taking a long time.
    But when I disable the merge triggers, it updates quickly.
    I really appreciate your
    quick response.
    Thanks.

    Hi Manjula,
    According to your description, the update query is slow after configuring merge replication. Here are some suggestions for troubleshooting this issue.
    1. Perform regular index maintenance (update statistics, re-index) on the following replication system tables:
        •MSmerge_contents
        •MSmerge_genhistory
        •MSmerge_tombstone
        •MSmerge_current_partition_mappings
        •MSmerge_past_partition_mappings
    2. Make sure that your tables involved in the query have suitable indexes. Also do the re-indexing and update the statistics for these tables. Additionally, you can use
    Database Engine Tuning Advisor to tune databases for better query performance.
    Here are some related articles for your reference.
    http://blogs.msdn.com/b/chrissk/archive/2010/02/01/sql-server-merge-replication-best-practices.aspx
    http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
    Thanks,
    Lydia Zhang

  • Query performance slow in one instance in RAC

    Hi
    We have a 3-node RAC. When we test one query, it is about 40% slower on one instance, and physical reads always happen on that instance.
    Below are the details. All the parameters are the same. Users complain that the query is sometimes slow.
    Thanks in Advance.
    From Instance 1 - 9 Sec
    =============================================================
    Statistics
              0  recursive calls
              1  db block gets
          67209  consistent gets
              0  physical reads
              0  redo size
          23465  bytes sent via SQL*Net to client
          10356  bytes received via SQL*Net from client
             28  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             13  rows processed
    From Instance 2 - 13 Sec
    =============================================================
    Statistics
              0  recursive calls
              1  db block gets
          67215  consistent gets
          67193  physical reads    <<------------------------ Only in one instance
              0  redo size
          23465  bytes sent via SQL*Net to client
          10356  bytes received via SQL*Net from client
             28  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             13  rows processed
    From Instance 3 - 9 Sec
    =============================================================
    Statistics
              0  recursive calls
              1  db block gets
          67209  consistent gets
              0  physical reads
              0  redo size
          23465  bytes sent via SQL*Net to client
          10356  bytes received via SQL*Net from client
             28  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             13  rows processed

    You can also check global cache statistics. Run this before and after your query :
    select name, value from v$mystat s, v$statname n where s.statistic#=n.statistic# and name like '%blocks received';
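    Since only one instance is doing physical reads, it can also be worth comparing the memory allocation on each node; a small sketch (assumes 10g or later, where gv$sgainfo is available to your user):
    select inst_id, name, round(bytes/1024/1024) mb
    from   gv$sgainfo
    where  name in ('Buffer Cache Size', 'Shared Pool Size')
    order  by inst_id, name;
    A noticeably smaller buffer cache on the slow node would explain why the same blocks have to come from disk there.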

  • Query Designer slows down after working some time with it

    Hi all,
    the new BEx Query Designer slows down after working with it for some time. The longer it remains open, the slower it gets. Formula editing in particular slows down extremely.
    Did any of you encounter the same problem? Do you have an idea how to fix this? To me it seems as if the Designer allocates more and more RAM and does not free it up.
    My version: BI AddOn 7.X, Support Package 13, Revision 467
    Kind regards,
    Philipp

    I have seen a similar problem on one of my devices, the 'Samsung A-920'. Every time the system would pop up the 'Will you allow Network Access' screen, the input from all keypresses from then on was strangely delayed. It looked like the problem was connected with the switching between my app and the system dialog form. I tried for many long hours / days to fix this, but just ended up hacking my phone to remove the security questions. After removing the security questions, my problem went away.
    I don't know if it's an option in your application, but is it possible to do everything using just one Canvas, and not switch between displayables? You may want to do an experiment using a single displayable Canvas, and just change how it draws. I know this will make user input much more complicated, but you may be able to avoid the input delays.
    In my case, I think the device wasn't properly releasing / un-registering the input handling from the previous dialogs, so all keypresses still went through the non-current network-security dialog before reaching my app.

  • Query is slow on staging

    Hi Friends ,
    I am using an 11.2.0.3.0 Oracle DB. We have a query that runs smoothly on Live, but the same query runs slowly on the staging environment. The data is pulled from Live to staging using GoldenGate, and not all columns are refreshed.
    Can you please help me tune this query, or let me know what best can be done for this query to run as it does in the Live environment.
    Regards,
    DBApps

    Hi,
    This is a general type of question; please be specific. A golden rule of thumb is: don't use '*', instead use the column names. Analyze the table, take an execution plan, and check for index usage.
    Please give the problem statement also so that we can help you.
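    As a sketch of that advice on the staging side (YOUR_TABLE and the sample statement are placeholders; substitute the real table and the slow query):
    begin
      dbms_stats.gather_table_stats(ownname => user, tabname => 'YOUR_TABLE', cascade => true);
    end;
    /
    explain plan for
      select count(*) from your_table;   -- replace with the actual slow query
    select * from table(dbms_xplan.display);
    Comparing this plan with the one from Live usually shows quickly whether stale statistics or a missing/unusable index on staging is steering the optimizer differently.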

  • Query performance slow

    Hi Experts,
    Please clarify my doubts.
    1. How can we know that a particular query's performance is slow?
    2. How can we define a cell in BEx?
    3. An InfoCube is an InfoProvider; why is an InfoObject not an InfoProvider?
    Thanks in advance

    Hi,
    1. How can we know that a particular query's performance is slow?
       When a query takes a long time to run, you can collect statistics to find out where that time is being spent.
       Select your cube and set the BI statistics check box; after that it will record all the statistics data regarding your query:
       DB time (database time), front-end time (query), aggregation time, etc. Based on that you go for performance measures such as aggregates, compression, indexes, etc.
    2. How can we define a cell in BEx?
       Cell definition is enabled when your BEx query uses two structures. You go for this if you want to create different formulas row by row.
    3. An InfoCube is an InfoProvider; why is an InfoObject not an InfoProvider?
        An InfoObject can also be an InfoProvider:
        you can convert an InfoObject into an InfoProvider using "Convert as data target".
    Thanks and Regards,
    Venkat.
    Edited by: venkatewara reddy on Jul 27, 2011 12:05 PM

  • Query runs slower when using variables & faster when using hard coded value

    Hi,
    My query runs slower when I use variables, but it runs faster when I use hard-coded values. Why is it behaving like this?
    The query is in a cursor definition in a procedure. The procedure runs faster when using hard-coded values and slower when using variables.
    Can anybody help me out there?
    Thanks in advance.

    Hi,
    Thanks for your reply.
    Here is my code with variables:
    Procedure populateCountryTrafficDetails(pWeekStartDate IN Date , pCountry IN d_geography.country_code%TYPE) is
    startdate date;
    AR_OrgId number(10);
    Cursor cTraffic is
    Select
              l.actual_date, nvl(o.city||o.zipcode,'Undefined') Site,
              g.country_code,d.customer_name, d.customer_number,t.contrno bcn,
              nvl(r.dest_level3,'Undefined'),
              Decode(p.Product_code,'820','821','821','821','801') Product_Code ,
              Decode(p.Product_code,'820','Colt Voice Connect','821','Colt Voice Connect','Colt Voice Line') DProduct,
              sum(f.duration),
              sum(f.debamount_eur)
              from d_calendar_date l,
              d_geography g,
              d_customer d, d_contract t, d_subscriber s,
              d_retail_dest r, d_product p,
              CPS_ORDER_DETAILS o,
              f_retail_revenue f
              where
              l.date_key = f.call_date_key and
              g.geography_key = f.geography_key and
              r.dest_key = f.dest_key and
              p.product_key = f.product_key and
              --c.customer_key = f.customer_key and
              d.customer_key = f.customer_key and
              t.contract_key = f.contract_key and
              s.SUBSCRIBER_KEY = f.SUBSCRIBER_KEY and
              o.org_id(+) = AR_OrgId and
              g.country_code = pCountry and
              l.actual_date >= startdate and
              l.actual_date <= (startdate + 90) and
              o.cli(+) = s.area_subno and
              p.product_code in ('800','801','802','804','820','821')
              group by
              l.actual_date,
              o.city||o.zipcode, g.country_code,d.customer_name, d.customer_number,t.contrno,r.dest_level3, p.product_code;
    Type CountryTabType is Table of country_traffic_details.Country%Type index by BINARY_INTEGER;
    Type CallDateTabType is Table of country_traffic_details.CALL_DATE%Type index by BINARY_INTEGER;
    Type CustomerNameTabType is Table of Country_traffic_details.Customer_name%Type index by BINARY_INTEGER;
    Type CustomerNumberTabType is Table of Country_traffic_details.Customer_number%Type index by BINARY_INTEGER;
    Type BcnTabType is Table of Country_traffic_details.Bcn%Type index by BINARY_INTEGER;
    Type DestinationTypeTabType is Table of Country_traffic_details.DESTINATION_TYPE%Type index by BINARY_INTEGER;
    Type ProductCodeTabType is Table of Country_traffic_details.Product_Code%Type index by BINARY_INTEGER;
    Type ProductTabType is Table of Country_traffic_details.Product%Type index by BINARY_INTEGER;
    Type DurationTabType is Table of Country_traffic_details.Duration%Type index by BINARY_INTEGER;
    Type DebamounteurTabType is Table of Country_traffic_details.DEBAMOUNTEUR%Type index by BINARY_INTEGER;
    Type SiteTabType is Table of Country_traffic_details.Site%Type index by BINARY_INTEGER;
    CountryArr CountryTabType;
    CallDateArr CallDateTabType;
    Customer_NameArr CustomerNameTabType;
    CustomerNumberArr CustomerNumberTabType;
    BCNArr BCNTabType;
    DESTINATION_TYPEArr DESTINATIONTYPETabType;
    PRODUCT_CODEArr PRODUCTCODETabType;
    PRODUCTArr PRODUCTTabType;
    DurationArr DurationTabType;
    DebamounteurArr DebamounteurTabType;
    SiteArr SiteTabType;
    Begin
         startdate := (trunc(pWeekStartDate) + 6) - 90;
         Exe_Pos := 1;
         Execute Immediate 'Truncate table country_traffic_details';
         dropIndexes('country_traffic_details');
         Exe_Pos := 2;
         /* Set org ID's as per AR */
         case (pCountry)
         when 'FR' then AR_OrgId := 81;
         when 'AT' then AR_OrgId := 125;
         when 'CH' then AR_OrgId := 126;
         when 'DE' then AR_OrgId := 127;
         when 'ES' then AR_OrgId := 123;
         when 'IT' then AR_OrgId := 122;
         when 'PT' then AR_OrgId := 124;
         when 'BE' then AR_OrgId := 132;
         when 'IE' then AR_OrgId := 128;
         when 'DK' then AR_OrgId := 133;
         when 'NL' then AR_OrgId := 129;
         when 'SE' then AR_OrgId := 130;
         when 'UK' then AR_OrgId := 131;
         else raise_application_error (-20003, 'No such Country Code Exists.');
         end case;
         Exe_Pos := 3;
    dbms_output.put_line('3: '||to_char(sysdate, 'HH24:MI:SS'));
         populateOrderDetails(AR_OrgId);
    dbms_output.put_line('4: '||to_char(sysdate, 'HH24:MI:SS'));
         Exe_Pos := 4;
         Open cTraffic;
         Loop
         Exe_Pos := 5;
         CallDateArr.delete;
    FETCH cTraffic BULK COLLECT
              INTO CallDateArr, SiteArr, CountryArr, Customer_NameArr,CustomerNumberArr,
              BCNArr,DESTINATION_TYPEArr,PRODUCT_CODEArr, PRODUCTArr, DurationArr, DebamounteurArr LIMIT arraySize;
              EXIT WHEN CallDateArr.first IS NULL;
                   Exe_pos := 6;
                        FORALL i IN 1..callDateArr.last
                        insert into country_traffic_details
                        values(CallDateArr(i), CountryArr(i), Customer_NameArr(i),CustomerNumberArr(i),
                        BCNArr(i),DESTINATION_TYPEArr(i),PRODUCT_CODEArr(i), PRODUCTArr(i), DurationArr(i),
                        DebamounteurArr(i), SiteArr(i));
                        Exe_pos := 7;
    dbms_output.put_line('7: '||to_char(sysdate, 'HH24:MI:SS'));
         EXIT WHEN ctraffic%NOTFOUND;
    END LOOP;
         commit;
    Exe_Pos := 8;
              commit;
    dbms_output.put_line('8: '||to_char(sysdate, 'HH24:MI:SS'));
              lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_CUSTNO ON country_traffic_details (CUSTOMER_NUMBER)';
              execDDl(lSql);
              lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_BCN ON country_traffic_details (BCN)';
              execDDl(lSql);
              lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_PRODCD ON country_traffic_details (PRODUCT_CODE)';
              execDDl(lSql);
              lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_SITE ON country_traffic_details (SITE)';
              execDDl(lSql);
              lSql := 'CREATE INDEX COUNTRY_TRAFFIC_DETAILS_DESTYP ON country_traffic_details (DESTINATION_TYPE)';
              execDDl(lSql);
              Exe_Pos:= 9;
    dbms_output.put_line('9: '||to_char(sysdate, 'HH24:MI:SS'));
    Exception
         When Others then
         raise_application_error(-20003, 'Error in populateCountryTrafficDetails at Position: '||Exe_Pos||' The Error is '||SQLERRM);
    End populateCountryTrafficDetails;
    In the above procedure, if I substitute hard-coded values, i.e. AR_OrgId = 123 and pCountry = 'Austria', then it runs faster.
    Please let me know why this is so.
    Thanks in advance.
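    A hedged way to see why the bind-variable version gets a different (slower) plan than the literal version is to capture the actual plan of each run and compare them, assuming you are on 10g or later where DBMS_XPLAN.DISPLAY_CURSOR is available (the text filter below is only an example; adjust it to match your statement):
    select sql_id, child_number, plan_hash_value
    from   v$sql
    where  upper(sql_text) like '%F_RETAIL_REVENUE%';
    select * from table(dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST'));
    A frequent cause of this behaviour is bind peeking on skewed data: with literals the optimizer costs the actual values, while with binds it optimizes for whatever values it peeked at first (or for a generic estimate), which can yield a plan that is poor for other values.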

  • Query is slow to return result...

    The following query is slow. I have an index on tzk, and the event_dtg column is a date column that allows null values.
    abc_zone has > 60 million records. Statistics on the table and index are current.
    Any idea how to improve the query performance?
    select count (*) tz6
    from abc_zone
    where tzk =6
    and event_dtg > to_date('09/05/2009 01:00:00' , 'MM/DD/YYYY HH24:MI:SS')
    and event_dtg < to_date('04/04/2010 00:00:00' , 'MM/DD/YYYY HH24:MI:SS')
    Oracle 10.2.0.3 on AIX
    Thanks in advance.

    Sorry, I do have an index on event_dtg...
    Here is the execution plan:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 19 | 148 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 19 | | |
    | 2 | TABLE ACCESS BY INDEX ROWID| ABC_ZONE | 16 | 304 | 148 (0)| 00:00:01 |
    | 3 | INDEX RANGE SCAN | ABC_ZONE_EVENT_DTG | 3439 | | 1 (0)| 00:00:01 |
    Query Block Name / Object Alias (identified by operation id):
    1 - SEL$1
    2 - SEL$1 / ABC_ZONE@SEL$1
    3 - SEL$1 / ABC_ZONE@SEL$1
    17 rows selected.
    I suspect there is some kind of conversion (date to timestamp) that is costly.
    Thanks.
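    Since the query filters on both tzk and event_dtg and only counts rows, a composite index covering both columns would let Oracle answer it from the index alone, without visiting the table; a sketch (the index name is made up):
    create index abc_zone_tzk_evdtg_ix on abc_zone (tzk, event_dtg);
    -- the count can then be served by a range scan of the tzk = 6 slice,
    -- reading only the event_dtg range inside it
    select count(*) tz6
    from   abc_zone
    where  tzk = 6
    and    event_dtg > to_date('09/05/2009 01:00:00', 'MM/DD/YYYY HH24:MI:SS')
    and    event_dtg < to_date('04/04/2010 00:00:00', 'MM/DD/YYYY HH24:MI:SS');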

  • Query of query - running slower on 64 bit CF than 32 bit CF

    Greetings...
    I am seeing behavior where pages that use query-of-query run slower on 64-bit Coldfusion 9.01 than on 32-bit Coldfusion 9.01.
    My server specs are : dual processer virtual machine, 4 GIG ram, Windows 2008 Datacenter Server r2 64-bit, Coldfusion 9.01. Note that the coldfusion is literally "straight out of the box", and is using all default settings - the only thing I configured in CF is a single datasource.
    The script I am using to benchmark this runs a query that returns 20,000 rows with fields id, firstname, lastname, email, city, datecreated. I then loop through all 20,000 records, and for each record, I do a query-of-query (on the same master query) to find any other record where the lastname matches that of the record I'm currently on. Note that I'm only interested in using this process for comparative benchmarking purposes, and I know that the process could be written more efficiently.
    Here are my observed execution times for both 64-bit and 32-bit Coldfusion (in seconds) on the same machine.
    64 bit CF 9.01: 63,49,52,52,52,48,50,49,54 (avg=52 seconds)
    32 bit CF 9.01: 47,45,43,43,45,41,44,42,46 (avg=44 seconds)
    It appears from this that 64-bit CF performs worse than 32-bit CF when doing query-of-query operations. Has anyone made similar observations, and is there any way I can tune the environment to improve 64 bit performance?
    Thanks for any help you can provide!
    By the way, here's the code that is generating these results:
    <!--- Allrecs query returns 20000 rows --->
    <CFQUERY NAME="ALLRECS" DATASOURCE="MyDsn">
        SELECT * FROM MyTBL
    </CFQUERY>
    <CFLOOP QUERY="ALLRECS">
        <CFQUERY NAME="SAMELASTNAME" DBTYPE="QUERY">
            SELECT * FROM ALLRECS
            WHERE LN=<CFQUERYPARAM VALUE="#ALLRECS.LN#" CFSQLTYPE="CF_SQL_VARCHAR">
            AND ID<><CFQUERYPARAM VALUE="#AllRecs.ID#" CFSQLTYPE="CF_SQL_INTEGER">
        </CFQUERY>
        <CFIF SameLastName.RecordCount GT 20>
            #AllRecs.LN#, #AllRecs.FN# : #SameLastName.RecordCount# other records with same lastname<BR>
        </CFIF>
    </CFLOOP>

    BoBear2681 wrote:
    ..follow-up: ..Thanks for the follow-up. I'll be interested to hear the progress (or otherwise, as the case may be).
    As an aside. I got sick of trying to deal with Clip because it could only handle very small Clip sizes. AFAIR it was 1 second of 44.1 KHz stereo. From that point, I developed BigClip.
    Unfortunately BigClip as it stands is even less able to fulfil your functional requirement than Clip, in that only one BigClip can be playing at a time. Further, it can be blocked by other sound applications (e.g. VLC Media Player, Flash in a web page..) or vice-versa.

  • Flashback and transaction query very slow

    Hello. I was wondering if anyone else has seen transaction queries be really slow and if there is anything I can do to speed it up? Here is my situation:
    I have a database with about 50 tables. We need to allow the user to go back to a point in time and "undo" what they have done. I can't use flashback table because multiple users can be making changes to the same table (different records) and I can't undo what the other users have done. So I must use the finer granularity of undoing each transaction.
    I have not had a problem with the queries, etc. I basically get a cursor to all the transactions in each of the tables and order them backwards (since all the business rules must be observed). However, getting this cursor takes forever. From that cursor, I can execute the undo_sql. In fact, I once had a cursor that did "union all" on each table and even if the user only modified 1 table, it took way too long. So now I do a quick count based on the ROWSCN (running 10g and tables have ROWDEPENDANCIES) being in the time needed to find out if this table has been touched. Based on that, I can create a cursor only for the tables that have been touched. This helps. But it is still slow especially compared to any other query I have. And if the user did touch a lot of tables, it is still way too slow.
    Here is an example of part of a query that is used on each table:
    select xid, commit_scn, logon_user, undo_change#, operation, table_name, undo_sql
    from flashback_transaction_query
    where operation IN ('INSERT', 'UPDATE', 'DELETE')
      and xid IN (select versions_xid
                  from TABLE1
                  versions between SCN p_scn and current_scn
                  where system_id = p_system_id)
      and table_name = UPPER('TABLE1')
    Any help is greatly appreciated.
    -Carmine

    Anyone?
    Thanks,
    -Carmine

  • SQL Query very slow.

    I have a table which has 40 million rows in it. Of course, partitioned!
    begin
    pk_cm_entity_context.set_entity_in_context(1);
    end;
    SELECT COUNT(1) FROM XFACE_ADDL_DETAILS_TXNLOG;
    alter table XFACE_ADDL_DETAILS_TXNLOG rename to XFACE_ADDLDTS_TXNLOG_PTPART;
    SELECT COUNT(1) FROM XFACE_ADDLDTS_TXNLOG_PTPART;
    -- Create table
    create table XFACE_ADDL_DETAILS_TXNLOG (
    REF_TXN_NO CHAR(40),
    REF_USR_NO CHAR(40),
    REF_KEY_NO VARCHAR2(50),
    REF_TXN_NO_ORG CHAR(40),
    REF_USR_NO_ORG CHAR(40),
    RECON_CODE VARCHAR2(25),
    COD_TASK_DERIVED VARCHAR2(5),
    COD_CHNL_ID VARCHAR2(6),
    COD_SERVICE_ID VARCHAR2(10),
    COD_USER_ID VARCHAR2(30),
    COD_AUTH_ID VARCHAR2(30),
    COD_ACCT_NO CHAR(22),
    TYP_ACCT_NO VARCHAR2(4),
    COD_SUB_ACCT_NO CHAR(16),
    COD_DEP_NO NUMBER(5),
    AMOUNT NUMBER(15,2),
    COD_CCY VARCHAR2(3),
    DAT_POST DATE,
    DAT_VALUE DATE,
    TXT_TXN_NARRATIVE VARCHAR2(60),
    DATE_CHEQUE_ISSUE DATE,
    TXN_BUSINESS_TYPE VARCHAR2(10),
    CARD_NO CHAR(20),
    INVENTORY_CODE CHAR(10),
    INVENTORY_NO CHAR(20),
    CARD_PASSBOOK_NO CHAR(30),
    COD_CASH_ANALYSIS CHAR(20),
    BANK_INFORMATION_NO CHAR(8),
    BATCH_NO CHAR(10),
    SUMMARY VARCHAR2(60),
    MAIN_IC_TYPE CHAR(1),
    MAIN_IC_NO CHAR(48),
    MAIN_IC_NAME CHAR(64),
    MAIN_IC_CHECK_RETURN_CODE CHAR(1),
    DEPUTY_IC_TYPE CHAR(1),
    DEPUTY_IC_NO CHAR(48),
    DEPUTY_NAME CHAR(64),
    DEPUTY_IC_CHECK_RETURN_CODE CHAR(1),
    ACCOUNT_PROPERTY CHAR(4),
    CHEQUE_NO CHAR(20),
    COD_EXT_TASK CHAR(10),
    COD_MODULE CHAR(4),
    ACC_PURPOSE_CODE VARCHAR2(15),
    NATIONALITY CHAR(3),
    CUSTOMER_NAME CHAR(192),
    COD_INCOME_EXPENSE CHAR(6),
    COD_EXT_BRANCH CHAR(6),
    COD_ACCT_TITLE CHAR(192),
    FLG_CA_TT CHAR(1),
    DAT_EXT_LOCAL DATE,
    ACCT_OWNER_VALID_RESULT CHAR(1),
    FLG_DR_CR CHAR(1),
    FLG_ONLINE_UPLOAD CHAR(1),
    FLG_STMT_DISPLAY CHAR(1),
    COD_TXN_TYPE NUMBER(1),
    DAT_TS_TXN TIMESTAMP(6),
    LC_BG_GUARANTEE_NO VARCHAR2(20),
    COD_OTHER_ACCT_NO CHAR(22),
    COD_MOD_OTHER_ACCT_NO CHAR(4),
    COD_CC_BRN_SUB_ACCT NUMBER(5),
    COD_CC_BRN_OTHR_ACCT NUMBER(5),
    COD_ENTITY_VPD NUMBER(5) default NVL(sys_context('CLIENTCONTEXT','entity_code'),11),
    COD_EXT_TASK_REV VARCHAR2(10)
    )
    partition by hash (REF_TXN_NO)
    PARTITIONS 128
    store in (FCHDATA1,FCHDATA2,FCHDATA3,FCHDATA4, FCHDATA5, FCHDATA6, FCHDATA7, FCHDATA8);
    insert /*+APPEND NOLOGGING */ into XFACE_ADDL_DETAILS_TXNLOG
    select /*+PARALLEL */ * from XFACE_ADDLDTS_TXNLOG_PTPART;
    -- Add comments to the table
    comment on table XFACE_ADDL_DETAILS_TXNLOG
    is ' Additional Data log table ';
    -- Add comments to the columns
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_TXN_NO
    is 'Transaction Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_USR_NO
    is 'User Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_KEY_NO
    is 'Unique key to identify a leg of the transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_TXN_NO_ORG
    is 'Original Transaction Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.REF_USR_NO_ORG
    is 'Original Transaction User Reference Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.RECON_CODE
    is 'Reconciliation of transactions in future';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_TASK_DERIVED
    is 'Transaction mnemonic for the request';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CHNL_ID
    is 'Channel ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_SERVICE_ID
    is 'Service ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_USER_ID
    is 'User ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_AUTH_ID
    is 'Authorizer ID';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_ACCT_NO
    is 'It can be Card number or MCA or GL or CASH GL';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.TYP_ACCT_NO
    is 'Type of input (Valid values CARD, MCA, GL, CASH, LN)';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_SUB_ACCT_NO
    is 'MC Sub Account Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_DEP_NO
    is 'Deposit Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.AMOUNT
    is 'Transaction Amount';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CCY
    is 'Currency Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_POST
    is 'Posting Date of the transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_VALUE
    is 'Value Date of the transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.TXT_TXN_NARRATIVE
    is 'Text Transaction Narrative';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DATE_CHEQUE_ISSUE
    is 'Date of Issue of Cheque';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.TXN_BUSINESS_TYPE
    is 'Transaction Business Type';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CARD_NO
    is 'Card Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.INVENTORY_CODE
    is 'Inventory Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.INVENTORY_NO
    is 'Inventory Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CARD_PASSBOOK_NO
    is 'Card Passbook Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CASH_ANALYSIS
    is 'Cash Analysis Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.BANK_INFORMATION_NO
    is 'Bank Information Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.BATCH_NO
    is 'Batch Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.SUMMARY
    is 'Summary';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_TYPE
    is 'IC Type';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_NO
    is 'IC Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_NAME
    is 'IC Name';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.MAIN_IC_CHECK_RETURN_CODE
    is 'IC Check Return Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_TYPE
    is 'Deputy IC Type';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_NO
    is 'Deputy IC Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_NAME
    is 'Deputy Name';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DEPUTY_IC_CHECK_RETURN_CODE
    is 'Deputy IC Check Return Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.ACCOUNT_PROPERTY
    is 'Account Property';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CHEQUE_NO
    is 'Cheque Number';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_EXT_TASK
    is 'External Task Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_MODULE
    is 'Module Code - CH, TD, RD , LN, CASH, GL';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.ACC_PURPOSE_CODE
    is 'Account Purpose Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.NATIONALITY
    is 'Nationality';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.CUSTOMER_NAME
    is 'Customer Name';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_INCOME_EXPENSE
    is 'Income Expense Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_EXT_BRANCH
    is 'External Branch Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_ACCT_TITLE
    is 'Account Title Code';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_CA_TT
    is 'Cash or Funds Transfer flag';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_EXT_LOCAL
    is 'Local Date';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.ACCT_OWNER_VALID_RESULT
    is 'Account Owner Valid Result';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_DR_CR
    is 'Flag Debit Credit - D, C.';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_ONLINE_UPLOAD
    is 'Flag Online Upload - O, U.';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.FLG_STMT_DISPLAY
    is 'Statement Display Flag - Y/N, Y(Normal Reversal), N(Correction Reversal)';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_TXN_TYPE
    is 'To denote the kind of transaction:
    1 - Cash Credit Transaction
    2 - Cash Debit Transaction
    3 - Funds Transfer Credit Transaction
    4 - Funds Transfer Debit Transaction';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.DAT_TS_TXN
    is 'Date and Timestamp of the record being inserted';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.LC_BG_GUARANTEE_NO
    is 'LC/BG Guarantee Number for which the request for the Liquidation has been initiated.';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_OTHER_ACCT_NO
    is 'Other Account No';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_MOD_OTHER_ACCT_NO
    is 'Module Code of Other Account No - CH, TD, RD , LN, CASH, GL';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CC_BRN_SUB_ACCT
    is 'Branch Code for Sub Account';
    comment on column XFACE_ADDL_DETAILS_TXNLOG.COD_CC_BRN_OTHR_ACCT
    is 'Branch Code for Other Account';
    -- Create/Recreate indexes
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_1;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_2;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_3;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_4;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_5;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_6;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_7;
    drop index IN_XFACE_ADDL_DETAILS_TXNLOG_8;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_1 on XFACE_ADDL_DETAILS_TXNLOG (REF_TXN_NO, REF_KEY_NO, COD_SUB_ACCT_NO, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH (REF_TXN_NO, REF_KEY_NO, COD_SUB_ACCT_NO) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_2 on XFACE_ADDL_DETAILS_TXNLOG (REF_USR_NO, REF_KEY_NO, COD_SUB_ACCT_NO, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(REF_USR_NO, REF_KEY_NO, COD_SUB_ACCT_NO) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_3 on XFACE_ADDL_DETAILS_TXNLOG (COD_SUB_ACCT_NO, FLG_STMT_DISPLAY, DAT_POST, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(COD_SUB_ACCT_NO, FLG_STMT_DISPLAY) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_4 on
    XFACE_ADDL_DETAILS_TXNLOG (COD_ACCT_NO, REF_TXN_NO, COD_TXN_TYPE, COD_USER_ID, COD_EXT_BRANCH, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(COD_ACCT_NO, REF_TXN_NO, COD_TXN_TYPE, COD_USER_ID, COD_EXT_BRANCH)
    PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_5 on XFACE_ADDL_DETAILS_TXNLOG (COD_USER_ID, DAT_POST, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(COD_USER_ID) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_6 on XFACE_ADDL_DETAILS_TXNLOG (REF_TXN_NO_ORG, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(REF_TXN_NO_ORG) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_7 on XFACE_ADDL_DETAILS_TXNLOG (DAT_EXT_LOCAL, DAT_POST,TXN_BUSINESS_TYPE, FLG_ONLINE_UPLOAD, COD_CHNL_ID, REF_TXN_NO, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(DAT_EXT_LOCAL) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    /* Previous Key order: (COD_EXT_BRANCH,DAT_POST,REF_TXN_NO_ORG,COD_SERVICE_ID,COD_ENTITY_VPD) */
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_8 on XFACE_ADDL_DETAILS_TXNLOG (DAT_POST, COD_EXT_BRANCH, REF_TXN_NO_ORG, COD_SERVICE_ID, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH(DAT_POST) PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4) PARALLEL (DEGREE 32) NOLOGGING;
    ALTER TABLE XFACE_ADDL_DETAILS_TXNLOG NOPARALLEL PCTFREE 50 INITRANS 128 LOGGING;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_1 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_2 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_3 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_4 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_5 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_6 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_7 NOPARALLEL INITRANS 128;
    ALTER index IN_XFACE_ADDL_DETAILS_TXNLOG_8 NOPARALLEL INITRANS 128;
    BEGIN
    DBMS_RLS.ADD_POLICY(OBJECT_SCHEMA => UPPER('FCR44HOST'),
    OBJECT_NAME => UPPER('XFACE_ADDL_DETAILS_TXNLOG '),
    POLICY_NAME => 'FC_ENTITY_POLICY',
    FUNCTION_SCHEMA => UPPER('FCR44HOST'),
    POLICY_FUNCTION => 'pk_cm_vpd_policy.get_entity_predicate',
    STATEMENT_TYPES => 'select,insert,update,delete',
    UPDATE_CHECK => TRUE,
    ENABLE => TRUE,
    STATIC_POLICY => FALSE,
    POLICY_TYPE => DBMS_RLS.SHARED_STATIC,
    LONG_PREDICATE => FALSE,
    SEC_RELEVANT_COLS => NULL,
    SEC_RELEVANT_COLS_OPT => NULL);
    END;
    begin
    dbms_stats.gather_table_stats(ownname => 'FCR44HOST',tabname => 'XFACE_ADDL_DETAILS_TXNLOG', cascade=>true,method_opt=>'for all columns size 1',degree => 32, GRANULARITY => 'PARTITION');
    end;
    Query which takes time.
    INSERT INTO xface_addl_dtls_tlog_temp
    (ref_txn_no,
    ref_usr_no,
    ref_key_no,
    ref_txn_no_org,
    ref_usr_no_org,
    recon_code,
    cod_task_derived,
    cod_chnl_id,
    cod_service_id,
    cod_user_id,
    cod_auth_id,
    cod_acct_no,
    typ_acct_no,
    cod_sub_acct_no,
    cod_dep_no,
    amount,
    cod_ccy,
    dat_post,
    dat_value,
    txt_txn_narrative,
    date_cheque_issue,
    txn_business_type,
    card_no,
    inventory_code,
    inventory_no,
    card_passbook_no,
    cod_cash_analysis,
    bank_information_no,
    batch_no,
    summary,
    main_ic_type,
    main_ic_no,
    main_ic_name,
    main_ic_check_return_code,
    deputy_ic_type,
    deputy_ic_no,
    deputy_name,
    deputy_ic_check_return_code,
    account_property,
    cheque_no,
    cod_ext_task,
    cod_module,
    acc_purpose_code,
    nationality,
    customer_name,
    cod_income_expense,
    cod_ext_branch,
    cod_acct_title,
    flg_ca_tt,
    dat_ext_local,
    acct_owner_valid_result,
    flg_dr_cr,
    flg_online_upload,
    flg_stmt_display,
    cod_txn_type,
    dat_ts_txn,
    lc_bg_guarantee_no,
    cod_other_acct_no,
    cod_mod_other_acct_no,
    cod_cc_brn_sub_acct,
    cod_cc_brn_othr_acct,
    cod_ext_task_rev,
    sessionid)
    SELECT ref_txn_no,
    ref_usr_no,
    ref_key_no,
    ref_txn_no_org,
    ref_usr_no_org,
    recon_code,
    cod_task_derived,
    cod_chnl_id,
    cod_service_id,
    cod_user_id,
    cod_auth_id,
    cod_acct_no,
    typ_acct_no,
    cod_sub_acct_no,
    cod_dep_no,
    amount,
    cod_ccy,
    dat_post,
    dat_value,
    txt_txn_narrative,
    date_cheque_issue,
    txn_business_type,
    card_no,
    inventory_code,
    inventory_no,
    card_passbook_no,
    cod_cash_analysis,
    bank_information_no,
    batch_no,
    summary,
    main_ic_type,
    main_ic_no,
    main_ic_name,
    main_ic_check_return_code,
    deputy_ic_type,
    deputy_ic_no,
    deputy_name,
    deputy_ic_check_return_code,
    account_property,
    cheque_no,
    cod_ext_task,
    cod_module,
    acc_purpose_code,
    nationality,
    customer_name,
    cod_income_expense,
    cod_ext_branch,
    cod_acct_title,
    flg_ca_tt,
    dat_ext_local,
    acct_owner_valid_result,
    flg_dr_cr,
    flg_online_upload,
    flg_stmt_display,
    cod_txn_type,
    dat_ts_txn,
    lc_bg_guarantee_no,
    cod_other_acct_no,
    cod_mod_other_acct_no,
    cod_cc_brn_sub_acct,
    cod_cc_brn_othr_acct,
    cod_ext_task_rev,
    var_l_sessionid
    FROM xface_addl_details_txnlog
    WHERE cod_sub_acct_no = var_pi_cod_acct_no
    AND dat_post between var_pi_start_dat AND var_pi_end_dat;
    Index referred is in_xface_addl_details_txnlog_3.
    The first time I execute the query it takes a huge amount of time, but subsequent executions are faster. This is only if I pass the same account and criteria again.
    I observed that the first run does physical reads, which take time, and on subsequent runs the physical reads are fewer.
    Requesting suggestions: this is an account statement inquiry, and a user may have 10,000 transactions in a day as well.
    By mistake I earlier raised this in "Oracle -> Text" as
    "Slow inserts due to physical reads every time for fresh account i am passin"
    They suggested using bind variables. But as far as I know, we are already using bind variables to bind the account number and the start and end dates.

    My Replies below.
    Whenever you post provide your 4 digit Oracle version (SELECT * FROM V$VERSION).
    Ans :
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    "CORE     11.2.0.3.0     Production"
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    1. If your question is about the INSERT query into xface_addl_dtls_tlog_temp why didn't you post any information about the DDL for that table? Is it the same structure as the table you did post DDL for?
    Ans :
    -- Create table
    create global temporary table XFACE_ADDL_DTLS_TLOG_TEMP (
    REF_TXN_NO CHAR(40) not null,
    REF_USR_NO CHAR(40) not null,
    REF_KEY_NO VARCHAR2(50),
    REF_TXN_NO_ORG CHAR(40),
    REF_USR_NO_ORG CHAR(40),
    RECON_CODE VARCHAR2(25),
    COD_TASK_DERIVED VARCHAR2(5),
    COD_CHNL_ID VARCHAR2(6),
    COD_SERVICE_ID VARCHAR2(10),
    COD_USER_ID VARCHAR2(30),
    COD_AUTH_ID VARCHAR2(30),
    COD_ACCT_NO CHAR(22),
    TYP_ACCT_NO VARCHAR2(4),
    COD_SUB_ACCT_NO CHAR(16),
    COD_DEP_NO NUMBER(5),
    AMOUNT NUMBER(15,2),
    COD_CCY VARCHAR2(3),
    DAT_POST DATE,
    DAT_VALUE DATE,
    TXT_TXN_NARRATIVE VARCHAR2(60),
    DATE_CHEQUE_ISSUE DATE,
    TXN_BUSINESS_TYPE VARCHAR2(10),
    CARD_NO CHAR(20),
    INVENTORY_CODE CHAR(10),
    INVENTORY_NO CHAR(20),
    CARD_PASSBOOK_NO CHAR(30),
    COD_CASH_ANALYSIS CHAR(20),
    BANK_INFORMATION_NO CHAR(8),
    BATCH_NO CHAR(10),
    SUMMARY VARCHAR2(60),
    MAIN_IC_TYPE CHAR(1),
    MAIN_IC_NO VARCHAR2(150),
    MAIN_IC_NAME VARCHAR2(192),
    MAIN_IC_CHECK_RETURN_CODE CHAR(1),
    DEPUTY_IC_TYPE CHAR(1),
    DEPUTY_IC_NO VARCHAR2(150),
    DEPUTY_NAME VARCHAR2(192),
    DEPUTY_IC_CHECK_RETURN_CODE CHAR(1),
    ACCOUNT_PROPERTY CHAR(4),
    CHEQUE_NO CHAR(20),
    COD_EXT_TASK CHAR(10),
    COD_MODULE CHAR(4),
    ACC_PURPOSE_CODE VARCHAR2(15),
    NATIONALITY CHAR(3),
    CUSTOMER_NAME CHAR(192),
    COD_INCOME_EXPENSE CHAR(6),
    COD_EXT_BRANCH CHAR(6),
    COD_ACCT_TITLE VARCHAR2(360),
    FLG_CA_TT CHAR(1),
    DAT_EXT_LOCAL DATE,
    ACCT_OWNER_VALID_RESULT CHAR(1),
    FLG_DR_CR CHAR(1),
    FLG_ONLINE_UPLOAD CHAR(1),
    FLG_STMT_DISPLAY CHAR(1),
    COD_TXN_TYPE NUMBER(1),
    DAT_TS_TXN TIMESTAMP(6),
    LC_BG_GUARANTEE_NO VARCHAR2(20),
    COD_OTHER_ACCT_NO CHAR(22),
    COD_MOD_OTHER_ACCT_NO CHAR(4),
    COD_CC_BRN_SUB_ACCT NUMBER(5),
    COD_CC_BRN_OTHR_ACCT NUMBER(5),
    COD_EXT_TASK_REV VARCHAR2(10),
    SESSIONID NUMBER default USERENV('SESSIONID') not null
    )
    on commit delete rows;
    -- Create/Recreate indexes
    create index IN_XFACE_ADDL_DTLS_TLOG_TEMP on XFACE_ADDL_DTLS_TLOG_TEMP (COD_SUB_ACCT_NO, REF_TXN_NO, COD_SERVICE_ID, REF_KEY_NO, SESSIONID);
    2. Why doesn't your INSERT query use APPEND, NOLOGGING and PARALLEL like the first query you posted? If those help for the first query why didn't you try them for the query you are now having problems with?
    Ans :
    I will try to use APPEND, but I cannot use PARALLEL since I have hardware limitations.
    3. What does this mean: 'Index referred is in_xface_addl_details_txnlog_3.'? You haven't posted any plan that refers to any index. Do you have an execution plan? Why didn't you post it?
    Ans :
    Plan hash value: 4081844790
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | INSERT STATEMENT | | | | 5 (100)| | | |
    | 1 | LOAD TABLE CONVENTIONAL | | | | | | | |
    | 2 | FILTER | | | | | | | |
    | 3 | PARTITION HASH ALL | | 1 | 494 | 5 (0)| 00:00:01 | 1 | 128 |
    | 4 | TABLE ACCESS BY GLOBAL INDEX ROWID| XFACE_ADDL_DETAILS_TXNLOG | 1 | 494 | 5 (0)| 00:00:01 | ROWID | ROWID |
    | 5 | INDEX RANGE SCAN | IN_XFACE_ADDL_DETAILS_TXNLOG_3 | 1 | | 3 (0)| 00:00:01 | 1 | 128 |
    4. Why are you defining 37 columns as CHAR datatypes? Are you aware that CHAR data REQUIRES the use of the designated number of BYTES/CHARACTERS?
    Ans :
    I understand and appreciate your points, but since it is a huge application built over a period of time, I am afraid I will not be allowed to change the datatypes; there are a lot of queries over this table.
    5. Are you aware that #4 means those 37 columns, even if all of them are NULL, give you a MINIMUM record length of 1012 bytes? Care to guess how many of those records Oracle can fit into an 8k block? And that is if you ignore the other 26 VARCHAR2, NUMBER and DATE columns.
    Two of your columns take 192 bytes MINIMUM even if they are null
    CUSTOMER_NAME CHAR(192),
    COD_ACCT_TITLE CHAR(192)
    Why are you wasting all of that space? If you are using a multi-byte character set and your data is multi-byte those 37 columns are using even more space because some characters will use more than one byte.
    If the name and title average 30 characters/bytes then those two columns alone use 300+ unused bytes. With 40 million records those unused bytes, just for those two columns take 12 GB of space.
    With a block size of 8k that would totally waste 1.5 million blocks that Oracle has to read just to ignore the empty space that isn't being used.
    I highly suspect that your use of CHAR is a large part of this performance problem and probably other performance problems in your system. Not only for this table but for any other table that uses similar CHAR datatypes and wastes space.
    Please reconsider your use of CHAR datatypes like this. I can't imagine what justification you have for using them.
    Ans :
    I understand your points, but since it is a huge application built over a period of time, I am afraid I will not be allowed to change the datatypes.
    I have to manage in the current situation. I am not expecting the query to respond in milliseconds, but not the 40 seconds it currently takes either.
    Edited by: Rohit Jadhav on Dec 30, 2012 6:44 PM
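    One thing the plan above shows (Pstart = 1, Pstop = 128) is that every one of the 128 partitions of IN_XFACE_ADDL_DETAILS_TXNLOG_3 is probed: the index is hash partitioned on (COD_SUB_ACCT_NO, FLG_STMT_DISPLAY), and hash partition pruning needs equality on all partitioning key columns, while the statement query only supplies COD_SUB_ACCT_NO. A hedged sketch of an alternative definition (same columns, reordered so the DAT_POST range sits right after the equality column, and partitioned on the account column alone so a single partition is probed); it would have to be tested against every other statement that currently relies on index _3:
    -- recreate after dropping or renaming the existing index _3
    create index IN_XFACE_ADDL_DETAILS_TXNLOG_3 on XFACE_ADDL_DETAILS_TXNLOG
    (COD_SUB_ACCT_NO, DAT_POST, FLG_STMT_DISPLAY, COD_ENTITY_VPD)
    GLOBAL PARTITION BY HASH (COD_SUB_ACCT_NO)
    PARTITIONS 128 STORE IN (FCHINDX1, FCHINDX2, FCHINDX3, FCHINDX4);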

  • Spatial query runs slow on view

    Hello,
    I have two tables and one of them has geometry column. I created view to join those two tables based on id column which has been indexed for both tables.
    t1(
    id number(9),
    name varchar2(20)
    )
    t2(
    id number(9),
    geom MDSYS.SDO_GEOMETRY
    )
    CREATE VIEW v1 (
    id,
    name,
    geom
    ) AS
    SELECT /*+ FIRST_ROWS */ t1.id, t1.name, t2.geom
    FROM t1,t2
    WHERE t1.id = t2.id
    When I query the view with the following statement it runs very slow (there are more than 1 million rows in the t2 table)
    SELECT * FROM v1
    WHERE mdsys.sdo_filter(geom, [a rectangle],'querytype=window') = 'TRUE';
    but
    SELECT /*+ FIRST_ROWS */ t1.id, t1.name,t2.geom
    FROM t1,t2
    WHERE t1.id=t2.id
    and mdsys.sdo_filter(geom, [a rectangle],'querytype=window') = 'TRUE';
    returns almost instantly. Can someone tell me what is wrong with the "create view" statement?
    Thanks

    Thank you for your reply. Here are the plans. The view looks for the spatial index first.
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 21 | 756 | 10 (60)|
    | 1 | NESTED LOOPS | | 21 | 756 | 10 (60)|
    | 2 | TABLE ACCESS BY INDEX ROWID| T2 | 5269 | 123K| 3 (0)|
    | 3 | DOMAIN INDEX | T2_SDX | | | |
    | 4 | TABLE ACCESS BY INDEX ROWID| T1 | | | |
    | 5 | INDEX RANGE SCAN | T1_ID_IDX | 1 | | 0 (0)|
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 21 | 756 | 99 (3)|
    | 1 | TABLE ACCESS BY INDEX ROWID | T2 | 1 | 24 | 99 (3)|
    | 2 | NESTED LOOPS | | 21 | 756 | 99 (3)|
    | 3 | TABLE ACCESS FULL | T1 | 21 | 252 | 2 (0)|
    | 4 | BITMAP CONVERSION TO ROWIDS | | | | |
    | 5 | BITMAP AND | | | | |
    | 6 | BITMAP CONVERSION FROM ROWIDS| | | | |
    | 7 | INDEX RANGE SCAN | T2_ID_IDX | 1 | | 2 (0)|
    | 8 | BITMAP CONVERSION FROM ROWIDS| | | | |
    | 9 | SORT ORDER BY | | | | |
    | 10 | DOMAIN INDEX | T2_SDX | 1 | | |
    -----------------------------------------------------------------------------------------------------

  • SPATIAL QUERY VERY SLOW

    I can execute this query, but it is very slow. I have 2 tables: A with 250,000 sites and B with 250,000 points. I want to determine how many risks are inside the sites.
    THANKS
    JGS
    SELECT B.ID, A.ID, A.GC, A.SUMA
    FROM DBG_RIESGOS_CUMULOS_SITE A, DBG_RIESGOS_CUMULOS B
    WHERE A.GC = 'PATRIMONIAL FENOMENOS SISMICOS' AND A.GC=B.GC
    AND SDO_RELATE(B.GEOMETRY, A.GEOMETRY, 'MASK=INSIDE') = 'TRUE';
    100 records in 220 seconds, which is very slow.

    I would do two things:
    1) Ensure Oracle is patched with the latest 10.2.0.4 patches
    This is the list I've been working with:
    Patch 7003151
    Patch 6989483
    Patch 7237687
    Patch 7276032
    Patch 7307918
    2) Write the query like this
    SELECT /*+ ORDERED*/ B.ID, A.ID, A.GC, A.SUMA
    FROM DBG_RIESGOS_CUMULOS B, DBG_RIESGOS_CUMULOS_SITE A
    WHERE B.GC = 'PATRIMONIAL FENOMENOS SISMICOS'
    AND A.GC=B.GC
    AND SDO_ANYINTERACT(A.GEOMETRY, B.GEOMETRY) = 'TRUE';
