A performance problem with a query on BSEG - help!

We have a new report that uses the following SQL, which takes a long time to run:
SELECT bukrs gjahr bzdat anln1 bschl dmbtr
    INTO CORRESPONDING FIELDS OF TABLE bsegtab
    FROM bseg
   WHERE gjahr = gjahr
     AND bschl IN ('70' , '75')
     AND anln1 IN anln1
     AND meins = ''
     AND anln1 LIKE 'G%'
     AND hkont LIKE '1502%'
     AND bzdat >= dat
     AND bzdat <= dat1.
In SE11 I can see that the primary key of BSEG is MANDT, BUKRS, BELNR, GJAHR, BUZEI, and that it is a cluster table.
Can I create an index on this cluster table?
Could you give me some advice? Thanks.
The environment is ECC 6.0 and Oracle 10g.
Edited by: victor on Sep 17, 2010 2:25 AM

Hello,
The link below should be helpful:
http://wiki.sdn.sap.com/wiki/pages/viewpage.action?pageId=180356001
Regards,
Yoganand.V
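
For reference, a minimal sketch of the usual access pattern for a cluster table like BSEG (the same advice as in the "Performance problem with table BSEG" thread below). A secondary index cannot be created on a cluster table, because the line items are stored packed in the physical cluster, so the usual approach is to go through the transparent document header table BKPF and hit BSEG with as much of its primary key MANDT, BUKRS, BELNR, GJAHR, BUZEI as possible. The company code parameter p_bukrs is an assumption here, since the original query has no BUKRS restriction; gjahr, anln1, dat and dat1 are the selection-screen objects from the question.

DATA: BEGIN OF ls_key,
        bukrs TYPE bkpf-bukrs,
        belnr TYPE bkpf-belnr,
        gjahr TYPE bkpf-gjahr,
      END OF ls_key,
      lt_key LIKE TABLE OF ls_key.

" Read the transparent header table BKPF first ...
SELECT bukrs belnr gjahr
  INTO TABLE lt_key
  FROM bkpf
  WHERE bukrs = p_bukrs      " assumed restriction - without one, the whole year is read
    AND gjahr = gjahr.

" ... then access BSEG with its key prefix BUKRS/BELNR/GJAHR.
IF lt_key IS NOT INITIAL.    " an empty FOR ALL ENTRIES table would select ALL rows
  SELECT bukrs gjahr bzdat anln1 bschl dmbtr
    INTO CORRESPONDING FIELDS OF TABLE bsegtab
    FROM bseg
    FOR ALL ENTRIES IN lt_key
    WHERE bukrs = lt_key-bukrs
      AND belnr = lt_key-belnr
      AND gjahr = lt_key-gjahr
      AND bschl IN ('70', '75')   " the non-key conditions are only filtered
      AND anln1 IN anln1          " after the cluster records are unpacked
      AND meins = ''
      AND anln1 LIKE 'G%'
      AND hkont LIKE '1502%'
      AND bzdat >= dat
      AND bzdat <= dat1.
ENDIF.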

Similar Messages

  • Problems with a query filter (BEx)

    Hello there,
    I have a problem with a query filter (BEx).
    I have master data, and I want to use an attribute of this master data as a dynamic filter in a query (BEx).
    But this attribute is a key figure, and I haven't been able to make it work.
    Is this possible? If yes, could anyone advise how this is to be done?
    Thanks in advance.

    Hi,
    In BEx there are options called 'Exceptions' and 'Conditions'. In a condition you can specify a range for the key figure output, either predefined or by user selection. Create the condition as per your requirement and it will show you the filtered data.
    How to create conditions is easy to find on this site.
    You can create as many conditions as the number of filters you want, at query level and also at report output level.

  • Performance problem with a query on the BKPF table

    Hi, good morning all,
    I have a performance problem with the query below on the BKPF table.
    SELECT bukrs
           belnr
           gjahr
      FROM bkpf
      INTO TABLE ist_bkpf_temp
      WHERE budat IN s_budat.
    Is there any possibility to improve the performance by using an index?
    Please help me.
    Thanks in advance,
    Regards,
    Srinivas

    Hi,
    If you can add BUKRS as an input field, or if you have BUKRS in some other internal table that you can use to filter the data, try one of the following.
    For example:
    SELECT bukrs
           belnr
           gjahr
      FROM bkpf
      INTO TABLE ist_bkpf_temp
      WHERE budat IN s_budat
        AND bukrs IN s_bukrs.
    or
    SELECT bukrs
           belnr
           gjahr
      FROM bkpf
      INTO TABLE ist_bkpf_temp
      FOR ALL ENTRIES IN itab
      WHERE budat IN s_budat
        AND bukrs = itab-bukrs.
    See whether either of the above is possible; it has to be verified against your requirement. A guard for the FOR ALL ENTRIES variant is sketched below.
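
    For completeness, a self-contained sketch of the second option with the usual empty-table guard: if the driver table is empty, FOR ALL ENTRIES ignores the whole WHERE clause and selects every row in BKPF (ty_bkpf_key is a hypothetical type; itab and s_budat are from the reply above):

    TYPES: BEGIN OF ty_bkpf_key,
             bukrs TYPE bkpf-bukrs,
             belnr TYPE bkpf-belnr,
             gjahr TYPE bkpf-gjahr,
           END OF ty_bkpf_key.
    DATA ist_bkpf_temp TYPE STANDARD TABLE OF ty_bkpf_key.

    IF itab IS NOT INITIAL.  " guard: an empty FOR ALL ENTRIES table selects ALL rows
      SELECT bukrs belnr gjahr
        FROM bkpf
        INTO TABLE ist_bkpf_temp
        FOR ALL ENTRIES IN itab
        WHERE budat IN s_budat
          AND bukrs = itab-bukrs.
    ENDIF.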

  • Performance problem with attribute dimensions - please help!

    Hi everybody,
    I run OBIEE reports on top of an Essbase 9.3 ASO cube.
    My cube has 8 regular dimensions and 10 attribute dimensions.
    I am running a query based on:
    - 3 dimensions (TIME, BATCH, MODEL),
    - 2 attributes of BATCH ( SHIFT, SHIFT_STATUS )
    - and one measure.
    The query runs 3 minutes (!) and returns 190 rows.
    If I remove the attributes, the query runs 4 seconds for the same number of result rows.
    My MDX:
    With
    set [TIME6] as 'Filter([TIME].Generations(6).members,
    ([TIME].currentmember.MEMBER_ALIAS = "2009/01" OR [TIME].currentmember.MEMBER_Name = "2009/01"))'
    set [TIME7] as 'Generate([TIME6], Descendants([TIME].currentmember, [TIME].Generations(7), leaves))'
    set [BATCHES] as '[DIM_BATCH_HEADERS].Generations(5).members'
    set [SHIFT2] as '[SHIFT].Generations(2).members'
    set [SHIFT_STATUS2] as '[SHIFT_STATUS].Generations(2).members'
    set [MODEL2] as 'Filter([MODEL].Generations(2).members, ([MODEL].currentmember.MEMBER_ALIAS = "SNF_PCK" OR [MODEL].currentmember.MEMBER_Name = "SNF_PCK"))'
    select
    {[Accounts].[PROD_ACTUAL_QTY_KG]
    } on columns,
    NON EMPTY
    {crossjoin ({[TIME7]},
    crossjoin ({[BATCHES]},
    crossjoin ({[SHIFT2]},
    crossjoin ({[SHIFT_STATUS2]},{[MODEL2]}))))}
    properties ANCESTOR_NAMES, GEN_NUMBER on rows
    from [COP_MAAD.Cop_Maad]
    Thank you in advance,
    Alex
    Edited by: AM_1 on Feb 26, 2009 6:35 AM
    Edited by: AM_1 on Feb 26, 2009 7:03 AM

    Hi Glenn,
    I thought that the reason might be the crossjoin with a very large set. The main dimension in this query has ~40000 members.
    Is there in Essbase MDX some analog of the NONEMPTYCROSSJOIN() function that exists in MSAS?
    As these two attributes belong to the dimension that is used in the query, for every member of the dimension there is one and only one member of each attribute. Therefore I thought that Essbase should "know" how to behave.
    Thanks for the idea. I will try to increase the space for aggregation - hope this will help...
    Best Regards,
    Alex

  • Problem with MuVo V200, please help me

    This is not the first time my MP3 player has had problems, and I try to fix them myself. I have already downloaded the files:
    MStorage_PCDRV_LB__07_00_250
    MuVoV200_PCFW_LF__05_02
    After I pressed the play button and waited for 0 sec, I still cannot reach recovery mode and my computer cannot detect any new drive. When I double-click MuVoV200_PCFW_LF__05_02, it cannot find my MP3 player, so I cannot repair it.
    Please help me!

    You have to install the Mass Storage drivers first.
    Then press and hold the Play button while connecting the player to the PC.
    Wait until the New Hardware Wizard starts (or your fingers go numb if it doesn't start!).
    Follow the instructions.
    When complete, run the firmware upgrade file.
    What were the original problems that caused you to try a firmware reload?
    PB

  • Problem with a maintenance order, please help me to resolve it. Thanks

    Hi all,
    I have created a maintenance order from a maintenance notification, and one of the operation sequences is an external service. I selected an external service which was configured before. When I saved it, it said: account 40060 has not been created yet (I did not enter any cost element).
    Regards

    Hello Subhakanth,
    Message ID and content:
    ME042
    Diagnosis
    The G/L account is not defined in the system or for this company code. (I have defined the G/L account and assigned it to my company code already.)
    Procedure
    Make sure your entries are correct.
    Inform the individual or section responsible for G/L accounts in the relevant company code.
    Thanks
    Message was edited by:
            cyan wang

  • Performance problem with a query in 9i DB

    We upgraded our database from 8i to 9i recently. We have the same set of synonyms in both the 8i and 9i databases, created on data warehouse tables, and we have queries that return data by joining several of those synonyms. Each synonym returns the same amount of data in both databases. Our problem is that the queries take much longer in the 9i database than in the 8i database. Has anybody met the same problem? Any idea where the problem comes from? Please give me some help.
    Thanks a lot in Advance.

    How was the upgrade done? Was it export/import? If so, did you rebuild the indexes after the upgrade? Which version did you upgrade to, 9.2.0.3 or 9.2.0.5? 9.2.0.3 has a lot of optimizer problems, and the optimizer behaves very differently in 9i. Try to apply patch set 9.2.0.5.
    Set optimizer_features_enable = 8.1.7 to force the optimizer to behave like 8i.
    What about the SGA, pga_aggregate_target and the other important parameters that you set? Are they the same as in 8i or not?
    If none of those things works, then define the following parameters in your init.ora file:
    _complex_view_merging = FALSE
    _unnest_subquery = FALSE

  • Performance problem with table BSEG

    Hi gurus,
    This is the SELECT query which is taking a long time in production, so please help me to improve the performance with BSEG.
    This is my select query:
    select bukrs
           belnr
           gjahr
           bschl
           koart
           umskz
           shkzg
           dmbtr
           ktosl
           zuonr
           sgtxt
           kunnr
      from bseg
      into table gt_bseg1
      for all entries in gt_bkpf
      where bukrs eq p_bukrs
        and belnr eq gt_bkpf-belnr
        and gjahr eq p_gjahr
        and buzei in gr_buzei
        and bschl eq '40'
        and ktosl ne 'BSP'.
    Regards,
    GSANA

    Hi,
    This is what I know; if any expert thinks it is incorrect, please do correct me.
    BSEG is a cluster table with BUKRS, BELNR, GJAHR and BUZEI as the key; the remaining fields are stored in the database as raw data, so the SAP application layer has to unpack that raw data first if we use other fields in the WHERE condition. Hence, I suggest using only fields up to BUZEI in the WHERE condition and filtering the other conditions at internal table level, for example with a DELETE statement, as sketched below. Hope it helps.
    Regards,
    Abraham
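
    To make that concrete, a minimal sketch of the same SELECT with the non-key conditions moved to internal table level (names taken from the query above; the empty-table check is the usual FOR ALL ENTRIES precaution):

    IF gt_bkpf IS NOT INITIAL.  " an empty FOR ALL ENTRIES table would select ALL rows
      SELECT bukrs belnr gjahr bschl koart umskz shkzg dmbtr ktosl zuonr sgtxt kunnr
        FROM bseg
        INTO TABLE gt_bseg1
        FOR ALL ENTRIES IN gt_bkpf
        WHERE bukrs EQ p_bukrs
          AND belnr EQ gt_bkpf-belnr
          AND gjahr EQ p_gjahr
          AND buzei IN gr_buzei.

      " Apply the remaining conditions in memory instead of in the WHERE clause.
      DELETE gt_bseg1 WHERE bschl NE '40' OR ktosl EQ 'BSP'.
    ENDIF.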

  • Performance problem with query

    Hi,
    I have a query.
    SELECT
    CDW_DIM_DATUM_VW.DATUM
    ,sum(CDW_FT031_DISCO_VW.RATE_CHARGEABLE_DURATION/60)
    FROM
    CDW_FT031_DISCO_VW,
    CDW_DIM_DATUM_VW
    WHERE CDW_DIM_DATUM_VW.DATUM >= to_date('20110901','yyyymmdd')
    AND CDW_DIM_DATUM_VW.DATUM < to_date('20110911','yyyymmdd')
    AND CDW_DIM_DATUM_VW.DATUM_KEY=CDW_FT031_DISCO_VW.DATUM_KEY
    GROUP BY
    CDW_DIM_DATUM_VW.DATUM
    This query takes 2 hrs on production, when it should take about 2 minutes.
    Checking the explain plan, I understood that it is doing a full table scan on CDW_FT031, the table used in the view (CDW_FT031_DISCO_VW), which has got 30 lakh records.
    I analyzed the tables and indexes, rebuilt all indexes, and used hints as well, but even then it is doing a full table scan.
    There are indexes created on the columns referred to in the WHERE clause.
    Kindly help me.

    As the sources are views, your code is not even showing the full picture (besides not showing the tables, indexes etc.).
    > which has got 30 lakh records
    From Wikipedia: a lakh is a unit in the Indian numbering system equal to one hundred thousand. So if you want us to understand what your problem is, give us as much information as you have, in a language the world understands.
    I guess CDW_DIM_DATUM_VW is a "time dimension", so it has 1 row per date? Why is it a view?
    Then the only join is to CDW_FT031_DISCO_VW, where "FT" stands for "fact"? (Why is it a view?)
    What you want is for it to pick the index (which it has on DATUM_KEY??) instead of doing a full table scan?
    Is cdw_ft031 partitioned? (You often partition a fact table by date, and then you see "full table scan" where it stands for "full partition scan".)
    So your fact table has 3 million rows? And the query takes 2 hours? My laptop would do that in less time without any index... something is completely wrong.
    I discourage joining views, but based on what you gave us it is hard to give advice.

  • Performance Problem with Query load

    Hello,
    after the upgrade to SPS 23 we have problems with loading a query. Before the upgrade the query ran in 1-3 minutes; now it takes more than 40 minutes.
    Does anyone have an idea?
    Regards
    Marco

    Hi,
    I suggest executing the query in transaction RSRT, choosing the option 'Execute + Debugger', to analyze further where exactly the query is spending its time.
    Make sure to choose the appropriate 'Query Display' option (List / BEx Analyzer / HTML) before executing the query in debugger mode, since the display option also affects the query runtime.
    Hope this info helps!
    Bala Koppuravuri

  • Interesting performance problem: query runs fast in TOAD and Reports 6i

    Hi All,
    I have a query which runs within 4 mins in Toad and also in Reports 6i, but when run through Applications it takes 3 to 4 hrs to complete normally. This report fetches a huge amount of data; could that be the reason for the poor performance? I am unable to figure it out. I was able to avoid full table scans on the tables used, but I still have the problem.
    Any suggestions please.
    Thank you in advance.
    Prathima

    In case you want to have a look at the query: this report gives a way to monitor the receipts entered for pay-from-receipt POs.
    SELECT
    hou.name "OPERATING_UNIT_NAME"
    ,glc.segment1 "UEC"
    ,glc.segment2 "DEPT"
    ,pov.vendor_name "VENDOR_NAME"
    ,msi.SEGMENT1 "ITEM_NUM"
    ,rcvs.receipt_num "RECEIPT_NUM"
    ,poh.segment1 "PO_NUMBER"
    ,pol.line_num "PO_LINE_NUM"
    ,por.RELEASE_NUM "RELEASE_NUMBER"
    ,poll.shipment_num "SHIPMENT_NUM"
    ,hrou.name "SHIP_TO_ORGANIZATION"
    ,trunc(rcv.transaction_date) "TRANSACTION_DATE"
    ,decode (transaction_type,'RECEIVE', 'ERS', 'RETURN TO VENDOR','RTS') "RECEIPT_TYPE"
    ,decode (rcv.transaction_type,'RECEIVE', 1, 'RETURN TO VENDOR', -1)* rcv.quantity "RECEIPT_QTY"
    ,rcv.po_unit_price "PO_UNIT_PRICE"
    ,decode (rcv.transaction_type,'RECEIVE', 1, 'RETURN TO VENDOR', -1)*rcv.quantity*po_unit_price "RECEIPT_AMOUNT"
    ,rcvs.packing_slip "PACKING_SLIP"
    ,poll.quantity "QUANTITY_ORDERED"
    ,poll.quantity_received "QUANTITY_RECEIVED"
    ,poll.quantity_accepted "QUANTITY_ACCEPTED"
    ,poll.quantity_rejected "QUANTITY_REJECTED"
    ,poll.quantity_billed "QUANTITY_BILLED"
    ,poll.quantity_cancelled "QUANTITY_CANCELLED"
    ,(poll.quantity_received - (poll.quantity - poll.quantity_cancelled)) "QUANTITY_OVER_RECEIVED"
    ,(poll.quantity_received - (poll.quantity - poll.quantity_cancelled))*po_unit_price "OVER_RECEIVED_AMOUNT"
    ,poh.currency_code "CURRENCY_CODE"
    ,perr.full_name "RECEIVER"
    ,perb.full_name "BUYER"
    FROM
    po.po_vendors pov
    ,po.po_headers_all poh
    ,po.po_lines_all pol
    ,po.po_line_locations_all poll
    ,po.po_distributions_all pod
    ,po.po_releases_all por
    ,hr.hr_all_organization_units hou
    ,hr.hr_all_organization_units hrou
    ,po.rcv_transactions rcv
    ,po.rcv_shipment_headers rcvs
    ,gl.gl_code_combinations glc
    ,hr.per_all_people_f perr
    ,hr.per_all_people_f perb
    ,inv.mtl_system_items_b msi
    where
    poh.org_id = hou.organization_id
    and pov.vendor_id (+) = poh.vendor_id
    and pod.po_header_id = poh.po_header_id
    and pod.po_line_id = pol.po_line_id
    and pod.line_location_id = poll.line_location_id
    and poll.po_header_id = poh.po_header_id
    and poll.po_line_id = pol.po_line_id
    and pol.po_header_id = poh.po_header_id
    and poh.pay_on_code like 'RECEIPT'
    and pod.po_header_id = rcv.po_header_id
    and pod.po_line_id = rcv.po_line_id
    and pod.po_release_id = rcv.po_release_id
    and pod.po_release_id = por.po_release_id
    and por.po_header_id = poh.po_header_id
    and hrou.organization_id = poll.ship_to_organization_id
    and pod.line_location_id = rcv.po_line_location_id
    and pod.po_distribution_id = rcv.po_distribution_id
    and rcv.transaction_type in ('RECEIVE','RETURN TO VENDOR')
    and rcv.shipment_header_id = rcvs.shipment_header_id (+)
    and pod.code_combination_id = glc.code_combination_id
    and rcvs.employee_id = perr.person_id
    and por.agent_id = perb.person_id (+)
    and perr.person_type_id = 1
    and perb.person_type_id = 1
    and msi.organization_id = 1 --poll.ship_to_organization_id
    and msi.inventory_item_id = pol.item_id
    and poh.type_lookup_code = 'BLANKET'
    and hou.organization_id = nvl(:p_operating_unit,hou.organization_id)
    and trunc(rcv.transaction_date) between :p_transaction_date_from and :p_transaction_date_to
    Message was edited by:
    Prathima

  • Performance problem on query

    As part of a large batch process, we load data sent from a client into our systems. The data consists of information representing their organization structure (departments, users etc.). I need to identify all users that are currently in the existing system but are not part of the new file. The second step of the process just won't perform, so I'm looking for suggestions. Here is the pseudocode and then the SQL:
    1. Load all existing users for the client into a global temporary table.
    2. Delete all users from the temp table that are not in the file.
    3. Delete all top-level users from the temp table (they can't be deleted).
    4. Delete all users from the table that already have the deleted role.
    5. Any remaining rows are the users that need to be deleted.
    DDL to create the table:
    CREATE GLOBAL TEMPORARY TABLE DELETE_USERS_TEMP (
      EMPLOYEE_ID VARCHAR2 (100),
      USER_ID     VARCHAR2 (100)
    ) ON COMMIT DELETE ROWS;
    1. insert into DELETE_USERS_TEMP select employee_id, user_id from cadreadmin.orderentry_usr_data where user_id like ? and employee_id is not null;
    2. delete from DELETE_USERS_TEMP where employee_id in (select emplid from client_dataset_detail where cd_id = ?);
    3. delete from DELETE_USERS_TEMP where user_id in (select USER_ID from CADREADMIN.DPS_USER_ORG where USER_ID like ? and ORGANIZATION = ?);
    4. delete from DELETE_USERS_TEMP where user_id in (select user_id from cadreadmin.dps_user_roles where atg_role = ? and user_id in (select user_id from DELETE_USERS_TEMP));
    I have also tried the following for step 2:
    delete from DELETE_USERS_TEMP T
    where exists (select D.EMPLID from CLIENT_DATASET_DETAIL D
                  where D.CD_ID = 170 and D.EMPLID = T.EMPLOYEE_ID);
    Here is the explain plan for the original step 2 SQL; the one just above is about the same:
    UPDATE STATEMENT (ALL_ROWS)                                          Cost 7  Card 1      Bytes 76
      UPDATE FILEUPLOAD.DELETE_USERS_TEMP
        NESTED LOOPS (SEMI)                                              Cost 7  Card 1      Bytes 76
          TABLE ACCESS (FULL) FILEUPLOAD.DELETE_USERS_TEMP               Cost 2  Card 1      Bytes 65
          TABLE ACCESS (BY INDEX ROWID) FILEUPLOAD.CLIENT_DATASET_DETAIL Cost 5  Card 10770  Bytes 118470  filter: "D"."CD_ID"=170
            INDEX (RANGE SCAN) FILEUPLOAD.CLIENT_DATASET_DETAIL_INX2     Cost 2  Card 3      access: "D"."EMPLID"="T"."EMPLOYEE_ID"
    Message was edited by:
    user620465

    Please take a look at 'When your query takes too long ...'.
    Also, in step 1 you can avoid copying the top-level users and the ones that already have the deleted role. With a smaller table, the response time of step 2 can be reduced.
    Regards,
    Miguel

  • Problem with a query

    vBROK_CODE := SUBSTR(pBROK_CODE,2,LENGTH(pBROK_CODE)) || '*';
    WHILE vLEN <= LENGTH(vBROK_CODE)
    LOOP
      vP1 := INSTR(SUBSTR(vBROK_CODE,vLEN,LENGTH(vBROK_CODE)),'*');
      vMYQUERY := 'INSERT INTO '||vTBL_TMPBROKCODE||' (BROKE_CODE,DAYS)
        SELECT SUBSTR(SUBSTR('''||vBROK_CODE||''','||vLEN||','||vP1||' - 1 ),1,INSTR(SUBSTR('''||vBROK_CODE||''','||vLEN||','||vP1||' - 1),'','') - 1 ),
               TO_NUMBER(SUBSTR(SUBSTR('''||vBROK_CODE||''','||vLEN||','||vP1||'- 1),-( LENGTH(SUBSTR('''||vBROK_CODE||''','||vLEN||','||vP1||' -1 )) - INSTR(SUBSTR('''||vBROK_CODE||''','||vLEN||','||vP1||'-1),'','')) )) FROM DUAL ';
      EXECUTE IMMEDIATE vMYQUERY;
      PRINT_STRING(vMYQUERY);
      vLEN := vLEN + vP1;
      vCOUNTER := vCOUNTER + 1;
    END LOOP;
    When I run this code it gives me the error:
    right parenthesis missing
    Help me.
    Pankaj

    Hi,
    I copied and pasted your code and it works well.
    CREATE OR REPLACE PROCEDURE CC IS
      vBROK_CODE       VARCHAR2(4000);
      vP1              NUMBER;
      vMYQUERY         VARCHAR2(4000);
      vLEN             NUMBER := 0;
      pBROK_CODE       VARCHAR2(1000) := '12345678';
      vTBL_TMPBROKCODE VARCHAR2(300)  := 'MIA_TAB';
      vCOUNTER         NUMBER;
    BEGIN
      vBROK_CODE := SUBSTR(pBROK_CODE,2,LENGTH(pBROK_CODE)) || '*';
      WHILE vLEN <= LENGTH(vBROK_CODE)
      LOOP
        vP1 := INSTR(SUBSTR(vBROK_CODE,vLEN,LENGTH(vBROK_CODE)),'*');
        vMYQUERY := 'INSERT INTO '||vTBL_TMPBROKCODE||' (BROKE_CODE,DAYS)
          SELECT SUBSTR(SUBSTR('''||vBROK_CODE||''','||vLEN||','||vP1||' - 1 ),1,INSTR(SUBSTR('''||vBROK_CODE||''','||vLEN||','||vP1||' - 1),'','') - 1 ),
                 TO_NUMBER(SUBSTR(SUBSTR('''||vBROK_CODE||''','||vLEN||','||vP1||'- 1),-( LENGTH(SUBSTR('''||vBROK_CODE||''','||vLEN||','||vP1||' -1 )) - INSTR(SUBSTR('''||vBROK_CODE||''','||vLEN||','||vP1||'-1),'','')) )) FROM DUAL ';
        EXECUTE IMMEDIATE vMYQUERY;
        DBMS_OUTPUT.PUT_LINE(vMYQUERY);
        --PRINT_STRING(vMYQUERY);
        vLEN := vLEN + vP1;
        vCOUNTER := vCOUNTER + 1;
      END LOOP;
    END CC;
    Try to check ALL the code, from DECLARE to END.
    Bye

  • Performance problem querying multiple CLOBs

    We are running Oracle 8.1.6 Standard Edition on a Sun E420r, 2 x 450MHz processors, 2 GB memory, Solaris 7. I have created Oracle Text indexes on several columns in a large table, including VARCHAR2 and CLOB. I am simulating search engine queries where the user chooses to find matches on the exact phrase, all of the words (AND), or any of the words (OR). I am hitting performance problems when querying on multiple CLOBs using OR, e.g.
    select count(*) from articles
    where contains (abstract , 'matter OR dark OR detection') > 0
    or contains (subject , 'matter OR dark OR detection') > 0
    Columns abstract and subject are CLOBs. However, this query works fine for AND:
    select count(*) from articles
    where contains (abstract , 'matter AND dark AND detection') > 0
    or contains (subject , 'matter AND dark AND detection') > 0
    The explain plan gives a cost of 2157 for the OR and 14.3 for the AND.
    I realise that multiple CONTAINS clauses are not a good thing, but the AND returns in under a second, while the OR takes minutes! The indexes are created thus:
    create index article_abstract_search on article(abstract)
    INDEXTYPE IS ctxsys.context parameters ('STORAGE mystore memory 52428800');
    The data and index tables are on separate tablespaces.
    Can anyone suggest what is going on here, and any alternatives?
    Many thanks,
    Geoff Robinson

    Thanks for your reply, Omar.
    I have read the performance FAQ already, and it points out that single CONTAINS clauses are preferred, but I need to check 2 columns. Also, I don't just want a count(*); I will need to select field values. As you can see from my 2 queries, the first has multiple CLOB columns using OR and the second uses AND, with the first taking that much longer. Even with only a single CONTAINS, the cost estimate is 5 times higher for OR than for AND.
    Add an extra CONTAINS and it becomes 300 times more costly!
    The root table is 3 million rows; the 2 token tables have 6.5 and 3 million rows respectively. All tables have been fully analyzed.
    Regards
    Geoff

  • Question for ORACLE EXPERTS!!! PERFORMANCE PROBLEM!!!

    I have 2 nodes on RAC. On node 1 the following query runs in 15 seconds; on node 2 the same query runs in 40 minutes. This is a big problem because we are migrating a SQL Server database to Oracle RAC 10gR2 (10.2.0.2) on HP-UX Itanium, and we can't finish the project because of this performance problem!!! PLEASE HELP ME, ORACLE GURUS!!! I opened a TAR on Metalink 3 months ago, but they can't help me with this. Can anyone explain this event to me, please: "gc cr multi block request  92011  0.33  737.49"?
    This is the query that I run on both nodes, and this is the tkprof of the trace file when I run it on the second node:
    select /*+ NO_PARALLEL(t) */ sum(t.saldo_em_real)
    from
    brcapdb2.titulo t
    call     count    cpu    elapsed    disk    query    current    rows
    -------  -----  -----  ---------  ------  -------  ---------  ------
    Parse        1   0.00       0.00       0        0          0       0
    Execute      1   0.00       0.00       0        0          0       0
    Fetch        1  13.76     790.26       0   107219          0       0
    -------  -----  -----  ---------  ------  -------  ---------  ------
    total        3  13.76     790.26       0   107219          0       0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 31
    Elapsed times include waiting on following events:
    Event waited on                        Times Waited  Max. Wait  Total Waited
    -------------------------------------  ------------  ---------  ------------
    SQL*Net message to client                         2       0.00          0.00
    SQL*Net message from client                       2       0.02          0.02
    gc cr multi block request                     92011       0.33        737.49
    gc current grant busy                             1       0.00          0.00
    latch: KCL gc element parent latch                4       0.00          0.00
    latch: object queue header operation             17       0.00          0.01
    latch free                                        6       0.00          0.00
    gc remaster                                       9       1.94          9.00
    gcs drm freeze in enter server mode              19       1.97         30.52
    gc current block 2-way                           10       0.61          2.72
    SQL*Net break/reset to client                     1       0.01          0.01
    Please help me if you can!!
    Tks,
    Paulo.

    There's lots of global cache traffic going on. All those multiblock transfers seem to be saturating the interconnect. Some things to consider:
    - Have a look at metalink note 3951017.8 for a possible fix
    - Check that your interconnect is properly configured
    - Try running the query with the PARALLEL hint. This would mean doing disk I/O instead of buffer and interconnect I/O.
    edit: changed "without NOPARALLEL" to "with PARALLEL"
    Message was edited by:
    antti.koskinen
