JOIN optimize PROJ - AFRU

Hi,
Has anybody an idea whether it is possible to optimize this query?
I assumed a join would be the best I can get, but it is still very, very slow. I need data from AFRU, AFVC and AFVV per project number (PROJ), so the result has thousands of lines. At the moment I can get data per single AUFNR (AFPO) in a few minutes; per POSID (PRPS) or PSPID (PROJ) it aborts after 10 minutes.
Is it better to split such a heavy query, or to use another technique? Or to change the order of the tables in the query?
Thanks in advance for your help.
SELECT proj~pspid prps~posid afru~aufnr afpo~matnr makt~maktx afpo~aufnr
      afvc~vornr afvc~ltxa1 afvv~mgvrg crhd~arbpl crtx~ktext afvv~vgw01
      afvv~vgw02 afru~lmnga afru~ism01 afru~ism02 afru~ersda afru~ernam
      afru~pernr afru~rueck afru~ile01 afvv~vge01 afvv~meinh
    INTO CORRESPONDING FIELDS OF TABLE it_zppprodtime
    FROM afru
    INNER JOIN afpo
      ON afru~aufnr = afpo~aufnr
    INNER JOIN prps
      ON afpo~projn = prps~pspnr
    INNER JOIN proj
      ON prps~psphi = proj~pspnr
    INNER JOIN crhd
      ON crhd~objid = afru~arbid
    INNER JOIN afvc
      ON afvc~aufpl = afru~aufpl
        AND afvc~aplzl = afru~aplzl
    INNER JOIN afvv
      ON afvv~aufpl = afru~aufpl
        AND afvv~aplzl = afru~aplzl
    INNER JOIN makt
      ON makt~matnr = afpo~matnr
    INNER JOIN crtx
      ON crtx~objid = crhd~objid
    WHERE
      proj~pspid IN s_pspid
      AND prps~posid IN s_projn
      AND crhd~arbpl IN s_arbpl
      AND afru~aufnr IN s_aufnr
      AND makt~spras = 'CS'
      AND crtx~spras = 'CS'.

Hi Martin,
I have a few concerns.
1) In real life it is unlikely that you will get users to run queries across all orders. It seems more likely that users will enter one or more project numbers on the selection screen. Given that, it makes more sense to construct your query starting from the project number rather than the order number. I have taken the liberty of modifying the query to give you an example.
2) I noticed that your query reads table CRHD using only field OBJID. The primary key is based on fields OBJTY and OBJID. If you know the OBJTY, you can use it in the WHERE clause; this lets the database use the primary key. Remember that CRHD is a huge table.
3) You can then use the OBJTY and OBJID from table CRHD to join table CRTX (look at the query below).
SELECT proj~pspid
       prps~posid
       afru~aufnr
       afpo~matnr
       makt~maktx
       afpo~aufnr
       afvc~vornr
       afvc~ltxa1
       afvv~mgvrg
       crhd~arbpl
       crtx~ktext
       afvv~vgw01
       afvv~vgw02
       afru~lmnga
       afru~ism01
       afru~ism02
       afru~ersda
       afru~ernam
       afru~pernr
       afru~rueck
       afru~ile01
       afvv~vge01
       afvv~meinh
INTO CORRESPONDING FIELDS OF TABLE it_zppprodtime
FROM       proj
inner join prps
on  prps~psphi = proj~pspnr
inner join afpo
ON  afpo~projn = prps~pspnr
inner join afru
on  afru~aufnr = afpo~aufnr
INNER JOIN afvc
ON  afvc~aufpl  = afru~aufpl
AND afvc~aplzl = afru~aplzl
INNER JOIN afvv
ON  afvv~aufpl = afru~aufpl
AND afvv~aplzl = afru~aplzl
INNER JOIN makt
ON  makt~matnr = afpo~matnr
INNER JOIN crhd
ON  crhd~objid = afru~arbid
INNER JOIN crtx
ON  crtx~objty = crhd~objty
AND crtx~objid = crhd~objid
WHERE proj~pspid IN s_pspid
AND   prps~posid IN s_projn
AND   crhd~arbpl IN s_arbpl
AND   afru~aufnr IN s_aufnr
AND   makt~spras = 'CS'
AND   crhd~objty = ????
AND   crtx~spras = 'CS'.

Similar Messages

  • Nested joins taking long time

    Hi Experts,
    The select query is taking a long time to execute.
    *- Get the Goods receipts  mainly selected per period (=> MKPF secondary
      SELECT mseg~ebeln mseg~ebelp mseg~werks
             ekko~bukrs ekko~lifnr ekko~zterm ekko~ekorg ekko~ekgrp
             ekko~inco1 ekko~exnum
             lfa1~name1 lfa1~land1 lfa1~ktokk lfa1~stceg
             mkpf~mblnr mkpf~mjahr mseg~zeile mkpf~bldat mkpf~budat
             mseg~bwart
    *Start of changes for CIP 6203752 by PGOX02
             mseg~smbln
    *End of changes for CIP 6203752 by PGOX02
             ekpo~matnr ekpo~txz01 ekpo~menge ekpo~meins
             ekbe~menge ekbe~dmbtr ekbe~wrbtr ekbe~waers
             ekpo~lgort ekpo~matkl ekpo~webaz ekpo~konnr ekpo~ktpnr
             ekpo~plifz ekpo~bstae
             INTO corresponding fields of TABLE it_temp
    *--Begin of modification
    *    FROM mkpf JOIN mseg ON mseg~mblnr EQ mkpf~mblnr
         FROM mkpf INNER JOIN mseg ON mseg~mandt EQ mkpf~mandt
                           and  mseg~mblnr EQ mkpf~mblnr
    *--End of modification
                           AND mseg~mjahr EQ mkpf~mjahr
                  JOIN ekbe ON ekbe~ebeln EQ mseg~ebeln
                           AND ekbe~ebelp EQ mseg~ebelp
                           AND ekbe~zekkn EQ '00'
                           AND ekbe~vgabe EQ '1'
                           AND ekbe~gjahr EQ mseg~mjahr
                           AND ekbe~belnr EQ mseg~mblnr
                           AND ekbe~buzei EQ mseg~zeile
                  JOIN ekpo ON ekpo~ebeln EQ ekbe~ebeln
                           AND ekpo~ebelp EQ ekbe~ebelp
                  JOIN ekko ON ekko~ebeln EQ ekpo~ebeln
                  JOIN lfa1 ON lfa1~lifnr EQ ekko~lifnr
              WHERE mkpf~budat IN so_budat
          AND mkpf~bldat IN so_bldat
          AND mkpf~vgart EQ 'WE'
          AND mseg~bwart IN so_bwart
          AND mseg~matnr IN so_matnr
          AND mseg~werks IN so_werks
          AND mseg~lifnr IN so_lifnr
          AND mseg~ebeln IN so_ebeln
          AND ekko~ekgrp IN so_ekgrp
          AND ekko~bukrs IN so_bukrs
          AND ekpo~matkl IN so_matkl
          AND ekko~bstyp IN so_bstyp
          AND ekpo~loekz EQ space
          AND ekpo~plifz IN so_plifz.
    In ST05 it shows that MKPF is taking the most time. I need your suggestions; please help.
    Moderator message - Please use code tags to format your code
    Edited by: Rob Burbank on Feb 5, 2010 9:21 AM

    Hi,
    your result set is quite big.
    20296 records in 283.5 seconds (13.9 ms per record) might not be optimal,
    but a result set this big will definitely take time to read, since you are reading data from 6 tables:
    SELECT
    FROM mkpf
    JOIN mseg ON ...
    JOIN ekbe ON ...
    JOIN ekpo ON ...
    JOIN ekko ON ...
    JOIN lfa1 ON ...
    What run time do you expect?
    Even if we could get one record in 5 ms, it would still run for more than 100 seconds.
    Generally: limit the result set as far as possible (with WHERE conditions) and select only the columns that are really needed.
    General nested loop join optimization:
    Check whether the optimizer starts with the table that has the smallest result set: take all the WHERE conditions for each table and use SE16 to count the result sets (as sketched below); the smallest one should be the start of the nested loop join.
    For the starting table, the selective WHERE condition fields (those that limit the result set significantly for that table) should be indexed.
    For the other tables, the join condition fields (and maybe additional selective WHERE condition fields for that table) should be indexed. Most likely, though, you will join to those tables via the primary key, which is already indexed.
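    A minimal sketch of those per-table counts, written as plain SQL (the literal values stand in for your selection-screen entries and are purely hypothetical):
    -- candidate driving table 1: how many document headers survive the MKPF filters?
    SELECT COUNT(*)
    FROM mkpf
    WHERE budat BETWEEN '20100101' AND '20100131'
      AND vgart = 'WE';
    -- candidate driving table 2: how many purchasing documents survive the EKKO filters?
    SELECT COUNT(*)
    FROM ekko
    WHERE ekgrp = '001'
      AND bukrs = '1000';
    The table with the smallest count is the best start for the nested loop join, and its counted WHERE fields are the ones worth indexing.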
    Kind regards,
    Hermann

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
      <Root>
        <Id>123456789</Id>
        {for $e in $r/Element
         return
           <Element>
             <Subelement1>
               {$e/Subelement1/Code}
               <Description>
                 {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
               </Description>
             </Subelement1>
             <Subelement2>
               {$e/Subelement2/Code}
               <Description>
                 {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
               </Description>
             </Subelement2>
             <Subelement3>
               {$e/Subelement3/Code}
               <Description>
                 {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
               </Description>
             </Subelement3>
           </Element>}
      </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
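    One pattern that may be worth trying (a sketch only, against the test tables above, and untested on your data) is to shred the repeating elements with XMLTABLE in the SQL FROM clause and join the code table relationally, so the optimizer can consider a hash join instead of a per-code nested loop:
    -- shred the codes out of one record, then join CODES as an ordinary table
    select x.code, c.description
    from records r,
         xmltable('/Root/Element/Subelement1/Code'
                  passing r.xmlrec
                  columns code varchar2(4) path '.') x,
         codes c
    where r.ssn = '10000'
      and c.code = x.code;
    Reassembling the decorated XML from that rowset is extra work compared with the pure XQuery version, but it moves the code lookup into plain SQL, where hash joins are available.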

  • (V7.3) Q&A on RDBMS 7.3 Enterprise Edition NEW FEATURES

    Product: ORACLE SERVER
    Date written: 2004-08-13
    (V7.3) Q&A on RDBMS 7.3 Enterprise Edition NEW FEATURES
    =========================================================
    1. Q) I would like a brief overview of the new features in Oracle 7 Release 7.3.
    A) They can be summarized as follows.
    New features of 7.3.3 are:
    Direct load to cluster
    Can use backup from before RESETLOGS
    New features of 7.3 are:
    histograms
    hash joins
    star join enhancement
    standby databases
    parallel union-all
    dynamic init.ora configuration
    direct path export
    compiled triggers
    fast create index
    multiple LRU latches
    updatable join views
    LSQL cursor variable enhancement
    replication enhancement
    ops processor affinity
    Net 2 load balancing
    XA scaling/recovery
    thread safe pro*c/oci
    DB verify
    new pl/sql packages
    new pl/sql features
    bitmap indexes
    2. Q) What new parallel features are there in Oracle 7 Releases 7.2 and 7.3?
    A) The parallel operations provided by Oracle 7 Parallel Query include the following.
    > Parallel Data Loading : conventional and direct-path, to the same
    table or multiple tables concurrently.
    > Parallel Query : table scans, sorts, joins, aggregates, duplicate
    elimination, UNION and UNION ALL(7.3)
    > Parallel Subqueries : in INSERT, UPDATE, DELETE statements.
    > Parallel Execution : of application code(user-defined SQL functions)
    > Parallel Joins :
    nested loop,
    sort-merge,
    star join optimization(creation of cartesian products plus the
    nested loop join),
    hash joins(7.3).
    > Parallel Anti-Joins : NOT IN(7.3).
    > Parallel Summarization(CREATE TABLE AS SELECT) :
    query and insertion of rows into a rollup table.
    > Parallel Index Creation(CREATE INDEX) :
    table scans, sorts, index fragment construction.
    3. Q) What optimization features were added in Releases 7.2 and 7.3?
    A) The following features are available.
    1> Direct Database Reads
    Parallel query processes have to scan very large tables in order to perform operations such as filtering, sorting, and joins. Direct Database Reads enable contiguous-memory reads to improve read efficiency and performance. They also bypass the buffer cache to eliminate the contention that concurrent OLTP workloads would otherwise cause.
    2> Direct Database Writes
    Parallel query processes often have to write the results of operations such as intermediate sort runs, summarization (CREATE TABLE AS SELECT), and index creation (CREATE INDEX) to disk. Direct Database Writes enable contiguous disk writes directly from contiguous memory to improve write efficiency and performance. They also bypass the buffer cache to eliminate contention with concurrent OLTP work and with the DBWR process.
    In short, Direct Database Reads and Writes let an Oracle 7 server be tuned optimally and in isolation while balancing the combined load of concurrent OLTP and DSS work.
    3> Asynchronous I/O
    Oracle 7 already provides asynchronous writes for sorts, summarization, index creation, and direct-path loading. Starting with Release 7.3, an asynchronous read-ahead capability further improves performance by maximizing the overlap between processing and I/O.
    4> Parallel Table Creation
    The CREATE TABLE ... AS SELECT ... construct creates a temporary table to store the queried result from a large table of detailed data. It is typically used in drill-down analysis to store the results of intermediate operations.
    5> Support for the Star Query Optimization
    Oracle 7 detects when a star schema is present and invokes star query optimization to improve execution speed. A star query first joins the several small tables, and then joins that result to the single large table.
    6> Intelligent Function Shipping
    Starting with Release 7.3, the coordinator process for a parallel query is aware of the affinity between disks and data on the nodes of a non-shared-memory machine (cluster or MPP). Based on this, the coordinator can assign parallel query operations to the processes running on a particular node-disk pair, so that data does not have to travel across the machine's shared interconnect. This improves efficiency, performance, and scalability, delivering the benefits of a 'shared nothing' software architecture without the associated cost or overhead.
    7> Histograms
    Starting with Release 7.3, the Oracle optimizer can use more information about the distribution of data values within a table's columns. A histogram of the values and their relative frequencies gives the optimizer information about the relative 'selectivity' of an index and a better idea of which index should be used. Making the right choice can cut query execution time by minutes or even hours.
    8> Parallel Hash Joins
    Starting with Release 7.3, Oracle 7 provides hash joins to reduce join processing time. The hashing technique removes the need to sort the data for a join and works 'on the fly' without relying on existing indexes. It therefore speeds up the small-to-large table joins that are typical of star schema databases.
    9> Parallel UNION and UNION ALL
    Starting with Release 7.3, Oracle 7 can execute queries that use the set operators UNION and UNION ALL fully in parallel. These operators make it much easier to process large tables by splitting them into sets of several smaller tables.
    4. Q) What products are included in Release 7.3?
    A) The product list for Oracle 7 Server Release 7.3.3 is as follows.
    Note that not all platforms support all of the listed products.
    [ Product ] [ Revision ]
    Advanced replication option 7.3.3.0.0
    Parallel Query Option 7.3.3.0.0
    Parallel Server Option 7.3.3.0.0
    Oracle 7 Server 7.3.3.0.0
    Distributed Database Option 7.3.3.0.0
    Oracle*XA 7.3.3.0.0
    Oracle Spatial Data Option 7.3.3.0.0
    PL/SQL 2.3.3.0.0
    ICX 7.3.3.0.0
    OWSUTL 7.3.3.0.0
    Slax 7.3.3.0.0
    Context Option 2.0.4.0.0
    Pro*C 2.2.3.0.0
    Pro*PL/I 1.6.27.0.0
    Pro*Ada 1.8.3.0.0
    Pro*COBOL 1.8.3.0.0
    Pro*Pascal 1.6.27.0.0
    Pro*FORTRAN 1.8.3.0.0
    PRO*CORE 1.8.3.0.0
    Sqllib 1.8.3.0.0
    Codegen 7.3.3.0.0
    Oracle CORE 2.3.7.2.0
    SQL*Module Ada 1.1.5.0.0
    SQL*Module C 1.1.5.0.0
    Oracle CORE 3.5.3.0.0
    NLSRTL 2.3.6.1.0
    Oracle Server Manager 2.3.3.0.0
    Oracle Toolkit II(Dependencies of svrmgr) DRUID 1.1.7.0.0
    Multi-Media APIs(MM) 2.0.5.4.0
    OACORE 2.1.3.0.0
    Oracle*Help 2.1.1.0.0
    Oracle 7 Enterprise Backup Utility 2.1.0.0.2
    NLSRTL 3.2.3.0.0
    SQL*Plus 3.3.3.0.0
    Oracle Trace Daemon 7.3.3.0.0
    Oracle MultiProtocol Interchange 2.3.3.0.0
    Oracle DECnet Protocol Adapter 2.3.3.0.0
    Oracle LU6.2 Protocol Adapter 2.3.3.0.0
    Oracle Names 2.0.3.0.0
    Advanced Networking Option 2.3.3.0.0
    Oracle TCP/IP Protocol Adapter 2.3.3.0.0
    Oracle Remote Operations 1.3.3.0.0
    Oracle Named Pipes Protocol Adapter 2.3.3.0.0
    Oracle Intelligent Agent 7.3.3.0.0
    SQL*Net APPC 2.3.3.0.0
    SQL*Net/DCE 2.3.3.0.0
    Oracle OSI/TLI Protocol Adapter 2.3.3.0.0
    Oracle SPX/IPX Protocol Adapter 2.3.3.0.0
    NIS Naming Adapter 2.3.3.0.0
    NDS Naming Adapter 2.3.3.0.0
    Oracle Installer 4.0.1


  • Database Performance: Large execution time.

    Hi,
    I have a TPC-H database of size 1 GB. I am running a nested query with multiple joins between 5 tables plus a group by and order by on three attributes. The query took around 1 hour to execute (it was also fired for the point that can be considered the center of the selectivity range).
    Following is the query:
    select
         supp_nation,
         cust_nation,
         l_year,
         sum(volume)
    from
         (
              select
                   n1.n_name as supp_nation,
                   n2.n_name as cust_nation,
                   YEAR (l_shipdate) as l_year,
                   l_extendedprice * (1 - l_discount) as volume
              from
                   supplier,
                   lineitem,
                   orders,
                   customer,
                   nation n1,
                   nation n2
              where
                   s_suppkey = l_suppkey
                   and o_orderkey = l_orderkey
                   and c_custkey = o_custkey
                   and s_nationkey = n1.n_nationkey
                   and c_nationkey = n2.n_nationkey
                   and (
                        (n1.n_name = 'FRANCE' and n2.n_name = 'GERMANY')
                        or (n1.n_name = 'GERMANY' and n2.n_name = 'FRANCE')
                   )
                   and l_shipdate between '1995-01-01' and '1996-12-31'
                   and o_totalprice <= 246835
                   and c_acctbal <= -422.16
         ) as shipping
    group by
         supp_nation,
         cust_nation,
         l_year
    order by
         supp_nation,
         cust_nation,
         l_year
    Moreover, it has been observed that such query types (nested queries, subqueries, aggregation) take a very long time to execute compared to other databases. The query above took only 18 seconds to execute on an ORACLE server.
    The machine configuration and the database configuration are as follows:
    Machine:
    64-bit Windows Vista operating System.
    RAM: 8GB.
    CPU: 3.0 GHZ
    Database:
    Data Area: No. of Volumes: 1, Size of Volume: 4GB (as mentioned on the wiki, for a 10 GB database 4 volumes should be assigned.)
    Log Area: Volume: 1, Size: 1GB
    Data and Log are on same disk.
    Caches:
    I/O Buffer Cache: 1 GB
    Data Cache: 1 GB
    Catalog Cache: 30 MB
    Parameters:
    CacheMemorySize - 131072
    ReadAheadLobThreshold- 3000
    We have also set the other optimizer parameters as required and recommended for SAP DB. Even then I am not able to get better performance.
    How can I improve the performance? Is there any other parameter that remains to be set?

    > I have a TPC-H database of size 1 GB. I am running a nested query with multiple joins between 5 tables plus a group by and order by on three attributes. The query took around 1 hour to execute (it was also fired for the point that can be considered the center of the selectivity range).
    > Moreover, it has been observed that such query types (nested queries, subqueries, aggregation) take a very long time to execute compared to other databases. The query above took only 18 seconds to execute on an ORACLE server.
    Such general statements are usually total crap.
    MaxDB is running for many SAP customer and SAP internally in many installations - even for BI systems.
    We don't know your Oracle server, we don't know the execution plans - so there's nothing to tell why it may be the case here.
    > Data Area: No. of Volumes: 1, Size of Volume: 4GB (as mentioned on wiki, for 10 GB database 4 volumes must be assigned.)
    It's a rule of thumb - having just one volume is a rather bad idea since you don't get parallel I/O with that.
    > Log Area: Volume: 1, Size: 1GB
    > Data and Log are on same disk.
    Although this is irrelevant for the query performance it's nonsense in productive environments and a performance killer as well.
    > I/O Buffer Cache: 1 GB
    > Data Cache: 1 GB
    Why don't you allow more cache?
    > Catalog Cache: 30 MB
    What for? Do you understand the catalog cache in MaxDB?
    It's a per-session setting...
    > Also, we have set other optimizer parameters as required and recommended by SAPDB. Even then I am not able get better performance.
    Can you be more specific here?
    What MaxDB version are you using? What parameter settings do you use?
    > How to increase or better the performance? Is there any other parameter that remains to be set?
    How about showing us the execution plan for the statement and the index structure?
    How should we know what MaxDB does here that takes so much time?
    Did you have the DBanalyzer running while the query ran?
    TPC-H is a benchmark for ad-hoc, decision making support: did you enable any of the BI feature pack features of MaxDB? What about prefetching? What about table clustering, column compression, star join optimization ...?
    All in all - you left us here with "MaxDB is slower than Oracle" and nothing to work on.
    That's not useful in any way.
    Want some answers - provide some information!
    regards,
    Lars

  • View V/s Extract structure in generic data source!

    Hi,
    Is it mandatory that the view and the extract structure contain the same number of fields in a generic extraction?
    Can we enhance the extract structure with a field which does not exist in the view?
    Please clarify with an example.
    Thanks,
    Ravi

    Thanks Diego!
    Actually, the view used in the definition of the data source has four tables,
    namely AFRU, CRHD, AUFK and AFKO.
    The join conditions are:
    AFRU~MANDT = CRHD~MANDT
    AFRU~ARBID = CRHD~OBJID
    AFRU~MANDT = AUFK~MANDT
    AFRU~AUFNR = AUFK~AUFNR
    AFRU~MANDT = AFKO~MANDT
    AFRU~AUFNR = AFKO~AUFNR
    Now I want to know from which table the field AUFNR is mapped to the BW InfoObject.
    As the field exists in both AFRU and AUFK, I am confused about which table actually fills the BW InfoObject.
    I need to document, for each InfoObject, how it gets filled and from which table.
    Thanks,
    Ravi

  • Regarding output in a report

    The report uses five tables of the PS module: PROJ, PRPS, HRP1001,
    ZPRACTICE and ZLEAVE.
    I have a selection screen with project, start date, end date
    and practice.
    I want to get the report output in such a manner that:
    1. Based on the project allocated within the selected date range, I get the output with all fields.
    2. Similarly for the practice allocated within the date range.
    These are all tables of PS and HR.
      Please give me solutions as soon as possible.
      Points will be awarded.
                                     Thanks in advance.

    Hi Ravi,
    There must be a practice ID in your PROJ table.
    You can join the PROJ-PSPID field with HRP1001-OBJID.
    HRP1001 stores all the allocations for a project PSPID.
    And in the ZLEAVE table you should have all the leaves taken by company employees, with start date and end date.
    Join PROJ-PSPID with HRP1001-OBJID to get all the projects, the persons allocated to each project, and the respective ZPRACTICE field from the PROJ table (see the sketch below).
    Once you have both, you can check ZLEAVE for your selected dates to find which of the employees allocated to the project were on leave.
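    A rough sketch of that first join in plain SQL. The join key follows the advice above; the other HRP1001 field names used here (SOBID, BEGDA, ENDDA) are assumptions to be checked against your system, and the date literals are hypothetical:
    -- projects with the objects allocated to them via HRP1001 relationships
    SELECT p.pspid, h.sobid AS allocated_object
    FROM proj p
    INNER JOIN hrp1001 h ON h.objid = p.pspid
    WHERE h.begda <= '20080131'   -- hypothetical end of the selected range
      AND h.endda >= '20080101';  -- hypothetical start of the selected range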
    Are you working in TIS?
    Have a nice day
    Regards
    Kalpesh Chandrakant Parab

  • Custom field changes in Project 2010

    Is there a last-modified-date indicator in one of the databases to indicate when a custom field was changed? For example, I want to know when a custom field called project-phase (not workflow) has changed from one value to the next within a PDP. 
    The purpose is to produce a report that indicates the progression of a project from one phase to the next (plan-analyze-design, etc.).

    This question is closed, but for what you asked, the basic query provided gives you the custom field modified date, which is more precise than the project modified date.
    The query below will get you the information you want.
    Put in the project GUID or the GUID of the custom field, and it will return the tasks and the project name. It will also show you (at least it did in my DB) the difference between the project modified date and the task/custom field modified date, which is
    also what you asked for.
    USE ProjectServerDraft
    SELECT
    mscf.MD_PROP_NAME as "Custom Field",
    mstcv.MD_PROP_ID as "MD PROP ID - can remove",
    proj.PROJ_NAME as "Project Name",
    proj.MOD_DATE as "Project Modified Date",
    mst.TASK_NAME as "Task Name",
    mstcv.CREATED_DATE as "Task Created Date",
    mstcv.MOD_DATE as "Task Modified Date"
    FROM MSP_TASK_CUSTOM_FIELD_VALUES mstcv
    JOIN MSP_PROJECTS proj
    ON mstcv.PROJ_UID=proj.PROJ_UID
    JOIN MSP_TASKS mst
    ON mstcv.TASK_UID=mst.TASK_UID
    JOIN [ProjectServerPublished].[dbo].[MSP_CUSTOM_FIELDS] mscf
    ON mscf.MD_PROP_ID=mstcv.MD_PROP_ID
    -- You can select either below un-comment out the '--'
    -- And insert what you want - the project or Custom field
    -- GUID will help. The PROJ_UID is found under MSP_PROJECTS
    -- THE MD_PROP_ID is found under the published database in
    -- the MSP_CUSTOM_FIELDS table
    --WHERE mstcv.PROJ_UID =' GUID OF PROJECT'
    --WHERE mstcv.MD_PROP_ID = 'GUID OF CUSTOM FIELD'

  • KSB1 : New field addtion

    Hi,
    Could anyone let me know how to add an additional field in transaction KSB1? The field I want to add is "Movement Type". Please advise on the necessary steps.
    Regards,
    Varsha

    Hi,
    1- In CMOD, create a project using enhancement COOMEP01 (EXIT_SAPLKAEP_001).
    2- Add your new customer field to include structure CI_RKPOS (in KAEP_COAC).
    Write ABAP code in FM EXIT_SAPLKAEP_001 to find the data you want to add and put it
    into the new field in CS_RECORD.
    For example, I have added the work center (ARBPL) to CI_RKPOS, and the ABAP code is:
    SELECT SINGLE arbpl INTO cs_record-arbpl FROM afru INNER JOIN crhd
       ON afru~arbid EQ crhd~objid
       WHERE rueck EQ cs_record-refbn "confirmation number
       AND rmzhl EQ cs_record-bw_refbz. "confirmation counter
    Regards,
    REZA ROSTAMI / SAPHIRAN ABAP TEAM (www.saphiran.com)

  • LINK BETWEEN AFKO AND AFVC

    Hi,
    How do I link AFKO with AFVC? I know the common field is AUFPL, but AFVC is the operations table, so how do I choose the exact row from multiple rows? My coding is as follows:
    SELECT * into corresponding fields of table itab1 FROM afko as a
    INNER JOIN afvc as b ON a~aufpl = b~aufpl
    INNER JOIN afru as c ON b~rueck = c~rueck
    inner join afpo as d on d~aufnr = a~aufnr
    inner join crhd as e on e~objid = b~arbid
    WHERE a~aufnr = itab-aufnr AND c~STOKZ NE 'X'
    AND c~STZHL = '0' AND c~BUDAT IN P_DATE AND c~LTXA1 <> ''.
    Regards

    Hi Khushi,
    AFKO ==> AUFNR is the only primary key.
    AFVC ==> AUFPL and APLZL together are the primary key.
    You get AUFPL from AFKO and can pass it to AFVC.
    For one AUFNR you may get several records in AFVC, depending on the number of activities/operations attached to the given production order.
    You can pass the counter (it identifies the operation/activity) along with AUFPL to AFVC to get a unique record.
    Looking at your select query, I can say you should include
    RMZHL in the join condition between AFVC and AFRU:
    SELECT * into corresponding fields of table itab1 FROM afko as a
    INNER JOIN afvc as b ON a~aufpl = b~aufpl
    INNER JOIN afru as c ON b~rueck = c~rueck
    AND
    b~RMZHL = c~RMZHL
    inner join afpo as d on d~aufnr = a~aufnr
    inner join crhd as e on e~OBJID = B~ARBID
    WHERE a~aufnr = itab-aufnr AND c~STOKZ NE 'X'
    AND c~STZHL = '0' AND c~BUDAT IN P_DATE AND c~LTXA1 <> ''.
    Hope it will solve your problem..
    Thanks & Regards
    ilesh 24x7
    ilesh Nandaniya

  • Performance issue with the Select query

    Hi,
    I have an issue with the performance of a select query.
    In table AFRU, AUFNR is not a key field.
    So I selected the low and high values into s_rueck and used them in the WHERE condition.
    Still I have an issue with the performance.
    SELECT RUECK
    RMZHL
    IEDD
    AUFNR
    STOKZ
    STZHL
    FROM AFRU INTO TABLE t_afru
    FOR ALL ENTRIES IN t_zscprt100
    WHERE RUECK IN s_rueck AND
    AUFNR = t_zscprt100-AUFNR AND
    STOKZ = SPACE AND
    STZHL = 0.
    I have also checked by creating an index on AUFNR in table AFRU... it does not help.
    Is there any way to declare a key field while declaring the internal table?
    Any suggestions to fix the performance issue are appreciated!
    Regards,
    Kittu

    Hi,
    Thank you for your quick response!
    Rui dantas, I am a little confused... this is my code below:
    data : t_zscprt type standard table of ty_zscprt,
           wa_zscprt type ty_zscprt.
    types : BEGIN OF ty_zscprt100,
            aufnr type zscprt100-aufnr,
            posnr  type zscprt100-posnr,
            ezclose type zscprt100-ezclose,
            serialnr type zscprt100-serialnr,
            lgort type zscprt100-lgort,
          END OF ty_zscprt100.
    data : t_zscprt100 type standard table of ty_zscprt100,
           wa_zscprt100 type ty_zscprt100.
    Types: begin of ty_afru,
                rueck type CO_RUECK,
                rmzhl type CO_RMZHL,
                iedd  type RU_IEDD,
                aufnr type AUFNR,
                stokz type CO_STOKZ,
                stzhl type CO_STZHL,
             end of ty_afru.
    data : t_afru type STANDARD TABLE OF ty_afru,
            WA_AFRU TYPE TY_AFRU.
    SELECT AUFNR
            POSNR
            EZCLOSE
            SERIALNR
            LGORT
            FROM ZSCPRT100 INTO TABLE T_ZSCPRT100
            FOR ALL ENTRIES IN T_ZSCPRT
            WHERE   AUFNR = T_ZSCPRT-PRTNUM
            AND   SERIALNR IN S_SERIAL
            AND    LGORT   IN S_LGORT.
    IF sy-subrc <> 0.
       MESSAGE ID 'Z2' TYPE 'I' NUMBER '41'. "BDCG87
       stop. "BDCG87
    ENDIF.
    SELECT    RUECK
                  RMZHL
                  IEDD
                  AUFNR
                  STOKZ
                  STZHL
                  FROM AFRU INTO TABLE T_AFRU
                  FOR ALL ENTRIES IN T_ZSCPRT100
                  WHERE RUECK IN S_RUECK     AND
                        AUFNR = T_ZSCPRT100-AUFNR AND
                        STOKZ = SPACE AND
                        STZHL = 0.
    Using AUFNR, get AUFPL from AFKO
    Using AUFPL, get RUECK from AFVC
    Using RUECK, read AFRU
    In other words, one select joining AFKO <-> AFVC <-> AFRU should get what you want.
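    A minimal sketch of that chain, written here as plain SQL rather than ABAP Open SQL syntax (the order number literal is hypothetical):
    -- drive from AFKO via AUFPL to AFVC, then via the key field RUECK to AFRU
    SELECT r.rueck, r.rmzhl, r.iedd, r.aufnr, r.stokz, r.stzhl
    FROM afko k
    INNER JOIN afvc v ON v.aufpl = k.aufpl
    INNER JOIN afru r ON r.rueck = v.rueck
    WHERE k.aufnr = '000000012345'   -- hypothetical order number
      AND r.stokz = ' '
      AND r.stzhl = 0;
    This way AFRU is accessed through its key field RUECK instead of the non-key field AUFNR.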
    This is my select query; would you want me to write another select query to meet this criteria?
    From AUFNR I will get AUFPL from AFKO; based on AUFPL I will get RUECK; based on RUECK I need to read AFRU. But I need to select a few fields from AFRU based on AUFNR...
    Any suggestions will be appreciated!
    Regards
    Kittu

  • Why optimizer prefers nested loop over hash join?

    What do I look for if I want to find out why the server prefers a nested loop over hash join?
    The server is 10.2.0.4.0.
    The query is:
    SELECT p.*
        FROM t1 p, t2 d
        WHERE d.emplid = p.id_psoft
          AND p.flag_processed = 'N'
          AND p.desc_pool = :b1
          AND NOT d.name LIKE '%DUPLICATE%'
          AND ROWNUM < 2
    tkprof output is:
    Production
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          4           0
    Execute      1      0.00       0.01          0          4          0           0
    Fetch        1    228.83     223.48          0    4264533          0           1
    total        3    228.84     223.50          0    4264537          4           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 108  (SANJEEV)
    Rows     Row Source Operation
          1  COUNT STOPKEY (cr=4264533 pr=0 pw=0 time=223484076 us)
          1   NESTED LOOPS  (cr=4264533 pr=0 pw=0 time=223484031 us)
      10401    TABLE ACCESS FULL T1 (cr=192 pr=0 pw=0 time=228969 us)
          1    TABLE ACCESS FULL T2 (cr=4264341 pr=0 pw=0 time=223182508 us)
    Development
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.01          0          4          0           0
    Fetch        1      0.05       0.03          0        512          0           1
    total        3      0.06       0.06          0        516          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 113  (SANJEEV)
    Rows     Row Source Operation
          1  COUNT STOPKEY (cr=512 pr=0 pw=0 time=38876 us)
          1   HASH JOIN  (cr=512 pr=0 pw=0 time=38846 us)
         51    TABLE ACCESS FULL T2 (cr=492 pr=0 pw=0 time=30230 us)
        861    TABLE ACCESS FULL T1 (cr=20 pr=0 pw=0 time=2746 us)

    sanjeevchauhan wrote:
    What do I look for if I want to find out why the server prefers a nested loop over hash join?
    The server is 10.2.0.4.0.
    The query is:
    SELECT p.*
    FROM t1 p, t2 d
    WHERE d.emplid = p.id_psoft
    AND p.flag_processed = 'N'
    AND p.desc_pool = :b1
    AND NOT d.name LIKE '%DUPLICATE%'
    AND ROWNUM < 2
    You've got already some suggestions, but the most straightforward way is to run the unhinted statement in both environments and then force the join and access methods you would like to see using hints, in your case probably "USE_HASH(P D)" in your production environment and "FULL(P) FULL(D) USE_NL(P D)" in your development environment should be sufficient to see the costs and estimates returned by the optimizer when using the alternate access and join patterns.
    This gives you a first indication of why the optimizer thinks that the chosen access path is cheaper than the obviously less efficient plan selected in production.
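    A sketch of those hinted variants (the hints are exactly the ones named above; the query body is the one posted):
    -- in production: force the hash join that the development system chose
    SELECT /*+ USE_HASH(p d) */ p.*
    FROM t1 p, t2 d
    WHERE d.emplid = p.id_psoft
      AND p.flag_processed = 'N'
      AND p.desc_pool = :b1
      AND NOT d.name LIKE '%DUPLICATE%'
      AND ROWNUM < 2;
    -- in development: force the full scans and nested loop that production chose
    SELECT /*+ FULL(p) FULL(d) USE_NL(p d) */ p.*
    FROM t1 p, t2 d
    WHERE d.emplid = p.id_psoft
      AND p.flag_processed = 'N'
      AND p.desc_pool = :b1
      AND NOT d.name LIKE '%DUPLICATE%'
      AND ROWNUM < 2;
    Comparing the optimizer's cost and cardinality estimates for both plans in each environment shows where the estimates diverge.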
    As already mentioned by Hemant using bind variables complicates things a bit since EXPLAIN PLAN is not reliable due to bind variable peeking performed when executing the statement, but not when explaining.
    Since you're already on 10g you can get the actual execution plan used for all four variants using DBMS_XPLAN.DISPLAY_CURSOR which tells you more than the TKPROF output in the "Row Source Operation" section regarding the estimates and costs assigned.
    Of course the result of your whole exercise might be highly dependent on the actual bind variable value used.
    By the way, your statement is questionable in principle since you're querying for the first row of an indeterministic result set. It's not deterministic since you've defined no particular order so depending on the way Oracle executes the statement and the physical storage of your data this query might return different results on different runs.
    This is either an indication of a bad design (If the query is supposed to return exactly one row then you don't need the ROWNUM restriction) or an incorrect attempt of a Top 1 query which requires you to specify somehow an order, either by adding a ORDER BY to the statement and wrapping it into an inline view, or e.g. using some analytic functions that allow you specify a RANK by a defined ORDER.
    This is an example of how a deterministic Top N query could look like:
    SELECT *
    FROM (
        SELECT p.*
            FROM t1 p, t2 d
            WHERE d.emplid = p.id_psoft
              AND p.flag_processed = 'N'
              AND p.desc_pool = :b1
              AND NOT d.name LIKE '%DUPLICATE%'
        ORDER BY <order_criteria>
    )
    WHERE ROWNUM <= 1;
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • What is the best way to Optimize a SQL query : call a function or do a join?

    Hi, I want to know what is the best way to optimize a SQL query: call a function inside the SELECT statement, or do a simple join?

    Hi,
    If you're even considering a join, then it will probably be faster.  As Justin said, it depends on lots of factors.
    A user-defined function is only necessary when you can't figure out how to do something in pure SQL, using joins and built-in functions.
    You might choose to have a user-defined function even though you could get the same result with a join.  That is, you realize that the function is slow, but you believe that the convenience of using a function is more important than better performance in that particular case.
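    To make the trade-off concrete, here is a hedged sketch (the table, column, and function names are all hypothetical): the join form gives the optimizer a set-based plan, while the function form pays a SQL-to-PL/SQL context switch on every row.
    -- join version: one set-based lookup the optimizer can execute as a hash join
    SELECT o.order_id, c.customer_name
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id;
    -- function version: get_customer_name() is called once per row of ORDERS
    SELECT o.order_id, get_customer_name(o.customer_id) AS customer_name
    FROM orders o;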

  • SQL Optimization with join and in subselect

    Hello,
    I am having problems finding a way to optimize a query that joins a fact table to several dimension tables (star schema) with a constraint defined as an IN (SELECT ...). I was hoping this constraint would filter the fact table before the joins are performed, but I am seeing just the opposite: the optimizer joins first and filters at the very end. I am using the cost-based optimizer and saw that it evaluates IN subselects last in the predicate order. I tried the PUSH_SUBQ hint with no success.
    Does anyone have any other suggestions?
    Thanks in advance,
    David
    example sql:
    select ....
    from fact, dim1, dim2, .... dim<n>
    where
    fact.dim1_fk in ( select pk from dim1 where code = '10' )
    and fact.dim1_fk = dim1.pk
    and fact.dim2_fk = dim2.pk
    and fact.dim<n>_fk = dim<n>.pk

    The original query probably shouldn't use the IN clause, because in this example it is not necessary. There is no limit on the values returned if a sub-select is used; the limit is only an issue with hard-coded literals like
    .. in (1, 2, 3, 4 ...)
    Something like this is okay even in 8.1.7:
    SQL> select count(*) from all_objects
      2  where object_id in
      3    (select object_id from all_objects);
      COUNT(*)
         32378
    The IN clause has its uses and performs better than EXISTS in some conditions. Blanket statements to avoid IN and use EXISTS instead are just nonsense.
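    And since the subselect in the posted example is redundant, the filter can be applied directly to the dimension table, giving the optimizer an ordinary join predicate to work with (a sketch against the example schema from the question):
    select ....
    from fact, dim1, dim2, .... dim<n>
    where fact.dim1_fk = dim1.pk
    and dim1.code = '10'
    and fact.dim2_fk = dim2.pk
    and fact.dim<n>_fk = dim<n>.pk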
    Martin

  • Optimizer choosing hash joins even when slower

    We have several queries where joins are being evaluated by full scans / hash joins, even though forcing index use results in an execution time about a quarter of the duration of the hash join plan. It still happens if I run DBMS_STATS.GATHER_TABLE_STATS with FOR ALL COLUMNS.
    Is there a stats-gathering option which is more likely to result in an indexed join, without having to get developers to put optimizer hints in their queries?
    11g on SuSE 10.
    Many thanks.

    user10400178 wrote:
    That would require me to post a large amount of schema information as well to be of any added value.
    Surely there are some general recommendations one could make as to how to allow the optimizer to realise that joining through an index is going to be quicker than doing a full scan and hash join to a table.
    If you don't want to post the plans, then as a first step you basically need to verify yourself whether the cardinality estimates returned by the execution plan correspond roughly to the actual cardinalities.
    E.g. in your execution plan there are steps like "FULL TABLE SCAN" and these operations likely have a corresponding "FILTER" predicate in the "Predicate Information" section below the plan.
    As a first step you should run simple count queries ("select count(*) from ... where <FILTER/ACCESS predicates>") on the tables involved, using the "FILTER" and "ACCESS" predicates mentioned, to compare whether the returned number of rows is in the same ballpark as the estimates mentioned in the plan.
    If these estimates are already way off then you know that for some reason the optimizer makes wrong assumptions and that's probably the reason why the suboptimal access pattern is preferred.
    One potential reason could be correlated column values, but since you're already on 11g you could make use of extended column statistics. See here for more details:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/stats.htm#BEIEEIJA
    Another reason might simply be that you're choosing too low an "estimate" sample size for the statistics collection. In 11g you should always use DBMS_STATS.AUTO_SAMPLE_SIZE for the "estimate_percent" parameter of the DBMS_STATS.GATHER_*_STATS procedures. It should generate accurate statistics without the need to analyze all of the data. See here in Greg Rahn's blog for an example:
    http://structureddata.org/2007/09/17/oracle-11g-enhancements-to-dbms_stats/
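    A brief sketch of both suggestions combined (MY_TABLE, COL1 and COL2 are hypothetical names; the DBMS_STATS calls are the documented 11g interfaces):
    -- create a column group so the optimizer can account for the correlation
    -- between COL1 and COL2
    SELECT DBMS_STATS.CREATE_EXTENDED_STATS(USER, 'MY_TABLE', '(COL1, COL2)')
    FROM dual;
    -- regather statistics with the automatic sample size
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => USER,
        tabname          => 'MY_TABLE',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
    END;
    /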
    Regarding the histograms: Oracle 11g by default generates histograms if it deems them to be beneficial. It is controlled by the parameter "METHOD_OPT" which has the default value of "FOR ALL COLUMNS SIZE AUTO". The "SIZE" keyword determines the generation of histograms. You could use "SIZE 1" to prevent histogram generation, "SIZE <n>" to control the number of buckets to use for the histogram or "SIZE AUTO" to let Oracle decide itself when and how to generate histograms.
    Regarding the stored outlines: You could have so called "stored outlines" that force the optimizer to stick to a certain plan. That features was introduced a long time ago and is sometimes also referred to as "plan stability", its main purpose was an attempt to smooth the transition from the rule based optimizer (RBO) to the cost based optimizer (CBO) introduced in Oracle 7 (although you can use it for other purposes, too, of course). Oracle 11g offers now the new "SQL plan management" feature that is supposed to supersede the "plan stability" feature. For more information, look here:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/outlines.htm#PFGRF707
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
    Edited by: Randolf Geist on Oct 16, 2008 4:20 PM
    Sample size note added
    Edited by: Randolf Geist on Oct 16, 2008 6:54 PM
    Outline info added
