Missing images in explain plan export 2.1.1

Hi,
The export of the explain plan creates an images folder, but there are no images in it, so the HTML result is very hard to read without the images. XP SP3.
Juergen

I'm seeing the same issue. If one of the devs doesn't pick it up in here, you may want to submit an SR.

Similar Messages

  • How to export results of explain plan?

    A colleague tells me that in an older version of SQLDeveloper (1.2.1, perhaps?), he was able to export the results of the explain plan operation (I don't know what export forms were available). In the current version (1.5.1), it generates a tree component with information I can look at, but there's no way to write out these results in any form. Am I missing something?
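    For what it's worth, the plan can also be pulled out as plain text with SQL rather than through the GUI. A minimal sketch, assuming a 9i-or-later database; the EMPLOYEES query is just a placeholder, and the text rows returned can be spooled or copied out:
    EXPLAIN PLAN FOR
      SELECT * FROM employees WHERE department_id = 10;
    -- read the plan back as plain rows of text
    SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY());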

    HOW TO TRANSPORT A SQL TUNING SET [ID 456019.1]
    Modified 08-SEP-2008 Type HOWTO Status PUBLISHED
    In this Document
    Goal
    Solution
    Applies to:
    Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.6
    Information in this document applies to any platform.
    10.2.X,11.1.X
    Goal
    The purpose of this note is to show the steps needed to transport a SQL Tuning Set from one database server to another database server.
    Solution
    The SQL Tuning Set name is small_sh_sts_4. It is on a 10.2.0.2.0 database server and contains 4 SQL statements.
    select sql_id, substr(sql_text,1, 15) text
    from dba_sqlset_statements
    where sqlset_name = 'small_sh_sts_4'
    order by sql_id;
    SQL_ID TEXT
    4qdz7j26mdwzb SELECT /*+ my_q
    6y289t15dqj9r SELECT /*+ my_q
    ckm14c67njf0q SELECT /*+ my_q
    g37muqb81wjau SELECT /*+ my_q
    1) Create the staging table on the source system (in this case 10.2.0.2.0).
    execute DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name =>'TEST');
    PL/SQL procedure successfully completed.
    SQL> select count(*) from test;
    COUNT(*)
    0
    2) Populate the table TEST using DBMS_SQLTUNE.PACK_STGTAB_SQLSET on the source system (in this case 10.2.0.2.0).
    execute DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'small_sh_sts_4',staging_table_name => 'TEST');
    PL/SQL procedure successfully completed.
    SQL> select count(*) from test;
    COUNT(*)
    4
    3) Export the table TEST on the source system (in this case 10.2.0.2.0), move it to the destination server, and import it. The staging table TEST can also be moved using the mechanism of choice, such as Data Pump or a database link (a database-link sketch is shown after step 4 below).
    While exporting the table you will see something like
    About to export specified tables via Conventional Path ...
    Table(T) or Partition(T:P) to be exported: (RETURN to quit) > TEST
    . . exporting table TEST 4 rows exported
    . . exporting table TEST_CBINDS 0 rows exported
    . . exporting table TEST_CPLANS 34 rows exported
    Import on the destination system (in this case 11.1.0.6.0).
    While importing the table you will see something like
    . importing SH's objects into SH
    . importing SH's objects into SH
    . . importing table "TEST" 4 rows imported
    . . importing table "TEST_CBINDS" 0 rows imported
    . . importing table "TEST_CPLANS" 34 rows imported
    Import terminated successfully without warnings.
    SQL> select count(*) from test;
    COUNT(*)
    4
    Verify the contents of DBA_SQLSET or USER_SQLSET on the Destination system
    Select NAME,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;
    4) Unpack the table using DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET on the destination system (in this case 11.1.0.6.0).
    execute DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET( sqlset_name => '%',-
    replace => TRUE,-
    staging_table_name => 'TEST');
    Select NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;
    NAME CREATED STATEMENT_COUNT
    small_sh_sts_4 24-AUG-07 4
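    As an alternative to the exp/imp shown in step 3, the staging table can be pulled across with plain SQL over a database link. A minimal sketch, assuming a link named SRC_DB created on the destination that points back to the source schema; the link name, credentials and TNS alias are placeholders:
    -- on the destination, once (placeholder credentials and TNS alias):
    CREATE DATABASE LINK src_db CONNECT TO sh IDENTIFIED BY sh_password USING 'source_tns';
    -- copy the staging table across, then unpack it as in step 4:
    CREATE TABLE test AS SELECT * FROM test@src_db;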

  • Saving/exporting autotrace and explain plan

    Is it at all possible to export or save the explain plan and/or autotrace information from Oracle SQL Developer to text?

    You must first click the Explain icon or the Autotrace icon to generate them before you can check under the tab.
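    If plain text is the goal and the GUI does not offer an export, one workaround outside SQL Developer is to spool the same information from SQL*Plus. A minimal sketch; the query and the output path are placeholders:
    SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS
    SPOOL c:\temp\plan.txt
    SELECT * FROM employees WHERE department_id = 10;
    SPOOL OFF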

  • Exporting the execute explain plan

    Hi-
    Is there a way to export the explain plan (in a pretty-print format) from SQL Developer 1.0.0? Copying and pasting doesn't work, and for large plans it is almost unreadable.
    Thanks in Advance.

    We have an Explain Plan print bug in the development process for 1.1
    Sue

  • Missing random photos in epub export

    I am converting a print book to an ePub. I built the layout in a word processing document in Pages, using inline images, text styles and flush, right and left images. I then exported the file to the ePub format. It is a large book with 400,000 words and 300 images.
    The final product looked great until I started noticing that there were photos missing throughout the document - about 115 out of the 300. There is no pattern to which are missing - i.e. 40 in a row are fine and then 3 in a row are missing. I used 3 different alignment settings for the inline images, and some of each alignment appear fine. The images are all dropped in using paste from an InDesign file that contains a photo with its caption, so as to keep the caption with the image. The dropped image is a PDF that the ePub exporter converts to a PNG file. In the xhtml files inside the ePub, there is no reference at all to the missing images - the image numbers completely skip them and the code just has a space where the image code should appear.
    I was hoping to be able to get close with Pages to ePub and then make minor tweaks to the finished product. I didn't expect to have to hand convert 115 images into PNGs, hand code them into their exact location, code the reference into the OPF file and then assemble it all again. Seems to strongly point to an "undocumented feature" that is ignoring random images.
    Any help or ideas would be greatly appreciated. (I've got about 40 hours into converting this book so far and it looks like it is going to add about 20 more to manually fix all these problems and manually check the output.)
    Thanks,
    Wade

    I was not able to solve this problem using Pages. I took the problematic ePub I exported from Pages and used it as a starting point. I did a bunch of research on the ePub format and figured out how to solve the problem manually. (Make sure you use the latest version of the iTunesConnect_PublisherUserGuide.pdf as it gives a host of great information and things you have to do for it to work.)
    I ended up getting technical and cracking open the ePub file into its component parts - META-INF, mimetype, OPS - by changing the ePub extension to zip and then using Stuffit Archive manager to export the contents. (In a ton of research I found all about the structure of the ePub files and that it is basically a ZIP file with some special qualities.)
    Inside the OPS folder is basically a web folder - with an xhtml page for each chapter, css files, image files and a couple of ePub specific files. I used Dreamweaver to edit the problem chapters and placed the missing images. Also, I converted all of the "dropped images" I brought in from InDesign to PNG files (they contained the image with its caption so that it stayed formatted and in place with the picture). This served to make the overall size of the ePub smaller, as we were bumping up against the iTunes size limit.
    The item that has to be done or this won't work is to correctly modify the "epb.opf" file that is in the OPS folder. When you open it, you will see the pattern of how every image placed in Pages is listed/referenced in this file. From there, I had to add the appropriate reference to every image I had added and make sure that there were no extra images referenced. The ePub will not validate without this step. (Another tool I used was ePubChecker - it will check for this and many other things that will cause your iTunes validation to fail. Invaluable.)
    Once I made all the modifications, I used ePub Zip 1.0.2 to correctly ZIP all the files and convert it to a validatable ePub file. I couldn't create the right kind of ePub file - with all the meta data set correctly - using Stuffit. (Had something to do with how something wrote out from Stuffit). ePub Zip did it perfectly.
    It has been a while since I did all this, and it was an all-nighter, so I may not have listed all the details here. If I have not explained something clearly or left something out, please let me know.
    Hope this helps. It wasn't much fun, but it was really awesome when it finally succeeded in uploading and validating on iTunesConnect. Then we had to wait at least a year for it to appear in the store. It was a little over two weeks, but it seemed close to a year. Good luck with your book!

  • Explain plan results are different in SQL Developer than SQL Plus

    My Environment:
    SQL Developer 1.0.0.15.27
    Platform where SQL Developer is running: Windows XP 2002 SP2
    Oracle Database and Client 9.2.0.7
    Optimizer_mode: FIRST_ROWS
    I have the following SQL statement:
    SELECT a1.comp_id
    FROM temp_au_company a0, au_company a1
    WHERE :b2 = a0.temp_emp_code
    AND a0.comp_id = a1.comp_id
    AND a0.sls_terr_code != a1.sls_terr_code
    AND a1.last_mdfy_date > :b1
    When I run an Explain in SQL Developer I get the following access path (which is the one I really want):
    SELECT STATEMENT
      TABLE ACCESS (BY INDEX ROWID) FEDLINK.AU_COMPANY
        NESTED LOOPS
          INDEX (RANGE SCAN) FEDLINK.UX2_TEMP_AU_COMPANY
          INDEX (RANGE SCAN) FEDLINK.PX1_COMPANY
    However, when I execute the statement with sql_trace turned on and use tkprof to generate the actual access path, the statement executes as follows (which is WAY more expensive):
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 3.58 6.68 28136 29232 0 0
    total 3 3.58 6.69 28136 29232 0 0
    Misses in library cache during parse: 1
    Optimizer goal: FIRST_ROWS
    Parsing user id: 979 (FEDLINK) (recursive depth: 1)
    Rows Row Source Operation
    0 NESTED LOOPS
    0 TABLE ACCESS FULL AU_COMPANY
    0 INDEX RANGE SCAN UX2_TEMP_AU_COMPANY (object id 49783)
    Notice the FULL access of au_company.
    I understand that SQL Developer has nothing to do with why the statement executed the way it did, but why is the Explain in SQL Developer different than the actual execution plan?
    Added note....when I run the explain in SQL Plus it is the same as the actual execution. Here is the explain from SQL Plus:
    explain plan for SELECT a1.comp_id
    FROM temp_au_company a0, au_company a1
    WHERE '1' = a0.temp_emp_code
    AND a0.comp_id = a1.comp_id
    AND a0.sls_terr_code != a1.sls_terr_code
    AND a1.last_mdfy_date > '01-MAY-2006';
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2 | 76 | 2597 |
    | 1 | NESTED LOOPS | | 2 | 76 | 2597 |
    | 2 | TABLE ACCESS FULL | AU_COMPANY | 2 | 42 | 2595 |
    | 3 | INDEX RANGE SCAN | UX2_TEMP_AU_COMPANY | 1 | 17 | 2
    Thanks,
    Brenda

    The explain is different (full scan of au_company in SQL Plus / index access in SQL Developer) even when I use variables in SQL Plus. Here is the output for SQL Plus using variables instead of literals:
    SQL> variable b1 varchar2
    SQL> variable b2 char
    SQL> explain plan for SELECT a1.comp_id
    2 FROM temp_au_company a0, au_company a1
    3 WHERE :b2 = a0.temp_emp_code
    4 AND a0.comp_id = a1.comp_id
    5 AND a0.sls_terr_code != a1.sls_terr_code
    6 AND a1.last_mdfy_date > :b1
    7 /
    Explained.
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 3184 | 118K| 2995 |
    | 1 | HASH JOIN | | 3184 | 118K| 2995 |
    | 2 | INDEX RANGE SCAN | UX2_TEMP_AU_COMPANY | 3187 | 54179 | 3 |
    | 3 | TABLE ACCESS FULL | AU_COMPANY | 24009 | 492K| 2983 |
    Any other ideas? They should be the same.
    Brenda
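    One way to take EXPLAIN PLAN's guesswork out of the comparison is to look at the plan the statement actually used at run time. A minimal sketch for 9.2 using V$SQL_PLAN (the SQL_TEXT filter is just an example; a precise lookup would also match ADDRESS and CHILD_NUMBER). Bear in mind that EXPLAIN PLAN does not peek at bind values and assumes VARCHAR2 binds, which is a common reason for it to disagree with the runtime plan:
    SELECT id, operation, options, object_name
    FROM   v$sql_plan
    WHERE  hash_value = (SELECT hash_value FROM v$sql
                         WHERE  sql_text LIKE 'SELECT a1.comp_id%' AND rownum = 1)
    ORDER  BY id;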

  • Same query, same dataset, same ddl setup, but wildly different explain plan

    Hello o fountains of oracle knowledge!
    We have a problem that brought the rollout of a new version of our system to a customer to a full stop and cost us a whole Sunday to boot.
    The scenario is as follows:
    1. The previous version of the database schema
    2. The current version of the database schema
    3. A migration script to migrate the old schema to the new one
    So we perform the following migration:
    1. Export the previous version database schema
    2. Import into a new schema called schema_old
    3. Create a new schema called schema_new
    4. Run migration script which creates objects, copies data, creates indexes etc etc in schema_new
    The migration runs fine in all environments (development, test and production)
    In our development and test environments performance is stellar; on the customer's production server the performance is terrible.
    This is using the exact same export file (from the production environment) and performing the exact same steps with the exact same migration script.
    Database version is 10.2.0.1.0 EE on all databases. OS is Microsoft Windows Server 2003 EE SP2 on all servers.
    The system is not in any sense under a heavy load (we have tested with no other load than ourselves).
    Looking at the explain plan for a query that is run frequently and does not use bind variables we see wildly different explain plans.
    The explain plan cost on our development and test servers is estimated to *7* for this query and there are no full table scans.
    On the production server the cost is *8433* and there are two full table scans of which one is on the largest table.
    We have tried to run analyse on all objects with very little effect. The plan changed very slightly, but still includes the two full table scans on the problem server and the cost is still the same.
    All tables and indexes are identical (including storage options), created from the same migration script.
    I am currently at a loss for where to look. What could be causing this? I assume this could be caused by some parameter that is set on the server, but I don't know what to look for.
    I would be very grateful for any pointers.
    Thanks,
    Håkon
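    For the last point above, a quick way to compare optimizer-related settings between the two servers is to run the same query against V$PARAMETER on each and diff the output. A minimal sketch; the name filter is only an example of what to compare:
    SELECT name, value, isdefault
    FROM   v$parameter
    WHERE  name LIKE '%optimizer%'
       OR  name IN ('db_file_multiblock_read_count', 'cursor_sharing')
    ORDER  BY name;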

    Thank you for your answer.
    We collected statistics only after we determined that the production server was not behaving according to expectations.
    In this case we used TOAD and its built-in tool to collect statistics for all objects, with the 'Analyze' and 'Compute Statistics' options.
    I am not an expert, so sorry if this is too naive an approach.
    Here is the query:
    SELECT count(0)
    FROM score_result sr, web_scorecard sc, product p
    WHERE sr.score_final_decision like 'VENT%'  
    AND sc.CREDIT_APPLICATION_ID = sr.CREDIT_APPLICATION_ID  
    AND sc.application_complete='Y'   
    AND p.product = sc.web_product   
    AND p.inactive_product = '2';
    I use this as an example, but the problem exists for virtually all queries.
    The output from the 'good' server:
    | Id  | Operation                      | Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT               |                       |     1 |    39 |     7   (0)|
    |   1 |  SORT AGGREGATE                |                       |     1 |    39 |            |
    |   2 |   NESTED LOOPS                 |                       |     1 |    39 |     7   (0)|
    |   3 |    NESTED LOOPS                |                       |     1 |    30 |     6   (0)|
    |   4 |     TABLE ACCESS BY INDEX ROWID| SCORE_RESULT          |     1 |    17 |     4   (0)|
    |   5 |      INDEX RANGE SCAN          | SR_FINAL_DECISION_IDX |     1 |       |     3   (0)|
    |   6 |     TABLE ACCESS BY INDEX ROWID| WEB_SCORECARD         |     1 |    13 |     2   (0)|
    |   7 |      INDEX UNIQUE SCAN         | WEB_SCORECARD_PK      |     1 |       |     1   (0)|
    |   8 |    TABLE ACCESS BY INDEX ROWID | PRODUCT               |     1 |     9 |     1   (0)|
    |   9 |     INDEX UNIQUE SCAN          | PK_PRODUCT            |     1 |       |     0   (0)|
    ---------------------------------------------------------------------------------------------
    The output from the 'bad' server:
    | Id  | Operation                 | Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT          |                       |     1 |    32 |  8344   (3)|
    |   1 |  SORT AGGREGATE           |                       |     1 |    32 |            |
    |   2 |   HASH JOIN               |                       | 10887 |   340K|  8344   (3)|
    |   3 |    TABLE ACCESS FULL      | PRODUCT               |     6 |    42 |     3   (0)|
    |   4 |    HASH JOIN              |                       | 34381 |   839K|  8340   (3)|
    |   5 |     VIEW                  | index$_join$_001      | 34381 |   503K|  2193   (3)|
    |   6 |      HASH JOIN            |                       |       |       |            |
    |   7 |       INDEX RANGE SCAN    | SR_FINAL_DECISION_IDX | 34381 |   503K|   280   (3)|
    |   8 |       INDEX FAST FULL SCAN| SCORE_RESULT_PK       | 34381 |   503K|  1371   (2)|
    |   9 |     TABLE ACCESS FULL     | WEB_SCORECARD         |   489K|  4782K|  6137   (4)|
    ----------------------------------------------------------------------------------------
    I hope the formatting makes this readable.
    Stats (from SQL Developer), good table:
    NUM_ROWS     489716
    BLOCKS     27198
    AVG_ROW_LEN     312
    SAMPLE_SIZE     489716
    LAST_ANALYZED     15.12.2009
    LAST_ANALYZED_SINCE     15.12.2009
    Stats (from SQL Developer), bad table:
    NUM_ROWS     489716
    BLOCKS     27199
    AVG_ROW_LEN     395
    SAMPLE_SIZE     489716
    LAST_ANALYZED     17.12.2009
    LAST_ANALYZED_SINCE     17.12.2009
    I'm unsure what would cause the difference in average row length.
    I could obviously try to tune our SQL statements to work better on the misbehaving server, but I would rather understand why they are different and make sure that we can expect similar behaviour between environments.
    Thank you again for trying to help me.
    Håkon
    Edited by: ergates on 17.des.2009 05:57
    Edited by: ergates on 17.des.2009 06:02
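    For what it's worth, on 10.2 the documented way to gather optimizer statistics is DBMS_STATS rather than the ANALYZE-style options. A minimal sketch for one of the tables involved; the schema name and option values here are placeholders, not a recommendation for this specific system:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SCHEMA_NEW',
        tabname          => 'WEB_SCORECARD',
        cascade          => TRUE,                          -- gather index statistics too
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /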

  • What are the privileges required for explain plan in Oracle 11g database

    I am facing a problem doing an explain plan for a view in an Oracle 11g database. When I select from the view like this:
    select * from zonewisearpu
    it does a select on the view, but when I run an explain plan like
    explain plan for
    select * from zonewisearpu
    I get an "insufficient privileges" error.
    Please let me know if something is being missed here, as I guess system-level privileges are required to execute this.
    I hope my question is clear.
    It's a humble request to revert urgently if possible, as I need to complete a task and do not know the way out.
    Regards

    975148 wrote:
    Thanks for your reply. I have found out that an explain plan is possible on the user's own objects and is not possible on objects granted from a different schema. For example, if I do an explain plan on a view querying tables from a different schema, it does not allow the explain plan to proceed. This could mean that explain plan needs different privileges than just a select.
    Requesting a reply to this.
    Here is a simple test case that I have perfomed
    SQL> create user test1 identified by test1;
    User created.
    SQL> create user test2 identified by test1;
    User created.
    SQL> grant connect, resource to test1,test2;
    Grant succeeded.
    SQL> create table test1.tab1 as select * from v$session;
    Table created.
    SQL> connect test2/test1
    Connected.
    SQL> show user
    USER is "TEST2"
    SQL>
    SQL> explain plan for
      2  select sid,serial#,status,username from test1.tab1 where username<> '';
    Explained.
    SQL>
    So, as can be seen, I am able to do an explain plan from user test2 for tables belonging to user test1.
    As far as privileges are concerned, following is the list
    SQL> select * from dba_role_privs where grantee in ('TEST1','TEST2') order by 1;
    GRANTEE                        GRANTED_ROLE                   ADM DEF
    TEST1                          CONNECT                        NO  YES
    TEST1                          RESOURCE                       NO  YES
    TEST2                          CONNECT                        NO  YES
    TEST2                          RESOURCE                       NO  YES
    SQL>
    SQL> select grantee,owner,table_name,privilege from dba_tab_privs where grantee in ('TEST1','TEST2') order by 1;
    GRANTEE    OWNER      TABLE_NAME           PRIVILEGE
    TEST2      TEST1      TAB1                 SELECT
    SQL>
    SQL>  select * from dba_sys_privs where grantee in ('TEST1','TEST2') order by 1;
    GRANTEE    PRIVILEGE                      ADM
    TEST1      UNLIMITED TABLESPACE           NO
    TEST2      UNLIMITED TABLESPACE           NO
    SQL>
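    For the view case specifically, here is a sketch of the usual shape of the problem and fix, with placeholder object names (the base tables behind zonewisearpu and their owner are not shown in the thread). Explaining a query on a view generally requires privileges on the view's underlying objects, not just SELECT on the view itself, and the error typically seen then is ORA-01039:
    EXPLAIN PLAN FOR SELECT * FROM zonewisearpu;
    -- ORA-01039: insufficient privileges on underlying objects of the view
    -- as the owner of the base tables (or a DBA), grant access to them
    -- (base_table1, base_table2 and report_user are placeholders):
    GRANT SELECT ON base_table1 TO report_user;
    GRANT SELECT ON base_table2 TO report_user;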

  • How to avoid the "images" folder in programmatic exporting of .rpt files to HTML

    In C#, when Crystal Reports is programmatically exported to HTML format, the result is a folder bundled with .htm files and a folder called "images" which holds the .png files associated with those .htm files.
    My requirement is to have both the .htm files and the graphic files (i.e. the .png files) in the same folder; I do not want a new "images" folder to be created.
    This creation of the "images" folder does not happen when we manually export Crystal Reports to HTML format.
    Kindly suggest: how can we avoid the creation of this "images" folder when programmatically exporting to HTML format?
    Following is the approach used to export to HTML format.
    // folderName, fileName, bHTMLEnableSeparatedPages and bHTMLHasPageNavigator are set elsewhere
    ReportDocument mreportDocumentObject = new ReportDocument();
    ExportOptions exportOptions = new ExportOptions();
    exportOptions.ExportDestinationType = ExportDestinationType.DiskFile;
    exportOptions.ExportFormatType = ExportFormatType.HTML40;
    HTMLFormatOptions htmlFormatOptions = new HTMLFormatOptions();
    htmlFormatOptions.HTMLBaseFolderName = folderName;
    htmlFormatOptions.HTMLFileName = fileName;
    htmlFormatOptions.HTMLEnableSeparatedPages = bHTMLEnableSeparatedPages;
    htmlFormatOptions.HTMLHasPageNavigator = bHTMLHasPageNavigator;
    exportOptions.FormatOptions = htmlFormatOptions;
    mreportDocumentObject.Export(exportOptions);
    Edited by: GuruRaj01583 on Jul 23, 2009 1:23 PM

    Hi David,
    I have a requirement to e-mail the exported HTML report. When I e-mail the attachments (i.e. the .htm file and the .png files within the images folder), the receiver has to save the .htm file into a folder (say X), and within the same folder X he has to create an "images" folder and save the .png files into it. Only then will the .htm file load the appropriate .png files. (I believe the .htm is internally designed to load the .png files from the "images" folder.) So it becomes additional work for the user to save these attachments in the folder structure explained above.
    There is another scenario where the reports are stored in a database, which requires the .htm and .png files to be in the same folder.
    These are the scenarios where the images folder is causing the problem.
    I have exported the reports to other formats (i.e. xls, pdf, rtf, txt) and those work fine. Supporting the .html format as part of the above requirement is where I am stuck.
    I wanted to confirm whether the creation of the "images" folder in the CR .NET SDK is by design.
    If that is confirmed and there is no workaround to get the .htm file and .png files into the same folder, then we can rethink our requirement.
    Kindly confirm the above, or kindly suggest any workaround if available.
    Thank You.

  • [8i] Can someone help me on using explain plan, tkprof, etc.?

    I am trying to follow the instructions at When your query takes too long ...
    I am trying to figure out why a simple query takes so long.
    The query is:
    SELECT COUNT(*) AS tot_rows FROM my_table;
    It takes a good 5 minutes or so to run (best case), and the result is around 22 million (total rows).
    My generic username does not (evidently) allow access to PLAN_TABLE, so I had to log on as SYSTEM to run explain plan. In SQL*Plus, I typed in:
    explain plan for (SELECT COUNT(*) AS tot_rows FROM my_table);
    and the response was "Explained."
    Isn't this supposed to give me some sort of output, or am I missing something?
    Then, the next step in the post I linked is to use tkprof. I see that it says it will output a file to a path specified in a parameter. The only problem is, I don't have access to the db's server. I am working remotely, and do not have any way to remotely (or directly) access the db server. Is there any way to have the file output to my local machine, or am I just S.O.L.?
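    For reference, EXPLAIN PLAN only writes rows into PLAN_TABLE; the readable output has to be queried back. A minimal sketch for 8i (DBMS_XPLAN does not exist yet on that version; utlxpls.sql ships with the database, and the CONNECT BY query is the classic hand-rolled alternative; the statement ID is arbitrary):
    EXPLAIN PLAN SET STATEMENT_ID = 'cnt' FOR
      SELECT COUNT(*) AS tot_rows FROM my_table;
    -- either run the supplied script:
    --   @?/rdbms/admin/utlxpls
    -- or query the plan table directly:
    SELECT LPAD(' ', 2*level) || operation || ' ' || options || ' ' || object_name AS plan
    FROM   plan_table
    WHERE  statement_id = 'cnt'
    START WITH id = 0 AND statement_id = 'cnt'
    CONNECT BY PRIOR id = parent_id AND statement_id = 'cnt';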

    SomeoneElse used "create table as" (CTAS), which automatically gathers the stats. You can see the difference before and after stats clearly in this example.
    This is the script:
    drop table ttemp;
    create table ttemp (object_id number not null, owner varchar2(30), object_name varchar2(200));
    alter table ttemp add constraint ttemp_pk primary key (object_id);
    insert into ttemp
    select object_id, owner, object_name
    from dba_objects
    where object_id is not null;
    set autotrace on
    select count(*) from ttemp;
    exec dbms_stats.gather_table_stats('PROD','TTEMP');
    select count(*) from ttemp;
    And the result:
    Table dropped.
    Table created.
    Table altered.
    46888 rows created.
      COUNT(*)
         46888
    1 row selected.
    Execution Plan
               SELECT STATEMENT Optimizer Mode=CHOOSE
       1         SORT AGGREGATE
       2    1      TABLE ACCESS FULL PROD.TTEMP
    Statistics
              1  recursive calls
              1  db block gets
            252  consistent gets
              0  physical reads
            120  redo size
              0  PX remote messages sent
              0  PX remote messages recv'd
              0  buffer is pinned count
              0  workarea memory allocated
              4  workarea executions - optimal
              1  rows processed
    PL/SQL procedure successfully completed.
      COUNT(*)
         46888
    1 row selected.
    Execution Plan
               SELECT STATEMENT Optimizer Mode=CHOOSE (Cost=4 Card=1)
       1         SORT AGGREGATE (Card=1)
       2    1      INDEX FAST FULL SCAN PROD.TTEMP_PK (Cost=4 Card=46 K)
    Statistics
              1  recursive calls
              2  db block gets
            328  consistent gets
              0  physical reads
           8856  redo size
              0  PX remote messages sent
              0  PX remote messages recv'd
              0  buffer is pinned count
              0  workarea memory allocated
              4  workarea executions - optimal
              1  rows processed

  • DECODE is changing the explain plan

    I have a statement with a decode function in the where clause like this:
    AND decode(:cropcode,-1,'-1',sdu.u_crop_group) = decode(:cropcode,-1,'-1',:cropcode)
    When I pass -1 as the parameter for cropcode, the filter effectively becomes "AND '-1' = '-1'" and the statement executes in less than 2 seconds. When I leave out this where clause it takes almost 18 seconds. The result is the same, so I don't understand why the explain plan is so much different and why the statement without the decode does not use index scans.
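    Spelling out what the decode predicate evaluates to in the two cases (and one commonly used equivalent way of writing it, NULL handling aside) may make the behaviour easier to follow:
    -- when :cropcode = -1 the predicate collapses to a constant comparison:
    --   decode(-1, -1, '-1', sdu.u_crop_group) = decode(-1, -1, '-1', -1)
    --   =>  '-1' = '-1'            -- always true, the filter effectively disappears
    -- for any other value it reduces to a plain column filter:
    --   sdu.u_crop_group = :cropcode
    -- which is why the same intent is often written as:
    AND (:cropcode = -1 OR sdu.u_crop_group = :cropcode)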
    Below the explain plans and tkprofs for 1 (without decode) and 2 (with decode).
    explain 1
    SQL Statement which produced this data:
    select * from table(dbms_xplan.display)
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 7080 | 2413K| | 43611 (2)|
    | 1 | SORT ORDER BY | | 7080 | 2413K| 5224K| 43611 (2)|
    |* 2 | FILTER | | | | | |
    |* 3 | HASH JOIN | | 7156 | 2438K| | 43075 (2)|
    | 4 | TABLE ACCESS FULL | DWH_ABS_DETERMINATION | 17745 | 363K| | 83 (0)|
    |* 5 | HASH JOIN OUTER | | 7156 | 2292K| | 42991 (2)|
    |* 6 | HASH JOIN | | 7156 | 1355K| | 42907 (2)|
    |* 7 | HASH JOIN | | 6987 | 1187K| | 19170 (2)|
    |* 8 | HASH JOIN | | 6947 | 963K| | 10376 (1)|
    |* 9 | TABLE ACCESS BY INDEX ROWID | ALIQUOT | 3 | 144 | | 3 (0)|
    | 10 | NESTED LOOPS | | 6907 | 897K| | 8577 (1)|
    |* 11 | HASH JOIN | | 2264 | 187K| | 1782 (2)|
    | 12 | TABLE ACCESS BY INDEX ROWID | SAMPLE | 190 | 4370 | | 17 (0)|
    | 13 | NESTED LOOPS | | 2264 | 152K| | 107 (1)|
    | 14 | NESTED LOOPS | | 12 | 552 | | 25 (0)|
    |* 15 | TABLE ACCESS FULL | SDG_USER | 12 | 288 | | 13 (0)|
    | 16 | TABLE ACCESS BY INDEX ROWID| SDG | 1 | 22 | | 1 (0)|
    |* 17 | INDEX UNIQUE SCAN | PK_SDG | 1 | | | 0 (0)|
    |* 18 | INDEX RANGE SCAN | FK_SAMPLE_SDG | 597 | | | 2 (0)|
    | 19 | TABLE ACCESS FULL | SAMPLE_USER | 1078K| 16M| | 1669 (1)|
    |* 20 | INDEX RANGE SCAN | FK_ALIQUOT_SAMPLE | 3 | | | 2 (0)|
    | 21 | TABLE ACCESS FULL | ALIQUOT_USER | 3403K| 29M| | 1781 (3)|
    | 22 | TABLE ACCESS FULL | TEST | 3423K| 104M| | 8775 (2)|
    |* 23 | TABLE ACCESS FULL | RESULT | 3435K| 65M| | 23718 (2)|
    | 24 | VIEW | PLATE | 21787 | 2851K| | 84 (2)|
    |* 25 | FILTER | | | | | |
    | 26 | TABLE ACCESS FULL | PLATE | 21787 | 574K| | 84 (2)|
    |* 27 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 7 | | 0 (0)|
    |* 28 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 7 | | 0 (0)|
    |* 29 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 7 | | 0 (0)|
    |* 30 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 7 | | 0 (0)|
    |* 31 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 7 | | 0 (0)|
    Predicate Information (identified by operation id):
    2 - filter(("GROUP_ID" IS NULL OR EXISTS (SELECT /*+ */ 0 FROM "LIMS"."OPERATOR_GROUP"
    "OPERATOR_GROUP" WHERE "OPERATOR_ID"="LIMS$OPERATOR_ID"() AND "GROUP_ID"=:B1)) AND ("GROUP_ID" IS NULL
    OR EXISTS (SELECT /*+ */ 0 FROM "LIMS"."OPERATOR_GROUP" "OPERATOR_GROUP" WHERE
    "OPERATOR_ID"="LIMS$OPERATOR_ID"() AND "GROUP_ID"=:B2)) AND ("GROUP_ID" IS NULL OR EXISTS (SELECT /*+
    */ 0 FROM "LIMS"."OPERATOR_GROUP" "OPERATOR_GROUP" WHERE "OPERATOR_ID"="LIMS$OPERATOR_ID"() AND
    "GROUP_ID"=:B3)) AND ("GROUP_ID" IS NULL OR EXISTS (SELECT /*+ */ 0 FROM "LIMS"."OPERATOR_GROUP"
    "OPERATOR_GROUP" WHERE "OPERATOR_ID"="LIMS$OPERATOR_ID"() AND "GROUP_ID"=:B4)))
    3 - access("U_ABS_DETERMINATION"="DETERMINATION_ASSIGNMENT")
    5 - access("PLT"."PLATE_ID"(+)="PLATE_ID")
    6 - access("TEST_ID"="TEST_ID")
    7 - access("ALIQUOT_ID"="ALIQUOT_ID")
    8 - access("ALIQUOT_ID"="ALIQUOT_ID")
    9 - filter("STATUS"='C' OR "STATUS"='P' OR "STATUS"='V')
    11 - access("SAMPLE_ID"="SAMPLE_ID")
    15 - filter("U_ABS_DETERMINATION" IS NOT NULL AND "U_CLIENT_TYPE"='QC' AND
    "U_WEEK_OF_PROCESSING"=TO_NUMBER(:WEEK) AND "U_YEAR_OF_SAMPLE_DELIVERY"=TO_NUMBER(:YEAR))
    17 - access("SDG_ID"="SDG_ID")
    18 - access("SDG_ID"="SDG_ID")
    20 - access("SAMPLE_ID"="SAMPLE_ID")
    23 - filter("NAME"='End result')
    25 - filter("GROUP_ID" IS NULL OR EXISTS (SELECT /*+ */ 0 FROM "LIMS"."OPERATOR_GROUP"
    "OPERATOR_GROUP" WHERE "OPERATOR_ID"="LIMS$OPERATOR_ID"() AND "GROUP_ID"=:B1))
    27 - access("GROUP_ID"=:B1 AND "OPERATOR_ID"="LIMS$OPERATOR_ID"())
    28 - access("GROUP_ID"=:B1 AND "OPERATOR_ID"="LIMS$OPERATOR_ID"())
    29 - access("GROUP_ID"=:B1 AND "OPERATOR_ID"="LIMS$OPERATOR_ID"())
    30 - access("GROUP_ID"=:B1 AND "OPERATOR_ID"="LIMS$OPERATOR_ID"())
    31 - access("GROUP_ID"=:B1 AND "OPERATOR_ID"="LIMS$OPERATOR_ID"())
    Note
    - 'PLAN_TABLE' is old version
    tkprof 1
    TKPROF: Release 10.2.0.3.0 - Production on Tue Jan 13 13:21:47 2009
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Trace file: C:\oracle\product\10.2.0\admin\nautp02\udump\nautp02_ora_880.trc
    Sort options: default
    count = number of times OCI procedure was executed
    cpu = cpu time in seconds executing
    elapsed = elapsed time in seconds executing
    disk = number of physical reads of buffers from disk
    query = number of buffers gotten for consistent read
    current = number of buffers gotten in current mode (usually for update)
    rows = number of rows processed by the fetch or execute call
    SELECT sdu.u_crop_group,
    sd.name as sdg_name,
    ad.variety_name,
    ad.batch_number,
    a.name as aliquot_name,
    sau.u_box_code as box_code,
    sau.u_box_position as box_position,
    t.name as test_name,
    r.original_result,
    plt.name as plate_name,
    concat(chr(a.plate_row + 64),a.plate_column) as plate_position,
    au.u_replicate_number as replicate_number
    FROM lims_sys.sdg sd,
    lims_sys.sdg_user sdu,
    lims_sys.sample sa,
    lims_sys.sample_user sau,
    lims_sys.aliquot a,
    lims_sys.aliquot_user au,
    lims_sys.test t,
    lims_sys.result r,
    lims_sys.plate plt,
    lims_sys.abs_determination ad
    WHERE sd.sdg_id = sdu.sdg_id
    AND sd.sdg_id = sa.sdg_id
    AND sa.sample_id = sau.sample_id
    AND sau.sample_id = a.sample_id
    AND a.aliquot_id = au.aliquot_id
    AND au.aliquot_id = t.aliquot_id
    AND t.test_id = r.test_id
    AND plt.plate_id (+) = a.plate_id
    AND sdu.u_abs_determination = ad.determination_assignment
    AND a.status IN ('V','P','C')
    AND r.name = 'End result'
    AND sdu.u_client_type = 'QC'
    AND sdu.u_year_of_sample_delivery = (:year)
    AND sdu.u_week_of_processing = (:week)
    --AND decode(:cropcode,-1,'-1',sdu.u_crop_group) = decode(:cropcode,-1,'-1',:cropcode)
    ORDER BY box_code, box_position, replicate_number
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 1.15 1.16 0 0 0 0
    Fetch 1 8.53 16.10 227649 241266 0 500
    total 3 9.68 17.27 227649 241266 0 500
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 97
    Rows Row Source Operation
    500 SORT ORDER BY (cr=241266 pr=227649 pw=0 time=16104631 us)
    21311 FILTER (cr=241266 pr=227649 pw=0 time=16246749 us)
    21311 HASH JOIN (cr=241266 pr=227649 pw=0 time=16225434 us)
    17745 TABLE ACCESS FULL DWH_ABS_DETERMINATION (cr=374 pr=0 pw=0 time=69 us)
    21311 HASH JOIN RIGHT OUTER (cr=240892 pr=227649 pw=0 time=16170607 us)
    21895 VIEW PLATE (cr=316 pr=0 pw=0 time=43825 us)
    21895 FILTER (cr=316 pr=0 pw=0 time=43823 us)
    21895 TABLE ACCESS FULL PLATE (cr=316 pr=0 pw=0 time=31 us)
    0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (cr=0 pr=0 pw=0 time=0 us)(object id 45769)
    21311 HASH JOIN (cr=240576 pr=227649 pw=0 time=16106174 us)
    21311 HASH JOIN (cr=133559 pr=121596 pw=0 time=9594130 us)
    21311 HASH JOIN (cr=94323 pr=83281 pw=0 time=6917067 us)
    21311 HASH JOIN (cr=86383 pr=75547 pw=0 time=5509672 us)
    7776 HASH JOIN (cr=8134 pr=0 pw=0 time=285364 us)
    7776 TABLE ACCESS BY INDEX ROWID SAMPLE (cr=572 pr=0 pw=0 time=27152 us)
    7876 NESTED LOOPS (cr=377 pr=0 pw=0 time=488287 us)
    99 HASH JOIN (cr=160 pr=0 pw=0 time=4168 us)
    99 TABLE ACCESS FULL SDG_USER (cr=53 pr=0 pw=0 time=1209 us)
    5719 TABLE ACCESS FULL SDG (cr=107 pr=0 pw=0 time=17 us)
    7776 INDEX RANGE SCAN FK_SAMPLE_SDG (cr=217 pr=0 pw=0 time=623 us)(object id 45990)
    1079741 TABLE ACCESS FULL SAMPLE_USER (cr=7562 pr=0 pw=0 time=24 us)
    3307948 TABLE ACCESS FULL ALIQUOT (cr=78249 pr=75547 pw=0 time=3331129 us)
    3406836 TABLE ACCESS FULL ALIQUOT_USER (cr=7940 pr=7734 pw=0 time=556 us)
    3406832 TABLE ACCESS FULL TEST (cr=39236 pr=38315 pw=0 time=3413192 us)
    3406832 TABLE ACCESS FULL RESULT (cr=107017 pr=106053 pw=0 time=6848487 us)
    0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (cr=0 pr=0 pw=0 time=0 us)(object id 45769)
    0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (cr=0 pr=0 pw=0 time=0 us)(object id 45769)
    0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (cr=0 pr=0 pw=0 time=0 us)(object id 45769)
    0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (cr=0 pr=0 pw=0 time=0 us)(object id 45769)
    select 'x'
    from
    dual
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 0 0 0 1
    total 3 0.00 0.00 0 0 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 97
    Rows Row Source Operation
    1 FAST DUAL (cr=0 pr=0 pw=0 time=3 us)
    begin :id := sys.dbms_transaction.local_transaction_id; end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 0 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 97
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 3 0.00 0.00 0 0 0 0
    Execute 3 1.15 1.16 0 0 0 1
    Fetch 2 8.53 16.10 227649 241266 0 501
    total 8 9.68 17.27 227649 241266 0 502
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 30 0.00 0.00 0 0 0 0
    Execute 30 0.00 0.00 0 0 0 0
    Fetch 30 0.00 0.00 0 40 0 10
    total 90 0.00 0.00 0 40 0 10
    Misses in library cache during parse: 0
    3 user SQL statements in session.
    30 internal SQL statements in session.
    33 SQL statements in session.
    Trace file: C:\oracle\product\10.2.0\admin\nautp02\udump\nautp02_ora_880.trc
    Trace file compatibility: 10.01.00
    Sort options: default
    8 sessions in tracefile.
    3 user SQL statements in trace file.
    30 internal SQL statements in trace file.
    33 SQL statements in trace file.
    6 unique SQL statements in trace file.
    633 lines in trace file.
    23 elapsed seconds in trace file.

    explain 2
    SQL Statement which produced this data:
      select * from table(dbms_xplan.display)
    PLAN_TABLE_OUTPUT
    | Id  | Operation                               | Name                     | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                        |                          |    71 | 24779 |   857   (1)|
    |   1 |  SORT ORDER BY                          |                          |    71 | 24779 |   857   (1)|
    |*  2 |   FILTER                                |                          |       |       |            |
    |*  3 |    TABLE ACCESS BY INDEX ROWID          | RESULT                   |     1 |    20 |     4   (0)|
    |   4 |     NESTED LOOPS                        |                          |    72 | 25128 |   856   (1)|
    |   5 |      NESTED LOOPS                       |                          |    70 | 23030 |   576   (1)|
    |*  6 |       HASH JOIN OUTER                   |                          |    69 | 20493 |   369   (1)|
    |   7 |        NESTED LOOPS                     |                          |    69 | 11247 |   285   (0)|
    |   8 |         NESTED LOOPS                    |                          |    69 | 10626 |   147   (0)|
    |   9 |          NESTED LOOPS                   |                          |    23 |  2438 |    78   (0)|
    |  10 |           NESTED LOOPS                  |                          |    23 |  2070 |    32   (0)|
    |  11 |            NESTED LOOPS                 |                          |     1 |    67 |    15   (0)|
    |  12 |             NESTED LOOPS                |                          |     1 |    45 |    14   (0)|
    |* 13 |              TABLE ACCESS FULL          | SDG_USER                 |     1 |    24 |    13   (0)|
    |  14 |              TABLE ACCESS BY INDEX ROWID| DWH_ABS_DETERMINATION    |     1 |    21 |     1   (0)|
    |* 15 |               INDEX UNIQUE SCAN         | PK_DWH_ABS_DETERMINATION |     1 |       |     0   (0)|
    |  16 |             TABLE ACCESS BY INDEX ROWID | SDG                      |     1 |    22 |     1   (0)|
    |* 17 |              INDEX UNIQUE SCAN          | PK_SDG                   |     1 |       |     0   (0)|
    |  18 |            TABLE ACCESS BY INDEX ROWID  | SAMPLE                   |   190 |  4370 |    17   (0)|
    |* 19 |             INDEX RANGE SCAN            | FK_SAMPLE_SDG            |   597 |       |     2   (0)|
    |  20 |           TABLE ACCESS BY INDEX ROWID   | SAMPLE_USER              |     1 |    16 |     2   (0)|
    |* 21 |            INDEX UNIQUE SCAN            | PK_SAMPLE_USER           |     1 |       |     1   (0)|
    |* 22 |          TABLE ACCESS BY INDEX ROWID    | ALIQUOT                  |     3 |   144 |     3   (0)|
    |* 23 |           INDEX RANGE SCAN              | FK_ALIQUOT_SAMPLE        |     3 |       |     2   (0)|
    |  24 |         TABLE ACCESS BY INDEX ROWID     | ALIQUOT_USER             |     1 |     9 |     2   (0)|
    |* 25 |          INDEX UNIQUE SCAN              | PK_ALIQUOT_USER          |     1 |       |     1   (0)|
    |  26 |        VIEW                             | PLATE                    | 21787 |  2851K|    84   (2)|
    |* 27 |         FILTER                          |                          |       |       |            |
    |  28 |          TABLE ACCESS FULL              | PLATE                    | 21787 |   574K|    84   (2)|
    |* 29 |          INDEX UNIQUE SCAN              | PK_OPERATOR_GROUP        |     1 |     7 |     0   (0)|
    |  30 |       TABLE ACCESS BY INDEX ROWID       | TEST                     |     1 |    32 |     3   (0)|
    |* 31 |        INDEX RANGE SCAN                 | FK_TEST_ALIQUOT          |     1 |       |     2   (0)|
    |* 32 |      INDEX RANGE SCAN                   | FK_RESULT_TEST           |     2 |       |     2   (0)|
    |* 33 |    INDEX UNIQUE SCAN                    | PK_OPERATOR_GROUP        |     1 |     7 |     0   (0)|
    |* 34 |    INDEX UNIQUE SCAN                    | PK_OPERATOR_GROUP        |     1 |     7 |     0   (0)|
    |* 35 |    INDEX UNIQUE SCAN                    | PK_OPERATOR_GROUP        |     1 |     7 |     0   (0)|
    |* 36 |    INDEX UNIQUE SCAN                    | PK_OPERATOR_GROUP        |     1 |     7 |     0   (0)|
    Predicate Information (identified by operation id):
       2 - filter(("GROUP_ID" IS NULL OR  EXISTS (SELECT /*+ */ 0 FROM "LIMS"."OPERATOR_GROUP"
                  "OPERATOR_GROUP" WHERE "OPERATOR_ID"="LIMS$OPERATOR_ID"() AND "GROUP_ID"=:B1)) AND ("GROUP_ID"
                  IS NULL OR  EXISTS (SELECT /*+ */ 0 FROM "LIMS"."OPERATOR_GROUP" "OPERATOR_GROUP" WHERE
                  "OPERATOR_ID"="LIMS$OPERATOR_ID"() AND "GROUP_ID"=:B2)) AND ("GROUP_ID" IS NULL OR  EXISTS
                  (SELECT /*+ */ 0 FROM "LIMS"."OPERATOR_GROUP" "OPERATOR_GROUP" WHERE
                  "OPERATOR_ID"="LIMS$OPERATOR_ID"() AND "GROUP_ID"=:B3)) AND ("GROUP_ID" IS NULL OR  EXISTS
                  (SELECT /*+ */ 0 FROM "LIMS"."OPERATOR_GROUP" "OPERATOR_GROUP" WHERE
                  "OPERATOR_ID"="LIMS$OPERATOR_ID"() AND "GROUP_ID"=:B4)))
       3 - filter("NAME"='End result')
       6 - access("PLT"."PLATE_ID"(+)="PLATE_ID")
      13 - filter("U_ABS_DETERMINATION" IS NOT NULL AND "U_CLIENT_TYPE"='QC' AND
                  "U_WEEK_OF_PROCESSING"=TO_NUMBER(:WEEK) AND "U_YEAR_OF_SAMPLE_DELIVERY"=TO_NUMBER(:YEAR) AND
                  DECODE(:CROPCODE,(-1),'-1',"U_CROP_GROUP")=DECODE(:CROPCODE,(-1),'-1',:CROPCODE))
      15 - access("U_ABS_DETERMINATION"="DETERMINATION_ASSIGNMENT")
      17 - access("SDG_ID"="SDG_ID")
      19 - access("SDG_ID"="SDG_ID")
      21 - access("SAMPLE_ID"="SAMPLE_ID")
      22 - filter("STATUS"='C' OR "STATUS"='P' OR "STATUS"='V')
      23 - access("SAMPLE_ID"="SAMPLE_ID")
      25 - access("ALIQUOT_ID"="ALIQUOT_ID")
      27 - filter("GROUP_ID" IS NULL OR  EXISTS (SELECT /*+ */ 0 FROM "LIMS"."OPERATOR_GROUP"
                  "OPERATOR_GROUP" WHERE "OPERATOR_ID"="LIMS$OPERATOR_ID"() AND "GROUP_ID"=:B1))
      29 - access("GROUP_ID"=:B1 AND "OPERATOR_ID"="LIMS$OPERATOR_ID"())
      31 - access("ALIQUOT_ID"="ALIQUOT_ID")
      32 - access("TEST_ID"="TEST_ID")
      33 - access("GROUP_ID"=:B1 AND "OPERATOR_ID"="LIMS$OPERATOR_ID"())
      34 - access("GROUP_ID"=:B1 AND "OPERATOR_ID"="LIMS$OPERATOR_ID"())
      35 - access("GROUP_ID"=:B1 AND "OPERATOR_ID"="LIMS$OPERATOR_ID"())
      36 - access("GROUP_ID"=:B1 AND "OPERATOR_ID"="LIMS$OPERATOR_ID"())
    Note
       - 'PLAN_TABLE' is old version
    tkprof 2
    TKPROF: Release 10.2.0.3.0 - Production on Tue Jan 13 13:28:26 2009
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Trace file: C:\oracle\product\10.2.0\admin\nautp02\udump\nautp02_ora_5456.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    SELECT sdu.u_crop_group,
           sd.name as sdg_name,
           ad.variety_name,
           ad.batch_number,
           a.name as aliquot_name,
           sau.u_box_code as box_code,
           sau.u_box_position as box_position,
           t.name as test_name,
           r.original_result,
           plt.name as plate_name,
           concat(chr(a.plate_row + 64),a.plate_column) as plate_position,
           au.u_replicate_number as replicate_number
    FROM lims_sys.sdg sd,
         lims_sys.sdg_user sdu,
         lims_sys.sample sa,
         lims_sys.sample_user sau,
         lims_sys.aliquot a,
         lims_sys.aliquot_user au,
         lims_sys.test t,
         lims_sys.result r,
         lims_sys.plate plt,
         lims_sys.abs_determination ad
    WHERE sd.sdg_id = sdu.sdg_id
      AND sd.sdg_id = sa.sdg_id
      AND sa.sample_id = sau.sample_id
      AND sau.sample_id = a.sample_id
      AND a.aliquot_id = au.aliquot_id
      AND au.aliquot_id = t.aliquot_id
      AND t.test_id = r.test_id
      AND plt.plate_id (+) = a.plate_id
      AND sdu.u_abs_determination = ad.determination_assignment
      AND a.status IN ('V','P','C')
      AND r.name = 'End result'
      AND sdu.u_client_type = 'QC'
      AND sdu.u_year_of_sample_delivery = (:year)
      AND sdu.u_week_of_processing = (:week)
      AND decode(:cropcode,-1,'-1',sdu.u_crop_group) = decode(:cropcode,-1,'-1',:cropcode)
    ORDER BY box_code, box_position, replicate_number
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.45       0.87          0          0          0           0
    Fetch        1      1.00       0.99          0     221420          0         500
    total        3      1.46       1.86          0     221420          0         500
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 97 
    Rows     Row Source Operation
        500  SORT ORDER BY (cr=221420 pr=0 pw=0 time=992364 us)
      21311   FILTER  (cr=221420 pr=0 pw=0 time=1128970 us)
      21311    TABLE ACCESS BY INDEX ROWID RESULT (cr=221420 pr=0 pw=0 time=1086345 us)
      42623     NESTED LOOPS  (cr=217549 pr=0 pw=0 time=30006317 us)
      21311      NESTED LOOPS  (cr=174880 pr=0 pw=0 time=809278 us)
      21311       NESTED LOOPS  (cr=110117 pr=0 pw=0 time=553538 us)
      21311        HASH JOIN OUTER (cr=46182 pr=0 pw=0 time=319102 us)
      21311         TABLE ACCESS BY INDEX ROWID ALIQUOT (cr=45866 pr=0 pw=0 time=193037 us)
      29088          NESTED LOOPS  (cr=39885 pr=0 pw=0 time=320084 us)
       7776           NESTED LOOPS  (cr=24267 pr=0 pw=0 time=156721 us)
       7776            NESTED LOOPS  (cr=937 pr=0 pw=0 time=78954 us)
         99             NESTED LOOPS  (cr=454 pr=0 pw=0 time=3826 us)
         99              NESTED LOOPS  (cr=253 pr=0 pw=0 time=2833 us)
         99               TABLE ACCESS FULL SDG_USER (cr=53 pr=0 pw=0 time=1531 us)
         99               TABLE ACCESS BY INDEX ROWID DWH_ABS_DETERMINATION (cr=200 pr=0 pw=0 time=956 us)
         99                INDEX UNIQUE SCAN PK_DWH_ABS_DETERMINATION (cr=101 pr=0 pw=0 time=438 us)(object id 46965)
         99              TABLE ACCESS BY INDEX ROWID SDG (cr=201 pr=0 pw=0 time=707 us)
         99               INDEX UNIQUE SCAN PK_SDG (cr=101 pr=0 pw=0 time=330 us)(object id 46071)
       7776             TABLE ACCESS BY INDEX ROWID SAMPLE (cr=483 pr=0 pw=0 time=16261 us)
       7776              INDEX RANGE SCAN FK_SAMPLE_SDG (cr=217 pr=0 pw=0 time=562 us)(object id 45990)
       7776            TABLE ACCESS BY INDEX ROWID SAMPLE_USER (cr=23330 pr=0 pw=0 time=64710 us)
       7776             INDEX UNIQUE SCAN PK_SAMPLE_USER (cr=15554 pr=0 pw=0 time=33728 us)(object id 46012)
      21311           INDEX RANGE SCAN FK_ALIQUOT_SAMPLE (cr=15618 pr=0 pw=0 time=43423 us)(object id 45346)
      21895         VIEW  PLATE (cr=316 pr=0 pw=0 time=43833 us)
      21895          FILTER  (cr=316 pr=0 pw=0 time=21936 us)
      21895           TABLE ACCESS FULL PLATE (cr=316 pr=0 pw=0 time=37 us)
          0           INDEX UNIQUE SCAN PK_OPERATOR_GROUP (cr=0 pr=0 pw=0 time=0 us)(object id 45769)
      21311        TABLE ACCESS BY INDEX ROWID ALIQUOT_USER (cr=63935 pr=0 pw=0 time=182479 us)
      21311         INDEX UNIQUE SCAN PK_ALIQUOT_USER (cr=42624 pr=0 pw=0 time=99160 us)(object id 45386)
      21311       TABLE ACCESS BY INDEX ROWID TEST (cr=64763 pr=0 pw=0 time=219096 us)
      21311        INDEX RANGE SCAN FK_TEST_ALIQUOT (cr=42669 pr=0 pw=0 time=129354 us)(object id 46222)
      21311      INDEX RANGE SCAN FK_RESULT_TEST (cr=42669 pr=0 pw=0 time=125893 us)(object id 45940)
          0    INDEX UNIQUE SCAN PK_OPERATOR_GROUP (cr=0 pr=0 pw=0 time=0 us)(object id 45769)
          0    INDEX UNIQUE SCAN PK_OPERATOR_GROUP (cr=0 pr=0 pw=0 time=0 us)(object id 45769)
          0    INDEX UNIQUE SCAN PK_OPERATOR_GROUP (cr=0 pr=0 pw=0 time=0 us)(object id 45769)
          0    INDEX UNIQUE SCAN PK_OPERATOR_GROUP (cr=0 pr=0 pw=0 time=0 us)(object id 45769)
    select 'x'
    from
    dual
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          0          0           1
    total        3      0.00       0.00          0          0          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 97 
    Rows     Row Source Operation
          1  FAST DUAL  (cr=0 pr=0 pw=0 time=6 us)
    begin :id := sys.dbms_transaction.local_transaction_id; end;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          2           0
    Execute      1      0.00       0.00          0          0          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          0          2           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 97 
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        3      0.01       0.00          0          0          2           0
    Execute      3      0.45       0.87          0          0          0           1
    Fetch        2      1.00       0.99          0     221420          0         501
    total        8      1.46       1.87          0     221420          2         502
    Misses in library cache during parse: 2
    Misses in library cache during execute: 2
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       43      0.01       0.00          0          0         12           0
    Execute    128      0.00       0.01          0          0          0           0
    Fetch      178      0.00       0.00          0        383          0         465
    total      349      0.01       0.02          0        383         12         465
    Misses in library cache during parse: 5
    Misses in library cache during execute: 5
        3  user  SQL statements in session.
      128  internal SQL statements in session.
      131  SQL statements in session.
    Trace file: C:\oracle\product\10.2.0\admin\nautp02\udump\nautp02_ora_5456.trc
    Trace file compatibility: 10.01.00
    Sort options: default
           1  session in tracefile.
           3  user  SQL statements in trace file.
         128  internal SQL statements in trace file.
         131  SQL statements in trace file.
          19  unique SQL statements in trace file.
        1352  lines in trace file.
         287  elapsed seconds in trace file.

  • I imported 635 images into an iPhoto album so that I can share them via a photo stream. However, iPhoto says there are only 600. When I try to add the missing images it says they are already there. Why can I not see these images in iPhoto?


    Generally I would not use Facebook for sharing any photos, it compresses the photos substantially, and when you have shadows and dark colours you get visible "bands" where there should be subtle gradients, ie at sunsets and sunrises.
    It sounds like you are using two methods to upload to Facebook:
    1. Sharing from within Aperture, which basically syncs Facebook with your Aperture album, so any changes made at either end get synced, hence the deletions from albums, although the original file should still be in your library, just removed from the album. It is like a playlist in iTunes.
    2. Exporting pics and uploading to Facebook from the browser.
    I am not sure how method 1 gets compressed, but I know that uploading hi-res jpegs to Facebook using method 2 results in poor quality images.
    I wouldn't even bother comparing options 1 and 2; they will both be poor images once you view them on Facebook, as opposed to viewing uploaded images on proper image sharing/hosting sites.
    Your problem is not with Aperture, it is using Facebook for showing your work.
    If you export pics from Aperture as high-res JPEGs or TIFFs, your images will be fine.
    If you insist on using Facebook as your way to share your work, then your workflow should be this:
    1. Right click images you want to share.
    2. Select Export version.
    3. Export as 100% size and ensure the export settings are set at 100% quality.
    4. Upload this pic into Facebook.
    This will get you the best image size and resolution on Facebook.
    See how you go.

  • Excel Rendering Extension: Unknown Image Format Image/GIF - SSRS Excel Export

    We are using SSRS 2012 [ExportOpenXML].
    When a report is exported to Excel, we get the message "Unknown Image Format Image/GIF".
    Please suggest a fix for the above issue.
    As per the article "Excel Rendering extension: Unknown Image Format image/x-png", I verified the suggestion: in Visual Studio I selected the GIF file in the Solution Explorer panel and checked the Properties;
    the MIME type is already set to image/gif.
    Note: in the Dev environment it works as expected, but the issue is observed in the QA environment. I suspect some setting is missing in the QA environment, but I'm unable to pin down the difference exactly.
    Quick help would be highly appreciated.

    Hi Niranjan,
    Based on the current description, I understand that there should be no issue when exporting a report containing an Image/GIF image in the Dev environment, because you have changed the MIME type to Image/GIF.
    To narrow down the scope to find the cause, I want to confirm some information below:
    1. Check the Reporting Services edition in the QA environment. If the Reporting Services edition is the same in the QA and Dev environments, you can back up the .rdl file, copy it to the QA environment, and then export the report in Excel format to check
    whether the issue persists.
    2. If you export the report to another format such as PDF rather than Excel in the QA environment, please confirm whether the issue persists.
    3. If you change the MIME type to Image/Png and then export the report to Excel in the Dev and QA environments, please confirm whether the issue persists.
    If there is an error message, please post the details to the forum for further analysis.
    Reference:
    How do you check what version of SQL Server for a database using TSQL?
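    Since step 1 hinges on the QA and Dev editions matching, a quick way to compare them is a short T-SQL check; a minimal sketch using standard SQL Server built-ins (run it on each instance and compare the output):

        -- Version, patch level and edition of the current instance
        SELECT SERVERPROPERTY('ProductVersion') AS product_version,
               SERVERPROPERTY('ProductLevel')   AS product_level,
               SERVERPROPERTY('Edition')        AS edition;

        -- Full version string, including build and OS details
        SELECT @@VERSION AS version_string;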
    Regards,
    Heidi Duan
    Heidi Duan
    TechNet Community Support

  • Mysterious missing images

    Hello,
    I'm fighting a TERRIBLE problem with my Aperture library and I'd love to see if any of you might be able to help. I've got a library with about 40,500 images. All of them are referenced and I've moved through a couple of different storage drives over the years. I just ran a "manage referenced files" on my whole library and it shows that I have about 600 missing files, spread across three offline drives.
    The problem with this is, I don't use those drives any more and I can't find where the missing images are!! I went through every project and did a search for missing images and no project shows a missing image. It only shows itself when I run "manage referenced files." I would just ignore this and use my library normally but it rears its ugly head in one other place: project exporting.
    I tried to export a project with 556 masters to put on a laptop. The export gave me an error that some referenced files wouldn't be copied. The problem is, I ran a search for missing or offline referenced files and it returns zero. I exported the project with the "consolidate referenced images" feature and imported it into my laptop. The 556 images are there, now managed and looking good. If I run a smart search for referenced files it doesn't return any. But if I run "manage referenced files" on the library on my laptop it shows four missing files! It's crazy, I have no clue what's going on.
    I've rebuilt all of my libraries, I've looked at the image folders in the project, and I can't figure out what's going on. Can anyone help?

    No luck on the smart album, I tried it and it reports no missing images:
    http://farm4.static.flickr.com/3289/24218107220895a27c5fo.png
    I also tried the opposite and searched for online images and got 556, the correct number.
    The problem occurs when I click on the library and run "manage referenced files", this is what I get:
    http://farm3.static.flickr.com/2107/242099628501416543c1o.png
    The thing is, the 556 images in this library are ONLY managed - there shouldn't be any referenced images. If you notice, that screen shot shows a master file with the name IMG_1151, but if I run a smart search on the project for "1151" I get no results:
    http://farm4.static.flickr.com/3050/242099643102bbf57fbfo.png
    So there are four "phantom" images that are reported as being in the library but don't really exist. So frustrating! I'd love to hear your other ideas for how to squash this bug.

  • Explain plan shows 300T of TempSpc and 999 hours - tuning request

    Hi,
    I have a query which obtains summary statistics. There is an items table which contains the dictionary of items that can be recorded, and an events table which contains the item ID, a timestamp and a value. The query summarizes the data for each item, e.g. mean, stddev, sample values and length. I have trimmed the query down as far as possible and am having a problem with the large temp space and runtime estimates in the explain plan. Here is the query:
    WITH ChartItems AS (
        SELECT
          ci.itemid,
          ci.label,
          ci.category,
          ci.description,
          COUNT(*),
          COUNT(distinct subject_id)
        FROM mimic2v26.d_chartitems ci
        JOIN mimic2v26.chartevents ce ON ce.itemid = ci.itemid
        --WHERE ci.itemid = 51
        GROUP BY ci.itemid,
                 ci.label,
                 ci.category,
                 ci.description
    )
    --select * from ChartItems;
    , RawData AS (
        SELECT DISTINCT
          ci.itemid,
          ci.label,
          ci.category,
          ci.description,
          last_value(ce.value1) ignore nulls over (partition BY ci.itemid order by
          ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS
          value1_last
        FROM ChartItems ci
        JOIN mimic2v26.chartevents ce ON ce.itemid = ci.itemid
    )
    select * from RawData;
    Which gives this explain plan:
    Plan hash value: 642453121
    | Id  | Operation                    | Name           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |                |  4811G|  1238T|       |  3686M (13)|999:59:59 |
    |   1 |  VIEW                        |                |  4811G|  1238T|       |  3686M (13)|999:59:59 |
    |   2 |   HASH UNIQUE                |                |  4811G|   258T|       |  3686M (13)|999:59:59 |
    |   3 |    WINDOW BUFFER             |                |  4811G|   258T|       |  3686M (13)|999:59:59 |
    |   4 |     SORT GROUP BY            |                |  4811G|   258T|   317T|  3686M (13)|999:59:59 |
    |*  5 |      HASH JOIN               |                |  4811G|   258T|  4943M|    25M (90)| 85:14:28 |
    |   6 |       VIEW                   | VW_DAG_0       |   152M|  3199M|       |  1366K  (1)| 04:33:18 |
    |   7 |        HASH GROUP BY         |                |   152M|  4216M|  5839M|  1366K  (1)| 04:33:18 |
    |*  8 |         HASH JOIN            |                |   152M|  4216M|       |   147K  (2)| 00:29:36 |
    |   9 |          TABLE ACCESS FULL   | D_CHARTITEMS   |  4832 | 96640 |       |     7   (0)| 00:00:01 |
    |  10 |          INDEX FAST FULL SCAN| CHARTEVENTS_O2 |   196M|  1683M|       |   147K  (1)| 00:29:25 |
    |  11 |       TABLE ACCESS FULL      | CHARTEVENTS    |   196M|  6922M|       |   616K  (1)| 02:03:19 |
    Predicate Information (identified by operation id):
       5 - access("CE"."ITEMID"="ITEM_2")
    300T of temp space! Ouch.
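    As an aside, one way to see the optimizer's estimates and the actual row counts side by side, without reading raw TKPROF output, is to execute the statement once with the gather_plan_statistics hint and then display the cursor's plan with DBMS_XPLAN; a minimal sketch (the join below just reuses the two tables from the query above, but the technique works for any statement):

        -- Execute a statement with run-time row-source statistics collection enabled.
        SELECT /*+ gather_plan_statistics */ COUNT(*)
        FROM   mimic2v26.d_chartitems ci
        JOIN   mimic2v26.chartevents  ce ON ce.itemid = ci.itemid;

        -- In the same session, show the last executed plan with E-Rows vs. A-Rows.
        SELECT *
        FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));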
    TKPROF output (I let the query run for a short while. I let it run for ages earlier, but wasn't tracing the session. Should I let it run for longer?):
    TKPROF: Release 11.2.0.3.0 - Development on Tue Jul 10 16:54:27 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Trace file: /oracle/11gr2/app/oracle/diag/rdbms/mimic2/mimic2/trace/mimic2_ora_6507.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    WITH ChartItems AS (
        SELECT
          ci.itemid,
          ci.label,
          ci.category,
          ci.description,
          COUNT(*),
          COUNT(distinct subject_id)
        FROM mimic2v26.d_chartitems ci
        JOIN mimic2v26.chartevents ce ON ce.itemid = ci.itemid
        GROUP BY ci.itemid,
                 ci.label,
                 ci.category,
                 ci.description
    , RawData AS (
        SELECT DISTINCT
          ci.itemid,                                                                                                                                                                                           
          ci.label,                                                                                                                                                                                            
          ci.category,                                                                                                                                                                                         
          ci.description,                                                                                                                                                                                      
          last_value(ce.value1) ignore nulls over (partition BY ci.itemid order by                                                                                                                             
          ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS                                                                                                                            
          value1_last                                                                                                                                                                                          
        FROM ChartItems ci                                                                                                                                                                                     
        JOIN mimic2v26.chartevents ce ON ce.itemid = ci.itemid                                                                                                                                                 
    select * from RawData                                                                                                                                                                                      
    call     count       cpu    elapsed       disk      query    current        rows                                                                                                                           
    Parse        1      0.02       0.02          0          0          0           0                                                                                                                           
    Execute      1      0.00       0.00          0          0          0           0                                                                                                                           
    Fetch        1    582.40     712.23     705238     737351          0           0                                                                                                                           
    total        3    582.43     712.25     705238     737351          0           0                                                                                                                           
    Misses in library cache during parse: 1                                                                                                                                                                    
    Optimizer mode: ALL_ROWS                                                                                                                                                                                   
    Parsing user id: 85 
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             0          0          0  VIEW  (cr=0 pr=0 pw=0 time=184 us cost=3686387254 size=1361581801333882 card=4811243114254)
             0          0          0   HASH UNIQUE (cr=0 pr=0 pw=0 time=180 us cost=3686387254 size=283863343740986 card=4811243114254)
             0          0          0    WINDOW BUFFER (cr=0 pr=0 pw=0 time=74 us cost=3686387254 size=283863343740986 card=4811243114254)
             0          0          0     SORT GROUP BY (cr=0 pr=0 pw=0 time=59 us cost=3686387254 size=283863343740986 card=4811243114254)
    178073889  178073889  178073889      HASH JOIN  (cr=737351 pr=705238 pw=124635 time=476372451 us cost=25572251 size=283863343740986 card=4811243114254)
       6211631    6211631    6211631       VIEW  VW_DAG_0 (cr=613768 pr=581413 pw=35805 time=286546567 us cost=1366486 size=3354399576 card=152472708)
       6211631    6211631    6211631        HASH GROUP BY (cr=613768 pr=581413 pw=35805 time=281271878 us cost=1366486 size=4421708532 card=152472708)
    196182740  196182740  196182740         HASH JOIN  (cr=613768 pr=545608 pw=0 time=557485047 us cost=147987 size=4421708532 card=152472708)
          4832       4832       4832          TABLE ACCESS FULL D_CHARTITEMS (cr=18 pr=16 pw=0 time=13666 us cost=7 size=96640 card=4832)
    196182740  196182740  196182740          INDEX FAST FULL SCAN CHARTEVENTS_O2 (cr=613750 pr=545592 pw=0 time=148428499 us cost=147046 size=1765644660 card=196182740)(object id 96501)
      10683044   10683044   10683044       TABLE ACCESS FULL CHARTEVENTS (cr=123583 pr=123825 pw=0 time=25175510 us cost=616507 size=7258761380 card=196182740)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      db file sequential read                         2        0.01          0.01
      db file scattered read                          3        0.02          0.03
      direct path read                             5265        0.75         77.13
      asynch descriptor resize                        1        0.00          0.00
      direct path write temp                      15077        0.71         45.54
      direct path read temp                        2387        0.29         13.54
      SQL*Net break/reset to client                   1        0.66          0.66
      SQL*Net message from client                     1        0.00          0.00
    SQL ID: 7jby2dxrpkkm9 Plan Hash: 1388734953
    select SYS_CONTEXT('USERENV','SESSIONID')
    from
    dual
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          0          0           1
    total        3      0.00       0.01          0          0          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 85 
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             1          1          1  FAST DUAL  (cr=0 pr=0 pw=0 time=4 us cost=2 size=0 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.02       0.04          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch        2    582.40     712.23     705238     737351          0           1
    total        6    582.43     712.27     705238     737351          0           1
    Misses in library cache during parse: 2
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       4        0.00          0.00
      db file sequential read                         2        0.01          0.01
      db file scattered read                          3        0.02          0.03
      direct path read                             5265        0.75         77.13
      asynch descriptor resize                        1        0.00          0.00
      direct path write temp                      15077        0.71         45.54
      direct path read temp                        2387        0.29         13.54
      SQL*Net break/reset to client                   1        0.66          0.66
      SQL*Net message from client                     3       14.99         15.01
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        0      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
        2  user  SQL statements in session.
        0  internal SQL statements in session.
        2  SQL statements in session.
    Trace file: /oracle/11gr2/app/oracle/diag/rdbms/mimic2/mimic2/trace/mimic2_ora_6507.trc
    Trace file compatibility: 11.1.0.7
    Sort options: default
           1  session in tracefile.
           2  user  SQL statements in trace file.
           0  internal SQL statements in trace file.
           2  SQL statements in trace file.
           2  unique SQL statements in trace file.
       24245  lines in trace file.
          728  elapsed seconds in trace file.
    Optimizer parameters:
    NAME                                               TYPE        VALUE                                                                                               
    optimizer_capture_sql_plan_baselines               boolean     FALSE                                                                                               
    optimizer_dynamic_sampling                         integer     2                                                                                                   
    optimizer_features_enable                          string      11.2.0.3                                                                                            
    optimizer_index_caching                            integer     0                                                                                                   
    optimizer_index_cost_adj                           integer     100                                                                                                 
    optimizer_mode                                     string      ALL_ROWS                                                                                            
    optimizer_secure_view_merging                      boolean     TRUE                                                                                                
    optimizer_use_invisible_indexes                    boolean     FALSE                                                                                               
    optimizer_use_pending_statistics                   boolean     FALSE                                                                                               
    optimizer_use_sql_plan_baselines                   boolean     TRUE
    Can anyone help me figure out what's wrong?
    This is a data warehouse, with up to date statistics. The DB is version 11.2.0.3.0
    Thanks,
    Dan

    rp0428 wrote:
    >
    I trimmed the query down to the simplest that I could which still exhibited the problem
    >
    The DISTINCT isn't the issue. The issue is that the query you posted isn't necessary and doesn't appear to be represented in the plan that you posted.
    That makes it hard to tell where the cardinality misestimates are coming from. How many records does the actual first query (the one with the group by) return? You have a SELECT * commented out - how many rows?
    No, it's not necessary, but then it isn't the full query. The estimates are still wrong when I pass the count columns through to the second query. The full query contains many more analytic functions in the second query, and some additions to the first. It could probably be cleaned up a little, but the current structure makes logical sense. Should I change the aggregates to analytics and move them into the second query? In any case, there definitely seems to be something wrong with the plan.
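    As a rough illustration of the "change the aggregates to analytics" idea above, the two-step query could be collapsed into a single pass over chartevents, with the counts computed as analytic functions. This is only a sketch: it assumes subject_id comes from chartevents and that one row per itemid is the desired output.

        SELECT itemid, label, category, description,
               n_events, n_subjects, value1_last
        FROM  (SELECT ci.itemid, ci.label, ci.category, ci.description,
                      -- per-item counts, done as analytics instead of GROUP BY
                      COUNT(*)                      OVER (PARTITION BY ci.itemid) AS n_events,
                      COUNT(DISTINCT ce.subject_id) OVER (PARTITION BY ci.itemid) AS n_subjects,
                      -- same window as the original value1_last
                      LAST_VALUE(ce.value1) IGNORE NULLS
                        OVER (PARTITION BY ci.itemid ORDER BY ce.charttime
                              ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS value1_last,
                      -- keep a single row per item
                      ROW_NUMBER() OVER (PARTITION BY ci.itemid ORDER BY ce.charttime DESC) AS rn
               FROM   mimic2v26.d_chartitems ci
               JOIN   mimic2v26.chartevents  ce ON ce.itemid = ci.itemid)
        WHERE  rn = 1;

    Whether this helps the real query is another matter; the point is simply that the large fact table is scanned once rather than twice.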
    I don't know what you mean when you say "doesn't appear to be represented in the plan that you posted".
    I just checked the first query and while the temp space has dropped to a reasonable amount, the number of rows in the hash join is still way off (I didn't notice before because I was looking at the temp space). The rows for d_chartitems and the index on chartevents are correct:
    Plan hash value: 1249235674
    | Id  | Operation               | Name           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT        |                |   152M|    35G|       |  1366K  (1)| 04:33:18 |
    |   1 |  VIEW                   |                |   152M|    35G|       |  1366K  (1)| 04:33:18 |
    |   2 |   WINDOW SORT           |                |   152M|  4216M|  5839M|  1366K  (1)| 04:33:18 |
    |*  3 |    HASH JOIN            |                |   152M|  4216M|       |   147K  (2)| 00:29:36 |
    |   4 |     TABLE ACCESS FULL   | D_CHARTITEMS   |  4832 | 96640 |       |     7   (0)| 00:00:01 |
    |   5 |     INDEX FAST FULL SCAN| CHARTEVENTS_O2 |   196M|  1683M|       |   147K  (1)| 00:29:25 |
    Predicate Information (identified by operation id):
       3 - access("CE"."ITEMID"="CI"."ITEMID")
    Edited by: danscott on Jul 10, 2012 5:43 PM
    Edited by: danscott on Jul 11, 2012 6:28 AM
