Roundtrip issue

This may have been covered already but I could not find it in the forums.
I have a bunch of pages that were created in Fireworks and exported to Dreamweaver.
If I need to change some graphics I click on the FW Edit button in DW.
So far so good. It opens the original source file in Fireworks so I can make some updates.
The problem is that for one of the pages it will only open the image element in FW, not the FW source PNG. I've no idea what's causing this as it works for all other pages.
Anybody know a solution for this?
Many thanks for your time,
Trev

BUG REPORT:
ISSUE: Roundtrip Workflow per Tutorial—Native Fireworks CS4 Files In Catalyst
Upon initiating the roundtrip from Catalyst (FC) to Illustrator (AI), I am presented with and dismiss the expected dialogue boxes as per the tutorial. Upon initiating the "Edit Original" command in AI, it opens the ORIGINAL Fireworks file in my PREVIEW application, not Fireworks.
WORKAROUND:
In AI, use the "RELINK" command (upper left-hand corner) to link to the original file. Once this is done, I am presented once again with the expected dialogue boxes, but this time the file opens in Fireworks, where I can make my edits, as it should have done without this step.
- Primal Atom

Similar Messages

  • Roundtrip issues [Lr5.3 to Ps cc]

    Hi everyone,
    In the past two days I have been experiencing an issue with the Lr-Ps roundtrip. I used to do this work from Lr 4 to Ps 6 on my old Win7 machine with no problems.
    I recently moved to the Mac OS X (Mountain Lion) platform and became a member of Adobe's CC. I tried, as always, to do the task from Lr 5.3 to Ps CC, but this time some of my photos don't open.
    I checked Lr's external editor settings; I use PSD, ProPhoto RGB, 16 bit, 300 dpi for Ps CC.
    When I try to load those photos (with Cmd+E) Photoshop opens and the processing circle (the grey circle with the dots) spins once, but the image doesn't appear.
    My recent photos open just fine; I noticed the problem with some of my photos from last year processed with Lr 4 [to 4.4]. I assume that Lr's and Ps's ACR engine is the same (both my apps are updated). I use a ColorChecker Passport for custom profiles and all my files are RAW.
    The thing that confuses me most is that it happens with only some of my photos... (and as I say, I have been doing roundtrips in my workflow for a while now...)
    Did anyone experience something similar?
    If you need any other information about my machine or apps please let me know.
    P.S. They don't open as smart objects either...

    If that did happen, then you will need to reinstall Photoshop to fix it. Try uninstalling first; the uninstaller may have a repair option (I don't recall whether it does or not).

  • 2 round trip problems.

    We are working on three timelines that we are round tripping through Color....
    Problem #1. Timeline in FCP is DVCPRO HD, 23.98 fps. Send to Color. Color project settings are now set to 29.97 fps DF, grayed out, and cannot be changed. When rendered and sent back to FCP, sequence settings are 23.98. Is this normal?
    Problem #2. Only 1 of our timelines (the last one tried) will reconnect with the renders once we roundtrip back to FCP. Two will not. Render files do show up in the correct Finder directory. Sequences are conformed correctly in the (from Color) versions but do not link to the graded render files, so they are not color-corrected sequences.
    I searched for these issues but did not come up with what we are experiencing. Any help is most appreciated.

    OK, an update... Problem #2 was solved by saving all our grades in order, then re-sending the sequences to Color, applying saved grades, rendering and resending to FCP. Don't know why the first round didn't work.
    Question #1 remains: why the mismatch in frame rate?
    EDIT: SOLUTION FOUND... the culprit? A PSD file as a nested sequence (layered file) caused the frame rate issues, and possibly the roundtrip issue as well.
    Message was edited by: avideditor

  • FW File; Roundtrip to Illustrator Issue

    When I roundtrip to Illustrator and select "Edit Original", the graphic opens in "Preview", not FW. The simple graphic (a gradient with a drop shadow) is a stand-alone FW PNG file; "Get Info" also confirms this.
    How do I get this file to open (via Illustrator) in FW, as per the tutorial...?!!!
    Thanks

    BUG REPORT:
    ISSUE: Roundtrip Workflow per Tutorial—Native Fireworks CS4 Files In Catalyst
    Upon initiating the roundtrip from Catalyst (FC) to Illustrator (AI), I am presented with and dismiss the expected dialogue boxes as per the tutorial. Upon initiating the "Edit Original" command in AI, it opens the ORIGINAL Fireworks file in my PREVIEW application, not Fireworks.
    WORKAROUND:
    In AI, use the "RELINK" command (upper left-hand corner) to link to the original file. Once this is done, I am presented once again with the expected dialogue boxes, but this time the file opens in Fireworks, where I can make my edits, as it should have done without this step.
    - Primal Atom

  • Sync Issue on Roundtrip from FCP to Waveform editor?

    I sent a section to Soundtrack Pro from the timeline; upon going back to FCP after the changes (just normalization), the audio track is now out of sync with the video. Anyone else having this problem? Any workarounds? Anyone want to buy my MacBook Pro and Final Cut Studio 2? I'm beginning to think switching to Apple was a huge mistake.

    It's the way that the original file was brought into FCP. STP sees it in its original form. It's not a bug, it really is a feature. You can select which channel you want playing in your track in STP. If there are two tracks that you want to deal with separately, simply copy it into another STP track and turn on only one of the channels in each. They'll live nicely right next to each other.
    As a feature it's very nice--you can switch tracks, eliminate or add more than one channel after the fact. All too often someone hands me a FCP timeline in which they've selected a camera mic instead of a close mic. Piece of cake to switch it around. And I get plenty of tracks that were brought in as 16 or 24. I can pick one of many, or turn them all on. It's about that level of flexibility and control.

  • NULL IS NOT NULL filter issue

    Can someone explain why 'Y' = 'N' is not working with a PARALLEL plan? i.e. with a filter like 'Y' = 'N' specified, if PQ is used the query does not return instantly; in fact it reads the entire table.
    Here is the test case. The goal is to execute only one of the SQLs joined by UNION ALL. I have included 'Y' = 'N' in both SQLs for test purposes.
    DB Version is 10.2.0.4
    Create table test_tbl_01 nologging as select do.* from dba_objects do , dba_objects d1 where rownum < 22000001;
    Create table test_tbl_02 nologging as select do.* from dba_objects do , dba_objects d1 where rownum < 22000001;
    execute DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'TEST_TBL_01');
    execute DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'TEST_TBL_02');
    *Serial path with 2 table join*
    SQL> select
      2    /* parallel(t1,2 ) parallel(t2,2) */
      3    t1.*
      4    from test_tbl_01 t1 ,test_tbl_02 t2
      5    where t1.object_name = t2.object_name
      6    and  'Y' = 'N'
      7    and  t1.object_type = 'TABLE'
      8    union all
      9    select
    10    /* parallel(t1,2 ) parallel(t2,2) */
    11    t1.*
    12    from test_tbl_01 t1 ,test_tbl_02 t2
    13    where t1.object_name = t2.object_name
    14    and  'Y' = 'N'
    15  /
    no rows selected
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 3500703583
    | Id  | Operation            | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |             |     2 |   168 |       |     0   (0)|          |
    |   1 |  UNION-ALL           |             |       |       |       |            |          |
    |*  2 |   FILTER             |             |       |       |       |            |          |
    |*  3 |    HASH JOIN         |             |   660G|    50T|   449M|  6242K (99)| 24:16:38 |
    |*  4 |     TABLE ACCESS FULL| TEST_TBL_01 |  5477K|   386M|       | 41261   (2)| 00:09:38 |
    |   5 |     TABLE ACCESS FULL| TEST_TBL_02 |    22M|   212M|       | 40933   (2)| 00:09:34 |
    |*  6 |   FILTER             |             |       |       |       |            |          |
    |*  7 |    HASH JOIN         |             |  2640G|   201T|   467M|    24M(100)| 95:54:53 |
    |   8 |     TABLE ACCESS FULL| TEST_TBL_02 |    22M|   212M|       | 40933   (2)| 00:09:34 |
    |   9 |     TABLE ACCESS FULL| TEST_TBL_01 |    21M|  1546M|       | 41373   (3)| 00:09:40 |
    Predicate Information (identified by operation id):
       2 - filter(NULL IS NOT NULL)
       3 - access("T1"."OBJECT_NAME"="T2"."OBJECT_NAME")
       4 - filter("T1"."OBJECT_TYPE"='TABLE')
       6 - filter(NULL IS NOT NULL)
       7 - access("T1"."OBJECT_NAME"="T2"."OBJECT_NAME")
    Statistics
              1  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            567  bytes sent via SQL*Net to client
            232  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    *Parallel path with 2 table join*
    SQL> select
      2    /*+ parallel(t1,2 ) parallel(t2,2) */
      3    t1.*
      4    from test_tbl_01 t1 ,test_tbl_02 t2
      5    where t1.object_name = t2.object_name
      6    and  'Y' = 'N'
      7    and  t1.object_type = 'TABLE'
      8    union all
      9    select
    10    /*+ parallel(t1,2 ) parallel(t2,2) */
    11    t1.*
    12    from test_tbl_01 t1 ,test_tbl_02 t2
    13    where t1.object_name = t2.object_name
    14    and  'Y' = 'N'
    15  /
    no rows selected
    Elapsed: 00:01:09.34
    Execution Plan
    Plan hash value: 1557722279
    | Id  | Operation                   | Name        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT            |             |     2 |   168 |       |     0   (0)|          |     |         |            |
    |   1 |  PX COORDINATOR             |             |       |       |       |            |          |     |         |            |
    |   2 |   PX SEND QC (RANDOM)       | :TQ10004    |       |       |       |            |          |  Q1,04 | P->S | QC (RAND)  |
    |   3 |    BUFFER SORT              |             |     2 |   168 |       |            |          |  Q1,04 | PCWP |            |
    |   4 |     UNION-ALL               |             |       |       |       |            |          |  Q1,04 | PCWP |            |
    |*  5 |      FILTER                 |             |       |       |       |            |          |  Q1,04 | PCWC |            |
    |*  6 |       HASH JOIN             |             |   660G|    50T|   224M|  3465K (99)| 13:28:42 |  Q1,04 | PCWP |            |
    |   7 |        PX JOIN FILTER CREATE| :BF0000     |  5477K|   386M|       | 22861   (2)| 00:05:21 |  Q1,04 | PCWP |            |
    |   8 |         PX RECEIVE          |             |  5477K|   386M|       | 22861   (2)| 00:05:21 |  Q1,04 | PCWP |            |
    |   9 |          PX SEND HASH       | :TQ10000    |  5477K|   386M|       | 22861   (2)| 00:05:21 |  Q1,00 | P->P | HASH       |
    |  10 |           PX BLOCK ITERATOR |             |  5477K|   386M|       | 22861   (2)| 00:05:21 |  Q1,00 | PCWC |            |
    |* 11 |            TABLE ACCESS FULL| TEST_TBL_01 |  5477K|   386M|       | 22861   (2)| 00:05:21 |  Q1,00 | PCWP |            |
    |  12 |        PX RECEIVE           |             |    22M|   212M|       | 22679   (1)| 00:05:18 |  Q1,04 | PCWP |            |
    |  13 |         PX SEND HASH        | :TQ10001    |    22M|   212M|       | 22679   (1)| 00:05:18 |  Q1,01 | P->P | HASH       |
    |  14 |          PX JOIN FILTER USE | :BF0000     |    22M|   212M|       | 22679   (1)| 00:05:18 |  Q1,01 | PCWP |            |
    |  15 |           PX BLOCK ITERATOR |             |    22M|   212M|       | 22679   (1)| 00:05:18 |  Q1,01 | PCWC |            |
    |  16 |            TABLE ACCESS FULL| TEST_TBL_02 |    22M|   212M|       | 22679   (1)| 00:05:18 |  Q1,01 | PCWP |            |
    |* 17 |      FILTER                 |             |       |       |       |            |          |  Q1,04 | PCWC |            |
    |* 18 |       HASH JOIN             |             |  2640G|   201T|   233M|    13M(100)| 53:15:52 |  Q1,04 | PCWP |            |
    |  19 |        PX RECEIVE           |             |    22M|   212M|       | 22679   (1)| 00:05:18 |  Q1,04 | PCWP |            |
    |  20 |         PX SEND HASH        | :TQ10002    |    22M|   212M|       | 22679   (1)| 00:05:18 |  Q1,02 | P->P | HASH       |
    |  21 |          PX BLOCK ITERATOR  |             |    22M|   212M|       | 22679   (1)| 00:05:18 |  Q1,02 | PCWC |            |
    |  22 |           TABLE ACCESS FULL | TEST_TBL_02 |    22M|   212M|       | 22679   (1)| 00:05:18 |  Q1,02 | PCWP |            |
    |  23 |        PX RECEIVE           |             |    21M|  1546M|       | 22924   (2)| 00:05:21 |  Q1,04 | PCWP |            |
    |  24 |         PX SEND HASH        | :TQ10003    |    21M|  1546M|       | 22924   (2)| 00:05:21 |  Q1,03 | P->P | HASH       |
    |  25 |          PX BLOCK ITERATOR  |             |    21M|  1546M|       | 22924   (2)| 00:05:21 |  Q1,03 | PCWC |            |
    |  26 |           TABLE ACCESS FULL | TEST_TBL_01 |    21M|  1546M|       | 22924   (2)| 00:05:21 |  Q1,03 | PCWP |            |
    Predicate Information (identified by operation id):
       5 - filter(NULL IS NOT NULL)
       6 - access("T1"."OBJECT_NAME"="T2"."OBJECT_NAME")
      11 - filter("T1"."OBJECT_TYPE"='TABLE')
      17 - filter(NULL IS NOT NULL)
      18 - access("T1"."OBJECT_NAME"="T2"."OBJECT_NAME")
    Statistics
           1617  recursive calls
              3  db block gets
         488929  consistent gets
         493407  physical reads
            636  redo size
            567  bytes sent via SQL*Net to client
            232  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              6  sorts (memory)
              0  sorts (disk)
              0  rows processed
    However a single table with UNION ALL and PQ works.
    *No joins (i.e. single table with PQ): the issue does not show up.*
    _*SERIAL PLAN with one Table*_
    SQL> select
      2    /* parallel(t1,2 )   */
      3    t1.*
      4    from test_tbl_01 t1
      5    where 'Y' = 'N'
      6    and  t1.object_type = 'TABLE'
      7    union all
      8    select
      9    /* parallel(t1,2 )   */
    10    t1.*
    11    from test_tbl_01 t1
    12    where 'Y' = 'N'
    13  /
    no rows selected
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 2870519681
    | Id  | Operation           | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |             |     2 |   148 |     0   (0)|          |
    |   1 |  UNION-ALL          |             |       |       |            |          |
    |*  2 |   FILTER            |             |       |       |            |          |
    |*  3 |    TABLE ACCESS FULL| TEST_TBL_01 |  5477K|   386M| 41261   (2)| 00:09:38 |
    |*  4 |   FILTER            |             |       |       |            |          |
    |   5 |    TABLE ACCESS FULL| TEST_TBL_01 |    21M|  1546M| 41373   (3)| 00:09:40 |
    Predicate Information (identified by operation id):
       2 - filter(NULL IS NOT NULL)
       3 - filter("T1"."OBJECT_TYPE"='TABLE')
       4 - filter(NULL IS NOT NULL)
    Statistics
              0  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            567  bytes sent via SQL*Net to client
            232  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    _*PARALLEL PLAN with one Table*_
    SQL> select
      2    /*+ parallel(t1,2 )      */
      3    t1.*
      4    from test_tbl_01 t1
      5    where 'Y' = 'N'
      6    and  t1.object_type = 'TABLE'
      7    union all
      8    select
      9    /*+ parallel(t1,2 )      */
    10    t1.*
    11    from test_tbl_01 t1
    12    where 'Y' = 'N'
    13  /
    no rows selected
    Elapsed: 00:00:00.09
    Execution Plan
    Plan hash value: 3114025180
    | Id  | Operation              | Name        | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT       |             |     2 |   148 |     0   (0)|          |        |      |            |
    |   1 |  PX COORDINATOR        |             |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)  | :TQ10000    |       |       |            |          |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    UNION-ALL           |             |       |       |            |          |  Q1,00 | PCWP |            |
    |*  4 |     FILTER             |             |       |       |            |          |  Q1,00 | PCWC |            |
    |   5 |      PX BLOCK ITERATOR |             |  5477K|   386M| 22861   (2)| 00:05:21 |  Q1,00 | PCWC |            |
    |*  6 |       TABLE ACCESS FULL| TEST_TBL_01 |  5477K|   386M| 22861   (2)| 00:05:21 |  Q1,00 | PCWP |            |
    |*  7 |     FILTER             |             |       |       |            |          |  Q1,00 | PCWC |            |
    |   8 |      PX BLOCK ITERATOR |             |    21M|  1546M| 22924   (2)| 00:05:21 |  Q1,00 | PCWC |            |
    |   9 |       TABLE ACCESS FULL| TEST_TBL_01 |    21M|  1546M| 22924   (2)| 00:05:21 |  Q1,00 | PCWP |            |
    Predicate Information (identified by operation id):
       4 - filter(NULL IS NOT NULL)
       6 - filter("T1"."OBJECT_TYPE"='TABLE')
       7 - filter(NULL IS NOT NULL)
    Statistics
             28  recursive calls
              3  db block gets
              7  consistent gets
              0  physical reads
            628  redo size
            567  bytes sent via SQL*Net to client
            232  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
              0  rows processed

    The same behaviour appears in 11.1.0.6, and you don't need such a large data set to prove the point. The parallel distribution may change to a broadcast with a smaller data set, but I demonstrated the effect when my two tables simply selected 30,000 rows each from all_objects.
    I think you should pass this to Oracle Corp. as a bug - using the smaller data set.
    The problem seems to be the way that Oracle combines multiple lines of a plan into groups of operations (as in PCWC, PCWC, PCWP). It looks like this particular example has managed to fold the FILTER into a group in such a way that Oracle has lost track of the fact that it is a 'pre-empting, i.e. always false' filter rather than an ordinary data filter; consequently the filter doesn't apply until after the hash join starts running.
    In my example (which did a broadcast distribution) I could see that Oracle read the entire first table, then started to read the second table, but stopped after one row of the second table, because my plan allowed the join and filter to be applied immediately after the first row from the second table. And I think Oracle decided that the filter was always going to be false at that moment - so stopped running the second tablescan. You've used a hash/hash distribution, which has resulted in both scans completing because the slaves in each layer don't talk to each other.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
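    A minimal reproduction along the lines Jonathan suggests might look like the sketch below. The table names, row counts and hints here are illustrative only, not taken from either poster's test:
    -- Hypothetical smaller test case (~30,000 rows each from all_objects).
    create table t1 nologging as select * from all_objects where rownum <= 30000;
    create table t2 nologging as select * from all_objects where rownum <= 30000;
    execute DBMS_STATS.GATHER_TABLE_STATS(user, 'T1');
    execute DBMS_STATS.GATHER_TABLE_STATS(user, 'T2');
    -- Serial: the always-false predicate becomes FILTER (NULL IS NOT NULL) above
    -- the join, so the statement returns instantly with no block reads.
    select t1.* from t1, t2
    where t1.object_name = t2.object_name and 'Y' = 'N';
    -- Parallel: the same filter is folded into the PX slave work, so the
    -- tablescans can complete before the filter takes effect.
    select /*+ parallel(t1,2) parallel(t2,2) */ t1.*
    from t1, t2
    where t1.object_name = t2.object_name and 'Y' = 'N';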

  • Post Upgrade SQL Performance Issue

    Hello,
    I just upgraded/migrated my database from 11.1.0.6 SE to 11.2.0.3 EE. I did this with a Data Pump export/import out of the 11.1.0.6 database and into a new 11.2.0.3 database. Both the old and the new database are on the same Linux server. The new database has 2GB more RAM assigned to its SGA than the old one. Both DBs are using AMM.
    The strange part is I have a SQL statement that completes in 1 second in the Old DB and takes 30 seconds in the new one. I even moved the SQL Plan from the Old DB into the New DB so they are using the same plan.
    To sum up the issue. I have one SQL statement using the same SQL Plan running at dramatically different speeds on two different databases on the same server. The databases are 11.1.0.7 SE and 11.2.0.3 EE.
    Not sure what is going on or how to fix it. Any help would be great!
    I have included Explains and Auto Traces from both NEW and OLD databases.
    NEW DB Explain Plan (Slow)
    Plan hash value: 1046170788
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 94861 | 193M| | 74043 (1)| 00:18:52 |
    | 1 | SORT ORDER BY | | 94861 | 193M| 247M| 74043 (1)| 00:18:52 |
    | 2 | VIEW | PBM_MEMBER_INTAKE_VW | 94861 | 193M| | 31803 (1)| 00:08:07 |
    | 3 | UNION-ALL | | | | | | |
    | 4 | NESTED LOOPS OUTER | | 1889 | 173K| | 455 (1)| 00:00:07 |
    |* 5 | HASH JOIN | | 1889 | 164K| | 454 (1)| 00:00:07 |
    | 6 | TABLE ACCESS FULL| PBM_CODES | 2138 | 21380 | | 8 (0)| 00:00:01 |
    |* 7 | TABLE ACCESS FULL| PBM_MEMBER_INTAKE | 1889 | 145K| | 446 (1)| 00:00:07 |
    |* 8 | INDEX UNIQUE SCAN | ADJ_PK | 1 | 5 | | 1 (0)| 00:00:01 |
    | 9 | NESTED LOOPS | | 92972 | 9987K| | 31347 (1)| 00:08:00 |
    | 10 | NESTED LOOPS OUTER| | 92972 | 8443K| | 31346 (1)| 00:08:00 |
    |* 11 | TABLE ACCESS FULL| PBM_MEMBERS | 92972 | 7989K| | 31344 (1)| 00:08:00 |
    |* 12 | INDEX UNIQUE SCAN| ADJ_PK | 1 | 5 | | 1 (0)| 00:00:01 |
    |* 13 | INDEX UNIQUE SCAN | PBM_EMPLOYER_UK1 | 1 | 17 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    5 - access("C"."CODE_ID"="MI"."STATUS_ID")
    7 - filter("MI"."CLAIM_NUMBER" LIKE '%A0000250%' AND "MI"."CLAIM_NUMBER" IS NOT NULL)
    8 - access("MI"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
    11 - filter("M"."THEIR_GROUP_ID" LIKE '%A0000250%' AND "M"."THEIR_GROUP_ID" IS NOT NULL)
    12 - access("M"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
    13 - access("M"."GROUP_CODE"="E"."GROUP_CODE" AND "M"."EMPLOYER_CODE"="E"."EMPLOYER_CODE")
    Note
    - SQL plan baseline "SYS_SQL_PLAN_a3c20fdcecd98dfe" used for this statement
    OLD DB Explain Plan (Fast)
    Plan hash value: 1046170788
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 95201 | 193M| | 74262 (1)| 00:14:52 |
    | 1 | SORT ORDER BY | | 95201 | 193M| 495M| 74262 (1)| 00:14:52 |
    | 2 | VIEW | PBM_MEMBER_INTAKE_VW | 95201 | 193M| | 31853 (1)| 00:06:23 |
    | 3 | UNION-ALL | | | | | | |
    | 4 | NESTED LOOPS OUTER | | 1943 | 178K| | 486 (1)| 00:00:06 |
    |* 5 | HASH JOIN | | 1943 | 168K| | 486 (1)| 00:00:06 |
    | 6 | TABLE ACCESS FULL| PBM_CODES | 2105 | 21050 | | 7 (0)| 00:00:01 |
    |* 7 | TABLE ACCESS FULL| PBM_MEMBER_INTAKE | 1943 | 149K| | 479 (1)| 00:00:06 |
    |* 8 | INDEX UNIQUE SCAN | ADJ_PK | 1 | 5 | | 0 (0)| 00:00:01 |
    | 9 | NESTED LOOPS | | 93258 | 9M| | 31367 (1)| 00:06:17 |
    | 10 | NESTED LOOPS OUTER| | 93258 | 8469K| | 31358 (1)| 00:06:17 |
    |* 11 | TABLE ACCESS FULL| PBM_MEMBERS | 93258 | 8014K| | 31352 (1)| 00:06:17 |
    |* 12 | INDEX UNIQUE SCAN| ADJ_PK | 1 | 5 | | 0 (0)| 00:00:01 |
    |* 13 | INDEX UNIQUE SCAN | PBM_EMPLOYER_UK1 | 1 | 17 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    5 - access("C"."CODE_ID"="MI"."STATUS_ID")
    7 - filter("MI"."CLAIM_NUMBER" LIKE '%A0000250%')
    8 - access("MI"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
    11 - filter("M"."THEIR_GROUP_ID" LIKE '%A0000250%')
    12 - access("M"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
    13 - access("M"."GROUP_CODE"="E"."GROUP_CODE" AND "M"."EMPLOYER_CODE"="E"."EMPLOYER_CODE")
    NEW DB Auto trace (Slow)
    active txn count during cleanout     0
    blocks decrypted     0
    buffer is not pinned count     664129
    buffer is pinned count     3061793
    bytes received via SQL*Net from client     3339
    bytes sent via SQL*Net to client     28758
    Cached Commit SCN referenced     662366
    calls to get snapshot scn: kcmgss     3
    calls to kcmgas     0
    calls to kcmgcs     8
    CCursor + sql area evicted     0
    cell physical IO interconnect bytes     0
    cleanout - number of ktugct calls     0
    cleanouts only - consistent read gets     0
    cluster key scan block gets     0
    cluster key scans     0
    commit cleanout failures: block lost     0
    commit cleanout failures: callback failure      0
    commit cleanouts     0
    commit cleanouts successfully completed     0
    Commit SCN cached     0
    commit txn count during cleanout     0
    concurrency wait time     0
    consistent changes     0
    consistent gets     985371
    consistent gets - examination     2993
    consistent gets direct     0
    consistent gets from cache     985371
    consistent gets from cache (fastpath)     982093
    CPU used by this session     3551
    CPU used when call started     3551
    CR blocks created     0
    cursor authentications     1
    data blocks consistent reads - undo records applied     0
    db block changes     0
    db block gets     0
    db block gets direct     0
    db block gets from cache     0
    db block gets from cache (fastpath)     0
    DB time     3553
    deferred (CURRENT) block cleanout applications     0
    dirty buffers inspected     0
    Effective IO time     0
    enqueue releases     0
    enqueue requests     0
    execute count     3
    file io wait time     0
    free buffer inspected     0
    free buffer requested     0
    heap block compress     0
    Heap Segment Array Updates     0
    hot buffers moved to head of LRU     0
    HSC Heap Segment Block Changes     0
    immediate (CR) block cleanout applications     0
    immediate (CURRENT) block cleanout applications     0
    IMU Flushes     0
    IMU ktichg flush     0
    IMU Redo allocation size     0
    IMU undo allocation size     0
    index fast full scans (full)     2
    index fetch by key     0
    index scans kdiixs1     12944
    lob reads     0
    LOB table id lookup cache misses     0
    lob writes     0
    lob writes unaligned     0
    logical read bytes from cache     -517775360
    logons cumulative     0
    logons current     0
    messages sent     0
    no buffer to keep pinned count     10
    no work - consistent read gets     982086
    non-idle wait count     6
    non-idle wait time     0
    Number of read IOs issued     0
    opened cursors cumulative     4
    opened cursors current     1
    OS Involuntary context switches     853
    OS Maximum resident set size     0
    OS Page faults     0
    OS Page reclaims     2453
    OS System time used     9
    OS User time used     3549
    OS Voluntary context switches     238
    parse count (failures)     0
    parse count (hard)     0
    parse count (total)     1
    parse time cpu     0
    parse time elapsed     0
    physical read bytes     0
    physical read IO requests     0
    physical read total bytes     0
    physical read total IO requests     0
    physical read total multi block requests     0
    physical reads     0
    physical reads cache     0
    physical reads cache prefetch     0
    physical reads direct     0
    physical reads direct (lob)     0
    physical write bytes     0
    physical write IO requests     0
    physical write total bytes     0
    physical write total IO requests     0
    physical writes     0
    physical writes direct     0
    physical writes direct (lob)     0
    physical writes non checkpoint     0
    pinned buffers inspected     0
    pinned cursors current     0
    process last non-idle time     0
    recursive calls     0
    recursive cpu usage     0
    redo entries     0
    redo size     0
    redo size for direct writes     0
    redo subscn max counts     0
    redo synch time     0
    redo synch time (usec)     0
    redo synch writes     0
    Requests to/from client     3
    rollbacks only - consistent read gets     0
    RowCR - row contention     0
    RowCR attempts     0
    rows fetched via callback     0
    session connect time     0
    session cursor cache count     1
    session cursor cache hits     3
    session logical reads     985371
    session pga memory     131072
    session pga memory max     0
    session uga memory     392928
    session uga memory max     0
    shared hash latch upgrades - no wait     284
    shared hash latch upgrades - wait     0
    sorts (memory)     3
    sorts (rows)     243
    sql area evicted     0
    sql area purged     0
    SQL*Net roundtrips to/from client     4
    switch current to new buffer     0
    table fetch by rowid     1861456
    table fetch continued row     9
    table scan blocks gotten     0
    table scan rows gotten     0
    table scans (short tables)     0
    temp space allocated (bytes)     0
    undo change vector size     0
    user calls     7
    user commits     0
    user I/O wait time     0
    workarea executions - optimal     10
    workarea memory allocated     342
    OLD DB Auto trace (Fast)
    active txn count during cleanout     0
    buffer is not pinned count     4
    buffer is pinned count     101
    bytes received via SQL*Net from client     1322
    bytes sent via SQL*Net to client     9560
    calls to get snapshot scn: kcmgss     15
    calls to kcmgas     0
    calls to kcmgcs     0
    calls to kcmgrs     1
    cleanout - number of ktugct calls     0
    cluster key scan block gets     0
    cluster key scans     0
    commit cleanouts     0
    commit cleanouts successfully completed     0
    concurrency wait time     0
    consistent changes     0
    consistent gets     117149
    consistent gets - examination     56
    consistent gets direct     115301
    consistent gets from cache     1848
    consistent gets from cache (fastpath)     1792
    CPU used by this session     118
    CPU used when call started     119
    cursor authentications     1
    db block changes     0
    db block gets     0
    db block gets from cache     0
    db block gets from cache (fastpath)     0
    DB time     123
    deferred (CURRENT) block cleanout applications     0
    Effective IO time     2012
    enqueue conversions     3
    enqueue releases     2
    enqueue requests     2
    enqueue waits     1
    execute count     2
    free buffer requested     0
    HSC Heap Segment Block Changes     0
    IMU Flushes     0
    IMU ktichg flush     0
    index fast full scans (full)     0
    index fetch by key     101
    index scans kdiixs1     0
    lob writes     0
    lob writes unaligned     0
    logons cumulative     0
    logons current     0
    messages sent     0
    no work - consistent read gets     117080
    Number of read IOs issued     1019
    opened cursors cumulative     3
    opened cursors current     1
    OS Involuntary context switches     54
    OS Maximum resident set size     7868
    OS Page faults     12
    OS Page reclaims     2911
    OS System time used     57
    OS User time used     71
    OS Voluntary context switches     25
    parse count (failures)     0
    parse count (hard)     0
    parse count (total)     3
    parse time cpu     0
    parse time elapsed     0
    physical read bytes     944545792
    physical read IO requests     1019
    physical read total bytes     944545792
    physical read total IO requests     1019
    physical read total multi block requests     905
    physical reads     115301
    physical reads cache     0
    physical reads cache prefetch     0
    physical reads direct     115301
    physical reads prefetch warmup     0
    process last non-idle time     0
    recursive calls     0
    recursive cpu usage     0
    redo entries     0
    redo size     0
    redo synch writes     0
    rows fetched via callback     0
    session connect time     0
    session cursor cache count     1
    session cursor cache hits     2
    session logical reads     117149
    session pga memory     -983040
    session pga memory max     0
    session uga memory     0
    session uga memory max     0
    shared hash latch upgrades - no wait     0
    sorts (memory)     2
    sorts (rows)     157
    sql area purged     0
    SQL*Net roundtrips to/from client     3
    table fetch by rowid     0
    table fetch continued row     0
    table scan blocks gotten     117077
    table scan rows gotten     1972604
    table scans (direct read)     1
    table scans (long tables)     1
    table scans (short tables)     2
    undo change vector size     0
    user calls     5
    user I/O wait time     0
    workarea executions - optimal     4

    Hi Srini,
    Yes, the stats on the tables and indexes are current in both DBs. However the NEW DB has "System Stats" in sys.aux_stats$ and the OLD DB does not. The old DB has optimizer_index_caching=0 and optimizer_index_cost_adj=100. The new DB has them at optimizer_index_caching=90 and optimizer_index_cost_adj=25, but should not be using them because of the "System Stats".
    Also, I thought none of the SQL optimizer stuff would matter because I forced in my own SQL plan using SPM.
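    Two hypothetical checks (not from the original post) that could help compare the two environments are sketched below: listing the system statistics that exist only on the new DB, and confirming that the SPM baseline named in the plan note really is enabled and accepted:
    -- System statistics present on the new DB but not the old one:
    select sname, pname, pval1 from sys.aux_stats$;
    -- They can be removed for a test run (back them up first):
    -- execute DBMS_STATS.DELETE_SYSTEM_STATS;
    -- Confirm the baseline from the plan note is enabled and accepted:
    select sql_handle, plan_name, enabled, accepted, fixed
    from   dba_sql_plan_baselines
    where  plan_name = 'SYS_SQL_PLAN_a3c20fdcecd98dfe';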
    Differences in init.ora
    OLD-11     _optimizer_push_pred_cost_based = FALSE
    NEW-15     audit_sys_operations = FALSE
         audit_trail = "DB, EXTENDED"
         awr_snapshot_time_offset = 0
    OLD-16     audit_sys_operations = TRUE
         audit_trail = "XML, EXTENDED"
    NEW-22     cell_offload_compaction = "ADAPTIVE"
         cell_offload_decryption = TRUE
         cell_offload_plan_display = "AUTO"
         cell_offload_processing = TRUE
    NEW-28     clonedb = FALSE
    NEW-32     compatible = "11.2.0.0.0"
    OLD-27     compatible = "11.1.0.0.0"
    NEW-37     cursor_bind_capture_destination = "memory+disk"
         cursor_sharing = "FORCE"
    OLD-32     cursor_sharing = "EXACT"
    NEW-50     db_cache_size = 4294967296
         db_domain = "my.com"
    OLD-44     db_cache_size = 0
    NEW-54     db_flash_cache_size = 0
    NEW-58     db_name = "NEWDB"
         db_recovery_file_dest_size = 214748364800
    OLD-50     db_name = "OLDDB"
         db_recovery_file_dest_size = 8438939648
    NEW-63     db_unique_name = "NEWDB"
         db_unrecoverable_scn_tracking = TRUE
         db_writer_processes = 2
    OLD-55     db_unique_name = "OLDDB"
         db_writer_processes = 1
    NEW-68     deferred_segment_creation = TRUE
    NEW-71     dispatchers = "(PROTOCOL=TCP) (SERVICE=NEWDBXDB)"
    OLD-61     dispatchers = "(PROTOCOL=TCP) (SERVICE=OLDDBXDB)"
    NEW-73     dml_locks = 5068
         dst_upgrade_insert_conv = TRUE
    OLD-63     dml_locks = 3652
         drs_start = FALSE
    NEW-80     filesystemio_options = "SETALL"
    OLD-70     filesystemio_options = "none"
    NEW-87     instance_name = "NEWDB"
    OLD-77     instance_name = "OLDDB"
    NEW-94     job_queue_processes = 1000
    OLD-84     job_queue_processes = 100
    NEW-104     log_archive_dest_state_11 = "enable"
         log_archive_dest_state_12 = "enable"
         log_archive_dest_state_13 = "enable"
         log_archive_dest_state_14 = "enable"
         log_archive_dest_state_15 = "enable"
         log_archive_dest_state_16 = "enable"
         log_archive_dest_state_17 = "enable"
         log_archive_dest_state_18 = "enable"
         log_archive_dest_state_19 = "enable"
    NEW-114     log_archive_dest_state_20 = "enable"
         log_archive_dest_state_21 = "enable"
         log_archive_dest_state_22 = "enable"
         log_archive_dest_state_23 = "enable"
         log_archive_dest_state_24 = "enable"
         log_archive_dest_state_25 = "enable"
         log_archive_dest_state_26 = "enable"
         log_archive_dest_state_27 = "enable"
         log_archive_dest_state_28 = "enable"
         log_archive_dest_state_29 = "enable"
    NEW-125     log_archive_dest_state_30 = "enable"
         log_archive_dest_state_31 = "enable"
    NEW-139     log_buffer = 7012352
    OLD-108     log_buffer = 34412032
    OLD-112     max_commit_propagation_delay = 0
    NEW-144     max_enabled_roles = 150
         memory_max_target = 12884901888
         memory_target = 8589934592
         nls_calendar = "GREGORIAN"
    OLD-114     max_enabled_roles = 140
         memory_max_target = 6576668672
         memory_target = 6576668672
    NEW-149     nls_currency = "$"
         nls_date_format = "DD-MON-RR"
         nls_date_language = "AMERICAN"
         nls_dual_currency = "$"
         nls_iso_currency = "AMERICA"
    NEW-157     nls_numeric_characters = ".,"
         nls_sort = "BINARY"
    NEW-160     nls_time_format = "HH.MI.SSXFF AM"
         nls_time_tz_format = "HH.MI.SSXFF AM TZR"
         nls_timestamp_format = "DD-MON-RR HH.MI.SSXFF AM"
         nls_timestamp_tz_format = "DD-MON-RR HH.MI.SSXFF AM TZR"
    NEW-172     optimizer_features_enable = "11.2.0.3"
         optimizer_index_caching = 90
         optimizer_index_cost_adj = 25
    OLD-130     optimizer_features_enable = "11.1.0.6"
         optimizer_index_caching = 0
         optimizer_index_cost_adj = 100
    NEW-184     parallel_degree_limit = "CPU"
         parallel_degree_policy = "MANUAL"
         parallel_execution_message_size = 16384
         parallel_force_local = FALSE
    OLD-142     parallel_execution_message_size = 2152
    NEW-189     parallel_max_servers = 320
    OLD-144     parallel_max_servers = 0
    NEW-192     parallel_min_time_threshold = "AUTO"
    NEW-195     parallel_servers_target = 128
    NEW-197     permit_92_wrap_format = TRUE
    OLD-154     plsql_native_library_subdir_count = 0
    NEW-220     result_cache_max_size = 21495808
    OLD-173     result_cache_max_size = 0
    NEW-230     service_names = "NEWDB, NEWDB.my.com, NEW"
    OLD-183     service_names = "OLDDB, OLD.my.com"
    NEW-233     sessions = 1152
         sga_max_size = 12884901888
    OLD-186     sessions = 830
         sga_max_size = 6576668672
    NEW-238     shared_pool_reserved_size = 35232153
    OLD-191     shared_pool_reserved_size = 53687091
    OLD-199     sql_version = "NATIVE"
    NEW-248     star_transformation_enabled = "TRUE"
    OLD-202     star_transformation_enabled = "FALSE"
    NEW-253     timed_os_statistics = 60
    OLD-207     timed_os_statistics = 5
    NEW-256     transactions = 1267
    OLD-210     transactions = 913
    NEW-262     use_large_pages = "TRUE"

  • Performance degradation: unfetched field [PublishingPageContent] caused extra roundtrip

    Hi All,
    I am facing serious application pool crashes on one of my customer's production SharePoint servers. The Application Error log in the Event Viewer says:
    Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7afa2
    Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7c8f9
    Exception code: 0xc0000374
    Fault offset: 0x00000000000c40f2
    Faulting process id: 0x1414
    Faulting application start time: 0x01ce5edada76109d
    Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
    Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
    Report Id: 5a69ec1e-cace-11e2-9be2-441ea13bf8be
    At the same time the SharePoint ULS logs say:
    1)
    06/13/2013 03:44:29.53  w3wp.exe (0x0808)  0x2DF0  SharePoint Foundation  General  8e2s  Medium  Unknown SPRequest error occurred. More information: 0x80070005  8b343224-4aa6-490c-8a2a-ce06ac160773
    06/13/2013 03:44:35.03  w3wp.exe (0x0808)  0x2DF0  SharePoint Foundation  General  8e25  Medium  Failed to look up string with key "FSAdmin_SiteSettings_UserContextManagement_ToolTip", keyfile Microsoft.Office.Server.Search.  8b343224-4aa6-490c-8a2a-ce06ac160773
    06/13/2013 03:44:35.03  w3wp.exe (0x0808)  0x2DF0  SharePoint Foundation  General  8l3c  Medium  Localized resource for token 'FSAdmin_SiteSettings_UserContextManagement_ToolTip' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\Template\Features\SearchExtensions\ExtendedSearchAdminLinks.xml".  8b343224-4aa6-490c-8a2a-ce06ac160773
    2)
    06/13/2013 03:44:29.01  w3wp.exe (0x0808)  0x2DF0  SharePoint Foundation  Web Parts  emt4  High  Error initializing Safe control - Assembly: Microsoft.Office.SharePoint.ClientExtensions, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c TypeName: Microsoft.Office.SharePoint.ClientExtensions.Publishing.TakeListOfflineRibbonControl Error: Could not load type 'Microsoft.Office.SharePoint.ClientExtensions.Publishing.TakeListOfflineRibbonControl' from assembly 'Microsoft.Office.SharePoint.ClientExtensions, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.  8b343224-4aa6-490c-8a2a-ce06ac160773
    06/13/2013 03:44:29.50  w3wp.exe (0x0808)  0x2DF0  SharePoint Foundation  Logging Correlation Data  xmnv  Medium  Site=/  8b343224-4aa6-490c-8a2a-ce06ac160773
    3)
    06/13/2013 03:43:59.67  w3wp.exe (0x263C)  0x24D8  SharePoint Foundation  Performance  9fx9  Medium  Performance degradation: unfetched field [PublishingPageContent] caused extra roundtrip.  at Microsoft.SharePoint.SPListItem.GetValue(SPField fld, Int32 columnNumber, Boolean bRaw, Boolean bThrowException)  at Microsoft.SharePoint.SPListItem.GetValue(String strName, Boolean bThrowException)  at Microsoft.SharePoint.SPListItem.get_Item(String fieldName)  at Microsoft.SharePoint.WebControls.BaseFieldControl.get_ItemFieldValue()  at Microsoft.SharePoint.Publishing.WebControls.RichHtmlField.RenderFieldForDisplay(HtmlTextWriter output)  at Microsoft.SharePoint.WebControls.BaseFieldControl.Render(HtmlTextWriter output)  at Microsoft.SharePoint.Publishing.WebControls.BaseRichField.Render(HtmlTextWriter output)  at Microsoft.SharePoint.Publishing.WebControls.RichHtmlField.R...  b8d0b8ca-8386-441f-8fce-d79fe72556e1
    06/13/2013 03:43:59.67*  w3wp.exe (0x263C)  0x24D8  SharePoint Foundation  Performance  9fx9  Medium  ...ender(HtmlTextWriter output)  at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)  at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)  at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)  at System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer)  at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)  at System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer)  at System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output)  at System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer)  at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWrit...  b8d0b8ca-8386-441f-8fce-d79fe72556e1
    06/13/2013 03:43:59.67*  w3wp.exe (0x263C)  0x24D8  SharePoint Foundation  Performance  9fx9  Medium  ...er writer, ICollection children)  at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)  at System.Web.UI.Page.Render(HtmlTextWriter writer)  at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)  at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)  at System.Web.UI.Page.ProcessRequest()  at System.Web.UI.Page.ProcessRequest(HttpContext context)  at Microsoft.SharePoint.Publishing.TemplateRedirectionPage.ProcessRequest(HttpContext context)  at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()  at System.Web.HttpApplication.ExecuteStep(IExecutionSte...  b8d0b8ca-8386-441f-8fce-d79fe72556e1
    06/13/2013 03:43:59.67*  w3wp.exe (0x263C)  0x24D8  SharePoint Foundation  Performance  9fx9  Medium  ...p step, Boolean& completedSynchronously)  at System.Web.HttpApplication.PipelineStepManager.ResumeSteps(Exception error)  at System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb)  at System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context)  at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags)  at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags)  at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr module...  b8d0b8ca-8386-441f-8fce-d79fe72556e1
    06/13/2013 03:43:59.67*  w3wp.exe (0x263C)  0x24D8  SharePoint Foundation  Performance  9fx9  Medium  ...Data, Int32 flags)  at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags)  b8d0b8ca-8386-441f-8fce-d79fe72556e1
    06/13/2013 03:43:59.67  w3wp.exe (0x263C)  0x24D8  SharePoint Foundation  Performance  g4zd  High  Performance degradation: note field [PublishingPageContent] was not in demoted fields.  b8d0b8ca-8386-441f-8fce-d79fe72556e1
    Does anybody have any idea what's going on? I need to fix this ASAP as we are supposed to go live in the next few days.
    Soumalya

    Hello Soumalya,
    Do you have an update on your issue? We are actually experiencing a similar issue at a new customer.
    - Dennis | Netherlands | Blog |
    Twitter

  • Performance issue with drop and re-create index

    My database table has about 2 million records. The index on the table was not optimized, so we created a new index, let's call it index2. So this table now has the original index (index1) and index2. We then inserted data into this table from the other box. It was running for a few weeks.
    Suddenly we noticed that a query which used to take a few seconds now took more than a minute. The execution plan was using index2, which technically should be faster. We checked whether the statistics were up to date and they were. So then we dropped the new index, re-ran the query and it completed in 10 seconds, using the old index. This puzzled me since the point of index2 was to make things better. So then we re-created index2 and generated stats for the index. We re-ran the query and it completed in 5 seconds.
    Every time we timed the query, I shut down and restarted the box to clear all caches, so all the times I have specified are pure times and not cached. The execution plans where index2 took 1 min and 5 seconds are nearly the same, with very minor differences in cost and cardinality. Any ideas why index2 took 1 min before, yet after the drop and re-create takes only 5 seconds?
    The reason I want to find the cause is to ensure that this doesn't happen again, since it's impossible for me to re-create the index every time I see this issue. Any thoughts would be helpful.

    Firstly, the indexes are different: index1 is only on the time column, whereas index2 is a composite index consisting of 3 columns.
    Here are the details. The tests I did were last Friday, 3/31. Yesterday and today when I executed the same query I got longer times again: yesterday it took 9 sec and today 17 sec. The stats job kicked in on both days and the stats are up to date. Nothing gets deleted from this table, only added.
    3/31
    Original
    Elapsed: 00:01:02.17
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6553 Card=9240 Bytes=203280)
    1 0 SORT (UNIQUE) (Cost=6553 Card=9240 Bytes=203280)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=15982 Card=2306303 Bytes=50738666)
    drop index EVENT_NA_TIME_ETYPE
    Elapsed: 00:00:11.91
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=7792 Card=9275 Bytes=204050)
    1 0 SORT (UNIQUE) (Cost=7792 Card=9275 Bytes=204050)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'EVENT' (Cost=2092 Card=2284254 Bytes=50253588)
    3 2 INDEX (RANGE SCAN) OF 'EVENT_TIME_NDX' (NON-UNIQUE) (Cost=6740 Card=2284254)
    create index EVENT_NA_TIME_ETYPE ON EVENT(NET_ADDRESS,TIME,EVENT_TYPE);
    BEGIN
    SYS.DBMS_STATS.GENERATE_STATS('USER','EVENT_NA_TIME_ETYPE',0);
    end;
    Elapsed: 00:00:05.14
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6345 Card=9275 Bytes=204050)
    1 0 SORT (UNIQUE) (Cost=6345 Card=9275 Bytes=204050)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=12878 Card=2284254 Bytes=50253588)
    4/3
    Elapsed: 00:00:09.70
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6596 Card=9316 Bytes=204952)
    1 0 SORT (UNIQUE) (Cost=6596 Card=9316 Bytes=204952)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=11696 Card=2409400 Bytes=53006800)
    Statistics
    0 recursive calls
    0 db block gets
    11933 consistent gets
    9676 physical reads
    724 redo size
    467 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    3 rows processed
    4/4
    Elapsed: 00:00:17.99
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6681 Card=9421 Bytes=207262)
    1 0 SORT (UNIQUE) (Cost=6681 Card=9421 Bytes=207262)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQUE) (Cost=12110 Card=2433800 Bytes=53543600)
    Statistics
    0 recursive calls
    0 db block gets
    12279 consistent gets
    9423 physical reads
    2608 redo size
    467 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    3 rows processed
    SQL> select index_name, clustering_factor, blevel, leaf_blocks, distinct_keys from user_indexes where index_name like 'EVENT%';
    INDEX_NAME            CLUSTERING_FACTOR  BLEVEL  LEAF_BLOCKS  DISTINCT_KEYS
    EVENT_NA_TIME_ETYPE             2393170       2        12108        2395545
    EVENT_PK                          32640       2         5313        2286158
    EVENT_TIME_NDX                    35673       2         7075        2394055
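    A hedged follow-up (not from the original thread) that might show whether the rebuilt index is simply more compact than the aged one: validate the index structure and compare leaf-block usage before and after the drop/re-create. Note that VALIDATE STRUCTURE locks the index against DML while it runs, so it should only be done in a quiet period.
    -- Hypothetical check; INDEX_STATS holds one row for the index just validated.
    analyze index EVENT_NA_TIME_ETYPE validate structure;
    select height, lf_blks, del_lf_rows, pct_used from index_stats;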

  • PDF binary sent to server each roundtrip?!

    Hi all,
    I'm on NW CE 7.11, integrating WDJ with SIFbA.
    I would really appreciate some help on the following problem:
    in my application we have 2 views, accessed in sequence by users.
    View 1 displays ("usepdf" mode) a PDF retrieved from db.
    View 2 allows users to perform search on backend and so on.
    The problem is that at each server roundtrip the binary is sent to the server, even when View 1 is no longer visible.
    As a consequence, when in view 2 a very simple interaction has to take place, the whole binary from view 1 is sent to the server as well.
    This is a major performance issue with large binaries (~1mb).
    I would like to tell the framework to send the PDF to the server only when really necessary.
    How can this be done?
    Thanks and regards,
    Vincenzo

    Hi Vincenzo,
    If you haven't tried it already, you can use the setting "Allow Form Rendering to be Cached on Server". This is found in the form properties on the Performance tab. This will improve performance as the form does not need to be rendered for each round trip; however, I don't believe it prevents the round trip altogether. You may want to post this on the WDJ forum as well.
    Let us know how you go.
    Regards,
    Ben.

  • I can't roundtrip from Aperture 3.3.2 to Photoshop!

    Hi,
    I am having significant problems with Aperture and Photoshop interaction. As a working professional photographer it is a serious issue that is causing me real stress and it is commercially important that I rectify it quickly.
    After extensive testing I am unsure as to the root cause of this problem; different things point towards different causes. However, at this stage I am unsure whether this is being caused by an Aperture software issue, a Photoshop software issue or indeed a hardware problem.
    Mindful of this, I am sending this email to Apple, Adobe, the NAPP help desk, Aperture Expert forum in the hope that somebody can resolve this issue.
    THE PROBLEM
    When I try to roundtrip images from Aperture v3.3.2 to Photoshop CS6, Aperture prepares the files as TIFFs (8 bit) as per the export settings in my Preferences dialog box. I can see them being duplicated in the Aperture window, but when Photoshop opens only 1 image is available; the others, whilst sitting in Aperture, are not shown in Photoshop CS6.
    I have run the same round-tripping process with images sent from Lightroom 4 and they DO all appear in Photoshop. This does point towards an Aperture problem rather than Photoshop.
    I have tried the same process using Photoshop CS5 from Aperture 3.3.2 and the situation is the same.
    Thinking that it could be a hardware issue or old preference files etc., I did a completely clean install of OS X Mountain Lion, Aperture 3.3.2 and Photoshop CS6, and still everything is the same. Additionally I tested the problem on different hardware (a MacBook Air) and the problem is replicated there.
    There was a time in recent months, before the introduction of Aperture 3.3.2 and Mountain Lion, when this problem did not occur and the process worked on both pieces of hardware that it is now failing on...
    I have trawled painfully through all my preferences in Aperture and Photoshop to see if this is a simple setting issue but to date cannot identify one.
    I have posted the problem on the web in various forums and can only find a very small handful of people having this issue, it doesn't seem to be widespread.
    Please can you help me to rectify this significant issue. I am a professional trying to get work done and this problem is increasing my workload exponentially.
    Best regards
    Richard

    Another user changed Aperture to read Aperture 3. Of course it did not update, as the app needs to be named simply Aperture. BUT, when he changed it back to Aperture, he was able to update.
    Maybe there is an invisible character in there like a space or something.
    If that doesn't work I would contact the App Store Support.

  • Multiclip sync issues - resolved???

    Searching through the threads here I've seen a few different posts regarding the loss of sync in multiclips when moving media files around. Unfortunately all of the threads are locked and can't be posted to. None of them have been answered!
    I've just run into a similar situation and am trying to find out if the recent update to FCP 6.0.3 has fixed this. My problem, which is 100% repeatable, is as follows:
    Create a new project/sequence.
    Capture 2 camera tapes from the event (1 w/ timecode, 1 via an analog-to-digital converter, i.e. no timecode)
    Review each captured clip and insert markers where I want to use scenes.
    Create subclips from each marker I want to use
    Mark In Points in each subclip to use as sync points (clapper board in my case)
    Select the corresponding subclips from Camera A and Camera B and Make Multiclip using In Points for sync.
    Place multiclip in timeline.
    Play sequence and use keypad to insert camera changes.
    Save file.
    At this point everything is fine. My sequence plays back with all of my angle changes.
    Now if I quit FCP and use the Finder to move one of the captured files to another hard drive, everything goes to sh*t. When I reopen the project, FCP will ask me to reconnect to the offline media file. No problem; use the dialog box to locate the moved file and reconnect. Now when I go back to the sequence and play it, the 2 camera angles are completely out of sync with each other. I'm not talking a few frames, it's usually several seconds at least.
    I've done this several times, and it always happens. FCP absolutely cannot maintain multiclip sync info when a media file is moved.
    Just as a point of further frustration, I know someone is going to respond that I should use the Media Manager to move media files around. This is also broken. That's how I originally discovered the problem. I went to create an archive of the whole project. My destination was an external hard drive. When opening up the newly created project, I saw the same out of sync multiclips. My guess is because the media files were now on a different hard drive.
    Also, try to roundtrip a multiclip sequence to SoundTrackPro - totally hosed when it comes back into FCP. Same out of sync issues.
    So something is definitely problematic here. I'm still on 6.0.1 and can't risk updating while this project is in process. But I would like to know if this problem still exists in 6.0.3.
    Thanks,
    Chris

    Chris, I am having the same problem. My media went offline; I tried reconnecting and got a message stating that the attributes of the original clip (mark in and out) have changed. Then, when I reopen my sequence, certain clips that are multicam have suddenly got different ins and outs. I have no idea what is causing this, but here is some advice someone posted on another forum:
    I do almost nothing but multi-cam editing, and have for the last 8 to 10 years, but the overwhelming majority of my work is with matched time-code cameras. I have worked with in-points on FCP and have not had these problems, but I don't do it that often, so I don't feel comfortable telling someone that he's not having the problems he is having.
    Jasper - when making multi-clips with in-points, are you using master clips or sub-clips? Also, are you sure you haven't separately used the same source clip and thereby changed the in-point reference? These would be the first two things I would test when trying to figure out why the multi-clips were not functioning. I will also say that a sure way to get corrupted multi-clips is to use a clip that has time-code breaks within it - something that shouldn't happen, but that actually happens all too easily in FCP.
    Finally, I can offer this workaround: change the AUX time-code on the clips and sync with them. FCP makes it incredibly easy to change the time-code on a captured clip, and that includes two optional AUX time-code tracks that are available for each clip. Find a sync point on each of the clips that you want to use in a multi-clip, go to Modify / Time Code, check the AUX TC 1 box and enter a common time code in the window - I would recommend that this time code be a copy of the actual time code of your most important clip, but anything will do - and then, when making a multi-clip, choose to Sync with Aux 1. (Just don't uncheck the box containing your source time-code or you'll lose it - leave both boxes checked.) This extra step does not take long and should alleviate your problems.
    Hope this helps!

  • JSF 1.2 - Roundtrip just to create a simple Go Back link scenario

    From what I've read, it seems JSF 1.2 treats everything as HTTP POST. Hence, doing a simple Go Back means one has to do a round trip to the backend database and back. Here's how I've done it:
    1. Create a form and populate it with a list of information. For each line of info in the list, I define an h:outputLink that has four f:param directives to pass the parameters I want to a second form. Of course, the name/id of these params has to be unique. Say, if you want to create a second set of h:outputLinks for the list, you have to define new unique names/ids. Yuck! Why must name/id be unique? After all, they're just going to become URL parameters.
    2. To create a "Go Back" link on my second form, I define an h:outputLink with the same number of f:params, deriving their values from the URL parameters received from the first form. I have tried h:commandLink/h:commandButton without success - they just resubmitted the second form without redirecting back to the first form.
    3. Once the "Go Back" link is clicked, I have to do a roundtrip to the backend database just to "refresh" the first form - and if I have multiple backing beans providing data for this form, this roundtrip can become quite expensive. (A sketch of a redirect-based alternative follows these steps.)
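    For what it's worth, a redirect-based alternative could look roughly like the sketch below. The page name firstForm.jsf and the parameter itemId are made up for the example, not names from my real project:
    import java.io.IOException;
    import java.net.URLEncoder;
    import javax.faces.context.ExternalContext;
    import javax.faces.context.FacesContext;

    public class NavigationBean {

        // Action method for a commandLink/commandButton on the second form.
        // Instead of letting JSF re-render the view via POST, it issues a GET
        // redirect back to the first page, carrying the original parameter in the URL.
        public String goBack() {
            FacesContext fc = FacesContext.getCurrentInstance();
            ExternalContext ec = fc.getExternalContext();
            String itemId = (String) ec.getRequestParameterMap().get("itemId");
            try {
                String url = ec.getRequestContextPath() + "/firstForm.jsf?itemId="
                        + URLEncoder.encode(itemId == null ? "" : itemId, "UTF-8");
                ec.redirect(url);       // plain GET: bookmarkable, no form re-submit
                fc.responseComplete();  // tell JSF the response has been handled
            } catch (IOException e) {
                throw new RuntimeException("Redirect back to the first form failed", e);
            }
            return null; // navigation outcome not needed; the redirect has been sent
        }
    }
    The h:outputLink/f:param approach still has the advantage of rendering a plain GET link with no action method at all; a redirect like the above is only useful when the back navigation has to go through a command component.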
    What I gather from working with JSF over the last five weeks:
    1. All requests are submitted as HTTP POSTs - this much is obvious. Thus, this will break the ability to do "light" navigation (such as a Go Back link), bookmarking and so forth due to the POST nature of JSF 1.2 or earlier. JSF 2.x might be different though - I may have to check the spec to confirm this. I have heard that one may need to use Facelets and not JSP as the view technology - otherwise certain features of this implementation will not be available.
    2. Passing information as parameters means one has to validate URL parameters to prevent URL tampering by third parties (a trivial validation example follows this list).
    3. I can use session scope for the form - but this is an "ugly" way of doing it, i.e. if I move to a different area of the website and come back to the form, I can still see the previously generated data - until the session expires, of course - then the form is reset to its pre-submit status. I can appreciate the use of session scope in situations where one wants to keep certain information active, e.g. when a user is logged into a forum or a secured session (HTTPS). Using session scope just to navigate between forms (while maintaining the state of the displayed data) is overkill.
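    As a trivial illustration of the validation mentioned in point 2, something along these lines is all I mean (the parameter format here is made up; adjust the pattern to whatever the id really looks like):
    import java.util.regex.Pattern;

    public final class ParamValidator {

        // Accept only plain numeric ids of a sane length before they reach the backend.
        private static final Pattern ID_PATTERN = Pattern.compile("\\d{1,10}");

        public static long requireNumericId(String rawValue) {
            if (rawValue == null || !ID_PATTERN.matcher(rawValue).matches()) {
                throw new IllegalArgumentException("Invalid id parameter: " + rawValue);
            }
            return Long.parseLong(rawValue);
        }
    }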
    If there are better ways of doing something trivial like a Go Back link or bookmarking, then I'm all ears. If not, I might have to ditch JSF 1.2 and investigate JSF 2.0. Hopefully, that release will be less painful.
    Looking forward to JSF experts' comments surrounding this issue. I'm sure I'm not the first to complain about this framework and I won't be the last. Enough rant...
    Edited by: icepax on 26/11/2009 11:32
    Edited by: icepax on 29/11/2009 05:45

    BalusC wrote:
    > First, outputlinks do not generate POST.
    Appreciate your reply. I stand corrected on this point. But reading a few articles, for example [JSF Discussions and Blogs|http://ptrthomas.wordpress.com/2009/05/15/jsf-sucks/], one has to wonder how JSF 1.x stacks up.
    > Second, how would you do it all in plain HTML/JSP/Servlet without JSF? If you tell how you would do it, then we can suggest the JSF way of it (but with the time you should probably already have realized that your concern/rant makes no sense).
    Sorry, Bauke, I went from Java straight to JSF for the past five weeks, but if I had to guess, one would use HttpServletRequest or HttpServletResponse in an HTML-JSP-Servlet scenario. What I have done with the roundtrip actually works for me, albeit I'm hitting the backend twice instead of once (there are pros and cons of doing it this way, but I won't go into detail here). However, this still begs the question of why something as trivial as a Go Back link or bookmarking is not so trivial in JSF.
    > At any way, you may find JBoss Seam interesting. It has an extra scope between request and session: the conversation scope.
    Thanks. I could try it, or move straight to JSF 2.0 or later.
    Don't get me wrong, I love JSF. But I won't see its real potential until I move to JSF 2.x or later. Integrated Facelets support is something I'm looking forward to -:)

  • Premiere Markers export and import: Roundtrip via FCP XML not working

    The objective of my project is to create a sports app that records markers and saves them in different XML/CSV formats that I can then import into different NLE packages.
    I have already succeeded with Vegas Pro but also want to support NLEs on Mac OS.
    In order to approach this analytically, I have exported a very simple project with 1 sequence containing 1 clip and 3 markers (both zero-length and with length > 0)
    as FCP XML from Adobe Premiere CC, and then attempted a round trip by importing that XML back into Adobe Premiere CC via the FCP XML import.
    This round trip does not work. Any insight into why, or any suggestion of another mechanism that will work?
    Thanks
    Thomas

    Let's go back to the purpose:
    I have written a mobile app with which people can, during a sports match, capture the times of the important moments by pressing buttons.
    I then convert the time information into an XML or CSV file that can be read by the different NLEs. Sony Vegas allows markers to be imported
    independently of a sequence, and now I am looking at the same thing for Premiere. This allows the highlights to be edited very quickly instead of watching
    the entire match coverage again. (A simplified sketch of that conversion step is shown below.)
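    To make the conversion step concrete, here is a simplified sketch of what the app does with the captured times. The CSV columns, the 25 fps assumption and the marker names are illustrative only; each NLE expects its own format:
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class MarkerCsvExport {

        // Converts captured millisecond offsets into hh:mm:ss:ff timecodes and
        // writes one marker per CSV row.
        public static void write(Map<String, Long> markers, String path, int fps) throws IOException {
            try (PrintWriter out = new PrintWriter(new FileWriter(path))) {
                out.println("Name,Timecode");
                for (Map.Entry<String, Long> m : markers.entrySet()) {
                    out.println(m.getKey() + "," + toTimecode(m.getValue(), fps));
                }
            }
        }

        private static String toTimecode(long millis, int fps) {
            long totalFrames = millis * fps / 1000;
            long frames = totalFrames % fps;
            long seconds = (totalFrames / fps) % 60;
            long minutes = (totalFrames / fps / 60) % 60;
            long hours = totalFrames / fps / 3600;
            return String.format("%02d:%02d:%02d:%02d", hours, minutes, seconds, frames);
        }

        public static void main(String[] args) throws IOException {
            Map<String, Long> markers = new LinkedHashMap<>();
            markers.put("Goal 1", 754320L);    // 00:12:34 into the match
            markers.put("Penalty", 2101480L);  // 00:35:01 into the match
            write(markers, "markers.csv", 25);
        }
    }
    The XML variants are the same idea, just with the marker data wrapped in the structure a given NLE expects instead of CSV rows.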
    I am trying to replicate that in Premiere through the FCP XML, but now that the roundtrip works, the issue is that the FCP XML structure requires a sequence.
    I could imagine providing a dummy sequence that can be overlaid with the real content just to preserve the markers. Any ideas?
    Regards,
    TK

  • Subquery execution plan issue

    Hi All,
    Oracle v11.2.0.2
    I have a SELECT query which executes in less than a second and selects a few records.
    Now, if I put this SELECT query in the IN clause of a DELETE command, it takes ages (even though the DELETE is done using its primary key).
    See below query and execution plan.
    Here is the SELECT query
    SQL> SELECT   ITEM_ID
      2                         FROM   APP_OWNER.TABLE1
      3                        WHERE   COLUMN1 = 'SomeValue1234'
      4                                OR (COLUMN1 LIKE 'SomeValue1234%'
      5                                    AND REGEXP_LIKE (
      6                                          COLUMN1,
      7                                          '^SomeValue1234[A-Z]{3}[0-9]{5}$'
      8  ));
       ITEM_ID
      74206192
    1 row selected.
    Elapsed: 00:00:40.87
    Execution Plan
    Plan hash value: 3153606419
    | Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |             |     2 |    38 |     7   (0)| 00:00:01 |
    |   1 |  CONCATENATION     |             |       |       |            |          |
    |*  2 |   INDEX RANGE SCAN | PK_TABLE1   |     1 |    19 |     4   (0)| 00:00:01 |
    |*  3 |   INDEX UNIQUE SCAN| PK_TABLE1   |     1 |    19 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("COLUMN1" LIKE 'SomeValue1234%')
           filter("COLUMN1" LIKE 'SomeValue1234%' AND  REGEXP_LIKE
                  ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$'))
       3 - access("COLUMN1"='SomeValue1234')
           filter(LNNVL("COLUMN1" LIKE 'SomeValue1234%') OR LNNVL(
                  REGEXP_LIKE ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$')))
    Statistics
              0  recursive calls
              0  db block gets
              8  consistent gets
              0  physical reads
              0  redo size
            348  bytes sent via SQL*Net to client
            360  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Now see the DELETE command. ITEM_ID is the primary key for TABLE2.
    SQL> delete from TABLE2 where ITEM_ID in (
      2  SELECT   ITEM_ID
      3                         FROM   APP_OWNER.TABLE1
      4                        WHERE   COLUMN1 = 'SomeValue1234'
      5                                OR (COLUMN1 LIKE 'SomeValue1234%'
      6                                    AND REGEXP_LIKE (
      7                                          COLUMN1,
      8                                          '^SomeValue1234[A-Z]{3}[0-9]{5}$'
      9  ))
    10  );
    1 row deleted.
    Elapsed: 00:02:12.98
    Execution Plan
    Plan hash value: 173781921
    | Id  | Operation               | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT        |                             |     4 |   228 | 63490   (2)| 00:12:42 |
    |   1 |  DELETE                 | TABLE2                      |       |       |            |          |
    |   2 |   NESTED LOOPS          |                             |     4 |   228 | 63490   (2)| 00:12:42 |
    |   3 |    SORT UNIQUE          |                             |     1 |    19 | 63487   (2)| 00:12:42 |
    |*  4 |     INDEX FAST FULL SCAN| I_TABLE1_3                  |     1 |    19 | 63487   (2)| 00:12:42 |
    |*  5 |    INDEX RANGE SCAN     | PK_TABLE2                   |     7 |   266 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("COLUMN1"='SomeValue1234' OR "COLUMN1" LIKE 'SomeValue1234%' AND
                  REGEXP_LIKE ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$'))
       5 - access("ITEM_ID"="ITEM_ID")
    Statistics
              1  recursive calls
              5  db block gets
         227145  consistent gets
         167023  physical reads
            752  redo size
            765  bytes sent via SQL*Net to client
           1255  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
               1  rows processed
    What can be the issue here?
    I tried the NO_UNNEST hint, which made a difference, but the DELETE still took around a minute (instead of 2 minutes), which is still far more than the sub-second response of the standalone SELECT.
    Thanks in advance

    rahulras wrote:
    SQL> delete from TABLE2 where ITEM_ID in (
    2  SELECT   ITEM_ID
    3                         FROM   APP_OWNER.TABLE1
    4                        WHERE   COLUMN1 = 'SomeValue1234'
    5                                OR (COLUMN1 LIKE 'SomeValue1234%'
    6                                    AND REGEXP_LIKE (
    7                                          COLUMN1,
    8                                          '^SomeValue1234[A-Z]{3}[0-9]{5}$'
    9  ))
    10  );
    The optimizer will transform this delete statement into something like:
    delete from table2 where rowid in (
        select t2.rowid
        from
            table2 t2,
            table1 t1
        where
                t1.itemid = t2.itemid  
        and     (t1.column1 =  etc.... )
    )
    With the standalone subquery against t1 the optimizer has been a little clever with the concatenation operation, but it looks as if there is something about this transformed join that makes it impossible for the concatenation mechanism to be used. I'd also have to guess that something about the way the transformation has happened has made Oracle "lose" the PK index. As I said in another thread a few minutes ago, I don't usually look at 10053 trace files to solve optimizer problems - but this is the second one today where I'd start looking at the trace if it were my problem.
    You could try rewriting the query in this explicit join and select rowid form - that way you could always force the optimizer into the right path through table1. It's probably also possible to hint the original to make the expected path appear, but since the thing you hint and the thing that Oracle optimises are so different it might turn out to be a little difficult. I'd suggest raising an SR with Oracle.
    Regards
    Jonathan Lewis
