No Lava plan tree on ASE 15.0.3

We did:
set statistics plancost on
set showplan on
set fmtonly on
exec sp1
We get the text version of the plan, but no Lava plan tree for our queries. We do see the Lava tree for the UDFs used inside the queries, though. We wanted to know the estimated cost on each branch. Any way out?
Another question is:
We have the 'deferred compilation' option on, so our queries in SPs run as LWPs inside the procedure cache. But sp_showplan does not show any plan for the running query. Any way out?
Thanks,
Apurba K. Saha

Hi,
I am not sure what tool you are using, but see pages 27-28 of:
http://wiki.scn.sap.com/wiki/download/attachments/379553293/ASE%20Optimizer%20Best%20Practices.pdf?version=1&modificatio…
The query plan can be represented in different ways:
  • set showplan on shows the query plan in the well-known format, showing the join order, indexes used and I/O strategy.
  • set statistics plancost on (a new command in ASE 15) shows the query plan in a format more closely resembling the internal "Lava Tree".
  • Lastly, the GUI tool DBISQL can display the query plan in a graphical tree format. This is not discussed further in this document.
Are you using DBISQL?
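One more thing worth checking - this is an assumption on my part, since I cannot see your session: the plancost output (the Lava operator tree) is only printed when the statements actually execute, and with set fmtonly on the queries inside the procedure are not really run. If it is safe to actually execute the procedure, a minimal sequence to try is:
set fmtonly off
go
set statistics plancost on
go
exec sp1
go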
Regards,
Johan

Similar Messages

  • Have latest updates for Mac and iPhoto. Several "events" now show as grey squares with a palm tree, no thumbnails when scrolling over as before. Photos show when clicking on the icon but are not visible as a group. What gives?

    Have the latest updates on my iMac and iPhoto. Recently several "Event" folders have turned grey with a palm tree in the corner. Before, there was a "key" photo to help identify the event. The event title is still at the bottom of the square, but no key photo. When I click on the icon, the photos that it contained show up, but when it is closed again they disappear. How do I correct this annoyance?
    Also: one event that contains about 12 photos has a problem: some of the photos, when clicked, will enlarge, show individually on the screen and print out, while others, when clicked, show a black background with an exclamation mark and will not print. Thoughts?

    Back up your iPhoto library, then depress and hold the Option (Alt) and Command keys and launch iPhoto - rebuild your iPhoto library database.
    LN

  • EXPLAIN PLAN with pluggable mapping

    Dear Folks,
    I have a mapping with a pluggable mapping included. When I try to get the EXPLAIN PLAN TREE, I receive an error: 'ORA-00904: "FULLPACKAGENAME"."CONSTANT_0_SITUATION_1": invalid identifier'.
    The EXPLAIN PLAN REPORT says in the first line: Error: cannot fetch last explain plan from PLAN_TABLE.
    When I search for 'CONSTANT_0_SITUATION_1' I can find things like:
    (SELECT
    "WH_JOIN_GSMNMBR_PORTINGDATA"."GSS_VOI_ID" "FACT1_ID",
    "WH_JOIN_GSMNMBR_PORTINGDATA"."BEGINDATE" "FACT1_BEGINDATE",
    "WH_JOIN_GSMNMBR_PORTINGDATA"."ENDDATE" "FACT1_ENDDATE",
    "INGRP2"."FACT1_MIN_BEGINDATE" "FACT1_MIN_BEGINDATE",
    "INGRP2"."FACT1_MAX_ENDDATE" "FACT1_MAX_ENDDATE",
    FULLPACKAGENAME."CONSTANT_0_SITUATION_1" "SITUATION_1"
    What is 'FULLPACKAGENAME'?
    I can validate my mapping and it is OK. I checked the pluggable mapping again and again; all seems to be right. I can run the mapping and it works - I just need to tune it.
    I'm working with OWB 11.1 on a 10.2 DB.
    Can someone please help me?
    Greetings
    S.

    Hi,
    if you generate the SQL in the mapping editor, the SQL may not run directly. FULLPACKAGENAME will be replaced by the mapping name during deployment. The constants you define in the mapping will be PL/SQL variables of that package.
    To make the SQL work, you have to replace the FULLPACKAGENAME.xxx expressions with the correct values manually.
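    (For illustration - and this is an assumption on my part, if the constant really is the literal 0 that its generated name suggests - the projected column from your snippet would simply become
    0 "SITUATION_1"
    i.e. you substitute the literal value for the package variable reference.)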
    Do you have the most recent patch installed? In OWB 10gR2 the values from the constants, as well as the default values of the mappings, are set correctly from version 10.2.0.3 on.
    Regards,
    Carsten.

  • Toad explain plan

    Hi,
    I understand that reading the output of an explain plan starts from the most deeply indented lines.
    Anyhow, in TOAD (I am using version 9.7) Toad adds a number before each line in the explain plan tree. This number looks random to me because I cannot link it to the execution plan as per the indentation.
    Can anybody tell me what this number before each line is?
    Thanks
    AK

    askanaan wrote:
    I understand that reading the output of an explain plan starts from the most deeply indented lines.
    ...Though this may "sometimes" be so, it is certainly not a general rule. It all depends on the (parent) operations that precede/follow that "most deeply indented line".
    Here's a simple counter-example using the emp/dept tables.
      1  explain plan for
      2  select d.deptno
      3        ,d.dname
      4        ,(select count(*)
      5          from emp e
      6          where e.deptno=d.deptno) as count_emp
      7* from dept d
    SQL> /
    Explained.
    SQL> @vp
    More...
    PLAN_TABLE_OUTPUT
    Plan hash value: 4111639169
    | Id  | Operation          | Name | Rows  | Bytes |
    |   0 | SELECT STATEMENT   |      |     1 |    22 |
    |   1 |  SORT AGGREGATE    |      |     1 |    13 |
    |*  2 |   TABLE ACCESS FULL| EMP  |     1 |    13 |
    |   3 |  TABLE ACCESS FULL | DEPT |     1 |    22 |
    Predicate Information (identified by operation id):
       2 - filter("E"."DEPTNO"=:B1)
    This plan starts at line 3 - not at line 2, which is the "most deeply indented line".
    Here's another one:
      1  explain plan for
      2  select d.*
      3  from emp e
      4      ,(select d1.deptno
      5        from dept d1
      6        where d1.loc in (select d2.loc
      7                         from dept d2
      8                         group by d2.loc
      9                         having count(*) > 1)) d
    10* where e.deptno = d.deptno
    SQL> /
    Explained.
    SQL> @vp
    More...
    PLAN_TABLE_OUTPUT
    Plan hash value: 1621130288
    | Id  | Operation                     | Name          |
    |   0 | SELECT STATEMENT              |               |
    |   1 |  NESTED LOOPS SEMI            |               |
    |   2 |   NESTED LOOPS                |               |
    |   3 |    TABLE ACCESS FULL          | EMP           |
    |   4 |    TABLE ACCESS BY INDEX ROWID| DEPT          |
    |*  5 |     INDEX UNIQUE SCAN         | SYS_C00204096 |
    |*  6 |   VIEW                        | VW_NSO_1      |
    |*  7 |    FILTER                     |               |
    |   8 |     SORT GROUP BY             |               |
    |   9 |      TABLE ACCESS FULL        | DEPT          |
    Predicate Information (identified by operation id):
       5 - access("E"."DEPTNO"="D1"."DEPTNO")
       6 - filter("D1"."LOC"="$nso_col_1")
       7 - filter(COUNT(*)>1)
    If you think this one starts at line 9, you're wrong again. It starts at line 3.
    You need to understand all the "operations" that can appear in an execution plan.
    Only then can you deduce where the execution of such a plan really starts.
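    (A side note for readers who don't have the @vp script used above: it is presumably just a small wrapper; the standard way to display the most recently explained plan is
    select * from table(dbms_xplan.display);
    which produces the same PLAN_TABLE_OUTPUT format shown above.)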
    Toon

  • Optimiser difference : a max() sometimes doing a backward scan and sometimes forward scan

    I am interested in why we get two different query plans on Sybase 15.5 and 15.7 for these similar queries:
    declare @MID int
    select @MID = 25717
    select MID , max(Date) from CB where MID = @MID group by MID
    select @MID, max(Date) from CB where MID = @MID
    The index is on (MID, Date).
    The first does a forward scan and the second does a backward scan.
    With our data, the first query does 8000 page reads (with a SAN read costing 4 ms, that is 32 s)
    and the second query does 4 page reads (with a SAN read costing 4 ms, that is 16 ms).
    1)
    |   |   |GROUP SORTED Operator (VA = 1)
    |   |   |  Evaluate Grouped MAXIMUM AGGREGATE.
    |   |   |
    |   |   |   |SCAN Operator (VA = 0)
    |   |   |   |  FROM TABLE
    |   |   |   |  CB
    |   |   |   |  Index : IDX1
    |   |   |   |  Forward Scan.
    |   |   |   |  Positioning by key.
    |   |   |   |  Index contains all needed columns. Base table will not be read.
    |   |   |   |  Keys are:
    |   |   |   |    MID ASC
    |   |   |   |  Using I/O Size 16 Kbytes for index leaf pages.
    |   |   |   |  With LRU Buffer Replacement Strategy for index leaf pages.
    2)
    |   |  Evaluate Ungrouped MAXIMUM AGGREGATE.
    |   |  Scanning only up to the first qualifying row.
    |   |
    |   |   |SCAN Operator (VA = 0)
    |   |   |  FROM TABLE
    |   |   |  CB
    |   |   |  Index : IDX1
    |   |   |  Backward scan.
    |   |   |  Positioning by key.
    |   |   |  Index contains all needed columns. Base table will not be read.
    |   |   |  Keys are:
    |   |   |    MID ASC
    |   |   |  Using I/O Size 2 Kbytes for index leaf pages.
    |   |   |  With LRU Buffer Replacement Strategy for index leaf pages.
    Why does the optimiser generate different plans?

    Okay - so this is one reason I like to see plancost vs. just showplan...you can see in the one without the group by clause that it does a ScalarAgg vs. a GroupSorted, a la:
    (without group by clause)
    ==================== Lava Operator Tree ====================
                            Emit                               
                            (VA = 2)                           
                            r:1 er:1                           
                            cpu: 0                             
                ScalarAgg                                      
                  Max                                          
                (VA = 1)                                       
                r:1 er:1                                       
                cpu: 0                                         
    IndexScan                                                  
    IDX1                                                       
    (VA = 0)                                                   
    r:1 er:1                                                   
    l:3 el:3                                                   
    p:3 ep:3                                                   
    ============================================================
    (with group by clause)
    ==================== Lava Operator Tree ====================
                                        Emit                    
                                        (VA = 3)                
                                        r:1 er:8                
                                        cpu: 300                
                            Restrict                            
                            (0)(0)(0)(4)(0)                     
                            (VA = 2)                            
                            r:1 er:8                            
                GroupSorted                                     
                Grouping                                        
                (VA = 1)                                        
                r:1 er:8                                        
    IndexScan                                                   
    IDX1                                                        
    (VA = 0)                                                    
    r:353666 er:158986                                          
    l:863 el:386                                                
    p:176 ep:78                                                 
    ============================================================
    Now the key to the problem is in the GroupSorted function.
    But let's step back a bit. Query optimization is just one of the many (and nearly the final) steps prior to query execution. Before query optimization we have the TDS receive buffers, the SQL parsing, the SQL normalization and the pre-processing. In the pre-processing, we normally try to construct what is called the LOP (Logical Operator) tree. Note that nowhere in that sequence is there any execution - I mention this because once before you commented on someone else's posting about a purported optimization problem with
    select * from table where column=<value> or 1=2
    ....the notion of evaluating 1=2 wouldn't happen until the execution phase - so optimization will not consider it - in this case it looks at it as an expression to be evaluated and skips it. If you want to see what I mean, do a showplan and statistics io on
    select * from sysobjects where 1=2
    and you will see that the optimizer picks a tablescan due to the lack of SARGs...but it is during query execution that we short circuit, and hence the 0 LIOs. I point that out because there are different phases involved, and in this case it is the pre-processor that selects the overall LOP tree to start with....but then the optimizer has to consider which operators to use underneath that, such as GroupSorted (using an index) vs. GroupSorting (sort operation), etc.
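    For completeness, reproducing that observation takes nothing more than the standard session settings (any ASE database will do, since sysobjects exists everywhere):
    set showplan on
    set statistics io on
    go
    select * from sysobjects where 1=2
    go
    The showplan output will report a table scan, while statistics io reports 0 logical reads thanks to the execution-time short circuit.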
    Soooo....in the GROUP BY example, it starts with:
    The Lop tree:
    ( project
            ( group
                    ( scan CB
    vs. your other example (without the Group By):
    The Lop tree:
    ( project
            ( scalar
                    ( scan CB
    ....so you can see the difference right off the bat in what the optimizer received from the pre-processor. As I pointed out earlier, the problem *started* the second you put the group by clause in - before the optimizer even got to look at it.
    You then get 2 optimization blocks....the innermost for the (scan CB) and then the outer for the (group (inner block)) a la:
    OptBlock1
            The Lop tree:
            ( scan CB
            Generic Tables: ( Gtt1( CB ) Gti2( IDX1 ) )
            Generic Columns: (  Gc0(CB ,Rid) Gc1(CB ,MID) Gc2(CB ,ValuationDate))  Virtual Generic Columns: ( )
            Predicates: ( { CB.MID Gtg0.Gc4 }  = 0 tc:{1} )
            Transitive Closures: ( Tc0 = {  Gc0(CB ,Rid)} Tc1 = {  Gc1(CB ,MID) Gc4(Gtg0 ,_gcelement_4)} Tc2 = {  Gc2(CB ,ValuationDate)} )
    OptBlock0
            The Lop tree:
            ( pseudoscan
            Generic Tables: ( Gtg0 )
            Generic Columns: (  Gc3(Gtg0 ,_gcelement_3) Gc4(Gtg0 ,_gcelement_4) Gc5(Gtg0 ,_virtualagg) Gc6(Gtg0 ,_virtualagg))  Virtual Generic Columns: ( )
            Predicates: ( { CB.MID Gtg0.Gc4 }  = 0 tc:{1} )
            Transitive Closures: ( Tc3 = {  Gc3(Gtg0 ,_gcelement_3) Gc5(Gtg0 ,_virtualagg) Gc6(Gtg0 ,_virtualagg)} Tc1 = {  Gc1(CB ,MID) Gc4(Gtg0 ,_gcelement_4)} )
    The question is: why did we estimate 8 rows returned from the group by? And would that change the behavior?
    That comes from this section:
            ( PopGroupSorted
                    proj: {{ CB.MID Gtg0.Gc4 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } }
                    pred: [Tc{} Pe{{ CB.MID Gtg0.Gc4 }  = 0}]
                    subs: {CB.MID ,Gtg0._virtualagg }
                    cost: 66305.12
                    I/O estimate : [
                            rowcount=7.388771
                            averagewidth=19
                            pages=1
                            prefetchpages=1
                            lio=0 pio=0 cpu=158985.8
                            total lio=385.3457 total pio=77.60534 total cpu=635943
                            tempdb=0
                    Cache Strategy: [
                            prefetch=YES
                            iosize=131072 Bytes
                            bufreplace=LRU
                    order: none
                    ( PopIndScan index: Gti2( IDX1 )
                                    table: Gtt1( CB )
                            proj: {{ CB.MID Gtg0.Gc4 } ,{ CB.ValuationDate } }
                            pred: [Tc{} Pe{{ CB.MID Gtg0.Gc4 }  = 0}]
                            subs: {CB.MID ,CB.ValuationDate }
                            cost: 50406.55
                            I/O estimate : [
                                    rowcount=158985.8
                                    averagewidth=14
                                    pages=383.3457
                                    prefetchpages=383.3457
                                    lio=385.3457 pio=77.60534 cpu=476957.2
                                    scanlio=385.3457 scanpio=77.60534 scancpu=476957.2
                                    tempdb=0
                            Cache Strategy: [
                                    prefetch=YES
                                    iosize=131072 Bytes
                                    bufreplace=LRU
                            order: none )
    Now, to understand what is happening, let's take the bad example:
    select MID , max(Date) from CB where MID = @MID group by MID
    Now let's make it a bit more generic:
    select type, count(*), max(crdate)
    from sysobjects
    where type='P'
    group by type
    If run in master on my machine, I get:
    type                                            
    P             20             Oct  2 2014 12:23PM
    ...as a result, with an initial LOP of:
    The Lop tree:
    ( project
    ( group
      ( scan sysobjects
    Which should be no surprise....with no index on type (or simply due to the quantity of rows in sysobjects) we go for a tablescan. But we have our (group) LOP.....and if we look at the plancost:
    ==================== Lava Operator Tree ====================
                                        Emit                   
                                        (VA = 3)               
                                        r:1 er:2               
                                        cpu: 0                 
                            Restrict                           
                            (0)(0)(0)(4)(0)                    
                            (VA = 2)                           
                            r:1 er:2                           
                GroupSorted                                    
                Grouping                                       
                (VA = 1)                                       
                r:1 er:2                                       
    TableScan                                                  
    sysobjects                                                 
    (VA = 0)                                                   
    r:20 er:19                                                 
    l:8 el:8                                                   
    p:0 ep:2                                                   
    ============================================================
    We see sort of the same problem....we are estimating 2 rows to be returned from the group by.
    Stepping back a bit: remember, if we don't have any stats on a column, we assume a selectivity of 0.1 for =, 0.25 for a bounded range (between) and 0.33 for an unbounded range. In my case, with no stats on type in sysobjects in master, we can see this clearly in the index selectivity area of:
    The table (Datarows) has 188 rows, 6 pages,
    Data Page Cluster Ratio 0.9999900
        type = 'P'
       Estimated selectivity for type,
            selectivity = 0.1,
        Search argument selectivity is 0.1.
        using table prefetch (size 32K I/O)
        Large IO selected: The number of leaf pages qualified is > MIN_PREFETCH pages
        in data cache 'default data cache' (cacheid 0) with LRU replacement
    That 0.1 in the index selectivity comes from using the magic values. In the group by costing, we see:
    ( PopGroupSorted
      proj: {{ sysobjects.type Gtg0.Gc5 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } ,{ Gtg0.Gc4 Gtg0._virtualagg Gtg0._virtualagg } }
      pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
      subs: {sysobjects.type ,Gtg0._virtualagg ,Gtg0._virtualagg }
      cost: 65.44175
      I/O estimate : [
       rowcount=1.88
       averagewidth=21
       pages=1
       prefetchpages=1
       lio=0 pio=0 cpu=18.8
       total lio=8 total pio=1.00007 total cpu=244.4
      Cache Strategy: [
       prefetch=YES
       iosize=4096 Bytes
       bufreplace=LRU
      order: none
      ( PopTabScan table: Gtt1( sysobjects )
       proj: {{ sysobjects.type Gtg0.Gc5 } ,{ sysobjects.crdate } }
       pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
       subs: {sysobjects.type ,sysobjects.crdate }
       cost: 63.56175
       I/O estimate : [
       rowcount=18.8
        averagewidth=12
        pages=8
        prefetchpages=8
        lio=8 pio=1.00007 cpu=225.6
        scanlio=8 scanpio=1.00007 scancpu=225.6
       Cache Strategy: [
        prefetch=YES
        iosize=32768 Bytes
        bufreplace=LRU
       order: none )
    Now....to go from that row count to the estimated rows for a group by, we don't use the column selectivity - we use the column density stats. The real reason behind the 0.1 is that we are saying there are likely only 10 distinct values. For example, if I update statistics sysobjects (type) on my system, the output changes to:
    ==================== Lava Operator Tree ====================
                                        Emit                   
                                        (VA = 3)               
                                        r:1 er:1               
                                        cpu: 0                 
                            Restrict                           
                            (0)(0)(0)(4)(0)                    
                            (VA = 2)                           
                            r:1 er:1                           
                GroupSorted                                    
                Grouping                                       
                (VA = 1)                                       
                r:1 er:1                                       
    TableScan                                                  
    sysobjects                                                 
    (VA = 0)                                                   
    r:20 er:21                                                 
    l:8 el:8                                                   
    p:0 ep:2                                                   
    ============================================================
    Which is based on:
    ( PopGroupSorted
      proj: {{ sysobjects.type Gtg0.Gc5 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } ,{ Gtg0.Gc4 Gtg0._virtualagg Gtg0._virtualagg } }
      pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
      subs: {sysobjects.type ,Gtg0._virtualagg ,Gtg0._virtualagg }
      cost: 65.80175
      I/O estimate : [
       rowcount=1
       averagewidth=21
       pages=1
       prefetchpages=1
       lio=0 pio=0 cpu=20
       total lio=8 total pio=1.00007 total cpu=248
      Cache Strategy: [
       prefetch=YES
       iosize=4096 Bytes
       bufreplace=LRU
      order: none
      ( PopTabScan table: Gtt1( sysobjects )
       proj: {{ sysobjects.type Gtg0.Gc5 } ,{ sysobjects.crdate } }
       pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
       subs: {sysobjects.type ,sysobjects.crdate }
       cost: 63.80175
       I/O estimate : [
        rowcount=20
        averagewidth=12
        pages=8
        prefetchpages=8
        lio=8 pio=1.00007 cpu=228
        scanlio=8 scanpio=1.00007 scancpu=228
    Note that my overall rowcount went up a bit....but the group by went down to 1 row....if we look at the column stats via optdiag:
    Statistics for column:                                                          "type"
    Last update of column statistics:                                               Feb 15 2015  9:18:32:850PM
         Range cell density:                                                        0.0053191489361702
     Total density:                                                             0.4216274332277049
         Range selectivity:                                                         default used (0.33)
         In between selectivity:                                                    default used (0.25)
         Unique range values:                                                       0.0053191489361702
         Unique total values:                                                       0.2000000000000000
         Average column width:                                                      default used (2.00)
         Rows scanned:                                                              188.0000000000000000
         Statistics version:                                                        4
    Histogram for column:                                                           "type"
    Column datatype:                                                                char(2)
    Requested step count:                                                           20
    Actual step count:                                                              9
    Sampling Percent:                                                               0
    Tuning Factor:                                                                  20
    Out of range Histogram Adjustment is DEFAULT.                                  
    Low Domain Hashing.                                                            
         Step     Weight                    Value
            1     0.00000000       <=       "EJ"
            2     0.00531915        <       "P "
            3     0.10638298        =       "P "
            4     0.00000000        <       "S "
            5     0.30319148        =       "S "
            6     0.00000000        <       "U "
            7     0.56382978        =       "U "
            8     0.00000000        <       "V "
            9     0.02127660        =       "V "
    ....so the 20 rows are due to the slightly higher 0.10638 weight vs. the 0.1 magic value...
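    (For anyone wanting to reproduce the above: the two steps involved are the in-server stats update and the command-line optdiag utility; the server name and login below are placeholders for your site:
    update statistics sysobjects (type)
    go
    optdiag statistics master..sysobjects.type -Usa -Ppassword -SSERVERNAME
    )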
    Now....if we know the value for the where clause and have stats on the column, I notice we always estimate 1 row to be returned from the group by. However, if we don't have stats - or if we don't know the value - we estimate more than one row. For example, if I declare @var for type a la:
    declare @type char(2)
    select @type='P'
    select type, max(crdate)
    from sysobjects
    where type=@type
    group by type
    go
    I then get:
    ( PopGroupSorted
      proj: {{ sysobjects.type Gtg0.Gc4 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } }
      pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc4 }  = '}]
      subs: {sysobjects.type ,Gtg0._virtualagg }
      cost: 83.58154
      I/O estimate : [
      rowcount=4.17777
       averagewidth=17
       pages=1
       prefetchpages=1
       lio=0 pio=0 cpu=79.26596
       total lio=8 total pio=1.00007 total cpu=425.7979
    ...if we have stats on type, vs. the 1.88 when we don't. Which points out that the uniqueness and selectivity stats are likely considered in the final cost of the group by. In your case, the estimate of:
                    I/O estimate : [
                            rowcount=7.388771
                            averagewidth=19
                            pages=1
                            prefetchpages=1
                            lio=0 pio=0 cpu=158985.8
                            total lio=385.3457 total pio=77.60534 total cpu=635943
                            tempdb=0
    ...is likely due to the column uniqueness and selectivity stats based on the @MID variable.
    Note that even though we now know the GROUP BY is only going to return a single row, we still don't change the plan.....we still hit the index in ascending fashion. The reason is that we don't assume (or should I say, we don't make a special case for the fact) that the only aggregate is a max(). It could be count(), avg(), min() or a combination of them - any of which could require reading every index row, other than strictly min(). Could we do so?? Possibly.....however, arguably, the select max() without the group by is the more correct SQL to use.
    We do (on the other hand) have special processing for ScalarAgg with max(): it does the backward scan if max() is the only aggregate....if you do select min(), max() - then it does a forward scan even with ScalarAgg. If you have just min(), we do a forward scan and short circuit after the first row (as if select top 1)....
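    To make that last point concrete, here is the rewrite being suggested (it is simply the second query from the original post): when the grouping column is fixed by an equality predicate, dropping the GROUP BY and selecting the variable instead returns the same single row, but becomes eligible for the backward-scan ScalarAgg:
    declare @MID int
    select @MID = 25717
    -- same result as the GROUP BY form, but 4 page reads instead of 8000
    select @MID, max(Date) from CB where MID = @MID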

  • RE: Forte/CORBA inquiry - Long post

    Dave,
    Are you using any kind of structured object in the parameters to your CORBA-exposed method?
    We have found that Forté will change the order of struct class attributes when changes are made in the IDE (often even when seemingly unrelated). Look at the IDL generated in your last deploy to be sure the order of attributes is the same.
    - MikeL
    -----Original Message-----
    Date: Thu, 3 Aug 2000 12:10:17 -0700
    From: "Dave Ortman" <dortmanrcsis.com>
    To: kamranaminyahoo.com
    Subject: Forte/CORBA inquiry - Long post
    Hello,
    I've been experimenting with accessing Forte through Java via CORBA. While I have adequate experience with Forte and Java, I'm fairly new to CORBA. As a result, I've discovered some issues which I can't seem to resolve.
    I fear my explanation might be a bit lengthy, but I figure it's easier to offer comments when you have all the information. I apologize in advance.
    I started by writing a Java application that would access a Forte service object. To keep things simple, I didn't use any of our existing code base. I instead used the code examples in tech note #10950: Forte Service Objects and VisiBroker Java IIOP clients. However, instead of using the VisiBroker ORB, I used the Java 1.2 ORB. Other than that, I used the same code offered in the tech note. This example worked fine. I exported the IDL from Forte, ran the Sun idltojava tool, and compiled my Java classes.
    I then tried to carry this example forward and use some of our existing Forte objects. Things did not go as smoothly, and I am now left with a few questions.
    I created two new projects in my workspace - CorbaSO and CorbaManager.
    In CorbaManager I created a single class - CustomerManager. This object has one method which retrieves a Customer object based upon a CustomerID which is passed into the method. This project has three supplier plans which are needed to retrieve this Customer object.
    In the CorbaSO project, I created two classes - CorbaServer and TestClass. CorbaServer has two methods: getCustomer and GetDamage. The getCustomer method instantiates an instance of the CustomerManager class, and uses it to retrieve a customer based upon the CustomerID passed into the getCustomer method, then returns the CustomerName (string). The GetDamage method is the same method that's used in tech note #10950 (discussed above); I just wanted to make sure it still worked. I then made a Service Object from the CorbaServer class, which I called CorbaServerSO. This project has two supplier plans - the CorbaManager project discussed above, and the plan which contains the Customer object (which is returned by the CustomerManager class).
    Lastly, the test class simply contains a go() method, which I used to make sure the CorbaServerSO was working fine. All it does is call CorbaServerSO.getCustomerName(), passing in a CustomerID, and write the returning name out to the logger.
    The code compiles, and it works fine when I run it from within Forte [through TestClass.go()]. I then deployed the app and exported the IDL.
    However, when looking at the IDL, I find that Forte exported 900k of IDL. It exported the IDL definition of every class within the supplier plan hierarchy. Given that I only wanted to expose my one service object, I assumed that I would only get IDL for that one project. Instead, I got IDL for every object all the way down the supplier plan chain. So my first question is: why do I get IDL for everything?
    To make matters worse, this 900k IDL file is littered with errors - and my IDL compiler chokes. I edited the IDL so that it only contained definitions for the CorbaSO project, and ran the idltojava tool on that edited file. That created the Java files just fine, and I then compiled them into class files. So far so good.
    I then wrote a Java client to call methods on my newly published CorbaServerSO. I found that the GetDamage() method worked fine, while the getCustomer method yields the following error: 'org.omg.CORBA.UNKNOWN: minor code: 1 completed: Maybe'. This error is thrown immediately after calling the getCustomer method on the CORBA stub.
    Curious as to why one method worked and one didn't, I looked at the code in the GetDamage method. The code was as follows (all undeclared variables are input or output variables of type double):
    tmp1 : i4;
    tmp1 = i4(total * 0.0775 * 100.0 + 0.5);
    salesTax = double(tmp1) / 100.0;
    tmp1 = i4(total * 0.15 * 100.0 + 0.5);
    tips15 = double(tmp1) / 100.0;
    newTotalwoTips = salesTax + total + total;
    As written above, the code works. I then modified the code to the following (all parameters were unchanged):
    custManager : CustomerManager= new();
    tempCustomer : Customer = custManager.fetchCustomer(12345);
    tmp1 : i4;
    tmp1 = i4(total * 0.0775 * 100.0 + 0.5);
    salesTax = double(tmp1) / 100.0;
    tmp1 = i4(total * 0.15 * 100.0 + 0.5);
    tips15 = double(tmp1) / 100.0;
    newTotalwoTips = salesTax + total + total;
    After deploying the newly modified code, I find that the GetDamage method no longer works. All I did was instantiate an object and call a method on it; I never did anything with the results. Yet I get the same error: 'org.omg.CORBA.UNKNOWN: minor code: 1 completed: Maybe'.
    So my two questions are:
    1. Why did I get IDL for every object within the supplier plan tree when I was only attempting to expose one Service Object?
    2. Why does a method cease to work via CORBA when I interact with another object?
    Thanks in advance for any help,
    -Dave Ortman

    In a previous post, I asked:
    <<Why did I get IDL for every object within the Supplier Plan tree when I was only attempting to expose 1 Service Object?>>
    Since writing this, I have come across tech note #11427, which mentions an IIOPIgnore extended flag which you can set on objects to suppress IDL generation. Otherwise, Forte automatically generates IDL for any object which is marked as Distributed.
    Most of our objects are distributed. I set this property on the objects within the two projects I was working in, and the IDL for these objects was not created. However, the rest of the objects in the Supplier Plan chain were still translated to IDL. Am I going to have to set this property on every object? I don't see how I could be misapplying the property - but you never know.
    <<Why does a method cease to work via CORBA when I interact with another object?>>
    Through the use of putline()'s, I've found that the error is generated somewhere deeper within our application architecture. It may be a problem with our code - I'm not sure. I would still be happy to hear any input on the matter.
    Thanks,
    -Dave Ortman

  • Best Practices on SMART scans

    For Exadata X2-2, is there a best practices document on enabling SMART scans for all the application code?

    We cover more in our book, but here are the key points:
    1) Smart scans require a full segment scan to happen (full table scan, fast full index scan or fast full bitmap index scan).
    2) Additionally, smart scans require a direct path read to happen (reads directly into the PGA, bypassing the buffer cache). This is automatically done for all parallel scans (unless parallel_degree_policy has been changed to AUTO). For serial sessions, the decision to do a serial direct path read depends on the segment size, the _small_table_threshold parameter value (which is derived from the buffer cache size) and how many blocks of the segment are already cached. If you want to force the use of serial direct path reads for your serial sessions, you can set _serial_direct_read = always (see the sketch just after this list).
    3) Thanks to the above requirements, smart scans are not used for index range scans, index unique scans or any single-row/single-block lookups. So if you are migrating an old DW/reporting application to Exadata, you probably want to get rid of all the old hints and hacks in there, as you don't care about indexes for DW/reporting that much anymore (in some cases not at all). Note that OLTP databases still absolutely require indexes as usual - smart scans are for large bulk processing ops (reporting, analytics etc.), not OLTP-style single/few-row lookups.
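    As a sketch of forcing the serial direct path read decision mentioned in point 2 (note that _serial_direct_read is an undocumented underscore parameter, so treat this as a test-system experiment rather than a blanket recommendation):
    alter session set "_serial_direct_read" = always;
    After that, serial full segment scans in that session bypass the buffer cache and so become candidates for smart scans.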
    The ideal execution plan for taking advantage of smart scans for reporting would be:
    1) access only the required partitions thanks to partition pruning (partitioning key column choices must come from how the application code will query the data)
    2) full scan those partitions (which allows smart scans to kick in)
    2.1) no index range scans (single block reads!) and ...
    3) join all the data with hash joins, propagating results up the plan tree to the next hash join etc.
    3.1) This allows bloom filter predicate pushdown to the cell, to pre-filter rows fetched from the probe row-source of the hash join.
    So, simple stuff really - and many of your every-day optimizer problems just disappear when there's no trouble deciding whether to do a full scan vs. a nested loop with some index. Of course this is a broad generalization; your mileage may vary.
    Even though DWs and reporting apps benefit greatly from smart scans, and some well-partitioned databases don't need any indexes at all for reporting workloads, the design advice does not change for OLTP at all. It's just RAC with faster single block reads thanks to the flash cache. All your OLTP workloads, ERP databases etc. still need all their indexes as before Exadata (with the exception of any special indexes which were created only for speeding up some reports, which can now take better advantage of smart scans).
    Note that there are many DW databases out there which are used not only for brute-force reporting and analytics, but also for frequent single-row lookups (golden trade warehouses being one example, or other reference data). These would likely still need their indexes to support fast single (or a few) row lookups. So it all comes down to the nature of your workload, how many rows you're fetching and how frequently you'll be doing it.
    And note that smart scans only make data access faster - not sorts, joins, PL/SQL functions coded into the select column list or where clause, or application loops doing single-row processing. These still work as usual (with the exception of the bloom filter pushdown optimization for hash joins). Of course, when moving to Exadata from your old E25k you'll see a speedup anyway, as the Xeons with their large caches are just fast :-)
    Tanel Poder
    Blog - http://blog.tanelpoder.com
    Book - http://apress.com/book/view/9781430233923

  • Check for existence of a record

    I have a table XX_TEMP. Let us say the columns are inventory_item, organization_id and description.
    The values are:
    Inventory_item   Organization   Description
    200              m1             Sample
    200              m2             Not Sample
    400              m4             check
    700              m5             Test
    I just want to check for the existence of an item in the table. I have written two queries and would like to know which one is better in terms of performance:
    Q1:
    select count(1) from xx_temp where inventory_item=200 and rownum=1;
    Q2:
    select count(1) from dual where exists (select 1 from xx_temp where inventory_item=200);
    Both Q1 and Q2 return the same result. In fact, I was surprised by the result from Q1, as I expected that the rownum would be evaluated after the where condition: I expected Q1 to return 2.
    I thought that the below query:
    select count(1) from xx_temp where inventory_item=200;
    and Q1 would return the same result, as rownum would be evaluated at the end. In effect, I have 2 questions:
    1. Isn't rownum calculated at the end?
    2. What is the best way, in terms of performance, to check for the existence of a record?

    Internally this is how it works:
    select count(*) from xx_temp where inventory_item=200 and rownum=1;
      COUNT(*)
             1
    1 row selected.
    Execution Plan
       0       SELECT STATEMENT Optimizer Mode=ALL_ROWS (Cost=4 Card=1 Bytes=13)
       1    0    SORT AGGREGATE (Card=1 Bytes=13)
       2    1      COUNT STOPKEY
       3    2        TABLE ACCESS FULL XX_TEMP (Cost=4 Card=2 Bytes=26)
    Statistics
              5  user calls
              0  physical read total multi block requests
              0  physical read total bytes
              0  cell physical IO interconnect bytes
              0  commit cleanout failures: block lost
              0  IMU commits
              0  IMU Flushes
              0  IMU contention
              0  IMU bind flushes
              0  IMU mbu flush
              1  rows processed
    Plan
    1 Every row in the table XX_TEMP  is read.
    2 Processing was stopped when the specified number of rows from step 1 were processed.
    3 The rows were sorted to support a group operation (MAX,MIN,AVERAGE, SUM, etc).
    4 Rows were returned by the SELECT statement.
    COUNT STOPKEY knows how many rows you want and will just keep calling the child function under it in the execution plan tree to get more and more rows, until the required number of rows has been returned. Here it stopped after 1 iteration.
    And to answer your second question, as to which is the fastest way to check for the existence of a record: it depends on your requirement. Possible answers are rowid (the fastest way to look up a row), or similarly an index, etc., but all of this is relative to what you are working with.
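    As an illustration of the exists-based approach from the question (just a pattern sketch against the poster's table):
    select case
             when exists (select null from xx_temp where inventory_item = 200)
             then 'Y' else 'N'
           end as item_exists
    from dual;
    This lets the optimizer stop at the first matching row, much like the COUNT STOPKEY plan above.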
    Cheers,
    Manik.

  • Trees in Plan View for Illustrator - Vector Graphics

    Does anyone have any idea where to get good trees drawn in plan view for Illustrator?

    Brooke...
    You're in the wrong area of the forum to be asking your question.
    This area is specifically for suggesting new features for the Illustrator application itself.
    Head up one level to the Main Illustrator forum and start a new thread with your question.
    Thanks.

  • DBA Cockpit Sybase ASE - Access Plan not displayed

    Hello,
    We are running SAP ECC 740 on Sybase ASE 15.7. I'd previously used DBA Cockpit to view access plans for queries prior to our ECC upgrade. Since upgrading to 740, DBA Cockpit has had a facelift, and now I can't see an access plan when I click "Show Plan" on the cached statements or when I click EXPLAIN in ST05 - I just get a red cross (see attached image).
    Does anyone have any idea how to view the access plan? If I try a Runtime Explain, I get a blue spinning half-a-loop that just sits there spinning and never displays anything.
    Thanks, Chris

    Hi,
    I am running Java 8 update 4, i.e. Java 1.8, so, as Stefan suggested, I would also recommend upgrading your Java version.
    Advanced settings in the Java control panel:
    Debugging - not selected
    Java console - hide console selected
    Default Java for browsers - Internet Explorer
    Shortcut creation - Prompt user if hinted
    JNLP File/MIME Association - Prompt user
    Application Installation - Install if hinted
    Secure Execution Environment - all selected except (Show site certificate from server even when listed)
    Mixed code sandbox vs. trusted - Enable (show warning if needed)
    Perform signed code certificate revocation checks on - All certificates in chain of trust
    Check for signed code certificate revocation using - Both CRLs and OCSP
    Perform TLS certificate revocation checks on - All certificates in chain of trust
    Check for TLS certificate revocation using - Both CRLs and OCSP
    Advanced security settings - all selected except (Use SSL 2.0 compatible ClientHello format)
    Miscellaneous - nothing selected
    Kind Regards,
    Johan

  • ATP tree structures in planning book

    Is there a way to pull quantities that are in the ATP tree structure into a planning book?

    Michel -
    We understand that once the ATP tree structure is converted, the values will appear in the planning book. Our issue is that we want the quantities to display in the planning book both before and after conversion.
    Our situation is that we are using rules-based ATP and having the rule generate a stock transfer request. We are booking orders that are outside of the PPDS horizon, so a stock transfer request is not generated, but the ATP tree structure is. We need to be able to see this quantity in our planning book so we can take it into consideration during our runs.

  • Tree Display in Organizational Plan

    Dear Experts,
    Looks simple, but still I am not getting it...
    In PPOMA_BBP and PPO_SA_BBP, if I display the assignment tree for any org unit or position, it displays the assignment tree from top to bottom.
    But if I do the same for a user, central person or employee, it shows the assignment tree from bottom to top.
    Can we change the way it displays, i.e. 'bottom to top' or 'top to bottom', as and when we want?
    Thanks,
    Ganapathi

    In PPOMA_BBP and PPO_SA_BBP, if I display the assignment tree for any org unit or position, it displays the assignment tree from top to bottom. But if I do the same for a user, central person or employee, it shows the assignment tree from bottom to top.
    If you think about it, an org unit or position is like a placeholder, or a folder in a file system if you will. The tree would logically show what/who belongs to it. However, a user, central person or employee is a "leaf" of the tree, meaning it doesn't "contain" any sub-division of the structure. Hence the display shows what/who contains them, going upwards. Does that make sense to you?

  • I have Adobe Acrobat Pro on a monthly subscription, paid month to month from my card for the past three months. But I get a message saying that my trial software has run out and that I should pay for a plan. How do I resolve it? Thanks.


    Hi asiain,
    Do I understand correctly that you have a paid Acrobat subscription, but that it continues to start in trial mode? If that's the case, you just need to activate your software by signing in with your Adobe ID and password.
    Best,
    Sara

  • Plans for gcc 4.0.0?

    Just wondering what Arch Linux's plans are for gcc 4.0.0, now that it's released. The new optimization features seem rather interesting, and seem to be right up Arch's alley: an i686-optimized Linux distribution, after all, would benefit from better optimizations.
    I'm just hoping that not too much is broken with this new release (everyone remembers gcc 2.95 -> 3), but since this release came much faster and isn't an almost complete rewrite like egcs was, hopefully we can expect a smoother transition.
    I for one am excited about this new gcc release, much more than about a new KDE, GNOME, X.org or whatever... Maybe it's just me that likes compilers so much and hates bytecode. But still, what is Arch's stance on this? Any plans to move forward? Maybe provide a gcc 4 package but keep compiling Arch packages with 3.x until the bugs get ironed out? Or be truly bleeding edge (it is stable software, after all) and take the plunge and compile Arch packages with this new fancy gcc?
    Autovectorization seems rather interesting (http://gcc.gnu.org/projects/tree-ssa/vectorization.html - there's the link for anyone interested), yet it needs the -msse or -msse2 flags to be set. Is this feasible? Would a binary nicely optimized for processors with SSE or SSE2 run that much worse on hardware without such instructions?
    What is Arch's stance on instruction sets like SSE or 3DNow!? Are packages compiled with such optimizations? If so, why, and if not, why not? I personally think that most Arch users use relatively recent hardware and would benefit... But then again, I'm just wondering aloud.
    Anyway, sorry for all the questions...

    Duke wrote: What is Arch's stance on instruction sets like SSE or 3DNow!? Are packages compiled with such optimizations? If so, why, and if not, why not? I personally think that most Arch users use relatively recent hardware and would benefit... But then again, I'm just wondering aloud.
    This has been discussed a lot - my stance (not Arch's) is that Arch is i686-based - that's the common factor... if you start creeping with that, when will it stop? First we require SSE, then MMX, then SSE2... we may end up switching from "an i686-optimized distro" to "a distro optimized to run on AMD processors produced after November 12th 2004".
    In addition, SSE instructions only make sense in advanced math applications, FFT programs, and multimedia stuff... it's a small subset of the applications... think about it: how could something like vim benefit from loop vectorization? It really can't... sure, you may get some things improved... but it's vim, you don't need to do 50 calculations in the time it takes to do 1 - it's not processor intensive.
    And let's look at the apps that do make use of SSE: multimedia - improve the performance of mpegs and mp3s? Who watches that much porn?
    :shock:  :shock:
    FFT programs - anyone seriously doing complex FFT/DSP calcs on Arch? I doubt it.
    Advanced math - sure, it might make sense, but it doesn't warrant a recompile and optimization of the entire set of packages.
    So, in my opinion, by enabling SSE/SSE2/MMX/MMX2/whatever else you get:
    - loss of some processor support
    + improvement in MP3 playback
    + faster calculation of PI
    - a bunch of apps you had to redownload with negligible performance gain

  • Cannot connect to Planning from SmartView

    Hi to all! :)
    I tried it in both Excel 2007 and 2003, and I am using Hyperion Planning 11.1.1.3. In Excel, I went to Hyperion > Options and entered http://servername:8300/HyperionPlanning/SmartView as the Hyperion Provider Services URL. Next, when I go to Data Source Manager, no Hyperion Planning applications appear in the tree; the tree does not even appear with its folders. And when I click Connect to Provider Services in Data Source Manager, SmartView throws a message box: 'Unable to load the following resource: Please enter user credential to login.' After closing that box and clicking Connect to Provider Services a second time, nothing happens at all and even the error box no longer appears.
    However, if I enter http://servername:13080/aps/SmartView as the Hyperion Provider Services URL, then Data Source Manager shows the Essbase cubes. So I would be immensely grateful for any hint on where I should look to solve this problem :)

    Use the aps/SmartView link; that is the proper link through Provider Services. The other link is used for the independent connection, not the common connection.
    Make sure that the 'predefined' button is set to 'all', not 'favorites'.
    To add a Planning server, right-click on the Hyperion Planning folder and click 'add'.
