Use of Essbase typed measures in Planning v11.1.2.2

Hi. Can I use Essbase date type measures in Planning 11.1.2.2? I want to store contract start and contract end dates, possibly in a Measures dimension separate from the Accounts dimension.
Thanks in advance.

Typed measures are not yet supported in Planning 11.1.2.2.
Cheers
John
http://john-goodwin.blogspot.com/

Similar Messages

  • Can I use multiple Essbase servers if I have multiple Planning web servers?

    Hi,
    can I use multiple Essbase servers if I have multiple Planning web servers?
    Can I have one Finance Planning application running on one web server against one Essbase server, and another Operations Planning application running on a different web server against a different Essbase box?
    Thanks in Advance.

    Hi,
    you can have as many Essbase servers as you want, provided they are registered with the same Shared Services instance. When you create a data source for a Planning application, you provide the Essbase server name and login details, so one data source can point to one Essbase server and another data source to a different one.
    Also, since Planning stores its metadata in an RDBMS, you can have multiple Planning web servers pointing to the same Planning application. You can use a load balancer in front of them as well.
    Let me know if it helps.
    Cheers
    RS

  • Multiple Essbase-derived measure dimensions in OBI EE

    Hi guys,
    I have a requirement here which basically requires using measures coming from two different measure dimensions of Essbase (Account and Scenario) next to each other in reports.
    I set both to dimension type "measure dimension" and made sure the respective measures exist as physical cube columns. Then I tried two alternatives:
    1.) One logical fact table - result spool:
    -------------------- Logical Request (before navigation):
    RqList distinct
    Actual:[DAggr(Account.Actual by [ ] )] as c1 GB,
    Actual LY:[DAggr(Account.Actual LY by [ ] )] as c2 GB,
    Target:[DAggr(Account.Target by [ ] )] as c3 GB,
    Vs Actual LY:[DAggr(Account.Vs Actual LY by [ ] )] as c4 GB,
    Vs Actual LY %:[DAggr(Account.Vs Actual LY % by [ ] )] as c5 GB,
    Ops:[DAggr(Account.Ops by [ ] )] as c6 GB,
    Sales_In:[DAggr(Account.Sales_In by [ ] )] as c7 GB
    +++Administrator:2a0000:2a000a:----2008/11/10 14:29:55
    -------------------- Execution plan:
    RqList <<60982>> [for database 0:0,0] distinct
    D1.c1 as c1 [for database 0:0,0],
    D1.c2 as c2 [for database 0:0,0],
    D1.c3 as c3 [for database 0:0,0],
    D1.c4 as c4 [for database 0:0,0],
    D1.c5 as c5 [for database 0:0,0],
    D1.c6 as c6 [for database 0:0,0],
    D1.c7 as c7 [for database 0:0,0]
    Child Nodes (RqJoinSpec): <<61009>> [for database 0:0,0]
    (
    RqList <<60949>> [for database 3023:179879:Full cube,34]
    AggrExternal(INSIT.Actual) as c1 GB [for database 3023:179879,34],
    AggrExternal(INSIT.Actual LY) as c2 GB [for database 3023:179879,34],
    AggrExternal(INSIT.Target) as c3 GB [for database 3023:179879,34],
    AggrExternal(INSIT.Vs Actual LY) as c4 GB [for database 3023:179879,34],
    AggrExternal(INSIT.Vs Actual LY %) as c5 GB [for database 3023:179879,34],
    AggrExternal(INSIT.Ops) as c6 GB [for database 3023:179879,34],
    AggrExternal(INSIT.Sales_In) as c7 GB [for database 3023:179879,34]
    Child Nodes (RqJoinSpec): <<60952>> [for database 3023:179879:Full cube,34]
    INSIT T179892
    ) as D1
    +++Administrator:2a0000:2a000a:----2008/11/10 14:29:55
    -------------------- Sending query to database named Full cube (id: <<60949>>):
    select
    { [Scenario].[Actual],
    [Scenario].[Actual LY],
    [Scenario].[Target],
    [Scenario].[Vs Actual LY],
    [Scenario].[Vs Actual LY %],
    [Scenario].[Ops],
    [Scenario].[30001]
    } on columns
    from [INSIT_C.INSIT]
    2.) Two logical fact tables - result spool:
    +++Administrator:2a0000:2a0007:----2008/11/10 14:49:23
    -------------------- Logical Request (before navigation):
    RqList distinct
    Account.Sales as c1 GB,
    Account.Backlog as c2 GB,
    Account.Sales_In as c3 GB,
    Scenario.Act vs Bud % as c4 GB,
    Scenario.Act vs Budget as c5 GB
    OrderBy: c1 asc, c2 asc, c3 asc, c4 asc, c5 asc
    +++Administrator:2a0000:2a0007:----2008/11/10 14:49:23
    -------------------- Query Status: Query Failed: [nQSError: 15018] Incorrectly defined logical table source (for fact table Account) does not contain mapping for [Scenario.Act vs Bud %, Scenario.Act vs Budget].
    +++Administrator:2a0000:2a0009:----2008/11/10 14:49:28
    -------------------- Logical Request (before navigation):
    RqList distinct
    Account.Sales as c1 GB,
    Account.Backlog as c2 GB,
    Account.Sales_In as c3 GB
    OrderBy: c1 asc, c2 asc, c3 asc
    +++Administrator:2a0000:2a0009:----2008/11/10 14:49:28
    -------------------- Execution plan:
    RqList <<37039>> [for database 0:0,0] distinct
    D1.c3 as c1 GB [for database 0:0,0],
    D1.c2 as c2 GB [for database 0:0,0],
    D1.c1 as c3 GB [for database 0:0,0]
    Child Nodes (RqJoinSpec): <<37047>> [for database 0:0,0]
    (
    RqList <<37051>> [for database 3023:179879:Full cube,34]
    INSIT.Sales_In as c1 GB [for database 3023:179879,34],
    INSIT.Backlog as c2 GB [for database 3023:179879,34],
    INSIT.Sales as c3 GB [for database 3023:179879,34]
    Child Nodes (RqJoinSpec): <<37059>> [for database 3023:179879:Full cube,34]
    INSIT T179892
    ) as D1
    OrderBy: c1 asc, c2 asc, c3 asc [for database 0:0,0]
    +++Administrator:2a0000:2a0009:----2008/11/10 14:49:29
    -------------------- Sending query to database named Full cube (id: <<37051>>):
    select
    { [Scenario].[Sales], [Scenario].[Backlog], [Scenario].[30001]
    } on columns
    from [INSIT_C.INSIT]
    +++Administrator:2a0000:2a0009:----2008/11/10 14:49:29
    -------------------- Query Status: Query Failed: Essbase Error: Unknown Member Scenario.Sales used in query
    So basically neither of the two options works. Most of the time the BI Server just assumes that all the measures come from the last hierarchy I defined as a measure dimension, hence the unknown member error.
    Likewise with the two logical tables: it mangles the query completely and doesn't even manage to resolve the column mappings against the logical tables.
    Has anyone ever tried to mix measure dimensions retrieved from Essbase? Using OBI EE it's very tricky to define the actual measure containing data points in Essbase.
    Cheers,
    C.
    Edited by: Christian Berg on Nov 10, 2008 2:58 PM
    Edited by: Christian Berg on Nov 10, 2008 3:00 PM
    Sorry for the reformatting...the forum doesn't like the square brackets in the BI server logs...

    Solved. Takes some nibbling around in Answers, but doable with unioning and pivot tables.
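    In a nutshell: each criteria in the union pulls measures from only one measure dimension and pads the other columns with constants, and a pivot table view then folds the rows back together. The generated logical SQL looks roughly like this (a sketch only; the subject area and column names are illustrative, taken from the examples above):

    SELECT Account."Sales" saw_0, 0 saw_1 FROM "Full cube"
    UNION ALL
    SELECT 0 saw_0, Scenario."Act vs Budget" saw_1 FROM "Full cube"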
    Cheers,
    C.

  • Essbase Date Measures in OBIEE

    Does OBIEE support Essbase date measures? Can we have an Essbase date measure display as a DATE in OBIEE?
    Thanks

    Hi Nilaksha,
    I thought you might say that. I think this is because Essbase doesn't use data types as such: all data is stored as a numeric value, and the characteristics of that data, or how it is treated, are more like metadata. This certainly rings true for the time dimensions I have seen. I think your best bet is to look at how you can format the numbers in the columns to make them appear in the format you require.
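    If the underlying number does make it through to a column, one option is a conversion rather than pure formatting. As I understand it, Essbase stores typed date measures internally as seconds since 1970-01-01 (please verify against your outline), so an Answers column formula along these lines would turn the value into a real timestamp (the subject-area and column names here are only illustrative):

    TIMESTAMPADD(SQL_TSI_SECOND,
                 CAST("Cube"."Measures"."Contract Start" AS INTEGER),
                 TIMESTAMP '1970-01-01 00:00:00')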
    Regards
    Ed

  • Can I use Apple TV with unlimited data plan for my IPad without any other internet connection?

    I live in a weird area with no hardwired internet connection.
    My only internet connection is my iPad Air's UNLIMITED DATA plan from AT&T... It's LTE and it works great for my usual work.
    I want to cancel my stupid DirecTV account and use Apple TV instead...
    Is it possible to subscribe to a TV account online using my iPad Air's unlimited data plan and watch it on my TV via Apple TV, without using any other type of home network account?

    If you intend to use your iPad's data plan, that will require the tethering/hotspot feature. As I said, you will not be able to use AirPlay; that needs a Wi-Fi LAN (home network). This may change with iOS 8, but we won't know for sure until the release.

  • You cannot use this transaction type to post to this asset Message no. AA834

    < MODERATOR: Message locked. Please post this message in the SAP ERP Financials - Asset Accounting forum. >
    Hi,
    I am settling credit values (negative values) from WBS elements to assets under construction during the CJ88 period settlement run.
    These assets belong to an investment measure. For a few assets I am getting the error below:
    You cannot use this transaction type to post to this asset
    Message no. AA834
    Diagnosis
    The transaction type entered belongs to transaction type group 15. According to the specifications for this transaction type group, postings with transaction types belonging to this group are only allowed in specific asset classes (for example, asset classes for assets under construction).
    The asset to which you are posting belongs to class XXXXX (chart of depreciation XXXX). You cannot post to this class using the transaction type you have entered.
    Procedure
    Check the asset number entered. You may want to allow posting with this transaction type group for the asset class of the asset.
    I know that normally we do it this way:
    During charging:
    Dr Exp A/c (WBS name)
    Cr B/S A/c
    During settlement:
    Cr Exp A/c (WBS name)
    Dr AuC GL A/c
    But we have a situation where, during settlement, we are doing the reverse:
    Cr AuC GL A/c
    Dr Exp A/c (WBS name)
    These expenses come through POs and there are no down payments. This error takes me to the down payment accounts configuration, but we don't have a down payment scenario. Can anyone advise? Thanks.
    Regards,
    Sridhar

    Hi,
    Please check SAP Note 1091728 for this.
    Regards

  • You cannot use this transaction type to post to this asset Message no.AA834

    Hi,
    (The question is a verbatim duplicate of the previous thread, "You cannot use this transaction type to post to this asset Message no. AA834", above.)
    Regards,
    Sridhar

    When you have an investment order/WBS, you create the AuC from it and collect the costs on the AuC. Finally, you enter the final asset (from a normal asset class) in the settlement rule and transfer the costs to that final asset.
    An investment profile should be assigned in the IO or WBS, and the AuC asset class should be set up correctly.
    Search this forum for more information or read the SAP help.

  • Error handling using fault message type in outbound synchronous ABAP proxy

    Hi,
    We have a scenario with an outbound synchronous ABAP proxy to a synchronous SOAP receiver. The requirement is to send multiple records in a single request and get a response for all the records sent (in the same response message).
    Say I send 10 records from ECC; I should get 10 records as a response from SOAP to ECC. But the problem is that there could be some invalid requests, for which an error status code should be sent as part of the response.
    Source Structure
    Req_Proxy
        req (0..unbounded, string)
    Response Structure
    Resp_SOAP
      Resp (0..Unbounded)
         respString (0..1, String) (carries the actual response message)
         statusCode (0..1, String) (carries the status of the response, for example 001 (successful), 002 (error))
    And now we are planning to make use of a fault message type to track the errors from SOAP (status code 002). But we are not sure how to track this for all the requested records. Is it possible to track the errors for all the requested records in a single call using a fault message type? For example, if 8 records are successful and 2 are invalid, then we should get 10 records in the response: 8 for valid and 2 for invalid, accordingly.
    Please clarify.
    Thanks.
    Rohit

    For eg if 8 records are successful and 2 are invalid, then we should get 10 records in response 8
    for valid and 2 for invalid accordingly.
    Check if you can modify the WSDL structure to include an error node that would get populated in case of invalid entries....this would mean that you get the success and failure details in a single message....also, at the proxy end, make the necessary changes in your DT (data type).....maybe then you do not need to use the fault message...
    Regards,
    Abhishek.

  • Smart View for Essbase Not Available in Planning Rapid Deployment

    I have successfully deployed EPM 11.1.2.3 by following the steps in the Planning Rapid Deployment guide (http://docs.oracle.com/cd/E40248_01/epm.1112/epm_planning_rapid_deploy/epm_planning_rapid_deploy.html). However, when I bring up Smart View using a Shared Connections URL of http://<servername>:9000/workspace/SmartViewProviders, I only get connections for Planning and for the Reporting and Analysis Framework. How can I get a connection for Essbase?

    As it turns out, the 11.1.2.3 Planning Rapid Deployment wizard doesn't install Essbase Provider Services, just Planning Provider Services and Reporting and Analysis Provider Services. Until I installed Essbase Provider Services, I wasn't even able to create a private connection to Essbase. I had to run InstallTool.cmd and select "Provider Services Web Application" from the Essbase section of the selection screen. I also had to run the configuration utility and select "Deploy to Application Server" under "Provider Services" in the Essbase section of the selection screen.
    After that, I was able to create a private connection to Essbase with this URL: http://<server name>:9000/aps/APS
    Although the private connection was now working, the Essbase connection was still not showing up as a shared connection in Excel, when I set the Excel Smart View option for Shared Connections URL to http://<server name>:9000/workspace/SmartViewProviders. I was able to get this working by running the configuration utility again and selecting "Configure Web Server" in the "Hyperion Foundation" section of the selection screen.
    I was also able to create an XML file as Celvin suggested, using http://<server name>:9000/aps/APS for the Essbase connection. Thanks for the suggestion!
    -Tom

  • Using CLOB data type - Pros and Cons

    Dear Gurus,
    We are designing a database that will be receiving comments from an external data source. These comments are stored as CLOBs in the external database. We found that only 1% of incoming data will be larger than 4000 characters, and we are now evaluating the pros and cons of storing only 4000 characters of incoming comments in a VARCHAR2 column versus using the CLOB data type.
    Some of the concerns brought up during discussion were:
    - having to store CLOBs in separate tablespace;
    - applications such as Toad require changing default settings to display CLOBs in the grid; the default is not to display them;
    - applications that build web pages with CLOBs will struggle to fit 18 thousand characters of which 17 thousand are blank lines;
    - caching CLOBs in memory will consume a big chunk of the data buffers, which will affect performance;
    - to manipulate CLOBs you need PL/SQL anonymous block or procedure;
    - bind variables cannot be assigned CLOB value;
    - dynamic SQL cannot use CLOBs;
    - temp tables don't work very well with CLOBs;
    - fuzzy logic search on CLOBs is ineffective;
    - not all ODBC drivers support Oracle CLOBs
    - UNION, MINUS, INTERSECT don't work with CLOBs
    I have not dealt with the CLOB data type in the past, so I am hoping to hear from you about any possible issues/hassles we may encounter.

    848428 wrote:
    Dear Gurus,
    We are designing a database that will be receiving comments from external data source. These comments are stored as CLOB in the external database. We found that only 1% of incoming data will be larger than 4000 characters and are now evaluating the Pros and Cons of storing only 4000 characters of incoming comments in VARCHAR2 data type or using CLOB data type.
    Some of the concerns brought up during discussion were:
    - having to store CLOBs in separate tablespace;
    They can be stored inline too. Depends on requirements.
    - applications, such as Toad, require changing default settings to display CLOBs in the grid;
    Toad is a developer tool, so that shouldn't matter. What should matter is how you display the data to end users etc., but that will depend on the interface. Some can handle CLOBs and others not. Again, it depends on the requirements.
    - applications that build web pages with CLOBs will struggle to fit 18 thousand characters of which 17 thousand are blank lines;
    Why would they struggle? 18,000 characters is only around 18k in file size; that's not that big for a web page.
    - caching CLOBs in memory will consume a big chunk of data buffers which will affect performance;
    Who's caching them in memory? What are you planning on doing with these CLOBs? There's no real reason they should impact performance any more than anything else, but it depends on your requirements as to how you plan to use them.
    - to manipulate CLOBs you need PL/SQL anonymous block or procedure;
    You can manipulate CLOBs in SQL too, using the DBMS_LOB package.
    - bind variables cannot be assigned CLOB value;
    Are you sure?
    - dynamic SQL cannot use CLOBs;
    Yes it can. 11g supports CLOBs for EXECUTE IMMEDIATE statements, and pre-11g you can use the DBMS_SQL package with CLOBs split into a VARCHAR2S structure.
    - temp tables don't work very well with CLOBs;
    What do you mean "don't work well"?
    - fuzzy logic search on CLOBs is ineffective;
    Seems like you're pulling information from various sources without context. Again, it depends on your requirements as to how you are going to use the CLOBs.
    - not all ODBC drivers support Oracle CLOBs;
    Not all, but there are some. Again, it depends what you want to achieve.
    - UNION, MINUS, INTERSECT don't work with CLOBs;
    True.
    I have not dealt with the CLOB data type in the past, so I am hoping to hear from you of any possible issues/hassles we may encounter?
    You may have more hassle if you "need" to accept more than 4000 characters and you are splitting the text into separate columns or rows, when a CLOB would do it easily.
    It seems as though you are trying to find all the negative aspects of CLOBs and ignoring all the positive aspects, and also ignoring the negative aspects of not using CLOBs.
    Without context your assumptions are just that, assumptions, so nobody can tell you if it will be right or wrong to use them. CLOBs do have their uses, just as XMLTYPEs have their uses etc. If you're using them for the right reasons then great, but if you're ignoring them for the wrong reasons then you'll suffer.
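    To back up the points above about DBMS_LOB and dynamic SQL, here is a minimal PL/SQL sketch (the comments table is hypothetical, and the CLOB-typed EXECUTE IMMEDIATE statement assumes 11g):

    DECLARE
      l_comment CLOB;
      l_stmt    CLOB;
    BEGIN
      -- Build and inspect a CLOB with the DBMS_LOB API
      DBMS_LOB.CREATETEMPORARY(l_comment, TRUE);
      DBMS_LOB.APPEND(l_comment, TO_CLOB('an incoming comment, possibly longer than 4000 characters'));
      DBMS_OUTPUT.PUT_LINE('Length: ' || DBMS_LOB.GETLENGTH(l_comment));

      -- 11g: EXECUTE IMMEDIATE accepts a CLOB statement, and a CLOB bind variable works too
      l_stmt := 'INSERT INTO comments (id, body) VALUES (:1, :2)';
      EXECUTE IMMEDIATE l_stmt USING 1, l_comment;

      DBMS_LOB.FREETEMPORARY(l_comment);
    END;
    /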

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL?
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL>

    Original query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM ,
         tf.MIME_TYPE ,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
          (
               SELECT *
               FROM
                    TABLE
                    (
                         SELECT
                              CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                              OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                         FROM
                              dual
                    )
          ) tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed

    Indexes are present in both tables (TG_FILE, TG_FILE_DATA) on column FILE_ID.
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297

    Ideally the view should have used a NESTED LOOPS join, so the indexes could be used, since the number of rows coming from the object collection is only 2.
    But the optimizer takes the default of 8168 rows, leading to a HASH join between the tables and full table scans.
    So my question is: is there any way I can change the statistics while using collections in SQL?
    I could use hints to force the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
         (
              SELECT *
              FROM
                   TABLE
                   (
                        SELECT
                             CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                             OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                        FROM
                             dual
                   )
         ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I had found that we can use the CARDINALITY hint to set statistics for a TABLE function.
    But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it in my first post:
    http://www.oracle-developer.net/display.php?id=427
    The article mentions three hints, plus the Extensible Optimizer, for setting statistics:
    1) CARDINALITY (undocumented)
    2) OPT_ESTIMATE (undocumented)
    3) DYNAMIC_SAMPLING (documented)
    4) Extensible Optimizer
    I tried it out with the different hints and it is working as expected:
    CARDINALITY and OPT_ESTIMATE take whatever value is set in the hint,
    but the DYNAMIC_SAMPLING hint provides the most correct estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT /*+ cardinality(e, 5) */ *
    FROM
         TABLE
         (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT /*+ opt_estimate(table, e, scale_rows=0.0006) */ *
    FROM
         TABLE
         (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT /*+ dynamic_sampling(e, 5) */ *
    FROM
         TABLE
         (
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)

    I will be testing the last option, the Extensible Optimizer, and will put my findings here.
    I hope Oracle improves statistics gathering for collections used in SQL in future releases, rather than just deriving a default from the block size.
    By the way, do you know why it uses the default block size? Is it because it is the smallest granular unit that Oracle provides?
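    In the meantime, here is the minimal sketch of the Extensible Optimizer route I intend to test. Statistics can only be associated with a function, so the sketch wraps the collection in a (hypothetical) pipelined function, with hard-coded rows standing in for however the collection is really produced; treat it as untested scaffolding rather than a finished solution:

    -- Wrap the collection in a pipelined function so statistics can be associated with it
    CREATE OR REPLACE FUNCTION get_attachments RETURN TABLE_ESC_ATTACH PIPELINED IS
    BEGIN
      PIPE ROW (OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL));
      PIPE ROW (OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL));
      RETURN;
    END;
    /
    -- Minimal statistics type: tells the optimizer to expect 2 rows
    CREATE OR REPLACE TYPE attach_stats AS OBJECT (
      dummy NUMBER,
      STATIC FUNCTION ODCIGetInterfaces(p_ifcs OUT SYS.ODCIObjectList) RETURN NUMBER,
      STATIC FUNCTION ODCIStatsTableFunction(p_func  IN  SYS.ODCIFuncInfo,
                                             p_stats OUT SYS.ODCITabFuncStats,
                                             p_args  IN  SYS.ODCIArgDescList) RETURN NUMBER
    );
    /
    CREATE OR REPLACE TYPE BODY attach_stats AS
      STATIC FUNCTION ODCIGetInterfaces(p_ifcs OUT SYS.ODCIObjectList) RETURN NUMBER IS
      BEGIN
        p_ifcs := SYS.ODCIObjectList(SYS.ODCIObject('SYS', 'ODCISTATS2'));
        RETURN ODCIConst.success;
      END;
      STATIC FUNCTION ODCIStatsTableFunction(p_func  IN  SYS.ODCIFuncInfo,
                                             p_stats OUT SYS.ODCITabFuncStats,
                                             p_args  IN  SYS.ODCIArgDescList) RETURN NUMBER IS
      BEGIN
        p_stats := SYS.ODCITabFuncStats(2);  -- hard-coded expected cardinality
        RETURN ODCIConst.success;
      END;
    END;
    /
    ASSOCIATE STATISTICS WITH FUNCTIONS get_attachments USING attach_stats;
    -- The plan for this should now estimate 2 rows instead of the 8168 default
    SELECT * FROM TABLE(get_attachments());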
    Regards,
    B

  • Assigning Equipment Number to MIC (Use of class type 005)

    Hi,
    I want to assign an equipment number to a MIC. For this I have made a class with class type 005 and assigned a characteristic for the equipment number (a field from table EQUI). I have assigned the class to the MIC. Now I want to enter the equipment number when the MIC is attached to an inspection plan. Please tell me how to do so, and whether I am doing it correctly.

    I am not sure whether classification can be used here, or whether it is the correct way, as classification has a very different use altogether.
    My understanding is that you can use:
    1. The PRT field in the inspection plan, assigning it to a MIC at the characteristic level.
    2. The "Inspection method" to specify the method and equipment you are going to use for inspection. As the method is free text, you can virtually write anything here. Attach the method to the MIC in QS22/QS23.
    3. The "Inspection char description / search field" to hold the name of the equipment.
    This is my understanding.

  • Which planning function do I have to use and how do I write this planning function?

    Hi BI Gurus,
    I have rolled out BW SEM-BPS planning layouts for the annual budget in my organisation.
    There are two layouts given to each sales person:
    1) Sales quantity, to be entered material- and country-wise for all 12 months (April 2009 to March 2010).
    2) Rate per unit, to be entered in a second sheet, material- and country-wise, for the total quantity entered in the first layout.
    Now I need to calculate the sales value for each period and for each material.
    Which planning function do I have to use, and how do I write this planning function?
    Please suggest a solution ASAP.
    Thanks in advance,
    Nilesh

    Hi Deepti,
    Sorry to trouble you...
    I require your help with the following scenario for calculating sales value.
    I have plan data in the following format:
    Country   Material    Customer    Currency    Fiscyear    Fiscper           Qty         Rate        Sales Value
    AZ          M0001      CU001          #             2009          001.2009        100.00                        
    AZ          M0001      CU002          #             2009          001.2009        200.00                        
    BZ          M0001      CU003          #             2009          001.2009        300.00
    BZ          M0001      CU003          #             2009          002.2009        400.00
    BZ          M0002      CU003          #             2009          002.2009        300.00
    AZ          M0001       #               USD          2009             #                                 10.00
    BZ          M0001       #               USD          2009             #                                 15.50
    BZ          M0002       #               USD          2009             #                                 20.00
    In the above data, the rate lines are entered in the second layout, where the user enters values at the country/material level with 2009 as the FISCYEAR value.
    I am facing a problem with this type of data.
    I want to store the sales value for each material quantity.
    Please suggest a solution.
    Regards,
    Nilesh

  • Report Painter - What is value type 10 Statistical Plan

    One of my customers is running report 6o00-001 Orders: Actual / Plan / Variance. This report is displaying a plan value when, they say, no plan value has been entered. I have checked normal planning and this is correct - there is no plan value. However, when I check the report painter definition, the column in question will display value type 1 (Plan) and value type 10 (Statistical Plan).
    Can anyone tell me:
    - what is a statistical plan on an order?
    - how can I remove the statistical plan value?
    Many thanks in advance.

    Hi Szymon
    Thanks for your answer.
    But checking the correction instructions, the code refers to business transaction RKP5, while the postings are being recorded with business transaction RKP1.
    I'm looking for a similar note for this business transaction, but I can't find one.
    Do you know if there is something like that?
    Regards
    This is how the record looks in table COSP:
    Client (MANDT): 130
    Ledger (LEDNR): 00
    Object number (OBJNR): OR000000760636
    Fiscal Year (GJAHR): 2014
    Value Type (WRTTP): 10
    Version (VERSN): 001
    Cost Element (KSTAR): 0056650101
    CO subkey (HRKFT): (empty)
    Business Transaction (VRGNG): RKP1
    Trading Partner (VBUND): (empty)
    Trading Part.BA (PARGB): (empty)
    Dr/Cr indicator (BEKNZ): S (debit)
    Transaction Currency (TWAER): BRL
    Period block (PERBL): 016
    Unit of Measure (MEINH): (empty)
    Code in correction instruction.
    *  modification to post revenues - Note 604092
    if coeja-wrttp = '10' and uf-activ = 'RKP5'.
    elseif coeja-wrttp ge '01' and coeja-wrttp le '04'.
    else.
      continue.
    endif.
    *  end modification to post revenues - Note 604092
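    For reference, you can also pull such records straight from table COSP (via SE16, or on the database) to see which value types exist for the order. A sketch, reusing the object number from the record above:

    SELECT objnr, gjahr, wrttp, versn, kstar, vrgng
      FROM cosp
     WHERE objnr = 'OR000000760636'
       AND gjahr = '2014'
       AND wrttp = '10';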

  • Eliminating the use of Activity Types -- what are the impacts and risks?

    Our primary FI stakeholder wants to eliminate the use of activity types in our ERP 6.0 system. He would prefer to use cost elements and cost centers. I know that eliminating activity types will impact our planning processes, plus our ability to segregate costs within a cost center. What else am I missing?

    Hi,
    I can't understand the idea of "eliminating the use of activity types"...
    What about direct activity allocation (based on time confirmations using activity types)? Is there no need for that? Don't you use production orders?
    best regards, Christian
    Edited by: Christian Ortner on Mar 29, 2010 8:19 PM
