A/P & Consignment data pull to Planning Cube.....

I put this in the "BC & Extractors" section, but did not get any response.....
Need a couple of clarifications (brief background is provided.....):
1. We do DELTA extractions using 0FI_AP_3 and 0FI_AR_4 to a base cube which includes both 'Open' and 'Cleared' items. From this cube we do a DELTA load to a Planning Cube on a DAILY basis. We filter the data on Company Code and 'Item Status' (only OPEN items) for planning purposes. The key figure pulled to the Planning Cube is '0DEB_CRE_DC'. The base cube has data at line-item (document) level, whereas the planning cube is only at Customer & Vendor level.
The question is - will this daily DELTA to the Planning Cube with a filter on Open Items bring the correct key figure? For example, if the status of one of the items pulled as OPEN in an earlier delta changes to 'CLEARED' the subsequent day, will the key figure reflect the change and show the correct value? Is there any impact on the key figure due to the filter on Open Items?
My thinking is that the key figure 0DEB_CRE_DC will reflect the correct number (adjusted) since it is a summated key figure. I could be totally wrong and want to understand this with comments from experts.
2. Has anyone worked with pulling Consignment (SMI) data into a Planning Cube? We pull this data using a generic extractor from the SAP table "RKWA" with some logic. Since the A/P data I pull contains both SMI and non-SMI transactions, how can I avoid bringing in the SMI transactions from A/P and double counting? There is no flag to differentiate the SMI / non-SMI transactions....
I am hoping someone has worked in these areas and can offer expert comments / recommendations.
Thanks.. Shaun
Please post one question only once
Edited by: Vikram Srivastava on Aug 13, 2010 1:02 PM

Hello Lee,
For your first question: because you are using a filter on the status of the record,
let's say the status is 'Open' today; the delta will pick this record up since it passes the filter. But if the status changes to 'Cleared' tomorrow, that change record will be ignored completely by the filter, and hence in the next-level target you will never see the corrected value.
So you need to plan your filters accordingly.
Hope this helps.
Thanks
Murali
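A worked illustration of the point above (the item and amounts are made up, not from the thread):
Day 1: item 4711 for vendor V1 is OPEN with 0DEB_CRE_DC = 1,000. It passes the 'Open' filter, so the planning cube shows 1,000 for V1.
Day 2: item 4711 is cleared. The base cube receives the corresponding change records, but they now carry status 'Cleared', so the filtered delta to the planning cube drops them. The planning cube still shows 1,000 for V1 even though nothing is open any more.
The key figure being summated does not help here, because the offsetting records never reach the planning cube. One way around this (not discussed further in the thread) would be to load without the status filter and restrict to open items at query time instead.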

Similar Messages

  • Is there a way to read data from an IP planning cube with ABAP?

    Hello All.
    My scenario is as follows:
    I have an ODS where we store cost centers to be planned. This ODS is loaded via a manual flat file load (regular data transfer process). In order to avoid inconsistent data in the planning cube, I want to check in my ODS load process whether any cost center which is already planned (i.e. has planned data in my planning cube) is being deleted in the loaded file.
    To do this, I need to access my planning data cube (I guess also the planning buffer), so my question is whether there is any function module which retrieves data from a planning level as reference data.
    Thanks a lot and best regards,
    Alfonso.

    Hi Alfonso,
    note 1101726 shows how the plan buffer can be read.
         " Get the plan buffer instance for the InfoProvider
         l_r_plan_buffer = cl_rsplfa_plan_buffer=>if_rsplfa_plan_buffer~get_instance( i_infoprov ).
         " Read the buffered plan data for the characteristic selection in l_t_charsel
         l_r_plan_buffer->get_data( EXPORTING i_t_charsel            = l_t_charsel
                                              i_include_zero_records = rc_false
                                              i_r_msg                = l_r_msg
                                    IMPORTING e_r_th_data            = l_r_th_data
                                    EXCEPTIONS OTHERS                = 2 ).
    Please take a look at the note. You do not need to implement the after_burn_selection exit, but you will find sample code showing how to read the planning buffer. Please give it a try.
    Another solution would be to use the function module RSDRI_INFOPROV_READ. But you need to make sure that you first close the yellow (open) request. This can be done using function module RSAPO_SWITCH_BATCH_TO_TRANS.
    Hope this helps
    Matthias Nutt
    SAP Consulting Switzerland
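    As a rough sketch of the second suggestion (illustrative only - the cube name, key figure, cost center value and result structure are made up, and the exact RSDRI_INFOPROV_READ signature and result field naming should be verified in SE37 before use), a check whether one cost center already has plan data could look roughly like this:

      " Sketch: does cost center 1000 already have data in the planning cube?
      TYPES: BEGIN OF ty_data,
               costcenter TYPE c LENGTH 10,           " result column for 0COSTCENTER
               amount     TYPE p LENGTH 15 DECIMALS 2," result column for the key figure
             END OF ty_data.

      DATA: lt_data  TYPE STANDARD TABLE OF ty_data,
            lt_sfc   TYPE rsdri_th_sfc,               " characteristics to return
            ls_sfc   TYPE rsdri_s_sfc,
            lt_sfk   TYPE rsdri_th_sfk,               " key figures to return
            ls_sfk   TYPE rsdri_s_sfk,
            lt_range TYPE rsdri_t_range,              " selection
            ls_range TYPE rsdri_s_range,
            lv_first TYPE c LENGTH 1 VALUE 'X',
            lv_end   TYPE c LENGTH 1.

      ls_sfc-chanm    = '0COSTCENTER'.
      ls_sfc-genuniid = '0COSTCENTER'.                " assumption: alias of the output column
      INSERT ls_sfc INTO TABLE lt_sfc.

      ls_sfk-kyfnm = '0AMOUNT'.                       " hypothetical key figure of the cube
      ls_sfk-aggr  = 'SUM'.
      INSERT ls_sfk INTO TABLE lt_sfk.

      ls_range-chanm  = '0COSTCENTER'.
      ls_range-sign   = 'I'.
      ls_range-compop = 'EQ'.
      ls_range-low    = '0000001000'.                 " cost center from the flat-file load
      INSERT ls_range INTO TABLE lt_range.

      CALL FUNCTION 'RSDRI_INFOPROV_READ'
        EXPORTING
          i_infoprov    = 'ZPLANCUBE'                 " hypothetical planning cube
          i_th_sfc      = lt_sfc
          i_th_sfk      = lt_sfk
          i_t_range     = lt_range
        IMPORTING
          e_t_data      = lt_data
          e_end_of_data = lv_end
        CHANGING
          c_first_call  = lv_first
        EXCEPTIONS
          OTHERS        = 1.

      IF sy-subrc = 0 AND lines( lt_data ) > 0.
        " the cost center already has plan data - reject its deletion from the file
      ENDIF.

    Remember that, as Matthias notes, the open (yellow) request has to be closed first for the latest plan data to be visible to this read.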

  • Newbie questions regarding pulling data from hyperion planning cube w/OBIEE

    Hello there,
    we've recently implemented Essbase and are currently pumping it full of revenue/expense data from our source systems to calculate NOI. This data is stored in a staging table at the detail level, from where it is sourced into Hyperion and aggregated. We also have OBIEE 10g (we plan to upgrade to 11g later this year) and we would like to connect to and report out of Essbase. Our ultimate goal is to be able to report on the NOI numbers in the planning cube but have the ability to drill down to the detail level, which is not stored in Essbase. We've heard it is possible, albeit not native, for OBIEE 10g to do this. We've also heard that it is not best practice to use our transactional cube for this type of reporting, but to create a second "reporting cube".
    What are best practices for getting this NOI data out of Hyperion and merging it with our relational detail reporting? Can we somehow export the data from the cube and store it in a relational database? Should we clone the cube (if even possible) and configure both it and our relational source in the BI repository and setup all drill-throughs there?
    Any info is GREATLY appreciated. Thank you.
    Edited by: cisGuy on Sep 20, 2010 5:31 PM

    I have found information on how to use ODI to extract data from the cube. What I'm really trying to find out, though, is best practices for reporting off summary-level data in Essbase with the ability to drill through to the detail.
    We've heard that reporting off the same cube that users are writing back to and transacting on is bad. Do we need to make a "reporting cube" and then bring that into OBIEE and merge it with a relational source, or is it better to extract the data from Essbase into flat files and join it to detail tables in our relational source?

  • Overwrite data in the Planning Cube

    Hi Gurus,
    I am new to BPS, I have created a simple Manual planning layout for my users
    where they could enter their plan data for either plan or forecast version.
    As you all know, the cube is always additive and cannot overwrite the key figures directly.
    I want to know if there is any built-in planning function which would do that.
    For example, right now the users first enter:
    Plant  Profit center  Version  QTY
    1000   2200           SP0      100
    If they think the number 100 is too much and want to readjust it to 80,
    they want to enter 1000 2200 SP0 80, but the cube is additive, so it makes the QTY 180 instead of 80.
    As of now they are entering 1000 2200 SP0 -20 to adjust it to 80, which they don't want to do.
    Hope I communicated well.
    Thanks
    Jay.

    The basic functionality of manual planning is to save in the cube the values which we enter in the layout. If we change 100 to 80 in the layout, after saving, the cube will have 80, not 180. Of course the cube is additive, in the sense that to turn the 100 into 80 the system generates a record of -20 in the cube so that the overall result is 80. The user doesn't need to enter -20 to have 80 in the cube. Recheck once again.
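    To spell that out with the numbers from the example (the record split shown is illustrative): after the first save, the cube holds one record 1000 / 2200 / SP0 / +100; after the user changes the layout value to 80 and saves again, the system itself adds a second record 1000 / 2200 / SP0 / -20. Any layout or query that sums the key figure then shows 80, without the user ever typing -20.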

  • Data requests in planning cube

    Hi Friends,
    I would like to find out how I can delete data that I loaded into a planning cube via planning activities, given that the cube only creates a new request after n thousand records have filled the current one.
    In short, how can I manage the changes I introduced - for example, delete the loaded data to undo them?
    regards
    Michael

    Hello Michael,
    There is one more easy option to delete plan data without having to close any open requests. BEWARE: deleting data using this method will be difficult to undo once saved.
    If you can identify the exact subset of data that you want to delete, you can create a planning level which has all the characteristics and key figures. Create a function of type delete.
    You can then set the selection criteria in the planning level and delete the data. In most cases, we have restricted the access to this functionality to one or two super users only.
    Hope this helps.
    Sunil

  • Max data pull from Virtual Cube - is this a setting?

    We have a user running a query against a remote cube in our BW system, and they're hitting a "maximum data" limit for data from this remote cube. Is this a setting for this cube or a global setting, and can you modify it?
    Thanks,
    Ken Little
    RJ Reynolds Tobacco

    Hi,
    MAXSIZE = maximum size of an individual data packet in KB.
    The individual records are sent in packages of varying sizes during the data transfer to the Business Information Warehouse. With this parameter you determine the maximum size of such a package and therefore how much of the main memory may be used for the creation of the data package. SAP recommends a data package size between 10 and 50 MB.
    https://www.sdn.sap.com/irj/sdn/directforumsearch?threadid=&q=cube+size&objid=c4&daterange=all&numresults=15
    MAXLINES = upper limit for the number of records per data packet.
    The default setting is MAXLINES = 100,000.
    The maximum main memory requirement per data packet is around: memory requirement = 2 * MAXLINES * 1,000 bytes, i.e. 200 MB with the default setting.
    The formula for calculating the number of records in a data packet is:
    packet size = MAXSIZE * 1000 / transfer structure size (ABAP length), but not more than MAXLINES.
    That is, if the result of the formula is greater than MAXLINES, only MAXLINES records are transferred per packet; the size of the data packet is the lower of MAXSIZE * 1000 / transfer structure size (ABAP length) and MAXLINES.
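    As a quick sketch of that calculation (the values below are made-up examples, not recommendations):

      " Illustrative only: effective packet size from MAXSIZE and MAXLINES.
      DATA: lv_maxsize_kb   TYPE i VALUE 20000,   " MAXSIZE in KB (ROIDOCPRMS)
            lv_maxlines     TYPE i VALUE 100000,  " MAXLINES
            lv_struct_bytes TYPE i VALUE 500,     " transfer structure size (ABAP length)
            lv_packet_recs  TYPE i.

      lv_packet_recs = lv_maxsize_kb * 1000 / lv_struct_bytes.   " = 40,000 records
      IF lv_packet_recs > lv_maxlines.
        lv_packet_recs = lv_maxlines.                            " capped at MAXLINES
      ENDIF.
      " With these values each data packet would contain 40,000 records (~20 MB).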
    Go to transaction RSCUSTV6 and set it there.
    In your InfoPackage, from the toolbar: Scheduler -> Data Packet Settings - here you can specify your data packet size.
    In R/3, go to transaction SBIW -> General Settings -> Maintain Control Parameters for Data Transfer. Here you can set the maximum values; the same can only be reduced in BW:
    InfoPackage -> Scheduler -> DataS. Default Data Transfer - here you can give the size; you can reduce the size maintained on the R/3 side, but you cannot increase it.
    In RSCUSTV6 you can set the package size. Press F1 on it for more information, and take a look at OSS Notes 409641 'Examples of packet size dependency on ROIDOCPRMS' and 417307 'Extractor Package Size Collective Note'.
    Also check SAP Note 919694.
    This applies irrespective of the source system, i.e. it is applicable for all DataSources:
    Go to SBIW -> General Settings -> Maintain Control Parameters for Data Transfer -> enter the entries in the table.
    If you want to change it at DataSource level:
    InfoSource -> InfoPackage -> Scheduler menu -> DataS. Default Data Transfer, and change the values.
    Before changing the values, keep in mind the SAP-recommended parameters.
    Hope this helps.
    Best Regards,
    VVenkat..

  • How to distinguish planning data and actual data in a Planning cube

    Hi,
    I want to display planning data and actual data in a query. How can I do it?
    My query is showing all the requests; it is not distinguishing between them.

    Use InfoObject 0VTYPE to differentiate actual and plan data,
    and 0VERSION to categorise which version of plan data it is; typically you might have different planning versions.
    Example
    0VTYPE = 10 actual
    0VTYPE = 20  plan
    0VERSION = 001
    0VERSION = 002
    0VERSION = 003
    The business decides which 0VERSION value is taken as the final planning version once frozen. 0VTYPE = 10 (actuals) is always a single version; it does not need 0VERSION to be taken into account.
    Hope it Helps
    Chetan
    @CP..
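    A small example of how this is typically modelled in the query (the version value 001 is just an example): define two restricted key figures on the same amount or quantity, one restricted to 0VTYPE = 10 ('Actual') and one restricted to 0VTYPE = 20 and 0VERSION = 001 ('Plan'), then add a formula 'Actual - Plan' for the variance. The individual requests then no longer matter for telling plan and actual apart.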

  • Unable to load data into Planning cube

    Hi,
    I am trying to load data into a planning cube using a DTP,
    but the system throws a message saying that real-time data loading cannot be performed on the planning cube.
    What would be the possible reason for the same?
    Thanks & Regards,
    Surjit P

    Hi Surjit,
    To load data into the cube using a DTP, it must be set to loading mode.
    The real-time cube behaviour is set to planning mode only when data is entered into the cube via the planning layouts or through a file upload program.
    You can change the behaviour of the cube in RSA1 -> right-click on the cube -> Change Real-Time Load Behaviour, and select the first option (real-time cube can be loaded with data; planning not allowed).
    Best Rgds
    Shyam
    Edited by: Syam K on Mar 14, 2008 7:57 AM
    Edited by: Syam K on Mar 14, 2008 7:59 AM

  • Automatically trigger the event to load data from Planning cube to Standard Cube

    Hello,
    We have a below set up in our system..
    1. A Planning BEx query using which user makes certain entries and writes back data to the planning cube.
    2. An actual reporting cube which gets data from the planning cube above.
    Now, what we want to do is to automate the data load from Planning cube to Reporting cube.
    This involves 2 things..
    1. Change the setting "Change real-time load behaviour" of the planning cube to Planning.
    2. Trigger the DTP which loads data from Planning cube to reporting cube.
    We want to automate the above two steps...
    I have tried few things to achieve the same..
    1. Created an event in SM64,
    2. In the Planning cube "Manage" Screen,  clicked on "Subsequent Processing" and provided the event details. ( Not sure if its the correct place to provide the event detail ).
    3. Wrote a ABAP program which changes the setting of the planning cube (  " Change real time load behaviour " to Loading )
    4. Created a process chain, where we have used the event as a start variant, used the ABAP program as a next step, DTP run as the last step..
    This, I hoped, would trigger the event as soon as a new request comes and sits in the planning cube, which in turn would trigger the process chain which would load the data from the planning cube to the reporting cube.
    This is not working. I don't think the event is triggering, and even if it does, I am not sure whether it will start the process chain automatically. Any ideas please?

    Hi,
    Try to do the transformation directly in the input cube by using a characteristic relationship (CR) of type exit; more details:
    http://help.sap.com/saphelp_nw70ehp2/helpdata/en/43/1c3d0f31b70701e10000000a422035/content.htm
    Hope it helps.
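    For the event-triggering part of the original approach, a minimal sketch (the event name is made up and has to exist in SM64): a small ABAP program, or the "Subsequent Processing" step, simply raises the background event that the process chain uses as its start condition. The switch of the real-time load behaviour can alternatively be done inside the chain itself with the standard "Switch Real-Time InfoCube to Load Mode" process type, so no custom program is needed for that step.

      REPORT z_trigger_plan_to_report.
      " Raise the background event that starts the process chain
      " (event 'ZPLAN_TO_REPORT' is a made-up example defined in SM64).
      CALL FUNCTION 'BP_EVENT_RAISE'
        EXPORTING
          eventid = 'ZPLAN_TO_REPORT'
        EXCEPTIONS
          OTHERS  = 1.
      IF sy-subrc <> 0.
        WRITE: / 'Event could not be raised, sy-subrc =', sy-subrc.
      ENDIF.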

  • IP exit planning function to copy data from CCA to PCA planning cubes

    Hello All,
    I have a requirement where I have to copy the characteristics and key figures of CCA plan cube data to PCA plan cube data. The InfoObjects in the CCA aggregation level are {0AMOUNT, 0COSTCENTER, 0COSTELEMENT, 0VERSION, 0CALMONTH, 0INFOPROVIDER}, which need to be copied to the corresponding InfoObjects in the PCA level {0AMOUNT, 0PROFITCENTER, 0ACCOUNT, 0VERSION, 0CALMONTH, 0INFOPROVIDER}.
    The CCA and PCA aggregation levels are built on top of the MultiProvider.
    I can do it using FOX coding, but 0COSTELEMENT cannot be mapped to 0ACCOUNT as these two are different fields. Since I have to copy the values of 0COSTELEMENT to 0ACCOUNT, I was wondering how I can do it using an exit function.
    As I have never used an exit function before, I was wondering if somebody can help me out with this.
    By the way, I have read the forums and figured out that I need to create a class in SE24 and use the interface IF_RSPLFA_SRVTYPE_IMP_EXEC, and since I am generating some records, I will be using the method IF_RSPLFA_SRVTYPE_IMP_EXEC~INIT_EXECUTE.
    I also read in the forums that there are methods / function modules which can copy data from one aggregation level to another. Anyway, can you tell me how I can loop through the records of the CCA aggregation level and copy them to the PCA aggregation level?
    Edited by: nazeer on Feb 22, 2009 12:04 PM

    This thread might help you.
    https://forums.sdn.sap.com/click.jspa?searchID=22634973&messageID=5317176
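    Not the full exit class, but a sketch of the remapping loop itself (the structures and field names below are invented for illustration; inside the exit method you would work with the data table and field list handed over by the planning framework instead):

      " Illustrative only: copy CCA-style records into PCA-style records,
      " moving the 0COSTELEMENT value into 0ACCOUNT.
      TYPES: BEGIN OF ty_cca,
               costcenter  TYPE c LENGTH 10,
               costelement TYPE c LENGTH 10,
               version     TYPE c LENGTH 3,
               calmonth    TYPE n LENGTH 6,
               amount      TYPE p LENGTH 15 DECIMALS 2,
             END OF ty_cca,
             BEGIN OF ty_pca,
               profitcenter TYPE c LENGTH 10,
               account      TYPE c LENGTH 10,
               version      TYPE c LENGTH 3,
               calmonth     TYPE n LENGTH 6,
               amount       TYPE p LENGTH 15 DECIMALS 2,
             END OF ty_pca.

      DATA: lt_cca TYPE STANDARD TABLE OF ty_cca,
            lt_pca TYPE STANDARD TABLE OF ty_pca,
            ls_pca TYPE ty_pca.

      FIELD-SYMBOLS <ls_cca> TYPE ty_cca.

      LOOP AT lt_cca ASSIGNING <ls_cca>.
        MOVE-CORRESPONDING <ls_cca> TO ls_pca.        " version, calmonth, amount
        ls_pca-account      = <ls_cca>-costelement.   " 0COSTELEMENT -> 0ACCOUNT
        ls_pca-profitcenter = <ls_cca>-costcenter.    " or derive via a mapping table
        APPEND ls_pca TO lt_pca.
      ENDLOOP.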

  • Compression in a Planning cube

    I was trying to compress 2009-2010 data in a planning cube; when I collapsed the request, the job got scheduled but the compression didn't happen.
    When I look at the log of the batch job it says:
    No requests needing to be aggregated have been found in InfoCube DE_RBQTPL
    Compression not necessary; no requests found for compressing
    Compression not necessary; no requests found for compressing
    Job finished
    Can you please help me in resolving this issue.
    Thanks,
    Sravani

    Please check that all the requests are loaded properly
    into the InfoCube.
    Is there any yellow (open) request in the cube?
    If so, wait for some time and then do the compression.
    Thanks,
    Saveen

  • How to give planners read only access to Planning cube for ad-hoc analysis?

    Hi Everyone,
    I am trying to issue Smart View reports to the planners. Most of the reports come off the Essbase (reporting) application. However, we have to generate one report from the Planning server because of the text fields. Currently, on the Planning server, planners have access only to the forms. They don't have access to the planning cube. But to enable them to refresh this report, we need to give them access to the planning cube.
    But we don't want them to do ad-hoc data writes on the planning cube. So my question is: is there a way for planners to have write access on the web forms but only read access to the cube through Smart View?

    The access permissions will match between planning and smart view if a planning connection is used.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Was told not to report off transaction planning cubes

    Hello there,
    we recently implemented Hyperion Planning and have 3 Essbase cubes: Planning, Capex, and workforce. We want to hook OBIEE up to the cubes for reporting. We have been told it is not best practice to report off the same cubes we're transacting in. How should we go about building reporting cubes? Is it as simple as "backing up" or copying our transactional cubes?

    It is always better to restrict the Planning application to the planning operations and to provide the required reporting applications for reporting purposes.
    This will avoid unnecessary conflict between the planning and reporting cycles.
    In Planning 11.1.1.2 there is a built-in feature for mapping the planning cubes to a reporting application (ASO applications).
    Under the covers it works like this:
    It extracts the required set of data from the planning cube by generating and executing a calc script into a text file.
    After that it formats the data file to map to the reporting cube's dimensionality and loads it into the reporting cube.
    Instead of using this built-in feature, we can provide a better automated custom interface for our application with three regularly used components:
    1. a data export calc script
    2. a rule file to map the exported file to the reporting cube
    3. a MaxL file.
    If you are using the same dimension structure for the reporting cube as for the planning cube (except the HSP_Rates and Currency dimensions),
    you can try this as well:
    1. create a BSO-ASO replicated partition (supported in Essbase 11)
    2. replicate data
    3. drop the replicated data

  • 0VTYPE = 20 (plan data) not populating to cube but present in PSA

    Hello Experts,
    I have a requirement in BW 3.5 where I need to create a report on a MultiProvider which should give the actual data, the plan data, and the difference between actual and plan for some key figures.
    The data flow is like this:
    There are two cubes, one for actuals and the other for plan. I am able to get the actual data (0VTYPE = 10) into the actual cube from R/3.
    But the problem is with the plan cube: I am not able to get the plan data (0VTYPE = 20) into the plan cube (into the field 0VTYPE), although the data is coming into the PSA from R/3. There is no routine present in the update rule either, but still the data is not showing up for 0VTYPE in the plan cube.
    I have tried loading selectively (020) by giving 020 as a constant in the update rule, but still no luck.
    The DataSource that is fetching the plan data is 1_CO_PA_PLN1000.
    Could you please suggest how to proceed further?
    Thanks,

    Hi Sashi,
    Check the mapping between 0VTYPE and the source fields.
    Check in the transfer rules first whether it is mapped or not.
    If the transfer rules are not mapped to the source fields, correct them.
    Hope this helps.
    Regards,
    Reddy

  • After 10g to 11g upgrade, 'Planning Data Pull Worker' performance is very slow

    Hi all,
    We upgraded the database from 10g to 11gR2. We have 3 production instances that work together: EBiz, Planning, and Configurator. Some of you might be familiar with this kind of setup.
    We are running the 'Planning Data Pull Worker' in the APS (planning) instance. This program SELECTs (and gets data) from the EBiz instance and INSERTs into the planning instance.
    But after the upgrade, this works fine in one instance, PERF6 (about half an hour), and takes very long, 6+ hours, in another instance, ARCH6.
    We figured out where the problem is - a select statement:
    "select count(1) from apps.MRP_AP3_SALES_ORDERS_V ;"
    Even though this is not exactly the select being used, we simplified it by just trying "count(1)" on the main view, which was the culprit.
    I am pasting the trace of both instances: where it works fine (PERF6) and where it is taking too long (ARCH6).
    Please have a look and help. We opened a Sev-1 TAR as well; the problem still persists.
    Thanks,
    Gopal

    =======================================
    PERF6 instance trace (where it works fine)
    =======================================
    SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_AWR(('87dtkqstb5adq'),'961766442',null,'ADVANCED'));
    SQL_ID 87dtkqstb5adq
    select count(1) from apps.MRP_AP3_SALES_ORDERS_V
    Plan hash value: 961766442
    | Id | Operation                    | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |  0 | SELECT STATEMENT             |                            |       |       | 29253 (100)|          |
    |  1 | SORT AGGREGATE               |                            |     1 |   189 |            |          |
    |  2 | NESTED LOOPS                 |                            |     1 |   189 | 29253  (1) | 00:05:52 |
    |  3 | HASH JOIN                    |                            |     1 |   185 | 29253  (1) | 00:05:52 |
    |  4 | NESTED LOOPS OUTER           |                            |     1 |   176 | 29246  (1) | 00:05:51 |
    |  5 | NESTED LOOPS OUTER           |                            |     1 |   170 | 29245  (1) | 00:05:51 |
    |  6 | NESTED LOOPS                 |                            |     1 |   161 | 29245  (1) | 00:05:51 |
    |  7 | HASH JOIN                    |                            |  6090 |  654K | 19090  (1) | 00:03:50 |
    |  8 | TABLE ACCESS BY INDEX ROWID  | FND_LANGUAGES              |    20 |   100 |     2  (0) | 00:00:01 |
    |  9 | INDEX RANGE SCAN             | FND_LANGUAGES_N1           |    20 |       |     1  (0) | 00:00:01 |
    | 10 | HASH JOIN                    |                            |  6246 |  640K | 19088  (1) | 00:03:50 |
    | 11 | TABLE ACCESS FULL            | OE_TRANSACTION_TYPES_TL    |  1297 | 37613 |    10  (0) | 00:00:01 |
    | 12 | NESTED LOOPS                 |                            |       |       |            |          |
    | 13 | NESTED LOOPS                 |                            |  4103 |  304K | 19077  (1) | 00:03:49 |
    | 14 | NESTED LOOPS                 |                            |  4103 |  220K | 14517  (1) | 00:02:55 |
    | 15 | HASH JOIN                    |                            |  4100 |  104K |  9959  (1) | 00:02:00 |
    | 16 | TABLE ACCESS FULL            | MRP_DERIVED_SO_DEMANDS     |  4100 | 65600 |    17  (0) | 00:00:01 |
    | 17 | INDEX FAST FULL SCAN         | MTL_SYS_ITEMS_SN_N2        | 1559K |   14M |  1184  (1) | 00:00:15 |
    | 18 | TABLE ACCESS BY INDEX ROWID  | OE_ORDER_LINES_ALL         |     1 |    29 |     2  (0) | 00:00:01 |
    | 19 | INDEX UNIQUE SCAN            | OE_ORDER_LINES_U1          |     1 |       |     1  (0) | 00:00:01 |
    | 20 | INDEX UNIQUE SCAN            | OE_ORDER_HEADERS_U1        |     1 |       |     1  (0) | 00:00:01 |
    | 21 | TABLE ACCESS BY INDEX ROWID  | OE_ORDER_HEADERS_ALL       |     1 |    21 |     2  (0) | 00:00:01 |
    | 22 | TABLE ACCESS BY INDEX ROWID  | MTL_SALES_ORDERS           |     1 |    51 |     3  (0) | 00:00:01 |
    | 23 | INDEX RANGE SCAN             | MTL_SALES_ORDERS_N1        |     1 |       |     2  (0) | 00:00:01 |
    | 24 | INDEX UNIQUE SCAN            | PJM_PROJECT_PARAMETERS_U1  |     1 |     9 |     0  (0) |          |
    | 25 | INDEX RANGE SCAN             | OE_ODR_LINES_SN_N2         |     1 |     6 |     2  (0) | 00:00:01 |
    | 26 | TABLE ACCESS FULL            | AR_SYSTEM_PARAMETERS_ALL   |    43 |   387 |     6  (0) | 00:00:01 |
    | 27 | INDEX UNIQUE SCAN            | GL_SETS_OF_BOOKS_U2        |     1 |     4 |     0  (0) |          |
    Query Block Name / Object Alias (identified by operation id):
    1 - SEL$15993197
    8 - SEL$15993197 / FNL@SEL$2
    9 - SEL$15993197 / FNL@SEL$2
    11 - SEL$15993197 / OTTL@SEL$2
    16 - SEL$15993197 / MDSD@SEL$2
    17 - SEL$15993197 / MSIK@SEL$2
    18 - SEL$15993197 / OOL@SEL$2
    19 - SEL$15993197 / OOL@SEL$2
    20 - SEL$15993197 / OOHA@SEL$2
    21 - SEL$15993197 / OOHA@SEL$2
    22 - SEL$15993197 / MSO@SEL$2
    23 - SEL$15993197 / MSO@SEL$2
    24 - SEL$15993197 / MPP@SEL$2
    25 - SEL$15993197 / OOL1@SEL$2
    26 - SEL$15993197 / ASPA@SEL$2
    27 - SEL$15993197 / GSB@SEL$2
    Outline Data
    /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
      DB_VERSION('11.2.0.2')
      OPT_PARAM('optimizer_dynamic_sampling' 6)
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$15993197")
      ELIMINATE_JOIN(@"SEL$5C160134" "MTL_SALES_ORDERS"@"SEL$3")
      OUTLINE(@"SEL$5C160134")
      MERGE(@"SEL$335DD26A")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$335DD26A")
      MERGE(@"SEL$3")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$3")
      INDEX_FFS(@"SEL$15993197" "MSIK"@"SEL$2" ("MRP_SN_SYS_ITEMS"."INVENTORY_ITEM_ID" "MRP_SN_SYS_ITEMS"."ORGANIZATION_ID"))
      FULL(@"SEL$15993197" "MDSD"@"SEL$2")
      INDEX_RS_ASC(@"SEL$15993197" "OOL"@"SEL$2" ("OE_ORDER_LINES_ALL"."LINE_ID"))
      INDEX(@"SEL$15993197" "OOHA"@"SEL$2" ("OE_ORDER_HEADERS_ALL"."HEADER_ID"))
      FULL(@"SEL$15993197" "OTTL"@"SEL$2")
      INDEX_RS_ASC(@"SEL$15993197" "FNL"@"SEL$2" ("FND_LANGUAGES"."INSTALLED_FLAG"))
      INDEX_RS_ASC(@"SEL$15993197" "MSO"@"SEL$2" ("MTL_SALES_ORDERS"."SEGMENT1"))
      INDEX(@"SEL$15993197" "MPP"@"SEL$2" ("PJM_PROJECT_PARAMETERS"."PROJECT_ID" "PJM_PROJECT_PARAMETERS"."ORGANIZATION_ID"))
      INDEX(@"SEL$15993197" "OOL1"@"SEL$2" ("MRP_SN_ODR_LINES"."LINE_ID"))
      FULL(@"SEL$15993197" "ASPA"@"SEL$2")
      INDEX(@"SEL$15993197" "GSB"@"SEL$2" ("GL_SETS_OF_BOOKS"."SET_OF_BOOKS_ID"))
      LEADING(@"SEL$15993197" "MSIK"@"SEL$2" "MDSD"@"SEL$2" "OOL"@"SEL$2" "OOHA"@"SEL$2" "OTTL"@"SEL$2" "FNL"@"SEL$2" "MSO"@"SEL$2" "MPP"@"SEL$2" "OOL1"@"SEL$2" "ASPA"@"SEL$2" "GSB"@"SEL$2")
      USE_HASH(@"SEL$15993197" "MDSD"@"SEL$2")
      USE_NL(@"SEL$15993197" "OOL"@"SEL$2")
      USE_NL(@"SEL$15993197" "OOHA"@"SEL$2")
      NLJ_BATCHING(@"SEL$15993197" "OOHA"@"SEL$2")
      USE_HASH(@"SEL$15993197" "OTTL"@"SEL$2")
      USE_HASH(@"SEL$15993197" "FNL"@"SEL$2")
      USE_NL(@"SEL$15993197" "MSO"@"SEL$2")
      USE_NL(@"SEL$15993197" "MPP"@"SEL$2")
      USE_NL(@"SEL$15993197" "OOL1"@"SEL$2")
      USE_HASH(@"SEL$15993197" "ASPA"@"SEL$2")
      USE_NL(@"SEL$15993197" "GSB"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$15993197" "MDSD"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$15993197" "OTTL"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$15993197" "FNL"@"SEL$2")
      END_OUTLINE_DATA
    */
    110 rows selected.
    SQL> spool off
