Loading All Levels of BOM data to BW

Hi Experts,
I am loading all BOM data, that is, all levels of BOM data, to BW. Can anyone suggest the steps? To create a generic extractor, should I use a function module or a view on the tables (MAST, STKO, STPO)? Please give me the detailed steps for loading the data to BW from R/3, because I want to do reporting at any level.
vikrant

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b0af489e-72b1-2b10-159d-abb8058fb88d?quicklink=index&overridelayout=true
Just check if this helps
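One thing to keep in mind: a view on MAST/STKO/STPO can only give you the single-level parent-component links, because a database view cannot recurse. For reporting at any level you would typically build the generic extractor on a function module that explodes the BOM (for example via the standard explosion FM CS_BOM_EXPL_MAT_V2) or walks the single-level links repeatedly. As a rough sketch, the single-level join such a view would be based on looks like this (rendered as SQL for readability; the real object would be an SE11 database view, and strictly the item-to-alternative assignment also goes through STAS):

-- Sketch: one row per parent material / BOM component (single level only).
-- MAST = material-to-BOM link, STKO = BOM header, STPO = BOM item.
SELECT m.MATNR AS parent_material,
       m.WERKS AS plant,
       m.STLAN AS bom_usage,
       m.STLNR AS bom_number,
       m.STLAL AS alternative,
       p.POSNR AS item_no,
       p.IDNRK AS component,
       p.MENGE AS component_qty,
       p.MEINS AS component_uom
FROM MAST m
     INNER JOIN STKO k ON k.STLTY = 'M'      -- material BOMs
                      AND k.STLNR = m.STLNR
                      AND k.STLAL = m.STLAL
     INNER JOIN STPO p ON p.STLTY = k.STLTY
                      AND p.STLNR = k.STLNR

Loading this single-level link and modelling parent/component as a hierarchy in BW (or letting the FM extractor deliver one record per exploded level) is what makes reporting at any level possible.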
Prathish

Similar Messages

  • Query to show all levels of BOM details, sorted by create/update date

    Hi expert,
    I need a query that displays all levels of the BOM and can be sorted by create/update date.
    The query output should be as below:
    A (1st level) Parent BOM
    B (2nd level) Child BOM 1
    B (2nd level) Child BOM 2
    C (3rd level)  Child BOM 2 - 1
    C (3rd level)  Child BOM 2 - 2
    B (2nd level) Child BOM 3
    B (2nd level) Child BOM 4
    Below is the BOM query I have; it can only display up to the 2nd level and cannot sort by date.
    Can someone please help me modify it, or is there a better query?
    Thanks
    DECLARE @BOMDetails TABLE (
        TreeType NVARCHAR(MAX), PItem NVARCHAR(MAX), PName NVARCHAR(MAX),
        CItem NVARCHAR(MAX), CName NVARCHAR(MAX), Quantity NUMERIC(18,2),
        UoM NVARCHAR(MAX), WareHouse NVARCHAR(MAX), IssuMethod NVARCHAR(MAX),
        PriceList NVARCHAR(MAX))
    INSERT INTO @BOMDetails
    SELECT T1.TreeType, T0.Father AS [Parent Code], T2.ItemName AS [Parent Description],
           T0.Code AS [Child Code], T1.ItemName AS [Child Description],
           T0.Quantity, T0.Uom, T0.Warehouse, T0.IssueMthd, T0.PriceList
    FROM ITT1 T0
         INNER JOIN OITM T1 ON T0.Code = T1.ItemCode
         INNER JOIN OITM T2 ON T0.Father = T2.ItemCode
    UNION ALL
    SELECT ' ', T0.Father, T2.ItemName, ' ', ' ', 0, ' ', ' ', ' ', 0
    FROM ITT1 T0
         INNER JOIN OITM T1 ON T0.Code = T1.ItemCode
         INNER JOIN OITM T2 ON T0.Father = T2.ItemCode
    GROUP BY T0.Father, T2.ItemName
    ORDER BY [Parent Code], [Child Code]   -- a UNION must be ordered by output columns
    UPDATE @BOMDetails SET PItem = '', PName = '' WHERE TreeType = 'N' OR TreeType = 'P'
    SELECT PItem AS [Parent Code], PName AS [Parent Description], CItem AS [Child Code],
           CName AS [Child Description], Quantity, UoM, IssuMethod, PriceList
    FROM @BOMDetails

    Hi,
    Try this query and modify as per your requirement:
    SELECT T0.[Father] AS 'Assembly',
           T0.[Code] AS 'Component1', T10.[ItemName] AS 'Description1',
           T1.[Code] AS 'Component2', T11.[ItemName] AS 'Description2',
           T2.[Code] AS 'Component3', T12.[ItemName] AS 'Description3',
           T3.[Code] AS 'Component4', T13.[ItemName] AS 'Description4',
           T4.[Code] AS 'Component5', T14.[ItemName] AS 'Description5',
           T5.[Code] AS 'Component6', T15.[ItemName] AS 'Description6'
    FROM ITT1 T0
         LEFT OUTER JOIN ITT1 T1 ON T0.Code = T1.Father
         LEFT OUTER JOIN ITT1 T2 ON T1.Code = T2.Father
         LEFT OUTER JOIN ITT1 T3 ON T2.Code = T3.Father
         LEFT OUTER JOIN ITT1 T4 ON T3.Code = T4.Father
         LEFT OUTER JOIN ITT1 T5 ON T4.Code = T5.Father
         LEFT OUTER JOIN ITT1 T6 ON T5.Code = T6.Father
         LEFT OUTER JOIN OITM T20 ON T0.Father = T20.ItemCode
         LEFT OUTER JOIN OITM T10 ON T0.Code = T10.ItemCode
         LEFT OUTER JOIN OITM T11 ON T1.Code = T11.ItemCode
         LEFT OUTER JOIN OITM T12 ON T2.Code = T12.ItemCode
         LEFT OUTER JOIN OITM T13 ON T3.Code = T13.ItemCode
         LEFT OUTER JOIN OITM T14 ON T4.Code = T14.ItemCode
         LEFT OUTER JOIN OITM T15 ON T5.Code = T15.ItemCode
    Thanks & Regards,
    Nagarajan
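    Both queries above are hard-wired to a fixed number of levels. On SQL Server a recursive common table expression handles any depth and lets you sort by the item's create/update date. This is only a sketch, assuming the standard B1 tables ITT1 (BOM lines: Father, Code, Quantity) and OITM (items), and that your OITM version carries the CreateDate/UpdateDate audit columns - verify the names on your system:
    -- Walk the BOM tree to any depth, starting from top-level parents
    -- (items that appear as a Father but never as a Code in ITT1).
    ;WITH BomTree AS (
        SELECT T0.Father AS RootItem, T0.Father AS ParentItem, T0.Code AS ChildItem,
               T0.Quantity, 1 AS BomLevel
        FROM ITT1 T0
        WHERE NOT EXISTS (SELECT 1 FROM ITT1 X WHERE X.Code = T0.Father)
        UNION ALL
        SELECT B.RootItem, T0.Father, T0.Code, T0.Quantity, B.BomLevel + 1
        FROM ITT1 T0
             INNER JOIN BomTree B ON T0.Father = B.ChildItem
    )
    SELECT B.RootItem, B.BomLevel, B.ParentItem, P.ItemName AS ParentName,
           B.ChildItem, C.ItemName AS ChildName, B.Quantity,
           C.CreateDate, C.UpdateDate         -- assumed audit columns
    FROM BomTree B
         INNER JOIN OITM P ON B.ParentItem = P.ItemCode
         INNER JOIN OITM C ON B.ChildItem = C.ItemCode
    ORDER BY B.RootItem, B.BomLevel, C.CreateDate
    OPTION (MAXRECURSION 32)                  -- stop runaway recursion on cyclic BOMs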

  • Data loaded at all levels of a hierarchy and need to aggregate

    I am relatively new to Essbase and I am having problems with the aggregation of the cube.
    Cube outline
    Compute_Date (dense)
         20101010 ~
         20101011 ~
         20101012 ~
    Scenario (dense)
    S1 ~
    S2 ~
    S3 ~
    S4 ~
    Portfolio (sparse)
    F1 +
    F11 +
    F111 +
    F112 +
    F113 +
    F12 +
    F121 +
    F122 +
    F13 +
    F131 +
    F132 +
    Instrument (sparse)
    I1 +
    I2 +
    I3 +
    I4 +
    I5 +
    Accounts (dense)
    AGGPNL ~
    PNL ~
    Portfolio is a ragged hierarchy.
    Scenario is a flat hierarchy.
    Instrument is a flat hierarchy.
    PNL values are loaded for instruments at different points in the portfolio hierarchy.
    I then want to aggregate the PNL values up the portfolio hierarchy into AGGPNL while the loaded PNL values remain unchanged; this is not working.
    I have tried defining the following formula on AGGPNL, but it is not working:
    IF (@ISLEV("portfolio", 0))
        "pnl";
    ELSE
        "pnl" + @SUMRANGE("pnl", @RELATIVE(@CURRMBR("portfolio"), @CURGEN("portfolio") + 1));
    ENDIF;
    I am calculating with the following calc script:
    AGG (instrument);
    AGGPNL;
    Having searched for a solution, I have seen that Essbase does implicit sharing when a parent has a single child. I can disable this, but I don't think it is the sole cause of my issues.

    The children of F11 are aggregated, but the value that was already loaded at F11 is overwritten; the value in F11 is ignored in the aggregation.
    ^^^ That's the way Essbase works.
    How about something like this:
    F1 +
    ===F11 +
    ======F111 +
    ======F112 +
    ======F113 +
    ======F11A +
    ===F12 +
    ======F121 +
    ======F122 +
    ===F13 +
    ======F131 +
    ======F132 +
    Value it like this:
    F111 = 1
    F112 = 2
    F113 = 3
    F11A = 4
    Then F11 = 1 + 2 + 3 + 4 = 10.
    Loading at upper levels is something I try to avoid whenever possible. The technique used above is incredibly common, practically universal, as it allows the group-level value to be loaded along with the detail and to aggregate up correctly. Yes, you can load to upper-level members, but you have hit upon why it isn't done all that often.
    NB -- What you are doing is only possible in BSO cubes. ASO cube data must be at level 0.
    Regards,
    Cameron Lackpour

  • PR release date capturing for all levels

    Hi,
    I have activated the PR release strategy at header level. There are about 4 levels of release. In a custom PR print program, I want to capture the release dates corresponding to all levels. Can you help me with how to go about it?
    Munna.

    Hi
    PR release date is not relevant to Release strategy.
    Purchase Requisition Release Date
    Specifies the date on which the purchase order should be initiated on the basis of the purchase requisition.
    The release date is based on:
    The purchasing department processing time defined for the plant
    The planned delivery time from the material master record or purchasing info record
    The delivery date
    The goods receipt processing time from the material master record
    Note
    The planned delivery time from the purchasing info record and the GR processing time are only taken into account if the purchase requisition was generated via materials planning.
    Example
    Timeline: release date -> (purchasing dept processing time) -> PO date -> (planned delivery time) -> delivery date -> (GR processing time) -> date required
    Date required:                     10.01.96
    GR processing time:                 2 days (working days)
    Planned delivery time:             10 days (calendar days)
    Purchasing dept processing time:    2 days (working days)
    For the material to be available on the date it is needed, the purchase requisition must be released on 09.11.96 (requirement date less GR processing time, planned delivery time, and purchasing department processing time).
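    As a back-of-envelope illustration only (treating every lead time as calendar days; the real scheduling uses the factory calendar for the working-day components), the subtraction chain can be sketched in SQL:
    -- Illustrative sketch; variable names are made up and working-day
    -- arithmetic is deliberately simplified to calendar days.
    DECLARE @DateRequired date = '1996-01-10';
    DECLARE @GRProcessingDays int = 2;       -- working days in the real calculation
    DECLARE @PlannedDeliveryDays int = 10;   -- calendar days
    DECLARE @PurchasingDays int = 2;         -- working days in the real calculation
    SELECT DATEADD(day, -(@GRProcessingDays + @PlannedDeliveryDays + @PurchasingDays),
                   @DateRequired) AS ApproxReleaseDate;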
    Hope it helps
    Thanks/karthik

  • How to download all levels of Candy Crush Soda when I only get 30 levels

    How do I download all levels in Candy Crush Soda Saga? I only have 30. I have tried deleting and downloading again and still only get 30.

  • Can we load data for all levels in ASO?

    Hi All,
    I'm creating a cube in ASO.
    Can I load data for all levels in ASO?
    We can load data for all levels in BSO, but for ASO I need confirmation.
    And one more thing: what is the "consider all levels" option in ASO used for? What is its purpose?
    Can anyone help? It would be appreciated.
    Thanks

    In an ASO cube you can only load to level zero.
    The "consider all levels" option is used for aggregation hints. It allows you to tell the aggregation optimizer to look at all levels when deciding whether aggregation needs to be done on the dimension.

  • Can we load data for all levels in ASO cube

    Hi All,
    Can we load data for all levels of members in an ASO cube in 9.3.1?
    Regards

    Yes, you can load data for all levels in an ASO cube in any version. HOWEVER, none of the upper-level data in the cube will be there when you look for it. You will get a warning message in the load because ASO cubes don't store data at upper levels. It is the same as loading data into dynamic calc members in a BSO cube: it will do the load without complaints, but there will be no data there (at least you get the warning in ASO).

  • Problem in the load, all data skipped

    Hello, I have a problem when I load data. The message (WARNING) is: "Aggregate storage applications ignore update to derived cells. [209774] cells skipped". All my data is skipped, and this is the only message I have; I do not have any errors in my *.err file. Can this mystery be explained?

    Are you sure you're not trying to load to a non-level-0 member in one of your dimensions? Unlike a block storage database, all ASO data must be loaded at level 0.
    Mark Rixon
    www.analitica.co.uk

  • Data at all level do not match after applying security

    Hi ,
    We are implementing the security and observed following.
    1. Data is loaded in the cube correctly, and the report shows the data correctly at all levels.
    2. Now we apply the security, which restricts users to seeing only the members allowed by the role they are mapped to.
    3. When we create a report, the values at the second and all lower levels are correct, but the value at the all level still shows the same as in step 1. This means the value is not being dynamically aggregated when the report is created.
    We also checked that the values are not precomputed at the all level for any dimension.
    Any pointers to resolve this?
    Thanks in advance.
    Thanks
    Brijesh

    John, a sure-shot way to simulate the relational aggregation for various users (who have VPD applied on the fact information) is to create the AW for each user. That way the scoping of dimension hierarchies and/or facts occurs on the relational source views, and each user only sees the summation of the visible values in the AW. You can use (local) views in each user's schema on the appropriate AW and make the application access seamless across all the user schemas. Such a solution may be a bit redundant (leaf data present in multiple AWs, increasing load time) but should work in all environments since it does not involve any tweaking of internal objects via custom OLAP DML.
    +++++++++++++
    Regarding implementing the approach of having a single AW servicing multiple users while also allowing individual users to see only their data (for both base and summary data): this can be done in 10gR2. We have used this approach with a 10gR2 AW based on suggestions from people who were in the know :)
    Please note the disclaimers/caveats..
    * Works for 10gR2 with no foreseeable way to port this onto 11g when olap dml object manipulation is prevented by the lock definition (lockdfn) keyword.
    * Custom code needs to be run at startup, preferably in the PERMIT_READ program alone, since this way any changes made by any of the restricted user(s) do not get saved/committed. This is the manner in which a SQL query on an AW using VPD (say) would work.
    * OLAP DML Code is very dependent on the nature of the cube structure and stored level setting.
    * This approach provides for a neat and nifty solution in the context of a PoC/demo but the solution performs a (possibly exhaustive) cleanup of stored information during the startup program for each user session. And since this happens in the context of a read only session, this would happen every time for all user sessions. So be sure to scope out the extent of cleanup required at startup if you want to make this a comprehensive solution.
    *********************** Program pseudo code begin ***********************
    " Find out current user details (username, group etc.) using sysinfo
    limit all dimensions to members at stored levels
    limit dim1 to members that user does *not* have access to.
    NOTE: This can lead to perf issues if the PROD dimension has thousands or millions of products and the current user has access to only a few of them (2-3, say). We would have to reset the stored information for the majority of products, undoing the effects of the data load (and stored summary information) dynamically at runtime while the users request a report/query.
    limit dim1 add descendants using dim1_parentrel
    limit dim1 add ancestors using dim1_parentrel
    limit dim1 keep members at stored levels alone... use dim1_levelrel appropriately.
    same for dim2 if reqd.
    "If we want to see runtime summation for stores in North America (only visible info) but see the true or actual data for Europe (say).. then we need to clean up the stored information for stores in North America that the current user does not have access to.
    Scenario I: If Cube is uncompressed with a global composite.. only 1 composite for cube
    set cube1_meas1_stored = na across cube1_composite
    set cube1_meas2_stored = na across cube1_composite
    Scenario II: If Cube is uncompressed with multiple composites per measure.. 1 composite per cube measure
    set cube1_meas1_stored = na across cube1_meas1_composite
    set cube1_meas2_stored = na across cube1_meas2_composite
    Scenario III: If Cube is compressed but unpartitioned..
    set cube1_meas1_stored = na ... Note: This can set more cells as null than required. Each cell in status (including cells which were combinations without data and did not physically exist in any composite) get created as na. No harm done but more work than required. The composite object may get bloated as a result.
    Scenario IV: If Cube is compressed and partitioned..
    Find all partitions of the cube
    For each partition
    Find the <composite object corresponding to the partition> for the cube
    set cube1_meas1_stored = na across <composite object corresponding to the partition>
    "Regular Permit Read code
    cns product
    permit read when product.boolean
    *********************** Program pseudo code end ***********************
    The cube in our AW was uncompressed/unpartitioned (Scenario I).
    It is more complicated if you have multiple stored levels along a dimension (possible for uncompressed cubes) where you apply security at an intermediate level. Ideally, you'll need to reset the values at load/leaf level, overwrite or recalculate the values for members at all higher stored levels based on this change, and then exit the program.
    HTH
    Shankar

  • JPA - How to load all data in memory

    Is it possible to load all data into memory using JPA, perform many operations such as create, delete, and update, and finally commit when I want?
    I am trying to do something like WebLogic does: the user locks the console, makes many changes (creates services, deletes accounts, updates customers, etc.) and at the end, when he presses a button, all changes are committed.

    Yes. Of course. There are tradeoffs. First, if you loaded all data into memory, you likely have a small database or a huge amount of RAM. :^)
    But I digress. What you are talking about is either conversational or transactional state. The former would be implemented at your view-controller level (e.g., hold onto the results of the user's selections until the work is done, and then submit as a batch). Sometimes this is simply session state, but generally you are handling this at the web or controller tier. It is solely a decision to enhance user experience.
    Another possibility is to hold onto the transaction for the whole time that the user is modifying data. This is in some ways "easier" than the first method. However, it likely will not scale beyond a non-trivial amount of users. Generally, you want to hold onto a transaction for the shortest possible time.
    So, my recommendation would be to load your objects into memory using JPA. Keep those in session state (or write them to a 'working copy' database table or the filesystem to save memory). Then submit all the requests in one go back to JPA (in one transaction).
    - Saish

  • HRI Load All Competence Level Hierarchy Concurrent programm

    HRMS application
    11.5.10.2
    windows 2003
    db 10.2.0.4
    How do I do the following?
    Run the HRI Load All Competence Level Hierarchy concurrent program with the following parameters:
    o Collect From Date: earliest reporting date required
    o Collect To Date: current date (default)
    o Full Refresh: Yes (default)
    You should incorporate the above process into your Request Group.
    I can see the program under Concurrent > Program > Define, but not in the Submit Request drop-down.
    Immediate Help is appreciated
    Thanks.

    I am able to do this now.
    I was registering under the wrong security group.
    Thanks.

  • Why should we load header data first and then we load item level data?

    Hi BW guru`s,
    I have small confusion about the data loading.
    Why should we load header data first and then we load item level data?
    Is there any particular reason?
    Scenario: First I uploaded 2LIS_11_VAHDR sales document header data from R/3 to BW using LO-Cockpit extraction. Then I loaded 2LIS_11_VAITM. This is the normal procedure we follow.
    I have a question: what will happen if I load the 2LIS_11_VAITM data first from R/3 to BW and then load 2LIS_11_VAHDR using LO-Cockpit extraction?
    Regards,
    Venkat
    Edited by: VENKAT BOORUGADDA on Aug 12, 2008 11:51 AM

    There is no difference in doing it the other way.
    The load sequence comes into play only during activation: if you map the same fields from the two DataSources, you might want the previous value to be overwritten by data from the next DataSource.
    That is when you should care about loading one DataSource before the other.
    To your question: it is not a rule that header data should come first.

  • TS3212 iTunes worked fine on my Windows 7 machine. Had to rebuild the machine but all the data remains. Downloaded a new version of iTunes and cannot figure out how to load all of my music currently on my hard drive. Tried moving Library file. Did not work.

    iTunes worked fine on my Windows 7 machine. I had to rebuild the machine, but all the data remains. I downloaded a new version of iTunes and cannot figure out how to load all of my music currently on my hard drive. I tried moving the "iTunes Library" file; that did not work. I have never had trouble like this before.

    Many thanks for your post. I'd been trying for days to get this sorted and was getting well fed up with iTunes. I really thought I'd never get it working again. I tried uninstalling it and loading older versions, and they still wouldn't work.
    I came across your suggestion by chance and, top man, it worked!
    How you even knew what to do is beyond me - but thanks so much. I really was pulling my hair out.
    You should put your post out over the Web, as there seem to be loads of people having the same trouble.
    Thanks again.

  • Is there a setting at source system level for master data delta loading performance

    Hi Viwers,
    Good Morning,
    I am loading master data from the source to the target, and it is taking too much time. I would like to know how to increase master data delta loading performance.
    Is there any setting required at the source system level, and also at the target system level?
    Please give your inputs.
    Thanks & Regards,
    Venkat Vanarasi.

    Venkat -
            Are you deleting the indexes of the data target before delta loading? If not, delete the indexes of the data target before the delta load and recreate them after it is done. This procedure increases load performance. You can perform the whole procedure in a process chain, as sketched below.
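    In plain SQL terms, the drop/load/recreate pattern looks like this (index, table, and column names are made up for illustration; in BW you would use the corresponding process chain steps rather than direct SQL):
    -- Drop the secondary index so the delta load doesn't maintain it row by row
    DROP INDEX IX_FACT_REQUEST ON FACT_TABLE;
    -- ... run the delta load into FACT_TABLE here ...
    -- Rebuild the index once the load has finished
    CREATE INDEX IX_FACT_REQUEST ON FACT_TABLE (REQUEST_ID);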
    Anesh B

  • PS GP Identification process loads all job data to the TEMP table?

    We use PeopleSoft Global Payroll for our payroll process. We have about 200,000 employees with 18 pay groups in the system. When we run the Identification process for one pay group, we found that it loads all 200,000 employees' data into the TEMP table, including the employees in other pay groups and the employees terminated before this cycle. This makes the Identification process performance very bad.
    Does anyone know how to make the Identification process load only the current pay group's employee job data into the TEMP table?
    Our PS application version is 8.8, PeopleTools version is 8.45.

    By the temp table we mean the table the COBOL process uses to store job data.
    When we traced the COBOL, we found the following statement:
    22:31:38 7088 0.000 0.000 #1 RC=0 COM Stmt=
    INSERT INTO PS_GP_JOB2_WRK
      (EMPLID, CAL_RUN_ID, EMPL_RCD, EFFDT, GP_PAYGROUP, PAY_SYSTEM_FLG, EMPL_STATUS)
    SELECT DISTINCT J.EMPLID, R.CAL_RUN_ID, J.EMPL_RCD, J.EFFDT, J.GP_PAYGROUP,
           J.PAY_SYSTEM_FLG, J.EMPL_STATUS
    FROM PS_GP_CAL_RUN R, PS_JOB J
    WHERE J.EMPLID BETWEEN :1 AND :2
      AND R.CAL_RUN_ID = :3
      AND ('N' = :4 OR J.EMPLID IN
             (SELECT EMPLID FROM PS_GP_GRP_LIST_RUN WHERE RUN_CNTL_ID = :5 AND OPRID = :6))
      AND J.EFFSEQ = (SELECT MAX(EFFSEQ) FROM PS_JOB JJ
                      WHERE JJ.EMPLID = J.EMPLID AND JJ.EMPL_RCD = J.EMPL_RCD
                        AND JJ.EFFDT = J.EFFDT)
      AND EXISTS (SELECT 1 FROM PS_JOB JJ
                  WHERE JJ.EMPLID = J.EMPLID AND JJ.EMPL_RCD = J.EMPL_RCD
                    AND JJ.PAY_SYSTEM_FLG = 'GP')
