Data Marts from a BW Cube to an APO Cube

Hi,
     Please tell me the steps I need to perform to create data marts from a BW cube to an APO cube. All our source systems are set up and everything is fine; we just need to create the data marts from the BW cube to the APO cube. I have worked on data marts within BW cubes, but not from a BW cube to an APO cube.
thanks
arya

Dear,
You will have to create a source system for BW in APO; this way you can replicate the DataSources from BW into APO. In short: create the BW source system in APO (RSA1), generate the export DataSource on the BW cube (it will be named 8<cube name>), replicate it in APO, assign it to an InfoSource there, create update rules to the APO cube, and load with an InfoPackage (the last thread below walks through the same scenario).
Step-by-step procedures:
http://help.sap.com/bp_biv335/BI_EN/BBLibrary/documentation/B84_BB_ConfigGuide_EN_DE.doc
http://help.sap.com/bp_biv335/BI_EN/html/BW/DemPlanAnal.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5f229690-0201-0010-84ba-9ee5a8958a05
Thanks

Similar Messages

  • APO Cube data is not showing in Planning Book

    Hi All,
    Please help me with the following issue. We used to load data into the APO cube manually from a flat file, by executing the BW (backlog, historical) reports every week (not a good practice). At that time the planning book was working fine. We have now automated this process by creating data marts from the BW ODS to the APO cube, and the loads run successfully in a process chain. But I don't see the cube data in the planning book. In this process, the properties of the existing InfoObjects were changed on the APO side (the ALPHA conversion exit was removed) for mapping and loading purposes. So when I load the data, the load is successful, but I don't see the data in the planning book. (Note: the cube was refreshed, and I guess I need to generate CVCs and do something in /SAPAPO/TSCUBE.) Please explain the detailed steps. I am a BW consultant and don't know much about APO. At present I see the data in the APO cube but not in the planning book.
    Thank you
    Arya

    Hi karan
    Have you loaded the data from the APO cube into the planning book with /SAPAPO/TSCUBE? You can add this step to your process chain as well.
    If you have only changed the structure of the BW data mapping, you do not need to change the CVCs.

  • Unable to update the Data from Cube to Data Mart

    Hi,
    I have a problem with data loading to a cube (data mart) in BW. When I check it in RSA3, it shows 0 records. The data flow is: R/3 -> ODS (BW) -> cube (BW) -> cube (BW, data mart for the APO cube) -> APO system. In BW, whenever data is loaded to the final data target, the previous request is deleted and a full load is run. On checking this final cube (the data mart to APO), records are available; yet when I check this final data target (data mart to APO) in RSA3, it shows 0 records.
    Why? please help me.
    Regards,
    krishna

    Hi,
    I checked the data mart in RSA3. It is not a matter of full upload or delta upload.
    thanks
    Krishna

  • Error Caller 09 contains error message - Data Marts loading (cube to ODS)

    Dear all,
    Please help me with this problem; it is very urgent.
    I have one process chain that loads data from BW to BW only, through data marts. In that chain, one process loads data from a cube (created by us) into an ODS (also created by us). Data is loaded through full update, for the period specified in the 'Calendar Day' field of the data selection.
    Previously I was able to load data for 2 months, but some days ago the extraction process suddenly got stuck in the background for a long time and showed the following error:
              Error message from the source system
              Diagnosis
             An error occurred in the source system.
              System Response
             Caller 09 contains an error message.
             Further analysis:
             The error occurred in Extractor . 
             Refer to the error message.
             Procedure
             How you remove the error depends on the error message.
             Note
    If the source system is a client workstation, it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory and is not being processed at the moment, and restart the request.
    We then killed that process on the server; after another attempt, it showed a CALMONTH/timestamp error. After reducing the data selection period, the load completed successfully, and after that I was able to load data for 20 days. After some days the process got stuck again; I followed the same procedure, reduced the period to 15 days, and continued. Now I cannot even load data for 5 days successfully in one attempt; I have to kill the background process and repeat it, and then it sometimes loads.
    Please suggest some solutions as soon as possible. I am waiting for your reply. Points will be assigned.
             Thanks,
              Pankaj N. Kude

    Hi Friends!
    I didn't find any short dump for it in ST22.
    What actually happens is that the request continues to run in the background indefinitely. At that time the Status tab in the process monitor shows these messages:
                        Request still running
                        Diagnosis
                        No errors found. The current process has probably not finished yet.
                         System Response
                         The ALE inbox of BI is identical to the ALE outbox of the source system
                           or
                         the maximum wait time for this request has not yet been exceeded
                           or
                        the background job has not yet finished in the source system.
                       Current status
                       in the source system
    And the Details tab shows the following messages:
    Overall status: Missing messages or warnings
    Requests (messages): Everything OK
            Data request arranged
            Confirmed with: OK
    Extraction (messages): Missing messages
            Data request received
            Data selection scheduled
            Missing message: Number of sent records
            Missing message: Selection completed
    Transfer (IDocs and tRFC): Everything OK
            Info IDoc 1: Application document posted
            Info IDoc 2: Application document posted
    Processing (data packet): No data
    This process runs for an indefinite time; then I have to kill it from the server, and it shows the Caller 09 error in the Status tab.
    Should I change the value of that memory parameter on the server or not? We are planning to try it today. Does it really relate to this problem? Will it help? What are the risks?
    Please give your suggestions as early as possible; I am waiting for your reply.
      Thanks,
    Pankaj N. Kude

  • What is the difference between SSAS & Data Mart? Is SSAS required to create BI Dashboard in SharePoint 2013?

    Hello All,
    Greetings for the day.!
    What is the main feature-wise difference between SQL Server Analysis Services and a data mart? Can we create a BI dashboard in SharePoint 2013 without creating SSAS cubes and measures?
    Thanks,

    Hi jdoshi65, 
    This is a long one to answer :)
    A data mart is a subset of a data warehouse, designed, for example, for a specific department or to support certain reports. The data warehouse is where ALL the data available for analysis in your company lives; it is a superset of the data mart.
    Then we have SQL Server Analysis Services, a service that ships with SQL Server. It builds an OLAP solution on top of the data warehouse or the data mart (or both) in order to improve and enrich the reports, analytics, and insights coming from the data in the data warehouse / data mart.
    That's the big picture, but:
    - Is it necessary to have a data warehouse or a data mart to use SQL Server Analysis Services? NO
    - Is it necessary to have SQL Server Analysis Services to query and create reports from a data warehouse / data mart? NO
    So, answering your question: there is no 'difference' between SQL Server Analysis Services and a data mart, because they are different things.
    And yes, you can create a BI dashboard in SharePoint 2013 without an SSAS cube behind it. You can use SQL Server Reporting Services or PerformancePoint Services to build dashboards in SharePoint 2013.
    Regards.

  • Data mart cube to cube copy records are not matching in target cube

    Hi Experts,
    I need help with the questions below on a data mart cube-to-cube copy (8M* DataSource).
    It is a BW 3.5 system.
    We have two financial cubes.
    Cube A1 is sourced from the R/3 system (delta update), and cube B1 is sourced from cube A1 (full update). The two cubes are connected through update rules with one-to-one mapping and no routines. Basis copied the back-end R/3 system from the production to the quality server approximately 2 months ago.
    Cube A1, which extracts delta loads from R/3, is loading fine. But for the second cube (extraction from cube A1), I am not getting the full volume of data; I get only a meagre amount, although the load shows a successful status in the monitor.
    We tried giving conditions in the InfoPackage (as was done in the previous year's loads), but it still fetches the same meagre volume of data.
    To verify whether this happens only for that particular cube, we tried other cubes sourced through the 'myself' system, and they also get meagre data rather than the full set.
    For example, for an employee with 1,000 records available, the system extracts only some 200 records, seemingly at random.
    Any quick reply will be very helpful. Thanks

    Hi Venkat,
    Did you do any selective deletions in cube A1?
    First reconcile the data between cube A1 and cube B1:
    match the totals of cube A1 with those of cube B1.
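    For a quick technical cross-check, you could also compare the key figure totals directly on the fact tables in SE38. The following is only a sketch: the cube names A1/B1 and the key figure ZAMOUNT are placeholders for your real objects, and both the F (uncompressed) and E (compressed) fact tables are read, since compressed requests move to the E table:

    REPORT z_reconcile_cubes.
    DATA: lv_a1  TYPE p DECIMALS 2,
          lv_b1  TYPE p DECIMALS 2,
          lv_tmp TYPE p DECIMALS 2.
    " Total of key figure ZAMOUNT in cube A1 (F + E fact table)
    SELECT SUM( /bic/zamount ) INTO lv_a1  FROM /bic/fa1.
    SELECT SUM( /bic/zamount ) INTO lv_tmp FROM /bic/ea1.
    lv_a1 = lv_a1 + lv_tmp.
    " Total of key figure ZAMOUNT in cube B1 (F + E fact table)
    SELECT SUM( /bic/zamount ) INTO lv_b1  FROM /bic/fb1.
    SELECT SUM( /bic/zamount ) INTO lv_tmp FROM /bic/eb1.
    lv_b1 = lv_b1 + lv_tmp.
    WRITE: / 'Cube A1 total:', lv_a1,
           / 'Cube B1 total:', lv_b1.

    If the totals already differ per consolidation-unit selection, the gap is in the data mart load itself rather than in reporting.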
    Thanks,
    Vijay.

  • How to load the existing Data Mart request into the cube

    Hi Friends,
    I have a scenario: I have a DSO and one cube, and I am loading data from the DSO to the cube.
    The number of records in the DSO is large, and due to some calculation in the routine I need to load the data year by year (I have data for 2007 and 2008). I have loaded the 2007 data into the InfoCube, and in the DSO I can see the data mart symbol against the request ID once the load has finished successfully.
    When I try to load the 2008 data from the DSO to the cube, I get 0 records. I realised that the data mart symbol already exists in the DSO.
    How do I load the 2008 data in this scenario? If I delete the data mart symbol, I delete the cube request as well.
    Does anyone have an idea on this?
    Thanks in advance.

    Hi,
    Things are not fully clear:
    how is the load happening; is it a delta or full load through a DTP, or are you using the 3.x flow?
    In any case, if you do a full load or a full repair based on the year selection, it should pick the records from the source; the data mart status is irrelevant for full loads.
    The data mart status comes into the picture only when you schedule delta loads.
    So do a full load with a selection on year from the DSO to the cube; there is no need to delete the data mart status (deleting it would bring that request again once a delta is scheduled). Does that request in the DSO contain the data for 2008 only? If yes, you can simply delete the data mart status for it and run a delta; if not, do full loads as described (see the routine sketch below if you want to derive the selection in code).
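    If you ever need the year selection derived in code rather than typed into the InfoPackage, a data-selection routine on the InfoPackage is an option. This is a minimal sketch of the routine body only, assuming the classic 3.x routine template (selection table l_t_range of structure RSSDLRANGE, return code p_subrc) and a selection field CALYEAR; the generated frame and field names may differ in your release:

    " Restrict the InfoPackage data selection to calendar year 2008
    DATA: l_idx LIKE sy-tabix.
    READ TABLE l_t_range WITH KEY fieldname = 'CALYEAR'.
    l_idx = sy-tabix.
    l_t_range-sign   = 'I'.
    l_t_range-option = 'EQ'.
    l_t_range-low    = '2008'.
    MODIFY l_t_range INDEX l_idx.
    p_subrc = 0.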
    Thanks
    Ajeet

  • Value gets doubled in the cube when doing a data mart

    Hi,
    I am doing a data mart load from one cube to another for different consolidation units, and the value gets doubled for all consolidation units except one. There are no duplicate records in the cube.
    Can anyone tell me how I can debug this issue, or what the reasons could be? This is very urgent.
    Thanks and Regards,
    Subha

    In the cube that is being loaded, check the content, restricting on the request ID of the request that loaded the cube.
    If you find the consolidation unit values doubled there, check your update rules and transfer rules to see whether the value is being changed anywhere; a sketch of what a clean key figure routine looks like follows below.
    Then check the PSA.
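    If there is a routine after all, compare it against the generated BW 3.x template. The sketch below uses hypothetical names (communication structure /BIC/CS8ZSRC, key figure ZAMOUNT, target cube ZTGT); a clean one-to-one mapping just passes the value through, and anything beyond that (adding the value twice, summing over the data package again) is typically where doubling creeps in:

    FORM compute_key_field
      USING    comm_structure LIKE /bic/cs8zsrc
               record_no      LIKE sy-tabix
               record_all     LIKE sy-tabix
               source_system  LIKE rsupdsimulh-logsys
      CHANGING result         LIKE /bic/vztgtt-/bic/zamount
               returncode     LIKE sy-subrc
               abort          LIKE sy-subrc.
      " Clean one-to-one mapping: pass the source key figure through unchanged
      result = comm_structure-/bic/zamount.
      " returncode <> 0 skips the record; abort <> 0 cancels the load
      returncode = 0.
      abort      = 0.
    ENDFORM.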
    Hope that helps.
    Regards.

  • Error while generating the Data Mart for an InfoCube

    Hi Gurus,
    I need to extract APO InfoCube data into a BW InfoCube. For that I am trying to generate the data mart (export DataSource) for the APO InfoCube,
    so that I can use it as a DataSource to extract the APO InfoCube data into the BW InfoCube.
    While generating it, I get errors like the following:
    Creation of InfoSource 8ZEXTFCST for target system BW 1.2 failed
    The InfoCube cannot be used as a data mart for a BW 1.2 target system.
    Failed to create InfoSource &v1& for target system BW 1.2.
    Please suggest what I can do about this error.
    Thanks a lot in advance.

    Hi,
    Point No 1:
    What is a planning area:
    http://help.sap.com/saphelp_scm41/helpdata/en/70/1b7539d6d1c93be10000000a114084/content.htm
    Point No 2:
    Creation steps for a planning area:
    http://www.sap-img.com/apo/creation-of-planning-area-in-apo.htm
    Note: We will not create the planning area; this will be done by the APO team.
    Point No 3: After opening transaction /n/SAPAPO/MSDP_ADMIN in APO, you will be able to see all the planning areas.
    Point No 4: Select your planning area, go to the Extras menu, and click on Generate DS.
    Point No 5: The system automatically generates the DataSource in APO (the naming convention starts with 9). Replicate the DataSource in BI, map it to your cube, and load the data.
    Regards
    Ram.

  • Help Required for Mapping Key figures from Cube to APO Planning area.

    Hello Experts,
    We have created a cube in APO BW and now we want to map it to a planning area. How can we do the mapping?
    Can anybody explain how we can map key figures?
    What is the use of liveCache, and how is it updated?
    Regards
    Ram

    Hi,
    I am not very sure about the 9ARE aggregate (I haven't used it in backups), but RTSCUBE is used to copy time series (TS) key figure data from a cube to a planning area (SNP or DP).
    Are you trying to restore some time series data from your backup cube to the planning area? If yes, then map the characteristics from the cube to the planning area in RTSCUBE, and also map the TS key figures between the cube and the planning area.
    If your key figure is not a time series key figure, you cannot copy it from the cube to the planning area. You could get the data into a cube for some reporting, but otherwise I am not sure what use the backup is to you. For SNP, most of the data would be received from R/3 anyway, so there is not much point in having a backup.
    Hope this helps.
    Thanks - Pawan

  • How to find the InfoSource for an export DataSource in the Data Marts node

    Hi
    I need to load data from an ODS to an InfoCube. I created the export DataSource for the ODS, and I can see it; but in the Data Marts node of the InfoSource tree I cannot find the InfoSource for the export DataSource I created. I replicated the DataSources in the BW source system. I also tried 'Insert Lost Nodes' from the context menu of the InfoSource node, but nothing worked. Please let me know what I need to do to see the InfoSource under Data Marts.
    Thanks
    Padma

    In the InfoSource tab in RSA1, choose Settings --> Display Generated Objects;
    you will then be able to see the data mart InfoSources.

  • Data Mart and Data Extraction from an Infocube

    Can a data mart built on an InfoCube in BW support delta extraction? We have two separate BW systems and are trying to extract data from an InfoCube in one BW (source) and load it into an InfoCube in the other BW (target).
    We have built a data mart on the source BW InfoCube and have successfully extracted and loaded the initial data into the target BW InfoCube. I noticed that the field 0RECORDMODE was not on the DataSource created to support this data mart, so my gut feeling is that we will not be able to do delta extractions from it. Any feedback or confirmation of this?

    Hi,
    The InfoCube that is used as an export DataSource is first initialized, meaning that the current status is transferred into the target BW. With the next upload, only those requests that have come in since the time of initialization are transferred. Different target systems can also be supplied like this.
    Since the delta is request-based, a data mart on an InfoCube supports delta; 0RECORDMODE is not needed, because an InfoCube delta consists of complete new requests rather than changed records.
    Hope its helpful,
    Anu.

  • Do we require an OLTP DB and Data Mart?

    Our data sources are as follows:
    - An mdb file (downloaded every hour)
    - Multiple xls files (downloaded every week)
    Our aim is to develop a BI solution using BISM, Data Mart, OLAP cubes etc.
    From my understanding, we do not necessarily require an OLTP DB; we can import our data directly into our data mart using SSIS.
    However, with a data mart, will we be able to display all our data and perform CRUD operations on it at the presentation layer, just like with an OLTP DB? For example, list historical data in table format, which can be updated if needed?
    Thanks.

    Hi DarrenOD,
    It is correct that you do not require an OLTP DB, only the extracts you need. The extracts are usually significantly smaller than the OLTP DB, since you will never do analysis on every field in the operational system, only on a small portion of the source system.
    The traditional data warehouse (DWH) architecture is: staging DB, DWH DB (plus data marts if needed), and an analytic layer (OLAP / Tabular). There are very specific and good reasons for this. The DWH DB contains all history. Keep in mind that the DWH follows a dimensional model, whereas the OLTP follows a normalized (3NF) model with lots of indexes, foreign keys, and table relationships.
    Data marts are created for specific reporting needs that cannot be derived directly from the DWH facts. The marts are created from the DWH tables.
    Hope this helps.

  • Oracle BPM Process Data mart

    I am required to create audit reports on BPM workflows.
    I am new to this and need some guidance on configuring the BPM Process Data Mart: what are the prerequisites for configuring it, and what are the steps?
    Also, I need some input on the BAM database. What is the frequency of data upload? Is it an update or an insert in BAM?

    Hi,
    You might want to check out the Administration and Configuration Guides on http://download.oracle.com/docs/cd/E13154_01/bpm/docs65/index.html.
    I suspect you might find the BAM and Data Mart portions of this documentation a bit terse, so I've added the steps below in more detail. I wrote this for ALBPM 6.0, but I believe it will still work for Oracle BPM 10g. It was created from an earlier ALBPM 5.7 document Support wrote, called "ALBPM 5_7 Configuring and Troubleshooting the BAM and DataMart Updater.pdf".
    You can define how often you want the contents in both databases updated (actually inserted) and how long you want to persist the contents of the BAM database during the configuration.
    Here's the contents of the document:
    1. Introduction
    The use of BAM (Business Activity Monitoring) and Data Mart (or Warehouse) information is becoming more and more widespread in today's BPM project implementations, for the obvious benefits they bring to the management and tuning of processes.
    BAM is basically composed of a collection of measurements of current process load and execution times. This gives us an idea of how the business is doing at this moment (in a pseudo real-time fashion).
    Data Mart, on the other hand, is a historical view of the process load and execution times. This gives us an idea of how the business has developed since the projects were put in place.
    In this document we are not going to describe exhaustively all configuration aspects of the BAM and Data Mart Updater; rather, we will move quickly from one configuration step to another, paying more attention to subjects that have presented difficulties in real-life projects.
    2. Creating the Service Endpoints
    The databases for BAM and for Data Mart first have to be defined in the External Resources section of the BPM Process Administrator.
    In the following example the service endpoint 'BAMJ2EEWL' is defined. This definition is going to be used later as BAM storage. At this point nothing is created.
    Add an External Resource with the name ‘BAMJ2EEWL’ and, as we use Oracle, select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express, so the SID is 'XE'
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMBAM’. This user, and its database, will be created later
    ·     the password for the user
    Scroll down to the bottom of the page and click <Save>.
    In addition to the standard JDBC connection that is going to be used by the Updater Service, a remote JDBC configuration needs to be added, as the Engine runs in a WebLogic J2EE container. This Data Source is needed to grant the Engine access to the BAM tables through the J2EE connection pool instead of through a dedicated JDBC connection. The following is an example of how to set this up.
    Add an External Resource with the name ‘BAMremote’ and select the Oracle driver, then click <Next>
    On the following screen, specify:
    ·     the Lookup Name that will be used subsequently in WebLogic - here I have given it the name ‘XAbamDS’
    Then click <Save>.
    In the next example the definition ‘DWHJ2EEWL’ is created to be used later as Data Mart storage. If you are not going to use a Data Mart storage you can skip this step.
    Add an External Resource with the name ‘DWHJ2EEWL’ and select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express, so the SID is 'XE'
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMDWH’. This user, and its database, will be created later
    ·     the password for the user
    3. Configuring BAM Updater Service
    Once the service endpoint has been created the next step is to enable the BAM update, select the service endpoint to be used as BAM storage and configure update frequency and others. Here the “Updater Database Configuration” is the standard JDBC we configured earlier and the “Runtime Database Configuration” is the Remote JDBC as we are using the J2EE Engine.
    So, here’s the example of how to set up the BAM Updater service….
    Go into ‘Process Monitoring’ and select the ‘BAM’ tab and enter the relevant information (using the names created earlier – use the drop down list to select):
    Note that here, to allow me to quickly test BAM reporting, I have set the update frequency to 1 minute. This would not be the production setting.
    Once the data is input, click <Save>.
    We now have to create the schema and related tables. For this we will open the “Manage Database” page that has appeared at the bottom of the BAM screen (you may have to re-select that Tab) and select to create the database and the data structure. The user required to perform this operation is the DB system administrator:
    Text showing the successful creation of the database and data structures should appear.
    Once we are done with the schema creation, we can move to the Process Data Mart configuration screen to set up the Common Updater Service parameters. Notice that the service has not been started yet… We will get to that point later.
    4. Configuring Process Data Mart Updater Service
    In the case that Data Mart information is not going to be used, the “Enable Automatic Update” checkbox must be left off and the “Runtime Database Configuration” empty for this service. Additionally, the rest of this section can be skipped.
    In the case it is going to be used, the detail level, snapshot time and the time of update should be configured; in addition to enabling the updater and choosing the storage configuration. An example is shown below:
    Still in ‘Process Monitoring’, select the ‘Process Data Mart’ tab and enter the name created earlier (use the drop down list to select).
    Also, un-tick the Generate O3 Cubes (see later notes):
    Then click <Save>.
    Once those properties have been configured, the database and the data structure have to be created. This is performed on the "Manage Database" page, for which the link has appeared at the bottom of the page (as with BAM). Even though this page is identical to the one shown above (for the BAM configuration), it has been opened from the link on the "Process Data Mart" page, and this makes it different.
    Text showing the successful creation of the database and data structures should appear.
    5. Configuring Common Updater Service Parameters
    In the “Process Data Mart” tab of the Process Monitoring section -along with the parameters that are specific to the Data Mart - we will find some parameters that are common to all services. These parameters are:
    • Log directory: location of the log file
    • Messages logged from Data Store Updater: severity level of the Updater logs
    • Language
    • Generate Performance Metrics: enables performance metrics generation
    • Generate Workload Metrics: enables workload metrics generation
    • Generate O3 Cubes: enables O3 Cubes generation
    In this document we are not going to describe in detail each parameter. But we will mention a few caveats:
    a. the Log directory must be specified in order for the logs to be generated
    b. the Messages logged from Data Store Updater, which indicates the level
    of the logs, should be DEBUG for troubleshooting and WARNING otherwise
    c. Performance and Workload Metrics need to be on for the typical BAM usage and, even when either metric might not be used on the initial project releases, it is recommended to leave them on in case they turn out to be useful in the future
    d. the generation of O3 Cubes must be off if this service is not used; otherwise the Data Mart Updater service might not work properly.
    The only change required on this screen was to de-select 'Generate O3 Cubes', as shown in the previous section.
    6. Set up the WebLogic configuration
    We need to set up the JDBC data source specified above, so go to Services / JDBC / Data Sources.
    Click on <Lock and Edit> and then <New> to add a New data source.
    Specify:
    ·     the Name – use the name you set up in the Process Administrator
    ·     the JNDI Name – again use the name you set up in the Process Administrator
    ·     the Database Type – Oracle
    ·     use the default Oracle Database Driver
    Then click <Next>
    On the next screen, click <Next>
    On the next screen specify:
    ·     the Database Name – this is the SID – for me that is XE
    ·     the Host Name – as I am running on my laptop, I’ve just specified ‘localhost’
    ·     the Database User Name and Password – this is the BAM database user specified in the Process Administrator
    Then click <Next>
    On the next screen, you can test the configuration to make sure you have got it right, then click <Next>
    On the next screen, select your server as the target server and click <Finish>:
    Finally, click <Activate Changes>.
    7. The Last Step: Starting Up and Shutting Down the Updater Service
    ALBPM distribution is different depending on the Operating System. In the case of the Updater Service:
    -     For Unix-like operating systems the service is started or stopped with the albpmwarehouse.sh shell script. The command in this case is going to look like this:
    $ALBPM_HOME/bin$ ./albpmwarehouse.sh start
    -     For Windows Operating Systems the service is installed or uninstalled as a Windows Service with the albpmwarehouse.bat batch file. The command will look like:
    %ALBPM_HOME%\bin> albpmwarehouse.bat install
    After installing the service, it has to be started or stopped from the Microsoft Management Console. Note also that Windows will automatically start the installed service when the computer starts. In either case the location of the script is ALBPM_HOME/bin, where ALBPM_HOME is the ALBPM installation directory. An example would be:
    C:\bea\albpm6.0\j2eewl\bin\albpmwarehouse.bat
    8. Finally: Running BAM dashboards to show it is Working
    Now we have finally got the BAM service running, we can run dashboards from within Workspace and see the results:
    9. General BAM and Data Mart Caveats
    a. The basic difference between these two collections of measurements is that BAM keeps track of current process load and execution times, while Data Mart contains a historical view of those same measurements. This is why BAM information is collected frequently (every minute) and cleared out every several hours (or every day), and why Data Mart is updated infrequently (once a day) and grows indefinitely. Moreover, BAM measurements can be thought of as a minute-by-minute sequence of Engine Events snapshots, while Data Mart measurements are a daily sequence of Engine Events snapshots.
    b. BAM and Data Mart table schemas are very similar but they are not the same. Thus, it is important not to use a schema created with the Manage Database for BAM as Data Mart storage or vice-versa. If these schemas are exchanged by mistake, the updater service will run anyway but no data will be added to the tables and there will be errors in the log indicating that the schema is incorrect or that some tables could not be found.
    c. BAM and Data Mart Information and Services are independent from one another. Any of them can be configured and running without the other one. The information is extracted directly from the Engine Database (PPROCINSTEVENT table is the main source of info) for both of them.
    d. So far there has not been a mention of engines, projects or processes in any of the BAM or Data Mart configurations. This is because the metrics of all projects published under the current Process Administrator (or, more precisely, FDI Directory) are going to be collected.
    e. It is also important to note that only activities for which events are generated are going to be measured (and therefore, shown in the metrics). The project default is to generate events only for Interactive activities. This can be changed for any particular activity and for the whole process (where the activity setting, when specified, overrides the process setting). Unfortunately, there is no project setting for events generation so far; thus, remember to edit the level of event generation for every new process that is added to the project.
    f. BAM and Data Mart metrics are usually enriched with Business Variables. These variables are a special type of External Variables. An External Variable is a process variable with the scope of an Instance and whose value is stored on a separate column in the Engine Instances table. This allows the creation of views and filters based on this variable. A Business Variable, then, shares all the properties of an External Variable plus the fact that its value is collected in all BAM and Data Mart measurements (in some cases the value is shown as it is for a particular instance and in others the value is aggregated).
    The caveat here is that there is a maximum of 256 Business Variables per FDI. Therefore, when publishing several projects into a single FDI directory it is recommended to reuse Business Variables. This is achieved by mapping similar Business Variables of different projects to a single real variable (in the variable mapping performed at publish time).
    g. Configuring the Updater Service Log
    In section 5. Configuring Common Updater Service Parameters we have seen that there are two common Updater properties related to logging. These properties are “Log directory” and “Messages logged from Data Store Updater”, and they specify the location and level of these two files:
    - dwupdater.log: which is the log for the Data Mart updater service
    - bam-dwupdater.log: which is the log for the BAM updater service
    In addition to these two properties, there is a configuration file called ‘WarehouseService.conf’ that allows us to modify these other properties:
    - wrapper.console.loglevel: level for the updater service console output
    - wrapper.logfile.loglevel: level for the updater service log file
    - wrapper.java.additional.n: additional argument to the service JVM
    - wrapper.logfile.maxsize: maximum size of the updater service log files
    - wrapper.logfile.maxfiles: maximum number of updater service log files
    - wrapper.logfile: updater service log file name (the default value is dwupdater-service.log)
    9.1. Updater Service Log Configuration Caveats
    a. The first three parameters listed above have to be modified when increasing the log level to DEBUG (the default is WARNING). The loglevel parameters have to be set to DEBUG, and a wrapper.java.additional.n (where n is the next unused consecutive integer) has to be set to -ea to enable assertions, since without this option no DEBUG messages are generated.
    b. Of the other arguments, maxfiles might need to be increased to hold a few more days of data when the log level is set to DEBUG (with the default value, up to two days are stored).
    c. The updater service has to be stopped, uninstalled, installed and then started for any of these changes to take effect.
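    For illustration only (the exact keys, and the next free additional.n index, depend on your installation's WarehouseService.conf), a DEBUG setup could look like this:

    # raise both updater service logs to DEBUG
    wrapper.console.loglevel=DEBUG
    wrapper.logfile.loglevel=DEBUG
    # assuming indexes 1 and 2 are already taken; -ea enables the assertions
    # without which no DEBUG messages are generated
    wrapper.java.additional.3=-ea
    # keep more days of log files around while debugging
    wrapper.logfile.maxfiles=10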
    Hope this helps,
    Dan

  • APO/BW: Creating datasource in BW for APO

    Hi
    1. I created a DataSource (in BW) to be used in APO by right-clicking on the cube in question and selecting the option "Generate Export DataSource". It created a DataSource 8CubeName, which can be found under the InfoSource tree in BW. At this point, will APO see 8CubeName?
    2. Where in BW do we specify that it should go to APO? What if there are other systems that could also use this same DataSource; how do we specify which one?
    3. I also understand that update rules can be created on the APO side. If so, what becomes of the update rules created on the BW end? In other words, if update rules are applied on the APO side and also on the cube on the BW side, what happens?
    4. OK, so if I understand it right, this process creates a structure for the DataSource in BW, which is seen in APO DP. But which process actually pushes data from BW to APO DP?
    Thanks

    Hi Amanda,
    I believe most of your questions will be answered by understanding that inside APO/SCM itself there is a BW system that works just like the BW you have worked with and understand well; the data staging from BW to and from APO is like an interaction between two BW systems (a data mart scenario). Try RSA1 in your APO system.
    1. Yes, APO will see the DataSource after you replicate it in APO from BW as a source system. In APO we create the source system with RSA1, just as in BW, and transfer the application component hierarchy (RSA9); this DataSource will be displayed under 'Data Mart'.
    2. It is just like one BW acting as a source system for another BW system: we do not need to specify in BW where it should go; rather, in APO we assign this DataSource to an InfoSource.
    3. Again, APO contains a 'mini' BW, so update rules in APO are used independently by the APO BW; nothing happens to the existing BW update rules.
    4. Create an InfoSource in APO, assign the DataSource from BW to it, and create an InfoPackage.
    You can use process chains as well; in APO there are additional process types (specific to APO usage, like generating CVCs, adjusting time series, etc.).
    http://help.sap.com/saphelp_scm50/helpdata/en/8a/9d6937089c2556e10000009b38f889/frameset.htm
    http://help.sap.com/saphelp_scm50/helpdata/en/13/5ada58309111d398250000e8a49608/frameset.htm
    hope this helps.
