BPC Best Practices: Sales Planning (BP2)

I'm trying to follow these instructions:
1. Log on to Interface for Excel.
To do so, start the SAP BPC Launch Page from your desktop icon or via the Start menu; then, in the Programs folder, choose SAP -> Business Planning and Consolidation.
2. On the SAP BPC Launch Page, choose Interface for Excel. In the dialog box, select the AppSet SAP_BP_Planning and the Application Sales_Planning.
My problem is that in my BPC Launch Page (at step 2) I don't have these options:
AppSet: SAP_BP_Planning
Application: Sales_Planning
These are the options that I have:
AppSet: ApShell (in the combo box)
Application:         (nothing in the combo box)
Can anyone figure out why I don't have those options (SAP_BP_Planning, Sales_Planning)?
Thanks

The version I have is BPC 7.5 SP4 with SQL Server 2005
I'm new at this: I installed this software on a Windows Server 2003 virtual machine and now I'm trying to learn how to use it. I have downloaded the configuration guide, and those steps are there. This is the only step that I couldn't follow:
3 Prerequisites
Before you start installing this BPC scenario, you must install prerequisite scenarios. For more information, see the BPC prerequisite matrix (Prerequisites_Matrix_[xx]_EN_JP.xls; the placeholder [xx] depends on the SAP Best Practices version you use, for example, BPC refers to the SAP Best Practices SAP BusinessObjects Planning and Consolidation 7.5: Prerequisites_Matrix_BPC_EN_JP.xls). This document can be found on the SAP Best Practices documentation DVD in the folder \BPC_JP\Documentation\.
I couldn't find that file in the Best Practices folder (50101040)
Thanks

Similar Messages

  • Best Practice for Planning and BI

    What's the best practice for Planning and BI infrastructure - set up combined on one box or separate? What are the factors to consider?
    Thanks in advance..

    There is no way that question could be answered with the information that has been provided.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best practice for Plan and actual data

    Hello, what is the best practice for plan and actual data? Should they both be in the same application or in different ones?
    Thanks.

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application must include the Category dimension, so you can use this dimension to separate the actual and plan data.
    Hope this helps.

  • Best Practice for Plan for Every Part (PFEP) Database/Dashboard?

    Hello All-
    I was wondering if anyone had experience with implementing / developing a Plan for Every Part (PFEP) Database in SAP. My company is looking to migrate its existing PFEP solution (Custom developed Excel/Access system) into SAP. If you are unfamiliar, a PFEP is a dashboard view of a part/material that provides various business groups with dedicated views to data from Material Masters, Info Records, and Vendor Master Records and combines it with historical/forecasting information. The goal is to provide a single source to all the part/material settings for a given part.
    Is there a Best Practice PFEP in SAP? Or is this something that most companies custom-develop in ERP or BI?
    Thanks in advance.
    -Ron

    I think you will likely get a response in SAP ERP - Logistics Materials Management (SAP MM).
    Additionally, you might want to do some searches based on SAP Lean Inventory, perhaps Kanban. I am assuming you are not using WM or EWM either?
    Where I have seen PFEP incorporated into the supply chain strategy, it typically requires considerable additions to the alternate units of measure in MM, dropping of automatic replenishment levels (reorder level), and rethinking aspects of the MRP plan, so be prepared for significant additional data management work if you haven't already started on that. I believe Ryder Logistics uses PFEP and their SAP infrastructure is managed by IBM; it might be an idea to try to find a LinkedIn resource from there. You may also find one of the ASUG supply chain, logistics, MM, or WM SIGs a good place to ask questions and look for answers.

  • Best practice: Deployment plan for cluster environment

    Hi All,
    I want to know which way is the best practice for preparing and deploying a new configuration in a WLS cluster environment. How can I plan a simultaneous deployment to ALL nodes, without a single point of failure?
    Regards,
    Moh

    Hi All,
    I got the answer, as follows:
    When you deploy or redeploy an application, the deployment is initiated from the Admin Server, and it is initiated on all targets (managed servers in the cluster) at the same time, based on the targets (which are expected to be the cluster).
    We recommend that applications be targeted to a cluster instead of individual servers whenever a cluster configuration is available.
    So, as long as you target the application to the cluster, the Admin Server will initiate the deployment on all the servers in the cluster at the same time, so the application is in sync on all servers.
    Hope that answers your queries. If not, please let me know what exactly you mean by synchronization.
    Regards,
    Moh

  • Best Practices & Strategy Planning for SAP BI Architecture

    What best practices and strategy planning should an SAP BI architect know?
    What challenges are involved with this role?
    What other information should this architect know to deliver a robust BI solution?
    Are there any white papers on best practices for architecture and data modeling, please?
    Thanks,
    Venkat.

    Hi
    As per best practice, first load the master data and then the transaction data.
    Please find the link for best practices:
    http://www.sap.com/services/pdf/BWP_SAP_Best_Practices_for_Business_Intelligence.pdf
    Regarding the architecture, it depends on the data volume, the load frequency, and the hardware sizing; based on this we can provide the best solution.
    If you have any issues, please let me know.
    Regards
    Madan

  • Datatype best practice and plan cardinality

    Hi,
    I have a scenario where I need to store the data in the format YYYYMM (e.g. 201001 which means January, 2010).
    I am trying to evaluate what is the most appropriate datatype to store this kind of data. I am comparing 2 options, NUMBER and DATE.
    As the data is essentially a component of the Oracle DATE datatype, and experts like Tom Kyte have shown (with examples) that using the right datatype is better for the optimizer, I was expecting that the DATE datatype would yield (at least) similar, if not better, cardinality estimates than the NUMBER datatype. However, my tests show that with DATE the cardinality estimates are way off from the actuals, whereas with NUMBER the cardinality estimates are much closer to the actuals.
    My questions are:
    1) What should be the most appropriate datatype used to store YYYYMM data?
    2) Why does the DATE datatype yield estimates that are further off from the actuals than the NUMBER datatype?
    SQL> select * from V$VERSION ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL>  create table a nologging as select to_number(to_char(add_months(to_date('200101','YYYYMM'),level - 1), 'YYYYMM')) id from dual connect by level <= 289 ;
    Table created.
    SQL> create table b (id number) ;
    Table created.
    SQL> begin
      2  for i in 1..8192
      3  loop
      4     insert into b select * from a ;
      5  end loop;
      6  commit;
      7  end;
      8  /
    PL/SQL procedure successfully completed.
    SQL> alter table a add dt date ;
    Table altered.
    SQL> alter table b add dt date ;
    Table altered.
    SQL> select to_date(200101, 'YYYYMM') from dual ;
    TO_DATE(2
    01-JAN-01
    SQL> update a set dt = to_date(id, 'YYYYMM') ;
    289 rows updated.
    SQL> update b set dt = to_date(id, 'YYYYMM') ;
    2367488 rows updated.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'A', estimate_percent=>NULL) ;
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user, 'B', estimate_percent=>NULL) ;    
    SQL> explain plan for select count(*) from b where id between 200810 and 200903 ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 749587668
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     5 |   824   (4)| 00:00:10 |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| B    | 46604 |   227K|   824   (4)| 00:00:10 |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       2 - filter("ID"<=200903 AND "ID">=200810)
    14 rows selected.
    SQL> explain plan for select count(*) from b where dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 749587668
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     5 |   825   (4)| 00:00:10 |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| B    |  5919 | 29595 |   825   (4)| 00:00:10 |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       2 - filter("DT">=TO_DATE('2008-10-01 00:00:00', 'yyyy-mm-dd
               hh24:mi:ss') AND "DT"<=TO_DATE('2009-03-01 00:00:00', 'yyyy-mm-dd
               hh24:mi:ss'))
    16 rows selected.
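    For scale: table B holds 8192 copies of the 289 distinct months, and the range 200810-200903 covers 6 of them, so the actual count should be 6 x 8192 = 49,152 rows. The NUMBER estimate (46,604) is close to that; the DATE estimate (5,919) is not. A quick check (sketch):
    SQL> select count(*) from b where dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;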

    Charles,
    Thanks for your response.
    I did not think of the possibility of histograms. When I ran the tests on 10.2.0.4, I got the results you have shown.
    So I thought it might be due to some bug in 10.2.0.1. But interestingly, when I ran the test after collecting statistics using the 'FOR ALL COLUMNS SIZE 1' option, I got cardinalities that match my 10.2.0.1 results (where METHOD_OPT was the default, i.e. 'FOR ALL COLUMNS SIZE AUTO').
    So I carried out the tests again on 10.2.0.1, but the results did not look consistent to me. When there were no histograms on the DATE column, the cardinality was quite close to the actuals, but when I collected stats using 'FOR ALL COLUMNS SIZE SKEWONLY', it generated histograms on the DATE column and the cardinality was not close to the actuals.
    So I am a bit confused about whether this is due to a bug, or due to the combined effect of the optimizer's "intelligence" when collecting statistics with default option values and the way the table is queried (COL_USAGE$ data).
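    (For reference, the 'FOR ALL COLUMNS SIZE 1' gather mentioned above takes this form; a minimal sketch against the same table:)
    SQL> exec dbms_stats.gather_table_stats(user, 'B', method_opt=>'FOR ALL COLUMNS SIZE 1') ;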
    Here is my test:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> exec dbms_stats.delete_table_stats(user, 'B') ;
    PL/SQL procedure successfully completed.
    SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
    no rows selected
    SQL> exec dbms_stats.gather_table_stats(user, 'B') ;
    PL/SQL procedure successfully completed.
    SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
    COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
    ID                                      289         254 HEIGHT BALANCED
    DT                                      289         254 HEIGHT BALANCED
    SQL> explain plan for select count(*) from b where b.id between 200810 and 200903 ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 749587668
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     5 |  3691   (1)| 00:00:45 |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| B    | 38218 |   186K|  3691   (1)| 00:00:45 |
    Predicate Information (identified by operation id):
       2 - filter("B"."ID"<=200903 AND "B"."ID">=200810)
    14 rows selected.
    SQL> explain plan for select count(*) from b where b.dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 749587668
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     8 |  3693   (1)| 00:00:45 |
    |   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
    |*  2 |   TABLE ACCESS FULL| B    | 38218 |   298K|  3693   (1)| 00:00:45 |
    Predicate Information (identified by operation id):
       2 - filter("B"."DT"<=TO_DATE('2009-03-01 00:00:00', 'yyyy-mm-dd
                  hh24:mi:ss') AND "B"."DT">=TO_DATE('2008-10-01 00:00:00', 'yyyy-mm-dd
                  hh24:mi:ss'))
    16 rows selected.
    SQL> connect sys as sysdba ;
    Connected.
    SQL> delete from sys.col_usage$ where obj# in (select object_id from all_objects where owner = 'HR' and object_name in ('A','B')) ;
    4 rows deleted.
    SQL> commit ;
    Commit complete.
    SQL> connect hr/hr ;
    Connected.
    SQL> set serveroutput on size 10000
    SQL> exec dbms_stats.delete_table_stats(user, 'B') ;
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user, 'B') ;
    PL/SQL procedure successfully completed.
    SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
    COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
    ID                                      289           1 NONE
    DT                                      289           1 NONE
    SQL> explain plan for select count(*) from b where b.id between 200810 and 200903 ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 749587668
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     5 |  3691   (1)| 00:00:45 |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| B    |   110K|   541K|  3691   (1)| 00:00:45 |
    Predicate Information (identified by operation id):
       2 - filter("B"."ID"<=200903 AND "B"."ID">=200810)
    14 rows selected.
    SQL> explain plan for select count(*) from b where b.dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 749587668
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     8 |  3693   (1)| 00:00:45 |
    |   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
    |*  2 |   TABLE ACCESS FULL| B    | 58680 |   458K|  3693   (1)| 00:00:45 |
    Predicate Information (identified by operation id):
       2 - filter("B"."DT"<=TO_DATE('2009-03-01 00:00:00', 'yyyy-mm-dd
                  hh24:mi:ss') AND "B"."DT">=TO_DATE('2008-10-01 00:00:00', 'yyyy-mm-dd
                  hh24:mi:ss'))
    16 rows selected.
    SQL> exec dbms_stats.gather_table_stats(user, 'B') ;
    PL/SQL procedure successfully completed.
    SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
    COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
    ID                                      289         254 HEIGHT BALANCED
    DT                                      289           1 NONE
    SQL> explain plan for select count(*) from b where b.id between 200810 and 200903 ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 749587668
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     5 |  3690   (1)| 00:00:45 |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| B    | 46303 |   226K|  3690   (1)| 00:00:45 |
    Predicate Information (identified by operation id):
       2 - filter("B"."ID"<=200903 AND "B"."ID">=200810)
    14 rows selected.
    SQL> explain plan for select count(*) from b where b.dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 749587668
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     8 |  3692   (1)| 00:00:45 |
    |   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
    |*  2 |   TABLE ACCESS FULL| B    | 56797 |   443K|  3692   (1)| 00:00:45 |
    Predicate Information (identified by operation id):
       2 - filter("B"."DT"<=TO_DATE('2009-03-01 00:00:00', 'yyyy-mm-dd
                  hh24:mi:ss') AND "B"."DT">=TO_DATE('2008-10-01 00:00:00', 'yyyy-mm-dd
                  hh24:mi:ss'))
    16 rows selected.
    SQL> exec dbms_stats.gather_table_stats(user, 'B') ;
    PL/SQL procedure successfully completed.
    SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
    COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
    ID                                      289         254 HEIGHT BALANCED
    DT                                      289           1 NONE
    SQL> exec dbms_stats.gather_table_stats(user, 'B', method_opt=>'FOR ALL COLUMNS SIZE SKEWONLY') ;
    PL/SQL procedure successfully completed.
    SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
    COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
    ID                                      289         254 HEIGHT BALANCED
    DT                                      289         254 HEIGHT BALANCED
    SQL> explain plan for select count(*) from b where b.dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 749587668
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     8 |  3692   (1)| 00:00:45 |
    |   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
    |*  2 |   TABLE ACCESS FULL| B    | 27862 |   217K|  3692   (1)| 00:00:45 |
    Predicate Information (identified by operation id):
       2 - filter("B"."DT"<=TO_DATE('2009-03-01 00:00:00', 'yyyy-mm-dd
               hh24:mi:ss') AND "B"."DT">=TO_DATE('2008-10-01 00:00:00', 'yyyy-mm-dd
               hh24:mi:ss'))
    16 rows selected.
    SQL> explain plan for select count(*) from b where id between 200810 and 200903 ;
    Explained.
    SQL> select * from table(dbms_xplan.display) ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 749587668
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     5 |  3690   (1)| 00:00:45 |
    |   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
    |*  2 |   TABLE ACCESS FULL| B    | 32505 |   158K|  3690   (1)| 00:00:45 |
    Predicate Information (identified by operation id):
       2 - filter("ID"<=200903 AND "ID">=200810)
    14 rows selected.
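    A possible workaround, as a sketch only (not verified here): since the DT estimates were close to the actuals whenever DT carried no histogram, you could keep the histogram on ID but regather DT with a single bucket, using only standard DBMS_STATS calls:
    SQL> exec dbms_stats.delete_column_stats(user, 'B', 'DT') ;
    SQL> exec dbms_stats.gather_table_stats(user, 'B', method_opt=>'FOR COLUMNS DT SIZE 1') ;
    With no histogram on DT, the optimizer falls back to its uniform-distribution estimate, which in the tests above was much closer to the true row count.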

  • Best Practice in V7.0 : Issues with Sales Planning and Reporting

    I am trying to install the SAP Best Practices for BPC 5.1 on SAP BPC 7.0 SP04. I have done this as I cannot find any Best Practices documents for version 7 as yet.
    I have managed to get through the Administration setup and most of the BPC Administration Configuration Guide; however, I am having a problem with 7.4 Running a Data Management Package - Import, on page 32 of 36. This step involves uploading a data file, Demo_Revenue_Data.txt, into BPC.
    The import fails with the error "Invalid dimension ACCOUNT in lookup".
    I believe that this error may be driven by a previous step, 6.4 Creating Script Logic, where the logic for the BP_Sales application was required.
    My question is twofold, in that I need to determine:
    1. Has anyone else tried the Best Practices for BPC 5.0 in BPC 7.0?
    2. Does anyone know how to overcome the error when uploading the Demo Revenue into BPC?

    Hi,
    The BPC best practices documents from version 5 also work fine for 7.0, because 7.0 is just an update of 5.x.
    Running an import involves logic only if you are running the package with the Run Default Logic option enabled.
    Your issue seems to be related to mapping, which means you have to check the Transformation and Conversion files.
    In any case, the best practices document will not provide you with information about how to build Transformation and Conversion files.
    You should follow an SAP BPC training; it will help you to build your application more easily and faster.
    Regards
    Sorin Radulescu

  • Best way to plan prices, head-count etc...

    Hi all,
    I've been spending quite some time thinking about an appropriate way to implement the following requirement:
    Planning quantities and also prices through different dimensions.
    Standard behavior in BPS is that everything gets summed up. Let's say I plan my regional sales volumes (quantities, prices, amounts) and then switch to the national planning level. My regional prices would then SUM UP to an astronomically high national price (instead of averaging, taking the maximum, etc.). BPS simply ignores the aggregation-behavior flag of the InfoObject. So what to do?
    Is there a best practice to plan with prices in BPS?
    Many thanks in advance!
    Simon

    Hi Aneesh,
    Thanks for your answer. Your latter suggestion doesn't apply to us, because they want to plan different prices on the regional and national levels... on the monthly and yearly levels, and so on.
    Your first hint is exactly what we're doing, but I think it will get us into a lot of trouble in the long run. We use, for example, a flag characteristic for "monthly level", with which we try to separate the monthly prices from the yearly prices. Whenever someone enters a price or a quantity, a whole set of function sequences fires and recalculates averages etc. on different levels.
    I just thought that there must be a standard solution for this.
    Simon

  • What is best practice to do APO Demand planning

    Hi,
    My client wants to implement demand planning.
    The client has come up with one scenario: a new customer is created in ECC, and if I use the BI-then-APO flow (ECC -> BI -> APO-BI) for demand planning, the user will have to wait another day (as BI always has a one-day delay).
    For this scenario the user is insisting on a direct ECC to APO-DP interface.
    Can anybody suggest what the best practice for demand planning should be?
    ECC -> Standalone BI -> Planning area (Planning is done in APO) -> Stand alone BI
    Or ECC -> APO-DP (Planning is done in APO) -> Standalone BI system
    I hope I am able to explain my scenario.
    Regards,
    ST

    Hi Sujoy,
    Thanks for reply.
    (1) I have to get Sales Order data from ECC into BI standalone system Cube.
    (2) Then from this cube same data is sent to SCM APO - BI Cube.
    (3) Planning will be done in SCM APO.
    (4) Planned data is again sent to the BI standalone system, using a data mart on the planning area.
    (5) In BI we will have reporting on cube, which has ECC sales order data and APO Planned data.
    But in this case, there is always a delay between data loads (first ECC -> BI, then BI -> APO, then APO -> BI).
    In this case, if a new customer is created, the client wants to see demand planning on the latest data.
    How do we do APO DP generally?
    Do we follow the route through BI, or integrate directly between ECC and APO?
    I hope I am able to explain the scenario.

  • BPC:NW - Best practices to load Transaction data from ECC to BW

    I have a very basic question about loading GL transaction data into BPC for a variety of purposes; it would be great if you could point me towards best practices/standard ways of building such interfaces.
    1. For Planning
    When we are doing the planning for cost center expenses and need to make variance reports against the budgets, what would be the source Infocube/DSO for loading the data from ECC via BW, if the application is -
    YTD entry mode:
    Periodic entry mode:
    What difference does it make to use the 0FI_GL_12 DataSource versus the 0FIGL_C10 cube or the 0FIGL_O14 or 0FIGL_D40 DSOs?
    Based on the data entry mode of the planning application, what is the best way to make use of the 0BALANCE or debit/credit key figures on the BI side?
    2. For consolidation:
    Since we need to have trading partner information, what are the best practices for loading the actual data from ECC?
    What are the typical mappings to be maintained for movement type with flow dimensions?
    I have seen multiple threads with different responses, but I am looking for the best practices and the scenarios you are using to load such transactions from the OLTP system. I would really appreciate some functional insight into these scenarios.
    Thanks in advance.
    -SM

    For planning, please take a look at the SAP Extended Financial Planning rapid-deployment solution: the G/L Financial Planning module. This RDS captures best-practice integration from BPC 10 NW to the SAP G/L. This RDS (including content and documentation) is free to licensed customers of SAP BPC. This RDS leverages the 0FIGL_C10 cube mentioned above.
      https://service.sap.com/public/rds-epm-planning
    For consolidation, please take a look at the SAP Financial Close & Disclosure Management rapid-deployment solution. This RDS captures best-practice integration from BPC 10 NW to the SAP G/L. This RDS (including content and documentation) is also free to licensed customers of SAP BPC.
    https://service.sap.com/public/rds-epm-fcdm
    Note:  You will require an SAP ServiceMarketplace ID (S-ID) to download the underlying SAP RDS content and documentation.
    The RDS documentation discusses the how and why of best-practice integration. You can also contact me directly at [email protected] for consultation.
    We are also in the process of rolling out the updated 2015 free training on these two RDS.  Please register at this link and you will be sent an invite.
    https://www.surveymonkey.com/s/878J92K
    If the link is inactive at some point after this post, please contact [email protected]

  • Best Practices for SAP BPC

    Hi Gurus,
    Please provide some best practices for BPC, or the older version of BPC (OutlookSoft), for planning scenarios.
    regards
    kishan

    Hi kishan swagath,
    For best practices, the link below is a good resource:
    http://help.sap.com/bp_bpcv151/html/index.htm
    You can find the best scenarios there.
    Hope it is helpful.
    Regards,
    Naresh.K

  • BI BPC server strategy and the best practice

    Hi Everyone,
    I would like to ask a couple of questions:
    1) Is there any white paper or documentation on the pros and cons of having BPC NW installed as an add-on to a BW system, where planning and reporting take place on the same BW system, versus BPC as a separate system used primarily for planning and consolidation only?
    2) Is there a best practice document with performance considerations for BPC development from SAP?
    Any answers are appreciated.
    Regards
    AK

    Hi AK,
    Both scenarios work well, but for the first scenario, having BPC on top of existing BW reporting, you need to pay special attention to sizing, as BPC requires additional capacity.
    Also, if you have the SEM component on your BW system, you need to check SAP Note 1326576: SAP NW system with SAP ERP software components.
    And before you install BPC, it is recommended to run a quick test once you have upgraded to NW EhP1 (a prerequisite for BPC) to check the existing BW reporting process.
    regards,
    Sreedhar

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have an homegrown Access database originally designed in 2000 that now has an SQL back-end.  The database has not yet been converted to a higher format such as Access 2007 since at least 2 users are still on Access 2003.  It is fine if suggestions
    will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution.  We have thousands of products each with a single identifier.  There are customers who provide us regular sales reporting for what was
    sold in a given time period -- weekly, monthly, quarterly, yearly time periods being most important.  This reporting may or may not include all of our product identifiers.  The reporting is typically based on calendar-defined timing although we have
    some customers who have their own calendars which may not align to a calendar month or calendar year so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products for each report.  Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named.  The
    product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week.  Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand,
    returns, on hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time
    period,  sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's
    current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division, and time period). Due to the overall volume of information and the number of Excel sheets, cross-referencing can take considerable time. Is it possible to
    set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or
    categories across specific weeks/months/years, etc.? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain
    the sales reporting information indefinitely.
    I welcome any suggestions, best practice or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period).  Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time.  Is it possible to
    set-up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
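    As a concrete starting point for that design discussion, here is a minimal T-SQL sketch of one possible normalized layout. Every table and column name below is hypothetical (nothing here comes from the existing Access system), the narrow name/value measure table is just one way to absorb the 4-30 varying columns per customer report, and the date type assumes SQL Server 2008 or later (use datetime on older versions):
    -- Hypothetical reporting-period table: one row per customer per reported period.
    CREATE TABLE SalesPeriod (
        PeriodID    int IDENTITY PRIMARY KEY,
        CustomerID  int NOT NULL,          -- FK to your existing customer table
        PeriodType  varchar(10) NOT NULL,  -- 'Week', 'Month', 'Quarter', 'Year'
        StartDate   date NOT NULL,         -- explicit dates support customer-specific calendars
        EndDate     date NOT NULL
    );
    -- Hypothetical fact table: one row per product, period, and reported measure.
    CREATE TABLE SalesFact (
        PeriodID     int NOT NULL REFERENCES SalesPeriod(PeriodID),
        ProductID    int NOT NULL,         -- FK to your existing product table
        MeasureName  varchar(50) NOT NULL, -- e.g. 'OnHandUnits', 'SellThroughUnits'
        MeasureValue decimal(18,2) NULL,
        PRIMARY KEY (PeriodID, ProductID, MeasureName)
    );
    Crosstab queries in Access, or PIVOT on the SQL Server side, can then turn MeasureName back into columns for any given report, which is exactly the cross-customer, cross-period lookup that is slow across many Excel files.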

  • Best practice to Change Dial plan?

    Hi,
    Customer has made plenty of misdialed 911 calls, so they want to change the dial plan. They have CUCM, CUC, and UCCX. I will suggest putting in a delay of 3 seconds or so, or blocking via a 911! or 911!# translation pattern, but in case they do want to change their dial-out number, what's the best practice for this? I tried looking for a suggestion or document but couldn't find one... at this point I can only think of copying the existing route patterns and changing the dial-out number to 8; if they will have an H.323 gateway, it might also require configuration on the dial peers... Any suggestions on this are appreciated. Thanks

    Hi Vishal,
    If 911 is being dialed accidentally, you can try configuring a 9.911 or 911# route pattern. 9.911 will require you to change the destination pattern and forward-digit settings on the dial peer, or you can strip the leading digit on CUCM itself, which will not require a change on the dial peers. Other than that, you need to check your dial plan and see if there are any route patterns that overlap with 911; you can try editing them as well by changing the first digit of those route patterns to something other than 9.
    HTH
    Manish
