DP related queries

Hi Gurus,
In Demand Planning, can we use the same data view for different users? Is it possible to keep certain rows input-ready for some users while others cannot change them in the same view?
Can we use the same macro for different planning books in the same planning area? We would like to define all the DP macros in one book and reuse them in the other planning books. I would like to run the macros only in this view and still be able to see the results in the other books. How is that possible?
Finally, what is a lag report? Could you please explain the concept?
Thanks,

Hello:
A lag report in forecasting is a type of error report (aka accuracy report) that posts the forecasts captured at various snapshots in time and compares them to actual history for a given "Lag n", where n is the number of periods between forecast generation and the period being forecast (using months as the standard period of comparison for this discussion).
A forecast is typically captured at the end of the month, BEFORE a new forecast is generated from the updated sales/demand history.
The forecast is actually a lead, but once it becomes historical we call it a "lag", i.e. the forecast made n months ago - hence the term "lag report".
Here the Lag 0 forecast generated on 1/1/2010 is 100 units, i.e. the forecast at the start of January for January;
the Lag 1 forecast generated on 1/1/2010 is 200 units, i.e. the forecast at the start of January for February.
The report shows the forecasts against the actual history:

                     Lag        0      1      2      3      4
                              Jan    Feb    Mar    Apr    May
    Fcst on 1/1/2010          100    200    300    400    500
    Actual History            120    250    275    500    600
    Difference                -20    -50     25   -100   -100

                     Lag        0      1      2      3      4
                              Feb    Mar    Apr    May    Jun
    Fcst on 2/1/2010          275    350    450    550    700
    Actual History            250    275    500    600    750
    Difference                 25     75    -50    -50    -50
You may consolidate the units, dates and measures (i.e. the difference between forecast and actuals) in many ways, and they may also be expressed over a span of history (percent error for one month, or summed over three months).
Your particular reporting needs depend on your particular industry. Typically "Lag 2" may be the operational target for many manufacturing industries, but reporting on "Lag 0" may be just as important.
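To make the arithmetic concrete, here is a minimal sketch (plain Java, not SAP DP - the class and data below are illustrative only) of how a lag report lines one forecast snapshot up against actuals:

    // Minimal lag-report sketch: fcstJan1[n] is the Lag-n value of one snapshot.
    public class LagReport {
        public static void main(String[] args) {
            int[] fcstJan1 = {100, 200, 300, 400, 500};   // generated 1/1/2010, Jan..May
            int[] actuals  = {120, 250, 275, 500, 600};   // actual history, Jan..May
            for (int lag = 0; lag < fcstJan1.length; lag++) {
                int diff = fcstJan1[lag] - actuals[lag];        // signed error
                double pctErr = 100.0 * diff / actuals[lag];    // percent error per period
                System.out.printf("Lag %d: fcst=%d actual=%d diff=%d (%.1f%%)%n",
                        lag, fcstJan1[lag], actuals[lag], diff, pctErr);
            }
        }
    }

Each later snapshot (e.g. the one generated 2/1/2010) is compared the same way, just shifted one period along the actuals.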
Regards,

Similar Messages

  • SD Related Queries

    Hello all,
    In our project we are using the Automotive industry module,
    so now I want to pick the Automotive BW SD related queries from
    [http://help.sap.com/erp2005_ehp_04/helpdata/EN/50/296fe7bf1a474f84d5955cedefa0a3/frameset.htm]
    Please point me to the BW SD queries in the above link.
    Regards.

    No one is answering my question, so admin, please delete this thread. Thanks.

  • AME related queries, where to post ?

    Hi all,
    Can you please tell me where we should post our AME related queries. Since Oracle treats AME as part of HRMS, do we post those queries in this forum, or is there a separate forum for this purpose? Please provide the link as well, if you can.
    Thanks a lot in advance.

    You can post it here, I think.

  • Automotive Industry SD Related Queries

    Hello all,
    In our project we are using the Automotive industry module,
    so now I want to pick the Automotive SD related queries from
    [http://help.sap.com/erp2005_ehp_04/helpdata/EN/50/296fe7bf1a474f84d5955cedefa0a3/frameset.htm]
    Please point me to the SD queries in the above link.

    Check this link
    [SAP Best Practices for Automotive|http://help.sap.com/content/bestpractices/industry/bestp_industry_automotive.htm]
    thanks
    G. Lakshmipathi

  • Idoc Related queries

    1. How can we view and rectify the errors or warnings caused while we create a new IDoc, which may be an extension of an existing basic IDoc type (at transaction code WE30)?
    2. How can we delete an IDoc type we created, if it has already been released (at transaction code WE30) and configured (at transaction code WE82)?
    3. Is it mandatory that the 'Mandatory' checkbox always be checked whenever we create (extend) a new segment on an existing segment (at transaction code WE30)?
    4. On what basis can we identify to which existing segment we should append our new segment (if any is to be appended)?

    Hi Nagarajan,
    Answers to your questions:
    1) How can we view and rectify the errors or warnings caused while we create a new IDoc, which may be an extension of an existing basic IDoc type (at transaction code WE30)?
    IDoc types are created in WE30. First set a breakpoint in the related user exit, then test with WE19: enter the erroneous IDoc number in WE19 and press F8. It will display the segments. Then type /H in the command box and press the 'Inbound function module' push button (just beside the 'Inbound' push button). The function module opens in debug mode and we can test it.
    2) How can we delete an IDoc type we created, if it has already been released (at transaction code WE30) and configured (at transaction code WE82)?
    Yes, it is possible to delete an IDoc type that has been released from our system; I think it is done through a remote function, but I am not sure.
    3) Is it mandatory that the 'Mandatory' checkbox always be checked whenever we create (extend) a new segment on an existing segment (at transaction code WE30)?
    Select that checkbox based on the requirement. Suppose you upload data for transaction MM01: observe which fields are mandatory on that screen, and choose the 'Mandatory' checkbox for the corresponding fields in the segment. (If the material number is mandatory in MM01, select the 'Mandatory' checkbox for MATNR while creating the segment.)
    4) On what basis can we identify to which existing segment we should append our new segment (if any is to be appended)?
    Based on the basic IDoc type and the information given by the user.
    Hope this helps you; reply with further queries.
    Regards,
    Kumar

  • Adobe create suite 64-bit related queries

    Hi,
    I have a few questions related to 64-bit support in Adobe products.
    1. Do Adobe Illustrator CS3, CS4 and CS5 support 64-bit?
    2. Do Adobe Photoshop CS3, CS4 and CS5 support 64-bit?
    3. I heard that CS5 would support 64-bit, and that all applications in Creative Suite 5 would support 64-bit. Is that right?
    4. Do 32-bit and 64-bit have separate installers, or can the same installer be used on both 32-bit and 64-bit?
    5. On which Windows platforms will CS 64-bit be supported?
    6. On which Mac platforms will CS 64-bit be supported?
    7. Does a separate licence need to be purchased for 32-bit and 64-bit, or can the same license be used?
    Please clarify the above queries.
    Regards,
    Avudaiappan

    Find answers inline.
    AvudaiappanSornam wrote:
    1. Do Adobe Illustrator CS3, CS4 and CS5 support 64-bit?
    Illustrator CS5 is not 64-bit.
    2. Do Adobe Photoshop CS3, CS4 and CS5 support 64-bit?
    Photoshop CS5 is 64-bit.
    3. I heard that CS5 would support 64-bit, and that all applications in Creative Suite 5 would support 64-bit. Is that right?
    Since the answer to question 1 is no, you know the answer.
    4. Do 32-bit and 64-bit have separate installers, or can the same installer be used on both 32-bit and 64-bit?
    The same download can install 64-bit if you have a 64-bit OS.
    5. On which Windows platforms will CS 64-bit be supported?
    XP, Vista, Win 7.
    6. On which Mac platforms will CS 64-bit be supported?
    10.5.7 and 10.6.x.
    7. Does a separate licence need to be purchased for 32-bit and 64-bit, or can the same license be used?
    I believe not, but you can always cross-check with Adobe or a reseller before purchasing.

  • Relational queries through JDBC with the help of Kodo's metadata for O/R mapping

    Due to JDOQL's limitations (inability to express joins, when relationships
    are not modeled as object references), I find myself needing to drop down to
    expressing some queries in SQL through JDBC. However, I still want my Java
    code to remain independent of the O/R mapping. I would like to be able to
    formulate the SQL without hardcoding any knowledge of the relational table
    and column names, by using Kodo's metadata. After poking around the Kodo
    Javadocs, it appears as though the relevant calls are as follows:
    ClassMetaData cmd = ClassMetaData.getInstance(MyPCObject.class, pm);
    FieldMetaData fmd = cmd.getDeclaredField("myField");
    PersistenceManagerFactory pmf = pm.getPersistenceManagerFactory();
    JDBCConfiguration conf = (JDBCConfiguration) ((EEPersistenceManagerFactory) pmf).getConfiguration();
    ClassResolver resolver = pm.getClassResolver(MyPCObject.class);
    Connector connector = new PersistenceManagerConnector((PersistenceManagerImpl) pm);
    DBDictionary dict = conf.getDictionary(connector);
    FieldMapping fm = ClassMapping.getFieldMapping(fmd, conf, resolver, dict);
    Column[] cols = fm.getDataColumns();
    Does that look about right?
    Here's what I'm trying to do:
    class Foo {
        String name; // application identity
        String bar;  // foreign key to Bar
    }
    class Bar {
        String name; // application identity
        int weight;
    }
    Let's say I want to query for all Foo instances whose bar.weight >
    100. Clearly this is trivial to do in JDOQL, if Foo.bar is an object
    reference to Bar. But there are frequently good reasons for modeling
    relationships as above, for example when Foo and Bar are DTOs exposed by the
    remote interface of an EJB. (Yeah, yeah, I'm lazy, using my
    PersistenceCapable classes as both the DAOs and the DTOs.) But I still want
    to do queries that navigate the relationship; it would be nice to do it in
    JDOQL directly. I will also want to do other weird-ass queries that would
    definitely only be expressible in SQL. Hence, I'll need Kodo's O/R mapping
    metadata.
    Is there anything terribly flawed with this logic?
    Ben
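    For reference, the trivial JDOQL case mentioned above would look something like this minimal sketch (assuming Foo.bar were an object reference to Bar, and pm is an open PersistenceManager):

        import java.util.Collection;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Query;

        // Minimal sketch, assuming Foo.bar were an object reference to Bar
        // rather than a String foreign key.
        public static Collection foosOverWeight(PersistenceManager pm) {
            Query q = pm.newQuery(Foo.class, "bar.weight > 100");
            return (Collection) q.execute(); // all Foo instances whose bar.weight > 100
        }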

    I have one point before I get to this:
    There is nothing wrong with using PC instances as both DAO and DTO
    objects. In fact, I strongly recommend this for most J2EE/JDO design.
    However, there should be no need to expose the foreign key values... use
    application identity to quickly reconstitute an object id (which can in
    turn find the persistent version), or like the j2ee tutorial, store the
    object id in some form (Object or String) and use that to re-find the
    matching persistent instance at the EJB tier.
    Otherwise, there is a much easier way of finding ClassMapping instances
    and in turn FieldMapping instances (see ClassMapping.getInstance () in
    the JavaDocs).
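    A minimal sketch of that suggested lookup (hypothetical - the exact getInstance() overload and the field-mapping accessor should be verified against the Kodo JavaDocs):

        // Hypothetical sketch; verify the exact signatures in the Kodo JavaDocs.
        ClassMapping cm = ClassMapping.getInstance(MyPCObject.class, conf); // conf: the JDBCConfiguration above
        FieldMapping fm = cm.getFieldMapping(fmd);  // assumed lookup from the FieldMetaData above
        Column[] cols = fm.getDataColumns();        // relational columns backing the field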
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com

  • RFC related queries

    Hi Friends,
    I have some queries which are RFC related. Can you please clarify them:
    1) What does this syntax mean:
    call function 'FM' destination 'dev-60'
    If I use this syntax in an FM in dev-90, will it execute the above remote-enabled FM in dev-60?
    Can I use this syntax inside the same remote-enabled FM?
    Thanks and Regards,
    Sakshi

    Hello Sakshi,
    This is a basic question which can be answered by googling. It is really easy; try this [link|http://tinyurl.com/yeqwqfv].
    BR,
    Suhas

  • Oracle 9iDS related queries

    Dear all,
    Please help me with the answers or documents for the following queries.
    1. What are the major differences between Oracle 6iDS and Oracle 9iDS?
    2. Can I execute an application developed in Oracle 6i
    (client-server) as-is in Oracle 9iDS?
    3. Can I execute forms developed in Oracle 9iDS without
    the Application Server?
    4. What is the equivalent of DFLT.PRT (available in 6i) in Oracle 9iDS?
    You can also send me the document (if any) by mail. My mail id is [email protected]
    Thanks

    Hi,
    1. What are the major differences between Oracle 6iDS and Oracle 9iDS?
    - Listener servlet architecture
    - Web only
    - 25+ new features (design, deployment and architecture)
    2. Can I execute an application developed in Oracle 6i (client-server) as-is in Oracle 9iDS?
    You need to recompile it. There are also some obsolete built-ins that you need to migrate if you use them. There is a migration assistant (FMA) contained in the Developer Suite.
    3. Can I execute forms developed in Oracle 9iDS without the Application Server?
    Oracle9iDS only contains a stand-alone OC4J instance that you use for design-time testing of your application. For production, the only supported application server is Oracle Application Server.
    4. What is the equivalent of DFLT.PRT (available in 6i) in Oracle 9iDS?
    This sounds Reports-related, and I would try this question on their forum.
    See also the 9i collateral on http://otn.oracle.com/products/forms
    Frank

  • 'Administration Tool' Related Queries

    Hi All,
    I have a few queries regarding the OBIEE Administration Tool; please help me get answers for these.
    We are using OBIEE version 10.1.3.4.0; any information, documents or sites related to these topics would help.
    1. Suppose I have more than one dimension model in a single RPD, and more than one developer has access to this RPD. Is it possible to restrict access so that a developer working on one dimension model cannot access the other models?
    2. Also, when there is more than one RPD in the Administration Tool and many developers access them, can security be defined like 'User A has access only to RPD1' so that A cannot access any other offline/online RPD?
    3. Must the Administration Tool be installed on the server, or can it be installed on client systems as well? I ask because if more than one developer wants to access the Administration Tool at the same time, how can that be achieved?
    4. My RPD has more than one dimension model; can I import one model from this RPD into another RPD?
    5. What is the multiuser environment? Will it help with any of my above requirements?
    Thanks in advance

    1. No, but you can use MUD to define different projects so that developers "check out" the projects they are working on. See the links provided in the previous response.
    2. Security is defined in each RPD. To open an RPD you need to be an Administrator user defined in that RPD. Online access can be restricted if you block connections to your BI Servers on port 9703 so that they can only come from a local connection or from defined IPs; you will need a firewall for that, though. Offline access cannot be restricted: if I have an RPD locally and I have an admin account, I can open it anywhere I want.
    3. Client-only is fine. You would simply install the client tools on each developer's machine.
    4. Yes; search this forum for "merge repositories", plenty of guidance is already available.
    5. The links provided above give you a good explanation of what MUD is and what it is for. Searching this forum also gives you plenty of information.

  • Monitoring Related Queries - BPIM

    Dear All,
    We are planning to implement Business Process & Interface Monitoring - the first phase mainly involves application-specific monitoring and interface monitoring. I have a few queries on this; it would be great if you could help:
    1) Our present DB is about 35 TB. If we implement BPMon in SolMan, how can we make sure that the performance of monitored systems like ECC, SRM, etc. is not impacted while data is collected by the local CCMS and then passed to the SolMan central CCMS? There could be thousands of open sales orders globally at various locations, so collecting that data could have some impact on system performance.
    2) What are the best practices and recommendations on the BPMon side, specifically for cross-application monitoring like ABAP dumps, update errors and batch files? I already have links to the standard SAP slides, so please don't share the ones from the marketplace.
    3) Do you have any strategy document showing how this was proposed/implemented in some project/client with real-world examples? That would give more clarity.
    4) Escalation management / corrective-measure procedure: is any standard facility available for escalation management? We are also looking for a task-assignment kind of feature where alert actions can be assigned to various project team members by process experts for follow-up, etc.
    Thanks in advance.
    SM

    Hello Suchit,
    1. There is no guarantee that the collectors will not influence performance; however, they are written in a way which should not drastically affect the system. If you are in doubt, I would suggest running a chosen TBI report (in ST13), which more or less illustrates the data collector, and tracing the results (especially a performance trace).
    2. If you have the SAP slides you should be able to find the best practices there. I believe the goal of BPMon is to monitor application/process-specific errors. That is why, for example, the ABAP dumps monitor has a selection on what is dumping, so you can search only among errors belonging to your process. In our case we created a separate business process called cross-process monitoring, and we monitor there things that are critical but not possible to assign to just one process.
    3. The only "strategic" document is the technical documentation of BPMon, as SAP will not identify your critical business processes and tell you how to monitor them (at least not for free).
    4. That depends on what kind of tool you are using for incident management. You can use email notifications if you don't have a specific tool; otherwise you might consider building your own BAPI to support your IM tool.
    BR
    Jolanta

  • Smart Sync Related Queries

    A. Are there any guidelines for defining parent-child relationships and associations when making BAPI wrappers?
        Any documents/notes/links?
    C. How can we separate the BAPI wrapper interface and the filtering rules? Can I bring my filtering/distribution rules defined in the backend
        to the MI middleware? If yes, how? Any documents?
    D. Which type is suitable for master data and which type is suitable for transactional data?
        Is it OK to make every BAPI wrapper of type T51 (server-driven)?
        Are there any drawbacks to this approach?
    E. In the server-driven case, which has more load, the server or the middleware?
    regards
    anubhav

    Hi Anubhav,
    T51 is the best type of SyncBO not only for master data but for transactional data as well. It is the one with the highest performance and the least load on both the backend and MI.
    In the T51 case, the backend informs MI about changes in the data (add/modify/delete). The backend is best placed to identify a change in the data, as the data resides there. The task of identifying data changes is quite easy in the backend (depending on the backend implementation), but if there are too many ways (too many applications) in which the data can be changed in the backend, the effort to catch all these cases will be higher there.
    In the T01 case, MI identifies the changes in the data. Since the data primarily resides in the backend and changes happen there, the only way MI can identify the changes is by comparing every record from the backend with the replicated data in the middleware RDB. This process is very time consuming and can lead to performance problems if the data set is huge. The replication time will also be higher.
    In the case of master data, which seldom changes, the T01 replicator runs periodically (as scheduled), comparing the whole data set only to find out that there are no changes. In the T51 case, the replicator runs automatically only when the backend reports a change in the data.
    Even for transactional data, T51 is better in terms of performance. The delay before data is updated in the middleware after a change in the backend is very small, and is even configurable in the middleware, so the latest data is always there in the middleware.
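    To illustrate the push-vs-poll distinction in generic terms (a minimal sketch in plain Java, not the MI API - all names below are made up):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Objects;

        // Generic sketch of the two replication styles described above.
        public class SyncStyles {

            // T01 style: periodically compare every backend record with the replica -
            // expensive when the data set is large, even if nothing changed.
            static Map<String, String> pollChanges(Map<String, String> backend,
                                                   Map<String, String> replica) {
                Map<String, String> changed = new HashMap<>();
                for (Map.Entry<String, String> e : backend.entrySet()) {
                    if (!Objects.equals(replica.get(e.getKey()), e.getValue())) {
                        changed.put(e.getKey(), e.getValue()); // added or modified record
                    }
                }
                return changed;
            }

            // T51 style: the backend pushes each change, so the middleware only
            // works when there is actually something to replicate.
            interface ChangeListener { void onChange(String key, String newValue); }

            public static void main(String[] args) {
                Map<String, String> replica = new HashMap<>();
                replica.put("MAT-1", "10 PC");
                ChangeListener middleware = replica::put;  // apply the pushed delta directly
                middleware.onChange("MAT-1", "12 PC");     // backend-driven notification
                System.out.println(replica);               // {MAT-1=12 PC}
            }
        }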
    Regards
    Ajith Chandran

  • Cisco Security Manager related queries

    In one of our projects we are running CSM 3.2 on VMware ESX 3.5. There is a project in place to upgrade the ESX to 4.1.
    This looks like a challenge, as CSM 3.2 is not supported on ESX 4.1. Cisco TAC has suggested upgrading CSM to 4.2.
    Queries:
    1. Will moving from CSM 3.2 to 4.2 involve additional license cost?
    2. If we upgrade CSM 3.2 to 4.2 and then upgrade ESX from 3.5 to 4.1, will CSM come back immediately once VMware is upgraded, without anything needing to be done on the CSM end?
    3. Is there any other preferred solution/workaround to manage the situation?
    4. If we have to move CSM from one ESX host to another, what would be the steps involved to retain the same configuration and logs?
    Regards,
    Nitin

    To migrate from 3.2 to 4.2, you will need to acquire a 4.2 license. The part number depends on how many devices your 3.2 installation is licensed for; please refer to Table 2 of this announcement. Your Cisco reseller or partner can provide you a quote, but if you search the Internet for that part number you can see typical costs.
    The procedure for the upgrade would be to first move to the interim step of CSM 4.0 and then to 4.2. Please refer to this guide; also see the section in that guide on moving to a new server for how to handle your ESX upgrade/migration.

  • IRecruitment Related Queries

    Hi,
    1. Where is the Post Advert button located on the vacancy details page, and what is its functionality? I am unable to see that button.
    2. Once I fill in the vacancy details, the page moves on to the "Job Posting Details" page. Is it the same in your case?
    3. Is there an option called source type available on the create candidate page? If not, where do I enter the source type?
    4. Where do I configure the various source types?
    5. Is the configuration for "skills" on the "Create Vacancy" page and the "Create Candidate" page the same? While creating a vacancy the list of skills that appears is the list of competencies, but when I create a candidate I don't get the list of competencies - the LOV is blank.
    Thanks,
    Angelica.

    Hi Angelica,
    1. The Post Advert button is located on the "Job Posting" page; it is created automatically if you have previously defined a site where you want to advertise your job.
    Depending on how you defined your relations with the external site, your vacancy will be posted to that site according to the start/end dates.
    2. It depends on your version; in the latest versions, after Vacancy Details you get the Vacancy Skill Requirements page.
    3. There is an option called source type when creating a candidate. This is a little tricky: when a candidate is entered by an agency, the agency is populated automatically. If a recruiter updates the candidate details (from version IRC D), then he can also fill in the Source Type. (If you can't see the field, look at personalization.)
    5. The vacancy skill list does not populate automatically in the candidate details. You need to define for candidates which skills you wish them to populate.
    My advice: please read the iRecruitment implementation guide carefully, since the module implementation is not straightforward.

  • RG1 related queries

    Hi all,
    I have two queries regarding the RG1 update:
    1. I have done J1I5 and J2I5, but when I do J2I6 I get 0 as the value for all excise conditions. Why?
    2. What value should go into J_2IRG1BAL when I do a stock upload through movement type 561?
    Thanks in advance,
    Saurabh

    Hi Saurabh
    I think the following link will be helpful to you.
    http://help.sap.com/saphelp_47x200/helpdata/en/1e/f4a1a011d811d4b5af006094b9ec21/frameset.htm
    Thanks
    G. Lakshmipathi

  • SPM related queries

    Hi,
    I have a few queries with regard to SPM; please share if you have any info:
    1. If we are using the flat-file mechanism (data extraction) to load data into SPM, is there any program to FTP the file from ECC to SPM (BI)? We have extracted the data on the ECC side and the file is present in AL11.
    2. When we use the direct upload mechanism (data extraction) to load data into SPM, the data is successfully loaded up to the PSA level. As per my understanding it cannot be loaded into the direct update DSO (inbound layer) as per the SPM data model; is there any standard transformation to take data from the PSA to the SPM inbound/detail layer?
    3. If we go with the direct upload mode, how can we handle DSC?
    4. If we go with the direct upload mode, is there any standard method/process chain by which we can load data into the SPM data targets after handling the data in DSC?
    Thanks in advance.
    Regards
    Sakshi

    Hi Sakshi
    1. If we are using the flat-file mechanism (data extraction) to load data into SPM, is there any program to FTP the file from ECC to SPM (BI)? We have extracted the data on the ECC side and the file is present in AL11.
    Ans: If you are within the firewall and the ECC and SPM systems are connected, I am not sure why you would use the flat-file approach. But to answer your question: you would ask the network admin to move the flat files, either manually or through some scheduled script. Do not download files from AL11, as this transaction tends to truncate wide files.
    2. When we use the direct upload mechanism (data extraction) to load data into SPM, the data is successfully loaded up to the PSA level. As per my understanding it cannot be loaded into the direct update DSO (inbound layer) as per the SPM data model; is there any standard transformation to take data from the PSA to the SPM inbound/detail layer?
    Ans: There is a data management tool in SPM which moves data from the PSA to the inbound layer. This tool does things like currency/unit conversions, system-uniqueness concatenations, export and re-import of data for data classification and normalization (like the DSE service), etc. Data then gets moved from the inbound layer to the cubes through process chains.
    3. If we go with the direct upload mode, how can we handle DSC?
    Ans: If you take over the data management function, which is your choice, by writing a transformation from the PSA to the inbound layer, then you would need to figure out how you would import and export for DSE. You could think of using the BW Open Hub.
    4. If we go with the direct upload mode, is there any standard method/process chain by which we can load data into the SPM data targets after handling the data in DSC?
    Ans: If you are bypassing the data management tool, then you would need to figure out the interconnectivity of the data with DSE, as mentioned in the previous answer. You would then connect those transformations to the standard process chains - check the table OPMDM_TYPES for the process chains mapped to the respective objects (both master data and transaction data).
    Regards
    Rajesh
