Best practice for Plan and actual data

Hello, what is the best practice for plan and actual data? Should they both be in the same application or in different ones?
Thanks.

Hi Zack,
It will be easier for you to maintain the data in a single application. Every application must have the category dimension, so you can use that dimension to hold both actual and plan data.
Hope this helps.
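
For illustration only, a single application with a category dimension can hold both data sets side by side, with the category acting as a filter. A minimal SQL sketch, assuming a hypothetical fact table and hypothetical category values:

    -- Hypothetical fact table; the category column separates actual from plan.
    SELECT entity, time_period, account, SUM(signed_data) AS amount
    FROM   finance_fact                    -- hypothetical table name
    WHERE  category = 'ACTUAL'             -- or 'BUDGET' / 'FORECAST' for plan data
    GROUP  BY entity, time_period, account;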

Similar Messages

  • Best Practice for Planning and BI

    What's the best practice for Planning and BI infrastructure: should it be set up combined on one box or on separate ones? What factors should be considered?
    Thanks in advance.

    There is no way that question could be answered with the information that has been provided.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Is there a standard report for planned and actual delivery dates?

    Hi Guru,
    Could you help me find a standard report to compare the planned delivery date in the PO with the actual delivery date (the GR posting date)?
    Is there a standard report for planned and actual delivery dates, or do I have to create a custom report?
    Thank you very much.

    Hi,
    I don't think it is possible to see both in one report, so try creating a Z report.
    You can try ME80FN: you have to switch views there; in the delivery schedule view you can see the delivery date, and in the PO history view you can see the posting date.
    Regards,
    Kunal
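    A custom report would essentially join the schedule-line data with the PO history. A minimal SQL sketch for such a Z report, assuming the standard EKPO/EKET/EKBE tables (EINDT = planned delivery date, BUDAT = GR posting date); verify the field names and the goods-receipt indicator in your system:
    -- Sketch only: planned vs. actual delivery date per PO item.
    SELECT po.ebeln    AS po_number,
           po.ebelp    AS po_item,
           sched.eindt AS planned_delivery_date,
           hist.budat  AS gr_posting_date
    FROM   ekpo po
           JOIN eket sched ON sched.ebeln = po.ebeln AND sched.ebelp = po.ebelp
           JOIN ekbe hist  ON hist.ebeln  = po.ebeln AND hist.ebelp  = po.ebelp
    WHERE  hist.vgabe = '1';   -- assumed to mark goods receipts in the PO history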

  • How to differentiate between plan and actual data

    Hi,
    How do I differentiate between plan and actual data?
    Is that the value type or the version InfoObject?
    What should the values be, and how do they get populated?
    Regards,

    Hello Tapan,
    Plan and actual data are differentiated by the value of the InfoObject 0VTYPE; value 10 stands for actual data. Different plan versions can be distinguished by different values of the InfoObject 0VERSION.
    These values are populated when actual data is loaded or plan data is entered.
    Hope it helps.
    Regards,
    Praveen
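    For illustration only, in a reporting view the split might look like the sketch below. The table and column names are hypothetical (in BW you would normally restrict 0VTYPE and 0VERSION in the query definition rather than in SQL), and the value-type codes should be checked in your system:
    -- Hypothetical reporting view over the InfoCube data.
    SELECT fiscper, account, SUM(amount) AS actual_amount
    FROM   sales_reporting_view
    WHERE  vtype = '010'                  -- assumed code for actual data
    GROUP  BY fiscper, account;

    SELECT fiscper, account, version, SUM(amount) AS plan_amount
    FROM   sales_reporting_view
    WHERE  vtype = '020'                  -- assumed code for plan data
    GROUP  BY fiscper, account, version;  -- one row per plan version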

  • How can I get best practices for SD and MM

    Please, can anybody tell me how I can get best practices for SD and MM from a functional approach?
    Thanks
    Utpal

    Hello Utpal,
    I am really surprised that in just 10 minutes you searched that site and found it not useful. Check out my previous reply: "you will not find screenshots in this, but you can add them yourself".
    You will not find a ready-made document; you need to adapt this to your requirements.
    By the way, the following link gives some more links for people new to SAP, which should be helpful. Check out the basic "how to" transactions:
    New to Materials Management / Warehouse Management?
    Hope this helps.
    Regards
    Arif Mansuri

  • DNS best practices for hub and spoke AD Architecture?

    I have an Active Directory forest with a forest root such as joe.co, the root domain of the same name, and root DNS servers (domain controllers) dns1.joe.co and dns2.joe.co.
    I have child domains with names in the form region1.joe.co, region2.joe.co and so on, with DNS servers dns1.region1.joe.co and so on.
    Each region has distributed offices that may have a DC in them, with servers named in the form dns1branch1.region1.joe.co.
    Over all my DNS tests out okay, but I want to get the general guidelines for setting up new DCs correct.
    Configuration:
    The root DC/DNS server dns1.joe.co has its adapter settings pointing DNS to itself, then to the two other root domain DNS/DCs, dns2.joe.co and dns3.joe.co.
    The other root domain DNS/DCs have adapter settings pointing to the root server dns1.joe.co, then to themselves (e.g. dns2.joe.co), and then to 127.0.0.1.
    The regional domains have a root DNS server dns1.region1.joe.co with adapter settings that point to the root server dns1.joe.co and then to itself.
    The additional regional domain DNS/DCs have adapter settings pointing to dns1.region1.joe.co, then to themselves, then to dns1.joe.co.
    What would you do to correct this topology (and settings) or improve it?
    Thanks in advance
    just david

    Hi,
    According to your description, my understanding is that you need suggestions about your DNS topology.
    In theory, there is no obvious problem. Besides namespace and server planning for DNS, zone design also needs consideration. If you place a DNS server in each domain and subdomain, check whether the DNS traffic will affect network performance.
    Besides, fault tolerance and security are also necessary.
    We usually recommend that:
    A DC with DNS should point to another DNS server as primary and to itself as secondary or tertiary. It should not point to itself as primary, due to the various DNS islanding and performance issues that can occur. And when referencing a DNS server on itself, a DNS client should always use a loopback address and not a real IP address. For detailed information you may reference:
    What is Microsoft's best practice for where and how many DNS servers exist? What about for configuring DNS client settings on DCs and members?
    http://blogs.technet.com/b/askds/archive/2010/07/17/friday-mail-sack-saturday-edition.aspx#dnsbest
    How To Split and Migrate Child Domain DNS Records To a Dedicated DNS Zone
    http://blogs.technet.com/b/askpfeplat/archive/2013/12/02/how-to-split-and-migrate-child-domain-dns-records-to-a-dedicated-dns-zone.aspx
    Best Regards,
    Eve Wang

  • Best Practice for SUP and WSUS Installation on Same Server

    Hi Folks,
    I have a question, I am in process of deploying SCCM 2012 R2... I was in process of deploying Software Update Point on SCCM with one of the existing WSUS server installed on a separate server from SCCM.
    A debate has started with one of my colleagues, who says that using a remote WSUS server is recommended by Microsoft for scalability and security: WSUS would download the updates from Microsoft, and SCCM would work as a downstream server fetching updates from the WSUS server.
    But in my view it is recommended to install the WSUS server on the same server where SCCM is installed; actually, it is recommended to install WSUS on a site system, and you can use the same SCCM server to deploy WSUS.
    Please advise me on the best practices for deploying SCCM and WSUS. What does Microsoft say: should WSUS be installed on the same server as SCCM, or should WSUS be on a separate server from the SCCM server?
    Awaiting your advice. :)
    Regards, Owais

    Hi Don,
    Thanks for the information. Another quick one:
    is the above-mentioned configuration I did correct in terms of planning and best practices?
    I agree with Jorgen, it's ok to have WSUS/SUP on the same server as your site server, or you can have WSUS/SUP on a dedicated server if you wish.
    The "best practice" is whatever suits your environment, and is a supported-by-MS way of doing it.
    One thing to note is that if WSUS ever becomes "corrupt" it can be difficult to repair, and sometimes it's simplest to rebuild the WSUS Windows OS. If this is on your site server, that's a big deal.
    Sometimes WSUS goes wrong (not because of ConfigMgr).
    Note that if you have a very large estate, or multiple primary site servers, you might have a CAS, and you would need a SUP on the CAS. (This is not a recommendation for a CAS, just something to be aware of.)
    Don

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the chart.  This matrix would have Bases for row headings (only those within the selected country) and the column headings would be Month.  The data would be COUNT. (I also need the same matrix with dollar amounts as the data.)
    In Excel this matrix works fine and seems to be very fast.  The problem is with Xcelsius.  I have imported the spreadsheet, but have NOT even created the chart yet and Xcelsius is CHOKING (and crashing).  I changed Max Rows to 7000 to accommodate the data.  I placed a simple combo box and a grid on the canvas - BUT NO CHART yet - and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
    So, I guess this brings up a few questions:
    1)     Am I doing something wrong and did I miss something that would prevent this problem?
    2)     If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
    a.     Do I have to create 50 different Data Ranges in order to improve performance (i.e. Each Country-Category would have a separate range)?
    b.     Would it even work if it had that many data ranges in it?
    c.     Do you aggregate it as a crosstab (months as column headings) and insert that crosstabbed data into Excel?
    d.     Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil
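    As a further illustration, one way to keep the workbook small is to push the aggregation into the source database before the data ever reaches Excel/Xcelsius, so the spreadsheet only holds the pre-summarized rows. A minimal SQL sketch, assuming a hypothetical transactions table with the columns described in the question:
    -- Hypothetical table and column names; adjust to the real source.
    SELECT trans_year,
           trans_month,
           country,
           base,
           category,
           COUNT(*)    AS transaction_count,
           SUM(amount) AS total_amount
    FROM   transactions
    GROUP  BY trans_year, trans_month, country, base, category;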

  • What are best practices for packaging and deploying J2EE apps to iAS?

    We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
    Recently, however, we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the first time but may work the second time). Also, we've very occasionally noticed that old versions of classes find their way onto our machines.
    What is considered to be best practice in terms of packaging and deployment, specifically:
    1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ jsp files. Are people out there using this or are they scripting it with a build tool such as Ant?
    2) Deploying an existing application to multiple iAS's. Are you guys unregistering old application then reregistering new application? Are you shutting down iAS whilst doing the deployment?
    3) Deploying ear files can take 5 to 10 mins, is this normal?
    4) In a clustered scenario where HTTPSession is shared what are the consequences of doing deployments to data stored in session?
    Thanks in advance for your replies.
    Owen

    You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and run-time layout of your application that make clear where your application is loading its files from.
    If you have a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and KJS start time. Generally this is due to garbage accumulating in your iAS registry.
    You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course you've got to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
    For the second method: BE CAREFUL - BACK UP FIRST
    There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent your previously deployed components but are not represented in the set of active GUIDs. Record these older GUIDs, remove them from ClassImp and ClassDef, and finally remove the older entries from NameTrans.
    Best practices for deployment depend on your particular environmental needs. Many people utilize ANT as a build tool. In later versions of the application server, complete ANT scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS specific targets and general J2EE targets. There are iAS specific targets that can be utilized with the 1.3 version. Specialized build targets are not required however to deploy to iAS.
    Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSP's however benefit more fully from the services that iAS offers.
    2) In general it is better to undeploy and then redeploy. However, if you know that you're not changing GUIDs, not recreating an existing application with new GUIDs, and not removing registered components, you may skip the undeploy phase.
    If you shut down the KJS processes during deployment you can eliminate some additional workload on the LDAP server, which really gets pounded during deployment. This is because the KJS processes detect changes and do registry loads to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
    3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack, but unfortunately you won't see dramatic drops in deployment times.
    One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to redeploy. Simply drop your newer bits into the run-time space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
    4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the life span of a session.

  • Best practice for PK and indexes?

    Dear All,
    What is the best practice for primary keys and indexes? Should we keep them in the same tablespace as the table, or should we create a separate tablespace for all indexes and the primary key? Please note I am talking about a table that has 21 million rows at the moment and is growing by 10k to 20k rows daily. This table is also heavily involved in daily reports and is causing slow performance. Currently the complete table with all associated objects, such as indexes and the PK, is stored in one separate tablespace. If my approach is right, then please advise me how I can improve the performance of retrieval or DML operations on this table.
    Thanks in advance..
    Zia Shareef
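    For illustration, placing the index segments in a dedicated tablespace might look like the sketch below; all object names, paths and sizes here are made up, and whether this actually improves performance depends on your storage layout rather than on the separation itself:
    -- Hypothetical names and paths; adjust to your environment.
    CREATE TABLESPACE billing_idx
      DATAFILE '/u02/oradata/billing_idx01.dbf' SIZE 1000M;

    ALTER TABLE billing_data
      ADD CONSTRAINT billing_data_pk PRIMARY KEY (bill_id)
      USING INDEX TABLESPACE billing_idx;

    CREATE INDEX billing_data_rpt_idx
      ON billing_data (bill_year, bill_month, area_code)
      TABLESPACE billing_idx;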

    Well, thanks for the valuable advice. I am using Oracle 8i; let me tell you the exact problem.
    My billing database has two major tables with almost 21 million rows each: one holds collection data and the other one invoices. Many reports show the data by joining the Customer + Collection + Invoices tables.
    There are 5 common fields between the invoices (reading) and collection tables:
    YEAR, MONTH, AREA_CODE, CONS_CODE, BILL_TYPE (adtl)
    One of my batch processes has the following update, and it is VERY, VERY slow:
    UPDATE reading r
    SET bamount = (SELECT SUM(camount)
                   FROM collection cl
                   WHERE r.ryear = cl.byear
                     AND r.rmonth = cl.bmonth
                     AND r.area_code = cl.area_code
                     AND r.cons_code = cl.cons_code
                     AND r.adtl = cl.adtl)
    WHERE area_code = 1
    Tentatively, area_code 1 has about 20,000 consumers.
    Each consumer may have 72 invoices, and against these invoices there may be 200 rows in the collection table (the system has provision to record partial payments against one invoice).
    NOTE: Presently my process is based on cursors, so the above query runs for one consumer at a time, but just to give an idea I have written it for the whole area.
    Mr. Yingkuan, can you please tell me how I can check whether the table's statistics are current and how I can refresh them? Do stale statistics really affect performance?
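    On the statistics question, a minimal sketch of how this is commonly checked and refreshed on Oracle 8i follows; exact options vary by release, so treat it as a starting point rather than a definitive recipe:
    -- When were statistics last gathered? NULL means never analyzed.
    SELECT table_name, num_rows, last_analyzed
    FROM   user_tables
    WHERE  table_name IN ('READING', 'COLLECTION');

    -- Refresh statistics (DBMS_STATS is the preferred package; plain ANALYZE also works on 8i).
    EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'READING');
    -- or: ANALYZE TABLE reading COMPUTE STATISTICS;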

  • In the Begining it's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables but certainly in my experience it is rare when there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used.)

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok  (assuming of course that the un-installer is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew


  • Kernel: PANIC! -- best practice for backup and recovery when modifying system?

    I installed NVIDIA drivers on my OL6.6 system at home and something went bad with one of the libraries.  On reboot, the kernel would panic and I couldn't get back into the system to fix anything.  I ended up re-installing the OS to recover my system.
    What would be some best practices for backing up the system when making a change, and then recovering if this happens again?
    Would LVM snapshots be a good option?  Can I recover a snapshot from a rescue boot?
    EX: File system snapshots with LVM | Ars Technica -- scroll down to the section discussing LVM.
    Any pointers to documentation would be welcome as well.  I'm just not sure what to do to revert the kernel or the system when installing something goes bad like this.
    Thanks for your attention.

    There is a common misconception here: a snapshot is not a backup. A snapshot and the original it was taken from initially share the same data blocks. An LVM snapshot is a general-purpose solution which can be used, for example, to quickly create a snapshot prior to a system upgrade; if you are satisfied with the result, you then delete the snapshot.
    The advantage of a snapshot is that it can be used on a live filesystem or volume while changes are written to the snapshot volume. Hence it's called "copy on write" (COW), or copy on change if you want. This is necessary for system integrity: to have a consistent status of all data at a certain point in time while still allowing changes to happen, for example in order to perform a filesystem backup. A snapshot is no substitute for disaster recovery in case you lose your storage media. A snapshot only takes seconds, and initially does not copy or back up any data unless data changes. It is therefore important to delete the snapshot when it is no longer required, in order to prevent duplication of data and restore filesystem performance.
    LVM was never a great thing under Linux and can cause serious I/O performance bottlenecks. If snapshot or COW technology suits your purpose, I suggest you look into Btrfs, which is a modern filesystem built into the latest Oracle UEK kernel. Btrfs employs the idea of subvolumes and is much more efficient than LVM because it can operate on files or directories, while LVM works on the whole logical volume.
    Keep in mind, however, that you cannot use LVM or Btrfs for the boot partition, because the GRUB boot loader, which loads the Linux kernel, cannot deal with LVM or Btrfs before loading the Linux kernel (catch-22).
    I think the following is an interesting and fun to read introduction explaining basic concepts:
    http://events.linuxfoundation.org/sites/events/files/slides/Btrfs_1.pdf

  • Best practices for accessing configuration data modelled as an XML file in OSB

    Hi,
    I referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is:
    Re: OSB: What is best practice for reading configuration information
    Another could be
    Uploading XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML)
    I need expert answers for following.
    1] I have an .xsd file representing the configuration data. The structure of the XML is:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment.
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let say I create the following Folder structure to store the Configuration file specific for dev/stage/prod instance
    OSB Project Folder
    |
    |---Dev
    |
    |--Dev_Config_file.xml
    |
    |---Stage
    |
    |--Stage_Config_file.xml
    |
    |---Prod
    |
    |-Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value that specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running, i.e. some construct that acts as a global configuration and can be accessed inside the OSB message flow. If the value of that global variable is "Dev", I will load at runtime the XML config file under the Dev directory containing the key-value pairs for the Dev environment.
    6] This thread, Re: OSB: What is best practice for reading configuration information,
    suggests designing a web application that serves the XML file over HTTP and getting the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependency? I read about the configuration file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear. I really appreciate your comments and suggestions.
    Sushil

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    e.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Chk this link for details on $inbound/ctx:transport : http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822

  • Best Practices for converting SAP HR data (4.7 to ECC)

    Hello Experts ...
    We are going from 4.6 to ECC; it is not an upgrade, it will be a new implementation.
    I am looking for best practices for converting SAP HR data from one SAP instance (4.6) to another (ECC).
    I am not sure if direct input, LSMW, or some other method/tool is the best way.
    I will really appreciate, and award points for, any good advice or documentation.
    Let me know if my question is not clear enough.
    Thanks,

    Hi
    You can check the SAP Service Marketplace and follow the download link there. You will require a Service Marketplace login to download the Best Practices. Fortunately, Best Practices for ECC 5.00 and 6.00 are available there, but they are country-specific versions. I know HCM for the US is available there.
    Reward points, if helpful.
    Regards
    Waz
