Information regarding Best Practices Required

Dear Friends,
    Happy New Year!
I am working as part of the BI Excellence team at a reputed company.
I want to propose to a client that they install SAP BI Best Practices (scenario: SCM). To do that, I need to present the advantages of, and differences between, Best Practices (specific to BI) and a general implementation.
When I search help.sap.com, it speaks only in general terms about the time consumption and guidelines of the overall SAP Best Practices.
Can anyone help me with respect to BI (from blueprint to go-live), and with the timeline differences between an SAP BI Best Practices installation and a general implementation?
An example with a specific scenario like SCM would be ideal: take a cube for Inventory Management, describe the start-to-end implementation process and its timeline, and show how the same differs when we install SAP BI Best Practices.
Please provide your valuable suggestions, as I don't have any implementation experience.
Requesting your valuable guidance.
Regards
Santhosh Kumar N.

Hi,
http://help.sap.com/saphelp_nw2004s/helpdata/en/f6/7a0c3c40787431e10000000a114084/frameset.htm
http://help.sap.com/bp_biv370/html/Bw.htm
Hope it helps.
Thanks & Regards,
SD

Similar Messages

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc. I understand that it's best practice to separate traffic where you can, especially for things like FT, but I wondered whether there is a preferred method of achieving this. What I mean is:
    -     Is it OK to have everything on one switch but set each respective port group to have a primary and a failover NIC, i.e. FT, iSCSI and all the others fail over to another NIC? (This would give you a sort of backup in situations where you have limited physical NICs.)
    -    Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate onto its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not put on its own dedicated switch if I could only afford to give it a single NIC, since that to me seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, but I wondered whether there are any golden, 100% rules; for example, must FT absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.

  • New to ColdFusion - Question regarding best practice

    Hello there.
    I have been programming in Java/C#/PHP for the past two years or so, and as of late have really taken a liking to ColdFusion.
    The question that I have is about the actual separation of code, and whether there are any best practices preached for this language. While I was learning Java, I was taught that it's best to have several layers in your code, for example: front end (JSPs or ASP) -> business objects -> DAOs -> database. All of the code that I have written using these three languages has followed this simple structure, for the most part.
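    For illustration, here is a minimal Java sketch of that kind of layering; all names are hypothetical, and ColdFusion CFCs can be organized along the same lines:

        // Hypothetical three-layer sketch: front end -> service -> DAO -> database.
        public class User {
            private final int id;
            private final String name;
            public User(int id, String name) { this.id = id; this.name = name; }
            public int getId() { return id; }
            public String getName() { return name; }
        }

        interface UserDao {              // data-access layer: the only code that touches the database
            User findById(int id);
        }

        class UserService {              // business layer: rules and validation live here, not in the page
            private final UserDao dao;
            UserService(UserDao dao) { this.dao = dao; }
            User loadUser(int id) {
                if (id <= 0) throw new IllegalArgumentException("bad id: " + id);
                return dao.findById(id);
            }
        }
        // The front end (JSP/CFM page) calls UserService only, never the DAO or SQL directly.

    The payoff is the one you already know from Java: each layer can change, or be tested, without touching the others.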
    As I dive deeper into ColdFusion, most of the examples that I have seen from veterans of this language don't really incorporate much separation. And I'm not referring to the simple "here's what this function does" type of examples online, where most of the code is written in one file; I've been able to see real projects that were created with this language.
    I work with a couple of developers who have been writing ColdFusion for a few years and posed this question to them as well. Their response was something to the effect of, "I'm not sure if there are any best practices for this, but it doesn't really seem like there's much of an issue making calls like this."
    I have searched online for best practices or discussions around this and haven't seen much of anything.
    I do still consider myself somewhat of a noobling when it comes to programming, but matters of best practice are important to me for any language that I learn more about.
    Thanks for the help.

    Frameworks for web applications can require a lot of overhead, more than you might normally need when programming ColdFusion. I have worked with frameworks, including Fusebox. What I discovered is that when handing a project over to a different developer, it took them over a month before they were able to fully understand the Fusebox framework and then program in it comfortably. I decided not to use Fusebox on other projects for this reason.
    For maintainability, sometimes it's better not to use a framework: while there are a number of ColdFusion developers, those that know the Fusebox framework are in the minority. When using a framework, you always have to consider the amount of time needed to learn it and successfully implement it. A lot of it depends on how much of your code you want to reuse. One thing you have to consider is: if you need to make a change to the web application, how many files will you have to modify? Sometimes it's more files with a framework than if you just write code without one.
    While working on a website for electronic component sourcing, I encountered this dynamic several times.
    Michael G. Workman
    [email protected]
    http://www.usbid.com
    http://ic.locate-ic.com

  • Regarding Best Practices Documents

    Hi All,
    How do I search for and download SAP Best Practices documents?
    Thanks in Advance
    Pavan

    Hi Pavan,
    Please go to the URL: http://help.sap.com/
    At the top centre of the page, you will find the SAP Best Practices tab.
    In there, you have Overview, Baseline packages, Industry packages, and Cross-industry packages.
    Click on the desired option and you can download the Best Practices.
    Given below is the Best Practices URL for the industry package for Automotive: Dealer Business Management:
    http://help.sap.com/bp_dbmv1600/DBM_DE/html/index.htm
    (This is for your reference only).
    Hope this helps!
    Regards,
    Shilpa

  • Question regarding best practice

    Hello Experts,
    What is the best way to deploy NWGW?
    We recently architected a solution to install the 7.4 ABAP stack, which comes with Gateway. We chose the central Gateway hub scenario in a three-tier setup. Is this all that's required in order to connect this hub Gateway to the business systems, i.e. ECC? Or do we also have to install the Gateway add-on on our business system in order to expose the development objects to the hub? I'm very interested in understanding how others are doing this and what has been the best way according to your own experiences. I thought creating a trusted connection between the Gateway hub and the business system would suffice to expose the development objects from the business system to the hub, in order to create the Gateway services in the hub out of them. Is this a correct assumption? Happy to receive any feedback, suggestions and thoughts.
    Kind regards,
    Kunal.

    Hi Kunal,
    My understanding is that in the hub scenario you still need to install an add-on in the backend system (IW_BEP). If your backend system is already a 7.40 system, then I believe that add-on (or its equivalent) should already be there.
    I highly recommend you take a look at "SAP Gateway deployment options in a nutshell" by Andre Fischer.
    Hope this helps,
    Simon

  • Best practice required to configure CW and MARS SM and ACS

    Dear All,
    I have a lot of management programs in my corporate org.:
    CW LMS
    CW HUM
    CW QPM
    CW IPM
    ACS
    MARS
    Cisco IPS/IDS 4260
    WLC
    Tandberg system
    Could you advise what is the best service from Cisco that I could buy to have a professional service configure the overall system as one integrated unit,
    so that I have one report showing all the issues, with a customized GUI, for managers, directors, the CTO and the CEO?
    Thank you in advance,
    Ali Alkhafaji

    I have the code working without the use of configuration files. I am just disappointed that it is not working when using the configuration files; that was one of the primary intents of my code refactoring.
    Katherine Xiong, if you are proposing this as an answer, does this imply that Microsoft's stance is not to use configuration files with SSIS? Please answer.
    SM

  • Advice needed regarding best practice

    Hi - curious as to whether what I have set up now should be changed to best utilize Time Machine. I have an iMac with a 750 GB drive (a small chunk is partitioned for Vista); let's assume I have 600 GB dedicated to the Mac.
    I have two FireWire external drives: a 160 GB and a 300 GB.
    Currently, I have my iTunes library on the 300 GB drive, as well as a few FCE files. I have made the 160 GB drive the Time Machine drive. Would I be better off moving my iTunes library to the internal HD and then using the 300 GB drive as the Time Machine drive? As I have it now, I don't think my iTunes library is getting backed up. In an ideal situation, is it safe to assume your Time Machine disk should be at least as large as, if not larger than, the internal HD? Thanks.
    Steve

    Steve,
    I would recommend using a drive that is 2x the size of the files you are going to back up. This is specifically to allow for the case where you make changes to the files and Time Machine starts backing up the new files that you have created. It will back up once every hour, and it will only make a backup copy of files that you have modified. If you are backing up your home folder and you are using FCE, I would say backing up to the 160 GB drive would be sufficient. If you were planning on backing up your home folder and your iTunes library, I would recommend the 300 GB drive. The only reason that you would need a backup drive 2x the size of your HD is if you were backing up your entire drive.

  • SAP Upgrade from 4.7 to ECC 6.0 connected to BW 7.0 Best Practices

    We are upgrading SAP R/3 4.7 to ECC 6.0. We have been running live in a BW 7.0 environment. We have done some enhancements to the 2LIS_11_VAITM (Sales Document Item Data) and 2LIS_13_VDITM (Billing Document Item Data) DataSources. We currently have a test instance that has been upgraded to ECC 6.0.
    What are the best practices for testing BW to ensure that data transfer and the enhancements are working correctly?
    E.g. should we connect the ECC 6.0 instance to BWD and test there, OR upgrade the R/3 TST system that is connected to BWD and test there, OR upgrade QAS and test in BWQ?
    Thanks in advance...

    Hi RWC,
    the plug-in will change slightly; you may notice differences in the screens of RSA2 and others after the upgrade.
    Regarding best practices: in a recent upgrade, our project team decided to create a parallel landscape with a new, additional Dev and QA on the R/3 side.
    We connected these new systems to a BW sandbox and the BW QA.
    We identified all DataSources, transfer rules and InfoPackages in use in production and recorded them, with related objects, onto transports in BW Dev. Before the import into the BW sandbox and BW QA, we adjusted the system name conversion table to convert from the old R/3 Dev to the new R/3 Dev, in order to set up all the connections required for testing with the upgraded R/3 systems.
    After the go-live of the upgrade, we renamed the old R/3 Dev system in BW Dev and ran BDLS to convert everything (speak to your Basis team). That way we made sure not to lose any development, and we got rid of the old R/3 Dev system.
    Take a look at this post for issues we encountered during this project, and test everything you load in production:
    Re: Impact on BI 7.0 due to ECC 5.0 to ECC 6.0 Upgrade
    Best,
    Ralf

  • Best Practice to Integrate CER with RedSky E911 Anywhere via SIP Trunk

    We are trying to integrate CER 9 with RedSky for V911 using a SIP trunk and need assistance with best practice and configuration. There is very little documentation regarding "best practice" for routing these calls to RedSky. This trunk will be handling the majority of our geographically dispersed company's 911 calls.
    My question is: should we use an IPsec tunnel for this? The only reference I found was this: http://www.cisco.com/c/en/us/solutions/collateral/enterprise-networks/virtual-office/deployment_guide_c07-636876.html, which recommends an IPsec tunnel for the SIP trunk to Intrado. I would think there are issues with an unsecured SIP trunk for 911 calls. I am looking for advice or specifics on how to configure this. Does the SIP trunk require a CUBE, or is a CUBE only required for the IPsec tunnel?
    Any insight is appreciated.
    Thank you.

    You can use Session Trace in RTMT to check who is disconnecting the call and why.

  • Importing best practices baseline package (IT) ECC 6.0

    Hello,
    I hope this is the right forum.
    I have an SAP release ECC 6.00 with ABAP stack 14.
    In this release I have to install the preconfigured SmartForms, which are now called the
    Best Practices Baseline Package. These packages are localized, and mine is for Italy:
    SAP Best Practices Baseline Package (IT)
    The documents about the installation say that the required support package level is stack 10.
    And it says:
    "For cases when the support package levels do not match the Best Practices requirements, especially when HIGHER support package levels are implemented, only LIMITED SUPPORT can be granted"
    Note 1044256
    In your experience, is it possible to do this installation at this support package level?
    Thanks
    Regards
    Nicola Blasi

    Hi,
    a company wants to implement the preconfigured SmartForms in an ECC 6.0 landscape.
    I think these SmartForms can be implemented using SAP Best Practices, in particular the Baseline Package (see service.sap.com/bestpractices -> Baseline Package); once it is installed, you can configure the scenario you want.
    The package to download differs by localization, for example Italy or another country, but this is not important at the moment.
    The problem is Note 1044256: it says that to implement this, I must have the support package level requested in the note - not lower and, above all, not higher.
    Before starting with this Baseline Package installation, I'd like to know whether I can do it, because I have an SP level of 14 for ABA and BASIS, for example, while the note says it wants an SP level of 10 for ABA and BASIS.
    What can I do?
    I hope it is clear now; let me know.
    Thanks
    Nicola

  • Best practices designing JSF applications

    I have an intermediate level of knowledge of JSF, but after completing a component for an application I'm still stumped about the best practices required when designing a JavaServer Faces application.
    The issue I faced most in the first iteration was the loading of dependent objects.
    Wherever we needed to display an editable view of an object by way of a request parameter, there was always the question: at what point should we load the entity?
    For example, I have a page edit.jspx?itemId=21
    In my faces-config, I have a backing bean which has a managed property:
    <managed-property>
        <property-name>itemId</property-name>
        <property-class>java.lang.Integer</property-class>
        <value>#{param.itemId}</value>
    </managed-property>
    My backing bean has the getters and setters for this property, but at which point is it best to load the item with id "itemId"?
    It also gets a bit more complex when I depend on other injected properties, for example an application-scope manager class.
    One thing I have learned is that the order in which you declare the managed properties in faces-config appears to be the order in which they are injected. I can't give a definitive answer on this, as that's just the way it works on Sun App Server 8.1 and JSF 1.1. But I'm still trying to work out when to perform a simple load of dependent entities in a backing bean.
    One approach I have looked at taking is introducing some "Loader" classes - basically a POJO that performs loads based on request-parameter setting events:
        <managed-bean>
            <managed-bean-name>UserLoader</managed-bean-name>
            <managed-bean-class>testfaces.loader.UserLoader</managed-bean-class>
            <managed-bean-scope>request</managed-bean-scope>
            <managed-property>
                <property-name>invManager</property-name>
                <property-class>testfaces.aspect.InvestigationManager</property-class>
                <value>#{InvestigationManager}</value>
            </managed-property>
            <managed-property>
                <property-name>userId</property-name>
                <property-class>java.lang.Integer</property-class>
                <value>#{param.userId}</value>
            </managed-property>
        </managed-bean>
        <managed-bean>
            <managed-bean-name>ItemBean</managed-bean-name>
            <managed-bean-class>testfaces.jsf.bean.ItemBean</managed-bean-class>
            <managed-bean-scope>request</managed-bean-scope>
            <managed-property>
                <property-name>user</property-name>
                <property-class>org.ikeda.testfaces.model.UserEntity</property-class>
                <value>#{UserLoader.user}</value>
            </managed-property>
        </managed-bean>
    Ergo, each time I use the ItemBean in a JSP it will try to set the "user" property, which is retrieved from the UserLoader, provided param.userId is set from the query string. Mind you, this limits me to using only the request parameter (OK for portlet integration), as there is no property called userId in the ItemBean - another workaround to work out!
    Of course this kind of functionality can be rolled into the ItemBean, but it's amazing how much of a mess the backing bean becomes when you encapsulate this logic inside it.
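    For what it's worth, here is a minimal sketch of the roll-it-into-the-bean variant, loading lazily in the getter so the load happens after all managed properties have been injected; Item and ItemManager are hypothetical stand-ins for the real entity and service classes:

        // Hypothetical request-scoped backing bean (JSF 1.1 style, no annotations).
        class Item { /* entity fields elided */ }
        interface ItemManager { Item findItem(Integer id); }

        public class ItemBean {
            private Integer itemId;          // injected from #{param.itemId}
            private ItemManager manager;     // injected application-scope service
            private Item item;               // loaded on first access

            public void setItemId(Integer itemId) { this.itemId = itemId; }
            public void setManager(ItemManager manager) { this.manager = manager; }

            public Item getItem() {
                if (item == null && itemId != null) {
                    item = manager.findItem(itemId);  // first getter call triggers the load
                }
                return item;
            }
        }

    The null check in the getter sidesteps the injection-order question entirely, at the cost of the load happening mid-render the first time a page references #{ItemBean.item}.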
    What experiences has everyone else had with JSF and this kind of situation?
    Does anyone know of any good resources about "designing" JSF applications (writing is one thing, designing is another)?
    Regards,
    Anthony


  • Best practice for a deployment (EAR containing WAR/EJB) in a productive environment

    Hi there,
    I'm looking for some hints regarding best-practice deployment in a productive environment (currently we are not using a WLS cluster).
    We are using ANT for building, packaging and (dynamic) deployment (via weblogic.Deployer) in the development environment, and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only for development, but also for the productive system.
    But I found some hints in some books, and those authors prefer static deployment for the P-system.
    My question now: could anybody provide me with some links to whitepapers regarding best practice for deployment into a P-system?
    What is your experience with the new two-phase deployment coming up with WLS 7.0?
    Is it really a good idea to use static deployment (and what is the advantage of this kind of deployment)?
    Thanks in advance
    -Martin

    Hi Siva,
    What best practice are you looking for? If you can be more specific in your question, we can provide an appropriate response.
    From my Basis experience, some of the best practices are:
    1) The productive landscape should offer high availability to the business. For this you may set up DR or HA, or both.
    2) It should have backups configured, for which a restore has already been tested.
    3) It should have all monitoring set up, viz. application, OS and DB.
    4) The productive client should not be modifiable.
    5) Users in the production landscape should have appropriate authorizations based on SoD. There should not be any SoD conflicts.
    6) Transports to production should be highly controlled. Any transport to production should be moved only with the appropriate Change Board approvals.
    7) The relevant database and OS security parameters should be tested before go-live and enabled.
    8) Pre-go-live and post-go-live checks should have been performed on the production system.
    9) EWA should be configured at least for the production system.
    10) Production system availability using DR should have been tested.
    Hope this helps.
    Regards,
    Deepak Kori

  • Best practices for Apps integration with third-party systems?

    Hi all
    I would like to know if there is any document, from Oracle or from your own practice, regarding best practices for Apps integration with third-party systems.
    For example, suppose a customization in a given module (e.g. Payables) needs to provide data to a third-party system. Consider the following:
    Outbound interface:
    1) Should the third-party system be given direct access to the Oracle database, to a particular payments data table/view, to look for data?
    2) Or should Oracle create a file for the third-party system, so that it can read it and do what it needs to do?
    Inbound:
    1) Should the third party directly log in and insert data into the tables which hold response data?
    2) Or, again, should the third party create a file that Oracle Apps will pick up for further processing?
    Again, there could be a lot of company-specific requirements, such as whether it has to be real time or not, etc.
    How do companies make sure third-party systems are not directly dipping into other systems (Oracle Apps or others), so that integration follows certain best practices?
    How does enterprise architecture play a role in this? Can we apply SOA standards? Should we use request/reply via TIBCO, etc.?
    Many Oracle Apps customizations more or less interact directly with third-party systems, by including code to log in to the respective third-party systems, and vice versa.
    Let me know if you have done this differently; that would help the Oracle Apps community. (A sketch of the file-based option follows below.)
    thanks
    rrb.
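    To make the file-based outbound option concrete, here is a minimal Java sketch of the handoff, assuming a shared drop directory that the third party polls; the directory, file name and record layout are all hypothetical, and a real interface would add an agreed format specification, error handling and archiving:

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;
        import java.util.List;

        // Hypothetical outbound extract: write payment rows to a drop directory,
        // using write-then-rename so the consumer never sees a half-written file.
        public class PaymentExtract {
            public static void main(String[] args) throws IOException {
                List<String> rows = List.of(          // stand-in for the real DB query
                    "CHECK|1001|2500.00|USD",
                    "WIRE|1002|13000.00|USD");
                Path dropDir = Paths.get("/interfaces/outbound/payables");
                Files.createDirectories(dropDir);
                Path tmp = dropDir.resolve("payments.dat.tmp");
                Files.write(tmp, rows, StandardCharsets.UTF_8);
                // the atomic rename is what signals "file complete" to the consumer
                Files.move(tmp, dropDir.resolve("payments.dat"),
                           StandardCopyOption.ATOMIC_MOVE);
            }
        }

    The same pattern works inbound: the third party drops a file, and a scheduled job picks it up and loads it through the standard open interfaces rather than inserting into base tables directly.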

    You want to send an IDoc to a third-party (non-SAP) system.
    What kind of system is it? Can it handle HTTP requests,
    or can it handle web services?
    Which version of R/3 are you using?
    What mechanism does the receiving system have to receive data?
    Regards
    Raja

  • Failover cluster File Server role best practices

    We recently implemented a Hyper-V Server Core 2012 R2 cluster with the sole purpose of running our server environment. I started with our file servers and decided to create multiple file servers and put them in a cluster for high availability. So now I have a cluster of VMs, which I have now learned is called a guest cluster, and I added the File Server role to this cluster. It then struck me that I could have just as easily created the File Server role under my Hyper-V Server cluster and removed this extra virtual layer.
    I'm reaching out to this community to see if there are any best practices on using the File Server role. Are there any benefits to having a guest cluster provide file shares? Or am I making things overly complicated for no reason?
    Just to be clear, I'm just trying to make a simple Windows file server with folder shares that have security enabled on them for users to access internally. I'm using Hyper-V Server Core 2012 R2 on my physical servers, and right now I have Windows Server 2012 R2 Standard on the VMs in the guest cluster.
    Thanks for any information you can provide.

    Hi,
    Generally, with Hyper-V VMs available, we install all roles into virtual machines, as that is easier for management purposes.
    In your situation the host system is a Server Core installation, so it seems that managing the file shares from a VM with a GUI is the better option.
    I cannot find an article specifically regarding best practices for setting up a failover cluster. Here are two articles covering building a guest cluster (which you have already done) and the steps to create a file server cluster.
    Hyper-V Guest Clustering Step-by-Step Guide
    http://blogs.technet.com/b/mghazai/archive/2009/12/12/hyper-v-guest-clustering-step-by-step-guide.aspx
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    https://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx

  • Best practice to run Microsoft Endpoint Protection client in VDI environment

    We are using a Citrix XenDesktop VDI environment. The Symantec Endpoint Protection client (VDI performance optimised) has been installed on the virtual machine image that is streamed to the clients. Basically, all the files in the golden image have been "tattooed" with a Symantec signature. Now, when a new VM starts, the Symantec scan engine simply ignores the "tattooed" files and also randomises scan times. This is a rough explanation, but I hope you've got the idea.
    We are switching from Symantec to Microsoft Endpoint Protection, and I'm looking for any information and documentation regarding best practices for running Microsoft Endpoint Protection clients in a VDI environment.
    Thanks in advance.

    I see this post is a bit old, but the organization I'm with has a very large VDI deployment using VMware. We are also using SCEP 2012 for the AV.
    Did you find what you were looking for, or did you elect to take a different direction?
    We install SCEP 2012 into the base image and manage the settings using GPO; definition updates come through the normal route.
    Our biggest challenge is getting alert messages from the clients.
    Thanks
