SPARC system asset - best practice

Hello,
Is it possible to have an OS asset and a Service Processor asset for a single host system grouped as one asset subtree?
(Platform is T1000, EM is 12.1.1.0.0)

Similar Messages

  • Site System Roles - Best Practices

    Hi all -
    I was wondering if there were any best practice recommendations for how to configure Site System Roles. We had a vendor come onsite and set up our environment, and without going into a lot of detail on why, I wasn't able to work with the vendor, so I am trying
    to understand after the fact why they did certain things.
    For scoping purposes we have about 12,000 clients, and this is how our environment was set up:
    SERVERA - Site Server, Management Point
    SERVERB - Management Point, Software Update Point
    SERVERC - Asset Intelligence Synchronization Point, Application Catalog Web Service Point, Application Catalog Website Point, Fallback Status Point, Software Update Point
    SERVERD - Distribution Point (we will add more DPs later)
    SERVERE - Distribution Point (we will add more DPs later)
    SERVERF - Reporting Services Point
    The rest is dedicated to our SQL cluster.
    I was wondering if this seems like a good setup, and had a few specific questions:
    Our Site Server is also a Management Point. We have a second Management Point as well, but I was curious whether that is best practice.
    Should our Fallback Status Point be a Distribution Point?
    I really appreciate any help on this.

    The FSP role has nothing to do with the 'Allow fallback source location for content' option on the DP.
    http://technet.microsoft.com/en-us/library/gg681976.aspx
    http://blogs.technet.com/b/cmpfekevin/archive/2013/03/05/what-is-fallback-and-what-does-it-mean.aspx
    Benoit Lecours | Blog: System Center Dudes

  • Portal System Transport (Best Practice)

    Hello,
    We have DEV, QA and PRD landscapes. We have created portal systems that connect to backend ECC systems. Since the DEV and QA ECC systems each have one application server, we created a portal system of type Single Application Server in the DEV portal that points to the DEV ECC system. Subsequently we transported this portal system to the QA portal and made it point to QA ECC.
    Now the PRD ECC system is of type Load Balancing with multiple servers. The portal system that connects to the PRD ECC system should also be of type Load Balancing. We cannot transport the QA portal system that connects to the QA ECC system to PRD, since it is of type Single Application Server.
    What would be the best strategy to create the portal system in the PRD portal that points to PRD ECC?
    1. Create the portal system fresh in the PRD system, of type Load Balancing. Does that adhere to the best practice approach, which suggests not creating anything in the PRD system directly?
    OR
    2. Is there another way I should follow to make sure that best practices for portal development are followed?
    Regards
    Deb

    I don't find it useful to transport system objects so I make them manually.

  • Development System Backup - Best Practice / Policy for offsite backups

    Hi, I have not found any recommendations from SAP on best practices/recommendations on backing up Development systems offsite and so would appreciate some input on what policies other companies have for backing up Development systems. We continuously make enhancements to our SAP systems and perform daily backups; however, we do not send any Development system backups offsite which I feel is a risk (losing development work, losing transport & change logs...).
    Does anyone know whether SAP has any recommendations on backing up Development systems offsite? What policies does your company have?
    Thanks,
    Thomas

    Thomas,
    Your question does not mention consideration of both sides of the equation - you have mentioned the risk only.  What about the incremental cost of frequent backups stored offsite?  Shouldn't the question be how the 'frequent backup' cost matches up with the risk cost?
    I have never worked on an SAP system where the developers had so much unique work in progress that they could not reproduce their efforts in an acceptable amount of time, at an acceptable cost. There is typically nothing in dev that is so valuable as to be irreplaceable (unlike production, where the loss of 'yesterday's' data is extremely costly). Given the frequency with which an offsite dev backup is actually required for a restore (seldom), and given that the value of the daily backed-up data is already so low, the actual risk cost is virtually zero.
    I have never seen SAP publish a 'best practice' in this area. Every business is different, and I don't see how SAP could possibly make a meaningful recommendation that would fit yours. In your business, the risk (the pro-rata cost of infrequently needing to use offsite storage to replace or rebuild 'lost' low-cost development work) may in fact outweigh the ongoing incremental costs of creating and maintaining offsite daily recovery media. Your company will have to perform that calculation to make the business decision. I personally have never seen a situation where daily offsite backup storage of dev was even close to making any kind of economic sense.
    Best Regards,
    DB49

  • System Emails Best Practices

    Hi, I need a professional recommendation on what the best practice is when dealing with our customers' system emails.
    I'm asking this because BC creates all the workflows and system emails with the partner details, which I find really ridiculous.
    Then you have to go and change all of these emails to the client email. I realised lately that BC has a default email setting for system emails. I changed all of these to my client's email address, but I'm still receiving inquiries and they are getting my name somehow.
    This is very embarrassing and I hate to see it happening again. Can someone please help me with what procedures I need to take to avoid this?
    As partners, do we usually leave ourselves as administrators and receive every workflow our client receives? What if you have 100 clients?
    Shouldn't BC add a "BC Partner" role to users, like a super administrator in Joomla, instead of grouping us with the other administrators?
    Thanks
    Michel

    Hi Michel,
    The system is a framework in terms of site elements, so of course it will have default information which you can choose to change or not (such as workflows).
    In terms of forms and notification emails:
    - Make sure you update forms notifications and emails
    - Make sure you update notification emails for mailing lists
    - Make sure the mass change for system emails is applied, and choose the correct template for each one if you wish.
    For Workflows:
    - System comes with some basic ones and of course you can build your own.
    - Normally you will modify or add your clients to "users" or give them their own role, such as "business name admin" for example. You may have multiple users, each with multiple roles. Under the new interface, permissions control what they see and have access to.
    - Workflows are set to you mainly for testing, allowing you to receive workflow and email notifications as you develop and see how things are shaping up and working.
    - Go into workflows and change them so that emails / texts / steps go to the right people.
    You can find more information on workflows in the knowledge base if you have not gone through it already.

  • Asset Hierarchy for Linear Assets - Best Practices for UK Utilities

    Hi Experts
    I would like to know if there are any best practices to be followed while developing an asset hierarchy for linear assets in the UK utility industry. Could anyone please suggest? If you have any sample hierarchy, that would help a lot.
    Thank you
    Vijay

    Hi ,
    I don't think Utilities for UK is available. You can refer to the China version: [scenarios|http://help.sap.com/bp_utilities603/UTL_CN/html/scope/Scoping_offline_SC.htm?display=STE-UTL_CN_1603+BP_UTL_V1603_FULL_SCOPE.xml]
    DP

  • System.out vs. System.err -- best practices?

    When I catch an exception, where is the best place to write the stack trace, etc? Should I write it to System.out or System.err?
    Thanks,
    Curt

    If you call printStackTrace() it will by default print to the standard error stream.
    But really you should look at log4j.
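    For illustration, a minimal sketch of the log4j approach mentioned above (the class and logger names are placeholders, and this assumes the log4j 1.x API):

    import org.apache.log4j.Logger;

    public class OrderProcessor {
        // One static logger per class is the usual log4j convention.
        private static final Logger log = Logger.getLogger(OrderProcessor.class);

        public void process(String orderId) {
            try {
                // ... work that may fail ...
                throw new IllegalStateException("demo failure");
            } catch (Exception e) {
                // Logs the message plus the full stack trace through the
                // configured appenders instead of System.out / System.err.
                log.error("Failed to process order " + orderId, e);
            }
        }
    }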

  • Assets - Best Practices

    Dear Asset Gurus,
    I know that two methods of capitalisation exist in SAP. One is Asset under Construction and the other is direct capitalisation.
    Please let me know which is more widely used. What are the relative benefits?
    Thanks
    Krishnadas

    Hello
    If the asset is complete in all respects and can be directly put into use, then you can capitalise it directly. This is the normal and most widely used approach.
    If the asset still needs to be built or enhanced, or is awaiting additions or allocations, then it would be an AuC (Asset under Construction).
    One would adopt both in most cases, but predominantly direct capitalisation is in vogue.
    The benefit of direct capitalisation is that you can claim depreciation immediately, whereas with an AuC you cannot claim depreciation until it is completed and transferred to a final asset.
    Reg

  • System Management Best Practice Questions

    I help out with IT at a small fire department; the town is around 3,000 people. They have about 10 PCs. The only time I really do anything is when someone has a question or points out a problem. Everything is in place and works well. However, is there anything
    I should be regularly checking on? We have the usual in place: anti-virus on all PCs, Windows Updates are automatic, all PCs have passwords, Wi-Fi has a password, etc.

    As I know, addUser(User user){ ... } is much more useful for several reasons:
    1. It's object oriented.
    2. It's easier to write, because if the object has many parameters it is very painful to write a method call with a long comma-separated parameter list (see the sketch below).
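    A minimal sketch of that idea, assuming a hypothetical User class and UserService that are not part of the original thread:

    // Hypothetical example: a parameter object keeps the call site readable
    // and lets fields be added later without changing the method signature.
    public class User {
        private final String name;
        private final String email;
        private final String department;

        public User(String name, String email, String department) {
            this.name = name;
            this.email = email;
            this.department = department;
        }

        public String getName() { return name; }
    }

    class UserService {
        // Preferred over addUser(String name, String email, String department, ...)
        public void addUser(User user) {
            System.out.println("Adding user " + user.getName());
        }
    }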

  • Best practice on Oracle VM for Sparc System

    Dear All,
    I want to test Oracle VM for SPARC, but I don't have a new-model server to test it on. What is the best practice for Oracle VM for SPARC?
    I have a Dell laptop with the following spec:
    - Intel® Core™ i7-2640M
    (2.8GHz, 4MB cache)
    - RAM: 8GB DDR3
    - HDD: 750GB
    - 1GB AMD Radeon
    I want to install Oracle VM VirtualBox on my laptop and then install Oracle VM for SPARC inside VirtualBox. Is that possible?
    Please kindly give advice,
    Thanks and regards,
    Heng

    Heng Horn wrote:
    "How about a desktop or workstation computer with the latest version whose CPU supports Oracle VM for SPARC?"
    Nope. The only place you find SPARC T4 processors is in Sun servers (and some Fujitsu servers, I think).

  • Best Practice for setting systems up in SMSY

    Good afternoon - I want to cleanup our SMSY information and I am looking for some best practice advice on this. We started with an ERP 6.0 dual-stack system. So I created a logical component Z_ECC under "SAP ERP" --> "SAP ECC Server" and I assigned all of my various instances (Dev, QA, Train, Prod) to this logical component. We then applied Enhancement Package 4 to these systems. I see under logical components there is an entry for "SAP ERP ENHANCE PACKAGE". Now that we are on EhP4, should I create a different logical component for my ERP 6.0 EhP4 systems? I see in logical components under "SAP ERP ENHANCE PACKAGE" there are entries for the different products that can be updated to EhP4, such as "ABAP Technology for ERP EHP4", "Central Applications", ... "Utilities/Waste&Recycl./Telco". If I am supposed to change the logical component to something based on EhP4, which should I choose?
    The reason that this is important is that when I go to Maintenance Optimizer, I need to ensure that my version information is correct so that I am presented with all of the available patches for the parts that I have installed.
    My Solution Manager system is 7.01 SPS 26. The ERP systems are ECC 6.0 EhP4 SPS 7.
    Any assistance is appreciated!
    Regards,
    Blair Towe

    Hello Blair,
    In this case you have to assign both products, EHP 4 for ERP 6 and SAP ERP 6, to your system in SMSY.
    You will then have 2 entries in SMSY, one under each product: the main instance for EHP 4 for ERP 6 must be Central Applications, and the one for SAP ERP 6 is SAP ECC Server.
    This way your system should be correctly configured to use the MOPZ.
    Unfortunately I'm not aware of a guide explaining these details.
    Sometimes the System Landscape guide at service.sap.com/diagnostics can be very useful. See also note 987835.
    Hope it can help.
    Regards,
    Daniel.

  • Basic Strategy / Best Practices for System Monitoring with Solution Manager

    I am very new to SAP and the Basis group at my company. I will be working on a project to identify the best practices of System and Service level monitoring using Solution Manager. I have read a good amount about SAP Solution Manager and the concept of monitoring but need to begin mapping out a monitoring strategy.
    We currently utilize the RZ20 transaction and basic CCMS monitors, such as watching for update errors, availability, short dumps, etc. What else should be monitored in order to proactively find possible issues? Are there any best practices you have found when implementing monitoring for new solutions added to the SAP landscape? What are common things we would want to monitor over, say, ERP, CRM, SRM, etc.?
    Thanks in advance for any comments or suggestions!

    Hi Mike,
    Did you try the following link ?
    If not, it may be useful to some extent:
    http://service.sap.com/bestpractices
    ---> Cross-Industry Packages ---> Best Practices for Solution Management
    You have quite a few documents there - those on BPM may also cover Solution Monitoring aspects.
    Best regards,
    Srini

  • External System Authentication Credentials Best practice

    We are in the process of a 5.0 upgrade.
    We are using NTLM as our authentication source to get the users and the groups and authenticate against the source. So currently we only have the NT user ID and group info (the NT domain password is not stored).
    We need to get user credentials for other systems/applications so that we can pass them on to the specific applications when we search/crawl or integrate with those apps/systems.
    We were thinking of getting the credentials (app user ID and password) for other applications by developing a custom Profile Web Service to gather the information specific to these users. However, we don't know if the external application password is secure when it is retrieved from the external repository via a PWS and stored in the portal database.
    Is this the best approach to take to gather the above information? If not, please recommend the best practice to follow.
    Alternatively, we can have the users enter the external system credentials by having them edit their user profile. However, this approach is not preferred.
    If we can't store the user credentials for the external apps, we won't be able to enhance the user experience when doing a search or click-through to the other applications.
    Any insight would be appreciated.
    Thanks.
    Vanita

    Hi Vanita,
    So your solution sounds fine; however, it might be easier to use an SSO token or the Plumtree UserID in your external applications as a definitive authentication token.
    For example, if you have some external application that requires a username and password, then in a portlet view of the application, the application should be able to take the user ID Plumtree sends it and use that to authenticate the correct user. You should limit this sort of password bypass to traffic being gatewayed by the portal (i.e. coming from the portal server only).
    If you want to write a Profile Web Service, the data that gets stored in the Plumtree database is exactly what the Profile Web Service sends it as the value for a particular attribute. For example, if your PWS tells Plumtree that the APP1UserName and APP1Password for user My Domain\Akash are Akash and password, then that is what we save. If your PWS encrypts the password using some 2-way encryption beforehand, then that is what we will save. These properties are simply attached to the user, and can be sent to different portlets (see the sketch below).
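    As an illustration of the "2-way encryption beforehand" idea, here is a minimal sketch using the standard javax.crypto API; the class name, key handling and algorithm choice are assumptions (and it needs a Java 8+ runtime for java.util.Base64), not part of the Plumtree PWS API:

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Hypothetical helper: the PWS could encrypt the external-app password with
    // this before returning it as a profile attribute, and decrypt it again
    // when a portlet needs to sign in to the external application.
    public class CredentialCipher {
        private final SecretKeySpec key;

        public CredentialCipher(byte[] sixteenByteKey) {
            // AES-128 key; in practice the key itself must be stored securely.
            this.key = new SecretKeySpec(sixteenByteKey, "AES");
        }

        public String encrypt(String plainPassword) throws Exception {
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] encrypted = cipher.doFinal(plainPassword.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(encrypted);
        }

        public String decrypt(String storedValue) throws Exception {
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] decrypted = cipher.doFinal(Base64.getDecoder().decode(storedValue));
            return new String(decrypted, StandardCharsets.UTF_8);
        }
    }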
    Hope this helps,
    -aki-

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
    We are planning to implement a distributed TREX, using RedHat on x64, but we are wondering what the best practice or approach is to configure the "file server" used in the TREX distributed environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems that need to be mounted on all the TREX systems (master, backup and slaves); but we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage, and in the case of RedHat we have GFS or even OCFS.
    Basically we would like to know which is the best practice and how other companies are doing it, for a TREX distributed environment using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

    I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    That should be the initial question:
    Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh

  • Best practice in migrating to a production system

    Dear experts,
    Which is the best practice to follow during an implementation project to organize the development, quality and production environments?
    In my case, considering that SRM is connected to the back-end development system, what should be done to connect SRM to the back-end quality environment:
    - connect the same SRM server to the back-end quality environment, even though in this case the old data remain in SRM, or
    - connect another SRM server to the back-end quality environment?
    thanks,

    Hello Gaia,
    If you have a 3-system landscape, the backend connections should be like this:
    SRM DEV   - ERP DEV
    SRM QAS   - ERP QAS
    SRM PRD   - ERP PRD
    If you have a 2-system landscape:
    SRM(client 100) - ERP DEV
    SRM(client 200) - ERP QAS
    SRM PRD         - ERP PRD
    Regards,
    Masa
