Best Practice for applying patches in WL 8.1

Below is what I did to apply patches to WL 8.1 SP2. Is there a better way, or an established best practice? Thanks in advance.
1) Created directory.
C:\bea\weblogic81\server\lib\patches
2) Copied jar files to patches directory.
CR127930_81sp2.jar
CR124746_81sp2.jar
3) Modified startWebLogic.cmd to include jar files first in the classpath.
set CLASSPATH=%WL_HOME%\server\lib\patches\CR124746_81sp2.jar;%WL_HOME%\server\lib\patches\CR127930_81sp2.jar;%WEBLOGIC_CLASSPATH%;%POINTBASE_CLASSPATH%;%JAVA_HOME%\jre\lib\rt.jar;%WL_HOME%\server\lib\webservices.jar;%CLASSPATH%
4) Restarted server and saw in console that the patches were applied.

Hi:
SAP standard practice does not recommend updating the quantity field in the asset master data. Just leave the Qty field blank and maintain the unit of measure as EA. When you post an acquisition through F-90 or MIGO, this field will get updated in the asset master record automatically. Hope this helps.
Regards

Similar Messages

  • Best Practice for Portal Patches and effort estimation

    Hi,
    One of our clients is applying the following patches:
    1. ECC 6.0 SP15 (currently SP14)
    2. ESS MSS SP15 (currently SP14, with some level of functional customization)
    3. EP 7 SP18 (currently SP14)
    We would like to know the best practice for applying portal patches and the effort estimate for redoing the portal development on the new patch level.
    o   What is the overall level of effort with applying Portal patches?
    o   How are all the changes to SAP objects handled? Do they have to be manually re-entered?
    o   What is the impact of having a single NWDI instance across the Portal landscape during the patch process?
    Regards,
    Revathi Raju.

    Hi Revathi,
    o What is the overall level of effort with applying Portal patches?
    The overall effort to apply the patches is approximately half a day to one day for an NW7 system. This excludes downloading the patch files, since that depends on your download speed.
    o How are all the changes to SAP objects handled? Do they have to be manually re-entered?
    That depends on your customization. Normally the patches won't affect it if you created the custom applications separately from the SAP standard applications.
    o What is the impact of having a single NWDI instance across the Portal landscape during the patch process?
    Any changes related to NWDI may need to be re-deployed from NWDI itself.
    Thanks
    Regards,
    AZLY

  • Best practices for applying sharpening in your workflow

    Recently I have been trying to get a better understanding of some of the best practices for sharpening in a workflow.  I guess I didn't realize it but there are multiple places to apply sharpening.  Which are best?  Are they additive?
    My typical workflow involves capturing an image with a professional digital SLR in either RAW or JPEG or both, importing into Lightroom, and exporting to a JPEG file for screen or for printing, both lab and local.
    There are three places in this workflow to add sharpening: in the SLR, manually in Lightroom, and during export to a JPEG file or when printing directly from Lightroom.
    It is my understanding that sharpening is not added to RAW images even if you have added sharpening in your SLR. However, sharpening will be added to JPEGs by the camera.
    Back to my question: is it best to add sharpening in the SLR, manually in Lightroom, or to wait until you export or output to your final JPEG file or printer? And are the effects additive? If I add sharpening in all three places, am I probably over-sharpening?

    You should treat the two file types differently. RAW data never has any sharpening applied by the camera, only jpegs. Sharpening is often considered in a workflow where there are three steps (See here for a founding article about this idea).
    I. A capture sharpening step that corrects for the loss of sharp detail due to the Bayer array and the antialias filter and sometimes the lens or diffraction.
    II. A creative sharpening step where certain details in the image are "highlighted" by sharpening (think eyelashes on a model's face), and
    III. output sharpening, where you correct for loss of sharpness due to scaling/resampling or for the properties of the output medium (like blurring due to the way a printing process works, or blurring due to the way an LCD screen lays out its pixels).
    All three of these are implemented in Lightroom. I. and III. are essential and should basically always be performed; II. is up to your creative spirits. I. is the sharpening you see in the Develop panel. You should zoom in at 1:1 and optimize the parameters. The default parameters are OK but fairly conservative. Usually you can increase the mask value a little so that you're not sharpening noise, and play with the other three sliders. Jeff Schewe gives an overview of a simple strategy for finding optimal parameters here. This is for ACR, but the principle is the same. Most photos will benefit from a little optimization. Don't overdo it; just correct for the softness at 1:1.
    Step II, as I said, is not essential, but it can be done using the local adjustment brush, or you can go to Photoshop for this. Step III is, however, very essential. This is done in the Export panel, the Print panel, or the Web panel. You cannot really preview these things (especially the print-directed sharpening) and it will take a little experimentation to see what you like.
    For JPEG, the capture sharpening is already done in the camera. You might add a little extra capture sharpening in some cases, or simply lower the sharpening in camera and then have more control in post, but usually it is best to leave it alone. Steps II and III, however, still apply.

  • Best Practices for automated patching?

    Howdy,
    We've been having trouble getting patching to work on our servers for months. I decided it's time to start over. I've done some searching and have found tons of different people saying they have the best practices and recommended ways of handling patches, but they're all different.
    I'm using SCCM 2012 R2 and want to set up ADRs so that every month we can set all our servers up to install their Windows Updates. I have a few questions:
    Is there an "official" tutorial, video, or walkthrough of how we should be doing this?
    Is the best way to test this to set something up that is available ASAP but doesn't expire until far in the future? Should I then be able to see the patches in Software Center as available, but they just won't actually do anything since the expiration would be way down the road?
    Most of our servers are running 2012 or 2012 R2 and we're just concerned with the standard patches that come out every month.
    Thanks for any links, input, advice, etc.

    > I never know if a tutorial I found is a good one or just some random person that recorded what they were doing.
    That's just it... tutorials are meant to show you how the features work. They are not meant to be "follow these step-by-step in your production environment and you're done" instructions. Gerry and Niall's guides are there to walk you through building things out in a lab environment so you can learn them. You need to learn how the features work, look at the requirements of your company, and then make conscious decisions about how to implement the tool to meet those requirements.
    If you don't feel like you'll be able to synthesize that from knowledge learned in tutorials/guides/books, then I'd suggest bringing in a consultant with a deep understanding of the product to help you accomplish it.
    I hope that helps,
    Nash
    Nash Pherson, Senior Systems Consultant, Now Micro

  • URL category best practices for ESA 8.5.6-074

    In the new version 8.5.6-074 of the ESA C170, what are the best practices for applying the new URL categories?
    Is it possible to create filters that quarantine mails based on URL filtering? If so, could you upload a sample script (for example, quarantine emails that have adult links in the body)?

    You should be able to do it with a content filter; there are conditions available based on URLs and URL categories.

  • Microsoft best practices for patching a Cluster server

    Good morning! I was wondering if you had any web resources (webcasts) or whitepapers on Microsoft best practices for patching a cluster server? I will list what I have seen online; the third one was very good:
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    Failover Clusters in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/ff182338(v=ws.10)
    Patching Windows Server Failover Clusters
    http://support.microsoft.com/kb/174799/i

    Hi Vincent!
    I assume this step-by-step guide can also be used if you have more than 2 nodes, as long as you make sure a majority of nodes are up (and the quorum disk is available).
    I just had a strange experience during maintenance of 2 nodes (node nr 7 and nr 8) in an 8-node Hyper-V cluster R2 SP1 with CSV. I used SCVMM 2012 to put the nodes in maintenance mode (live migrating all resources to other nodes). I then looked in Failover Cluster Manager to check that the nodes had been paused, and yes, everything was just fine. I then did Windows Update and restarted, no problem. But after the restart I wanted to run PSP (HP's update utility) to update some more drivers, software, etc. During this PSP update, node nr 02 suddenly failed. This node is not even an HP blade, so I'm not sure how, but I know network NIC drivers and software were updated by PSP. So my question is:
    Do changes in "Network Connections" on nodes in "Pause" mode affect other nodes in the cluster?
    The networks are listed as "Up" during pause mode, so the only thing I can think of is that during PSP's driver/software update, the NICs on nodes 07 and 08 were going down and up, somehow making node 02 fail.
    So now during maintenance (vendor driver/software/firmware updates, not MS patches) I first put the node in "Pause" mode and then stop the cluster service (and change it to disabled), making sure nothing can affect the cluster.
    Anders

  • Best Practice for utility in Sol Man 4.0

    We have software component ST-ICO of release 150_700 with patch level 5.
    We want a template selection for the 'Utility' industry. I checked in the Service Marketplace and found that 'Baseline Package United Kingdom V1.50, Template: BP_BLKU150' is available in the above software component.
    But we are not getting any templates other than 'BP_UTUS147 - Best Practices for Water Utility' in the SOLAR_PROJECT_ADMIN transaction.
    Kindly suggest whether any patch needs to be applied or some configuration needs to be done.
    Regards
    Mani

    Hi Mani,
    Could you please give me the link to where you found the template BP_BLKU150? It will be helpful for me.
    Thanks
    Senthil

  • Best practice for installing patches for the SCCM 2012 client (both x86 and x64) - OSD and Client Push Installation

    Hi All,
    What is the best practice for automatically installing SCCM 2012 client patches (using the PATCH switch) during installation of SCCM 2012 clients? The challenge is that there are now two versions of clients and updates (x86 and x64).
    I need information for:
    OSD
    Client Push Installation
    Thank you in advance.
    Regards,

    Not everything that is supported or not supported is documented (or ever can be):
    http://technet.microsoft.com/en-us/magazine/jj643252.aspx
    Your expectation here needs to be adjusted. No one recommended contacting support. William just mentioned that *if* you have an issue and needed to contact support, they may decline to help you because you've done something explicitly unsupported.
    For clientpatch, here's the specific documentation noting it as unsupported:
    http://blogs.technet.com/b/configmgrteam/archive/2009/04/08/automatically-applying-hotfixes-to-the-configuration-manager-2007-client-during-installation.aspx
    Additionally, I've also been in contact with the sustained engineering folks responsible for the CUs and they've reinforced the statement.
    What can happen? Who knows? Microsoft does not test against unsupported configurations and features -- that's the definition of unsupported. It's not tested, so no one really knows. They did find a couple of explicit issues (outlined in the post above), so know that at least those exist, and probably more, since it is, as mentioned, abandoned code.
    Why does it matter what can happen, though? If the folks who write and support the code tell you that you shouldn't do it, you are simply asking for problems by doing it. Ultimately, you're asking a question that has no defined answer (except "bad"/unsupported things) and that really doesn't matter if you follow the explicit guidance. If you don't, as William points out, that's a risk for you to take and an answer for you to discover.
    Jason | http://blog.configmgrftw.com

  • Best practice for data migration install v1.40 - Error 2732 Directory Manager

    Hi
    I'm attempting to install SAP Best Practice for Data migration 1.40 on Win Server 2008 R2 (64 bit).
    Prerequisite error
    Installation program stops with missing file error
    The following file was not found
    ... \migration\InstallationWizard\BusinessObjects Data Services\setup.exe
    The file is necessary for successful installation. Please connect to internet or refer to Quick Guide (available on SAP note 1527151) for information regarding the above file.
    Windows installer log displays
    Error 2732 Directory Manager not initialized
    SAP note 1527151 does not exist or is internal.
    Any help appreciated on what the root cause of the error is, as the file does not exist in that folder in the installation zip file.
    The other prerequisite, .NET 3.5.1, is already met.
    The patch has been released since 20.11.2011, so I presume it is a good installation set.
    Thanks,
    Alan

    Hi Alan,
    There are details on Data Migration V1.40 installations on the SAP website and marketplace. The link below should guide you to the right place. It has a PowerPoint presentation and other useful links as well.
    http://help.sap.com/saap/sap_bp/DMS_V140/DMS_US/html/index.htm
    Arun

  • Best Practice For Working on Composite In Team

    Hello,
    I would like to know what the best practice is for having multiple team members work on a single composite.
    We have a core services module wherein a single composite contains many services. So, to complete it on time, we would like many members to work on it simultaneously.
    In such scenarios, if someone adds a new adapter or some other service, composite.xml changes.
    Saving it would override other members' changes. Also, it is not possible to apply a lock simultaneously on the same file through some version control mechanism.
    Please let us know what should be the best practice in such scenarios.
    Thanks-
    Ashish

    You can very well use version control software with JDeveloper. You may refer to -
    http://www.oracle.com/technetwork/articles/soa/jimerson-config-soa-355383.html
    I think that without a version control mechanism (like Subversion) it won't be easy to work in a multi-developer environment. If you really don't have a source and version control mechanism, then manual merging will be required, which may be error-prone and consume time and effort.
    Regards,
    Anuj

  • Best practice for partitioning 300 GB disk

    Hi,
    I would like to seek advice on how I should partition a 300 GB disk on Solaris 8.x, and what the optimal size for each partition would be.
    The system will be used internally for running web/application servers and database servers.
    Thanks in advance for your help.

    There is no "best practice" regardles of what others might say. I depends entirely on how you plan on using and maintaining the system. I have run into too many situations where fine-grained file system sizing bit the admins in the backside. For example, I've run into some that assumed that /var is only going to be for logging and printing, so they made it nice and small. What they didn't realize is that patch and package information is also stored in /var. So, when they attempted to install the R&S cluster, they couldn't because they couldn't put the patch info into /var.
    I've also run into other problems where a temp/export system that was mounted on a root-level directory. They made the assumption that "Oh, well, it's root. It can be tiny since /usr and /opt have their own partitions." The file system didn't mount properly, so any scratch files in that directory that were created went to the root file system and filled it up.
    You can never have a file system that's too big, but you can always have a file system that's too small.
    I will recommend the following, however:
    * /var is the most volatile directory and should be on its own several GB partition to account for patches, packages, and logs.
    * You should have another partition as big as your system RAM and assign that partition as the system/core dump device for system crashes.
    * /usr or whatever file system it's on must be big enough to assume that it will be loaded with FOSS/Sunfreeware tools, even if at this point you have no plans on installing them. I try to make mine 5-6 GB or more.
    * If this is a single-disk system, do not use any kind of parallel access structure, like what Oracle prefers, as it will most likely degrade system performance. Single disks can only make single I/O calls, obviously.
    Again, there is no "best practice" for this. It's all based on what you plan on doing with it, what applications you plan on using, and how you plan on using it. There is nothing that anyone here can tell you that will be 100% applicable to your situation.
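    Purely as an illustration of the points above (placeholder numbers only, assuming a box with around 8 GB of RAM - adjust everything to your own workload), a 300 GB disk might be split along these lines:
        /        (root)      ~10 GB
        swap / dump slice    8 GB or more (at least the size of system RAM)
        /var                 10-20 GB (patches, packages, logs)
        /usr                 6 GB or more (room for FOSS/Sunfreeware tools)
        remainder            /export or the application/database file systems
    Treat those figures as a rough sketch; the sizing rules above are the part that matters.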

  • Best Practice for Production IDM setup

    Hi, what is the best practice for setting up production IDM:
    1. Connect IDM prod to ECC DEV, QA and Prod, or
    2. Connect IDM prod to ECC Prod only, and connect IDM dev to ECC Dev and QA?
    Please also specify pros and cons for both options if possible.
    Thanks in advance,
    Farhan

    We run our IDM installation as per your option 2 (prod and non-prod on separate instances).
    We use HCM as the source of truth in production and have a strict policy of not allowing non-HCM-based user accounts. HCM creates the SU01 record and the details are downloaded to IDM through the LDAP extract. Access is provisioned based on roles attached to the HCM position in IDM. In dev/test/uat we create user logins in IDM and push the details out.
    Our thinking was that we definitely needed a testing environment for development and patch testing, and it needs to be separate to production. It was also ideal to use this second environment for dev/test/uat since we are in the middle of a major SAP project rollout and are creating hundreds of test and training users with various roles and prefer to keep this out of a production instance.
    Lately we also created a sandpit environment since I found that I could not do destructive testing or development in the dev/test/uat instance because we were becoming reliant on this environment being available. Almost a second production instance - since we also set the policy that all changes are made through IDM and no direct SU01 changes are permitted.
    Have a close look at your usage requirements before deciding which structure works best for you.

  • Best practices for apps integration with third party systems ?

    Hi all
    I would like to know if there is any document from oracle or from your own regarding best practices for apps integration with third party systems.
    For example, let's say a customization in a given module (e.g. Payables) needs to provide data to a third-party system. Consider the following:
    Outbound interface:
    1) Should the third-party system be given direct access to the Oracle database, to query a particular payments data table/view for the data?
    2) Should Oracle create a file for the third-party system, so that it can read it and do what it needs to do?
    Inbound:
    1) Should the third party directly log in and insert data into the tables which hold the response data?
    2) Again, should the third party create a file that Oracle Apps will pick up for further processing?
    Again, there could be a lot of company-specific scenarios, like whether it has to be real time or not, etc.
    How do companies make sure third-party systems are not directly dipping into other systems (Oracle Apps/others), so that certain integration best practices are followed?
    How does enterprise architecture play a role in this? Can we apply SOA standards? Should we use request/reply via TIBCO, etc.?
    Many Oracle Apps implementation customizations interact more or less directly with third-party systems by including code to log in to the respective third-party systems, and vice versa.
    Let me know if you have done this differently; that would help the Oracle Apps community.
    thanks
    rrb.

    You want to send an IDoc to a third-party (non-SAP) system.
    What kind of system is it? Can it handle HTTP requests, or can it handle web services?
    Which version of R/3 are you using?
    What mechanism does the receiving system have to receive data?
    Regards
    Raja

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan - 100% sample size) for this VLDB once a week on the weekend, which is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to get used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet, so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan). Reducing the sample percentage/size will reduce the total processing time, but it's also my understanding that doing so will leave the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one's cake and eat it too.
    So in a nutshell, I'm looking to fully understand why the process of updating statistics can cause access issues, and I'm also looking for best practices in general for updating statistics for such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with a whole lot more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (not utilized much, if at all, so delete them - and also produce an actual script to delete the useless ones identified) and what the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also producing a script that can be used to execute the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    The above approach would be much more ideal IMO than just issuing a single update statistics command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
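    To make that a bit more concrete, here is the rough kind of generator I'm picturing - just a sketch in Python with a placeholder connection string and row-count thresholds, not something I've actually run against our VLDB. It reads each table's cardinality from the system views and prints an UPDATE STATISTICS command with a sample size scaled to the row count:
        import pyodbc

        # Placeholder tiers - assumptions only, tune for your own data distribution.
        TIERS = [
            (1_000_000, "FULLSCAN"),             # small tables: full scan is cheap
            (50_000_000, "SAMPLE 25 PERCENT"),   # mid-size tables
            (float("inf"), "SAMPLE 5 PERCENT"),  # the really big ones
        ]

        def sample_clause(row_count):
            for limit, clause in TIERS:
                if row_count <= limit:
                    return clause

        # Hypothetical connection string - replace with your own server and database.
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
        cursor = conn.cursor()
        cursor.execute("""
            SELECT s.name, t.name, SUM(p.rows)
            FROM sys.tables t
            JOIN sys.schemas s ON s.schema_id = t.schema_id
            JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
            GROUP BY s.name, t.name
        """)
        for schema_name, table_name, row_count in cursor.fetchall():
            print(f"UPDATE STATISTICS [{schema_name}].[{table_name}] WITH {sample_clause(row_count)};")
    A fuller version would also try to identify statistics that are never used so they can be dropped, as described above, but even table-by-table sampling like this would be an improvement over a blanket full scan of everything.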
    Come on SQL Server Community. Show me some love :)
    Bill Thacker

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I have referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is described in
    Re: OSB: What is best practice for reading configuration information
    Another could be uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers to the following.
    1] I have an .xsd file which represents the configuration data. The structure of the XSD is
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment...
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to the dev/stage/prod instance:
    OSB Project Folder
    |
    |--- Dev
    |      |-- Dev_Config_file.xml
    |
    |--- Stage
    |      |-- Stage_Config_file.xml
    |
    |--- Prod
           |-- Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value which specifies the current server type (dev/stage/prod) on which the OSB message flow is running - say, some construct which acts as a global configuration and is accessible inside the OSB message flow. If the value of that global variable is Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This thread - Re: OSB: What is best practice for reading configuration information - suggests designing a web application which serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear... I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    e.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage. Then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
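    Just to illustrate the idea outside OSB (inside the pipeline you would express this with the XPath shown above; this is only a Python sketch using the hypothetical folder and file names from the question):
        # Map the environment segment of the inbound URI to the matching config file.
        ENV_CONFIG = {
            "dev": "Dev/Dev_Config_file.xml",
            "stage": "Stage/Stage_Config_file.xml",
            "prod": "Prod/Prod_Config_file.xml",
        }

        def config_for_uri(uri):
            # Assumes a convention like /service/<env>/<service_name>,
            # i.e. the environment name is the second path segment.
            segments = [s for s in uri.strip("/").split("/") if s]
            env = segments[1].lower() if len(segments) > 1 else None
            return ENV_CONFIG.get(env)

        print(config_for_uri("/service/stage/service1"))  # Stage/Stage_Config_file.xml
    The same "environment name is a fixed path segment" assumption is what the XPath expression above relies on.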
