Hotfix Management | Best Practices | WCS | J2EE environment

Hi All,
Trying to establish some best practices around hotfix management in a J2EE environment. After some struggle, we managed to handle the tracking of individual hotfixes using one of our home-grown tools. However, the issue remains of how to manage an automated build of these hotfixes, rather than building them manually as we currently do.
Suppose we need to hotfix a particular jar file in a production environment: I need to understand how to build just that jar. I understand we can label the related code (which in this case could be just a few Java files). Suppose this jar contains 10 files, of which 2 need to be hotfixed. The challenge is to come up with a build script which builds -
- ONLY this jar
- the jar with the 8 old files and the 2 new files
- the jar using whatever dependent jars are required
- and which is generic enough to handle the hotfix build of any jar in the system (see the sketch below)
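Not WCS-specific, but to make the idea concrete, below is a minimal sketch of such a generic script in shell. The DEP_DIR and BASELINE_DIR locations and the script name are assumptions (DEP_DIR holding the dependent jars, BASELINE_DIR the last released copy of every jar), and the labelled sources are assumed to be checked out of your SCM already; treat this as a starting point, not a definitive implementation.

    #!/bin/sh
    # hotfix-jar.sh -- rebuild ONE jar with a few patched classes (sketch).
    # Usage: hotfix-jar.sh <jarname> <patched .java files...>
    # Assumes: DEP_DIR holds the dependent jars, BASELINE_DIR the released jars.
    set -e
    JAR_NAME=$1; shift
    WORK=`mktemp -d`

    # Classpath = the old jar itself (so the 2 patched classes can still see
    # the 8 unchanged ones) plus whatever dependent jars are required.
    CP="$BASELINE_DIR/$JAR_NAME":`echo "$DEP_DIR"/*.jar | tr ' ' ':'`

    # 1. Compile only the patched sources.
    javac -classpath "$CP" -d "$WORK" "$@"

    # 2. Start from the released jar, so all unchanged entries are kept.
    cp "$BASELINE_DIR/$JAR_NAME" "$JAR_NAME"

    # 3. Overwrite just the recompiled entries inside the jar.
    jar uf "$JAR_NAME" -C "$WORK" .

Because step 2 updates the released jar in place rather than rebuilding from all sources, the script stays generic: any jar in the system can be hotfixed the same way, as long as its baseline copy and dependency list are resolvable.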
Pointers, especially ones in line with a WCS environment, would be very much appreciated!
Regards,
Mrinal Mukherjee

Moderator Action:
This post has been moved from the SysAdmin Build & Release Engineering forum to the Java EE SDK forum, hopefully for closer topic alignment.
@OP: I don't think device driver build/release engineering is what you were intending. Additionally, your partial post that was accidentally created as a duplicate of this one has been removed before it confuses anyone.

Similar Messages

  • Looking for best practice on J2EE development environment

    Hi,
    We are starting to develop with J2EE. We are looking for best practices on a J2EE development environment. Our concerns are mainly code sharing and deployment.
    Thanks, Charles

    To support "code sharing" you need an integrated source code control system. Several options are out there, but CVS (https://www.cvshome.org/) is a nice choice: it's completely free and it runs on Windows, Linux, and most UNIX variants.
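    For illustration, the day-to-day code-sharing cycle with CVS looks roughly like this (the repository path and module name here are made up):

        # one-time: get a working copy of the shared module
        # (for a pserver repository, run "cvs -d ... login" first)
        cvs -d :pserver:user@cvshost:/cvsroot checkout myapp

        # daily cycle: pull in everyone else's changes, then publish yours
        cvs update -d
        cvs commit -m "fix order total calculation"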
    Your next decision is on IDE and application server. These are usually from a single "source". For instance, you can choose Oracle's JDeveloper and deploy to Oracle Application Server; go with the free NetBeans IDE and Jakarta Tomcat; or use IBM's WebSphere and their application server. Selection of IDE and AppServer will likely result in heated debates.

  • Hotfix Application Best Practices

    I have a twofer with regards to applying a hotfix. We deployed Config Manager 2012 RTM, upgraded to SP1, and then upgraded to R2. We have never applied a hotfix or CU before, so there is a bit of mystery about what the best practices are.
    We are applying hotfix 2910552 to address slow imaging speeds. These questions are pretty basic but I wanted to get some informed opinions.
    What is the best rollback procedure in the event of problems? I consider the hotfix low risk, but there is some concern from others above me. We are planning on taking snapshots of the 3 site servers and the DB server in our hierarchy, but not the DPs. Does this seem sound, or is there a better technique?
    How essential is it to update the clients in our environment in a timely fashion, or at all? I am going to have the packages created, but I did not know whether I should deploy them immediately. Our server group has some concerns about applying the patch to the Config Manager clients on our servers during our patching windows.
    Any insight is appreciated. Thanks!
    Bryan

    There's more risk in taking snapshots, as they are completely and explicitly unsupported and would almost certainly cause issues, particularly since your DB is separate from your site server.
    Rollback is simply uninstalling the hotfix. That hotfix addresses a very niche issue that only manifests itself during OSD; thus, it's only important to roll it out to clients before you reimage them in a refresh scenario. An alternative rollback is simply reinstalling the site and restoring your DB. This sounds painful, and while it would take a bit of time, it's actually rather painless and works quite well.
    This all raises a larger question, though: you really should just do CU3. There are tons of other meaningful and impactful fixes in the CUs that will improve the overall stability and even the functionality of the site and clients.
    Concerns about applying hotfixes should be addressed by performing the update in a lab first. There is no other way to comfort risk-averse folks except by showing them that it works. Additionally, you can put forth evidence from the community that CU application to ConfigMgr is almost always smooth and uneventful. Can something go wrong? Of course. I could get hit by lightning sitting in my chair, but that doesn't mean I stay in bed all day.
    Jason | http://blog.configmgrftw.com | @jasonsandys

  • Best Practice for MUD Environment

    Hi Guys,
    I initially thought of using Merge Repository as an option for a MUD environment.
    But I found that while merging repositories, you have to accept changes from either the Modified or the Current repository.
    What if I have 2 developers working in parallel in a single Presentation folder?
    Then I thought a project-based MUD implementation would be the only option, but with that, developers have the power to keep or remove changes from other developers.
    Now I am confused about how I can get multiple users to develop a single RPD.
    Please let me know what best practice is used.
    Thanks
    Saurabh

    Below are some explanations. Follow the links if you want more information. Personally, I prefer to set up a MUD environment.
    Software Configuration Management
    By default, the Oracle BI repository development environment is not set up for multiple users. However, online editing makes it possible for multiple developers to work simultaneously, though this may not be an efficient methodology, and can result in conflicts, because developers can potentially overwrite each other's work.
    To develop a repository in a concurrent versioning environment, you have several choices:
    * First of all, you can send the repository to the developer, keep a copy, retrieve it after modification, and perform a [Merge Repository|http://gerardnico.com/wiki/dat/obiee/obiee_repository_merge].
    * Second, you can set up a [multiuser environment (MUD)|http://gerardnico.com/wiki/dat/analytic/obiee/multiuser_environment], which uses the notion of Projects to split the work area. It permits developers to modify a repository simultaneously and then check in their changes.
    The import option, which permits importing a subset of a repository into another repository, works but is deprecated.
    Success
    Nico

  • SSL: Best Practice in HA environment

    Hi all,
    The customer wants to know if there is any best-practice doc available regarding SSL configuration in a WebLogic Server (10.3.6) HA environment with a hardware load balancer. Additional questions:
    1. Configuration of the location of the certificate files: which practice is better, shared storage or local to each Managed Server?
    2. More information regarding the generation of certificates.
    Thanks in advance, Moh

    Hi Moh,
    Did you have a look at this?
    http://www.oracle.com/technetwork/database/availability/maa-fmwsharedstoragebestpractices-402094.pdf
    Search for certs.
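    Regarding question 2, per-server certificates are typically generated with the JDK's keytool; a minimal sketch (the alias, DN, and passwords are placeholders):

        # generate a private key and self-signed cert in a JKS identity store
        keytool -genkeypair -alias managed1 -keyalg RSA -keysize 2048 \
                -dname "CN=managed1.example.com,O=Acme" \
                -keystore identity.jks -storepass change_me -validity 365

        # produce a CSR for your CA; import the signed cert afterwards
        keytool -certreq -alias managed1 -file managed1.csr \
                -keystore identity.jks -storepass change_me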
    Regards Peter

  • Working with version management and promotion management best practices BO 4.1

    Hi Experts
    I wondered if anybody knows of a document or something about best practices for working with version management and promotion management in BO 4.1?
    Our Environment includes two servers. The first one is our development and test server. The second server is our prod system.
    Now on the dev server we have basically two folders called dev and test. We control access to them with a right system based on the folder structure.
    My question now is how you would work in this scenario (third server is not an option). Main target is to have as few reports as possible. Therefore we try to work with the version management system and only have one version of each report in the dev folder of the cms. But this is where problems start. Sometimes the newest version is not the version we want to publish to the test folder or even prod server.
    How would you publish the report to the other folder? Make a copy of the concerned report (transport within the same system via promotion management is not possible)? Also, how would you use version management in regard to the folder structure? Only use version management in the dev folder and export reports to the test folder (out of VMS control), or also use VMS in the test folder, and how would that work?
    Furthermore, I’d be interested in learning best practices for promotion management. I found out that promoting a report that doesn’t exist in prod doesn’t cause any problems, but as soon as an older version already exists, there is only partial success and the prod folder gets renamed to “test”.
    Any suggestions on how to handle these problems?
    Thank you and regards
    Lars

    Thank you for your answer.
    So you are basically proposing to work with the VMS in the dev folder and publish the desired version to the test folder. And the test folder is out of version control in this scenario, if I understood you correctly (like simple data storage)?
    And how would you suggest promoting reports to the prod system? Simply by promoting the desired version from the dev folder directly to prod? This would probably lead to inconsistency, because we would need to promote from the dev system to test and from dev to prod, instead of promoting in a straight line from dev over test to prod. Furthermore, it would not solve the problem of the promotion result itself (a new folder called dev will be generated in prod, but the report gets promoted to the prod folder if there was no report before).
    Thank you for the link. I came across this page just a few days ago and also found lots of other tutorials and papers describing the basic promotion process. The promotion process in general is clear to me, but I wondered if it is possible to change some parameters, to prevent folder renaming for example.
    Regards
    Lars

  • Best Practice for Production environment

    Hello everyone,
    Can someone share the best practice for a production environment? Or is there an SAP standard best practice to follow in a production landscape?
    I understand there are best practices available for implementation, migration, and upgrade, but I was unable to find one for a productive landscape.
    thanks.

    Hi Siva,
    What best practice are you looking for? If you can be more specific with your question, we can provide an appropriate response.
    From my Basis experience, here are some of the best practices:
    1) The productive landscape should offer high availability to the business. For this you may set up DR or HA, or both.
    2) It should have backups configured, for which a restore has already been tested.
    3) It should have all monitoring set up, viz. application, OS, and DB.
    4) The productive client should not be modifiable.
    5) Users in the production landscape should have appropriate authorization based on SoD; there should not be any SoD conflicts.
    6) Transports to production should be highly controlled: any transport to production should be moved only with appropriate Change Board approvals.
    7) Relevant database and OS security parameters should be tested before go-live and enabled.
    8) Pre-go-live and post-go-live checks should have been performed on the production system.
    9) EWA should be configured, at least for the production system.
    10) Production system availability using DR should have been tested.
    Hope this helps.
    Regards,
    Deepak Kori

  • Solution Manager best practices about environments

    Hello,
    we intend to use Solution Manager 4.0.
    My question: I wonder whether we need a single instance of SM (production), or multiple instances (one development SM, where development and customizing would be performed, and one production SM populated with transport requests coming from the development SM)?
    What are the best practices ?
    Thank you.
    Regards,
    Fabrice

    Dear Fabrice,
    In principle you do not need 2 instances of Solution Manager; one instance is sufficient for monitoring all the satellite systems.
    However, if you intend to have customized ABAP on Solution Manager, it might be a good idea to do so in a different client in the same instance, keeping that client as a development client.
    Most of the customizing in Solution Manager is non-transportable, hence it should be done directly in the productive client.
    Hope this answers your queries.
    Regards
    Amit

  • What are project management best practices?

    I created a test project in Premiere Elements 12 and saved it in a directory named "Michaels Posters". Then I archived the project to this directory and it created a "Copied_My\ new\ video\ project1" directory with all of the media files. Then I added a video clip to the project, archived it again, and it created the "Copied_My\ new\ video\ project1_001" folder below.
    My first real project will be a highlights video of my 4-year-old for 2013. This will involve editing the same project several nights a week, for maybe a couple of months. This would result in numerous "Copied_My\ new\ video\ project1_NNN" directories being created, assuming I archive the project each night.
    So what are the best practices for managing a larger project like this, while avoiding using a lot of disk space for the same project?
    Michaels\ Posters/
    ├── Adobe\ Premiere\ Elements\ Preview\ Files
    │   └── My\ new\ video\ project1.PRV
    ├── Copied_My\ new\ video\ project1
    │   ├── Adobe\ Premiere\ Elements\ Preview\ Files
    │   ├── Encoded\ Files
    │   └── Layouts
    ├── Copied_My\ new\ video\ project1_001
    │   └── Adobe\ Premiere\ Elements\ Preview\ Files
    ├── Encoded\ Files
    │   └── My\ new\ video\ project1.prel
    ├── Layouts
    └── Media\ Cache\ Files

    I do work with the LAST archived project file, which contains ALL necessary resources to edit the video.  But then if I add video clips to the project, these newly added clips are NOT in the archived project, so I archive it again.
    The more I think about it, the more I like this workflow. One disadvantage, as you said, is duplicate videos and resource files. But a couple of advantages I like are:
    1. You can revert to a previous version if there are any issues with a newer version, e.g., project corruption.
    2. You can open the archived project ANYWHERE, and all video and resource files are available.
    In terms of a larger project containing dozens of individual clips, like my upcoming 2013 highlights video of my 4-year-old, I'll delete older archived projects as I go and keep maybe a couple of previous archived projects, in case I want to revert to them.
    If you are familiar with the lack of project management in iMovie, then you will know why I am elated to be using Premiere Elements 12 and to be able to manage projects at all!
    Thanks again for your help, I'm looking forward to starting my next video project.

  • Multiple room management -- best practice -- server side http api update?

    Hi Folks, 
    Some of the forum postings on multiple room management are over a year old now. I have a student/tutor chat application which has been in the wild for 5 months now and appears to be working well. There is a single tutor per room, multiple chats, and soon a whiteboard per student, which is shared with the tutor in a tabbed UI.
    It is now time to fill out the multiple-tutor functionality, which I considered and researched when building, but about which I did not come to any conclusions. I'm leaning towards a server-side implementation. Is there an impending update to the HTTP API?
    Here is what I understand to be the flow:
    1) server-side management of who is accessing the room
    2) load balancing and managing room access (one-time user and owner sessions) from the server side
    3) for my implementation, a tutor will need to log in to the room in order for it to be available
    4) any reconnection would in turn need to be managed by the server side, and is really a special case of room load balancing.
    My fear is that at some point I'm going to need access to the number of students in the room, or similar, and this is not available, so I'll need client functionality which will need to update the server-side manager.
    As well, I'm concerned that delays in server-side access might create race conditions in a reconnect situation: the user attempts to reconnect, but the server-side manager thinks that the user is already connected.
    Surely this simple room management has been built before; does anyone have any wisdom they can impart? Is there any best-practice guidance, or any samples?
    Thanks,
    Doug

    Hi Raff, Thanks a ton for the response.
    I wasn't clear on what I was calling load balancing. What I mean by this is room assignment for student clients. We have one tutor per room. There are multiple students per room, but each is in their own one-on-one chat with the tutor.
    I'm very much struggling with where to do the room assignment / room management: on the server side, or on the client side (if that is even possible). In my testing it is taking a minimum of 10 seconds to get a list of rooms (4 virtually empty rooms) and to query the users in a single room (also with a minimum of users/nodes in the queried room). If after this point I 'redirect' the student to the least full room, then the student incurs the cost of creating a new session and logging into the room. As well, I intend to do a bit of XML parsing and other processing, so that 10 seconds is likely to grow.
    Would I see better performance trying to do this in the client?
    As far as the server side goes, at what point does a room go to 'not-active'?
    When I'm querying the roomList, I am considered one of the 'OWNER' users in the UserLists. At what point is it safe to assume that I have left the room?
    Is there documentation on the meaning and lifecycle of the different status codes?  not-active,  not-running, and ok?  Are there others?
    How much staleness can I expect from the server-side queries?
    As far as feature set, the only thing that comes to mind is xpath and or wild card support for getNode() but i think this was mentioned in other posts.
    Regarding the reconnection issues, I am timing out the student after inactivity, and this is probably by and large the bulk of my reconnect use cases. This, and any logout interaction from the student, presents a use case where I may want to reassign the returning student to the same room as before. I can envision scenarios of a preferred tutor if available, etc. In this case, I'll need to know the list of rooms. In terms of reconnection failover, this is not an LCCS / FMS issue.
    Thanks again for responding.

  • Logging Best Practices in J2EE

    Hi,
    I've been struggling with Apache Commons Logging and Log4J Class Loading problems between module deployments in the Sun App Server. I've also had the same problems with other App servers.
    What is the best practice for logging in J2EE?
    I think it may be java.util.logging, but what is the best practice for providing a different logging config (i.e., levels for classes and output) for each deployed module, and how would you structure that in the EAR?
    Thanks in advance.
    Graham

    I find that java.util.logging works fine. For configuration of the log levels I use a LifeCycle module that sets up all my levels and handlers. That way I can set up the server.policy to allow only the LifeCycle module jar to configure logging (with a codebase grant), but no other normal modules can.
    The LifeCycle module gets its properties as event data with the INIT event and configures the logging on the STARTUP event.
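    If you'd rather avoid a LifeCycle module, the plain file-based route is to point the JVM at a java.util.logging configuration file with -Djava.util.logging.config.file=logging.properties. A minimal sketch with per-module levels (the package names are hypothetical):

        # logging.properties -- per-package levels for two deployed modules
        handlers = java.util.logging.ConsoleHandler
        .level = INFO
        # the handler must pass fine-grained records through
        java.util.logging.ConsoleHandler.level = ALL
        com.example.modulea.level = FINE
        com.example.moduleb.level = WARNING

    Note this configuration is JVM-wide rather than per-EAR, which is exactly why the class-loading clashes with Commons Logging/Log4J are painful; the LifeCycle approach above centralizes the same settings programmatically.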
    Hope this helps.

  • Best Practices: Clustered Author Environment

    Hello,
    We are setting up our CQ 5.5 infrastructure in 3 datacenters, ultimately with an authoring instance in each (a total of three). Our plan was to cluster the three machines using “share nothing”, and each would replicate to the publish instances in all datacenters. To eliminate confusion within our organization, I’d like to create a single URL for our authors so they wouldn’t have to remember to log into 3 separate machines.
    So instead of providing cqd1.acme.com, cqd2.acme.com, and cqd3.acme.com, I would distribute something like “cq5.acme.com”, which would resolve to one of the three author instances. While that’s certainly possible by putting a web server/load balancer in front of the three, I’m not so sure that’s even a best practice for supporting internal users.
    I’m wondering what have other multi-datacenter companies done (or what does Adobe recommend) to solve this issue, did you:
    Only give one destination and let the other two serve as backups? (this appears to defeat the purpose of clustering)
    Place a web server/load balancer in front of the machines and distribute traffic that way (see the sketch after this list)?
    Do nothing, e.g., provide all 3 author URLs and let the end-user choose the one closest to them geographically?
    Something else???
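    For the load-balancer option, a minimal Apache httpd mod_proxy_balancer sketch with sticky sessions (port 4502 is the default CQ author port; the ROUTEID cookie pattern is the stock Apache one, and everything here is illustrative only):

        <Proxy balancer://authors>
            BalancerMember http://cqd1.acme.com:4502 route=a1
            BalancerMember http://cqd2.acme.com:4502 route=a2
            BalancerMember http://cqd3.acme.com:4502 route=a3
            ProxySet stickysession=ROUTEID
        </Proxy>
        # pin each author to one instance so their session doesn't hop
        Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
        ProxyPass        / balancer://authors/
        ProxyPassReverse / balancer://authors/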
    It would be nice if there was a master UI an author could use that communicated with the other author machines in a way that’s transparent to the end user, so if Auth01 went down, the UI would continue to work with the remaining machines without the end user (author) even knowing the difference (e.g., not having to change machines).
    Any thoughts would be greatly appreciated.

    Day's documentation (for CRX 2.3) states in part, "whenever a write operation is received by a slave instance, it is redirected to the master instance ..."  So, all writes will always go to the master, regardless of which instance you hit.
    Day's documentation also states, "Perhaps surprisingly, clustering can also benefit the author environment because even in the author environment the vast majority of interactions with the repository are reads. In the usual case 97% of repository requests in an author environment are reads, while only 3% are writes."
    This being the case, it seems the latency of hitting a remote author would far outweigh other considerations. If I were you, New2CQ, I would probably have my users hit the instance that's nearest to them (in terms of network latency, etc.), regardless of whether it's a master or a slave.

  • SRM EBP User management - best practice followed for your customer.

    Hello All,
    What best practices are followed for SRM user management for your customers?
    (1) When an employee/buyer leaves the organisation, what actions do you take? Do you lock the users?
    (2) If you have anything interesting, please share your experiences.
    (3) What exactly do customers expect from SRM systems regarding user management?
    (4) What are the SAP audit / customer audit practices on user management?
    Any piece of information on your experience / best practices is appreciated.
    regards
    Muthu

    Thanks Peter.
    So it is happening only in SRM, right? Is there any workaround for this issue? Is SRM planning to take care of this in the future?
    In ECC I can delete the user whenever the user moves on.
    All SRM customers would be very happy if SRM provided some workaround for this issue; every customer wants to reduce cost.
    How can I find the open documents for this user in one shot? Thanks for answering this question. I have seen that our Eden Kelly report helps for shopping carts and other business objects.
    You are doing a good job on our SRM WIKI innovative topics and discussions; I appreciate it.
    Why I am raising this concern: one user left the organisation, and we want to edit the data which was entered by that user. The system will not allow us to do so after deleting the user, so we are approaching SAP for help on this.
    It is very difficult to convince the customers on these issues.
    br
    muthu

  • SOA OSB Deployment best practices in Production environment.

    Hi All
    I just wanted to know the best practices followed in a production environment for deploying OSB and SOA code. As you are aware, both require libraries, from (JDev or SOA Suite) and (OEPE and OSB) respectively. Should one rip out the libraries and package them with the ANT scripts (I am not sure, but SOA would require its internal ANT scripts and a lot of libraries to be bundled; OSB requires only a few OEPE and OSB libraries), or do we simply use one of the options below:
    1) Use the production runtime (SOA server and OSB server) to build and deploy the code. OEPE would not be present here, so we would just have to deploy the already-created sbconfig.jar (we would build this in a local environment where OEPE and OSB are installed). The code is checked out from a repository and transferred to this Linux machine.
    2) Use a Windows machine (which has access to the prod environment) with JDeveloper, OEPE, and OSB installed to build/deploy the code to the production server. The code is checked out from a repository.
    Please let us know your personal experiences with the deployment in PROD. Thanks a lot!

    There are two approaches for deployment of OSB and SOA code.
    1. Use a machine specifically for build and deployment which has access to all production environments (where deployment needs to be done). Install all the required software (OEPE, OSB, etc.) and use remote deployment for deploying the code.
    2. Bundle all the build- and deployment-related libraries and ship them as a deployment package on the target server, then proceed with the deployment.
    The most commonly followed approach is #1.
    Regards
    Vivek

  • Best practices for defining Environment Variables/User Accounts in Linux

    Hello,
    After reading through the Quick Install guide for 10gR2 on x86_64 Linux, I see that it is not recommended to define ANY variables in .bash_profile.
    I'm hoping to get a best-practices approach for defining environment variables. Right now we use the oracle Linux account for administration, including SQL*Plus. So where should the myriad variables be defined? Is it important enough to create a separate user account in Linux to support best practices?
    What variables, exactly, should be defined? It seems that LD_LIBRARY_PATH is no longer being used?
    Thanks in advance
    Doug

    Something that I've done for years on Unix/Linux boxes is to create a separate environment variable setup file for each instance on the box. This would include things like ORACLE_HOME, ORACLE_SID, etc. Then I create an alias in my .bash_profile that executes this script. As an example, I would create an orcl.env file that holds all of the environment variables for this instance. Then in my .bash_profile I would create a line like the following:
    alias orcl=". $HOME/orcl.env"
    Then from anywhere you could type orcl and you would set your environment to connect to that database.
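    For reference, a minimal orcl.env might look like this (the paths are illustrative):

        # orcl.env -- everything needed to point the shell at one instance
        export ORACLE_SID=orcl
        export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
        export PATH=$ORACLE_HOME/bin:$PATH
        # include this only if a tool you use still needs it:
        # export LD_LIBRARY_PATH=$ORACLE_HOME/lib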
    Also, if you are using 10g, something else that is really nice if you use SQL*Plus and connect to different databases without starting a new SQL*Plus session is to set the prompt in your $ORACLE_HOME/sqlplus/admin/glogin.sql file:
    set sqlprompt "_user 'at' _connect_identifier >"
    This will automatically change your command prompt to look like this:
    RALPH at ORCL >
    if you connect as GEORGE, your prompt will immediately change to :
    GEORGE at ORCL >
    This way you always know who you are connected as, and where you are connected to.
    Good luck!
