Software Change Management best practices

Hi All,
I am curious how most people here are managing the changes that they make to their SAP code that is running on the IBM i.
For example, how do you perform version control? Multiple release management? Task management? How do you manage approvals for promotion? How do you comply with SOX "separation of duties" (i.e. someone different has to promote to production).
Thanks for any insight here!
Joe.

Similar Messages

  • New EWA Report Topic - Software Change Management

    Dear Experts,
    EWA reports have a new topic named "Software Change Management". Within it there is a section called "Transport Requests with a Short Transition Time" with the following explanation:
    Explanation: Transport requests with a short transition time. The duration between the export from the development system and the import into the production system was shorter than one day.
    Total number of transport requests: 74 (total number of transport requests in production).
    Recommendation: Transport requests with a short transition time of less than one day have occurred in the last week. These transports may not have been tested sufficiently.
    All transport requests must be tested carefully before they are imported into the production system. The requests must not be developed and tested by the same person.
    Transport requests must be bundled and imported into the production system together for maintenance cycles or releases. Daily imports are only permitted in emergency situations.
    How can we check which requests have the same export-from-DEV date and import-into-PRD date?
    Best Regards

    SAP Support sent the following how-to document:
    1. Call transaction SDCCN in system SID and search for the corresponding EWA session on the 'Done' tab. Click the "lorry" icon.
    2. Press the glasses icon at DATA_PER_R3SYSTEM.
    3. Press the glasses icon at CMO_EWA_TRANSPORT_MGMT.
    4. Press the glasses icon at ET_TRANSP_W_SHORT_TRANS_TIME.
    5. Now you have the complete list of transports with a short transition time.
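    For quick checks outside the EWA report, the short-transition filter itself can be sketched in a few lines. The transport names and timestamps below are made-up examples; in a real system the export/import times would come from the transport logs extracted in the steps above.

```python
from datetime import datetime, timedelta

# Hypothetical sample data: transport request -> (export from DEV, import into PRD).
transports = {
    "DEVK900123": ("2023-05-02 09:15", "2023-05-02 16:40"),
    "DEVK900124": ("2023-05-02 10:00", "2023-05-10 08:30"),
}

FMT = "%Y-%m-%d %H:%M"

def short_transition(requests, threshold=timedelta(days=1)):
    """Return requests whose DEV-export-to-PRD-import duration is below threshold."""
    result = []
    for req, (exported, imported) in requests.items():
        delta = datetime.strptime(imported, FMT) - datetime.strptime(exported, FMT)
        if delta < threshold:
            result.append((req, delta))
    return result

# DEVK900123 moved to production the same day; DEVK900124 took over a week.
print(short_transition(transports))
```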

  • Operating system image build and management best practices?

    How do we create gold images for servers/desktops?
    What are best practices for image management?
    How do we control changes?
    How do we prevent unauthorized changes (installation of software)?
    What tools can we use for the above?

    I use MDT 2013 Lite Touch to create my images:
    http://www.gerryhampsoncm.blogspot.ie/2014/03/create-customised-reference-image-with.html
    You should use the built-in ConfigMgr Role Based Access Control to manage images afterwards (look at the Operating System Deployment Manager role).
    Gerry Hampson | Blog:
    www.gerryhampsoncm.blogspot.ie | LinkedIn:
    Gerry Hampson | Twitter:
    @gerryhampson

  • Creating Software Update Packages - Best Practice?

    I am setting up our SCCM 2012 R2 environment to begin using it for Windows Updates, however I'm not sure 100% the best method of setting it up.
    Currently my plan is to break out the deployment packages by OS, but I read/was told that I should avoid creating too many dynamic deployment packages, as every time one changes, all the computers will re-scan that package. So what I want to do is create various packages by OS and year. I would have a package that contains all updates for Windows 7 older than January 31, 2013 (assuming the package doesn't have 1000+ updates) that are not superseded/expired. Then I would create packages for the 2014 monthly updates each month, then at the end of 2014 combine them all into one package and restart the process for 2015. Is this a sound plan, or is there a better course of action?
    If this is the best-practice method, is there any way to automatically create these packages? I tried the Automatic Deployment Rules, but I cannot set a year of release, only a time frame of the release (older than 9 months), unless I am missing something. The only way I can see doing this is going into All Software Updates, filtering on my requirements, and then manually creating the package, but this would be less desirable, as after each year I would like to remove the superseded and expired updates without having to recreate the package.
    Mark.

    First, please learn what the different objects are -- not trying to be rude, just stating that if you don't do this, you will have fundamental issues. Packages are effectively meaningless when it comes to deploying updates. Packages are simply a way of grouping the binary files so they can be distributed to DPs and in turn made available to clients. The package an update is in is irrelevant. Also, you do not "deploy" update packages, and packages are not scanned by clients. The terminology is very important because there are implications that go along with it.
    What you are actually talking about above are software update groups. These are separate and distinct objects from update packages. Software Update groups group updates (not the update binaries) into logical groups that can be in-turn deployed or used for
    compliance reporting.
    Thus, you have two different containers that you need to be concerned about: update packages and update groups. As mentioned, the update package an update is in is pretty meaningless as long as the update is in a package that is also available to the clients that need it. Thus, the best way (IMO) to organize packages is by calendar period. Yearly or semi-annually usually works well. This is done more or less to avoid putting all the updates into a single package that could get corrupted or would be difficult to deploy to new DPs.
    As for update groups, IMO, the best way is to create a new group every month for each class of products. This typically equates to one for servers, one for workstations, and one for Office every month. Then at the end of every year (or some other timeframe),
    rolling these monthly updates into a larger update group. Keep in mind that a single update group can have no more than 1,000 updates in it though. (There is no explicit limit on packages at all except see my comments above about not wanting one huge package
    for all updates.)
    Initially populating packages (like 2009, 2010, 2011, etc) is a manual process as is populating the update groups. From then on, you can use an ADR (or really three: one for workstations, one for servers, and one for Office) that runs every month, scans
    for updates released in the past month, and creates a new update group.
    Depending upon your update process, you may have to go back and add additional deployments to each update group also, but that won't take too long. Also, always QC your update groups created by an ADR. You don't want IE11 slipping through if it will break
    your main LOB application.
    Jason | http://blog.configmgrftw.com
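    The monthly/yearly grouping scheme described in the reply above can be sketched as follows. The update records, product classes, and helper names are hypothetical; real data would come from the ConfigMgr console or its cmdlets, and this only illustrates the bookkeeping, including the 1,000-update ceiling per software update group.

```python
from collections import defaultdict
from datetime import date

# Hypothetical update records: (title, product class, release date).
updates = [
    ("KB500001", "server", date(2014, 1, 14)),
    ("KB500002", "workstation", date(2014, 1, 14)),
    ("KB500003", "server", date(2014, 2, 11)),
]

MAX_PER_GROUP = 1000  # a single software update group cannot exceed 1,000 updates

def monthly_groups(items):
    """Bucket updates into one group per product class per calendar month."""
    groups = defaultdict(list)
    for title, product, released in items:
        groups[(product, released.year, released.month)].append(title)
    return dict(groups)

def rollup_year(groups, product, year):
    """Combine a year's monthly groups for one product class, checking the ceiling."""
    merged = [t for (p, y, _), titles in groups.items()
              if p == product and y == year for t in titles]
    if len(merged) > MAX_PER_GROUP:
        raise ValueError("split the rollup: a group cannot exceed 1,000 updates")
    return merged

g = monthly_groups(updates)
print(rollup_year(g, "server", 2014))  # -> ['KB500001', 'KB500003']
```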

  • Working with version management and promotion management best practices BO 4.1

    Hi Experts
    I wondered if anybody knows whether there is a document about best practices for working with version management and promotion management in BO 4.1.
    Our Environment includes two servers. The first one is our development and test server. The second server is our prod system.
    Now on the dev server we basically have two folders, called dev and test. We control access to them with a rights system based on the folder structure.
    My question now is how you would work in this scenario (third server is not an option). Main target is to have as few reports as possible. Therefore we try to work with the version management system and only have one version of each report in the dev folder of the cms. But this is where problems start. Sometimes the newest version is not the version we want to publish to the test folder or even prod server.
    How would you publish the report to the other folder? Make a copy of the concerned report (transport to the same system via promotion management is not possible). Also how would you use the version management in regards to the folder structure? Only use version management in dev folder and export reports to test folder (out of vms control) or also use vms in test folder and how would that work?
    Furthermore, I'd be interested in learning best practices with promotion management. I found out that promoting a report that doesn't exist in prod doesn't cause any problems. But as soon as an older version already exists, there is only partial success and the prod folder gets renamed to "test".
    Any suggestions on how to handle these problems?
    Thank you and regards
    Lars

    Thank you for your answer.
    So you are basically proposing to work with the vms in the dev folder and publish the desired version to the test folder. And the test folder is out of version control in this scenario if I understood you correctly (like simple data storage)?
    And how would you suggest promoting reports to the prod system? Simply by promoting the desired version from the dev folder directly to prod? This would probably lead to inconsistency, because we would need to promote from the dev system to test and from dev to prod instead of promoting in a straight line from dev over test to prod. Furthermore, it would not solve the problem of the promotion result itself (a new folder called dev gets generated in prod, although the report gets promoted to the prod folder if there was no report before).
    Thank you for the link. I came across this page just a few days ago and also found lots of other tutorials and papers describing the basic promotion process. The promotion process in general is clear to me, but I wondered if it is possible to change some parameters to prevent folder renaming, for example.
    Regards
    Lars

  • Solution Manager best practices about environments

    Hello,
    we intend to use Solution Manager 4.0.
    My question: I wonder whether we need a single instance of SM (production) or multiple instances (one development SM where development and customizing will be performed, and one production SM populated with transport requests coming from the development SM)?
    What are the best practices ?
    Thank you.
    Regards,
    Fabrice

    Dear Fabrice,
    In principle you do not need two instances of Solution Manager. One instance is sufficient for monitoring all the satellite systems.
    However, if you intend to have customized ABAP on Solution Manager, then it might be a good idea to do so in a different client in the same instance, keeping that client as a development client.
    Most of the customizing in Solution Manager is non-transportable; hence it should be done directly in the productive client.
    Hope this answers your queries.
    Regards
    Amit

  • What are project management best practices?

    I created a test project in Premiere Elements 12, and saved it in a directory named "Michaels Posters".   Then I archived the project to this directory and it created a "Copied_My\ new\ video\ project1" directory with all of the media files.  Then I added a video clip to the project, archived it again, and it created the "Copied_My\ new\ video\ project1_001" folder below.
    My first real project will be a video of highlights of my 4-year-old for 2013. This will involve editing the same project several nights a week, for maybe a couple of months. This would result in numerous "Copied_My\ new\ video\ project1_NNN" directories being created, assuming I archive the project each night.
    So what are the best practices for managing a larger project like this while avoiding using a lot of disk space for the same project?
    Michaels\ Posters/
    ├── Adobe\ Premiere\ Elements\ Preview\ Files
    │   └── My\ new\ video\ project1.PRV
    ├── Copied_My\ new\ video\ project1
    │   ├── Adobe\ Premiere\ Elements\ Preview\ Files
    │   ├── Encoded\ Files
    │   └── Layouts
    ├── Copied_My\ new\ video\ project1_001
    │   └── Adobe\ Premiere\ Elements\ Preview\ Files
    ├── Encoded\ Files
    │   └── My\ new\ video\ project1.prel
    ├── Layouts
    └── Media\ Cache\ Files

    I do work with the LAST archived project file, which contains ALL necessary resources to edit the video.  But then if I add video clips to the project, these newly added clips are NOT in the archived project, so I archive it again.
    The more I think about it, the more I like this workflow.  One disadvantage as you said is duplicate videos and resource files.  But a couple of advantages I like are:
    1. You can revert to a previous version if there are any issues with a newer version, e.g., project corruption.
    2. You can open the archived project ANYWHERE, and all video and resource files are available.
    In terms of a larger project containing dozens of individual clips, like my upcoming 2013 highlights video of my 4-year-old, I'll delete older archived projects as I go and save maybe a couple of previous archived projects, in case I want to revert to them.
    If you are familiar with the lack of project management in iMovie, then you will know why I am elated to be using Premiere Elements 12 and being able to manage projects at all!
    Thanks again for your help, I'm looking forward to starting my next video project.

  • Software Change Management - usage in SAP

    Hi,
    Can anybody please help -
    Do we use version control tools like Rational Rose or SCM (Software Configuration Management) in SAP?
    Thanks in advance
    Swamy

    Hi,
    Version Management is there in SAP also. SAP maintains Change history for most of its transactions.
    Version Management
    Use
    Version management in the Integration Repository permits versioning as follows:
    • The Integration Builder manages multiple versions of a software component in an Integration Repository. In this way, different product versions can communicate with each other. Each design object is created in the context of a software component version that represents a unit of a product that can be shipped.
    • Objects can also have new object versions when changes are made within a software component version. You also have the option of releasing changes to multiple objects simultaneously.
    Versioning in the Integration Repository ensures that objects are shipped consistently, and as part of a product.
    However, there are no software component versions in the Integration Directory, because the configuration content is not shipped. Nevertheless, you can also release changes to the configuration for the entire runtime environment here. When you release the objects, the Integration Server updates the directory runtime cache.
    Integration
    You can also export objects of an Integration Repository or an Integration Directory to import them into another repository or directory (for example, during system relocation). In doing so, versioning of the corresponding software component version is taken into account.
    Features
    Change Lists
    The Integration Builder supports object versioning for both the repository and for the directory using the user-specific change lists. When an object is saved for the first time, a new object version is created, which is added to the change list. When an object in the change list is activated, the object version is closed and is made visible for other users.
    Products and Software Component Versions
    A product can have multiple versions. Each product version is a shipment unit visible for customers. The software component versions used in a product version can be called in the System Landscape Directory.
    In the context of SAP Exchange Infrastructure, the products and software component versions that are of interest are those that are to exchange messages with each other. When development starts they must be imported from the System Landscape Directory into the Integration Builder.
    Release Transfer
    In the transition from one software component version to a new software component version, you can either transfer all, or just some of the design objects from the previous version. This release transfer also enables you to transfer objects to older software component versions.
    Regards,
    Renjith Michael.

  • Multiple room management -- best practice -- server side http api update?

    Hi Folks, 
    Some of the forum postings on multiple room management are over year old now.  I have student/tutor chat application which has been in the wild for 5 months now and appears to be working well.  There is a single tutor per room, multiple chats and soon to be a whiteboard per student, which is shared with the tutor in a tabbed UI. 
    It is now time to fill out the multiple tutor functionality, which I considered and researched when building, but did not come to any conclusions.   I'm leaning towards a server side implementation.  Is there an impending update to the http api?
    Here is what I understand to be the flow:
    1) server-side management of who is accessing the room
    2) load balancing and managing room access (one-time user and owner sessions) from the server side
    3) for my implementation, a tutor will need to log in to the room in order for it to be available
    4) any reconnection would in turn need to be managed by the server side, and is really a special case of room load balancing
    My fear is that at some point I'm going to need access to the number of students in the room or similar, and this is not available, so I'll need client functionality that updates the server-side manager.
    As well, I'm concerned that delays in server-side access might create race conditions in a reconnect situation: the user attempts to reconnect, but the server-side manager thinks that the user is already connected.
    Surely this simple room management has been built, does anyone have any wisdom they can impart?  Is there any best practice guidance or any samples?
    Thanks,
    Doug

    Hi Raff, Thanks a ton for the response.
    I wasn't clear on what I was calling load balancing. What I mean by this is room assignment for student clients. We have one tutor per room. There are multiple students per room, but each is in their own one-on-one chat with the tutor.
    I'm very much struggling with where to do the room assignment / room management, on the server side or on the client side (if that is even possible). In my testing it is taking a minimum of 10 seconds to get a list of rooms (4 virtually empty rooms) and to query the users in a single room (also a minimal number of users/nodes in the queried room). If after this point I 'redirect' the student to the least-full room, then the student incurs the cost of creating a new session and logging into the room. As well, I intend to do a bit of XML parsing and other processing, so that 10 seconds is likely to grow.
    Would I see better performance trying to do this in the client?
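    As a rough sketch of what the server-side room assignment discussed above might look like: send each new student to the least-full active room that has a tutor, reusing the previous room on reconnect. The room structure and the `assign_room` helper are hypothetical, and occupancy counts from a server-side query may be stale, so the result should be treated as advisory rather than authoritative.

```python
def assign_room(rooms, student_id, capacity=10):
    """rooms: dict room_id -> {"tutor": bool, "students": set of ids}."""
    # Reconnect case: if the student is already recorded somewhere, reuse that room.
    for room_id, info in rooms.items():
        if student_id in info["students"]:
            return room_id
    # Otherwise pick the least-full room that has a tutor and spare capacity.
    candidates = [(len(info["students"]), room_id)
                  for room_id, info in rooms.items()
                  if info["tutor"] and len(info["students"]) < capacity]
    if not candidates:
        return None  # no tutor logged in anywhere: nothing to assign
    _, best = min(candidates)
    rooms[best]["students"].add(student_id)
    return best

rooms = {
    "roomA": {"tutor": True, "students": {"s1", "s2"}},
    "roomB": {"tutor": True, "students": {"s3"}},
    "roomC": {"tutor": False, "students": set()},  # no tutor: never a candidate
}
print(assign_room(rooms, "s4"))  # -> roomB (fewest students with a tutor present)
```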
    As far as the server side, at what point does a room go to 'not-active'?
    When I'm querying the roomList, I am considered one of the 'OWNER' users in the UserLists.  At what point can it be safe to assume that I have left the room? 
    Is there documentation on the meaning and lifecycle of the different status codes?  not-active,  not-running, and ok?  Are there others?
    How much staleness can I expect from the server-side queries?
    As far as feature set, the only thing that comes to mind is xpath and or wild card support for getNode() but i think this was mentioned in other posts.
    Regarding the reconnection issues, I am timing out the student after inactivity, and this is probably by and large the bulk of my reconnect use cases. This, and any logout interaction from the student, presents a use case where I may want to reassign the student to the same room as before. I can envision scenarios of a preferred tutor if available, etc. In this case, I'll need to know the list of rooms. In terms of reconnection failover, this is not an LCCS/FMS issue.
    Thanks again for responding.

  • SRM EBP User management - best practice followed for your customer.

    Hello All,
    What are the best practices for SRM user management followed for your customers?
    (1) When an employee/buyer leaves the organisation, what actions do you take? Do you lock the users?
    (2) If you have anything interesting, share your experiences.
    (3) What exactly do customers expect from SRM systems regarding SRM user management?
    (4) What is the SAP audit/customer audit practice on user management?
    Any piece of information on your experience/best practice is appreciated.
    regards
    Muthu

    Thanks Peter.
    So it is happening only in SRM, right?
    Is there any workaround for this issue?
    Is SRM planning to take care of this in future?
    In ECC I can delete the user whenever the user moves on.
    All SRM customers would be very happy if SRM provided a workaround for this issue.
    Every customer wants to reduce cost.
    How can I find the open documents for this user in one shot?
    Thanks for answering this question.
    I have seen that your Eden Kelly report helps for shopping carts and other business objects.
    You are doing a good job on the SRM wiki with innovative topics and discussions; I appreciate it.
    The reason I am raising this concern is that one user left the organisation and we want to edit the data entered by that user; the system will not allow us to do so after deleting the user.
    So we are approaching SAP for help.
    It is very difficult to convince customers on this issue.
    br
    muthu

  • Hotfix Management | Best Practices | WCS | J2EE environment

    Hi All,
    Trying to establish some best practices around hotfix management in a J2EE environment. After some struggle, we managed to handle the tracking of individual hotfixes using one of our home-grown tools. However, the issue remains of how to manage the 'automated' build of these hotfixes, rather than doing it manually, as we currently do.
    Suppose we need to hotfix a particular jar file in a production environment; I would need to understand how to build 'just' that particular jar. I understand we can label the related code (which in this case could be just a few Java files). Suppose this jar contains 10 files, of which 2 files need to be hotfixed. The challenge is to come up with a build script which builds -
    - ONLY this jar
    - the jar with 8 old files and 2 new files.
    - the jar using whatever dependent jars are required.
    - the hotfix build script needs to be generic enough to handle the hotfix build of any jar in the system.
    Pointers, more in line with a WCS environment would be very much appreciated!
    Regards,
    Mrinal Mukherjee
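    The selective repack described above (the jar with 8 old files and 2 new files) might be sketched like this. The function and paths are hypothetical; it only illustrates replacing a few entries in an existing jar, not a full build-system integration (it does not recompile, re-sign the jar, or recompute the manifest).

```python
import zipfile

def patch_jar(original_jar, patched_jar, replacements):
    """replacements: dict of entry-name-in-jar -> path to the new compiled file."""
    with zipfile.ZipFile(original_jar) as src, \
         zipfile.ZipFile(patched_jar, "w", zipfile.ZIP_DEFLATED) as dst:
        for entry in src.namelist():
            if entry in replacements:
                with open(replacements[entry], "rb") as fh:
                    dst.writestr(entry, fh.read())  # new bytes for the hotfixed class
            else:
                dst.writestr(entry, src.read(entry))  # carry old bytes unchanged
```

In practice the replacements dict would be produced by compiling just the labelled source files against the dependent jars, which keeps the script generic for any jar in the system.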

    Moderator Action:
    This post has been moved from the SysAdmin Build & Release Engineering forum
    To the Java EE SDK forum, hopefully for closer topic alignment.
    @ O.P.:
    I don't think device-driver build/release engineering is what you were intending.
    Additionally, your partial post that was accidentally created as a duplicate to this one
    has been removed before it confuses anyone.

  • IOS Software Upg(s) - Best Practices?

    Hello, I have several 2960 switches that are all running 12.2(35)SE5.
    I know that 12.2(50)SE1 is out there, and I am curious when it is best to upgrade the software version?
    If my switch is still PRE-production, am I better off to always upgrade to the current software on a switch if it still has not yet been deployed?
    I saw in the release notes for (50)that there were some VLAN authentication updates, and we are in the midst of designing a VLAN deployment using some L3 switches.
    Without KNOWING if I need a specific feature update, are there any best practices that may help guide me?
    As mentioned, because this switch is not yet in production, I'm thinking that the time is right if I'm going to upgrade the IOS.

    Best practice is NOT jumping into a new IOS code unless you need a feature provided by the new IOS or your current IOS has a critical bug that is corrected with the new IOS.
    With that said, it seems the new feature is worth having in your environment so 50SE1 may fit the bill. It's your call if you want to be exposed to new bugs.
    HTH,
    Edison.

  • ODP + Client Version Management Best Practices

    I am working with a client who is using the Oracle Developer Tools for Visual Studio to develop their application. The IT folks deploy a new server with the latest version of the Oracle client (i.e. 11.1.0.7.0) and the developers are using the latest ODT (i.e. 11.1.0.7.20). As such the publisher policy never gets a chance to add any value. Because of the unusual versioning scheme we always end up having to copy the Oracle.DataAccess.dll from the Oracle client into the deployment folder and add an assembly binding redirect. If we do not we get the following exception:
    The provider is not compatible with the version of Oracle client
    As you might expect this is VERY annoying to deal with as it adds complexity to our deployment process. We use TNS names to connect and those files are stored in the client\network\admin folder as well. Can someone please help me clear up what the best practice is to deploy an application developed using ODT/ODP.NET so that we don't have to hack the Oracle.DataAccess.dll every time?
    Thanks,
    Colin
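    For reference, the assembly binding redirect mentioned above normally lives in the application's app.config/web.config. This is only a sketch with illustrative version numbers; verify the publicKeyToken and old/new versions against the Oracle.DataAccess.dll actually installed on the target server.

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Redirect the version referenced at build time to the version
             installed with the server's Oracle client. Version numbers and
             the publicKeyToken below are examples: check them against the
             deployed Oracle.DataAccess.dll before using. -->
        <assemblyIdentity name="Oracle.DataAccess"
                          publicKeyToken="89b483f429c47342"
                          culture="neutral" />
        <bindingRedirect oldVersion="2.111.7.20"
                         newVersion="2.111.7.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```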

  • X2100M2 Embedded Lights Out Manager best practice

    Hi guys,
    I'm worried about the best practice for configuring network interfaces on an X2100M2 running Solaris 5.10 for the Embedded Lights Out Manager. I hope you can help; I haven't found any document that explains the best practice.
    Here is the situation :
    I have 4 network interfaces but I only need two of them, so I decided to use the bge0 and bge1 interfaces.
    bge0 is the server interface, with an IP ending in .157
    bge1 is the ELOM interface, with an IP ending in .156
    In the past it was the reverse: bge0 was the ELOM with .156 and bge1 was the server network interface with .157
    Could you please let me know what the best practice is? Must interface 0 be the server one? Is it possible to have network problems with this kind of configuration?
    Thanks
    Cheers,

    Hi guys,
    No one has a clue? I've got some Duke Dollars to offer...
    Thanks

  • CM (Configuration Management) Best Practices for DBs?

    Hi:
    I've got a question on a topic I haven't seen discussed before. I have a project with application code and PL/SQL code and an Oracle 10g database that has triggers, constraints, etc. For a Java or C# project the concept of CM and a central code repository works, largely because the CM tool is able to enforce limits on who can do what to a given resource over time. Developers MUST check out the source files before modifying and adding back to the CM tree.
    Database objects are more on an honor system. Even if you put the code for the tables, triggers, constraints, packages etc. under CM, nothing prevents a developer or multiple developers from modifying those objects in the development database and never updating the create scripts that are in the CM tree. So given that there are thousands of systems out there with Java/C# or whatever front ends and PL/SQL back ends in databases that have all sorts of DB objects in them, what are the best CM practices? How does a developer know the package he's about to modify is the last one that got pushed to the production server?
    Constraints are problematic in several unique ways. They can be created in several places by different syntax (in the table create command; as a stand alone command after the table exist; named or system generated name if defined in the table definition). Things like SQL Developer and Toad can create code trees that contain all the scripts needed to build a database from scratch and a top level build script to run them all, but what's the best structure for that? Do table create scripts JUST define columns and separate scripts create all constraints?
    I like to think someone has crossed this particular bridge before and has some answers to my questions. Any takers?
    Thanks.
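    One way to detect the "honor system" drift described above is to compare a normalized hash of each object's source in the CM tree against the source extracted from the live database (for example via DBMS_METADATA or USER_SOURCE). The sketch below is hypothetical: in-memory dicts stand in for both sources, and the object name is an example.

```python
import hashlib

def normalize(sql):
    """Cheap normalization: case-fold and collapse whitespace before hashing,
    so cosmetic reformatting does not count as drift."""
    return hashlib.sha256(" ".join(sql.lower().split()).encode()).hexdigest()

def drift(cm_tree, live_db):
    """Return object names whose live definition differs from the CM version."""
    return sorted(name for name, sql in live_db.items()
                  if normalize(sql) != normalize(cm_tree.get(name, "")))

cm_tree = {"PKG_BILLING": "create or replace package pkg_billing as ... end;"}
live_db = {"PKG_BILLING": "create or replace package pkg_billing as /* patched in dev */ ... end;"}
print(drift(cm_tree, live_db))  # -> ['PKG_BILLING']
```

Run as a scheduled check, a report like this makes direct-in-database changes visible instead of relying on developers to remember to update the create scripts.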

    Gaff wrote:
    No. With non-DB code, it isn't voluntary. The developer needs to check a file in or out before modifying it, and the CM system enforces either that only one person at a time can edit the file or, if multiple people can, the merging of the results. Once the PL/SQL package is in a database, a developer can change it and NOT change the underlying file.
    I don't agree that there is a difference. All source control systems I have seen allow a copy to be taken or the source to be viewed without checking it out. I have a checked-out source file on my PC. I copy it. I check in my changes; I make changes to the copy. I have made a change without changing the checked-in source, just like I could in the database. Should this change make it into the final build? No, and I can't complain when it doesn't, just like the unauthorized change directly in the database is overwritten and lost. I see no difference.
    Making changes to database objects directly in the database is a terrible practice, you can enforce the same lack of tolerance for it as you would for any other developer circumventing the source control and deployment process in any other language.
    - You tell them not to do it.
    - You overwrite their changes if they do do it.
    - You fire them if they continue to do it.
    I think the solution is that by default DB developers have read-only access to the objects. When they identify the pieces that need to change to fix a given bug, some third party (software lead? DBA?) grants them CRUD on the identified objects and verifies that their modified code makes it into the CM tree when the bug is fixed. The developer can make their own changes in the dev database. Changes to the test database prior to production deployment can only be made by the DBA, using scripts from the source control system.
    This works flawlessly.
