Automating custom software deployment and configuration for multiple nodes

Hello everyone.
Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10.
Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured on the host. As with any server-side software package, I need to ensure that a list of prerequisites is met before extracting and running the software. For example:
* Checking that certain users exist and are associated with one or more user groups; if not, creating them and their group associations.
* Checking that the target application folders exist; if not, creating them with the pre-configured path values defined when the package was assembled.
* Checking that those folders have the appropriate access control level and ownership for a certain user; if not, setting them.
* Checking that a set of environment variables are defined in /etc/profile, point to predefined path locations, are added to the general $PATH, and are exported into the user's environment. Other files involved include /etc/services and /etc/system. (A rough sketch of the first few checks follows this list.)
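To make the first few checks concrete, here is a rough Python sketch of what I would otherwise script by hand (the user names and paths are made-up placeholders; it assumes the stock pwd/grp/stat modules and the Solaris useradd/groupadd commands, and it would have to run as root):

import grp
import os
import pwd
import stat
import subprocess

def ensure_group(group):
    # Create the group only if it does not already exist.
    try:
        grp.getgrnam(group)
    except KeyError:
        subprocess.call(["groupadd", group])

def ensure_user(user, primary_group, extra_groups=()):
    # Make sure all groups exist, then create the user if it is missing.
    for g in (primary_group,) + tuple(extra_groups):
        ensure_group(g)
    try:
        pwd.getpwnam(user)
    except KeyError:
        cmd = ["useradd", "-g", primary_group]
        if extra_groups:
            cmd += ["-G", ",".join(extra_groups)]
        subprocess.call(cmd + [user])

def ensure_dir(path, owner, group,
               mode=stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP):  # i.e. 750
    # Create the folder if needed, then enforce ownership and permissions.
    if not os.path.isdir(path):
        os.makedirs(path)
    os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
    os.chmod(path, mode)

ensure_user("appuser", "appgrp", ("staff",))
ensure_dir("/opt/myapp/logs", "appuser", "appgrp")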
Obviously, doing this for many boxes (the goal here) by hand would be slow and error-prone.
I believe a better alternative is to automate this process. So far I have considered the following options, and discarded each for one reason or another:
1. Traditional shell scripts. I've only troubleshot these before and don't have much experience writing them, so they would be my last resort.
2. Python scripts using the pexpect library to analyze system command output. This was my initial choice, since the target Solaris environments have it installed. However, I want to make sure I'm not reinventing the wheel :P.
3. Ant or Gradle scripts. They may be an option, since the boxes also have Java 1.5, and the fileset abstractions can be very useful. However, they may fall short when it comes to checking and setting users and folder permissions.
It seems obvious to me that I'm not the first person in this situation, but I can't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this.
I thank you for your time and help.

Configuration Management is a big topic today with a few noteworthy alternatives for Solaris:
- CFEngine (http://www.cfengine.org)
- Chef (http://wiki.opscode.com)
- Puppet (http://www.puppetlabs.com)
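Whichever tool you choose, the property they all give you is idempotence: every run converges the machine to the same desired state instead of blindly re-creating users or appending duplicate lines. As a minimal hand-rolled illustration of the idea (plain Python, not any particular tool's API; the variable name and value are placeholders), the /etc/profile step from the checklist above could be guarded like this:

def ensure_profile_line(line, profile="/etc/profile"):
    # Append the line only if an identical line is not already present,
    # so running this repeatedly leaves the file unchanged.
    with open(profile) as f:
        if any(existing.rstrip("\n") == line for existing in f):
            return
    with open(profile, "a") as f:
        f.write(line + "\n")

ensure_profile_line("APP_HOME=/opt/myapp; export APP_HOME")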

Similar Messages

  • Help!! Auto-Deploy and Check For Updates in an OPMN-managed instance

    I'm trying to use the auto-deploy and check-for-updates functions in OAS 10.1.0.3, but I cannot make them work.
    I don't know if these functions are only available for a standalone OC4J instance.
    Can anyone help me?
    Thanks! And sorry for my poor English.

    Auto-deploy is not really designed for an OPMN-managed instance. In earlier releases this would not even work correctly, since deployments were stored in the DCM subsystem, and any subsequent configuration changes made via DCM would overwrite the config and you'd lose the "auto-deployed" application.
    In 10.1.3.x, where DCM is no longer in the picture and the auto-deploy polling is a function of OC4J itself, it may work. Be aware that this is not something that is tested in OPMN-managed environments.
    What configuration changes have you made?
    You need to ensure you have modified server.xml so that the check-for-updates attribute is set to "all", and that you've added the application-auto-deploy-directory attribute pointing at a directory to watch.
    -steve-

  • How to execute an MTS (Master Test Script) in SAP eCATT Test Configuration for multiple variants.

    I have an MTS (Master Test Script) which references 4 test scripts. As of now, I am able to run the MTS directly by opening it as a test script and running it with default test data.
    Now I want to run this MTS in an SAP eCATT test configuration for multiple variants/test data, but I cannot see any parameters when I enter the MTS in the test configuration.
    The thread below is similar to my requirement, but I am not able to understand the solution properly:
    eCATT - how to run multiple test scripts with input variables in different test scripts?
    Any help in this case would be highly appreciated.
    Thanks & Regards,
    Vaibhav Gupta

  • Custom process code and FM for custom IDoc...

    Hello Experts,
    I created a custom IDoc based on the ARTMAS05 IDoc, because we only need 3 segments and the standard IDoc (ARTMAS05) contains many segments that we don't need.
    Now, do I need to create a custom process code and FM for this? How do I go about it?
    Thank you guys and take care!

    Hello,
    Here are the steps to follow when creating a custom process code with a custom function module. (Since there are very few segments to handle, I recommend going with a custom function module.)
    1. Go to SE37 and create a function module. Be sure to create it with the IMPORT/EXPORT/TABLES parameters exactly as they exist in a standard SAP inbound FM (refer to IDOC_INPUT_ORDERS, for example).
    2. Once the FM is ready (the code need not be complete to go ahead with the process code creation) and active, the next step is to create an entry in transaction BD51, where we register the function module.
    3. Next, go to transaction WE57 and make an entry for the function module with the IDoc type and message type.
    4. Finally, go to WE42, create a new process code, and assign the function module and the message type.
    NOTE 1: The process code is, as we know, client-dependent. So once you create a process code, it needs to be migrated to the testing environment before testing can start.
    NOTE 2: If steps 2 and 3 are missing, you will not be able to assign the function module in WE42 while creating the process code.
    Hope it was helpful.
    Thanks and Regards,
    Venkata Phani Prasad K

  • Re: BulletinBoard and configuration for Touchpad

    Win7 x64
    L505-10M
    There were all sorts of problems after a Windows security update a few months ago on this system.
    I had to restore a backup of C:\, but various functions did not work.
    I have most of them working again now. I installed Flashcard again, but Bulletin Board and the touchpad configuration are still missing.
    The mouse drivers are Microsoft HID mouse and PS/2, version 6.1.7600.16385.
    I downloaded Touch Path Driver-20091202155841.zip, but it cannot be installed.
    C:\Program Files\TOSHIBA\Bulletin Board is present but probably damaged, and it does not appear in the Windows Start menu under Toshiba.
    How can I repair this?
    arnold12

    I understand that a recovery installation is your last step and possibility.
    What kind of backup do you have? How did you create this backup?
    Obviously you have problems with typical Toshiba stuff, so I don't understand what kind of backup you used. Was your preinstalled OS screwed up, and did you try to restore a backup?
    Anyway, for Bulletin Board, check the Toshiba download page and install ReelTime for your notebook model, if it is offered.
    What is wrong with the touchpad driver? Why can't you install it? Do you see an error message?
    Have you tried to install both of the offered touchpad drivers? Which one cannot be installed?

  • Setup and configuration for system monitoring and IT Reporting for a Java system

    Hi all,
    How do I set up and configure system monitoring and IT reporting for a Java system?
    How do I connect the Java system to the Solman system?
    Regards,
    Neni

    Hi,
    What is your OS? You can use the SAPCCMSR.exe agent for monitoring and IT reporting of the Java system in Solman.
    In Solman, go to RZ21 and create a CSMREG user, then create the configuration file for the agent and copy it to usr/sap/ccms/..
    At the command line, cd to ../user/sap/xxx/sys/exe/.../ and run SAPCCMSR.00 -R pf=< ...../sys/profile/instance profile>.
    You can then see the agent in RZ10 and use this connection in RZ20 for monitoring and IT reporting of the Java system in Solman.
    I hope this helps.

  • Configuration for multiple nodes

    Hi,
    I have the following problem: my app is deployed on Oracle app server with two nodes, and I'm not able to configure TopLink to work properly on this server with two nodes. Sometimes when I write something to the DB (via my app), the changes occur in the DB, but when I try to read them back, my app can't see them. So I suppose that TopLink sometimes writes the data via one node of the app server and reads via the other. The same problem occurs when using the cache, because it seems the cache is not synchronized between the nodes.
    How can I configure toplink to work properly on app servers with multiple nodes?
    thnx Martin

    Martin,
    If you have entity types that are modified and read on multiple nodes of your application then you need to understand and properly configure your locking and caching in TopLink. This article is a good introduction to the topic and the documentation can provide some more specifics.
    I would recommend the following order of operations:
    1. Ensure you have optimistic locking configured and handled for entity types that are both shared and modified by your application.
    2. Based on the volatility of your entity types select the correct caching type. Look at types, size, isolation, and expiration.
    3. For operations where you commonly get stale data on volatile entity types consider using refreshing options on your queries. If using version optimistic locking also enable only-refresh-if-newer.
    4. Consider cache coordination for those types that are read-mostly to minimize the refreshing required.
    Doug

  • Drag and drop of multiple nodes between 2 trees

    Hi,
    I am trying to implement drag and drop of multiple nodes between two different trees. A simple drag and drop written along the lines of the demo code RSDEMO_DRAG_DROP_TREE_MULTI works perfectly fine. But my requirement is: when a child (leaf) node is dragged and its parent is not present in the target tree, the parent too has to be dragged and dropped from left to right. When I try to manually add nodes to the target tree, it dumps, because the node key table and the drag-and-drop object contain fewer nodes than what I am trying to add; it always dumps in the drag_drop_complete method.
    I have also tried putting this code in the PBO of my screen, calling a subroutine to refresh my tree with all the required nodes, but I realise that the PBO does not get called after a drag and drop. Is there a way to achieve this? Any help would be greatly appreciated. Thank you.
    Regards,
    Nithya

    There's a Multi-Select TreeView sample on WindowsClient.com that you can download. Then you can drag multiple nodes as follows:
    Code Snippet
    // Assumes the MultiSelectTreeView control from the sample, which
    // exposes a SelectedNodes (ArrayList) property.
    private void Form2_Load(object sender, EventArgs e)
    {
        this.listBox1.AllowDrop = true;
        this.listBox1.DragOver += new DragEventHandler(listBox1_DragOver);
        this.listBox1.DragDrop += new DragEventHandler(listBox1_DragDrop);
        this.multiSelectTreeView1.ItemDrag += new
            ItemDragEventHandler(multiSelectTreeView1_ItemDrag);
    }

    void multiSelectTreeView1_ItemDrag(object sender, ItemDragEventArgs e)
    {
        // Start a move drag that carries all currently selected nodes.
        this.multiSelectTreeView1.DoDragDrop(this.multiSelectTreeView1.SelectedNodes,
            DragDropEffects.Move);
    }

    void listBox1_DragDrop(object sender, DragEventArgs e)
    {
        // The drag data is the ArrayList of selected TreeNodes.
        ArrayList selectedNodes = e.Data.GetData(
            e.Data.GetFormats()[0]) as ArrayList;
        foreach (TreeNode node in selectedNodes)
            this.listBox1.Items.Add(node.Text);
    }

    void listBox1_DragOver(object sender, DragEventArgs e)
    {
        e.Effect = DragDropEffects.Move;
    }

  • How many SCAN listeners do we have to configure for a 5-node RAC?

    Dear Professionals,
    How many SCAN listeners do we need to configure for a 5-node RAC? Oracle will not allow us to create more than 3 SCANs. What if I have one SCAN listener on each of the first 3 nodes? What about the remaining two nodes? How will application users connect to nodes 4 and 5? Can you please explain? Forgive me if I am totally wrong.
    Thanks
    Sagar

    Each of the 5 instances will register itself with the scan listener (using the instance parameter remote_listener).  Thus, the scan listener is "aware" of the database instances on the other two nodes where it is not running.  It can still redirect incoming connections to the local listeners on these nodes (registered as local_listener).
    Hemant K Chitale

  • Best practice for .war?  Configure and deploy or deploy and configure?

    In Apache Tomcat for example, I can deploy an app, stop the server, reconfigure the app in situ, then start the server again...
    Is this recommended for deploying Java web apps to Oracle App Server 10g?
    We currently have a consulting firm recommending that we configure the web app before deploying. Sounds reasonable, except that they want this done via JDeveloper so that the sysadmin can right-click on the "deploy to OAS" button (i.e., have the tool generate the .war file after configuration and deploy it automagically).

    Thanks for your feedback.
    Are you aware of any way to use the *.deploy configuration file created by JDeveloper in an Ant script to create the .war or .ear file?
    If not, I can picture the sysadmin and developers groaning when they're told that their JDeveloper web-app configuration cannot be used for production, and that they must somehow duplicate that functionality in an Ant script!
    I do have the Ant scripts below from Debu to do the deployment etc., but they only help after the .ear is built.
    EAR file deployment:
    <target name="deploy" depends="core">
    <java jar="${j2ee.home}/admin.jar" fork="yes">
    <arg value="${oc4j.deploy.ormi}"/>
    <arg value="${oc4j.deploy.username}"/>
    <arg value="${oc4j.deploy.password}"/>
    <arg value="-deploy"/>
    <arg value="-file"/>
    <arg value="${this.build}/${this.ear}"/>
    <arg value="-deploymentName"/>
    <arg value="${this.application.name}"/>
    </java>
    </target>
    Web application binding:
    <target name="bind-web-app" depends="deploy">
    <java jar="${j2ee.home}/admin.jar" fork="yes">
    <arg value="${oc4j.deploy.ormi}"/>
    <arg value="${oc4j.deploy.username}"/>
    <arg value="${oc4j.deploy.password}"/>
    <arg value="-bindWebApp"/>
    <arg value="${this.application.name}"/>
    <arg value="${this.war}"/>
    <arg value="http-web-site"/>
    <arg value="/${this.uri}"/>
    </java>
    </target>
    Undeployment:
    <target name="undeploy" depends="init">
    <java jar="${j2ee.home}/admin.jar" fork="yes">
    <arg value="${oc4j.deploy.ormi}"/>
    <arg value="${oc4j.deploy.username}"/>
    <arg value="${oc4j.deploy.password}"/>
    <arg value="-undeploy"/>
    <arg value="${this.application.name}"/>
    </java>
    </target>

  • How to deploy and configure custom JAAS login module

    Dear Experts,
    I have created a custom JAAS login module. My .jar contains:
    1. MyLoginModule.class
    2. Handler.class
    3. MyPrincipal.class
    I want to know how to deploy the custom JAAS module to OC4J and make it available to all other applications for authentication & authorization. Please advise.
    Thanks,
    Rajesh A

    This article does not mention that you can put the <jazn-loginconfig> tag into the orion-application.xml as well.
    Much easier to deploy and test.
    --olaf

  • Disk and tempdb configuration for multiple Instances in one SQL cluster

    Hi everyone,
    I am planning to build a SQL cluster on blade servers. Two blades are allocated for SQL, and I plan to cluster them. It will be Windows 2012 R2 and SQL 2012/2014. Initially the plan was to put most databases on one SQL instance, but due to other requirements I will be installing more SQL instances on the same cluster.
    For now everything will be on RAID 10, but I do not yet have details on how the storage will be configured.
    256 GB RAM on each blade server.
    1) What are the disk configuration recommendations for data and log files as each instance gets added?
    2) What are the disk configuration recommendations for tempdb files as each instance gets added?
    3) Is putting each instance's tempdb files on the same drive good practice?
    Any other best practices I should follow for this setup?
    Thank you ....
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    1) What are the disk configuration recommendations for data and log files as each instance gets added?
    Keep the two SQL instances on different drives. Read the link below on RAID levels and configure according to your usage:
    http://technet.microsoft.com/en-us/library/ms190764%28v=sql.105%29.aspx
    Storage best practices:
    http://technet.microsoft.com/en-us/library/cc966534.aspx
    2) What are the disk configuration recommendations for tempdb files as each instance gets added?
    It's good to have a separate drive for tempdb for each instance too, with the correct allocation and configuration. RAID 5 for data if you must, but keep a spare drive around for that. Best is RAID 10 everywhere; if that's too expensive, then RAID 1 or 10 for the logs and RAID 5 for the data (and 10 for tempdb). RAID 5 is terrible for logs because it has a high write overhead; transaction logs are write-heavy, not read-heavy.
    3) Is putting each instance's tempdb files on the same drive good practice?
    Tempdb configuration depends entirely on your server and tempdb usage. Make sure all the files in tempdb are equally sized. As a general rule, if the number of logical processors is less than 8, use the same number of data files as logical processors. If the number of logical processors is greater than 8, use 8 data files; then, if contention continues, increase the number of data files in multiples of 4 (up to the number of logical processors) until contention is reduced to acceptable levels, or make changes to the workload/code.
    http://www.confio.com/logicalread/sql-server-tempdb-best-practices-initial-sizing-w01/#.U7SStvFBr2k
    http://msdn.microsoft.com/en-us/library/ms175527(v=sql.105).aspx
    http://www.sqlskills.com/BLOGS/PAUL/post/A-SQL-Server-DBA-myth-a-day-(1230)-tempdb-should-always-have-one-data-file-per-processor-core.aspx
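    As a quick worked example of that file-count rule of thumb (a Python sketch purely to illustrate the arithmetic; the function name is made up):
    def tempdb_data_files(logical_processors, contention_rounds=0):
        # Start at min(8, logical processors); add 4 files per round of
        # observed contention, never exceeding the processor count.
        files = min(8, logical_processors)
        for _ in range(contention_rounds):
            files = min(files + 4, logical_processors)
        return files
    print(tempdb_data_files(6))      # 6 logical processors -> 6 files
    print(tempdb_data_files(32, 2))  # 32 processors -> 8, then 12, then 16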
    Raju Rasagounder Sr MSSQL DBA

  • How to manage one wsp and dll for multiple clients in farm environment

    1. There is a product developed using C#, jQuery, CSS, and SharePoint object models, packaged into a .wsp file. Whenever we introduce new functionality, we branch the previous code as a version, say version 1.0, and the new functionality goes into another solution. This is how we manage the code in TFS as versions. Each newer version has new functionality, and we do not give the latest functionality to all clients; each client has its own version of the functionality. Technically, in order to access the functionality, the .wsp solution must be present in the solution repository available in the SharePoint Central Administration site, and the solution is then deployed to the client's site.
    We have been following this process with SharePoint standalone installations: we purchase a dedicated server per client, install SQL and SharePoint Foundation 2010 as a standalone installation, add the client's version of the code to the solution repository, and then host it on the site created for that client. The process is the same for all clients, with an individual server purchased for each one.
    Now we want to host our product in a SharePoint Foundation 2010 farm environment, where we are going to try a 3-tier architecture:
    • SQL Server: on this server we will install SQL Server 2008 R2 Standard Edition, which will serve the databases for all the web applications/site collections we create on the web front-end server.
    • Application server: on this server we will install SharePoint as a farm, plus Search Server Express to provide search functionality for our product.
    • Web front-end server: we will add this server to the SharePoint farm created on the application server. Here we will create the web applications and site collections for all the clients.
    In this scenario, how do we manage multiple versions of the same .wsp solution?
    Another major issue concerns the architecture of the product and the new approach to client deployment. We have CSS and jQuery files serving the functionality, and these files are mapped to the 14 hive folder. If we change one of the jQuery or CSS files for the latest version but not for an old version, how do we manage that new functionality for that particular CSS or jQuery file, since there is only one 14 hive folder? What is the best practice to make this happen? And how do we manage DLL files for individual clients?

    It sounds like you have a farm-scoped solution at work. In that case you can only have a single instance of it per farm; you'd have to branch each version so they appear to be entirely separate solutions (thus ruining your clients' upgrade process).
    Bluntly, I don't think a single farm can manage all your user environments.

  • Javascript code to enable an ADD button to create duplicate tables and pages for multiple entries

    I am using Adobe Acrobat XI Pro. I have created a multi-page fillable PDF questionnaire that I want to make interactive for the user. Within each page I have multiple tables, each of which should have an ADD button to duplicate the table if the client wants more than one test. I would like the new table to be inserted directly below the previous table, and users may add up to 5 of these tables.
    I also have a couple of pages in the questionnaire that require duplication for multiple samples, e.g. one page per sample, with up to 10 samples. I would like the JavaScript to provide an ADD button and then add each page behind the previous related sheet (original and duplicates together).
    I am not a JavaScript coder, so I would need some examples and a walkthrough. I did take some developer courses in engineering, but that was almost two decades ago.
    I have searched several forums and cannot find a good resource, so I appreciate the help.
    Sincerely
    Tara

    Some of what you want to do can be done using template features and a bit of JavaScript, but if your users will be using Reader, version 11 will be required. You can dynamically add new pages, but adding new tables to existing pages with reflow of the content that follows is not possible with a form created in Acrobat. If you have access to Adobe's LiveCycle Designer form-creation software, you can create what's known as a dynamic XFA form that can do all of what you want. You can ask in the LiveCycle Designer forum for more information: http://forums.adobe.com/community/livecycle/livecycle_modules_and_development_tools/livecycle_designer_es?view=discussions

  • Best Configuration for Multiple Video Captures

    Hello,
    What would be faster and the most efficient for multiple video captures:
    - 9 video feeds written to 3 separate internal hard drives (3 feeds per hard drive)
    - 9 video feeds written to 3 internal hard drives in a combined RAID configuration
    - 9 video feeds written to 9 separate external hard drives via eSATA
    I have a Mac Pro 3.06GHz 12-core Intel Xeon machine that will be dedicated to a security camera application (SecuritySpy). The Mac OS and SecuritySpy will be on one hard drive and send the captured video from nine network HD cameras to the other hard drives. The computer, eSATA card, and all the hard drives are 6Gb/s capable. I also have access to SoftRAID for the RAID configuration, as well as an 8-bay eSATA hard drive enclosure if I choose. There are pros and cons for each of the configurations, but I am looking for the most reliable setup in terms of efficiency and longevity.
    “Technological change is like an axe in the hands of a pathological criminal.” (Albert Einstein, 1941),
    Dr. Z. 

    Just wanted to post the details of my final set up, maybe this will be useful to others looking to do a similar setup:
    1. Connected and set up the external hard drive (I bought a Western Digital My Book Pro Edition II 1TB) via Firewire 800 on iMac #1.
    2. Set up iMac #1 to backup via Time Machine. Even though the other computers will see the drive, apparently they will not recognize it in Time Machine unless you set it up on the computer the drive is connected to first (I tried).
    3. On iMac #2, connect to iMac #1 via file sharing, this is even easier now that you can access any computer on your network under "SHARED" in the sidebar of any finder window.
    4. Double-click the external drive. If you don't see the drive, click "Connect as" in the top right of the window and login. It's not necessary to do this, but if you want to confirm you've connected to the network drive, you can go to Finder -> Preferences -> General and under "Show these items on the Desktop" check the box next to "Connected Servers". This will make an icon for the external drive appear on your desktop.
    5. Turn on Time Machine on iMac #2. Repeat for any other computers you want to back up.
    That's it. So far this has worked great for me. Obviously the backups run a little slower on the computers going through file sharing, but it's a small trade-off compared to either manually moving the drive around or getting three separate drives.

    This is a super-simple question, so simple that I can't find the answer anywhere on this forum or in the documentation.  How do I use the search method to find instances of text without replacing them with anything inadvertently?