Best Practices/Advice on deploying .exe setups

We have a couple of applications (Office, McAfee) that need to be deployed through .exe. We also have some MSI applications. The MSIs work great as Windows can control and know when they are done. What is anyone's advice on making .exe setups run and stopping other ZEN bundles from starting before that setup is complete?

For such cases, you may need to get a little tricky and use VBS or
another scripting tool (AutoIt) that would be launched by ZCM to call
the wrapper app and then also monitor the chained app.
A "Monitor Modules" feature such as existed in ZDM7 would really help.
If anyone is actually struggling with "wrappers", just let me know and I
can whip up a tool to help. I used to have one I even used in ZDM7,
since even Monitor Modules did not always help.
On 12/16/2013 10:46 AM, kjhurni wrote:
>
> craig_wilson;2297895 Wrote:
>> Change the Option from "No Wait" to "When action is complete" for
>> "Wait before Proceeding to the Next Action" on your launch
>> executable action.
>>
>> On 12/15/2013 5:16 PM, dabarnett wrote:
>>>
>>> We have a couple of applications (Office, McAfee) that need to be
>>> deployed through .exe. We also have some MSI applications. The MSIs
>>> work great as Windows can control and know when they are done. What
>>> is anyone's advice on making .exe setups run and stopping other ZEN
>>> bundles from starting before that setup is complete?
>>>
>
> To add to what Craig says, the "wait" action only works if the setup.exe
> doesn't launch a bunch of other things and then exit the original
> wrapper (ie: setupvse.exe for McAfee VSE Enterprise).
>
> setupvse.exe is an installation wrapper that launches msiexec.exe, and
> it (the setupvse.exe) promptly unloads from memory about 2-3 seconds
> after it starts (you can see this via procmon or whatever it's called
> now), so ZCM thinks it's done and continues on to the next app, whilst
> msiexec.exe continues on its merry way setting up VSE Enterprise.
>
> And McAfee doesn't support running the .MSI for their stuff directly
> (it won't install properly, nor will any of your custom settings, if
> you use the MID either).
>
>
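If anyone wants a starting point for such a wrapper, here is a minimal sketch (Python purely for illustration — the function, its parameters, and the `tasklist` idea in the comment are assumptions, not a ZCM feature). ZCM would launch this script with "When action is complete"; the script launches the installer wrapper, then refuses to exit until the chained process has been absent for several consecutive polls:

```python
import subprocess
import time

def run_and_wait_for_chained(wrapper_cmd, chained_running,
                             poll_secs=5, settle_polls=3):
    """Launch an installer wrapper and block until the chained process
    it spawns (e.g. msiexec.exe) has been gone for settle_polls polls.

    chained_running: zero-argument callable returning True while the
    chained process is still alive. On Windows it could, for example,
    shell out to `tasklist` and look for "msiexec.exe" in the output.
    """
    proc = subprocess.Popen(wrapper_cmd)
    proc.wait()  # the wrapper itself may exit within seconds
    absent = 0
    while absent < settle_polls:
        time.sleep(poll_secs)
        absent = absent + 1 if not chained_running() else 0
    return proc.returncode
```

Requiring several consecutive "absent" polls guards against the brief gap between the wrapper exiting and msiexec.exe starting, which is exactly the window that fools ZCM's own wait action.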

Similar Messages

  • I'm looking for "best practices" advice for a project setup (Premiere CS5.5, Windows 7 Pro)

    I have been tasked with updating an old Corporate video with new music, sounds, graphics and narration.
    The ONLY video-camera material I have is from a corporate DVD (NTSC) - the original source material is simply not available.
    I can rip the DVD to MPEG-2 and import it into Premiere without issue, but would like to be able to use higher resolutions for the updated graphics.
    I plan to export to YouTube (HD) but may well export to DVD or Blu-ray as well.
    My question really narrows down to how to manage Sequences. I figure that my Sequence settings should be the highest resolution I expect to work with (YouTube HD or Blu-ray), allowing me to easily export to lower quality, but I don't really know if it's as simple as that. Is there an industry standard methodology? Am I close?

    Premiere can actually use .vob files from DVDs, just copy them from DVD to hard drive, then Import into Premiere.
    Is the existing DVD video in a 4:3 or 16:9 format? If 4:3, consider how you will work that into a 16:9 program. Rather than having black pillar bars at the sides, many editors will put in some sort of background to fill that space. A popular solution is to duplicate the 4:3 video on a lower track, stretch it to fill the screen and add a Gaussian Blur effect. Since the colors/content of the side bars will match the main video, it somewhat disguises the fact that the video is 4:3.
    If you put SD footage into an HD program, it is going to look "soft" compared to HD footage. As a compromise, you might consider editing the new program as 720p rather than 1080p, so that the SD resolution is not being stretched as far. All computer/YouTube video is non-interlaced, meaning editing sequences and exports should be done as Progressive.
    Potential workflow issue - if you take the existing SD footage from the DVD, upscale it into an HD sequence, then export back to DVD (SD resolution), the video is turned to mush. It has been upscaled, then downscaled again, which really kills the quality. You might need to copy the finished HD sequence to an SD sequence, fix any size/scaling issues for graphics and titles, and export the DVD from the SD sequence to avoid the upscale/downscale of the SD footage. Note that DVD video is highly compressed, so you are already working with weak source material; recompressing again is going to be a quality hit in any case, but definitely avoid upscaling/downscaling again.
    I don't have the Adobe apps in front of me at the moment, but I believe the 720p Blu-ray export options are limited, perhaps 720p at 59.94 only, not certain. That may affect your decision of 1080p versus 720p.
    Hope these tips help
    Thanks
    Jeff Pulera
    Safe Harbor Computers

  • Best Practices for CS6 - Multi-instance (setup, deployment and LBQ)

    Hi everyone,
    We recently upgraded from CS5.5 to CS6 and migrated to a multi-instance server from a single-instance. Our current applications are .NET-based (C#, MVC) and are using SOAP to connect to the InDesign server. All in all it is working quite well.
    Now that we have CS6 (multi-instance) we are looking at migrating our applications to use the LBQ features to help balance the workload on the INDS server(s). Where can I find some best practices for code deployment/configuration, etc for a .NET-based platform to talk to InDesign?
    We will be using the LBQ to help with load management for sure.
    Thanks for any thoughts and direction you can point me to.
    ~Allen

    Please see if below metalink note guides you:-
    Symmetrical Network Acceleration with Oracle E-Business Suite Release 12 [ID 967992.1]
    Thanks,
    JD

  • What is the best practice in securing deployed source files

    hi guys,
    Just yesterday, I developed a simple image cropper using AJAX
    and Flash. After compiling the package, I noticed the
    package/installer delivers the exact same source files as
    developed to the installed folder.
    This didn't concern me much at first, but come to think of
    it, this question keeps coming back to me:
    "What is the best practice in securing deployed source
    files?"
    How do we secure an application's installed source files from
    being tampered with? Especially when it comes to tampering with the
    source files after they have been installed. E.g. modifying the
    spraydata.js file can be done easily with an editor.

    Hi,
    You could compute a SHA or MD5 hash of your source files on
    first run and save these hashes to EncryptedLocalStore.
    On startup, recompute and verify. (This, of course, fails to
    address when the main app's swf / swc / html itself is
    decompiled)
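    As a concrete sketch of that hash-check idea (Python here purely for illustration — an AIR app would use its own hashing APIs and EncryptedLocalStore rather than a plain JSON file; the function names are made up):

```python
import hashlib
import json
from pathlib import Path

def file_digest(path):
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def record_digests(paths, store):
    """First run: remember a digest for every deployed source file."""
    Path(store).write_text(
        json.dumps({str(p): file_digest(p) for p in paths}))

def tampered_files(store):
    """Startup check: return the files whose contents have changed."""
    saved = json.loads(Path(store).read_text())
    return [p for p, h in saved.items() if file_digest(p) != h]
```

    As the reply notes, this only raises the bar: whoever can modify the deployed files can usually also modify or bypass the checker itself.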

  • What is the best practice for AppleScript deployment on several machines?

    Hi,
    I am developing some AppleScripts for my colleagues at work and I don't want to visit each of them to deploy my AppleScript on their Macs.
    So, what is the best practice for AppleScript deployment on several machines?
    Is there an installer created by the Automator available?
    I would like to have something like an App to run which puts all my AppleScript relevant files into the right place onto a destination Mac.
    Thanks in advance.
    Regards,

    There's really no 'right place' to put AppleScripts. Folder action scripts need to go in ~/Library/Scripts/Folder Action Scripts (or /Library/Scripts/Folder Action Scripts), anything you want to appear in the script menu needs to go in ~/Library/Scripts (or /Library/Scripts), script applications should probably go in the Applications folder, but otherwise scripts can be placed anywhere. Conventional places to put them are in ~/Library/Scripts or in a subfolder of ~/Library/Application Support if they are run by an application. The more important issue is to make sure you generalize the scripts: use the path to command to get local paths rather than hard-coding them in, test to make sure any applications or unix executables you call are present on the machine, and use script bundles rather than plain scripts if your scripts have private resources.
    You can write a quick installer script if you want to make sure scripts go where you want them. A skeleton version looks like this:
    -- destination: the current user's ~/Library/Scripts folder
    set scriptsFolder to path to scripts folder from user domain
    -- the script shipped inside this app's Resources/yyy folder
    set scriptsToExport to path to resource "xxx.scpt" in directory "yyy"
    tell application "Finder"
      duplicate scriptsToExport to scriptsFolder with replacing
    end tell
    say "Scripts are installed"
    Save this as a script application, then open the application package, create a folder called "yyy" in the Resources folder and copy your script "xxx.scpt" into it. Other people can run the app to install the script.

  • BEST PRACTICES: How to deploy apps with public and private content & data?

    Can anyone recommend a guide, blog post, etc. on best practices for:
    - designing & deploying apps that have publicly-accessible (http + https) content, and
    - content and data for which users must be authenticated and authorized?
    NOTE: In our environment users are authenticated via OID. We're using Apex 4.

    Hi,
    Have a look at this Sample App for getting Auth Token from Instagram in windows phone app. 
    Also read the api documentation for more details from
    here.
    Pradeep AJ

  • Best Practice for SRST deployment at a remote site

    What is the best practice for a SRST deployment at a remote site? Should a separate router such as a 3800 series be deployed for telephony in addition to another router to be deployed for Data? Is there a need for 2 different devices?

    Hi Brian,
    This is typically done all on one ISR router at the remote site :) There are two flavors of SRST. Here is the feature comparison:
    SRST Fallback
    This feature enables routers to provide call-handling support for Cisco Unified IP phones if they lose connection to remote primary, secondary, or tertiary Cisco Unified Communications Manager installations or if the WAN connection is down. When Cisco Unified SRST functionality is provided by Cisco Unified CME, provisioning of phones is automatic and most Cisco Unified CME features are available to the phones during periods of fallback, including hunt groups, call park and access to Cisco Unity voice messaging services using SCCP protocol. The benefit is that Cisco Unified Communications Manager users will gain access to more features during fallback without any additional licensing costs.
    Comparison of Cisco Unified SRST and
    Cisco Unified CME in SRST Fallback Mode
    Cisco Unified CME in SRST Fallback Mode
    • First supported with Cisco Unified CME 4.0: Cisco IOS Software 12.4(9)T
    • IP phones re-home to Cisco Unified CME if Cisco Unified Communications Manager fails. CME in SRST allows IP phones to access some advanced Cisco Unified CME telephony features not supported in traditional SRST
    • Support for up to 240 phones
    • No support for Cisco VG248 48-Port Analog Phone Gateway registration during fallback
    • Lack of support for alias command
    • Support for Cisco Unity® unified messaging at remote sites (Distributed Exchange or Domino)
    • Support for features such as Pickup Groups, Hunt Groups, Basic Automatic Call Distributor (BACD), Call Park, softkey templates, and paging
    • Support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0 on same computer
    • No support for secure voice in SRST mode
    • More complex configuration required
    • Support for digital signal processor (DSP)-based hardware conferencing
    • E-911 support with per-phone emergency response location (ERL) assignment for IP phones (Cisco Unified CME 4.1 only)
    Cisco Unified SRST
    • Supported since Cisco Unified SRST 2.0 with Cisco IOS Software 12.2(8)T5
    • IP phones re-home to SRST router if Cisco Unified Communications Manager fails. SRST allows IP phones to have basic telephony features
    • Support for up to 720 phones
    • Support for Cisco VG248 registration during fallback
    • Support for alias command
    • Lack of support for features such as Pickup Groups, Hunt Groups, Call Park, and BACD
    • No support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0
    • Support for secure voice during SRST fallback
    • Simple, one-time configuration for SRST fallback service
    • No per-phone emergency response location (ERL) assignment for SCCP Phones (E911 is a new feature supported in SRST 4.1)
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/prod_qas0900aecd8028d113.html
    These SRST hardware based restrictions are very similar to the number of supported phones with CME. Here is the actual breakdown;
    Cisco 880 SRST Series Integrated Services Router
    Up to 4 phones
    Cisco 1861 Integrated Services Router
    Up to 8 phones
    Cisco 2801 Integrated Services Router
    Up to 25 phones
    Cisco 2811 Integrated Services Router
    Up to 35 phones
    Cisco 2821 Integrated Services Router
    Up to 50 phones
    Cisco 2851 Integrated Services Router
    Up to 100 phones
    Cisco 3825 Integrated Services Router
    Up to 350 phones
    Cisco Catalyst® 6500 Series Communications Media Module (CMM)
    Up to 480 phones
    Cisco 3845 Integrated Services Router
    Up to 730 phones
    *The number of phones supported by SRST has been changed to multiples of 5 starting with Cisco IOS Software Release 12.4(15)T3.
    From this excellent doc;
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/data_sheet_c78-485221.html
    Hope this helps!
    Rob

  • Best Practice Advice - Using ARD for Inventorying System Resources Info

    Hello All,
    I hope this is the place I can post a question like this. If not please direct me if there is another location for a topic of this nature.
    We are in the process of utilizing ARD reporting for all the Macs in our district (3500 +/- a few here and there). I am looking for advice and would like some best practices ideas for a project like this. ANY and ALL advice is welcome. Scheduling reports, utilizing a task server as opposed to the Admin workstation, etc. I figured I could always learn from those with experience rather than trying to reinvent the wheel. Thanks for your time.

    Hey, I am also interested in any tips. We are gearing up to use ARD for all of our Macs, current and future.
    I am having a hard time with entering the user/pass for each machine; is there an easier way to do so? We don't have nearly as many Macs running as you do, but it's still a pain to do each one over and over. Any hints? Or am I doing it wrong?
    thanks
    -wilt

  • SAP Adapter Best Practice Question for Deployment to Clustered Environment

    I have a best practices question on the iway Adapters around deployment into a clustered environment.
    According to the documentation, you are supposed to run the installer on both nodes in the cluster but configure on just the first node. See below:
    Install Oracle Application Adapters 11g Release 1 (11.1.1.3.0) on both machines.
    Configure a J2CA configuration as a database repository on the first machine.
    Perform the required changes to the ra.xml and weblogic-ra.xml files before deployment.
    This makes sense to me because once you deploy the adapter rar in the next step, the appropriate rar will get staged and deployed on both nodes in the cluster.
    What is the best practice for the 3rdParty adapter directory on the second node? The installer lays it down with the adapter rar and all. Since we only configure the adapter on node 1, the directory on node 2 will remain with the default installation files/values not the configured ones. Is it best practice to copy node 1's 3rdParty directory to node 2 once configured? If we leave node 2 with the default files/values, I suspect this will lead to confusion to someone later on who is troubleshooting because it will appear it was never configured correctly.
    What do folks typically do in this situation? Obviously everything works to leave it as is, but it seems strange to have the two nodes differ.

    What is the version of operating system. If you are any OS version lower than Windows 2012 then you need to add one more voter for quorum.
    Balmukund Lakhani

  • SubFlow best practice advice please

    I am new to IPCCX scripting and would like some advice on whether multiple SubFlows are a good idea.
    We have 16 small Call Centers that all have very basic scripts. I plan on adding a Holiday/Emergency Closure SubFlow to all of them and I would like to add a few additional features as well.
    I plan on adding:
    ·         Position in Queue
    ·         Expected Wait Time
    ·         If more than X number of callers in queue, inform the caller that they cannot be helped at this time and to call back later.
    ·         If Expected Wait Time exceeds closing time, inform the caller that they cannot be helped at this time and to call back later.
    (I know the last two sound pretty harsh, but it’s government, and there is no budget to hire more operators. I think it is better to let the callers know early that they need to call back, than to have them wait for two hours just to be disconnected. And no, this is NOT the 911 call center!!!   LOL)
    My questions are:
    Would it be ok to add each feature as a SubFlow? Or could there possibly be performance or other issues by having so many SubFlows in one script.
    My other option is to add each item internal to each script, but that would be a lot to tackle 16 times…
    Lastly, is there a best practice on how short a script should be for performance? I know you can’t have one that is longer than 1000 steps, but should I try to keep the step count below a certain number?
    Any advice or insights would be greatly appreciated…
    Thanks,
    Doug.

    Doug,
    Most of the items on your list are included natively in UCCX. Use the Get Reporting Statistic step to obtain the expected wait time, position in queue, and total queue time information. The current time-of-day can be had using a Time variable. You'll need to do some work to convert the values into something you can play to the calling party as a prompt; a subflow is great for that, but you shouldn't need to reinvent the wheel.
    Take the time to draw out the call flow for each of the 16 contact centers on paper or in MS Visio. If your 16 call flows are very similar in the way they operate, consider a master script that just changes the prompts/menus based on the number dialed.  Leverage XML or a database (if you have premium licensing) to pull in the relevant information you need for each DNIS.  You may find you can streamline the entire system, or at least a good portion of it, without it becoming 16 unwieldy applications.
    Steven
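    As a toy illustration of that conversion work (plain Python, not UCCX script steps — the function, thresholds and return strings are all made up for the sketch), the turn-away rules from the question might reduce to:

```python
def queue_decision(position, wait_secs, max_callers, secs_to_close):
    """Mirror the rules from the question: turn the caller away if the
    queue is too deep or the expected wait runs past closing time;
    otherwise announce position and a minute-rounded wait."""
    if position > max_callers or wait_secs > secs_to_close:
        return "call-back-later"
    minutes = max(1, round(wait_secs / 60))
    return "position %d, about %d minute wait" % (position, minutes)
```

    In a real script each branch would map to a prompt or menu step; keeping the decision in one subflow means all 16 call centers share the same logic.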

  • Swtich with 2 wireless routers (configuration for best practice/advice?)

    Hi folks,
    I have a gigabit switch and 2 wireless-G routers. I'll leave the model numbers out as it's fairly irrelevant - all Linksys.
    Router 1 is used as a router only (due to location in basement)
    Router 2 is used for wireless only
    My current network setup:
    DSL modem (accessed on 192.168.2.1 - cannot be changed) > Router 1 (192.168.1.1)
    Router 1 > Switch (I believe it can't be changed, 192.168.2.12 - no web GUI)
    Switch > everything else including Router 2
    Everything works except Router 2 - I can't connect to it wired or wirelessly until it is connected directly to a PC.
    Is my setup wrong and/or is there a best practice?
    Many thanks!!!

    What is the model number of the switch?
    Normally a switch that cannot be changed does not have an IP address.  So if your switch has an address (you said it was 192.168.2.12)  I would assume that it can be changed and that it must support either a gui or have some way to set or reset the switch.
    Since Router1 is using the 192.168.1.x  subnet , then the switch would need to have a 192.168.1.x  address (assuming that it even has an IP address), otherwise Router1 will not be able to access the switch.
    I would suggest that initially, you setup your two routers without the switch, and make sure they are working properly, then add the switch.  Normally you should not need to change any settings in your routers when you add the switch.
    To setup your two routers, see my post at this URL:
    http://forums.linksys.com/linksys/board/message?board.id=Wireless_Routers&message.id=108928
    Message Edited by toomanydonuts on 04-07-2009 02:39 AM

  • Best Practice/Validation for deploying a Package to Azure

    Before deploying a package to Azure, What kind of best practice/Validation can be done to know the Package compatibility with Azure Enviroment?

    What do you mean by the compatibility of the azure package with the azure environment? What do you want to validate? It would be great if you provided a bit of background for your question.
    As far as the deployment best practice is concerned, the usual way is to upload your azure cloud service deployment package and configuration files (*.cspkg and *.cscfg) to the blob container first, and then deploy to the cloud service by referring to the uploaded container. This not only gives you the flexibility to keep different versions of your deployments, which you can use to roll back the entire service, but the deployment will also be comparatively faster than deploying from VS or uploading manually from the file system.
    You can refer to this link - http://azure.microsoft.com/en-in/documentation/articles/cloud-services-how-to-create-deploy/#deploy
    Bhushan

  • Rules Best Practice Advice Required

    I find that I'm fighting with the Business Rules in my BPM project, so I'd thought I throw the scenario out here and see what best practices anyone might propose.
    The Example*:
    Assume I have people, and each of them is assigned a list/array of "aspects" from an enumerated set: TALL; SPORTY; TALKATIVE; TRAVELER; STUDIOUS; GREGARIOUS; CLAUSTROPHOBIC.
    Also assume I have several Marketing campaigns, and as part of one or more processes, I need to occasionally determine whether a person fits the criteria for a particular campaign, which is based on the presence of a one or more aspects. The definitions of the campaigns may change, so the thought is to define them as business rules; if they change, the rule changes, without impacting the processes themselves (assume the set of campaigns doesn't change, just the rules for matching aspects to a particular campaign).
    My initial take is to to define each campaign as a bucketset, containing aspects, the presence of which indicates inclusion in the campaign. If a person has ANY of the aspects, they are considered a member.
    Campaigns (each perhaps defined as a LOV bucketset):
    DEODORANT: SPORTY, TRAVELER, GREGARIOUS
    E_READER:STUDIOUS,TRAVELER
    BREATH_MINT:TALKATIVE, GREGARIOUS
    HELMET:TALL, CLAUSTROPHOBIC
    So we want to create a service to check: Does a person belong to the BREATH_MINT campaign? We extract their aspects and check to see if ANY of them are in the BREATH_MINT campaign. If so, we return true. Basically: return ( intersection( BREATH_MINT.elements(), person.aspects() ) ).size() > 0
    The problem is: what's the best way to implement this using Business Rules? Functions? Decision Functions? Decision Tables? Straight IF/THEN? Some combination of the above? I find I'm fighting the tool, which means that, although this is a fairly simple problem, I don't understand the purpose of the various parts of the tool well.
    Things to consider:
    Purpose: test a person for inclusion in a specific campaign
    Input - the person's aspects, either directly, or extracted from the person
    Output - a boolean
    There can be a separate service for each campaign, or the campaign could be specified by an enumerated value as a parameter.
    Many thanks in advance!
    ~*Completely Fabricated~
    Edited by: 842765 on Mar 8, 2011 12:07 PM - typos
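    The ANY-match test itself is tiny in most languages. A sketch in plain Python sets (not Business Rules bucketsets — names taken from the fabricated example above) may help clarify what the rules need to express:

```python
# Campaigns defined as aspect sets; membership = non-empty intersection.
CAMPAIGNS = {
    "DEODORANT": {"SPORTY", "TRAVELER", "GREGARIOUS"},
    "E_READER": {"STUDIOUS", "TRAVELER"},
    "BREATH_MINT": {"TALKATIVE", "GREGARIOUS"},
    "HELMET": {"TALL", "CLAUSTROPHOBIC"},
}

def in_campaign(person_aspects, campaign):
    """True if the person has any aspect the campaign targets."""
    return bool(CAMPAIGNS[campaign] & set(person_aspects))
```

    Whichever Business Rules construct you pick, the campaign definitions stay data (the bucketsets) and the logic stays a single intersection test, so changing a campaign never touches the processes.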


  • Need best practice advice on RAC 2 nodes configuration/setup

    I need to implement 2 RAC nodes with Linux Red Hat 5.
    Right now I did ifconfig, and it shows:
    ifconfig | grep "^[eb]"
    eth0 Link encap:Ethernet HWaddr 18:03:73:EE:7E:7F
    eth0:1 Link encap:Ethernet HWaddr 18:03:73:EE:7E:7F
    eth0:2 Link encap:Ethernet HWaddr 18:03:73:EE:7E:7F
    eth3 Link encap:Ethernet HWaddr 18:03:73:EE:7E:81
    eth3:1 Link encap:Ethernet HWaddr 18:03:73:EE:7E:81
    eth4 Link encap:Ethernet HWaddr 18:03:73:EE:7E:83
    eth5 Link encap:Ethernet HWaddr 18:03:73:EE:7E:85
    I did not see any bond0 or bond1.
    I think Oracle recommends the private interconnect be bonded. We use CTP-BO, so I want to know if I should request to bond the interface plus service?
    What is the command to test that the interfaces are set up right?
    Thank you for all the responses in advance.

    Which version are you setting up and what are the requirements? If you are discussing the use of NIC bonding for high availability, beginning in 11.2.0.2 there is a concept of "High Availability IP" or HAIP, as discussed in the pre-installation chapters,
    http://docs.oracle.com/cd/E11882_01/install.112/e22489/prelinux.htm, section 2.7.1 Network Hardware Requirements.
    In essence, using HAIP eliminates the need to use NIC bonding to provide for redundancy.
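    To answer the "what command" part: on Linux a bonding master shows up under sysfs, and `cat /proc/net/bonding/bond0` prints the bond's state once the bonding module is in use. A small sketch (Python for illustration; the function name is made up) that lists bonding masters the same way:

```python
from pathlib import Path

def bonded_interfaces(sysfs="/sys/class/net"):
    """Names of interfaces that are Linux bonding masters: each one
    exposes a 'bonding' subdirectory under sysfs."""
    root = Path(sysfs)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir()
                  if (p / "bonding").is_dir())
```

    An empty result matches what the ifconfig output above shows: no bond0/bond1 configured yet.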

  • Best practice of KM development project setup

    Hello Forum,
    I'm about to write my first application with the Knowledge Management framework.
    The purpose of the app is relatively simple: collect certain documents from the repository, export them to some destination and provide a report on the export (links).
    What approach should I take?
    1) A standard, re-usable (J2EE) web app, which only utilizes the KM libs? In this approach I see the advantage that my deliverable would be a more flexible, platform-independent (except for the KM access) enterprise app. The installable container would be an EAR/SCA archive, which is also the more future-oriented form of distribution, isn't it?
    2) A SAP portal app created by the Developer Studio utilizing wizard-driven KM components, which will end in deployment of a SAP par component? This approach has the benefit that the Dev Studio generates the basic KM project setup and some stubs, which lets me concentrate on the business logic.
    <b>But</b> my app does not integrate so deeply with KM services. Do the wizards offer enough value?
    And, supposing approach 2 has to be taken, what kind of KM component has to be instantiated for my kind of app? A repository manager or some kind of (filter) service? <b>Do I need such a component at all</b> or would some simple library calls (I've seen interfaces like getContent()) do the job?
    But if so, approach 1 would do it, right?
    Thank
    Carsten

    Hi, there are many levels involved in PDL.
    Please have a look at these white papers:
    http://www.infoworld.com/pdf/whitepaper/SAPParallelDevelopmentManagementByNewmerix103107.pdf
    http://help.sap.com/saphelp_nwce10/helpdata/en/45/4b5d276e891192e10000000a1553f7/content.htm
    http://www.agilejournal.com/news/397-mks-integrity-and-sap-linking-software-and-production-processes-at-continental-automotive-systems
    All the best
    nag
