Bundle testing and best practices, bonus introduction

Hi all, I'm new here and to ZENworks, so please be gentle.
While I'm acquainted with Puppet and reasonably proficient with Ansible, I've just taken on a number of Windows desktops. We're using ZCM to deploy software, and I'm testing to get acquainted. Things don't quite work as I expect; for example, I fought with launching QuickTasks to my test workstation for hours yesterday before I realized I needed to manually refresh the workstation after updating my bundle; I couldn't just push.
So while I'm reading through the documentation, I thought I'd say hello and ask for some general advice. In particular, I had trouble identifying why my bundle had failed, and where to look for information about the failures. Including a script and then expecting STDERR from that script to tell me about the failure seems like an impossible luxury at this point. For now, I'd like to just experiment.
What is your workflow for testing new bundles?

Immanetize,
> I fought with launching QuickTasks to my test workstation for hours
> yesterday before I realized I needed to manually refresh the workstation
> after updating my bundle; I couldn't just push.
Yes, refresh is a timed process. When testing, I always do a zac ref
bypasscache on the testing workstation.
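In practice that makes my test loop on the workstation something like the lines below (a rough sketch from an elevated cmd prompt; the bundle name is a placeholder, and the long zac verbs also have short forms such as bl and bln):

rem force a refresh, ignoring the locally cached bundle information
zac ref bypasscache
rem confirm the updated bundle/version is now assigned to this device
zac bundle-list
rem launch the bundle on demand instead of waiting for its schedule
zac bundle-launch "Test Bundle"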
> So while I'm reading through the documentation, I thought I'd say hello
> and ask for some general advice. In particular, I had trouble
> identifying why my bundle had failed, and where to look for information
> about the failures. Including a script and then expecting STDERR from that
> script to tell me about the failure seems like an impossible luxury at
> this point. For now, I'd like to just experiment.
> What is your workflow for testing new bundles?
It depends. If it fails, then I first look at the ZCM agent log.
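On a Windows workstation that normally means zmd-messages.log under the agent's install directory; the path below assumes a default 64-bit install, so adjust if yours differs ("MyBundle" is a placeholder for your bundle name):

rem search the agent message log for entries mentioning the bundle
findstr /i "MyBundle" "C:\Program Files (x86)\Novell\ZENworks\logs\LocalStore\zmd-messages.log"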
Anders Gustafsson (NKP)
The Aaland Islands (N60 E20)
Have an idea for a product enhancement? Please visit:
http://www.novell.com/rms

Similar Messages

  • Can anyone recommend tips and best practices for FrameMaker-to-RoboHelp migration ?

    Hi. I'm planning a migration from FM (unstructured) to RH. I'd appreciate any tips and best practices for the migration process. (Note that at the moment I plan to import the FM documents into, not link to them from, RH.)
    For example, my current FM files are presently not optimally "chunked", so that autoconverting FM file sections (based on, say, Header 1 paragraph layout) won't always result in an optimal topic set. I'm thinking of going through the FM docs and inserting dummy paragraphs with a tag something like "topic_break", placed in more appropriate locations than the existing headers. Then, during import to RH, I'd use the topic_break paragraph to demarcate the topics. Is this a good technique? Beyond paragraph-based import delineation, do you know of any guidelines for redrafting FM chapter file content into RH topics?
    Also, are there any considerations/gotchas in the areas of text review workflow, multiple authoring, etc. after the migration? (I've not managed an ongoing RH doc project before, so any advice would be greatly appreciated.)
    Thanks in advance!
    -Kurt
    BTW, the main reason for the migration: info is presently scattered in various (and way too many) PDF files. There's no global index. I'd like to make a RoboHelp HTML interface (probably WebHelp layout) so it can be a one-stop documentation shop for users.

    Jeff
    FM may produce better output for your requirements, but for many users what RH produces works just fine. My recent finding about Word converting images to JPG before import will mean a better experience for many.
    Once RH is set up, and it's not difficult, for many its printed documents will do the job. I would say try it and then judge.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • FWSM interface monitoring and best practices documentation.

    Hello everyone
     I have a couple of questions regarding VLAN interface monitoring and best practices, specifically for this service module.
     I couldn't find a suggestion or guideline on how to define a VLAN interface on a management station. The FWSM's total throughput is 5.5 Gbps and the interfaces are mapped to VLANs carried on trunks over 10 Gb EtherChannels. Is there a common practice, or past experience, for setting physical parameters on logical interfaces? The "show interface" command reports the BW as unknown.
     Additionally, do any of you have a document addressing best practices for FWSM? I have this for other platforms and general recommendations based on newer ASA versions but nothing related to FWSM.
    Thanks a lot!
    Regards
    Guido

    Hi,
    If you are looking for one more command to check the throughput through the module:
    show firewall module <number> traffic
    Also, since this platform is end-of-life, you might have to check some older Cisco documentation for the best practices.
    http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/prod_white_paper0900aecd805457cc.html
    https://supportforums.cisco.com/discussion/11540181/ask-expertconfiguring-troubleshooting-best-practices-asa-fwsm-failover
    Thanks and Regards,
    Vibhor Amrodia

  • EP Naming Conventions and Best Practices

    Hi all
    Please provide me with the EP Naming Conventions and Best Practices documents.
    Thanks
    Vijay

    Hi Daya,
    For SAP Best Practices for Portal, read through these documents:
    [SAP Best Practices for Portal - doc 1 |http://help.sap.com/bp_epv170/EP_US/HTML/Portals_intro.htm]
    [SAP Best Practices for EP |http://www.sap.com/services/pdf/BWP_SAP_Best_Practices_for_Enterprise_Portals.pdf]
    And for Naming Conventions in EP, please go through these two links:
    [Naming Conventions in EP|naming standards;
    [EP Naming Conventions |https://websmp210.sap-ag.de/~sapidb/011000358700005875762004E]
    Hope this helps,
    Regards,
    Shailesh
    Edited by: Shailesh Kumar Nagar on May 30, 2008 4:09 PM

  • EP Naming Conventions and Best Practices documents

    Hi all
    Please provide me with the EP Naming Conventions and Best Practices documents.
    Thanks
    Vijay

    Hi,
    Check this:
    Best Practices in EP
    http://help.sap.com/saphelp_nw04/helpdata/en/43/6d9b6eaccc7101e10000000a1553f7/frameset.htm
    Regards,
    Praveen Gudapati

  • Not a question, but a suggestion on updating software and best practice (Adobe we need to create stickies for the forums)

    Lots of you are hitting the brick wall in updating, and the end result is a non-recoverable project.  In a production environment and with projects due, it's best that you never update while in the middle of projects.  Wait until you have a day or two of down time, then test.
    For best practice, get into the habit of saving off your projects to a new name by incremental versions, e.g. "project_name_v001", v002, etc.
    Before you close a project, save it, then save it again to a new version. In this way you'll always have two copies and will not lose the entire project.  Most projects crash upon opening (at least in my experience).
    At the end of the day, copy off your current project to an external drive.  I have a 1TB USB3 drive for this purpose, but you can just as easily save off just the PPro, AE and PS files to a stick.  If the video corrupts, you can always re-ingest.
    Which leads us to the next tip: never clear off your cards or wipe the tapes until the project is archived.  Always cheaper to buy more memory than recouping lost hours of work, and your sanity.
    I've been doing this for over a decade and the number of projects I've lost?  Zero.  Have I crashed?  Oh, yeah.  But I just open the previous version, save a new one and resume the edit.
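    As a concrete sketch of that end-of-day copy (the drive name, folder and project file names are placeholders; assumes a Mac/Unix shell, and only the PPro/AE/PS project files are copied, not the media):

    # copy the versioned project files into a dated folder on the external drive
    DEST="/Volumes/Backup1TB/projects/$(date +%Y-%m-%d)"
    mkdir -p "$DEST"
    cp project_name_v*.prproj project_name_v*.aep project_name_v*.psd "$DEST"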

    Ctrl + B to show the Top Menu
    View > Show Sidebar
    View > Show Status Bar
    Deactivate Search Entire Library to speed things up.
    This should make managing your iPhone the same as it was before.

  • Large heap sizes, GC tuning and best practices

    Hello,
    I’ve read in the best practices document that the recommended heap size (without JVM GC tuning) is 512M. It also indicates that GC tuning, object number/size, and hardware configuration play a significant role in determining what the optimal heap size is. My particular Coherence implementation contains a static data set that is fairly large in size (150-300k per entry). Our hardware platform contains 16G physical RAM available and we want to dedicate at least 1G to the system and 512M for a proxy instance (localstorage=false) which our TCP*Extend clients will use to connect to the cache. This leaves us 14.5G available for our cache instances.
    We're trying to determine the proper balance of heap size vs. number of cache instances and have ended up with the following configuration: 7 cache instances per node running with a 2G heap and a high-units value of 1.5G. Our testing has shown that using the Concurrent Mark Sweep GC algorithm results in no substantial GC pauses, and we have also done testing with a heap fragmentation inducer (http://www.azulsystems.com/e2e/docs/Fragger.java) which likewise shows no significant pauses.
    The reason we opted for a larger heap was to cut down on the cluster communication and context switching overhead, as well as the administration challenges that 28 separate JVM processes would create. Although our testing has shown successful results, my concern is that we're straying from the best practices recommendations, and I'm wondering what others' thoughts are about the configuration outlined above.
    Thanks,
    - Allen Bettilyon
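    For reference, the kind of startup lines that configuration implies (a sketch only: the flags and system properties are the standard HotSpot/Coherence 3.x ones, the config file name is a placeholder, and the 1.5G high-units limit itself is set in the cache configuration XML rather than on the command line):

    # one of the seven storage-enabled cache server JVMs per node: fixed 2G heap, CMS collector
    java -server -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
         -Dtangosol.coherence.cacheconfig=example-cache-config.xml \
         -Dtangosol.coherence.distributed.localstorage=true \
         -cp coherence.jar com.tangosol.net.DefaultCacheServer
    # the proxy/TCP*Extend JVM: 512M heap, storage disabled
    java -server -Xms512m -Xmx512m \
         -Dtangosol.coherence.cacheconfig=example-cache-config.xml \
         -Dtangosol.coherence.distributed.localstorage=false \
         -cp coherence.jar com.tangosol.net.DefaultCacheServer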

  • Create Test Environment - Best Practice

    Hi all,
    Please, does anyone know the best practice for creating a test environment from our company DB for testing purposes?
    OUR SCENARIO:
    SAP B1 9.1 PL 6
    MSSQL 2008 R2
    Integration service activated
    WINDOWS 2008 R2 Ent Edition for client machine and server machine
    Thanks in advance
    --LUCA

    Hi Manish..
    I would like to have a copy of our company DB for testing purposes.
    The integration service is not important.
    We would like to test the new 9.1 PL 06 BoM features without modifying the production environment...
    Thanks
    --LUCA
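    A common low-tech way to get such a copy is a SQL Server backup/restore under a new database name. This is only a sketch: the server, database, path and logical file names below are placeholders, and you should check the SAP B1 documentation for the supported company-copy procedure before relying on it:

    sqlcmd -S SQLSERVER -Q "BACKUP DATABASE [PROD_COMPANY] TO DISK = N'C:\Backup\PROD_COMPANY.bak'"
    rem check the logical file names first with: RESTORE FILELISTONLY FROM DISK = N'C:\Backup\PROD_COMPANY.bak'
    sqlcmd -S SQLSERVER -Q "RESTORE DATABASE [TEST_COMPANY] FROM DISK = N'C:\Backup\PROD_COMPANY.bak' WITH MOVE 'PROD_COMPANY' TO 'C:\Data\TEST_COMPANY.mdf', MOVE 'PROD_COMPANY_log' TO 'C:\Data\TEST_COMPANY_log.ldf'"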

  • AD Sites and Services and Best Practices

    Hey All,
    I am new to OES, but not new to AD. I am in an environment in which DSfW was recently setup to support VDI testing.
    I notice that there is no configuration under AD Sites and Services. We have multiple sites, with DCs set up at each site. The consequence of not having Sites and Services configured is that machines/users in site "A" are logging in through site "B" domain controllers. Obviously, this is not ideal nor best practice. Secondly, this leads me to wonder how the domain controllers are replicating, since I do not see NTDS entries in the Sites and Services MMC for the domain controllers, yet I do see that AD data is replicating by comparing databases (simply adding a new user on one DC, I see it added on the secondary DCs). So I know it's replicating, but apparently not using the AD schema?
    One other question I have about DSfW is regarding the migration from a mixed environment to a full AD environment. We are deploying AD primarily due to VDI initiatives, and currently only testing this. Looking further down the road for planning purposes, I have to wonder if it's possible to stand up a 2008 R2 server, join it to the domain, dcpromo it, transfer the FSMO roles, then decommission the DSfW systems. This would leave us with a purely Windows DC environment for authentication. Is this something some people have done before? Is it a recommended path for migrating? Because I also see others creating a second AD environment, then building trusts between DSfW's domain and the "new" domain (assuming these are not in the same forest). That would be less than ideal.
    Thanks in advance for any responses...

    Originally Posted by jmarton
    DSfW does not currently support "sites and services" but it's on the
    roadmap and is currently targeted for OES 11 SP2.
    Excellent! I feel sane now :) I can live with this, as long as it's expected/normal.
    It sounds like you need sites and services, but once that's in DSfW,
    why migrate from DSfW to MAD if DSfW works for your VDI initiative?
    You are correct. I am simply planning and making sure all the options are in play here.
    I would rather not get too deeply reliant on DSfW if it will make any future possible migration more difficult. Otherwise, DSfW is extremely convenient....I am impressed actually.
    I also believe there may be a way we can control the DC used for specific "contexts" (or OUs as Microsoft calls them). So if I have a group of users in a particular OU that reside at a particular branch, I think I should be able to set their preferred domain controller....and if so, that means sites & services becomes nearly irrelevant. I would be interested to talk to people who are using DSfW with multiple sites in play.

  • Upgrading ldom 1.0.1 to ldom 1.0.3. Steps and best practices?

    I have a guest domain running on ldm 1.0.1. I want to upgrade to 1.0.3 on the control domain. The administration guide states that I don't have to go through a lot of gyrations to do the upgrade but it is unclear what is the best practice to upgrade from 1.0.1 to 1.0.3
    What are the actual steps?
    Can I just pkgadd -d . SUNWldm.v and magic happens, or do I need to remove the current SUNWldm and then do the pkgadd?
    Should I use the procedures laid out for install-ldm -d none?
    It's unclear to me what the actual upgrade path is. If I were running 1.0 the procedures are laid out in the guide, but I don't see the steps I should take to go from 1.0.1 to 1.0.3.
    Thanks to anyone that can provide the best practice here....

    I just upgraded from 1.0.2 to 1.0.3 and it was just a case of removing the old version and installing the new. As a precaution, it's best to save the constraints info to an XML file for each ldom, including the primary.
    Make sure that you are running the required firmware and OS kernel patches as per the Release Notes.
    Shut down the guest ldoms:
    # ldm stop -a
    # ldm ls-constraints -x ldm-name > ldm-name.xml
    # svcadm disable ldmd
    # pkgrm SUNWldm
    At this point a few dependencies will be flagged up, e.g. SUNWldlibvirt, SUNWldvirtinst, etc., if they are installed as well. In my case I uninstalled all the dependent packages too and reinstalled them from the 1.0.3 software bundle.
    # pkgadd -d path-to-1.0.3-sw-bundle-products SUNWldm (plus the others if required)
    # svcadm enable ldmd
    # ldm start -a
    Hope this helps
    cheers
    dqui81

  • Oracle EPM 11.1.2.3 Hardware Requirement and best practice

    Hello,
    Could anyone help me find the minimum hardware requirements for Oracle EPM 11.1.2.3 on a Windows 2008 R2 server? What's the best practice to get optimum performance after the default configuration, i.e., which entries need to be modified based on the hardware resources (CPU and RAM) and the number of users accessing the Hyperion reports/files?
    Thanks,
    Yash

    Why would you want to know the minimum requirements? Surely it would be best to have optimal server specs. The nearest you are going to get is contained in the standard deployment guide - About Standard Deployment.
    That said, it is not possible to provide stats based on nothing; you would really need to undertake a technical design review/workshop, as there are many topics to cover before coming up with server information.
    Cheers
    John

  • Static NAT refresh and best practice with inside and DMZ

    I've been out of the firewall game for a while and have now been re-tasked with some configuration, both updating ASAs to 8.4 and making some new services available. So I've dug into refreshing my knowledge of NAT operation and have a question based on best practice, and I'd like a sanity check.
    This is very basic, I apologize in advance. I just need the cobwebs dusted off.
    The scenario is this: If I have an SQL server on an inside network that a DMZ host needs access to, is it best to present the inside (SQL server in this example) IP via static to the DMZ or the DMZ (SQL client in this example) with static to the inside?
    I think it's best to present the higher-security resource into the lower-security network. For example, when a service from the DMZ is made available to the outside/public, the real IP from the higher-security interface is mapped to the lower.
    So I would think the same would apply to the inside/DMZ, making 'static (inside,dmz)' the 'proper' method for the pre 8.3 and this for 8.3 and up:
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    Am I on the right track?

    Hello Rgnelson,
    It is not related to the security level of the zone; instead, it is about what the behavior should be. What I mean is, for
    nat (inside,dmz) static yy.yy.yy.yy
    - Any traffic hitting translated address yy.yy.yy.yy on the dmz zone should be re-directed to the host xx.xx.xx.xx on the inside interface.
    - Traffic initiated from the real host xx.xx.xx.xx should be translated to yy.yy.yy.yy if the hosts accesses any resources on the DMZ Interface.
    If you reverse it to (dmz,inside) the behavior will be reversed as well, so If you need to translate the address from the DMZ interface going to the inside interface you should use the (dmz,inside).
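    To make the reversed case concrete, the mirrored version of that object NAT would look like the lines below (addresses are placeholders: zz.zz.zz.zz is a real DMZ host, ww.ww.ww.ww is the translated address the inside would see), which is not what you need for your scenario:
    object network dmzSQLclientIP
    host zz.zz.zz.zz
    nat (dmz,inside) static ww.ww.ww.ww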
    For your case I would do what is common: since the server is in the INSIDE zone, you should configure
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    At this time, users from the DMZ zone will be able to access the server using the yy.yy.yy.yy IP Address.
    HTH
    AMatahen

  • Installation and best practices

    I saw this link being discussed in a thread about "Live Type," but I think it needs a thread of its own, so I'm going to begin it here.
    http://support.apple.com/kb/HT4722?viewlocale=en_US
    I have Motion 4 (and everything else with FCS 2, of course), and just purchased Motion 5 via the App Store. (I'm sure I'll be buying FCP X also at some point, but decided to hold off for now.)
    When I was reading the "Live Type" thread there was some discussion about Motion 5 overwriting Motion 4 projects or something like that, so I started freaking out. I've opened both 5 and 4, but am closing them until I understand what's going on.
    Since I purchased Motion 5 from the App Store, I'm just under the assumption that my Mac took care of everything correctly. I see that Motion 4 resides in the FCS folder and Motion 5 is a stand-alone in the Applications folder.
    So I guess my questions are these ...
    1) What's so important about having FCS 2009 on a separate drive? I have a couple of other internal drives with lots and lots of free space, so that isn't an issue for me. I just wonder why this is a "best practice." The two programs CAN share the same drive ...the link says so.
    2) I suppose that I'll let 4 and 5 reside side by side for now. How do I make sure Motion 5 won't screw up my Motion 4 projects? (My hunch is that you can open an M4 project in M5 and do a "save as" ...this will create an M5 version and leave the M4 alone. Am I correct about that?) Maybe the answer to this is related to my first question.
    3) I want to make sure I'm not missing something in the words "startup disk." Although I have 3 drives in my MacPro, only one is a "startup disk" ...the other two are for storage. If I move everything from FCS to a different internal drive, does it make any difference that the destination drive is NOT a startup disk?
    **I'm gonna separate this part out a bit because it may or may not be related to the previous quesitons.**
    I noticed that Motion 5 came with very little content and only a few templates, but I read in another thread that additional content can be downloaded free when I do an update. I also read in that thread that this free content is pretty much the same as the content that I have with Motion 4.
    1) If I download this additional content (which is basically the same as what's in Motion 4), will I just have a duplicate of all that material?
    2) Could this be part of the reason that Apple recommends that Motion 5 be on a separate drive ...so that the content and templates don't get mixed up?
    --Just a couple months ago, I finally got around to cleaning out all the FCS content, throwing away duplicates and organizing things properly. If I've got to go through this process again, I want to do it correctly the first time.

    When you install Motion 5 or FCP X, all your Final Cut Studio apps are moved into a folder called Final Cut Studio.  This is because you can't have two apps with the same name in the same folder.  I'm running them both on the same drive, no problems.
    Motion 5 does not automatically overwrite any Motion project files, that is hogwash.  When you open a v.4 file into 5, it will ask if you want to convert the original to 5, or open a copy called Untitled and make it a v.5 project.  Very simple.  If you're super paranoid, duplicate the original Motion project file, and open the copy into v.5 to be extra safe.  Remember once a project file is version 5, it can't be opened in previous versions.
    You can't launch both at the same time, duh.
    The System Drive, or OS drive, is just that, the drive your operating system is installed on.  All applications should be on that drive, and NOT be moved to other drives.  Especially pro apps like these.  Move them to a non-OS drive, and you'll regret it.  Trust me.
    Yes, run Software Update (Apple Menu) and you'll get additional content for Motion 5 that v.4 doesn't have.  It won't be any problem with space on your drive.  That stuff takes up very little space.
    Apple recommends two different OS drives, or partitions, only to avoid an overwhelming flood of people screaming "What happened to my Final Cut Studio legacy apps?" and other such problems.  Hey, they're put into a new folder, that's all, breathe...
    If you're having excessive problems, you may not have hardware up to speed.  CPU speed is needed, at least 8GB RAM (if not 12 or 16 for serious work), but your graphics card really needs to be up to speed.  iMacs and MacBook Pros barely meet the requirements, but will work well.  Mac Pros can get much more powerful graphics cards.  Airs and Minis should be avoided like the plague.
    After checking hardware, be sure to run Disk Utility to "repair" all drives.  Then, get the free app "Preference Manager" by Digital Rebellion (dot com) to safely trash your app's preference files, which resets it, and can fix a lot of current bugs.

  • Technical documentation for ADF projects - how to and best practices

    Hi,
    I have a question about how to create technical documentation for an ADF project, especially ADF BC and ADF Faces. Is there any tool or JDev plugin for that purpose? What information should the documentation for such a project contain? Any principles? Does anybody have any experience? Is there something like documentation best practices for ADF projects, e.g. how to create documentation for business components?
    Kuba

    I'm not sure there is "best practices" but some of the things that can help people understand are:
    An ADF BC diagram - this will describe all your ADF BC objects - just drag your components into a new empty diagram
    A JSF page flow - to show how pages are called.
    Java class diagram - for Java code that is not in the above two
    One more thing to consider - since ADF BC, JSF page flow and JSPX pages are all XML based - you could write XSL files that will actually transform them into any type of documentation you want.

  • EFashion sample Universes and best practices?

    Hi experts,
    Do you all think that the eFashion sample Universe was developed based on the best practices of Universe design? Below is one of my questions/problems:
    A Universe is designed to hide technical details and answer all valid business questions (queries/reports). For nonsensical questions, it will show 'incompatible' etc. In the eFashion sample, I tried to compose a query to answer "for a period of time, e.g. from 2008.5 to 2008.9, in each week for each product (article), its MSRP (sales price), sold price, margin, quantity sold and promotion flag". I grabbed Product.SKUnumber, week from Time period, Unit Price MSRP from Product, Sold at (unit price) from Product, Promotions.promotion, and Margin and Quantity sold from Measures into the Query Panel. It gives me an 'incompatible' error message when I try to run it.
    I think the whole sample (from database data model to Universe schema structure/joins...) is flawed. In the Product_promotion_facts table, it seems that if a promotion lasts for more than one week, the weekid will be the starting week and duration will indicate how long it lasts. With this design, answering "what promotions run in what weeks" will not be easy, because you need to join Product_promotion_facts with the Time dimension using "time.weekid between p_prom.weekid and p_prom.weekid+duration" (assuming weekid is in sequence), instead of a simple "time.weekid=p_prom.weekid". The weekid joins between Shop_facts, Product_promotion_facts and Calendar_year_lookup are very confusing, because one is about "the week the sales happened" and the other "the week the promotion started". No tool can be smart enough to resolve this ambiguity automatically.
    Then there is the shortcut join between Shop_facts and Product_promotion_facts: it's based on the articleid alone. Obviously the two have to be joined on both article and time (using between/and, not the simple weekid=weekid in this design), otherwise the join doesn't make sense (a sale of one article on one day joins to all the promotions for this article of all time?).
    What do you think?
    thanks.
    Edward

    You seem to have the idea that finding out whether a project uses "best practices" is the same as finding out whether a car is blue. Or perhaps you think there is a standards board somewhere which reviews projects for the use of "best practices".
    Well, it isn't like that. The most cynical viewpoint is that "best practices" is simply an advertising slogan used by IT consultants to make them appear competent to their prospective clients. But basically it's a value judgement. For example using Hibernate may be a good thing to do in many projects, but there are projects where it would not be a good thing to do. So you can't just say that using Hibernate is a "best practice".
    However it's always a good idea to keep your source code in a repository (CVS, Subversion, git, etc.) so I think most people would call that a "best practice". And you could talk about software development techniques, but "best practice" for a team of three is very different from "best practice" for a team of 250.
    So you aren't going to get a one-paragraph description of what features you should stick in your project to qualify as "best practices". And you aren't going to get a checklist off the web whereby you can rate yourself for "best practices" either. Or if you do, you'll find that the "best practice" involves buying something from the people who provided the checklist.
