NAM Deployment Question

Hi,
I am primarily interested in network performance, behaviour analysis, and capacity planning. I am in the process of demoing a number of different NetFlow collector tools, such as Fluke's NetFlow collector and Plixer's Scrutinizer.
I have heard of the NAM / NAM 2 before but thought it would be priced too high. Having done some more reading on the product, I can see benefits in its ability to also analyse traffic in real time by using SPAN.
My question is: how many ports can you aggregate into the SPAN session on the NAM 2 6500 card? My idea is to deploy two NAM modules in our two core 6500 switches. One "port" on each NAM will be used to collect NetFlow information from the rest of the network backbone (including remote branch devices); the other "port" will be used as a destination for multiple SPAN source ports, e.g. the multiple Gigabit Ethernet backbone links that interconnect the entire backbone.
Can the NAM take this load on the SPAN? I have read that the NAM 2 benefits from supporting the two backbone fabric ports; does this therefore mean that it fully supports the 40 Gbps connection into the switching fabric backplane if you have a Sup720 deployed?
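For reference, on Catalyst 6500 IOS multiple source ports can feed a single local SPAN session whose destination is the NAM's internal data port. A sketch only (the slot and interface numbers are hypothetical, and platform limits on concurrent local SPAN sessions still apply):

```
monitor session 1 source interface GigabitEthernet1/1 - 4 both
monitor session 1 destination analysis-module 5 data-port 1
```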
Thanks in advance.

I'd suggest you post this in the integration newsgroup.
-- Rob
sangita wrote:
Hello everyone,
I have a question regarding WLI deployment. Could somebody please suggest some
of the key elements involved in AUTOMATING a WLI 8.1 project?
I know that in WLS 8.1 we can package our application into an .ear file and then
use either the wldeploy Ant task or weblogic.Deployer to automate the deployment
process.
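A minimal wldeploy target of the kind described above might look like this (the application name, credentials, and URL are placeholders, not values from this thread):

```xml
<taskdef name="wldeploy"
         classname="weblogic.ant.taskdefs.management.WLDeploy"/>
<target name="deploy">
  <wldeploy action="deploy" name="MyWLIApp"
            source="build/MyWLIApp.ear"
            user="weblogic" password="weblogic"
            adminurl="t3://localhost:7001"
            targets="myserver"/>
</target>
```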
I wanted to know how, and which elements are required to be deployed, as far
as WLI applications are concerned, such as adapters and so on.
I really appreciate your time on this.
thank you,
sangita

Similar Messages

  • Computer name change question (not the usual one): the old First-Lasts-MacBook-Pro name still shows up in network scans after renaming

    Please read my question carefully, this is, I think, a question for the experts. It's not the usual name change question.
    When I setup my new MacBook Pro, something slipped by me and my computer was named First-Lasts-MacBook-Pro (using my real first and last name).
    I changed the computer name in Preferences/Sharing to a new name and Preferences/Accounts to just be Mike. I can right click on my account name, choose advanced, and see that everything looks right.
    However, if I do a scan of my network with my iPhone using the free version of IP Scanner, it lists my computer as First-Lasts-MacBook-Pro! And it lists the user as First-Last.
    So even though another Mac just sees my new computer name, and my home folder is Mike, somewhere in the system the original setup with my full name is still stored. And it's available on a network scan. So my full name might show up at a coffee shop.
    Can I fully change the name without doing a complete re-install of Lion and all my apps?

    One thought... you said the iPhone displayed your computer's old name? I think that you must have used the iPhone with this computer before you changed the name. So no one else's iPhone should display your full name unless that iPhone had previously connected to your Mac. For example, I did this exact same change, and I use the Keynote Remote app to connect with my MacBook Pro. It would no longer link with my MacBook Pro under the old name, and I found that I had to unlink and then create a new link under the new name. So the answer to your question is, there is nothing you need to do on the Mac, but rather the phone, and no other phone will display your full name.
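    If the old name does linger on the Mac itself, the usual cleanup (a sketch; the new name shown is hypothetical) is to reset all three name records from Terminal and then flush the directory cache:

```
sudo scutil --set ComputerName "Mikes-MacBook-Pro"
sudo scutil --set LocalHostName "Mikes-MacBook-Pro"
sudo scutil --set HostName "Mikes-MacBook-Pro"
dscacheutil -flushcache
```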

  • JSC deployment question.

    I have a situation where after my JSC created application is deployed, I need to have the ability to access PDF files that I give the user of my application the ability to select in a listbox.
    At development time, the names and locations of these PDF files (which will be stored locally on the Sun App Server) are not known, so I cannot have JSC bundle them up with the WAR. What I do instead is have my JSC app read from a database and populate a listbox with the names of the PDF files and map those to the local URL (as soon as I figure out what that will be).
    My question is this: once I deploy the app, where (in what directory) do I place the PDF files, assuming that I am using the bundled Sun Application Server that comes with JSC?
    Thanks!

    Actually, it might be better to read the PDF file directly from the database instead of placing it in the path somewhere and redirecting to it (more security if ever needed... no static files to browse to).
    Let me dig around for an example of doing this sort of thing and I'll try to apply it to a JSC project context and post my findings here.
    If anyone has done this before please let me know!
    Thanks.

  • Authorization, deployment question

    Hi all,
    does somebody know whether it is possible to create a VC component that can later be used by users without authentication, just as a normal web page?
    And another question: how can I choose a custom link to call a deployed VC application (as Web Dynpro), instead of using the automatically generated one, which is too complex (example: change the link http://host:port/webdynpro/dispatcher/sap.com/test~newdc_impl/OrderForm
    to something simpler)?
    Thanks in advance,
    best regards,
    Vera Stoyanova

    Hi,
    thanks for the answer. But what I wanted was to be able to create the custom link in CE 7.1 directly. Is this possible?
    Or, at the least, is it possible to have the part of the name I gave in VC for the development component without this test~ prefix (the example link is in my first post)? It is always generated automatically that way.
    Best Regards,
    Vera

  • "Internet" MP & DP Package Deployment Questions

    Good Morning:
    First of all, thanks for all your help over the years - lurking social.technet has provided many a relief for headaches. Now onto my hopefully simple question(s):
    I recently set up SCCM 2012 SP1 in our work environment. I have 1 Site with two servers (let's call them MP & IMP). IMP is set to internet only, and MP to internet & intranet serving our AD-site boundary groups (a couple through forest trusts). PKI
    shows as working on clients/server as far as I can tell (my first PKI implementation). The IMP has a live/valid DNS entry in public DNS with port 443 opened to it through our building firewall and is an MP, DP, EP, EPP, FSP. The site, MP, and IMP are healthy
    and after turning off PULL DP, the IMP is receiving all packages/applications from the primary DP happily. Clients are being pushed with the DNS name of the IMP when they log onto our network from the office (and are receiving it successfully). Updates
    for SCEP and Office/Windows are being delivered timely to clients on the intranet, the ADR's are running well and pretty much allow me to not mess with them except to rebuild every 6 months (which will happen in July). My question is probably something simple,
    so pardon my ignorance... but it seems that clients are phoning home from the IMP just fine (seeing all the 192.x.x.x addresses when laptops call in from home) but they're not getting the deployment packages (SCEP updates is my only reference point at the
    moment) pushed to them while on the internet, even though they reside on IMP's DP. Clients should be getting SCEP updates every morning starting after 6 AM (just one package a day for now), but the clients at home talking to the IMP are just not receiving the
    push, it seems? They check in with policy requests etc. I'm not sure if this could be a simple Windows Firewall issue on the clients that I can remediate with GP, or whether there are extra ports that need to be opened to the IMP other than 443 through the building firewall...
    I'm not tearing out my hair by any means, but I am a bit miffed. I'm happy to provide any logs and run any tests desired. I have ample loaner laptops to try multiple configurations on. Any help would be greatly appreciated in getting the last piece our SCCM
    2012 SP1 puzzle in place so we can label it working at 100% and move onto learning more of its nuances and advanced capabilities. Thanks so much for any help or guidance. 

    Since no one has answered this post, I recommend opening a support case with CSS as they can work with you to solve this problem.
    Garth Jones | My blogs: Enhansoft and
    Old Blog site | Twitter:
    @GarthMJ

  • AIA Deployment Questions and Answers

    Question Is it recommended to have one single Deployment Plan for all the composites, or one each for the requestor ABCS, Provider ABCS, EBS etc?
    Answer There is no particular recommendation. If you have implemented your services properly and independently (see http://blogs.oracle.com/aia/2010/11/aia_11g_best_practices_for_dec.html), you can deploy them in any order, and it does not really matter whether you have one or multiple DPs, as everything gets deployed.
    Question Looking at the DP that gets generated, it looks like they can be manually created, without going through the AIA LCW, BOM generation etc. I know it is not the approach suggested as per AIA flow, but is there anything wrong with doing it?
    Answer There is nothing really wrong with hand-crafting the DP, except that you lose the overall stream of information from start to end in your development lifecycle. Some customers have actually decided to do it this way, though.
    Question What is the significance of the /oracle/product/Middleware/AIA/aia_instances/my_instance/AIAMetaData/config/AIAConfigurationProperties.xml file? When an interface is migrated from Dev to Test, how do I make sure that this file is updated in Test instance? Is it by manually updating and running UpdateMetaData.xml? or the AID will take care of it?
    Answer This file is absolutely key at runtime as services read certain properties (e.g. the actual endpoint of the services that are called) from that file. I would assume your services will fail if you don't have a valid file on your test environment, i.e. having a section for each of your services.
    And yes, the AID takes care of maintaining it. Every service you created with Service Constructor should have a file called AIAServiceConfigurationProperties.xml. When running AID to deploy such a service, it first adjusts the values in that file to match the current environment, then merges the content into AIAConfigurationProperties.xml and finally uploads it to MDS.

    Hi,
    Based on the above, if the composite folder contains AIAServiceConfigurationProperties.xml, the AID should do the following:
    1. Adjust the values of the server and port, as long as the necessary replace-token command is included in the PreInstallScript of the Deployment Plan
    2. Merge the content of /composite/AIAServiceConfigurationProperties.xml into $AIA_INSTANCE/AIAMetaData/config/AIAConfigurationProperties.xml
    3. Upload $AIA_INSTANCE/AIAMetaData/config/AIAConfigurationProperties.xml into MDS.
    I have tried to run the Deployment Plan given below, and the above steps did not happen.
    <DeploymentPlan component="XXXX" version="3.0">
    <PreInstallScript>
         <replace file="${AIA_HOME}/Composites/ABCS/Ebiz/CreatePurchaseOrderListEbizProvABCSImpl/composite.xml"
         token="xxxxxxxx.xxxxx.xxx.xxx" value="${fp.server.adminhostname}"/>
         <replace file="${AIA_HOME}/Composites/ABCS/Ebiz/CreatePurchaseOrderListEbizProvABCSImpl/composite.xml"
         token="9999" value="${fp.server.soaserverport}"/>
         <replace file="${AIA_HOME}/Composites/ABCS/Ebiz/CreatePurchaseOrderListEbizProvABCSImpl/AIAServiceConfigurationProperties.xml"
         token="xxxxxxxx.xxxxx.xxx.xxx" value="${fp.server.adminhostname}"/>
         <replace file="${AIA_HOME}/Composites/ABCS/Ebiz/CreatePurchaseOrderListEbizProvABCSImpl/AIAServiceConfigurationProperties.xml"
         token="9999" value="${fp.server.soaserverport}"/>
    </PreInstallScript>
    <Configurations>
    <EndpointConfigurator target-server="pips.XXXX" dir="${AIA_HOME}">
    </EndpointConfigurator>
    <Datasource name="APPS" jndiLocation="jdbc/APPS" action="create" database="participatingapplications.Ebiz.db.EBIZ01" xa-enabled="true" wlserver="pips.XXXX"/>
    <UpdateMetadata wlserver="pips.XXXX" >
    <fileset dir="${AIA_HOME}/AIAMetaData">
    <include name="AIAComponents/ApplicationObjectLibrary/Mar/**" />
    <include name="AIAComponents/ApplicationConnectorServiceLibrary/Mar/**" />
    <include name="AIAComponents/ApplicationObjectLibrary/Ebiz/**" />
    <include name="AIAComponents/ApplicationConnectorServiceLibrary/Ebiz/**" />
    </fileset>
    </UpdateMetadata>
    <!--<ManagedServer wlserver="pips.XXXX" action="shutdown" failonerror="true"/> -->
    <DbAdapter connection-instance-jndi="eis/DB/APPS" datasource-jndi="jdbc/APPS" xa-enabled="true" action="create" wlserver="pips.XXXX"/>
    <!--<ManagedServer wlserver="pips.XXXX" action="start" failonerror="true"/> -->
    </Configurations>
    <Deployments>
    <Composite compositeName="CreatePurchaseOrderListEbizProvABCSImpl" compositedir="${AIA_HOME}/Composites/ABCS/Ebiz/CreatePurchaseOrderListEbizProvABCSImpl" revision="1.0" wlserver="pips.XXXX" action="deploy"/>
    </Deployments>
    <PostInstallScript>
    </PostInstallScript>
    </DeploymentPlan>
    Can you please let me know if I am missing something in the DP?
    Thanks,
    Anish.

  • JNDI resources and Servlets - deployment question.

    Use of JSTL and Realms implies use of JNDI for accessing a DataSource by its string name. I can set up JNDI resources for a web application using web.xml and/or context.xml. But web.xml is inside the WAR file, so it is impossible to deploy an application as a single WAR file: web.xml must be edited first, and that requires exploding the WAR archive.
    The question is: is there some common practice how to make web applications configurable (database URI, etc) without editing of web.xml? Does it make sense to store settings using Preferences API and to declare JNDI resource within ContextListener class?

    Tomcat 5.5 uses a different format for data source configuration. You're using the Tomcat 4.x and 5.0 series configuration format. Downgrade Tomcat, or use the appropriate config:
    http://tomcat.apache.org/tomcat-5.5-doc/jndi-datasource-examples-howto.html
    You'll want something like this:
    <Resource
       name="jdbc/HDCD"
       auth="Container"
       type="javax.sql.DataSource"
       username="miladmin"
       password="authorize"
       etc...
    />
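    For completeness, the web application usually also declares a matching resource-ref in web.xml so that a java:comp/env/jdbc/HDCD lookup resolves (the names here simply mirror the Resource example above):

```xml
<resource-ref>
  <res-ref-name>jdbc/HDCD</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>
```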

  • 11.1.2 Deployment question!

    We are using ADF 11.1.1.4 with WebCenter and SOA components, but not extensively. We would like to upgrade to 11.1.2 and got a little confused after reading the note given below.
    "Important Note - This version of JDeveloper doesn't include the SOA and WebCenter pieces - to use these components you'll need to download Oracle JDeveloper 11.1.1.5.0"
    My question is: can code developed using JDeveloper 11.1.2 be deployed on 11.1.1.4 WLS? If so, we are thinking of developing the SOA and WebCenter applications using 11.1.1.4 and the ADF application using 11.1.2, with both deployed on 11.1.1.4 WLS!!!!
    Is there any other option for an installation like this?

    Code developed in 11.1.2 can be deployed to WLS 10.3.5 with the Sherman patch and the ADF runtime.
    Check this:
    http://tompeez.wordpress.com/2011/06/29/follow-up-upgrading-wls-10-3-5-with-adf-runtime-11-1-2-0-0-sherman-patch/
    You cannot deploy code developed in 11.1.2 to 11.1.1.4 WLS... not possible.

  • [JS][CS3] Script deployment question

    Hi everyone,
    I am currently writing scripts in a corporate environment (~30 users), and I'm contemplating the easiest and most efficient way of deploying the scripts. We have a couple of scripts already with the users that are located locally on their machines. As time goes on we'll be writing more and more scripts and updating the existing ones as necessary.
    My current inclination is to stop saving the files locally on the individual machines and instead put a shortcut to a shared server folder in the scripts folder. In my testing it seems to work fine without any significant delay, and it automatically populates all of the files in the linked folder. I basically want to make sure that users are always launching the most up-to-date code and that any future scripts can be easily deployed without me walking around to everyone's desk (or worse, e-mailing instructions and hoping for the best).
    Are there any downsides/performance issues to this approach? It's a strictly desktop environment, so I don't have to worry about laptop users who are not connected to the network. Is there an alternate way that would be smoother?
    Thanks!

    We've tried it both ways. On our 12 InCopy machines (OS X), we have a symbolic link from /Applications/Adobe InCopy CS5/Scripts/Scripts Panel/ to a shared folder. This works great, as long as our file server doesn't go down (and if it did, all sorts of other things would break horribly anyway). And the User scripts folder is still available for any machine-specific customizations.
    For our 4 InDesign machines, we decided that was a little too much risk. So instead we have a script, copyScripts.jsx, that just copies scripts from the file server to the Application scripts folder. So everytime there is a change, we go around to those 4 machines and run copyScripts. This works OK.
    It's a little bit of a pain but it also insulates us a bit from bugs, and it means that whoever is using InDesign can decide to not update the scripts if they are in the middle of something critical. They can choose to defer taking the script updates.
    I think if we had to do it over, we probably wouldn't bother with the copyScripts mechanism. The centralized shared folder works pretty well. But it's not so annoying that we've gotten rid of it. And for 4 machines, it's not a huge burden.
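    A copyScripts-style helper could equally be a few lines of shell run on each machine; a minimal sketch (the function name and the example paths are hypothetical, not the poster's actual script):

```shell
#!/bin/sh
# sync_scripts SRC DST: mirror the shared scripts folder into the local
# Scripts Panel folder, creating the destination if needed.
sync_scripts() {
    src="$1"
    dst="$2"
    mkdir -p "$dst"
    # The trailing /. copies the folder's contents rather than the folder itself.
    cp -R "$src"/. "$dst"/
}

# Example invocation (hypothetical server and version paths):
# sync_scripts "/Volumes/Server/InDesignScripts" \
#     "$HOME/Library/Preferences/Adobe InDesign/Version 5.0/Scripts/Scripts Panel"
```

    Unlike a shortcut to the shared folder, a copy like this keeps working when the file server is down, at the cost of users running whatever was last synced.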
    Also, there are some questions of development. If it's convenient to do development in the ESTK, having the shared folder means you cannot just right-click in the Scripts panel and choose Edit Script. Or if you do, then any changes you save as you are developing the script are instantly available to everyone, potentially breaking their work if there are problems, or if there is debugging output, etc. So you need to make sure you do development in a different place. Just something to keep in mind.

  • Split Directory Packaging and Deployment Question

    Hello Rob Woollen and All,
    I have a question about packaging and deployment with the "split directory structure"
    in WebLogic Server 8.1.
    Specifically, how does one go about deciding which classes to put in myEnterpriseApp/myWebApp/WEB-INF/classes,
    versus myEnterpriseApp/myEjbModule, versus myEnterpriseApp/APP-INF/classes?
    I think the answer to the first part is easy enough: if there are classes depended
    on by, say, the servlets in a web app, but not depended on anywhere else in the
    enterprise app, then those classes should go in WEB-INF/classes.
    It's the other part of the question that gives me trouble. I use local interfaces
    on my session beans. Let's say I have a domain object class returned from a session
    bean method and depended on by the web app. If I put that domain object class
    under myEnterpriseApp/myEjbModule, then the web app can see it by virtue of the
    classloader arrangement.
    But the wlcompile Ant target supposedly compiles stuff to build/APP-INF/classes.
    What stuff? How does it decide? I haven't experimented and empirically observed
    yet, but I couldn't find the answer in the documentation and tutorials. Is it
    looking for java source files under src/myEnterpriseApp but not under myWebApp
    or myEjbModule? In general, does BEA have any recommendations in this area?
    Thanks,
    Randy

    "Randy Stafford" <[email protected]> wrote in message
    news:[email protected]...
    >
    Hi Mark,
    Thanks for the reply. I don't have 8.1 installed yet, so I can't empirically
    observe the example's behavior. But I downloaded the example and inspected the
    code. It answers some, but not all, of my questions. Where to start.
    In 8.1 we have made optimizations to J2EE packaging. Mostly this is about
    not having to use manifest classpaths to do sharing of common classes.
    MF CPs are a pain to configure. You put your classes in one location in
    the ear, and then EVERY module has to have an MF CP entry pointing to that
    location, and then you actually have N copies of the classes loaded, one per module.
    The mechanism to share classes across all modules is APP-INF/lib and
    APP-INF/classes. The benefit is that APP-INF is shared across all modules.
    So to your question below: you could just put it in the EJB module, BUT if
    you have multiple EJBs that you split into separate modules, you're back to
    the same issue. So APP-INF is just the simplest solution overall.
    Split-dir is a specified way to lay out your src files on disk.
    From code inspection, it looks like the JSP and EJB (therefore the web app module
    and EJB module) both depend on the AppUtils class, which is not in APP-INF, but
    rather in a directory under the enterprise app directory that does not represent
    a web app module or EJB module. In the build file's compile target, is it the
    wlcompile task invocation that causes compilation of AppUtils.java? Or is it
    the ant task invocation (with "build.appStartup" as the value of the target attribute)
    that causes compilation of AppUtils.java due to the dependency of ApplicationStartup
    on AppUtils? And what subdirectory of the build directory does AppUtils.class
    end up in?
    Why not just put AppUtils.java in the EJB module? Both dependent modules would
    still be able to see it by virtue of the classloader arrangement. Does putting
    it outside of all dependent modules represent a convention that BEA recommends?
    >
    Finally, why not put applicationresource.properties in the same place as its user,
    AppUtils.java?
    Thanks,
    Randy
    "Mark Griffith" <[email protected]> wrote:
    Randy:
    (Rob may post later, but here is my take)
    "Randy Stafford" <[email protected]> wrote in message
    news:[email protected]...
    Hello Rob Woollen and All,
    I have a question about packaging and deployment with the "split
    directory
    structure"
    in WebLogic Server 8.1.
    Specifically, how does one go about deciding which classes to put in myEnterpriseApp/myWebApp/WEB-INF/classes,
    versus myEnterpriseApp/myEjbModule, versus myEnterpriseApp/APP-INF/classes?
    I think the answer to the first part is easy enough: if there are
    classes
    depended
    on by, say, the servlets in a web app, but not depended on anywhere else
    in the
    enterprise app, then those classes should go in WEB-INF/classes.
    It's the other part of the question that gives me trouble. I use local interfaces
    on my session beans. Let's say I have a domain object class returned from
    a session
    bean method and depended on by the web app. If I put that domain
    object
    class
    under myEnterpriseApp/myEjbModule, then the web app can see it by
    virtue
    of the
    classloader arrangement.
    But the wlcompile Ant target supposedly compiles stuff to build/APP-INF/classes.
    What stuff? How does it decide?
    wlcompile has a module factory. If a directory is claimed by a module
    factory then it is compiled by that specific module compiler. The rules for
    module definition follow the same J2EE formatting rules.
    So:
    /myejb/
    would be identified as an EJB module by:
    */myejb/meta-inf/ejb-jar.xml
    */myejb/myejb.ejb (EJBGen file)
    then src files (*.java) will be compiled to
    $BUILD_DIR/myejb/
    /myweb/
    would be identified as a web module by:
    */myweb/WEB-INF/web.xml
    Also for webapps,
    /myweb/WEB-INF/src/*.java
    will be compiled to
    $BUILD_DIR/myweb/WEB-INF/classes
    We chose WEB-INF/src following the Struts precedent.
    So a plain old module that has nothing but .java files in it will go to
    $BUILD_DIR/APP-INF/classes
    If you have a jar of classes that you need to share across the entire
    ear,
    you would check it into your src tree at:
    $SRC_DIR/APP-INF/lib/mycommon.jar
    You can check out an example at:
    $BEA_HOME/weblogic81/samples/server/examples/src/examples/splitdir/helloWorldEar
    Hope this helps.
    cheers
    mbg
    I haven't experimented and empirically observed
    yet, but I couldn't find the answer in the documentation and tutorials. Is it
    looking for java source files under src/myEnterpriseApp but not under myWebApp
    or myEjbModule? In general, does BEA have any recommendations in this area?
    Thanks,
    Randy
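    Putting mbg's rules together, a split-dir source tree might look like this (the module names are the hypothetical ones from the thread):

```
src/myEnterpriseApp/
    APP-INF/
        classes/                <- shared classes, visible to all modules
        lib/mycommon.jar        <- shared jars checked in here
    myEjbModule/
        META-INF/ejb-jar.xml    <- marks the directory as an EJB module
        *.java                  -> compiled to build/myEjbModule/
    myWebApp/
        WEB-INF/web.xml         <- marks the directory as a web module
        WEB-INF/src/*.java      -> compiled to build/myWebApp/WEB-INF/classes/
```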

  • Playlist name sorting question

    First off, sorry if this has been discussed before, I couldn't find it anywhere.
    My question is, is there any way to sort my playlists via a "Sort Name" option or something so that a playlist starting with 'the' (for example, "The Letter Black") will be sorted, in the case of the example, under 'L' instead of 'T'? Thanks in advance.
    Blessings.

    Sorry, there are no controls over the sort order of playlists. They go: folders, then smart playlists, then regular playlists, with a lexical sort within each group.
    However, you don't need to create playlists for individual artists and albums. You can easily get to them by the column browser. To turn it on, click View > Column Browser > Show. Then type "lett" in the search bar and you will be right there.

  • WLC 5508 and LightWeight APs Deployment question

    Hi There,
    Can you please help with the following question in regards to the deployment of a new WLC and new LAPs?
    I have configured and connected a 5508 WLC and 3500 series LAP.
    LAG is enabled in the WLC and successfully connected to the neighboring switch (using etherchannel) and to the network.
    The port-channel port is set to trunk mode obviously and certain vlan ids are currently allowed (3-5)
    The management interface has this IP address 192.168.5.250/24
    I created a WLAN with WLAN ID 3, Interface set to Management and say SSID test1
    I have connected a new LAP to the network, which switchport interface is set to access mode and assigned with vlan id 3. The LAP is able to join the WLC successfully with an IP address, such as, 192.168.3.100 (assigned via DHCP).
    When I try connecting a mobile client to the wireless LAN, it can successfully detect and connect to the WLAN, created in the WLC (test1) however it gets an IP address by DHCP, in the 192.168.5.0/24 network, which is the IP range of the management interface's IP address.
    What can I do to get the clients connecting on network 192.168.3.0/24? I thought this would be the case since I allocated the WLAN Id of 3 in the WLAN test1 configuration and since the LAP switchport is set to access mode with vlan ID 3.
    Cheers,
    egua5261

    Hi,
    The WLAN ID has no relation to the VLAN ID. The WLAN ID is just an identifier for the WLAN.
    you said "Interface set to Management and say SSID test1" and here is your issue.
    You set the interface of the WLAN to the management. So, the WLAN will be mapped to the VLAN to which the management interface is mapped to.
    What you need to do is to create a dynamic interface with ip range in 192.168.3.0/24 and provide VLAN ID for that interface and assign your WLAN to this new interface. This way your clients will get an IP from this specified range.
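    A sketch of that dynamic-interface setup from the WLC CLI (the interface name, addresses, and DHCP server here are assumptions for your 192.168.3.0/24 network):

```
config interface create vlan3-clients 3
config interface address dynamic-interface vlan3-clients 192.168.3.250 255.255.255.0 192.168.3.1
config interface dhcp dynamic-interface vlan3-clients primary 192.168.3.1
config wlan disable 3
config wlan interface 3 vlan3-clients
config wlan enable 3
```

    Note that the WLAN must be disabled while its interface mapping is changed, and VLAN 3 must be allowed on the trunk to the WLC (which it already is in your setup).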
    HTH
    Amjad

  • 10.7 client .dmg creation for deployment questions.

    Please forgive me if this question is in the wrong forum.
    I've been doing searches online and in the 10.7 Peachpit books (client and server) and I can't seem to find the info I am looking for.
    I am trying to create a 10.7 .dmg to use on new Macs my company is going to deploy. We are not using 10.7 Server at the moment, we
    are using 10.6.8 Server. This will not be an image we are going to deploy over the network either. I know this may not be "best practices"
    but at the moment, this is the way we are going to (re)image new Macs.
    Basically, I want to create a 10.7 .dmg that does NOT contain the recovery partition. I can't seem to find a way to do this. If I am correct,
    even a "clean" install, when booted from a USB 10.7 recovery drive, will create the recovery partition, right?
    I am running 10.7 client and I have the 10.7.3 Server Admin tools.
    I apologize in advance if I am missing something glaringly obvious.
    Also, any tips on best practices for creating 10.7 client .dmgs for deployment that's any different than creating 10.6 images?
    thanks in advance.

    Using information from this site and my own scripting experience, I present a more secure way to do it, which supports Munki and other deployment tools without having the password to the ODM or client in clear text on the client, or in packages easily accessible on an HTTP server:
    On server:
    ssh-keygen
    Save the output of ~/.ssh/id_rsa.pub to your clip board
    Then create a launchd or something so that this runs at startup
    nc -kl 1337 | xargs -n 1 -I host ssh -q -o StrictHostKeyChecking=no root@host /usr/local/bin/setupLDAP diradminpassword localadminpassword > /dev/null 2>&1
    On client:
    Create script (to use in a package as postinstall or something):
    #!/bin/bash
    # Turns on ssh
    systemsetup -f -setremotelogin On
    # Sets up passwordless login to the root account from the server
    mkdir -p /var/root/.ssh && chmod 700 /var/root/.ssh
    echo "ssh-rsa FROM_YOUR_CLIPBOARD_A_VERYLONGOUTPUTOFCHARACTERS [email protected]" >> /var/root/.ssh/authorized_keys
    # installs setupLDAP
    mkdir -p /usr/local/bin
    cat > /usr/local/bin/setupLDAP <<'EOF'
    #!/bin/sh
    PATH=/bin:/sbin:/usr/bin:/usr/sbin
    export PATH
    computerid=`scutil --get ComputerName`; yes | dsconfigldap -vfs  -a 'server.domain.no' -n 'server' -c $computerid -u 'diradmin' -p $1 -l 'l' -q $2
    EOF
    chmod +x /usr/local/bin/setupLDAP
    End note
    That was the code; now you just add the skeleton. And to clarify what this does: first we let the server connect to the client as root, even though root access is "disabled" (root has no password, so you can't log in as root by default). Then we create a small script to set up OD binding (/usr/local/bin/setupLDAP), but this script doesn't contain the passwords. Then the client sends a request, with its hostname, to the small socket server on the server; the server connects to that hostname and executes /usr/local/bin/setupLDAP with the needed passwords.

  • Deployment Question and non-compliant

    Hi, I have a question about deployments and non-compliant systems. Since we updated to 2012 R2 I have had many patching deployments fail with non-compliant messages. If I bring up the deployment it shows many are installed and many are Required.
    Could anyone answer the following question?
    If, for example, I am deploying an IE 11 patch to Windows 7 machines that do not have IE 11 installed yet, would that give the non-compliant error because it cannot install the IE 11 patch? This would also apply to .NET 4 and 4.5, for example. We deploy Software
    Update Groups based on Operating Systems so every month I search for every patch non-expired and not superseded for the OS in question and then make the software update group based on that. I figured if the software update group had a patch for a product like
    IE 11 that was not installed on the device yet it just would not install it.
    Any help is appreciated.

    Hi,
    If the update isn't applicable, let's say IE11 is not installed, it will not be reported as non-compliant, and it will not be reported as required either, so you should be fine in the reports.
    Regards,
    Jörgen
    -- My System Center blog ccmexec.com -- Twitter
    @ccmexec

  • Multiple JDS as a name service question

    I have three JDS (5.2) running as a naming service for host and user authentication: one master, and two slaves. My problem is that the ldap servers themselves point to another ldap server for information. So when I take one server down (patching) everything pointing to that server (ldap1) goes down, and everything pointing to ldap2 (which uses ldap1 for a naming services) goes down as well. The basic effect is I lose 2/3 of my applications.
    I have read that an LDAP server cannot point to itself for authentication. Do I just need to remove the LDAP client and run the LDAP servers old school (local accounts)? Or is there another solution?
    I do have normal hosts (non-LDAP servers) pointing to a VIP (virtual IP). Do I just need to treat my LDAP servers the same?

    No, I'm not saying that.... I guess I should have spelled out the "clients" comment a little more.
    I have all (well, a majority, in the 90% range) of my clients pointing to a Cisco CSS VIP which fronts all three of my LDAP servers. This way LDAP queries should never go down. I've tested this on individual servers and it appears to run nicely.
    The problem is with my LDAP servers, and their LDAP client configurations. The axiom I'm using is: "you can not point an LDAP server to itself for name service resolution". Would pointing the LDAP servers to the VIP break this axiom?
    Right now they are pointing to LDAP servers other than themselves. And when one goes down for patching (ldap1), the other LDAP server that is using it (ldap1) for its name service resolution will also go down, effectively shutting down 2/3 of my applications.
    This is really a chicken-and-egg type of question.
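    One concrete form of the "old school" fallback on the LDAP servers themselves is to list files ahead of ldap in /etc/nsswitch.conf, so each server can always resolve its own accounts locally even when its LDAP client is down (a sketch; adjust for your Solaris release):

```
# /etc/nsswitch.conf on the LDAP servers (sketch)
passwd:  files ldap
group:   files ldap
hosts:   files dns
```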
