Diskless client differentiation

If I have several diskless clients and I want to control, say, which init.d scripts run on any given machine, is there a quick way to start/stop scripts per machine? I want to run these scripts on these machines but not those, and I don't think I need a totally different installation from scratch to do that. I'm sure this is probably easy, but probably not used all that much. It could perhaps also be parsed from the kernel command line, via an entry in pxelinux.cfg, so that each diskless client/group points to its own rc.conf file. I want to be able to turn on certain machines to do certain tasks, but still be able to use the diskless setup for troubleshooting a sick machine in lieu of a live CD (since it would be quicker).
If there's a quick way to grab the MAC address by script or by writing a small C program, that will probably do it.  I'm thinking that sed might also be a possibility.
I could start each machine with a minimal common set of init scripts in rc.conf and then log in via SSH to start the different services on the designated machines, but I know there has to be a way to automate this process.
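Something like this might work for the pxelinux.cfg idea (the role= parameter name and the per-role rc.conf fragments are just things I made up for illustration, not anything standard):
# in pxelinux.cfg for a client/group: append ... role=render
# early in boot (e.g. sourced from rc.local or a custom rc script):
role=""
for arg in $(cat /proc/cmdline); do
    case "$arg" in
        role=*) role="${arg#role=}" ;;
    esac
done
# source a per-role rc.conf fragment if one exists
[ -n "$role" ] && [ -f "/etc/rc.conf.$role" ] && . "/etc/rc.conf.$role"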
Last edited by nomorewindows (2012-01-30 17:09:03)

I searched around and found the following:
/sbin/ifconfig \
   | grep 'eth0' \
   | tr -s ' ' \
   | cut -d ' ' -f5
The result it came up with was 'metric', which is another part of the output, so I modified it to this (shows all MACs):
ifconfig | grep "ether" | cut -b 15-31
Apparently this also works for a specific interface:
cat /sys/class/net/eth0/address
Now I just need to place it in my script with a conditional command to get it to execute based on the comparison.
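For example, the conditional could end up looking something like this (the MAC addresses and the daemons started are just placeholders):
mac=$(cat /sys/class/net/eth0/address)
case "$mac" in
    00:11:22:33:44:55)   # placeholder MAC for a render box
        /etc/rc.d/nfs-common start
        ;;
    00:11:22:33:44:66)   # placeholder MAC for a print server
        /etc/rc.d/cups start
        ;;
esac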
Last edited by nomorewindows (2012-03-02 14:36:37)

Similar Messages

  • Stuck for 2 weeks on diskless client

    I have signed up with 5 different forums. None of them seemed active. I haven't gotten a single answer. I'm going to try to explain as best as I can what my problem is. I am new to linux and I am using Solaris 10 (newest update).
    I am attempting to make a diskless client. Here are the settings:
    ultramain: this is the OS server (Ultra 60). Everything is set correctly. I have checked ethers, hosts, and all other files relating to diskless booting.
    ip: 120.3.2.250
    broadcast: 120.3.2.255
    netmask: 255.255.255.0
    sf3800-up: this is the system controller of the client
    ip: 120.3.2.41
    gateway: 120.3.2.250
    netmask: 255.255.255.0
    the client is a sun fire 3800.
    It is called sf3800-a in the /etc/hosts file.
    The mac address is right in the ethers file.
    I am attempting to boot up sf3800-a through telnet.
    I have a hub. One Ethernet cable goes from ultraserver to the hub, another from the system controller to the hub, and another from sf3800-a to the hub.
    I telnet the system controller fine. I access the console and run show-nets.
    I proceed to boot. It finds the ip address fine. Cursor rotates for a while and it just stops. Cursor becomes blank and just flashes. I have snooped from the OS server. Everything loads up fine at first. All tftp data blocks load up fine. Then the client tries to get an ip for the OS server and finds it okay. It freezes on this line:
    "sf3800-a -> ultraserver SYSLOG R port=32801".
    After around 30 mins it goes through 2 more lines in snoop:
    "120.3.2.41 -> ultraserver Telnet R port=32817 Using RARP/BOOTPARAMS"
    "ultraserver -> 120.3.2.41 Telnet C port=32817".
    I also tried to verbose the boot.
    It freezes here:
    "Found 120.3.2.250 @ 8:0:20:c3:a7:65"
    I have also run in.rarpd in debug mode. It finds the addresses fine.
    Now here's the really puzzling thing. I can get out of this hang by pinging the client (120.3.2.50) from the ultraserver. Right after that, the cursor starts spinning again and everything boots up fine.
    WTF is the problem? I've searched to no end. I reinstalled Solaris many times from different ISOs. I have switched the Ethernet cables, changed memory, changed hubs, checked every file, re-done smosservice + smdiskless, and stopped and restarted the daemon services over and over again.
    I just started with this. I have never used Solaris, only Windows. I know I'm missing something really easy. I was thinking that the addresses are maybe not set up right? I also don't have a defaultrouter address in /etc, but I doubt that could cause a problem.
    Please, I beg you. Help me out.
    PS:
    I have left the system while it was hung over night. This morning, it did load up. Is there a way to diagnose which file it is getting stuck on?
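    For reference, well-formed entries for this setup would look something like the following (the MAC is made up; the IP is the one pinged above):
    /etc/ethers:    8:0:20:aa:bb:cc    sf3800-a
    /etc/hosts:     120.3.2.50    sf3800-a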

  • Netboot, diskless clients, and Open Directory users?

    Hi, I've been reading through the System Image pdf & maybe it's me but a couple of things aren't clear.
    I want to set up diskless clients and allow users to log on to their network home folder using their OD login. Is this possible and where would be a good place to start with instructions on setup?
    thanks, Patrick

    Ok, I got it.
    But what if I want the OD user to have some configuration data on the local client?
    Let me explain that a bit better. The configuration I would like for my network and users is as follows: the server works only as an authentication server; I do not want roaming profiles or home directories on the server. I just want the server to authenticate users when they log in to the several client machines on the LAN.
    For documents sharing, in fact, I much rather prefer using Dropbox, which allows my users to share on a WAN-instead-of-LAN basis.
    But a home local directory is needed for OD users to keep libraries, preferences files and so on.
    Back in the old Windows server (PDC) days, I used the server for name/authentication services only, yet the client still created a local profile for the server's user.
    Does OD work this way too, or am I missing something?
    Thank you.

  • Diskless client help.

    The diskless client starts to boot. I am booting via telnet. I am using snoop to see what's going on from the main server. RARP responds with the correct IP address. The cursor starts to spin for a while and then stops. In snoop it stops and says "syslog R port=32772". Then after a minute it keeps going. Server and client talk for a while, then it stops again. The cursor on telnet is still not rotating. There is no way to get the client to boot unless I ping it in another console. I ping, and the cursor starts rotating again and the client finally boots. What in the world is wrong? I've been stuck on this for a week. I just started using Unix and Solaris. I need serious help. I'd really appreciate it if someone could help me live on MSN or AIM. Help me please!

    Anyone? Please...

  • X86 diskless client's Ethernet interfaces not plumbed/configured

    Hi,
    My client's extra (non-primary) Ethernet interfaces aren't being configured even though I put entries for them in the client's sysidcfg file. Could someone please tell me if this indicates a problem in the sysidcfg file or something more insidious?
    Thanks,
    Dave

    Hi,
    It turns out that my sysidcfg file isn't being used by the client at all. That explains some things I'm seeing, but I'm not sure how to get the file to be used by the client. An entry is in both the bootparams file and the SsysidCF DHCP macro. I tried a sys-unconfig on the diskless client, but it turns out that it can't be used on a diskless client.
    Something more fundamental is amiss...
    Thanks,
    Dave
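    For reference, one common way to bring up extra interfaces on a diskless client, independent of whether sysidcfg is honoured, is to create the usual hostname.<if> files in the client's root tree on the OS server (interface name, addresses, and paths below are only illustrative):
    # echo client1-bge1 > /export/root/client1/etc/hostname.bge1
    # echo "192.168.1.51 client1-bge1" >> /export/root/client1/etc/inet/hosts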

  • Diskless client boot on a blade farm

    We have a possible blade-centric architecture we are looking at whereby the chassis provides for F/C connection to a SAN and standard Ethernet (x 2) for network.
    As we are looking to use RHEL as the OS I'd be interested to know whether people think that the blades can be booted as a diskless client (is this supported by Oracle?) from a single boot image on the SAN or the network. All Oracle datafiles would be maintained on the SAN. Has anyone experience or comments on this kind of setup?
    The possible advantages of this would be the reduced admin for RHEL: When patches are applied one blade can be taken out of the cluster and booted from a secondary image against which the patches are applied. When completed, the remaining blades can be rebooted one at a time to pick up the revised image. The first boot image can be left hanging about as a fall back option before the next round of upgrades. Also, expanding the number of blades can be done (near as damn it) with zero configuration and there is little chance of version mismatches in the OS.
    I am concerned about database upgrades, the Oracle home in a shared everything environment, impact on Grid Control, so I welcome any comments/criticism/abuse.
    Best regards to all,
    Jon Mercer

    I haven't tried RAC on diskless boots, but I have used diskless boots with RHAS3. Once the system is running, you can't really tell except that all mount points are NFS based.
    But in regard to having the Oracle homes shared, I would recommend against it. If you follow the normal guidelines for diskless boots, you'll find that each node already has its own "home" on the array; in particular, partitions like /var and /tmp cannot be shared between hosts. The same goes for Oracle, as (unfortunately) it puts most of its log files in the Oracle home. These would have to be different.
    I would look at OCFS2 - it has support for shared Oracle Homes - in which it can be TOLD to keep separate copies of "common" log files, meaning separate files for each host. This way, executables that are the same stay as one copy, but host-specific data is separated out. I would be very careful before going that route and do a lot of testing; but that seems to be what the good Linux guys thought of when they did OCFS2.

  • Installing diskless client on solaris 10 1/06 failure

    I have a freshly downloaded version of solaris 10 installed and am trying to get the diskless client installed on it. I'm having the following failure:-
    # ./smosservice add -H destiny:898 -- -o destiny \
    -x mediapath=/export/install/sparc_10 \
    -x platform=sparc.sun4u.Solaris_10 \
    -x cluster=SUNWCXall -x locale=en_GB
    Authenticating as user: root
    Type /? for help, pressing <enter> accepts the default denoted by [ ]
    Please enter a string value for: password ::
    Loading Tool: com.sun.admin.osservermgr.cli.OsServerMgrCli from destiny:898
    Login to destiny as user root was successful.
    Download of com.sun.admin.osservermgr.cli.OsServerMgrCli from destiny:898 was successful.
    Failed to create clone area /export/root/clone/Solaris_10/sun4u.
    Are there any patches needed to get this installed or any workarounds?

    In my case I solved this when I realised that the diskless client software exists on all of the disks, not just the first one. Once I'd installed the software from all of the disks it worked as it should.
    The software for the client is not in the same directory on every disk:-
    Disk 2: /cdrom/cdrom0/s0/Solaris_10/Tools/add_to_install_servers
    Disk 3: /cdrom/cdrom0/Solaris_10/Tools/add_to_install_servers
    Disk 4: /cdrom/cdrom0/Solaris_10/Tools/add_to_install_servers
    Disk 5: /cdrom/cdrom0/Tools/add_to_install_servers
    Hope this helps.
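    In case it helps, the script is run from each disk against the existing install image; a sketch, using the mediapath from the failing command above (check the script's own usage output before relying on this):
    # cd /cdrom/cdrom0/Solaris_10/Tools
    # ./add_to_install_servers /export/install/sparc_10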

  • Diskless client NFS boot fails. cannot create /dev/nfs

    Hi
    I've set up a diskless laptop to boot off my server, both running Arch. I followed the wiki article: https://wiki.archlinux.org/index.php/Di … t_NFS_root
    PXE on the laptop connects to the tftpd server just fine, gets the kernel and begins booting up. At some point it begins looking for /dev/nfs, declares it does not exist, attempts to create it and fails with the following error (from memory, as I'm not at the computer now).
    "ERROR: Unable to determine major/minor number of root device root=/dev/nfs"
    My kernel param line in pxelinux.cfg/default is just like the one in the wiki article, i.e.:
    default linux
    label linux
    kernel vmlinuz26
    append initrd=kernel26.img rootfstype=nfs root=/dev/nfs nfsroot=10.0.0.1:/disklessroot,v3,rsize=16384,wsize=16384 ip=::::::dhcp
    any help will be greatly appreciated.
    emk

    I'm assuming you've entered your own IP addresses instead of just copy-pasting from the howto, correct?
    Also, did you install all the nfs packages, and do the mkinitcpio-nfs-utils procedure?
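    For anyone finding this later, the mkinitcpio-nfs-utils step amounts to adding the net hook on the system that owns the NFS root and rebuilding the image that tftp serves; a rough sketch (the HOOKS line and preset name assume the initscripts-era setup the wiki described at the time):
    pacman -S nfs-utils mkinitcpio-nfs-utils
    # in /etc/mkinitcpio.conf on the NFS root, add the net hook before filesystems:
    HOOKS="base udev autodetect net filesystems"
    # then rebuild the image that pxelinux loads as kernel26.img:
    mkinitcpio -p kernel26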

  • Diskless client

    Hi
    I am using Sun's CP1500 card, which is a diskless processor card with two Ethernet interfaces, hme0 and hme1. I have configured them such that there is only one MAC and one IP for both interfaces, so at any point only one of the interfaces is active (I want to use the setup for redundancy).
    For a similar situation on a workstation I could monitor the link status by using
    $ ndd -set /dev/hme instance 0 (or 1)
    $ ndd -get /dev/hme link_status
    $ ifconfig hme1 (or hme0) up
    depending on which link is up or down.
    But the CP1500 card has its disk mounted from one of the servers (a Netra in our case), so when one of the links is down I cannot run the shell commands to make the other, redundant interface active.
    I would like to know if there is any way I could monitor the link and bring up the other interface without the use of the shell.
    Are there any products on the market, or from Sun, to suit my needs?
    Thanks in advance
    sastradhar

  • Can you setup a diskless server within a zone?

    Is it possible to run a diskless server from within a zone?
    At present I believe you cannot NFS-share a directory out of a zone, so this alone would stop a diskless client from working.
    Also, in /usr/sadm/bin, smc doesn't exist, so smosservice won't run. Why are smosservice and smdiskless even in /usr/sadm/bin if you can't run them in a zone?
    I'm using solaris 10 release 1/06.

    > Is it possible to run a diskless server from within a zone?
    I don't know how to interpret that sentence.
    > At present I believe you cannot NFS-share a directory out of a zone, so this alone would stop a diskless client from working.
    Ah, you mean use a zone to support a diskless workstation. Correct. NFS support is very much tied to the kernel, and it is apparently very difficult to allow non-global zones to do NFS without breaking the security guarantees of zones.
    Today, there is no way to do this. I'm sure it's an RFE, but I don't know if anyone is actively working on a solution.
    > Also, in /usr/sadm/bin, smc doesn't exist, so smosservice won't run. Why are smosservice and smdiskless even in /usr/sadm/bin if you can't run them in a zone?
    All packages are moved over, even if their operation might fail due to privilege violations.
    Darren

  • Diskless x86 solaris 10 (DHCP, PXE booting)

    Has anyone gotten an x86 box to be a diskless client with Solaris 10?
    I have a server set up (both JumpStart and diskless server) and booting SPARC just fine, but I'd like to get an x86 machine working too (you know, for fun :)
    I have my LX50 jumpstarting (standard install) just fine, but when I try to diskless-boot it, it hangs with a "cannot mount filesystem" type error. I have a feeling it's because I'm using the standard netinstall NBP boot kernel in tftpboot.
    So, what boot file should I be using, and what options do I need to set in my DHCP server (ISC DHCP version 3)?
    Thanks,
    Chris

    After the end of the kernel line in grub add "-v -m verbose" and see what gets printed during the boot before it hangs.
    Darren
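    In other words, the kernel line in menu.lst would end up looking something like this (the paths are illustrative; match whatever your menu.lst already loads):
    kernel /solaris/multiboot kernel/unix -v -m verbose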

  • Diskless Solaris 10 x86?

    Hi,
    Does anyone have any experience with setting up Solaris 10 x86 U2 to run diskless? I've followed the instructions in the Basic Admin book (using setup_install_server, smosservice, and smdiskless), but the /tftpboot directory never gets populated with anything, and there's nothing anywhere (that I can find) that describes how to get the GRUB-based stuff placed properly in /tftpboot.
    Please note that I'm not interested in diskless INSTALLATION. I'm interested in getting a diskless client to boot and run for real.
    Thanks very much,
    Dave

    Hi Dave et al.,
    I'm having the same problems as you in trying to boot a Solaris 10 11/06 x86 client disklessly, also not wanting to install bits onto it. In fact, it has no hard disk, so installation is not an option.
    I have the Solaris 10 OS files on my server, and I'd like my client system to run entirely over NFS from that server's filesystem tree.
    I'm struggling with the correct pxegrub config files in /tftpboot on my server. I'm not sure:
    (a) Which files (e.g. pxegrub, multiboot, etc.) I need to have available on my TFTP server.
    (b) What exactly to put into the menu.lst.0100<MAC> file to get the kernel to boot and mount the root filesystem via NFS from the server. Currently, I have:
    title Solaris 10 netboot
    root (nd)
    kernel /solaris/multiboot kernel/amd64/unix -r
    #module /solaris/x86.miniroot
    module /platform/i86pc/boot_archive
    which gets the kernel running for 10 seconds or so, but then the system reboots. If I switch to the x86.miniroot, it dumps me into a recovery-mode shell, but without the Ethernet interface configured via DHCP, so I can't even manually mount the NFS root.
    (c) How to avoid doing an install, which is pretty much what's happening in all of the documentation I've found.
    Thanks,
    Chris

  • What are the differences between Bootpd and JumpStart?

    Please tell me the difference between Bootpd and JumpStart.
    Where can I find more info about JumpStart?
    Thank you --- Xing

    What is Bootpd?
    What I know is bootparamd and bootparams.
    bootparamd is a server process that provides information from a bootparams database to diskless clients at boot time.
    The bootparams file contains a list of client entries that diskless clients use for booting. Diskless booting clients retrieve this information by issuing requests to a server running the bootparamd program. The bootparams file may be used in conjunction with, or in place of, other sources of bootparams information.
    Information on JumpStart installation:
    http://docs.sun.com:80/ab2/coll.214.7/SPARCINSTALL/@Ab2PageView/6302?DwebQuery=jumpstart&Ab2Lang=C&Ab2Enc=iso-8859-1
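    For illustration, a bootparams client entry typically looks like this (hostnames and paths made up):
    client1  root=server1:/export/root/client1  swap=server1:/export/swap/client1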

  • Cluster 3.1 Configurations

    W.r.t. both Calendar and Messaging 05Q4.
    The customer wishes to run each service in active-passive mode on opposing nodes (i.e. msg on node a, cal on node b).
    They have stumbled across some documented notes implying that only one instance of the service binaries may be installed, these being in, or part of, a storage group that is attached to the service group for the service.
    They would like to be able to have two copies of the messaging/calendar binaries installed, allowing them to fail over the stores to either node and have them execute against one set of binaries while they upgrade the other node, fail back, test, etc.
    Is this doable and if so is there any documentation? They were thinking local disk installs, but maybe they have to do two binary storage group definitions and leave one inactive on the opposing nodes?
    Thoughts

    > W.r.t. both Calendar and Messaging 05Q4.
    > The customer wishes to run each service in active-passive mode on opposing nodes (i.e. msg on node a, cal on node b).
    > They have stumbled across some documented notes implying that only one instance of the service binaries may be installed, these being in, or part of, a storage group that is attached to the service group for the service.
    Not quite what the documentation shows.
    You may not install more than a single "instance" of either Messaging Server or Calendar Server on a single system.
    HOWEVER
    this does not mean you cannot install exactly the setup you're talking about.
    > They would like to be able to have two copies of the messaging/calendar binaries installed, allowing them to fail over the stores to either node and have them execute against one set of binaries while they upgrade the other node, fail back, test, etc.
    > Is this doable and if so is there any documentation?
    > They were thinking local disk installs, but maybe they have to do two binary storage group definitions and leave one inactive on the opposing nodes?
    There is indeed documentation, though it's still internal to Sun at present:
    4.0 Using alternate root feature
    Solaris has the "alternate root" feature for pkgadd and patchadd (the -R switch). Historically that switch was used for diskless clients: you would run the pkgadd/patchadd command on the server to add packages and patches to the diskless client being hosted. It is my understanding that the Solaris "Live Upgrade" feature uses -R in order to perform the Live Upgrade. This means there is an implicit assumption that when patching on an alternate root, you are not operating against a "live" package. I had originally used this assumption when coding the scripts that went into patches (prepatch/postpatch/prebackout/postbackout). The JES arch team is now proposing to use the alternate root as a means of doing multi-install (and potentially for non-root install too). It is worth noting that Linux RPM also has the alternate root concept.
    4.1 HOWTO use the alternate root feature (-R)
    * install JES as usual
    * go into the JES installation CD area and find the Messaging packages, typically under arch/Product/messaging_svr/Packages/ where arch is one of
    o Solaris_sparc
    o Solaris_x86
    o Linux_x86
    * install the Messaging packages under an alternate root, e.g. /altroots/root1. I would recommend keeping all your alternate roots under a single well-known location (e.g. /altroots) for ease of locating them in the future. The location of alternate roots is not stored anywhere on the system. Substitute altroot in the examples below with your chosen alternate root location.
    o cd arch/Product/messaging_svr/Packages
    o pkgadd -R altroot -d fullpath/arch/Product/messaging_svr/Packages SUNWmsgin SUNWmsgen SUNWmsglb SUNWmsgco SUNWmsgmt SUNWmsgst SUNWmsgmp SUNWmsgwm SUNWmsgmf
    + will install Messaging Server into altroot/opt/SUNWmsgsr
    + Note that you can add the -r switch to change the install location relative to the alternate root
    + Ignore the warning messages that required packages are missing:
    o You can now configure and start messaging as usual
    + cd altroot/opt/SUNWmsgsr
    + sbin/configure
    + sbin/start-msg
    o The usual packaging commands like pkgparam will work given the -R switch
    o In order to patch Messaging, you must first create a few symlinks. This is a one-time setup for each alternate root. The example below assumes the alternate root is altroot
    + mkdir -p altroot/var/sadm/system/admin/
    + cd altroot/var/sadm/system/admin; ln -s /var/sadm/system/admin/INST_RELEASE
    + cd altroot; ln -s /usr
    + Before applying the patch, you need to
    # stop services with altroot/opt/SUNWmsgsr/sbin/stop-msg
    # run stored -r if you are upgrading from pre-6.3: altroot/opt/SUNWmsgsr/lib/stored -r
    + You can now patch messaging, e.g. patchadd -R altroot 118207-51
    * In order to use the system, the installations on the host should be configured so that they don't conflict with each other, specifically the ports. There are two ways to accomplish this:
    o configure individual ports, see 4.1.1 below
    o multi-home, see 4.1.2. below
    4.1.1 Configure individual ports
    Configure the individual ports so that they are different between the installations. Off the top of my head, the list of ports to change is: SMTP, IMAP, POP, HTTPD, ENS, job_controller, watcher (any more?). Note there are SSL versions of the various ports too. Plus there may be other ports in use, like SMTP SUBMIT; the best place to look for MTA-related processes is the dispatcher.cnf file. store and mshttpd ports are probably in configutil. MMP ports may be in configutil and/or its config files.
    * docs
    * might be a good idea to grep the masterconfig file (aka lib/config.meta) for "port"
    * configutil variables
    service         configutil variable                  default
    watcher         local.watcher.port                   49994
    metermaid       metermaid.config.port                63837
    IMAP            service.imap.port                    143
    IMAP SSL        service.imap.sslport                 993
    POP             service.pop.port                     110
    POP over SSL    service.pop.sslport                  995
    webmail         service.http.port                    80
    webmail SSL     service.http.sslport                 443
    ens             local.store.notifyplugin.ensport     7997
    jmq             local.store.notifyplugin.jmqport     7676
    * MTA related ports: SMTP, SMTP submit, SMTP SSL, LMTP: in dispatcher.cnf
    * job controller: in job_controller.cnf
    4.1.2 Multi-Home
    Use a different IP address for each installation, and configure the host to be multi-homed (accepting multiple IP addresses). To change the IP address for each installation, run the ha_ip_config utility. Note that you must configure each installation to use a specific IP address, since the out-of-the-box default is to respond to any IP address (INADDR_ANY). There is one service that needs a separate step in order to change the IP address it responds to. This is the ENS server. I'm still looking up how to change the IP address the ENS server responds to. The workaround for now is to either disable (use local.ens.enable) the ENS server for one of the installations or to change the port used by the ENS server. If you don't do this, one of the ENS servers will not start up. This may not be a huge issue at this time since the other ENS server will handle requests.
    * To configure the host to be multihomed: my guess is to edit /etc/hosts (/etc/inet/ipnodes on Solaris 10 too) and to plumb (ifconfig) the IP addresses onto the Ethernet interfaces. I would think the Linux procedure would be similar. Then update your naming service (/etc/hosts, /etc/inet/ipnodes, NIS and/or DNS) to recognize the new IP address.
    * From the Solaris 2 FAQ
    4.10) How can I have multiple addresses per interface?
    Solaris 2.x provides a feature in ifconfig that allows having more than one IP address per interface.
    Undocumented but existing prior to 2.5, documented in 2.5 and later.
    Syntax:
    # This command is only required in later releases
    ifconfig IF:N plumb
    ifconfig IF:N ip-address up
    where "IF" is an interface (e.g., le0) and N is a number between 1 and . Removing the pseudo interface and associated address is done with
    ifconfig IF:N 0.0.0.0 down
    # In newer release you must use the following command, but
    # beware that this unplumbs your real interface on older
    # releases, so try the above command first.
    ifconfig IF:N unplumb
    As with physical interfaces, all you need to do is make the appropriate /etc/hostname.IF:X file.
    The maximum number of virtual interfaces, above, is 255 in Solaris releases prior to 2.6. Solaris 2.6 and Solaris 2.5.1 with the Solaris Internet Server Supplement (SISS) allow you to set this value with ndd, up to a hard maximum of 8192.
    /usr/sbin/ndd -set /dev/ip ip_addrs_per_if 4000
    There's no limit imposed by the code, so if you bring out adb you can increase the maximum even further.
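    To make such an address persistent across reboots, the /etc/hostname.IF:N file mentioned above just needs to contain the address; an illustrative example using the interface and address from the walkthrough below:
    # echo 10.1.110.16 > /etc/hostname.e1000g0:1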
    4.1.3 Multi-Home Example
    An example on my machine budha:
    Create the new interface
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 10.1.110.114 netmask ffffff80 broadcast 10.1.110.127
    ether 0:c:f1:8e:fb:4
    # ifconfig e1000g0:1 plumb
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 10.1.110.114 netmask ffffff80 broadcast 10.1.110.127
    ether 0:c:f1:8e:fb:4
    e1000g0:1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 0.0.0.0 netmask 0
    # ifconfig e1000g0:1 10.1.110.16 up
    Set the IP address for the Messaging Server on the alternate root (on /var/tmp/altroot/opt/SUNWmsg2 in the example below);
    # cd /var/tmp/altroot/opt/SUNWmsg2
    # sbin/ha_ip_config
    Please specify the IP address assigned to the HA logical host name. Use
    dotted decimal form, a.b.c.d
    Logical IP address: 10.1.110.16
    Please specify the path to the top level directory in which iMS is
    installed.
    iMS server root: /var/tmp/altroot/opt/SUNWmsg2
    The iMS server root directory does not contain any slapd-* subdirectories.
    Skipping configuration of LDAP servers.
    Logical IP address: 10.1.110.16
    iMS server root: /var/tmp/altroot/opt/SUNWmsg2
    Do you wish to change any of the above choices (yes/no) [no]?
    Updating the file /var/tmp/altroot/opt/SUNWmsg2/config/dispatcher.cnf
    Updating the file /var/tmp/altroot/opt/SUNWmsg2/config/job_controller.cnf
    Setting the service.listenaddr configutil parameter
    Setting the service.http.smtphost configutil parameter
    Setting the local.watcher.enable configutil parameter
    Setting the local.autorestart configutil parameter
    Configuration successfully updated
    Do the same for the Messaging Server on the default root.
    # cd /opt/SUNWmsg
    # sbin/ha_ip_config
    Please specify the IP address assigned to the HA logical host name. Use
    dotted decimal form, a.b.c.d
    Logical IP address: 10.1.110.114
    Please specify the path to the top level directory in which iMS is
    installed.
    iMS server root: /opt/SUNWmsg
    The iMS server root directory does not contain any slapd-* subdirectories.
    Skipping configuration of LDAP servers.
    Logical IP address: 10.1.110.114
    iMS server root: /opt/SUNWmsg
    Do you wish to change any of the above choices (yes/no) [no]?
    Updating the file /opt/SUNWmsg/config/dispatcher.cnf
    Updating the file /opt/SUNWmsg/config/job_controller.cnf
    Setting the service.listenaddr configutil parameter
    Setting the service.http.smtphost configutil parameter
    Setting the local.watcher.enable configutil parameter
    Setting the local.autorestart configutil parameter
    Configuration successfully updated
    Disable the ENS server on one of the installations by setting local.ens.enable to 0:
    sbin/configutil -o local.ens.enable -v 0
    Configure the netmask and broadcast on the new IP address
    # ifconfig e1000g0:1 down
    # ifconfig e1000g0:1 netmask 0xffffff80
    # ifconfig e1000g0:1
    e1000g0:1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 10.1.110.16 netmask ffffff80 broadcast 10.255.255.255
    # ifconfig e1000g0:1 broadcast 10.1.110.127
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 10.1.110.114 netmask ffffff80 broadcast 10.1.110.127
    ether 0:c:f1:8e:fb:4
    e1000g0:1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 10.1.110.16 netmask ffffff80 broadcast 10.1.110.127
    # ifconfig e1000g0:1 up
    Edit /etc/hosts to add the new IP address 10.1.110.16 to it:
    # cat /etc/hosts
    127.0.0.1 localhost
    10.1.110.114 budha.west.sun.com budha loghost
    10.1.110.4 elegit.west.sun.com
    # multi-home - second IP address on ethernet port
    10.1.110.16 budha2.west.sun.com budha2
    4.2 Problems with the alternate root solution
    In my Messaging patch, I shut down the services during prepatch.
    However, what do you do if you detect that the patch is being done on an alternate root? Consider the scenarios:
    * For the diskless client case, the alternate root is being run on a different machine. I cannot try to stop the services in the patch; otherwise it might try to stop the services on the current machine.
    * The same is true for the Live Upgrade case: the Live Upgrade partition is not "running".
    * For the multi-install case, I should try to stop the services, because indeed the services are running on the current machine.
    * Also note that for the diskless client/Live Upgrade case the alternate root is mounted in a different location. What is / on the target machine is mounted (say) as /a on the current machine. Thus absolute symlinks do not point to the "real" place. This is important to me because that is how I find my config/data, i.e. through an absolute symlink. (The symlink isn't the issue; I store an absolute path to the config/data.)
    The point I'm making is that I cannot distinguish between the "diskless client/Live Upgrade" case and the "multi-install" case. Thus I have to make an assumption that it is one or the other.
    So originally, I would check to see if the patch was being applied on an alternate root, and if it was I would simply not try to stop the services (i.e. I assumed it was the diskless client/Live Upgrade case).
    I could change this to assume the multi-install case, and go ahead and shut down the services, but I think it is better right now as it is. I will document that the user must stop the services (among other things) when applying the patch in the multi-install case.
    This is what I mean by the "conflict" between the use of alternate root for Live Upgrade and for multi-install. I can't remember who I was talking to in iChange (some of their people happened to be in a drop-in in my building), who explained that alternate root means "not running on the current system". Any references to absolute paths may not work, since the alternate root may be mounted in a different location, so how can you assume that anything live is running?
    So perhaps some sort of "check" could be added so that I can determine whether the alternate root is a "multi-install" location or not. I don't have good ideas on how to do that. After all, it was pointed out that the INSTALL_HOME could be installed on one machine and then mounted on others.
    4.2.1 Problem with alternate root with zones?
    The following url regarding zones, has the following text:
    When patchadd and patchrm are invoked with the "-R" option, by rule, the patch is destined for either a non-Zones environment, a diskless client environment, or an alternate root environment. If an admin assigns the argument of "-R" to the root of a local zone, e.g. -R /export/zone1/root, then this type of invocation is not supported and unexpected behaviors will occur. The patch commands will not be able to detect this invalid invocation, so documentation should include it as a warning.
    Is this a concern?
    5.0 Multiple Instance vs Multiple Install issue
    It was pointed out to me by a customer that there is an issue with pkgadd. If the binaries are installed on a shared file system and the system is then jumpstarted, the binaries will still exist on the shared file system, but the package database will no longer know that the packages were installed. How can this problem be resolved? Some not-fully-thought-through ideas:
    * restore from backup as part of jumpstart - not very good, after all jumpstart is for bringing up the node quickly to a known state
    * do the pkgadd as part of jumpstart with the shared filesystem unmounted. Thus the binaries will be shadowed when the shared filesystem is mounted. The downside is the jumpstart scripts must be kept up to date in terms of the patches applied to the binaries on the shared filesystem.
    This problem seems to imply that the multiple-instance concept, where the binaries are owned by a single node, fits a little better than the multiple-install idea, where the binaries can be installed on the shared file system and "float" across nodes like the config.
    Update to this issue:
    If you put the altroot on the shared filesystem, then the pkg db will also be there, and thus will survive the jumpstart. So that all works out quite nicely!
    > Thoughts

  • Why are duplicate undo.Z files not hard links to save disk space?

    Hi,
    many undo.Z files under /var/sadm/pkg exist twice:
    <pkg>/save/<patch id>/undo.Z and
    <pkg>/save/pspool/<pkg>/save/<patch id>/undo.Z
    Why are these files not hard linked? Is there a technical reason for that?
    It would be nice to save some space in /var/sadm/pkg ...
    Thanks.

    Good to know why undo.Z can't be hard linked. Thank you!
    If disk space is an issue, you might find this document useful:
    http://sunsolve.sun.com/search/document.do?assetkey=1-9-14295-1
    Sorry, I have no service plan; don't ask why, I'm private.
    Yes, I have disk space problems: the /var slice was configured too small (1 GB).
    First I tried moving /var/sadm/pkg to a larger slice and making pkg a symlink.
    That does not work; SUM then reports that no patches have been installed.
    My second try was to move /var/sadm completely to a larger slice.
    This seems to work well with SUM.
    But today I was wondering why someone had changed the symlink /var/sadm back to a new directory and put only 'smc/...' into it.
    Now I know who it was: I ran 'smosservice add' to prepare a diskless client.
    sadm/smc/smcreg was modified on the symlinked tree; after that, /var/sadm was switched back to a new directory. This makes /var/sadm completely useless, e.g. 'showrev -p' answers with 'opendir' :-)
    OK, if I changed something I should not assume everything will keep working, so I am thinking about a new installation with much more space for /var.
