DHCP on OES11 Cluster

Hello,
I am having trouble getting the DHCP service running in a cluster environment (OES11 SP1, NSS volume). The resource is up and running, but the DHCP service won't load:
LDAPS session successfully enabled to 10.0.181.36:636
Error: Cannot find LDAP entry matching (&(objectClass=dhcpServer)(cn=DHCP_CLUSTER-DNSDHCP-SERVER))
Configuration file errors encountered -- exiting
The standard command rcnovell-dhcp start did work, so it looks like rights or something else are missing on the cluster volume /media/nssDHCP...
Maybe I am misunderstanding the documentation as well. Section 8.3.3 of the DHCP Linux guide,
"How to use a different user", states that the service would use the dhcpd user, but no such user was created. Do I need to create it manually?
Thanks for your help!
Regards, Jürgen

Hi,
If I recall correctly from my setup, I had to create a user, LUM-enable it, and match its UID to the local account.
See here: http://www.novell.com/documentation/...a/bbe81cc.html
Let us know how it goes.
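For what it's worth, the failure can be narrowed down from the command line. This is a hedged diagnostic sketch: the object name and server IP come from the error message above, while the bind DN "cn=admin,o=org" is a placeholder you would replace with your own admin user.

```shell
# 1) Does the local dhcpd user exist, and what UID does it have?
#    (the LUM-enabled eDirectory user should carry the same UID)
getent passwd dhcpd || echo "dhcpd user missing - create and LUM-enable it"

# 2) Can the dhcpServer object be found with the same filter dhcpd uses?
#    Run manually; "cn=admin,o=org" is a placeholder bind DN.
# ldapsearch -x -H ldaps://10.0.181.36:636 -D "cn=admin,o=org" -W \
#   -b "" "(&(objectClass=dhcpServer)(cn=DHCP_CLUSTER-DNSDHCP-SERVER))" cn
```

If step 1 reports the user missing, that would fit the documentation note about the dhcpd user quoted above.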
Cheers,

Similar Messages

  • Manage OES11 Cluster Services

    Hello to all!
    I've got a "big" problem. I have to manage 3 OES clusters: two 2-node clusters on OES 2 SP3 and one 4-node cluster on OES 11. The master, where iManager is installed, is an OES 2 SP3 server, too.
    When I start iManager, it's no problem to manage everything on the OES 2 clusters. If I choose the OES 11 cluster, I get errors and iManager (the front end) stops working.
    I've searched the internet for information, so I can say it's not the TID with the sysctl settings.
    Does anyone have any idea?
    Hans-Christian Wössner

    Originally Posted by hwoess
    Hello to all!
    I've just a "big" problem. I have to manage 3 OES-Cluster Nodes. 2 2-Node-Clusters on OES 2 SP3 and 1 4-Node-Cluster on OES 11. The master - where the iManager is installed - is an OES 2 SP3, too.
    When I start the iManager, it's no problem to manage everything on the OES 2-Clusters. If I choose the OES 11-Cluster, I'll get errors and the iManager stops working (the frontend).
    I've searched the internet for information; so I can say, it's not the TID with the sysctl-settings.
    Does anyone have any idea?
    Hans-Christian Wössner
    Try using iManager on one of the OES11 servers to manage the OES11 cluster (I'm just suggesting you try that).
    I don't know off the top of my head whether the OES11 iManager/cluster plug-in can manage both cluster versions. I believe OES11 no longer uses CIMOM to manage clustering (it's sfcb or something like that now?)

  • Upgrade OES2 to OES11 Cluster

    Hi all,
    we plan to upgrade our 2-node cluster running on OES2 SP3 x64 to the current
    version running on OES11 SP1. The cluster provides GroupWise, file and iPrint
    services and is connected via QLogic iSCSI HBA to our SAN.
    How would you recommend doing the upgrade without major trouble?
    Thanks in advance
    Alex

    On 16.02.2014 14:54, Alexander Lorenz wrote:
    > Hi all,
    >
    > we plan to upgrade our 2-node Cluster running on OES2SP3 x64 to current
    > version running on OES11SP1. The cluster provides GroupWise, File and iPrint
    > Services and is connected via QLogic iSCSI HBA to our SAN.
    >
    > What would you recommend how the Upgrade could be done without major
    > trouble?
    There should be no trouble at all. Move all resources to one node, then
    update the free node to OES11 as documented, move the resources to the
    upgraded server, test, then do the same with the other node. But I
    suggest using SP2 instead of SP1. There's really not much special about a
    cluster node vs. a "normal" server.
    One thing you may need to check into is iPrint. If it's fully clustered,
    your iPrint web data will reside on a clustered volume, which however
    will never be available on the node that's currently being upgraded, so you
    will likely end up with the old iPrint data there. You may want to re-run
    the iPrint cluster setup script (or, if you feel able to, copy the
    local iPrint data over to the cluster volume).
    CU,
    Massimo Rosen
    Novell Knowledge Partner
    No emails please!
    http://www.cfc-it.de

  • Can I make a DHCP VM a cluster role?

    Hi,
    I have created a failover cluster with Windows Server 2012, and I have a DHCP VM.
    When I choose Roles >> DHCP role to configure that DHCP VM, it says no DHCP server found!
    So my question is: isn't it possible to make a DHCP VM a DHCP failover role?
    If I install the DHCP role on a physical machine, it shows as available as a cluster role.
    Istiaq

    Hi,
    You can deploy the DHCP server in a guest VM failover cluster; you can refer to the following similar thread for your planning.
    The similar thread:
    DHCP with hyperv failover cluster
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/48d24470-61f3-4ed4-bc09-81e6ee9f38c9/dhcp-with-hyperv-failover-cluster?forum=winserverClustering
    Hope this helps.

  • Mac 10.8.4 - OES11 Cluster volume migration

    I am posting this question as we are trying to implement the 'same' features we are used to with Windows.
    The Kanaka client 2.8 and server are configured, and the clustering resources are set to the same volume name as in the documentation.
    Now some questions:
    1) When a cluster resource moves, the Kanaka client loses connectivity. Is there a way to have it auto-reconnect as Windows clients do?
    2) When Word is open editing a document on a network volume and the volume is migrated, Word crashes and loses the contents.
    Is there a timeout for reconnecting? I see a timeout for logging in, and the default is 60 seconds.
    What are some of the best solutions for what I am asking?
    Thanks in advance
    Joe

    Some further info
    In our testing without cluster migration, we have noticed that volumes have been dropped.
    When the Macs went to sleep for about 15 minutes, all the volumes were still present.
    When they were asleep for longer, some volumes dropped, but not all. Why?

  • Best plan for this scenario (migrate oes2 to oes11)

    Hi there!
    I would like to ask you guys for the best plan to migrate a few servers
    from OES2 to OES11.
    I have four servers on OES2 SP2/SP3; these servers have a few NSS
    volumes over NCS. I also have my iPrint service on two machines,
    also using NCS. There is one cluster between two of the machines and
    another cluster between the other two. For example:
    machine1 and machine2: three NSS volumes (every NSS volume is a cluster
    resource) and another resource in the same cluster for iPrint. Cluster: CL1.
    machine3 and machine4: one NSS volume (for other people in the company)
    in the other cluster. Cluster: CL2. This cluster has only one NSS volume/resource.
    Machine1/3 are physical; machine2/4 are virtual (VMware).
    My CA is on another server that I will migrate later, so it seems there is
    no problem with the certificate. Machine1 and machine3 hold replicas of the eDirectory
    in my organization.
    My plan: I have deployed two virtual machines and one physical machine with OES11 SP1 on SLES11 SP2.
    I think the best plan is to migrate iPrint first with the migration tool, as a cluster resource,
    then the NSS volumes, and after that do a Transfer ID (as a separate project).
    For machine1/3, move the NSS volumes and then do a Transfer ID.
    Is this correct? I don't know if I have to delete the current clusters (CL1 and CL2),
    create new ones, and then recreate every resource as it was on the old ones.
    Thank you so much!
    Antonio.

    Originally Posted by kjhurni
    This is just my opinion, of course, but:
    If you don't want to have to migrate your NSS data, and want to keep the same server names/IPs and cluster load scripts, then I believe a Rolling Cluster Upgrade is a good way to go.
    If you look in my signature, there's a link to my OES2 guides. Somewhere there is one that I did for our Rolling Cluster Upgrade.
    If all you have is NSS and iPrint, then you only need to use the miggui (migration utility) for iPrint--or so I think (I do have to followup on this one as I vaguely recall I asked this a while back and there may have been a way to do this without migrating stuff again).
    But your NSS data will simply "re-mount" on the OES11 nodes and you'll be fine (so that's why I like the rolling cluster upgrades).
    Let me double-check on the OES2 -> OES11 cluster option with iPrint.
    --Kevin
    Thank you Kevin for your answer.
    Finally, I think I'm going to proceed using Transfer ID on the servers where I'm only using NSS over NCS (I only have two machines with one NSS volume),
    because it seems like a good option. I would like to keep the old IPs of all the servers, the cluster and the resources if possible. Testing this
    migration in my test environment, it seems to work fine:
    - I use miggui for the Transfer ID between all the machines: physical -> physical and virtual -> virtual. eDirectory assumes the new hostname, IP, etc.
    The only task "out of the box" is that I have to delete the cluster and regenerate it (reconfigure NCS on the new servers), but it's pretty easy. This way
    I keep the two old IPs from the older machines and all the IPs of the cluster and the cluster resources as well. I think it's the best plan.
    For the other two machines, which have 4 NSS volumes and iPrint, I still have to think about a plan. But for these, I'm going to proceed this way. I hope
    I have chosen a good plan!
    Thank you so much all for your answers/advice.

  • IManager error 500 managing cluster services

    I have a new OES11 tree with an OES11 cluster. All servers in the tree are OES11. I have the cluster plug-in installed in iManager and was able to successfully set up and configure the cluster. However, from time to time as I go back to check the cluster, I get an iManager 500 error. I am logging in as admin and have tried from 3 different servers running iManager, with the same result.
    I found TID 7010295 and made the sysctl changes it specifies; however, it did not solve the problem. If I reboot the cluster, I can manage it again, at least in the short term, until whatever happens knocks it out of service again.
    Any ideas?
    Thanks,
    Will

    will74103 wrote:
    >
    > I have a new OES11 tree with an OES11 cluster. All servers in the tree
    > are OES11. I have the cluster plug installed in iManager and was able
    > to successfully set up and configure the cluster. However, time to time
    > as I go back to check the cluster for some reason, I am getting an
    > iManager 500 error. I am logging in as admin and have tried from 3
    > different servers running iManager with same result.
    >
    > I found TID 7010295 and made the sysctl changes that it specifies,
    > however, it did not solve the problem. If I reboot the cluster, I can
    > then manage again, at least in the short term until whatever happens to
    > knock it out service again.
    >
    >
    > Any ideas?
    >
    >
    > Thanks,
    >
    >
    > Will
    >
    >
    Yes, I agree that this has been a long-standing issue. Here is something you can
    try next time this happens so that you won't have to reboot. Do this at least
    on the server holding the Master_IP resource.
    Just run at the prompt: rcsfcb restart
    Hopefully this will resolve it. I have on occasion had to run the
    same command on all nodes in the cluster. It beats the heck out of
    restarting the servers just to manage the cluster.

  • OES Clusters

    I am looking for suggestions on the best way to move our current 3 pairs of NetWare 6.5 clusters to OES11. We are looking at doing this over the summer. We will need to support Macs and iPads in our environment next year. I am trying to figure out whether we can use the hardware we have, go virtual, or use a mixture of both.
    All of the clustered servers are HP DL380 G4s (i386). Each clustered pair has its own MSA500 G2 connected. One of the clusters is used for ZENworks and the other two are used for DHCP & data.
    Additional hardware we have available for use currently:
    1 - HP 380DL G4 (i386)
    1 - HP 380DL G5 (64bit)
    We also have vSphere with ESXi servers. (Can I install OES11 clusters on ESXi?) I had read that many were having problems joining an OES11 cluster on ESXi: when they joined one VM server to the cluster everything seemed fine, but when a second server from inside the VM environment tried to connect it always failed. Is it possible to have more than one clustered OES11 VM server using vSphere and ESXi?
    If I cannot install on ESXi (I believe I read that we have to use Xen), can I use the two servers listed above to run Xen and install the OES11 clusters on them? Then I could connect the MSAs to the Xen servers.
    I am looking for what others have done in a hardware pinch.
    Thank you for your suggestions in advance!
    Tracy

    On 07/05/2012 17:06, tmishler wrote:
    > I am looking for suggestions on the best way to move our current 3 pair
    > of Netware 6.5 clusters to OES11. We are looking at doing this over the
    > summer. We will need to support macs and ipads in our environment next
    > year. I am trying to figure out if we can use the hardware we have/can
    > or go virtual or a mixture of both.
    I think you need to explain what you mean by "need to support macs and
    ipads", as NetWare 6.5 will certainly support Mac access via AFP, though
    it has issues with OS X Lion (10.7) and later due to the user
    authentication method in use.
    > All of the clustered servers are HP DL380's G4's (i386). Each
    > clustered pair have their own MSA500 G2's connected. 1 of the clusters
    > is used for Zenworks and the other 2 are used for DHCP& Data.
    What does the "i386" reference mean? That your servers have 32-bit CPUs
    installed? HP product support page suggests they have 64-bit Xeon CPUs.
    > Additional hardware we have available for use currently:
    >
    > 1 - HP 380DL G4 (i386)
    > 1 - HP 380DL G5 (64bit)
    >
    > We also have an Vsphere with ESXi servers. (Can I install OES11
    > clusters on ESXi)? I had read that many were having problems joining a
    > OES11 cluster on ESXi. When they joined 1 vmserver to the cluster
    > everything seemed fine, when a second server from inside the vm tried to
    > connect it always failed. Is it possible to have more than one
    > clustered OES11 vmserver using vsphere and ESXi?
    You can certainly install OES11 servers, standalone or clustered, as
    virtual guests using either ESXi or Xen as the host.
    > If I cannot install on ESXi (I believe I read that we have to use Xen),
    > can I use the two servers listed above to run Xen and install the OES11
    > clusters on them? Then I could connect the MSA's to the Xen Servers.
    >
    > I am looking for what others have done in a hardware pinch.
    First things first: establish what CPUs you have in your servers. If
    they're 64-bit then you have several options:
    1) install OES11 (on SLES11 SP1) directly
    2) install ESXi or SLES11 SPn + Xen and virtualise OES11
    but if they're 32-bit then, unless you can replace the CPUs, you're going
    to need to replace the servers themselves (or at least remove them from
    the equation).
    > Thank you for your suggestions in advance!
    HTH.
    Simon
    Novell/SUSE/NetIQ Knowledge Partner

  • Nss issue with an application

    Hi,
    We have been using an application for 10 years now. The application uses a share on an OES11 cluster (it was on NetWare 6 before).
    Workstations are mainly Windows 7 x64, plus some Windows XP (Novell Client 2 SP3 IR1 on Windows 7, 4.91 SP5 IR1 on XP).
    The application is used to manage business cases. For each business case it creates a directory and several files in it. When you open the application on a device, it opens the last business case you were working on. You can decide to open another case, so the application builds the list of all the business cases and you pick a case to work on.
    Our issue is the following:
    User A opens a case.
    User B launches the application and tries to open a new case. When he does, the application fails while browsing the file system, on a file from the case opened by user A.
    Normally the application browses the file system and displays the list of cases. If a case is open by another user, it just shows that the case number is in use.
    It used to work perfectly on the NetWare 6 box, but it does not work on our OES 11 box.
    If I put the share on a Windows box, the application works.
    This application is very important for us and our business, and I don't understand why it no longer works on an OES 11 share.
    Can you help me find out what is going wrong? I captured a Procmon trace and it only shows INVALID HANDLE on the file opened by user A.
    With Wireshark I don't see any error.
    Any ideas?
    NMAS is disabled; file caching is disabled as well.
    Stephane

    The application is a very specific one. Its name is Winchant and it was developed using WinDev.
    I have opened a case and done some traces using Wireshark and Process Monitor.
    The application presents the user with a list of business cases. Each business case is a directory with multiple files:
    OES11SHARE\WInchant\case001
    OES11SHARE\WInchant\case002
    OES11SHARE\WInchant\casexxx
    The application root directory setting is OES11SHARE\WInchant\
    So when you open the application it browses, and after a few seconds you get a window with all the cases. Then you open a case; let's say case010.
    On a second device you open Winchant; the application browses the directory and fails on a case010 file.
    In Wireshark and Process Monitor the error happens on the case010 file and it is: "Invalid File Handle".
    If I put the Winchant directory and its subdirectories on a Windows share, the application shows all the cases, with a grey line on case010 to reflect the fact that it is open on another device.
    I have put the application directory on an OES11 SP1 box with the latest updates (April maintenance), but it is the same. I have disabled oplocks, file caching and cross-protocol file locking.
    I have updated the Novell Client on the Windows 7 box to 2 SP3 IR1.
    No change.
    I will also update my SR number with yours (10813788341) and let you know.
    Thanks
    Stephane

  • NFS exports and the mandatory no_root_squash

    We are running a SUSE11/OES11 cluster serving NSS volumes as NCP, NFS and AFP. Is the only feasible workaround for the NFS no_root_squash requirement to firewall the mountd port?
    If so will having a list of 1,000+ IP numbers in the allow list for mountd have a significant impact on the cluster nodes? Unfortunately on our University class B IPv4 site the allocated IP addresses are scattered and the subset of PCs controlled by technicians (and therefore 'trusted') are not contiguous and neatly arranged.

    There is another workaround to the "no_root_squash" requirement. The text below is taken from the TID "OES: Compatibility issues between NSS and NFS":
    2. no_root_squash: Officially, this is mandatory, so care should be taken to limit what hosts can mount the export (as the root user of the NFS client host will be able to act as the root user on the NSS exported path).
    However, due to potential security concerns with allowing root access, some administrators chose to set this up in another way. This alternative way is thus far considered experimental, and not thoroughly tested: It seems that the key requirement here is that the user who is requesting the mount (typically root) have at least Filescan rights to the NSS volume. If root is "squashed" he is treated like "nobody." Typically, "nobody" does not have access, neither through its own merits nor by being associated with any LUM-enabled user in eDir. However, an eDir user can be created and LUM-enabled, given Filescan right to the NSS volume(s), and then the UID assigned to that user can be used as the "anonuid" for that particular export. So, for example, if the user in question was given UID 1011, then instead of "no_root_squash" the combination of "root_squash,anonuid=1011" could be used.
    In that case, be sure to remember that even after mount, "squashed root user" will be treated as having whatever rights the anonuid user has been given. Also remember that if you use the "all_squash" parameter as well, all NFS client users (not just eDir users and not just root) will be treated as the anonuid user, and will be able to access the NSS volume.
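    Put concretely, the alternative described above would look like this as an /etc/exports entry. The volume path, client subnet and UID here are illustrative (1011 simply matches the TID's example); only the option combination comes from the TID.

```shell
# /etc/exports sketch: squash root to a LUM-enabled eDir user's UID
# instead of using no_root_squash. The eDir user with UID 1011 must
# have at least Filescan rights on the NSS volume.
/media/nss/VOL1  192.168.1.0/24(rw,sync,root_squash,anonuid=1011)
```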
    On the other subject: I do not know the potential impact of 1000+ IP numbers in an allow list for mountd.
    Darcy

  • NSM 3.0.4 "Enforce Policy Path" problem

    Hi,
    I have run into a big problem using "Enforce Policy Path".
    Due to some SAN changes we have created new USER volumes,
    and we are using "Enforce Policy Path" for the user move process.
    The problem is this: if I run Enforce Policy Path on, say, 200 users,
    every user ends up in the eligible queue waiting for the policy to allow moves; so far so good.
    When the policy eventually allows moves, the NSM engine decides where to put the home directories.
    The problem is that NSM looks at the volume list, picks the one volume with the most free space,
    and then the engine sends all 200 of my users to that volume, even if it's about to run full.
    The engine will not change destination during the run, even if the run takes 10 hours.
    This is a really big problem. I have filled volumes a couple of times, and it is not pleasant to sort out that mess afterwards.
    The decision of where to put the user should be made when the copying begins, not when the policy allows moves.
    Is this working as designed, or have I missed something?
    The system:
    NSM engine: NSM 3.0.4 running on SLES11 SP1 OES11
    User volumes: OES11 cluster on SLES11 SP1
    We are not using any kind of quota at this customer.
    Best Regards
    Johan

    On 9/13/2012 4:16 PM, jkullberg wrote:
    >
    > Hi,
    > I have run into a big problem using "Enforce Policy Path"
    >
    > Due to some SAN changes we have created new USER volumes
    > And we are using "enforce policy path" for the user move proccess.
    >
    > The problem is that if i run enforce policy path on say 200 users.
    > Every user will end up in the eliglible que wating for the policy to
    > alow moves, so far so good.
    >
    > When the policy eventualy alow moves the NSM engine will decide where
    > to put the homedirs.
    > The problem is that NSM will loock at the volume list and pick the one
    > volume whit most free space
    > and then the engine will send all my 200 users to that volume, even if
    > its about to run full.
    >
    > The engine will not change desination during the run even if the run
    > will take 10 houers.
    > This is a realy big problem. I have filld volumes a cuple of times and
    > it is not plessent to sort out that mess afterwards.
    >
    > The dicison where to put the user shuld be made when the copying is
    > begining, not when the policy alow moves.
    >
    > Is this work as design or have i missed somthing?
    >
    > The system:
    > NSM ENgine: NSM 3.0.4 running on SLES11 sp1 OES11
    > User volumes: OES11 cluster on SLES11 sp1
    >
    > we are not using any kind of quota at this custumer
    >
    > Best Regads
    > Johan
    >
    >
    Johan,
    I believe we've addressed your concerns directly through our Support
    team, but for anyone else experiencing similar behavior:
    NSM calculates target path distributions when an event is put on the
    queue. For large management actions, like an "Enforce Policy Path"
    action with multiple target paths, using an "Available free space"
    distribution method may well mean that all queued events are given the
    same target path, since that path had more free space when the event was
    initially created.
    Therefore, if even or nearly-even distributions across multiple target
    paths are desired, using the "Random" distribution method may be the
    better choice for a large management action.
    - NFMS Support Team
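    The queue-time behavior described above can be illustrated abstractly. This is not NSM's actual code, just a sketch of why an "Available free space" method funnels a large batch onto one volume: the free-space snapshot is read once, when the events are created, so every queued move gets the same answer.

```python
import random

def assign_targets(users, volumes, method="free_space"):
    """Pick a target volume for each queued move.

    "free_space": every event created in the same batch sees the same
    free-space snapshot, so all moves land on the volume that was
    emptiest at queue time. "random": moves spread across volumes.
    """
    assignments = {}
    for user in users:
        if method == "free_space":
            # snapshot taken once at queue time -> same answer for everyone
            target = max(volumes, key=volumes.get)
        else:
            target = random.choice(list(volumes))
        assignments[user] = target
    return assignments

# Illustrative free space in GB:
vols = {"VOL1": 500, "VOL2": 900, "VOL3": 700}
moves = assign_targets(["user%d" % i for i in range(200)], vols)
assert set(moves.values()) == {"VOL2"}  # all 200 moves target one volume
```

    With method="random" the same 200 moves scatter across all three volumes, which is the even-distribution behavior the support reply recommends for large actions.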

  • GW2012 updates and OES 11.1 SP1

    I am about to complete an upgrade to GW2012. The existing servers are running on a SLES11 SP1 OES11 cluster. With the new OES11 SP1 update release and its support for SLES11 SP2, does GW2012 support SLES11 SP2? The documentation is not specific.
    Will GW2012 SP1 be released soon?

    will74103 wrote:
    > I am about to complete an upgrade to GW2012. Existing servers are
    > running on SLES11.1 OES11 cluster. With the new OES11.1 update
    > release and it's support of SLES11.2, does GW2012 support SLES11.2?
    > The documentation is not specific.
    Official support for OES11 SP1 will be in GW2012 SP1.
    > Will the GW2012 SP1 be released soon?
    It should be.

  • RSV 400 best solution for this scenario?

    Hi
    As shown in the diagram below, I have a central office and two branch offices. These offices are connected by a private routing service that has no connection to the Internet. The telecommunications operator installs a router in each office with a LAN and a WAN IP; the configuration of these devices cannot be changed, except for the LAN IP. Only the central office network, 192.168.0.0, has a router with Internet access. The remote offices have no Internet access; what is needed is for the remote offices to reach the Internet via the ADSL router 192.168.0.254 at the central office. There are small devices in each remote office that must connect to the Internet and do not support any configuration except IP, mask and gateway; for example, you cannot add a static route. Currently the PCs at the remote offices have IP communication with the server at the central office using a static route.
    Would the solution be to put VPN routers between each LAN and the operator's routers (where the yellow RT star appears in the diagram) and put the hosts of the two branch offices in the same IP range as the central office network?
    I had thought of using RSV400 routers. Is this the most appropriate equipment for what we want to do?
    Thank you very much for the help


  • Best plan?

    Hello...
    I live in an area where my cell service is very poor at home, so I am looking to Skype to make calls when I'm home.
    Mostly, I will probably be using my iPhone to make Skype calls when connected to WiFi.  However, I am considering getting one of the Skype ready phones as well, so that everyone in the home can make calls when needed.   If possible, I would like to have a phone number associated with the account as well.
    I will only be making domestic calls (United States), and most often I would probably be receiving calls, not making them myself.
    With that info... what service should I be signing up for?   I see Premium... but does that come with a phone number?  Any suggestions are appreciated...
    Charlie


  • Rolling Cluster Upgrade (OES2 to OES11) - GPT partition

    We've got an OES2 SP3 cluster that we'll be rolling-cluster-upgrading to OES11 SP2.
    We are currently at max capacity for several of the NSS volumes (2 TB).
    If I'm reading/interpreting the OES11 SP2 docs correctly:
    AFTER the cluster upgrade is complete, the only way I can get larger volumes will be to create new LUNs on the SAN and initialize those with GPT. Then I could use the NSS Pool Move feature to get the existing 2 TB volumes onto the larger setup?
    Is that correct?
    Or is there a better way that doesn't require massive downtime?

    Originally Posted by konecnya
    In article <[email protected]>, Kjhurni wrote:
    > We are currently at max capacity for several of the NSS volumes (2 TB).
    My understanding was that you could bind multiple partitions to
    create NSS volumes of up to 8 TB. But I'd be hesitant to do that for a
    clustered volume as well.
    > I could do the NSS Pool Move feature to get the existing 2 TB
    > volumes to the larger setup?
    > Or is there a better way that doesnt' require massive downtime?
    My first thought is the Migration Wizard, so that the bulk of the copy can
    be done while the system is live, but in the quieter times. Then the final
    update with downtime should be much faster.
    But then, do you really need downtime for the Pool Move feature?
    https://www.novell.com/documentation...a/bwtebhm.html
    certainly indicates it can be done live on a cluster, with the move
    process being cluster-aware.
    Andy of
    http://KonecnyConsulting.ca in Toronto
    Knowledge Partner
    http://forums.novell.com/member.php/75037-konecnya
    If you find a post helpful and are logged in the Web interface, please
    show your appreciation by clicking on the star below. Thanks!
    Migration = pew pew (re-doing IPs etc. and copying 4 TB of data is ugly, especially when it's lots of tiny stuff).
    haha
    Anyway, when I asked about a better way that doesn't require massive downtime, what I meant was:
    Is there a better way vs. Pool Move that doesn't require massive downtime (in other words, the "other way" having the massive downtime, not the Pool Move)?
    Choice A = Pool Move = no downtime.
    But let's say that's not a good option (for whatever reason) and someone says use Choice B. But Choice B ends up requiring downtime (like the data copy option).
    I just didn't know if Pool Move requires that you create the partition ahead of time (so you can choose GPT) or if it kind of does it all for you on the fly (I'll have to read up when I get to that point).
    I'm not terribly keen on having multiple DOS partitions, although that technically would work. It just always scares me. It's only temporary for the next 8 months anyway while we migrate from OES to NAS, but I'm running out of space and am on an unsupported OS anyway.
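    For reference, initializing a new large LUN with GPT before creating the pool can be sketched with parted. The device name /dev/sdX is a placeholder; verify it first, since mklabel destroys the existing partition table on that device.

```shell
# Label a new >2 TB LUN with GPT so NSS can build a pool larger than 2 TB.
# /dev/sdX is a placeholder -- verify the device with "lsscsi" or "multipath -ll".
parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX print   # confirm "Partition Table: gpt" before creating the pool
```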
