Creating logical host on zone cluster causing SEG fault

As noted in previous questions, I've got a two-node cluster. I am now creating zone clusters on these nodes. Two problems seem to be showing up.
I have one working zone cluster with the application up and running with the required resources including a logical host and a shared address.
I am now trying to configure the resource groups and resources on additional zone clusters.
In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running.
I log onto the zone and I create a failover resource group, no problem. I then try to create a logical host and I get:
"Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11"
This error appears to be happening on the other node, i.e. not the one that I'm building from.
Has anyone seen anything like this? Any thoughts on where I should go with it?
Thanks.

Hi,
"In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running."
Look at the stack from your core dump and see whether it matches this bug:
6763940 clzc dumped core after zones were installed
As far as I know, the above bug is harmless and no functionality should be impacted. It is already fixed in a later release.
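As a minimal sketch of pulling that stack (assuming the core file is named "core" and sits in the current directory), the standard Solaris tools are enough:
# pstack core | head -30          (print the thread stacks from the core)
# echo "::stack" | mdb core       (same idea via the modular debugger)
If the top frames match those in the bug report, you are most likely hitting 6763940.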
"Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11" The above message is not enough to figure out what's wrong. Please look at the below:
1) Check the /var/adm/messages on the nodes and observe the messages that got printed around the same time that the above
message got printed and see whether that gives more clues.
2) Also see whether there is a core dump associated with the above message and that might also provide more information.
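As a hedged sketch of those two checks (the grep pattern comes from the message above; coreadm with no arguments prints where core files are written on your system):
# grep hafoip_validate /var/adm/messages
# coreadm
Then look in the directory that the coreadm patterns point to for a fresh core file.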
If you need more help, please provide the output for the above.
Thanks,
Prasanna Kunisetty

Similar Messages

  • Creating Logical hostname in sun cluster

    Can someone tell me what exactly a logical hostname in Sun Cluster means?
    When registering a logical hostname resource in a failover group, what exactly do I need to specify?
    For example, I have two nodes in a Sun Cluster. How do I create or configure a logical hostname, and which IP address should it point to (should it point to the IP addresses of the cluster nodes)? Can I get clarification on this?

    Thanks Thorsten for your continued help...
    The output of clrs status abc_lg
    === Cluster Resources ===
    Resource Name    Node Name    State      Status Message
    -------------    ---------    -----      --------------
    abc_lg           node1        Offline    Offline
                     node2        Offline    Offline
    The status is offline...
    the output of clresourcegroup status
    === Cluster Resource Groups ===
    Group Name    Node Name    Suspended    Status
    ----------    ---------    ---------    ------
    abc_rg        node1        No           Unmanaged
                  node2        No           Unmanaged
    You say that the resource should be enabled after creating it. I am using GDS and am just following the steps provided in the Developer's Guide to achieve high availability.
    I have two resources in my failover resource group:
    1) a logical hostname resource;
    2) an application resource.
    When I bring the failover resource group online, what should the status of the resource group and of the resources in it be?
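    Since the group is Unmanaged, nothing will report Online until it is brought under RGM control. A hedged sketch using the names from the output above (-M switches the group to the managed state, -e enables its resources):
    # clrg online -eM abc_rg
    # clrs status abc_lg
    After that, the group and both resources should show Online on the current primary node and Offline on the other.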

  • Loading perl module LWP::Simple causes seg fault

    I just ran pacman -Syu (upgraded 400 packages and the kernel went from 3.12.9-2 to 3.16.1-1 to give an idea of the last time I updated this system).
    Now when I run:
    perl -e 'use LWP::Simple'
    It seg faults and dumps core.
    pacman says /usr/share/perl5/vendor_perl/LWP/Simple.pm is owned by perl-libwww 6.08-1, which is the current version. I tried reverting to 6.05-1 (the version I had previously) but it gives the same result.
    Checking pacman.log, I see two errors:
    [2014-08-30 21:31] [PACMAN] installed perl-xml-sax-base (1.08-3)
    [2014-08-30 21:31] [ALPM-SCRIPTLET] /tmp/alpm_TUIVDd/.INSTALL: line 1: 21528 Segmentation fault (core dumped) perl -MXML::SAX -e "XML::SAX->add_parser(q(XML::SAX::PurePerl))->save_parsers()" &> /dev/null
    [2014-08-30 21:31] [PACMAN] installed perl-xml-sax (0.99-4)
    [2014-08-30 21:31] [ALPM-SCRIPTLET] could not find ParserDetails.ini in /usr/share/perl5/vendor_perl/XML/SAX
    So I uninstalled those packages, and when I reinstalled them there were no errors... but that didn't help.
    Any ideas?

    I found what's causing it, but still don't know how it got there.  For anyone else who has similar problems, here's how I found which module was the problem:
    strace perl -e 'use LWP::Simple' |& grep '^open' | tail -1
    In my case, it shows:
    open("/usr/lib/perl5/site_perl/auto/Storable/Storable.so", O_RDONLY|O_CLOEXEC) = 5
    I'm not sure why I have that file when I also have the proper /usr/lib/perl5/core_perl/auto/Storable/Storable.so.  Perhaps I was installing stuff from CPAN and it did that to fulfill a dependency?
    And it's far from the only one:  I found 27 other .so duplicates in site_perl that are also in core_perl:
    cd /usr/lib/perl5/site_perl
    find . -name "*.so" | xargs -n1 pacman -Qo |& grep "No package owns" | cut -c24- | xargs -n1 -I{} ls -l ../core_perl/{} |& grep -v "cannot access"
    Cleaning this up isn't going to be fun... I have 12 more .so's in site_perl that do NOT exist in core_perl or vendor_perl... so I can't just delete all unowned files in site_perl.
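    A hedged one-liner (untested sketch) to list only the site_perl .so files that also exist under core_perl, so they can be reviewed before removal:
    cd /usr/lib/perl5/site_perl
    find . -name "*.so" | while read f; do [ -e ../core_perl/"$f" ] && echo "$f"; done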

  • The hostname test01 is not authorized to be used in this zone cluster

    Hi,
    I am having problems registering a LogicalHostname resource with a zone cluster.
    Here my steps:
    - create the ZoneCluster
    # clzc configure test01
    clzc:test01> info
    zonename: test01
    zonepath: /export/zones/test01
    autoboot: true
    brand: cluster
    bootargs:
    pool: test
    limitpriv:
    scheduling-class:
    ip-type: shared
    enable_priv_net: true
    sysid:
        name_service not specified
        nfs4_domain: dynamic
        security_policy: NONE
        system_locale: en_US.UTF-8
        terminal: vt100
        timezone: Europe/Berlin
    node:
        physical-host: farm01a
        hostname: test01a
        net:
            address: 172.19.115.232
            physical: e1000g0
    node:
        physical-host: farm01b
        hostname: test01b
        net:
            address: 172.19.115.233
            physical: e1000g0
    - create a RG
    # clrg create -Z test01 test01-rg
    - create Logicalhostname (with error)
    # clrslh create -g test01-rg -Z test01 -h test01 test01-ip
    clrslh: farm01b:test01 - The hostname test01 is not authorized to be used in this zone cluster test01.
    clrslh: farm01b:test01 - Resource contains invalid hostnames.
    clrslh: (C189917) VALIDATE on resource test01-ip, resource group test01-rg, exited with non-zero exit status.
    clrslh: (C720144) Validation of resource test01-ip in resource group test01-rg on node test01b failed.
    clrslh: (C891200) Failed to create resource "test01:test01-ip".
    Here the entries in /etc/hosts from farm01a and farm01b
    172.19.115.119 farm01a # Cluster Node
    172.19.115.120 farm01b loghost
    172.19.115.232 test01a
    172.19.115.233 test01b
    172.19.115.252 test01
    Hope somebody could help.
    regards,
    Sascha

    When I scanned my last example of a zone cluster, I spotted that I had added my logical host to the zone cluster's configuration.
    create -b
    set zonepath=/zones/cluster
    set brand=cluster
    set autoboot=true
    set enable_priv_net=true
    set ip-type=shared
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=applh
    set physical=auto
    end
    add dataset
    set name=applpool
    end
    add node
    set physical-host=deulwork80
    set hostname=deulclu
    add net
    set address=172.16.30.81
    set physical=e1000g0
    end
    end
    add sysid
    set root_password=nMKsicI310jEM
    set name_service=""
    set nfs4_domain=dynamic
    set security_policy=NONE
    set system_locale=C
    set terminal=vt100
    set timezone=Europe/Berlin
    end
    I am referring to:
    add net
    set address=applh
    set physical=auto
    end
    So as far as I can see, this is missing from your configuration. Sorry for leading you the wrong way.
    Detlef

  • Logical host name Question

    Hi
    I am in the process of creating a 2-node cluster on 2 domains. This is for failover clustering.
    I would appreciate it if someone could confirm whether I am doing this right and answer some of the questions I have.
    We have created 2 RGs and are in the process of creating the logical hostnames. Here are the steps I think I am going to execute:
    1> The names of the logical hostname resources are LH-01 and LH-02.
    The names of the logical hostnames are virtualhost-01 and virtualhost-02.
    The names of the domains are domainapp-01 and domainapp-02.
    2> Create entries in /etc/hosts for the new logical hostnames and IP addresses.
    Question: Is there any command that can be used to add entries to this file instead of adding them manually? I am not a sysadmin and I do not know how many more places I have to add them.
    3> Create the logicalhostname resources with the following commands:
    a>clreslogicalhostname create -g RG-01 -h virtualhost-01 LH-01
    b>clreslogicalhostname create -g RG-02 -h virtualhost-02 LH-02
    4> Edit /etc/nsswitch.conf.
    Question: I could not understand why we have to do this and what needs to be modified. Do I have to comment out the lines that say
    hosts: cluster files
    rpc: files
    5> Question: Do I also need to create a shared address resource? I did not really understand the concept of a shared address resource.
    I would really appreciate it if someone could throw some light on this. I have gone through the documents, but they did not really clear it up for me.
    Thanks

    Actually, scinstall puts that "hosts: cluster files" line there; I don't remember what the logic was.
    Here is some explanation about the shared address, which is used by scalable applications like Apache:
    The SUNW.SharedAddress Resource Type
    Resources of type SUNW.SharedAddress represent a special type of IP
    address that is required by scalable services. This IP address is configured
    on the public net of only one node with failover capability, but provides a
    load-balanced IP address that supports scalable applications that run on
    multiple nodes simultaneously. This subject is described in greater detail
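    For completeness, a hedged sketch of creating one (the group, resource and node names are placeholders, and the shared hostname must be in /etc/hosts on all nodes):
    # clrg create -n node1,node2 sa-rg
    # clrssa create -g sa-rg -h shared-host sa-rs
    # clrg online -eM sa-rg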
    I hope this helps
    Regards

  • Logical Host name

    Hi,
    Please tell me about the procedure: how do I create a logical hostname in Sun Cluster?

    Please check other recent posts in this forum or try Google as I'm sure I've posted an answer to this very recently.
    You can also find the procedure if you look on docs.sun.com.
    Thanks,
    Tim
    ---

  • FreeRADIUS rlm_krb5 seg fault

    I'm having a few problems setting up FreeRADIUS with a Kerberos backend on Arch and would really appreciate a little help.
    Kernel: Linux 3.11.6-1-ARCH i686
    freeradius 3.0.0-1
    All the configuration changes I have made to the default configs are listed below:
    /etc/raddb/users
    Added the following line at the top of the file:
    DEFAULT Auth-Type = Kerberos
    /etc/raddb/sites-enabled/default and /etc/raddb/sites-enabled/inner-tunnel
    Added the following in the Authenticate section directly after the pap entry
    Auth-Type Kerberos {
        krb5
    }
    I have also copied the file /etc/raddb/mods-available/krb5 to /etc/raddb/mods-enabled/krb5 and edited the entries to point to the keytab and principal I'm using for radius. The keytab contains two entries, one for radius/hostname.domain and one for host/hostname.domain.
    I have verified the keytab is OK by using it with kinit to get a valid ticket for both principals. Additionally, I'm sure my Kerberos setup is OK, as it works fine with LDAP, nslcd and SSH.
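    For reference, that keytab check looks roughly like this (keytab path and principal are placeholders):
    kinit -k -t /etc/raddb/radius.keytab radius/hostname.domain
    klist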
    The problem is when I run radiusd -X and then attempt a radtest I get the following:
    (0) files : users: Matched entry DEFAULT at line 1
    (0) [files] = ok
    (0) [expiration] = noop
    (0) [logintime] = noop
    (0) WARNING: pap: No “known good” password found for the user. Not setting Auth-Type.
    (0) WARNING: pap: Authentication will fail unless a “known good” password is available.
    (0) [pap] = noop
    (0) } # authorize = ok
    (0) Found Auth-Type = Kerberos
    (0) # Executing group from file /etc/raddb/sites-enabled/default
    (0) Auth-Type Kerberos {
    at which point the server dies with no further output. Running the server using systemctl start freeradius and then looking at the status after it has died shows that it failed with Main PID: 21835 (code=dumped, signal=SEGV).
    I have looked all over the internet but the only place I have found someone with the same problem is here:
    http://www.mail-archive.com/freeradius- … 77744.html
    I have also enabled core dumps in radiusd.conf; however, I have no idea how to actually view the dump or where it is (and yes, I did google it, but all the responses made no sense to me).
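    A hedged aside: on a systemd-based Arch install with systemd-coredump active, the dump can usually be found and opened like this:
    coredumpctl list radiusd
    coredumpctl gdb radiusd    (then type "bt full" at the gdb prompt)
    Without systemd-coredump, point gdb at the binary and core file directly, e.g. gdb /usr/bin/radiusd /path/to/core.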
    I have also tried the freeradius-git package from the AUR; however, that throws errors when building, something to do with undefined symbols while making radattr.
    CC src/main/radattr.c
    LINK build/bin/radattr
    UNIT-TEST rfc.txt
    ./build/bin/radattr: symbol lookup error: ./build/bin/radattr: undefined symbol: _fr_cursor_init
    src/tests/unit/all.mk:23: recipe for target 'build/tests/unit/rfc.txt' failed
    make: *** [build/tests/unit/frc.txt] Error 127
    => ERROR: A failure occurred in build().
    Aborting...
    => ERROR: Makepkg was unable to build freeradius-git.
    => Restart building freeradius-git ? [y/N]
    => -----------------------------------------------
    =>
    I don't usually post here, as every problem I've had using Arch so far I've solved after reading the wiki/forums or random googling. However, I'm at a complete loss this time; I have literally no idea how to solve this...
    Thanks

    Just as a quick update: the rlm_krb5 module still seems to be causing seg faults; however, it is possible to get it working by configuring FreeRADIUS to use PAM and then telling PAM to authenticate with Kerberos.

  • Zone Cluster: Creating a logical host resource fails

    Hi All,
    I am trying to create a logical host resource with two logical addresses as part of the resource; however, the command is failing. Here is what I run to create the resource:
    clrslh create -Z pgsql-production -g pgsql-rg -h pgprddb-prod,pgprddb-voip pgsql-halh-rs
    And I am presented with this failure:
    clrslh: specified hostname(s) cannot be hosted by any adapter on bfieprddb01
    clrslh: Hostname(s): pgprddb-prod pgprddb-voip
    I have pgprddb-prod and pgprddb-voip defined in the /etc/hosts files on the 2 global cluster nodes and also within the two zones in the zone cluster.
    I have also modified the zone cluster configuration as described in the following thread:
    http://forums.sun.com/thread.jspa?threadID=5420128
    This is what I have done to the zone cluster:
    clzc configure pgsql-production
    clzc:pgsql-production> add net
    clzc:pgsql-production:net> set address=pgprddb-prod
    clzc:pgsql-production:net> end
    clzc:pgsql-production> add net
    clzc:pgsql-production:net> set address=pgprddb-voip
    clzc:pgsql-production:net> end
    clzc:pgsql-production> verify
    clzc:pgsql-production> commit
    clzc:pgsql-production> quit
    Am I missing something here? Help please :)
    I did read a blog post mentioning that the logical host resource is not supported with exclusive-ip zones at the moment, but I have checked my configuration and I am running with ip-type=shared.
    Any suggestions would be greatly appreciated.
    Thanks

    I managed to fix the issue, I got the hint from the following thread:
    http://72.5.124.102/thread.jspa?threadID=5432115&tstart=15
    Turns out that you can only define more than one logical host in a resource if they all reside on the same subnet. I therefore had to create two logical host resources, one per subnet, by running the following in the global zone:
    clrslh create -g pgsql-rg -Z pgsql-production -h pgprddb-prod pgsql-halh-prod-rs
    clrslh create -g pgsql-rg -Z pgsql-production -h pgprddb-voip pgsql-halh-voip-rs
    Thanks for reading :)

  • Error when creating zone cluster

    Hello,
    I have the following setup: Solaris 11.2 x86, cluster 4.2. I have already configured the cluster and it's up and running. I am trying to create a zone cluster, but getting the following error:
    >>> Result of the Creation for the Zone cluster(ztestcluster) <<<
        The zone cluster is being configured with the following configuration
            /usr/cluster/bin/clzonecluster configure ztestcluster
            create
            set zonepath=/zclusterpool/znode
            set brand=cluster
            set ip-type=shared
            set enable_priv_net=true
            add sysid
            set  root_password=********
            end
            add node
            set physical-host=node2
            set hostname=zclnode2
            add net
            set address=192.168.10.52
            set physical=net1
            end
            end
            add node
            set physical-host=node1
            set hostname=zclnode1
            add net
            set address=192.168.10.51
            set physical=net1
            end
            end
            add net
            set address=192.168.10.55
            end
    java.lang.NullPointerException
            at java.util.regex.Matcher.getTextLength(Matcher.java:1234)
            at java.util.regex.Matcher.reset(Matcher.java:308)
            at java.util.regex.Matcher.<init>(Matcher.java:228)
            at java.util.regex.Pattern.matcher(Pattern.java:1088)
            at com.sun.cluster.zcwizards.zonecluster.ZCWizardResultPanel.consoleInteraction(ZCWizardResultPanel.java:181)
            at com.sun.cluster.dswizards.clisdk.core.IteratorLayout.cliConsoleInteraction(IteratorLayout.java:563)
            at com.sun.cluster.dswizards.clisdk.core.IteratorLayout.displayPanel(IteratorLayout.java:623)
            at com.sun.cluster.dswizards.clisdk.core.IteratorLayout.run(IteratorLayout.java:607)
            at java.lang.Thread.run(Thread.java:745)
                 ERROR: System configuration error
                 As a result of a change to the system configuration, a resource that this
                 wizard will create is now invalid. Review any changes that were made to the
                 system after you started this wizard to determine which changes might have
                 caused this error. Then quit and restart this wizard.
        Press RETURN to close the wizard
    No errors in /var/adm/messages.
    Any ideas?
    Thank you!

    I must be making some obvious, stupid mistake, because I still get that "not enough space" error.
    root@node1:~# clzonecluster show ztestcluster
    === Zone Clusters ===
    Zone Cluster Name:                              ztestcluster
      zonename:                                        ztestcluster
      zonepath:                                        /zcluster/znode
      autoboot:                                        TRUE
      brand:                                           solaris
      bootargs:                                        <NULL>
      pool:                                            <NULL>
      limitpriv:                                       <NULL>
      scheduling-class:                                <NULL>
      ip-type:                                         shared
      enable_priv_net:                                 TRUE
      resource_security:                               SECURE
      --- Solaris Resources for ztestcluster ---
      Resource Name:                                net
        address:                                       192.168.10.55
        physical:                                      auto
      --- Zone Cluster Nodes for ztestcluster ---
      Node Name:                                    node2
        physical-host:                                 node2
        hostname:                                      zclnode2
        --- Solaris Resources for node2 ---
      Node Name:                                    node1
        physical-host:                                 node1
        hostname:                                      zclnode1
        --- Solaris Resources for node1 ---
    root@node1:~# clzonecluster install ztestcluster
    Waiting for zone install commands to complete on all the nodes of the zone cluster "ztestcluster"...
    clzonecluster:  (C801046) Command execution failed on node node2. Please refer to the console for more information
    clzonecluster:  (C801046) Command execution failed on node node1. Please refer to the console for more information
    But I have enough FS space. I increased the virtual HDD to 25 GB on each node, and after the global cluster installation I still have 16 GB free on each node. During the install I constantly check the free space, and it should be enough (only about 500 MB is consumed by downloaded packages, which leaves about 15.5 GB free). And every time, the installation fails at the "apply-sysconfig checkpoint"...
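    A hedged suggestion, since clzonecluster points at the console: the per-zone install log and the zone console may show the real failure (the log path is an assumption for Solaris 11):
    # ls /var/log/zones/
    # zlogin -C ztestcluster    (attach to the zone console on the node where the install failed)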

  • Solaris Cluster  - two machines - logical host

    Good morning!
    I am a complete Solaris Cluster dummy, buuuut... I need to create a cluster and install an application:
    I have two V440, with Solaris 10;
    I need to put the two machines in the cluster;
    I have CE0 of each of the machines plugged into the network;
    I have CE1 and CE2 of each machine connected together via a crossover cable;
    According to the documentation "Oracle Solaris Cluster HA for Alliance Access" there are prerequisites (http://docs.oracle.com/cd/E19680-01/html/821-1547/ciajejfa.html), such as creating the HAStoragePlus and logical host resource groups;
    Could anyone give me some tips on how to create this cluster and the prerequisites?
    thanks!

    Hi,
    a good source of information for the beginner is: http://www.oracle.com/technetwork/articles/servers-storage-admin/how-to-install-two-node-cluster-166763.pdf
    To create a highly available logical IP address just do
    clrg create <name-for-resource-group>
    clrslh create -g <name-for-resource-group> <name-of-ip-address> # This IP address should be available in /etc/hosts on both cluster nodes.
    clrg online -M <name-for-resource-group>
    Regards
    Hartmut

  • Cluster changing hostname & Logical host

    Hello, I have SunCluster 3.1 update 4.
    The cluster is 2 (V440) + 1 (V240), installed on Solaris 8.
    Has anyone actually succeeded in changing the hostnames of all the nodes and the logical host on a SunCluster system like the above?
    I saw a procedure that is not really supported:
    1. Reboot cluster nodes into non-cluster node (reboot -- -x)
    2. Change the hostname of the system (nodenames, hosts etc)
    3. Change the hostname on all nodes within the files under /etc/cluster/ccr
    4. Regenerate the checksums for each changed file using ccradm -i /etc/cluster/ccr/FILENAME -o
    5. Reboot every cluster node into the cluster
    Does this work?
    Thanks & Regards

    So if I understand you correctly, you have two metasets already created and mounted. If so, this is a fairly tricky process (outlined from memory):
    1. Backup your data
    2. Shut down the RGs using scswitch -F -g <RGnames>, make the RGs unmanaged
    3. Unmount the file systems
    4. Deconstruct the metasets and mediators
    5. Shutdown the cluster and boot -x
    6. Change the hostnames in /etc/inet/hosts, etc
    7. Change the CCR and re-checksum it
    8. Reboot the cluster into cluster mode
    9. Re-construct metasets and mediators with new host names
    10. scswitch -Z -g <RGs>
    If you recreate your metasets in the same way as they were originally created and the state replicas haven't changed in size, then the data should be intact.
    Note - I have not tried this process in a long time. Remember also that changing the CCR as described previously is officially unsupported (partly because of the risks involved).
    Regards,
    Tim
    ---

  • Zone Cluster - oracle_server resource create

    I am having a problem trying to create an oracle_server resource for my zone cluster.
    I have a 2-node zone cluster that utilizes a shared storage zpool to house an Oracle installation and its database files. This is a test system so don't worry about the Oracle setup. I obviously wouldn't put the application and database files on the same storage.
    When I run the following command from a global zone member:
    clrs create -Z test -g test-rg -t SUNW.oracle_server -p Connect_string=user/password -p ORACLE_SID=SNOW -p ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_snow -p Alert_log_file=/u01/app/oracle/product/10.2.0/db_snow/admin/SNOW/bdump -p Restart_type=RESOURCE_RESTART -p resource_dependencies=test-storage-rs test-oracle_server-rs
    I get the following errors:
    clrs:  taloraan.admin.uncw.edu:test - Validation failed. ORACLE_HOME /u01/app/oracle/product/10.2.0/db_snow does not exist
    clrs:  taloraan.admin.uncw.edu:test - ALERT_LOG_FILE /u01/app/oracle/product/10.2.0/db_snow/admin/SNOW/bdump doesn't exist
    clrs:  taloraan.admin.uncw.edu:test - PARAMETER_FILE: /u01/app/oracle/product/10.2.0/db_snow/dbs/initSNOW.ora nor server PARAMETER_FILE: /u01/app/oracle/product/10.2.0/db_snow/dbs/spfileSNOW.ora exists
    clrs:  taloraan.admin.uncw.edu:test - This resource depends on a HAStoragePlus resouce that is not online on this node. Ignoring validation errors.
    clrs:  tatooine.admin.uncw.edu:test - ALERT_LOG_FILE /u01/app/oracle/product/10.2.0/db_snow/admin/SNOW/bdump doesn't exist
    clrs:  (C189917) VALIDATE on resource test-oracle_server-rs, resource group test-rg, exited with non-zero exit status.
    clrs:  (C720144) Validation of resource test-oracle_server-rs in resource group test-rg on node tatooine failed.
    clrs:  (C891200) Failed to create resource "test:test-oracle_server-rs".
    So obviously, the clrs command cannot find the files (which are located on my shared storage). I am guessing I need to point the command at a global mount point.
    Regardless, can anyone shed some light on how I make this happen?
    I am referencing http://docs.sun.com/app/docs/doc/821-0274/chdiggib?a=view
    The section that reads "Example 3 Registering Sun Cluster HA for Oracle to Run in a Zone Cluster"

    The storage is mounted, but it only shows up inside the active node; you can't "see" it from a global cluster member. I am now trying to add the listener but am hitting a dead end.
    # clrs create -Z test -g test-rg -t SUNW.oracle_listener -p ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_snow -p LISTENER_NAME=test-ora-lsnr test-oracle_listener
    clrs:  taloraan.admin.uncw.edu:test - ORACLE_HOME /u01/app/oracle/product/10.2.0/db_snow does not exist
    clrs:  (C189917) VALIDATE on resource test-oracle_listener, resource group test-rg, exited with non-zero exit status.
    clrs:  (C720144) Validation of resource test-oracle_listener in resource group test-rg on node taloraan failed.
    clrs:  (C891200) Failed to create resource "test:test-oracle_listener".
    Is the LISTENER_NAME something that is assigned to the listener by Oracle, or is it simply something of my choosing?
    Also, how can I see a detailed status listing of the zone cluster? When I execute "cluster status", it doesn't give a verbose listing of my zone cluster. I have tried "clzc status test" but am not given much output either; I would like to see output that lists all of my resources.
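    A hedged pointer for the status question: the -Z option also works with the status subcommands, so from the global zone something like the following should list the zone cluster's groups and resources:
    # clrg status -Z test
    # clrs status -Z test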

  • Trouble creating domain in Logical host

    Hello.
    I'm taking the foundations course for Java CAPS, and when installing the core products on my computer, I'm having problems creating a new domain in the logical host.
    Whenever I try to create it, leaving the default options, I get the following error as it tries to create the domain on Windows:
    The Domain failed to be created:
    java.lang.Exception: Integration Server installed unsuccesfully: The following error ocurred while executing this line:
    C:\JavaCAPS51\logicalhost\is\setup.xml:618: Execute failed: java.io.IOException:
    CreateProcess:${env.BASE_DIR}\jre\bin\java -classpath
    C:\JavaCAPS51\logicalhost\is/derby/derby.jar;C:\JavaCAPS51\logicalhost\is/derby/derbytools.jar -Djdbc.drivers=org.apache.derby.jdbc.EmbeddedDriver -Dij.database=jdbc:derby:ejbtimer;create=true org.apache.derby.tools.ij
    ${env.BASE_DIR}/lib/ejbtimer_derby.sql error=2
    From what I can gather from the message, it has to do with the Java classpath. I'm using Windows Vista and Java 1.6.0_02. I've already checked the classpath variable and it looks correct.
    Does anyone have any ideas on how to solve this? I'd like to fix it before I get too behind in the course.
    Thanks

    "Does anyone have any ideas on how to solve this? I'd like to fix it before I get too behind in the course."
    Yep. Just upgrade your OS to Windows XP. Or even better, Linux or Unix.
    You probably need Java 1.4 as well.

  • Creating Logical Hostname Resource - Resource contains invalid hostnames

    I am desperately trying to create a shared ip address that my two-node zone cluster will utilize for a failover application. I have added the hostname/ip address pair to /etc/hosts and /etc/inet/ipnodes on both global nodes as well as within each zone cluster node. I then attempt to run the following:
    # clrslh create -Z test -g test-rg -h foo.bar.com test-hostname-rs
    which yields the following:
    clrslh: host1.example.com:test - The hostname foo.bar.com is not authorized to be used in this zone cluster test.
    clrslh: host1.example.com:test - Resource contains invalid hostnames.
    clrslh: (C189917) VALIDATE on resource test-hostname-rs, resource group test-rg, exited with non-zero exit status.
    clrslh: (C720144) Validation of resource test-hostname-rs in resource group test-rg on node host1 failed.
    clrslh: (C891200) Failed to create resource "test:test-hostname-rs".
    I have searched high and low. The only thing I found was the following:
    http://docs.sun.com/app/docs/doc/820-4681/m6069?a=view
    Which states: Use the clzonecluster(1M) command to configure the hostnames to be used for this zone cluster and then rerun this command to create the resource.
    I do not understand what it is saying. My guess is that I need to apply a hostname to the zone cluster. Granted, I don't know how to accomplish this. Halp?

    The procedure to authorize the hostnames for the zone cluster is below:
    clzc configure <zonecluster> (this will bring you into the zone cluster scope, like below)
    clzc:<zonecluster>> add net
    clzc:<zonecluster>:net> set address=<hostname>
    clzc:<zonecluster>:net> end
    clzc:<zonecluster>> commit
    clzc:<zonecluster>> info (to verify the hostname)
    After this operation, run the clrslh command to create the logical host resource
    and the command should pass.
    Thanks,
    Prasanna Kunisetty

  • Problem in creating logical hostname resource

    Hi all,
    I have a cluster configured on 10.112.10.206 and 10.112.10.208.
    I have a resource group, testrg.
    I want to create a logical hostname resource, testhost.
    I have given the IP 10.112.10.245 to testhost in the /etc/hosts file.
    I am creating the logical hostname resource with the command below:
    clrslh create -g testrg testhost
    I am doing this on 206.
    As soon as I do, the other node, 208, becomes unreachable: I am not able to ping 208, although ssh from 206 to 208 still works.
    I am also not able to ping 10.112.10.245.
    Please help.

    So, the physical IP addresses of your two nodes are:
    10.112.10.206 node1
    10.112.10.208 node2
    And your logical host is:
    10.112.10.245 testhost
    Have you got a netmask set for this network? Is it 255.255.255.0 and is it set in /etc/netmasks?
    It's most likely that this is the cause of the problem if you have different netmasks on the interfaces.
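    For reference, a matching /etc/netmasks entry (assuming a 10.112.10.0/24 network) would be:
    10.112.10.0    255.255.255.0
    It must be identical on both nodes.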
    Tim
    ---
