Can't get ZFS Pool to validate in HAStoragePlus

Hello.
We rebuilt our cluster with Solaris 10 U6 with Sun Cluster 3.2 U1.
When I was running U5 we never had this issue, but with U6 I can't get the zpool resource to validate when adding it to the resource group.
I am running the following commands:
zpool create -f tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 spare c2t4d0
zfs set mountpoint=/share tank
These commands build my zpool, and zpool status comes back clean.
I then run
clresource create -g tank_rg -t SUNW.HAStoragePlus -p Zpools=tank hastorage_rs
I get the following output:
clresource: mbfilestor1 - : no error
clresource: (C189917) VALIDATE on resource storage_rs, resource group tank_rg, exited with non-zero exit status.
clresource: (C720144) Validation of resource storage_rs in resource group tank_rg on node mbfilestor1 failed.
clresource: (C891200) Dec 2 10:27:00 mbfilestor1 SC[SUNW.HAStoragePlus:6,tank_rg,storage_rs,hastorageplus_validate]: : no error
Dec 2 10:27:00 mbfilestor1 Cluster.RGM.rgmd: VALIDATE failed on resource <storage_rs>, resource group <tank_rg>, time used: 0% of timeout <1800, seconds>
Failed to create resource "storage_rs".
My resource group and logical host both work with no problems, and when I ran this command on the older version of Solaris it worked fine. Is this a problem only with the newer version of Solaris?
I thought maybe downloading the most up-to-date patches would fix this, but it didn't.
I did notice this in my messages:
Dec 2 10:26:58 mbfilestor1 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hastorageplus_validate> for resource <storage_rs>, resource group <tank_rg>, node <mbfilestor1>, timeout <1800> seconds
Dec 2 10:26:58 mbfilestor1 Cluster.RGM.rgmd: [ID 616562 daemon.notice] 9 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hastorageplus/hastorageplus_validate>:tag=<tank_rg.storage_rs.2>: Calling security_clnt_connect(..., host=<mbfilestor1>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
Dec 2 10:27:00 mbfilestor1 SC[SUNW.HAStoragePlus:6,tank_rg,storage_rs,hastorageplus_validate]: [ID 471757 daemon.error] : no error
Dec 2 10:27:00 mbfilestor1 Cluster.RGM.rgmd: [ID 699104 daemon.error] VALIDATE failed on resource <storage_rs>, resource group <tank_rg>, time used: 0% of timeout <1800, seconds>
Any ideas, or should I file a bug report with Sun?
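For reference, the whole sequence looks roughly like this (a sketch; the resource type may already be registered on your cluster, and the logical-host resource name is a placeholder):
clresourcetype register SUNW.HAStoragePlus
clresourcegroup create tank_rg
clreslogicalhostname create -g tank_rg mbfilestor-lh   # placeholder logical host name
clresource create -g tank_rg -t SUNW.HAStoragePlus -p Zpools=tank hastorage_rs
clresourcegroup online -M tank_rg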

Hi,
Thanks. I ended up just going back to Solaris 10 U5. It was too critical to get back up and running, and I got tired of messing with it, so I went back. Everything is working like it should. I may try a Live Upgrade (LU) on the server and see what happens; maybe the pools and cluster resources will be fine.
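If I do try the Live Upgrade route, the sequence I have in mind is roughly this (just a sketch; the boot environment name and media path are placeholders, and on a UFS root lucreate also needs -m to name a target slice):
lucreate -n s10u6                               # new boot environment (add -m <target> on a UFS root)
luupgrade -u -n s10u6 -s /path/to/s10u6_media   # upgrade the new BE from the U6 media
luactivate s10u6                                # activate it for the next boot
init 6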

Similar Messages

  • [Solved] Can't Import ZFS Pool as /dev/disk/by-id

    I have a 4-disk raidz1 pool "data" made up of 3TB disks. Each disk is partitioned so that partition 1 is a 2GB swap partition and partition 2 is the rest of the drive. The zpool was built out of /dev/disk/by-id paths pointing to the second partition.
    # lsblk -i
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 2.7T 0 disk
    |-sda1 8:1 0 2G 0 part
    `-sda2 8:2 0 2.7T 0 part
    sdb 8:16 0 2.7T 0 disk
    |-sdb1 8:17 0 2G 0 part
    `-sdb2 8:18 0 2.7T 0 part
    sdc 8:32 0 2.7T 0 disk
    |-sdc1 8:33 0 2G 0 part
    `-sdc2 8:34 0 2.7T 0 part
    sdd 8:48 0 2.7T 0 disk
    |-sdd1 8:49 0 2G 0 part
    `-sdd2 8:50 0 2.7T 0 part
    sde 8:64 1 14.9G 0 disk
    |-sde1 8:65 1 100M 0 part /boot
    `-sde2 8:66 1 3G 0 part /
    I had a strange disk failure where the controller on one of the drives flaked out and caused my zpool not to come online after a reboot, and I had to run zpool export data / zpool import data to get the zpool put back together. That is now fixed, but my drives are now identified by their device names:
    [root@osiris disk]# zpool status
    pool: data
    state: ONLINE
    scan: resilvered 36K in 0h0m with 0 errors on Wed Aug 13 22:37:19 2014
    config:
    NAME STATE READ WRITE CKSUM
    data ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
    sda2 ONLINE 0 0 0
    sdb2 ONLINE 0 0 0
    sdc2 ONLINE 0 0 0
    sdd2 ONLINE 0 0 0
    errors: No known data errors
    If I try to import by-id without a zpool name, I get this (it's trying to import the whole disks, not the partitions):
    [root@osiris disk]# zpool import -d /dev/disk/by-id/
    pool: data
    id: 16401462993758165592
    state: FAULTED
    status: One or more devices contains corrupted data.
    action: The pool cannot be imported due to damaged devices or data.
    see: http://zfsonlinux.org/msg/ZFS-8000-5E
    config:
    data FAULTED corrupted data
    raidz1-0 ONLINE
    ata-ST3000DM001-1CH166_Z1F28ZJX UNAVAIL corrupted data
    ata-ST3000DM001-1CH166_Z1F0XAXV UNAVAIL corrupted data
    ata-ST3000DM001-1CH166_Z1F108YC UNAVAIL corrupted data
    ata-ST3000DM001-1CH166_Z1F12FJZ UNAVAIL corrupted data
    [root@osiris disk]# zpool status
    no pools available
    ... and the import doesn't succeed.
    If I put the pool name at the end, I get:
    [root@osiris disk]# zpool import -d /dev/disk/by-id/ data
    cannot import 'data': one or more devices is currently unavailable
    Yet, if I do the same thing with the /dev/disk/by-partuuid paths, it seems to work fine (other than the fact that I don't want partuuids), presumably because there are no entries there for entire disks.
    [root@osiris disk]# zpool import -d /dev/disk/by-partuuid/ data
    [root@osiris disk]# zpool status
    pool: data
    state: ONLINE
    scan: resilvered 36K in 0h0m with 0 errors on Wed Aug 13 22:37:19 2014
    config:
    NAME STATE READ WRITE CKSUM
    data ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
    d8bd1ef5-fab9-4d47-8d30-a031de9cd368 ONLINE 0 0 0
    fbe63a02-0976-42ed-8ecb-10f1506625f6 ONLINE 0 0 0
    3d1c9279-0708-475d-aa0c-545c98408117 ONLINE 0 0 0
    a2d9067c-85b9-45ea-8a23-350123211140 ONLINE 0 0 0
    errors: No known data errors
    As another approach, I tried to offline and replace sda2 with /dev/disk/by-id/ata-ST3000DM001-1CH166_Z1F28ZJX-part2, but that doesn't work either:
    [root@osiris disk]# zpool offline data sda2
    [root@osiris disk]# zpool status
    pool: data
    state: DEGRADED
    status: One or more devices has been taken offline by the administrator.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
    action: Online the device using 'zpool online' or replace the device with
    'zpool replace'.
    scan: resilvered 36K in 0h0m with 0 errors on Wed Aug 13 22:37:19 2014
    config:
    NAME STATE READ WRITE CKSUM
    data DEGRADED 0 0 0
    raidz1-0 DEGRADED 0 0 0
    sda2 OFFLINE 0 0 0
    sdb2 ONLINE 0 0 0
    sdc2 ONLINE 0 0 0
    sdd2 ONLINE 0 0 0
    errors: No known data errors
    [root@osiris disk]# zpool replace data sda2 /dev/disk/by-id/ata-ST3000DM001-1CH166_Z1F28ZJX-part2
    invalid vdev specification
    use '-f' to override the following errors:
    /dev/disk/by-id/ata-ST3000DM001-1CH166_Z1F28ZJX-part2 is part of active pool 'data'
    [root@osiris disk]# zpool replace -f data sda2 /dev/disk/by-id/ata-ST3000DM001-1CH166_Z1F28ZJX-part2
    invalid vdev specification
    the following errors must be manually repaired:
    /dev/disk/by-id/ata-ST3000DM001-1CH166_Z1F28ZJX-part2 is part of active pool 'data'
    I would appreciate any suggestions or workarounds on how to fix this.
    As I was typing this up, I stumbled upon a solution: deleting the symlinks in /dev/disk/by-id that pointed to entire devices (ata-* and wwn-*). I was then able to do a zpool import -d /dev/disk/by-id data and it pulled in the partition 2s. It persisted after a reboot, and my symlinks were automatically regenerated when the system came back up:
    [root@osiris server]# zpool status
    pool: data
    state: ONLINE
    scan: resilvered 36K in 0h0m with 0 errors on Wed Aug 13 23:06:46 2014
    config:
    NAME STATE READ WRITE CKSUM
    data ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
    ata-ST3000DM001-1CH166_Z1F28ZJX-part2 ONLINE 0 0 0
    ata-ST3000DM001-1CH166_Z1F0XAXV-part2 ONLINE 0 0 0
    ata-ST3000DM001-1CH166_Z1F108YC-part2 ONLINE 0 0 0
    ata-ST3000DM001-1CH166_Z1F12FJZ-part2 ONLINE 0 0 0
    It appears to be an issue specifically with importing non-whole-disk devices by-id. Although this turned into rambling rather than a question, hopefully it helps someone having issues re-importing a zpool by /dev/disk/by-id.
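    Condensed, the workaround looks roughly like this (a sketch; the whole-disk link name is one of the disks from this pool, the wwn link name is a placeholder, and udev recreates the removed links on the next reboot):
    zpool export data
    rm /dev/disk/by-id/ata-ST3000DM001-1CH166_Z1F28ZJX    # whole-disk link only; repeat for each pool disk
    rm /dev/disk/by-id/wwn-0x...                          # placeholder; remove the matching whole-disk wwn link
    zpool import -d /dev/disk/by-id data
    zpool status                                          # vdevs should now show as ata-...-part2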
    Matt

    This just saved my morning, thank you!
    I was using Ubuntu 14.04, and after an upgrade to 3.13.0-43-generic it somehow broke... Anyhow, now the zpool survives restarts again and I don't have to import it every time using partuuids.

  • How do I get my feed to validate?

    http://www.sportbiketshirts.com/news_files/page6.xml
    I don't understand anything about RSS or XML. I simply want to produce a simple podcast. I've tried creating this podcast more than once and I can not get the feed to validate through Apple's Feed Validator web page. Can someone please help? Thanks!

    Hi,
    With iChat open as the front application go to the Video menu and make sure +Camera Enabled+ has a Tick
    Presumably the View Menu has +Show Video Status+ Ticked and your Buddy List displays Video icons for your self and Buddies ?
    Go to System Preferences > Quicktime > Streaming and set the speed to 1.5Mbps T1/Intranet/LAN and restart iChat
    Make sure your Buddy has done this as well.
    9:56 PM Thursday; August 21, 2008

  • ZFS - Can't make raidz pool available. Please Help

    Hi All,
    Several months ago I created a raidz pool on a 6-disk external Sun array. It was working fine until the other day, when I lost a drive. I took out the old drive and put in the new drive, and am unable to bring the pool back up. It won't let me issue a zpool replace, or an online, or anything. Here is hopefully all the info you need to see what's going on (if you need more, let me know).
    Piece of dmesg from after the reboot.
    Dec 19 14:17:14 stzehlsun fmd: [ID 441519 daemon.error] SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
    Dec 19 14:17:14 stzehlsun EVENT-TIME: Tue Dec 19 14:17:14 EST 2006
    Dec 19 14:17:14 stzehlsun PLATFORM: SUNW,Ultra-2, CSN: -, HOSTNAME: stzehlsun
    Dec 19 14:17:14 stzehlsun SOURCE: zfs-diagnosis, REV: 1.0
    Dec 19 14:17:14 stzehlsun EVENT-ID: 644874cf-084d-413d-88c6-c195db617041
    Dec 19 14:17:14 stzehlsun DESC: A ZFS pool failed to open. Refer to http://sun.com/msg/ZFS-8000-CS for more information.
    Dec 19 14:17:14 stzehlsun AUTO-RESPONSE: No automated response will occur.
    Dec 19 14:17:14 stzehlsun IMPACT: The pool data is unavailable
    Dec 19 14:17:14 stzehlsun REC-ACTION: Run 'zpool status -x' and either attach the missing device or
    Dec 19 14:17:14 stzehlsun restore from backup.
    # zpool status
    pool: array
    state: FAULTED
    status: One or more devices could not be opened. There are insufficient
    replicas for the pool to continue functioning.
    action: Attach the missing device and online it using 'zpool online'.
    see: http://www.sun.com/msg/ZFS-8000-D3
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    array UNAVAIL 0 0 0 insufficient replicas
    c0t9d0 ONLINE 0 0 0
    c0t10d0 ONLINE 0 0 0
    c0t11d0 ONLINE 0 0 0
    c0t12d0 ONLINE 0 0 0
    c0t13d0 UNAVAIL 0 0 0 cannot open
    c0t14d0 ONLINE 0 0 0
    # zpool online array c0t13d0
    cannot open 'array': pool is currently unavailable
    run 'zpool status array' for detailed information
    # zpool replace array c0t13d0
    cannot open 'array': pool is currently unavailable
    run 'zpool status array' for detailed information
    As you can see, I've replaced c0t13d0 with the new drive, format sees it just fine, and it appears to be up and running. What do I need to do to get this new drive into the raidz pool and get my pool back online? I just don't see what I'm missing here. Thanks!
    Steve

    Sadly, I never received an answer on this forum, so I opened a ticket with Sun, and they got right back to me. For anyone following this thread, I'll pass along what they told me.
    Basically, I THOUGHT I had created a raidz pool. Apparently I did not, and had only created a RAID0 (striped) pool, so with the one disk gone there was no parity to rebuild the array. It remained faulted with no way to fix it; the only solution was to destroy the pool and start again. I really thought I had created a raidz, but now that I have created a real raidz pool, I can see the difference in the zpool status output.
    Before: (MUST have been RAID0)
    NAME STATE READ WRITE CKSUM
    array UNAVAIL 0 0 0 insufficient replicas
    c0t9d0 ONLINE 0 0 0
    c0t10d0 ONLINE 0 0 0
    c0t11d0 ONLINE 0 0 0
    c0t12d0 ONLINE 0 0 0
    c0t13d0 UNAVAIL 0 0 0 cannot open
    c0t14d0 ONLINE 0 0 0
    After creating a REAL raidz pool:
    NAME STATE READ WRITE CKSUM
    array ONLINE 0 0 0
    raidz ONLINE 0 0 0
    c0t9d0 ONLINE 0 0 0
    c0t10d0 ONLINE 0 0 0
    c0t11d0 ONLINE 0 0 0
    c0t12d0 ONLINE 0 0 0
    c0t13d0 ONLINE 0 0 0
    c0t14d0 ONLINE 0 0 0
    Note the added raidz line.
    I asked the tech support guy whether it was possible that I HAD created a raidz and that, due to the disk loss and reboots, a bug was only showing it as RAID0. He said there are no reported cases of such an incident and he really didn't think so. So I guess I just messed up when I created it in the first place, and since I didn't know what a raidz pool should look like, I had no way of knowing I hadn't created one. (Yes, I know I could have added up the disk space and realized no disk was being used for parity, but I didn't.)
    So the moral here is to make sure you created what you thought you created; then it will do what you expect.
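    For anyone else checking their own pools, the difference comes down to the raidz keyword at creation time; a sketch using the same device names:
    # striped (RAID0) pool: no parity, losing one disk loses the pool
    zpool create array c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0
    # raidz pool: single parity, a failed disk can be swapped in with 'zpool replace'
    zpool create array raidz c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0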

  • 903/902/BC4J can't get data-sources.xml conn pooling to work in production; help

    I have several BC4J ears deployed to a 903 instance of OC4J being configured as a standalone
    instance. I've had this problem since I started deploying in development on 902. So it's
    some basic problem that I've not mastered.
    I can't get data-sources.xml managed connection pooling to actually pool conns. I want to
    declare my JNDI JDBC connection pool in j2ee/home/config/data-sources.xml and
    have all BC4J apps get conns from this JNDI JDBC pool. I've removed all data-sources.xml from my BC4J ears,
    and published the JNDI JDBC source in my OC4J common data-sources.xml. I've tested that this is
    the place controlling the conn URL/login passwd by commenting it out of config/data-sources.xml;
    my BC4J apps then throw exceptions (can't get conn).
    I've set the OC4J startup cmd line with the BC4J property to enable connection pooling:
    -Djbo.doconnectionpooling=true
    Symptom:
    Connections are created and closed. Instead of being put back into the pool managed by OC4J,
    whatever BC4J or my data-sources.xml is doing, the connections are just being created and
    closed.
    I can verify this via (solaris) lsof and netstat, where I see my oc4j instance under test load
    with only 1 or 2 conns to the db box, and the ephemeral port is tumbling, meaning a new socket is
    being opened for each conn. ;( grrrrrrr
    Does anyone have a clue as to why this is happening?
    Thanks, curt
    my data-sources.xml
    <data-sources>
         <data-source
            class="com.evermind.sql.DriverManagerDataSource"
            connection-driver="oracle.jdbc.driver.OracleDriver"
            ejb-location="jdbc/DEVDS"
            location="jdbc/DEVCoreDS"
            name="DEVDS"
            password="j2train"
            pooled-location="jdbc/DEVPooledDS"
            url="jdbc:oracle:thin:@10.2.1.30:1521:GDOC"
            username="jscribe"
            xa-location="jdbc/xa/DEVXADS"
            inactivity-timeout="300"
            max-connections="50"
            min-connections="40"
        />
    </data-sources>

    I've run another test using a local data-sources.xml that's packaged in the .ear. Pooling
    under BC4J still doesn't work.
    A piece of info is that the 903 OC4J release notes state that global conn pooling doesn't
    work, inferring that the j2ee/home/config/data-sources.xml data sources aren't pooled.
    I just tested so-called local connection pooling, where I edited the data-sources.xml that
    gets packaged in the ear to include the min/max params, and re-ran my test.
    Still, the AM creates a new conn to a new socket, and closes the conn when done, causing
    each conn to not be pooled but rather opened and then closed to the DB box. This is verified with lsof and
    netstat: the ephemeral port # on the DB box side always changes, meaning it's a
    new socket and not an old pooled conn socket.
    What the heck?
    Surely if the AM conn check-out/return code works properly, OC4J's pooling JDBC driver would
    pool and not close the socket?
    Has anyone gotten JDBC DataSource connections in BC4J to actually be pooled under OC4J?
    Since I couldn't get this to work in my early 902 OC4J testing, and still can't get it to work
    under 903 OC4J, it's either my config, BC4J's AM code, or OC4J.
    Any thoughts on how to figure out what's not configured correctly or has a bug?
    Thanks, curt

  • 903/902/BC4J can't get OC4J data-sources.xml conn pooling to work in production: help

    [cross posted to the j2ee forum]
    I have several BC4J ears deployed to a 903 instance of OC4J being configured as a standalone
    instance. I've had this problem since I started deploying in development on 902. So it's
    some basic problem that I've not mastered.
    I can't get data-sources.xml managed connection pooling to actually pool conns. I want to
    declare my JNDI JDBC connection pool in j2ee/home/config/data-sources.xml and
    have all BC4J apps get conns from this JNDI JDBC pool. I've removed all data-sources.xml from
    my BC4J ears, and published the JNDI JDBC source in my OC4J common data-sources.xml.
    I've tested that this is the place controlling the conn URL/login passwd by commenting it
    out of config/data-sources.xml; my BC4J apps then throw exceptions (can't get conn).
    I've set the OC4J startup cmd line with the BC4J property to enable connection pooling:
    -Djbo.doconnectionpooling=true
    Symptom:
    Connections are created and closed. Instead of being put back into the pool managed by OC4J,
    whatever BC4J or my data-sources.xml is doing, the connections are just being created and
    closed.
    I can verify this via (solaris) lsof and netstat, where I see my oc4j instance under test load
    with only 1 or 2 conns to the db box, and the ephemeral port is tumbling, meaning a new socket is
    being opened for each conn. ;( grrrrrrr
    Does anyone have a clue as to why this is happening?
    Thanks, curt
    my data-sources.xml
    <data-sources>
         <data-source
            class="com.evermind.sql.DriverManagerDataSource"
            connection-driver="oracle.jdbc.driver.OracleDriver"
            ejb-location="jdbc/DEVDS"
            location="jdbc/DEVCoreDS"
            name="DEVDS"
            password="j2train"
            pooled-location="jdbc/DEVPooledDS"
            url="jdbc:oracle:thin:@10.2.1.30:1521:GDOC"
            username="jscribe"
            xa-location="jdbc/xa/DEVXADS"
            inactivity-timeout="300"
            max-connections="50"
            min-connections="40"
        />
    </data-sources>

    Thanks Leif,
    Yes, set it to the location jndi path.
    A piece of info is that the 903 OC4J release notes state that global conn pooling doesn't
    work, inferring that the j2ee/home/config/data-sources.xml data sources aren't pooled.
    I just tested so-called local connection pooling, where I edited the data-sources.xml that
    gets packaged in the ear to include the min/max params, and re-ran my test.
    Still, the AM creates a new conn to a new socket, and closes the conn when done, causing
    each conn to not be pooled but rather opened and then closed to the DB box. This is verified with lsof and
    netstat: the ephemeral port # on the DB box side always changes, meaning it's a
    new socket and not an old pooled conn socket.
    What the heck?
    Surely if the AM conn check-out/return code works properly, OC4J's pooling JDBC driver would
    pool and not close the socket?
    Has anyone gotten JDBC DataSource connections in BC4J to actually be pooled under OC4J?
    Since I couldn't get this to work in my early 902 OC4J testing, and still can't get it to work
    under 903 OC4J, it's either my config, BC4J's AM code, or OC4J.
    Any thoughts on how to figure out what's not configured correctly or has a bug?
    Thanks, curt

  • How can I get authentication and authorization through OS X open directory with the Sun ZFS STOR ZS3-2

    how can I get authentication and authorization through OS X open directory with the Sun ZFS STOR ZS3-2
    I have configured NFS. I need help configuring the share that I created on the Sun ZFS STOR ZS3-2 to connect with OS X Open Directory.

    Hi,
    You may try checking the help page for LDAP configuration:
    https://<Appliance_IP>:215/wiki/index.php/Configuration:Services:LDAP
    The ZFS Storage appliance supports LDAP, NIS, and AD as directory services.
    Hopefully Open Directory is also based on LDAP and will work in a similar fashion.
    Thanks
    Nitin

  • Can't get XML to validate against a schema.

    I can't get an XML file to validate against a schema. I'm not sure if the problem is in my schema, XML file, or Java code.
    Here is the schema:
    <?xml version="1.0"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="TerraFrame">
         <xs:complexType>
             <xs:element name="connection" minOccurs="1" maxOccurs="unbounded">
              <xs:complexType>
                  <xs:sequence>
                        <xs:element name="label" type="xs:string"/>
                        <xs:element name="type" type="xs:string"/>
                        <xs:element name="address" type="xs:string"/>
                  </xs:sequence>
              </xs:complexType>
             </xs:element>
         </xs:complexType>
        </xs:element>
    </xs:schema>
    Here is the XML file:
    <?xml version="1.0"?>
    <TerraFrame xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
         <!-- default connection -->
         <connection>
             <label>default</label>
             <type>JavaProxy</type>
             <address></address>
         </connection>
         <!-- default RMI connection -->
         <connection>
             <label>rmi_default</label>
             <type>RMIProxy</type>
             <address>//localhost/RemoteControllerService</address>
         </connection>
         <!-- default Web Service connection -->
         <connection>
             <label>web_service_default</label>
             <type>WebServiceProxy</type>
             <address>http://localhost/</address>
         </connection>
         <!-- default Java connection -->
         <connection>
             <label>java_default</label>
             <type>JavaProxy</type>
             <address></address>
         </connection>
    </TerraFrame>
    And finally, here is the code snippet where I'm validating:
    [EDIT:] The constants CONNECTIONS_XML_FILE and CONNECTIONS_SCHEMA_FILE just point to the XML and schema files, respectively. I have verified that these paths are correct and working.
    static final String JAXP_SCHEMA_LANGUAGE = "http://java.sun.com/xml/jaxp/properties/schemaLanguage";
      static final String W3C_XML_SCHEMA = "http://www.w3.org/2001/XMLSchema";
      static final String JAXP_SCHEMA_SOURCE = "http://java.sun.com/xml/jaxp/properties/schemaSource";
    public void parse() {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setValidating(true);
        factory.setAttribute(JAXP_SCHEMA_LANGUAGE, W3C_XML_SCHEMA);
        factory.setAttribute(JAXP_SCHEMA_SOURCE, new File(CONNECTIONS_SCHEMA_FILE));
        DocumentBuilder builder;
        try {
            builder = factory.newDocumentBuilder();
            builder.setErrorHandler(new XMLConnectionsErrorHandler());
            document = builder.parse(new File(CONNECTIONS_XML_FILE));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    Any clue as to why this is failing with the following error?
    The content of '#AnonType_TerraFrame' is invalid. Element 'element' is invalid, misplaced, or occurs too often.


  • Can't get FCKeditorAPI to load to validate HTML Editor contents

    I have an HTML Editor item on my page and wanted to quickly check the length of the text in the editor before submitting the page. It should be possible to get the contents of the HTML Editor using the FCKeditorAPI, but I can't get it to load. I'm doing this by putting basically this in the HTML header:
    <script type="text/javascript">
    function checkLength() {
        var oEditor = FCKeditorAPI.GetInstance('MY_EDITOR');
        alert(oEditor.GetXHTML().length);
    }
    </script>
    Then I call the function when the Save button is clicked. I'm only getting "FCKeditorAPI is not defined" errors, however.

    Hi,
    I have the same problem, because I want to check the number of characters in the editor field before the values are stored.
    And when I try to get the value of the FCKeditor field with document.getElementById(XXX).value; I don't get the actual value.
    So I have to get the value through the instance of the fckeditor.
    Thanks for your help,
    Tim

  • iSCSI array died, held ZFS pool. Now box hangs

    I was doing some iSCSI testing and, on an x86 EM64T server running an out-of-the box install of Solaris 10u5, created a ZFS pool on two RAID-0 arrays on an IBM DS300 iSCSI enclosure.
    One of the disks in the array died, the DS300 got really flaky, and now the Solaris box gets hung in boot. It looks like it's trying to mount the ZFS filesystems. The box has two ZFS pools, or had two, anyway. The other ZFS pool has some VirtualBox images filling it.
    Originally, I got a few iSCSI target offline messages on the console, so I booted to failsafe and tried to run iscsiadm to remove the targets, but that wouldn't work. So I just removed the contents of /etc/iscsi and all the iSCSI instances in /etc/path_to_inst on the root drive.
    Now the box hangs with no error messages.
    Anyone have any ideas what to do next? I'm willing to nuke the iSCSI ZFS pool as it's effectively gone anyway, but I would like to save the VirtualBox ZFS pool, if possible. But they are all test images, so I don't have to save them. The host itself is a test host with nothing irreplaceable on it, so I could just reinstall Solaris. But I'd prefer to figure out how to save it, even if only for the learning experience.

    Try this: disconnect the iSCSI drives completely, then boot. My fallback plan on ZFS, if things get screwed up, is to physically disconnect the ZFS drives so that Solaris doesn't see them on boot. It marks them failed and should boot. Once it's up, zpool destroy the pools WITH THE DRIVES DISCONNECTED so that it doesn't think there's a pool anymore. THEN reconnect the drives and try to do a "zpool import -f".
    The pools that are on intact drives should still be OK. In theory :)
    BTW, if you removed devices, you probably should do a reconfiguration boot (create /a/reconfigure in failsafe mode) and make sure the devices get reprobed. Does the thing boot in single user (pass -s after the multiboot line in grub)? If it does, you can disable the iSCSI services with "svcadm disable network/iscsi_initiator; svcadm disable iscsitgt".
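    Condensed into commands, that suggestion looks roughly like this (a sketch; the pool names are placeholders, and the iSCSI array stays disconnected until after the destroy):
    # from failsafe: force a device reconfiguration on the next boot
    touch /a/reconfigure
    # boot single user (-s on the multiboot/grub line) with the iSCSI array still disconnected
    svcadm disable network/iscsi_initiator
    zpool destroy -f iscsipool    # placeholder name for the dead iSCSI pool
    # reconnect the good drives, then re-import the surviving pool
    zpool import -f vboxpool      # placeholder name for the VirtualBox pool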

  • EAP-FAST on Local Radius Server : Can't Get It Working

    Hi all
    I'm using an 877w router (flash:c870-advsecurityk9-mz.124-24.T4.bin) as a local RADIUS server and have followed various config guides on CCO. LEAP works fine, but I just can't get EAP-FAST to work.
    I'm testing with a Win7 client using the AnyConnect Secure Mobility Client, and also a MacBook Pro, but without luck.
    The router sees an unknown auth type, and when I run some debugs it reports unknown EAP type 3.
    sh radius local-server s
    Successes              : 1           Unknown usernames      : 0        
    Client blocks          : 0           Invalid passwords      : 0        
    Unknown NAS            : 0           Invalid packet from NAS: 17      
    NAS : 172.27.44.1
    Successes              : 1           Unknown usernames      : 0        
    Client blocks          : 0           Invalid passwords      : 0        
    Corrupted packet       : 0           Unknown RADIUS message : 0        
    No username attribute  : 0           Missing auth attribute : 0        
    Shared key mismatch    : 0           Invalid state attribute: 0        
    Unknown EAP message    : 0           Unknown EAP auth type  : 17       
    Auto provision success : 0           Auto provision failure : 0        
    PAC refresh            : 0           Invalid PAC received   : 0       
    Can anyone suggest what I might be doing wrong?
    Regs, Tim

    Thanks Nicolas, relevant snippets from config:
    aaa new-model
    aaa group server radius rad_eap
    server 172.27.44.1 auth-port 1812 acct-port 1813
    aaa authentication login eap_methods group rad_eap
    aaa authorization exec default local
    aaa session-id common
    dot11 ssid home
    vlan 3
    authentication open eap eap_methods
    authentication network-eap eap_methods
    authentication key-management wpa
    ip dhcp pool home
       import all
       network 192.168.1.0 255.255.255.0
       default-router 192.168.1.1
       dns-server 194.74.65.68 194.74.65.69
    ip inspect name ethernetin tcp
    ip inspect name ethernetin udp
    ip inspect name ethernetin pop3
    ip inspect name ethernetin ssh
    ip inspect name ethernetin dns
    ip inspect name ethernetin ftp
    ip inspect name ethernetin tftp
    ip inspect name ethernetin smtp
    ip inspect name ethernetin icmp
    ip inspect name ethernetin telnet
    interface Dot11Radio0
    no ip address
    encryption vlan 1 mode ciphers aes-ccm tkip
    encryption vlan 2 mode ciphers aes-ccm tkip
    encryption vlan 3 mode ciphers aes-ccm tkip
    broadcast-key vlan 1 change 30
    broadcast-key vlan 2 change 30
    broadcast-key vlan 3 change 30
    ssid home
    speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0
    station-role root
    interface Dot11Radio0.3
    encapsulation dot1Q 3
    no cdp enable
    bridge-group 3
    bridge-group 3 subscriber-loop-control
    bridge-group 3 spanning-disabled
    bridge-group 3 block-unknown-source
    no bridge-group 3 source-learning
    no bridge-group 3 unicast-flooding
    interface Vlan3
    no ip address
    bridge-group 3
    interface BVI3
    ip address 192.168.1.1 255.255.255.0
    ip inspect ethernetin in
    ip nat inside
    ip virtual-reassembly
    radius-server local
    no authentication mac
    nas 172.27.44.1 key 0 123456
    user test1 nthash 0 B151E8FF684B4F376C018E632A247D84
    user test2 nthash 0 F2EEAE1D895645B819C9FD217D0CA1F9
    user test3 nthash 0 0CB6948805F797BF2A82807973B89537
    radius-server host 172.27.44.1 auth-port 1812 acct-port 1813 key 123456
    radius-server vsa send accounting

  • How can I get the "text" field from the actionEvent.getSource() ?

    I have some sample code:
    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.*;
    import java.util.ArrayList;
    public class JFrameTester{
         public static void main( String[] args ) {
              JFrame f = new JFrame("JFrame");
              f.setSize( 500, 500 );
              ArrayList < JButton > buttonsArr = new ArrayList < JButton > ();
              buttonsArr.add( new JButton( "first" ) );
              buttonsArr.add( new JButton( "second" ) );
              buttonsArr.add( new JButton( "third" ) );
              MyListener myListener = new MyListener();
              ( (JButton) buttonsArr.get( 0 ) ).addActionListener( myListener );
              ( (JButton) buttonsArr.get( 1 ) ).addActionListener( myListener );
              ( (JButton) buttonsArr.get( 2 ) ).addActionListener( myListener );
              JPanel panel = new JPanel();
              panel.add( buttonsArr.get( 0 ) );
              panel.add( buttonsArr.get( 1 ) );
              panel.add( buttonsArr.get( 2 ) );
              f.getContentPane().add( BorderLayout.CENTER, panel );
              f.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
              f.setVisible( true );
         }
         public static class MyListener implements ActionListener {
              public MyListener() {}
              public void actionPerformed( ActionEvent e ) {
                   System.out.println( "hi!! " + e.getSource() );
                   // I need to know the title of the button that was clicked...
              }
         }
    }
    The output of the code is something like this:
    hi! javax.swing.JButton[,140,5,60x25,alignmentX=0.0,alignmentY=0.5,
    border=javax.swing.plaf.BorderUIResource$CompoundBorderUIResource@1ebcda2d,
    flags=296,maximumSize=,minimumSize=,preferredSize=,defaultIcon=,disabledIcon=,
    disabledSelectedIcon=,margin=javax.swing.plaf.InsetsUIResource[top=2,left=14,bottom=2,
    right=14],paintBorder=true,paintFocus=true,pressedIcon=,rolloverEnabled=true,
    rolloverIcon=,rolloverSelectedIcon=,selectedIcon=,text=first,defaultCapable=true]
    I need this: "first" (from this part: "text=first" of the output above).
    Does anyone know how I can get the "text" field from e.getSource()?

    System.out.println( "hi!! " + ( (JButton) e.getSource() ).getText() );
    I think the problem is solved. If your need is to know the text of the button, yes.
    In a real-world application, no.
    In a RW application, a typical need is merely to know the "logical role" of the button (i.e., the button that validates the form, regardless of whether its text is "OK" or "Save", "Go",...). Text tends to vary much more than the structure of the UI over time.
    In this case you can get the source's name (getName()), which will be the name you set on the button at UI construction time. Or you can compare the source for equality with a specific button (if (evt.getSource() == okButton) {...}).
    All in all, I think the best solution is: don't use the same ActionListener for more than one action (i.e., don't add the same ActionListener to all your buttons, which leads to a big if-then-else series in your actionPerformed()).
    Eventually, if you're listening to a single button's actions whose text changes over time (e.g. "pause"/"resume" in a VCR bar), I still think it's a bad idea to rely on the text of the button. Instead, that text corresponds to a logical state (playing/paused, respectively), and it is more maintainable to base your logic on the state, which is more resilient to evolutions of the UI (e.g. if you happen to use 2 toggle buttons instead of one single play/pause button).

  • How can I get the context-param from a web.xml file using Struts?

    Hello:
    I need to get the context-param from the web.xml file of my web project using Struts. I want to configure the JDBC datasource connection pooling here. For example:
    <context-param>
    <param-name>datasource</param-name>
    <param-value>jdbc/formacion</param-value>
    <description>Jdbc datasource</description>
    </context-param>
    and then get this parameter from any Action class.
    Using a plain servlet, something similar would be:
    /** Initializes a new XServlet */
    public void init(ServletConfig config) throws ServletException {
        for (Enumeration e = config.getInitParameterNames(); e.hasMoreElements();) {
            System.out.println(e.nextElement());
        }
        super.init(config);
        String str = config.getInitParameter("datasource");
        System.out.println(str);
    }
    public void doPost(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // res.setContentType( );
        System.out.println("Got post request in XServlet");
        PrintWriter out = res.getWriter();
        out.println("nada");
        out.flush();
        out.close();
    }
    but this only works for init-params. If I use
    <servlet>
         <servlet-name>MyServlet</servlet-name>
         <display-name>MyServlet</display-name>
         <servlet-class>myExamples.servlet.MyServlet</servlet-class>
         <init-param>
         <param-name>datasource</param-name>
         <param-value>jdbc/formacion</param-value>
    </init-param>
    </servlet>
    inside my web.xml. I need something similar, but using Struts inside the Action class, so that I can get the context-params and call my database.
    Thank you

    To get context parameters from your web.xml file, you can simply get the ActionServlet object from an implementing Action class. In the perform (or execute) method, make the following call:
    ServletContext context = getServlet().getServletContext();
    String tempContextVar = context.getInitParameter("<your context param>");

  • Can't get my laptop and airport extreme to work together

    Hi,
    I can't get a green light on my base. Also, when I try to set up the computer to do the wireless thing, it prompts me to enter all this info that I don't have... like network password, info, connection... where would I find that info? Sorry if I'm being vague... it seems to be a lot of work to connect! I keep reading posts about it being so easy!

    Hi DelilahBelle - maybe consider the following steps:
    1.) Establish a valid connection with your DSL provider. To do so, connect your modem to your computer using your (mostly yellow) Ethernet cable (so for the moment no wireless is involved). Then check whether you can access the internet through Safari.
    1.a) If not, first configure your modem and the DSL connection correctly. You can use Safari and enter the modem's TCP/IP (router-side) address (e.g. 192.168.1.1 for a Netopia modem) and you will get the modem/router's menu. To configure the modem you must have some information from your DSL provider (AT&T, Swisscom, T-Online, whatever):
    - your user ID and password to validate yourself in the login process
    - maybe DNS1 and DNS2 addresses and so on (if you have a simple network it's not required)
    1.b) If OK, ensure your modem is set to DHCP (dynamic TCP/IP address management). Maybe you have to set this parameter in your modem using its router menu. Then every component in your network will get a dynamic TCP/IP address.
    2.) Connect your AirPort Express to the modem with the Ethernet cable. Do a hard reset of the AirPort Express. After that, select and configure your AirPort Express using the AirPort Express utility version 5.2.1 (program, utility folder) and its assistant. After finishing you should get a green light on the Express and you should be able to get on the internet through Safari.
    2.a) If not, select the AirPort Express in the utility and double-click its icon. You will get the summary screen for the AirPort Express. There you will see a yellow spot; click it and you will get another window where the error is explained...
    2.b) Be happy.
    3.) If your network is more complicated, then maybe follow this:
    http://discussions.apple.com/thread.jspa?threadID=1087373&tstart=0
    But anyway, steps 1 and 2 are, I would say, the base for any extension.

  • How can I get my iPhone to stop prompting me to enter my wife's Apple ID when updating apps? I've checked settings for iTunes

    How can I get my iPhone 6 to stop prompting me to enter my wife's Apple ID when updating apps? I have verified in Settings that my account is listed in iTunes & App Store, as well as iCloud. I can download new apps fine. I have shut down the phone and re-powered it, and have tried plugging into my iMac while signed into my iTunes account. Can't figure out what to do next.

    Is the computer a synced device for your Apple ID and approved as an active device for this synchronization?
    Just cloud-sync once, activate the computer for your Apple ID in iTunes, ensure all software is up to date, turn on Home Sharing between the devices, and you can reinstall the apps from the Apple ID purchase history via download.
    You basically cleaned out the canonical data Apple uses to validate all that, which was backup data. Since iOS devices now have the Purchased option in the App Store, syncing the devices via Home Sharing and properly activating them will give your computer the purchased content from the cloud and free up 10 GB of space on the precious, smaller solid-state drives.
    Hope that helps...
    If you need to know the menu to find this info, let me know. But the info is easily accessible in the help menu.
