HASP and ZFS

Hi All,
I'm using clustered zones and ZFS and I get these messages below.
Is this something I need to be worried about?
Have I missed something when I created the resource, which is actually
configured "by the book"?
Will HAStoragePlus work as expected?
Can I somehow verify that the zpool is monitored?
Apr 4 15:38:07 dceuxa2 SC[SUNW.HAStoragePlus:4,dceux08a-rg,dceux08a-hasp,hastorageplus_postnet_stop]: [ID 815306 daemon.warning] Extension properties GlobalDevicePaths and FilesystemMountPoints are both empty.
/Regards
Ulf

Thanks for your quick replies.
The HASP resource was created with -x Zpools="orapool1,orapool2"
and all other properties are at their defaults.
Part of the clrs show -v output:
Resource: dceux08a-hasp
Type: SUNW.HAStoragePlus:4
Type_version: 4
Group: dceux08a-rg
R_description:
Resource_project_name: default
Enabled{dceuxa1:dceux08a}: True
Enabled{dceuxa2:dceux08a}: True
Monitored{dceuxa1:dceux08a}: True
Monitored{dceuxa2:dceux08a}: True
FilesystemMountPoints: <NULL>
GlobalDevicePaths: <NULL>
Zpools: orazpool1 orazpool2
(Solaris10u3/Sparc SC3.2, EIS 27-Feb)
/BR
Ulf
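
A minimal sketch of how such a Zpools-based HAStoragePlus resource is typically created and then checked (resource, group and pool names follow the thread above; verify the exact options against your Sun Cluster 3.2 documentation):

clresource create -g dceux08a-rg -t SUNW.HAStoragePlus -p Zpools=orapool1,orapool2 dceux08a-hasp
clresource show -v dceux08a-hasp      # Zpools should list both pools
clresource status dceux08a-hasp       # per-node online/monitored state
zpool status orapool1                 # pool health on the node that currently imports it

clresource status tells you whether the resource and its fault monitor are online on each node, and zpool status confirms the pool itself is healthy; the hastorageplus_postnet_stop warning only reflects that GlobalDevicePaths and FilesystemMountPoints were left empty, which matches the Zpools-only configuration shown above.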

Similar Messages

  • Max File size in UFS and ZFS

    Hi,
    Can anyone share what the maximum file size is that can be created on Solaris 10 UFS and ZFS?
    And what is the maximum file size when compressing with tar and gzip?
    Regards
    Siva

    from 'man ufs':
    A sparse file can have a logical size of one terabyte. However, the
    actual amount of data that can be stored in a file is approximately
    one percent less than one terabyte because of file system overhead.
    As for ZFS, well, it's a 128-bit filesystem, and the maximum size of a single file or directory is 2^64 bytes, which works out to about 16 exabytes (16384 petabytes).
    http://www.sun.com/software/solaris/ds/zfs.jsp
    .7/M.
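    For the 2^64 figure, a quick sanity check with bc:
    echo '2^64' | bc          # 18446744073709551616 bytes, i.e. 16 EiB (16384 PiB)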

  • SunCluster, MPXIO, Clariion and ZFS?

    Hi,
    we have a 2-node cluster (SunCluster 3.2). Our storage is an EMC Clariion CX700. We have created some zpools and integrated them into the cluster.
    We cannot use PowerPath 5.1 or 5.2 for this because Sun Cluster with ZFS is not supported in that environment, so we want to use MPxIO. Our question: if there is an SP failover on the Clariion, does MPxIO handle it so that everything keeps working without problems?
    Thanks!
    Greets
    Björn

    Hi,
    What you need to do is the following:
    edit /kernel/drv/scsi_vhci.conf and
    follow the directions in this document:
    http://www.filibeto.org/sun/lib/nonsun/emc/SolarisHostConectivity.pdf
    regards
    Filip
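    Not Clariion-specific, but a hedged sketch for Solaris 10: enable MPxIO and confirm that each LUN really shows a path to both SPs before you test an SP failover (the LUN device name below is a placeholder):
    stmsboot -e                           # enable MPxIO on supported HBAs; requires a reboot
    mpathadm list lu                      # one entry per multipathed LUN, with path counts
    mpathadm show lu /dev/rdsk/<lun>s2    # per-path state, one path via each SP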

  • ZfD and ZfS on same server? Centralized monitoring of disk usage? ZfS 6.5 or 7?

    We have ZfD running on one server for approx. 600 users (Sybase db on
    NetWare 6.5).
    We use it for WS registration, WS Inventory, Application Mgmt, the NAL
    database, and Imaging.
    I have a mixture of Microsoft Windows and Novell NetWare servers.
    Approximately:
    30 Microsoft Windows servers (2000 and 2003)
    10 Novell NetWare servers (NW 5.1 SP7 and NW 6.5 SP3)
    Q1: Is it feasible to have the ZfS backend running on the same server that
    hosts the ZfD backend ?
    We are trying to find a way to monitor all servers for disk usage. Ideally
    we want to get a view/report of all servers (regardless of Novell or
    Microsoft) to see where each disk is at with regards to available space and
    also see historical trends for disk usage.
    Q2: Can ZfS do this for us? We are licensed to use it but so far we've
    only implemented ZfD 6.5.2 and are quite pleased with the results.
    Q3: Also, since we are licensed to use the latest ZfD and ZfS, any reason
    to implement ZfS 7 instead of ZfS 6.5? We know that ZfD 7 is pretty much
    the same as ZfD 6.5.2 so we've decided to hold back on this upgrade. If we
    move forward with ZfS, I'm guessing that sticking with same version being
    used with ZfD is a good idea?
    Thanks for any answers!
    Marc

    Marc Charbonneau,
    >Q1: Is it feasible to have the ZfS backend running on the same server that
    >hosts the ZfD backend ?
    >
    >We are trying to find a way to monitor all servers for disk usage. Ideally
    >we want to get a view/report of all servers (regardless of Novell or
    >Microsoft) to see where each disk is at with regards to available space and
    >also see historical trends for disk usage.
    Yes, it's very workable with both ZfD and ZfS on the same box. ZfS can
    monitor all of that; it uses SNMP to do so on both NetWare and
    Windows.
    >
    >Q2: Can ZfS do this for us? We are licensed to use it but so far we've
    >only implemented ZfD 6.5.2 and are quite pleased with the results.
    >
    Glad to hear ZFD is working for you.
    >Q3: Also, since we are licensed to use the latest ZfD and ZfS, any reason
    >to implement ZfS 7 instead of ZfS 6.5? We know that ZfD 7 is pretty much
    >the same as ZfD 6.5.2 so we've decided to hold back on this upgrade. If we
    >move forward with ZfS, I'm guessing that sticking with same version being
    >used with ZfD is a good idea?
    Yes, although ZfS 7 subscribers can run on XP, which I don't think 6.5 can.
    In a way, ZfD and ZfS are very separate and the patches do not have to
    match, but if you can keep them at the same version, then do. :)
    Hope that helps.
    Jared
    Systems Analyst at Data Technique, INC.
    jjennings at data technique dot com
    Posting with XanaNews 1.17.6.6 in WineHQ
    Check out Novell WIKI
    http://wiki.novell.com/index.php/IManager
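    As an illustration of that SNMP-based approach (nothing ZfS-specific), disk usage can be spot-checked from any box with net-snmp installed, assuming the target exposes the standard Host Resources MIB; hostname and community string are placeholders:
    snmpwalk -v2c -c public server1 HOST-RESOURCES-MIB::hrStorageTable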

  • After updating kernel and ZFS modules, system cannot boot

    Starting Import ZFS pools by cache file...
    [ 4.966034] VERIFY3(0 == zap_lookup(ddt->ddt_os, ddt->ddt_spa->spa_ddt_stat_object, name, sizeof (uint64_t), sizeof (ddt_histogram_t) / sizeof (uint64_t), &hht->ddt_histogram[type][class])) failed (0 == 6)
    [ 4.966100] PANIC at ddt.c:124:ddt_object_load()
    [*** ] A start job is running for Import ZFS pools by cache (Xmin Ys / no limit)
    And then occasionally I see
    [ 240.576219] Tainted: P O 3.19.2-1-ARCH #1
    Anyone else experiencing the same?

    Thanks!
    I did the same and it worked... kind of. On the first three reboots it failed (but did not stop the system from booting), producing:
    zpool[426]: cannot import 'data': one or more devices is currently unavailable
    systemd[1]: zfs-import-cache.service: main process exited, code=exited, status=1/FAILURE
    The second boot also resulted in a kernel panic, but as far as I can tell it was unrelated to ZFS.
    After reboots one and three I imported the pool manually.
    From the fourth reboot on, loading from the cache file has always succeeded. However, it takes fairly long (~8 seconds) and even shows
    [*** ] A start job is running for Import ZFS pools by cache (Xmin Ys / no limit)
    briefly. Although I might only notice that because the recent updates sped up other parts of the boot process. Did you observe a slowdown during boot time, too, kinghajj?
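    If the cache-file import keeps failing, one hedged workaround (pool name 'data' taken from the messages above) is to regenerate the cache file the unit reads and make sure the ZFS units are enabled:
    zpool set cachefile=/etc/zfs/zpool.cache data
    systemctl enable zfs-import-cache.service zfs-mount.service zfs.target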

  • Solaris 10 upgrade and zfs pool import

    Hello folks,
    I am currently running "Solaris 10 5/08 s10x_u5wos_10 X86" on a Sun Thumper box where two drives are a mirrored UFS boot volume and the rest are used in ZFS pools. I would like to upgrade the system to "10/08 s10x_u6wos_07b X86" to be able to use ZFS for the boot volume. I've seen documentation that describes how to break the mirror, create a new BE and so on. This system is only being used as an iSCSI target for Windows systems, so there is really nothing on the box that I need other than my ZFS pools. Could I simply pop the DVD in, perform a clean install, and select my current UFS drives as the install location, basically telling Solaris to wipe them clean and create an rpool out of them? Once the installation is complete, would I be able to import my existing ZFS pools?
    Thank you very much

    Sure. As long as you don't write over any of the disks in your ZFS pool you should be fine.
    Darren
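    A minimal sketch of that workflow (pool name 'tank' is a placeholder):
    zpool export tank    # on the old installation, if it still boots; otherwise skip
    zpool import         # on the freshly installed system: lists pools it can see
    zpool import tank    # add -f if the pool was never cleanly exported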

  • Solaris 10 JET install and ZFS

    Hi - so, following on from "Solaris Volume Manager or Hardware RAID?", I'm trying to get my client templates switched to ZFS but it's failing with:
    sudo ./make_client -f build1.zfs
    Gathering network information..
    Client: xxx.14.80.196 (xxx.14.80.0/255.255.252.0)
    Server: xxx.14.80.199 (xxx.14.80.0/255.255.252.0, SunOS)
    Solaris: client_prevalidate
    Clean up /etc/ethers
    Solaris: client_build
    Creating sysidcfg
    WARNING: no base_config_sysidcfg_timeserver specified using JumpStart server
    Creating profile
    Adding base_config specifics to client configuration
    Adding zones specifics to client configuration
    ZONES: Using JumpStart server @ xxx.14.80.199 for zones
    Adding sbd specifics to client configuration
    SBD: Setting Secure By Default to limited_net
    Adding jass specifics to client configuration
    Solaris: Configuring JumpStart boot for build1.zfs
    Solaris: Configure bootparams build
    Starting SMF services for JumpStart
    Adding Ethernet number for build1 to /etc/ethers
    cleaning up preexisting install client "build1"
    removing build1 from bootparams
    removing /tftpboot/inetboot.SUN4V.Solaris_10-1
    svcprop: Pattern 'network/tftp/udp6:default/:properties/restarter/state' doesn't match any entities
    enabling network/tftp/udp6 service
    svcadm: Pattern 'network/tftp/udp6' doesn't match any instances
    updating /etc/bootparams
    copying boot file to /tftpboot/inetboot.SUN4V.Solaris_10-1
    Force bootparams terminal type
    -Restart bootparamd
    Running '/opt/SUNWjet/bin/check_client build1.zfs'
    Client: xxx.14.80.196 (xxx.14.80.0/255.255.252.0)
    Server: xxx.14.80.199 (xxx.14.80.0/255.255.252.0, SunOS)
    Checking product base_config/solaris
    Checking product custom
    Checking product zones
    Product sbd does not support 'check_client'
    Checking product jass
    Checking product zfs
    WARNING: ZFS: ZFS module selected, but not configured to to anything.
    Check of client build1.zfs
    -> Passed....
    So what is "WARNING: ZFS: ZFS module selected, but not configured to to anything." referring to? I've amended my template and commented out all references to UFS so I now have this:
    base_config_profile_zfs_disk="slot0.s0 slot1.s0"
    base_config_profile_zfs_pool="rpool"
    base_config_profile_zfs_be="BE1"
    base_config_profile_zfs_size="auto"
    base_config_profile_zfs_swap="65536"
    base_config_profile_zfs_dump="auto"
    base_config_profile_zfs_compress=""
    base_config_profile_zfs_var="65536"
    I see there is a zfs.conf file in /opt/SUNWjet/Products/zfs/zfs.conf do I need to edit that as well?
    Thanks - J.

    Hi Julian,
    You MUST create /var as part of the installation in base_config, as stuff gets put there really early during the install.
    The ZFS module allows you to create additional filesystems/volumes in the rpool, but does not let you modify the properties of existing datasets/volumes.
    So,
    you still need
    base_config_profile_zfs_var="yes" if you want a /var dataset.
    /export and /export/home are created by default as part of the installation. You can't modify that as part of the install.
    Your zones dataset seems to be fine and as expected; however, zfs_rpool_filesys needs to list ALL the filesystems you want to create. It should read zfs_rpool_filesys="logs zones". This makes JET look for variables of the form zfs_rpool_filesys_logs and zfs_rpool_filesys_zones. (Otherwise only the last assignment is picked up, in your case the zones entry. Remember, the template is a simple name=value set of variables; if you repeat the "name" part, it simply overwrites the value.)
    So you really want:
    zfs_rpool_filesys="logs zones"
    zfs_rpool_filesys_logs="mountpoint=/logs quota=32g"
    zfs_rpool_filesys_zones="mountpoint=/zones quota=200g reservation=200g"
    (incidentally, you don't need to put zfs_pools="rpool" as JET assumes this automatically.)
    So, if you want to alter the properties of /var and /export, the syntax you used would work, if the module was set up to allow you to do that. (It does not currently do it, but I may update it in the future to allow it).
    (Send me a direct e-mail and I can send you an updated script which should then work as expected, check my profile and you should be able to guess my e-mail address)
    Alternatively, I'd suggest writing a simple script and sticking it into the /opt/SUNWjet/Clients/<clientname> directory with the following lines in it:
    varexportquotas:
    #!/bin/sh
    zfs set quota=24g rpool/export
    zfs set quota=24g rpool/ROOT/10/var
    and then running it by setting custom_scripts_1="varexportquotas" in the template.
    (Or you could simply type the above commands the first time you log in after the build. :-) )
    Mike

  • ARC and ZFS

    A question about where the ARC resides in a Sun ZFS 7320 Storage Appliance: does it run in the cache of the storage head or in the RAM of the node?

    Thanks for the reply. I see you are pointing to the physical 'read' hardware in the storage head, i.e. Readzilla; I believe this is where the L2ARC is maintained. My question is about the Adaptive Replacement Cache (ARC) itself: I am confused about where it and the ghost lists are maintained. References in the various blogs talk about main memory/system memory - which memory is this, the memory in the server node or the memory in the storage head, whether for the ZFS 7320 Storage as a standalone device or for the ZFS 7320 Storage Appliance embedded in an Exalogic?
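    For what it's worth, the ARC lives in the main memory of whatever system is running ZFS, which for a 7320 is the storage head rather than the client node; on a plain Solaris host its current and maximum sizes can be read from kstats (a hedged, appliance-independent check - the appliance exposes similar figures through its Analytics):
    kstat -p zfs:0:arcstats:size     # current ARC size in bytes
    kstat -p zfs:0:arcstats:c_max    # configured ARC ceiling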

  • Replace a mirrored disk with SVM meta and ZFS

    hello everybody,
    I've a mirrored disk that has some SVM metadevices configured (/, /usr, /var and swap) and a slice with a ZFS filesystem.
    I need to replace the disk.
    Could someone help me?

    It's quite easy, just check the videos on the link below.
    http://www.powerbookmedic.com/manual.php?id=4
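    A hedged sketch of the usual replacement sequence for a disk carrying both SVM submirrors and a ZFS slice (every device, metadevice and pool name below is an assumption, shown only to illustrate the order of operations):
    metadetach d10 d12                    # detach the submirrors living on the failing disk (repeat per mirror)
    metadb -d c1t1d0s7                    # remove any state database replicas on it
    zpool offline tank c1t1d0s6           # only works if the ZFS slice is part of a redundant pool
    cfgadm -c unconfigure c1::dsk/c1t1d0  # then physically swap the disk
    cfgadm -c configure c1::dsk/c1t1d0
    prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2   # copy the VTOC from the surviving disk
    metadb -a c1t1d0s7                    # re-create the replicas
    metattach d10 d12                     # resync the SVM mirrors
    zpool replace tank c1t1d0s6           # resilver the ZFS slice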

  • SunMC Agent 3.6.1 and ZFS

    Hello,
    I was wondering if a SunMC agent is able to recognize a ZFS filesystem? I've tried it on one of our test servers and there is no category under Kernel Reader - Filesystem Usage for ZFS, only ufs and vxfs.

    >I was wondering if a SunMC agent is able to recognize a ZFS filesystem? I've
    >tried it on one of our test servers and there is no category under Kernel
    >Reader - Filesystem Usage for ZFS, only ufs and vxfs.
    Not quite yet. In fact a SunMC server will refuse to even install on a ZFS partition without some minor changes to its setup utils. But the next release should be fully ZFS aware and compatible.
    Regards,
    [email protected]
    http://www.HalcyonInc.com

  • JET install and ZFS failure

    Hi - I have a JET (JumpStart) server that I've used many times before to install various Solaris SPARC servers, from V240s to T4-1s. However, when I try to install a brand-new T4-2 I keep seeing this on screen and the install reverts to a manual install:
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: one or more file systems failed to mount
    There's been a previous post about this but I can't see the MOS doc that is mentioned in the last post.
    The server came pre-installed with Sol11 and I can see the disks:
    AVAILABLE DISK SELECTIONS:
           0. c0t5000CCA016C3311Cd0 <HITACHI-H109030SESUN300G-A31A cyl 46873 alt 2 hd 20 sec 625>  solaris
              /scsi_vhci/disk@g5000cca016c3311c
              /dev/chassis//SYS/SASBP/HDD0/disk
           1. c0t5000CCA016C33AB4d0 <HITACHI-H109030SESUN300G-A31A cyl 46873 alt 2 hd 20 sec 625>  solaris
              /scsi_vhci/disk@g5000cca016c33ab4
              /dev/chassis//SYS/SASBP/HDD1/disk
    If I drop to the ok prompt there is no hardware RAID configured and raidctl also shows nothing:
    root@solarist4-2:~# raidctl
    root@solarist4-2:~#
    The final post I've found on this forum for someone with this same problem was "If you have an access to MOS, please check doc ID 1008139.1"
    Any help would be appreciated.
    Thanks - J.

    Hi Julian,
    I'm not convinced that your problem is the same one that is described in this discussion:
    Re: Problem installing Solaris 10 1/13, disks no found
    Do you see the missing volume message (Volume 130 is missing) as described in this thread?
    A Google search shows that there are issues with a T4 Solaris 10 install due to a network driver problem, and also if the system is using
    a virtual CD or device through an LDOM.
    What happens when you boot your T4 from the installation media or server into single-user mode? You say that you can see the disks, but can you create a ZFS storage pool on one of these disks manually:
    # zpool create test c0t5000CCA016C3311Cd0s0
    # zpool destroy test
    For a T4 and a Solaris 10 install, the disk will need an SMI (VTOC) label, but I would expect a different error message if that was a problem.
    Thanks, Cindy

  • ISCSI and ZFS Thin Provisioning Sparse Volumes - constraints?

    Hello,
    I am running an iSCSI target using COMSTAR.
    I activated Time Slider (Snapshot feature) for all pools.
    Now I want to set up an iSCSI target using thin provisioning, storing the data in a file system rather than a file.
    Is there any official documentation about thin provisioning?
    All I found was
    http://www.cuddletech.com/blog/pivot/entry.php?id=729
    http://www.c0t0d0s0.org/archives/4222-Less-known-Solaris-Features-iSCSI-Part-4-Alternative-backing-stores.html
    Are there any problems to be expected with the snapshots?
    How would I set up a 100 GByte iSCSI target with the mentioned thin provisioning?
    Thanks
    n00b

    To create a thin provisioned volume:
    zfs create -V <SIZE> -s path/to/volume
    Where <SIZE> is the capacity of the volume and path/to/volume is the ZFS path and volume name.
    To create a COMSTAR target:
    stmfadm create-lu /dev/zvol/rdsk/path/to/volume
    You'll get an LU ID, which you can then use to create a view, optionally with target and host groups to limit access.
    -Nick
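    Putting that together for the 100 GByte target asked about above, a hedged sketch (pool and volume names are placeholders; itadm is the COMSTAR iSCSI port provider and assumes svc:/network/iscsi/target is enabled):
    zfs create -s -V 100g tank/lun0
    stmfadm create-lu /dev/zvol/rdsk/tank/lun0   # prints the LU GUID
    stmfadm add-view <GUID>                      # restrict with host/target groups if needed
    itadm create-target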

  • NFSv4 and ZFS ACLs

    Hello All,
    Is there any specific shell command on Solaris 10 to find out if a file has NFSv4/ZFS-style ACLs set on it?
    -- I see there is a system call available for this purpose, but I did not come across any specific command just to tell whether such ACLs are present.
    -- "ls -v" displays the ACLs themselves, but it is too verbose.
    Also, are there any Perl modules available for checking these ACLs?

    You will need to have concurrent ACLs applied to an interface.
    The access lists are address-family specific in their syntax and features, so they cannot be mixed. An indicative example is shown below.
    interface Ethernet1/1
    ip access-group test-v4 in
    ipv6 traffic-filter test-v6 in
    ip access-list extended test-v4
    permit ip any host 1.1.1.1
    deny   ip any any
    ipv6 access-list test-v6
    permit ipv6 any host 2001:DB8::1
    deny ipv6 any any
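    For the original ZFS ACL question, one commonly used check on Solaris 10 is the compact listing (the path below is hypothetical): a file whose output shows only the three trivial owner@/group@/everyone@ entries has no additional NFSv4 ACEs set.
    ls -V /export/home/file.txt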

  • Ldmp2v  and ZFS  root source system question

    hi
    Reading the ldmp2v doc, it seems to imply that P2V only supports source systems with a UFS root.
    This is fine for S8 and S9 systems.
    What about newer S10 systems with a ZFS root?
    Thanks

    Check the links:
    Transfer global settings - Multiple source systems
    Re: Difference between Transfer Global Setting & Transfer Exchange rates
    Regards,
    B
