ZFS and Jumpstart

Hello All ---
Does anyone know if Sun intends to put ZFS creation functionality into custom jumpstart profiles? I've put together a really lame script to get done what I'm attempting to do, but it stinks; it would be much more professional if the filesys directive could also take a filesystem type of "zfs" or something similarly nifty.
Thanks
-bw

Yes, they do.
I understand ZFS root is scheduled for u4, so mid-2007.
So it will certainly be available then.
But it's possible that non-root jumpstart partitions might be available in u3, i.e. end of 2006.
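For reference, when ZFS root support did eventually land in custom JumpStart profiles (Solaris 10 10/08 and later), the syntax looks roughly like the sketch below; the device names and boot-environment name are only examples.

install_type initial_install
cluster SUNWCXall
# create a mirrored ZFS root pool and name the new boot environment
pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename s10zfsBE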

Similar Messages

  • EBS 7.4 with ZFS and Zones

    The EBS 7.4 product claims to support ZFS and zones, yet it fails to explain how to recover systems running this type of configuration.
    Has anyone out there been able to recover a server using the EBS software that is running with ZFS file systems in both the global zone and sub-zones? (NB: the server's system file store, i.e. /, /usr and /var, is UFS for all zones.)
    Edited by: neilnewman on Apr 3, 2008 6:42 AM

  • What are the differences between Bootpd and JumpStart?

    Please tell me the difference between bootpd and JumpStart.
    Where can I find more info about JumpStart?
    Thank you --- Xing

    What is bootpd?
    What I know is bootparamd and bootparams.
    bootparamd is a server process that provides information from a bootparams database to diskless clients at boot time.
    The bootparams file contains a list of client entries that diskless clients use for booting. Diskless booting clients retrieve this information by issuing requests to a server running the bootparamd program. The bootparams file may be used in conjunction with, or in place of, other sources of bootparams information.
    Information on JumpStart installation:
    http://docs.sun.com:80/ab2/coll.214.7/SPARCINSTALL/@Ab2PageView/6302?DwebQuery=jumpstart&Ab2Lang=C&Ab2Enc=iso-8859-1
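    For reference, a bootparams entry for a JumpStart/diskless install client looks something like the following (host names and paths are made up):

    pluto root=server1:/export/install/Solaris_10/Tools/Boot \
          install=server1:/export/install \
          boottype=:in sysid_config=server1:/export/jumpstart \
          install_config=server1:/export/jumpstart rootopts=:rsize=8192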

  • ZFS and fragmentation

    I do not see Oracle on ZFS often; in fact, this was the first one I was called in to look at. The database was experiencing heavy IO problems, partly from undersized IOPS capability, but also from a lack of performance on the backups - the reading part of them. The IOPS capability was easily extended by adding more LUNs, so I was left with the very poor bandwidth experienced by RMAN reading the datafiles. iostat showed that during a simple datafile copy (both cp and dd with a 1 MiB blocksize), the average IO blocksize was very small and varying wildly. I feared fragmentation, so I set off to test.
    I wrote a small C program that initializes a 10 GiB datafile on ZFS, and repeatedly does:
    1 - 1000 random 8 KiB writes with random data (contents) at 8 KiB boundaries (mimicking an 8 KiB database block size)
    2 - a full read of the datafile from start to finish in 128*8 KiB = 1 MiB IOs (mimicking datafile copies, RMAN backups, full table scans, index fast full scans)
    3 - goto 1
    So it's a datafile that gets random writes and is fully scanned, to see the impact of the random writes on multiblock read performance. Note that the datafile is not grown; all writes are over existing data.
    Even though I expected fragmentation (it must have come from somewhere), I was appalled by the results. ZFS truly sucks big time in this scenario. Whereas on EXT3 (on which I ran the same tests, on the exact same storage) the read timings were stable (around 10 ms for a 1 MiB IO), ZFS started off at 10 ms and went up to 35 ms per 128*8 KiB IO after 100,000 random writes into the file. It has not reached the end of the test yet - the service times are still increasing, so the test is taking very long. I do expect it to stop somewhere, as the file would eventually be completely fragmented and could not be fragmented further.
    I started noticing statements that seem to acknowledge this behavior in some Oracle whitepapers, such as the otherwise unexplained advice to copy datafiles regularly. Indeed, copying the file back and forth defragments it. I don't have to tell you all that this means downtime.
    On the production server this issue has gotten so bad that migrating to a different filesystem by copying the files will take much longer than restoring from disk backup - the disk backups are written once and are not fragmented. They are lucky the application does not require full table scans or index fast full scans, or perhaps unlucky, because otherwise this issue would have become impossible to ignore earlier.
    I observed the fragmentation with all the settings for logbias and recordsize that are recommended by Oracle for ZFS. The ZFS caches were allowed to use 14 GiB of RAM (and mostly did), bigger than the file itself.
    The question is, of course: am I missing something here? Who else has seen this behavior?
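    (A quick way to reproduce the observation above - watching the average physical read size shrink during a scan - is something like this; the datafile path is made up:)

    dd if=/u01/oradata/TEST/users01.dbf of=/dev/null bs=1048576 &
    iostat -xnz 5     # average read size per IO = (kr/s) / (r/s), in KB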

    Stephan,
    "well i got a multi billion dollar enterprise client running his whole Oracle infrastructure on ZFS (Solaris x86) and it runs pretty good."
    For random reads there is almost no penalty, because randomness is not increased by fragmentation. The problem is in scan reads (aka scattered reads). The SAN cache may reduce the impact, or in the case of tiered storage, SSDs obviously do not suffer as much from fragmentation as rotational devices.
    "In fact ZFS introduces a "new level of complexity", but it is worth for some clients (especially the snapshot feature for example)."
    Certainly, ZFS has some very nice features.
    "Maybe you hit a sync I/O issue. I have written a blog post about a ZFS issue and its sync I/O behavior with RMAN: [Oracle] RMAN (backup) performance with synchronous I/O dependent on OS limitations
    Unfortunately you have not provided enough information to confirm this."
    Thanks for that article. In my case it is a simple fact that the datafiles are getting fragmented by random writes. This fact is easily established by doing large scanning read IOs and observing the average block size during the read. Moreover, fragmentation MUST be happening, because that's what ZFS is designed to do with random writes - it allocates a new block for each write; data is not overwritten in place. I can 'make' test files fragmented by simply doing random writes to them, and this reproduces on both Solaris and Linux. Obviously this ruins scanning read performance on rotational devices (i.e. devices for which the seek time is a function of the 'distance' between consecutive file offsets).
    "How does the ZFS pool layout look like?"
    Separate pools for datafiles, redo+control, archives, disk backups and oracle_home+diag. There is no separate device for the ZIL (ZFS intent log), but I tested with setups that do have a separate ZIL device; fragmentation still occurs.
    "Is the whole database in the same pool?"
    As in all the datafiles: yes.
    "At first you should separate the log and data files into different pools. ZFS works with "copy on write""
    It's already configured like that.
    "How does the ZFS free space look like? Depending on the free space of the ZFS pool you can delay the "ZFS ganging" or sometimes let (depending on the pool usage) it disappear completely."
    Yes, I have read that. We never surpassed 55% pool usage.
    Thanks!
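    (For reference, the settings referred to above - the ones usually recommended for Oracle on ZFS - are along these lines; the dataset names are examples:)

    zfs set recordsize=8k dbpool/oradata       # match the 8 KiB database block size
    zfs set logbias=throughput dbpool/oradata
    zfs set logbias=latency logpool/redo       # redo/control keep the default 128k recordsize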

  • Dhcpd SUNW options and jumpstart...

    I am having problems getting my SUNW,Sun-Blade-100s to boot and install via
    boot net:dhcp - install
    It acquires the IP address and boots from the tftp server just fine. The problem is that I have a 'sysidcfg' file on the NFS server and it is read by the install process (that is, I can see the printout that says it has read the sysidcfg file), but the install does not seem to use it to configure the system. It then reverts to a manual configuration.
    Any ideas as to how to get the clients to use the 'sysidcfg'?
    I am using
    Internet Systems Consortium DHCP Server V3.0.5 -- on CentOS Linux
    the tftp server is also on the CentOS Linux server
    The NFS server is running solaris 10.
    To add to the confusion, the NFS server is on a different subnet than the install clients.
    My setup:
    This setup is spread across multiple files by way of 'include "/path/to/foo.conf"', but this is the order in which they appear.
    Obviously the IPs are changed and the FQDNs are changed to example.com.
    ddns-update-style none;
    option domain-name-servers x.y.33.100, x.y.33.101;
    allow bootp;
    always-reply-rfc1048 on;
    option space SUNW;
    option SUNW.root-mount-options code 1 = text;
    option SUNW.root-server-ip-address code 2 = ip-address;
    option SUNW.root-server-hostname code 3 = text;
    option SUNW.root-path-name code 4 = text;
    option SUNW.swap-server-ip-address code 5 = ip-address;
    option SUNW.swap-file-path code 6 = text;
    option SUNW.boot-file-path code 7 = text;
    option SUNW.posix-timezone-string code 8 = text;
    option SUNW.boot-read-size code 9 = unsigned integer 16;
    option SUNW.install-server-ip-address code 10 = ip-address;
    option SUNW.install-server-hostname code 11 = text;
    option SUNW.install-path code 12 = text;
    option SUNW.sysid-config-file-server code 13 = text;
    option SUNW.JumpStart-server code 14 = text;
    option SUNW.terminal-name code 15 = text;
    option SUNW.SbootURI code 16 = text;
    option SUNW.JumpStart-server "mysrv:/export/jumpstart/sol10/jumpstart";
    option SUNW.install-server-hostname "mysrv";
    option SUNW.install-server-ip-address x.y.56.7;
    option SUNW.install-path "/export/jumpstart/sol10";
    option SUNW.root-server-hostname "mysrv";
    option SUNW.root-server-ip-address x.y.56.7;
    option SUNW.root-path-name "/export/jumpstart/sol10/Solaris_10/Tools/Boot";
    option SUNW.sysid-config-file-server = "mysrv:/export/jumpstart/sol10/jumpstart";
    subnet x.y.140.0 netmask 255.255.254.0 {
       authoritative;
       default-lease-time 86400;
       max-lease-time 172800;
       option broadcast-address x.y.141.255;
       option routers x.y.140.1;
       option domain-name "example.com";
       next-server x.y.140.6;
    shared-network example-corp {
       option broadcast-address x.y.63.255;
       option routers x.y.33.1;
       option subnet-mask 255.255.224.0;
       option domain-name "example.com";
       next-server x.y.56.122;
       default-lease-time 43200;
       max-lease-time 86400;
       subnet x.y.32.0 netmask 255.255.224.0 {
          not authoritative;
       # Become authoritative on the networks we 'Own'
       subnet x.y.56.0 netmask 255.255.255.0 {
          authoritative;
       subnet x.y.57.128 netmask 255.255.255.128 {
          authoritative;
    use-host-decl-names on;
    group {
       vendor-option-space SUNW;
       filename "sol10-u3";
    host afu      { hardware ethernet 00:03:ba:xx:xx:xx; fixed-address x.y.140.90;  }
    # -- CUT --
    }
    My sysidcfg file is located at mysrv:/export/jumpstart/sol10/jumpstart/sysidcfg and looks like:
    system_locale=en_CA
    timezone=Canada/Mountain
    terminal=sun-cmd
    timeserver=x.y.56.15
    name_service=DNS {domain_name=example.com
                      name_server=x.y.33.100, x.y.33.101
                      search=example.com}
    network_interface=PRIMARY {dhcp protocol_ipv6=no}
    security_policy=NONE
    root_password=secret
    A couple things that come to mind that I don't have the answer to, and can't find in the Docs:
    1) Do the dhcpd, tftp and the NFS server all need to be the same server for some reason?
    2) In regards to the dhcpd, is there some protocol that the sun-boxes expect that the dhcp server is not giving?
    3) Is having the NFS share across subnets of any consequence?
    Thank you all in advance, any help will be appreciated.

    Pelleux wrote:
    I am having problems getting my SUNW,Sun-Blade-100s to boot and install via boot net:dhcp - install. It acquires the IP address and boots from the tftp server just fine. The problem is that I have a 'sysidcfg' file on the NFS server and it is read by the install process (that is, I can see the printout that says it has read the sysidcfg file), but the install does not seem to use it to configure the system. It then reverts to a manual configuration.
    What questions are asked? Exactly what is the behavior that it shows?
    Any ideas as to how to get the clients to use the 'sysidcfg'?
    It sounds like it is using it, but perhaps the entries are not complete, or something else is happening.
    My sysidcfg file is located at mysrv:/export/jumpstart/sol10/jumpstart/sysidcfg and looks like:
    system_locale=en_CA
    timezone=Canada/Mountain
    terminal=sun-cmd
    timeserver=x.y.56.15
    name_service=DNS {domain_name=example.com
    name_server=x.y.33.100, x.y.33.101
    search=example.com}
    network_interface=PRIMARY {dhcp protocol_ipv6=no}
    security_policy=NONE
    root_password=secret
    The specific bits you need depend on the specific version of Solaris that you're installing. Does the installer ask you questions about all of those items? If not, it sounds like the file is being read. What release is being installed? I don't see any mention of a default_router. You may want to supply one (or define NONE) here.
    (I wouldn't bother with an external timeserver. I find it easier to specify timeserver=localhost and fix the time post-boot. Probably not your problem here, but I've seen it cause problems in older setups).
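    If it keeps prompting for network details, one thing worth trying (per the default_route and timeserver comments above) is spelling the interface out statically rather than relying on dhcp, along these lines; the hostname is made up and the addresses are taken from the dhcpd.conf above:

    timeserver=localhost
    network_interface=PRIMARY {hostname=blade100
                               ip_address=x.y.140.90
                               netmask=255.255.254.0
                               default_route=x.y.140.1
                               protocol_ipv6=no}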
    A couple things that come to mind that I don't have the answer to, and can't find in the Docs:
    1) Do the dhcpd, tftp and the NFS server all need to be the same server for some reason?
    No. The fact that you're getting to this point shows that all of that phase is working. You've already successfully booted from the NFS root filesystem and are running Solaris.
    2) In regards to the dhcpd, is there some protocol that the sun-boxes expect that the dhcp server is not giving?
    No. You're done with DHCP by this point. Congratulations.
    3) Is having the NFS share across subnets of any consequence?
    Shouldn't be.
    Darren

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration of the two is pretty much the same, and the problem therefore seems generic to the setup.
    Within a non-global zone I'm running a Tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU                            
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (according to below). When it settles, everything seems to turn back to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to Tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem less often to appear.
    An unofficial performance evaluation on the database with “vacuum analyze” took 19 minutes on the server and only 1 minute on a desktop pc. This is horrific when taking the hardware into consideration.
    The short story:
    I'm trying different steps but running out of ideas. We've read that the database block size and the file system record size should match: PostgreSQL uses 8 KB blocks and the ZFS default recordsize is 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change.
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.
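    A minimal sketch of those suggestions, assuming the database lives in a dataset called tank/pgdata (name made up):

    zfs set recordsize=8k tank/pgdata    # only affects newly written blocks, so dump/restore or copy the data afterwards
    # optionally cap the ARC so it stops fighting PostgreSQL and Tomcat for the 14 GB of RAM:
    # add this line to /etc/system and reboot (value in bytes, here 4 GB)
    set zfs:zfs_arc_max = 0x100000000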

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones, so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408

    I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here...
    The thread was locked because you were not replying to it.
    You were hijacking that other person's discussion from 2012 to post your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • ZFS and Windows 2003 Diskpart

    I was given some space on a thumper that has ZFS drives. I connect from a Windows 2003 box using iSCSI. I was running out of space and the admin gave me more space, which ended up with my losing the drive, but it came back (is that normal?). When I went to use diskpart to expand the drive into the additional space, it would not work. Can I not use diskpart to extend the drive size, or do I need to do something additional?
    Thanks for your help.
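    If the extra space was handed out by growing a ZFS volume (zvol) shared over iSCSI, the resize on the thumper side is typically just a volsize change (the dataset name below is made up); after that, Windows should see the larger LUN on a rescan and diskpart's extend command should work:

    zfs set volsize=200G tank/iscsi/win2003lun    # then rescan in Windows Disk Management / diskpart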

    Earl,
    I'm stuck with the 2003 itunes install problem, too. Can you post or email your solution?
    THANKS,
    Glenn

  • Solaris 10 6/06 ZFS and Zones, not quite there yet...

    I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement! :
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    Essentially it says that ZFS should not be used for non-global zone root file systems. I was hoping to do this and make it easy: global zone root on UFS, and another disk all ZFS where all non-global whole-root zones would live.
    One can only do so much with only 4 drives that require mirroring! (x4200s, not utilizing an array)
    Sigh... Maybe in the next release (I'll assume ZFS will be supported as 'bootable' by then...)
    Dave

    I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement!:
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    essentially it says that ZFS should not be used for non-global zone root file systems..
    Yes. If you can live with the warning it gives (you may not be able to upgrade the system), then you can do it. The problem is that the installer packages (which get run during an upgrade) don't currently handle ZFS.
    Sigh.. Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then...
    Certainly one of the items needed for bootable ZFS is awareness in the installer. So yes, it should be fixed by the time full support for ZFS root filesystems is released. However, last I heard, full root ZFS support was being targeted for update 4, not update 3.
    Darren
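    For what it's worth, parking a whole-root zone on a ZFS dataset (accepting the upgrade caveat above) looks roughly like this; the pool and zone names are examples:

    zfs create -o mountpoint=/zones datapool/zones
    zfs create datapool/zones/web01
    chmod 700 /zones/web01
    # create -b gives a blank (whole-root) zone configuration
    zonecfg -z web01 'create -b; set zonepath=/zones/web01; commit'
    zoneadm -z web01 install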

  • Solaris 10 x86 u5 dhcp and jumpstart install fail

    Hello,
    I have a problem with the Solaris 10 u5 jumpstart install.
    Before Solaris 10 u3, I could use a jumpstart install with DHCP and get a static IP address (assigned by the DHCP server).
    But now I can't use a jumpstart install in Solaris 10 u5 without setting up a static IP address in sysidcfg.
    I have many x86 machines.
    If I have to set up a different sysidcfg for every machine when I install a new one, I will get into big trouble.
    Here is my sysidcfg:
    ###### sysidcfg #######
    system_locale=en_US
    timeserver=localhost
    timezone=Asia/Taipei
    terminal=sun-color
    security_policy=NONE
    root_password=xxxxxxxx
    nfs4_domain=example.com
    network_interface=primary { hostname=solaris
    default_route=192.168.100.254
    netmask=255.255.255.0
    protocol_ipv6=no}
    name_service=DNS {domain_name=example.com
    name_server=192.168.100.1
    search=example.com}
    Edited by: cheung79 on 2008/4/19 5:29

    I think you should modify the discovery-install script so that you can create the sysidcfg file dynamically. I had the same problem as you, and it is possible to add some arguments to the boot command that you execute at the ok prompt; these arguments can then be handled in the discovery-install script. It's quite easy.
    Regards,
    Przemek

  • Solaris 10 x86 PXE and jumpstart using Linux DHCP server !!

    Hi,
    I am trying to get my Solaris 10 x86 jumpstart rolling.
    I have created the images for the OS, but the only issue ahead of me is using a Linux box as the DHCP server for my x86 boxes to get the image.
    Is it possible to have a Linux host serve as the DHCP server to jumpstart x86 hosts with Solaris 10 x86, or do I need a Solaris host running the DHCP service?
    Any advice on this issue would be appreciated.
    Thanks.
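    Yes, a Linux box running ISC dhcpd works; it just has to hand out the SUNW vendor options (the same "option space SUNW" declarations shown in the "Dhcpd SUNW options and jumpstart" thread above) plus the PXE boot file. A minimal group/host entry might look something like the sketch below; the names, addresses and boot file name are examples, and add_install_client -d on the install server reports the exact values to use:

    group {
       vendor-option-space SUNW;
       next-server 192.168.1.10;             # tftp server holding the boot program
       filename "nbp.sol10x86";              # boot program name as reported by add_install_client -d
       option SUNW.install-server-ip-address 192.168.1.10;
       option SUNW.install-server-hostname "installsrv";
       option SUNW.install-path "/export/install/sol10u5";
       option SUNW.root-server-ip-address 192.168.1.10;
       option SUNW.root-server-hostname "installsrv";
       option SUNW.root-path-name "/export/install/sol10u5/Solaris_10/Tools/Boot";
       host pxeclient { hardware ethernet 00:03:ba:xx:xx:xx; fixed-address 192.168.1.50; }
    }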

    Well, if you don't think the online Documentation helpful, then the better way is reading step-by-step instructions from a book. Get to local bookstore, i.e Barne&Nobles or Border or any big local bookstore, there should be pretty good book for Unix Administrator (Solaris version).
    If you have time and think you can memorize then, read on the spot; otherwise, buy the book for future reference.
    If that's not what you had in mind, then this link of free online book might help : http://www.oreilly.com/catalog/solaris8/chapter/ch04.html
    Normally, oreilly online bookstore offers free books to accredited universities, colleges, and organizations. However, if that option isn't for you, it might even offer free sample chapters that might just suit your needs.
    hoep it helps.
    -van.

  • Zfs and encryption

    We are looking for a filesystem-level encryption technology. At this point most of our services are on ZFS. At one time I saw encryption on the roadmap for ZFS features. Where does this sit now?
    Are there test-bed versions of OpenSolaris where we can test this?
    Is it known if and when ZFS encryption will be in Solaris 10 or beyond?
    Thanks.

    I don't believe that the feature is ready yet, but you may find some more information about the project here: [http://hub.opensolaris.org/bin/view/Project+zfs-crypto/]
    You would probably also be better off asking for a status on the forum/mailing list for the project: [http://opensolaris.org/jive/forum.jspa?forumID=105]
    Edited by: Tenzer on May 11, 2010 9:31 AM
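    For the record, the syntax the zfs-crypto project ended up with (it never made it into Solaris 10) is along these lines, with the dataset name made up:

    zfs create -o encryption=on -o keysource=passphrase,prompt tank/secure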

  • ZFS and grown disk space

    Hello,
    I installed Solaris 10 x86 10/09 using ZFS in vSphere, and the disk image was expanded from 15G to 18G.
    But Solaris still sees 15G.
    How can I convince it to take notice of the expanded disk image? How can I grow the rpool?
    I have searched a lot, but all the documents give answers about adding a disk, not about space that is additionally allocated on the same disk.
    -- Nick

    nikitelli wrote:
    If that is really true what you are saying, then this is really disappointing!
    Solaris can do so many tricks, and in this specific case it falls behind Linux, AIX and even Windows?
    Not even growfs can help?
    Growfs will expand a UFS filesystem so that it can address additional space in its container (slice, metadevice, volume, etc.). ZFS doesn't need that particular tool; it can expand itself based on the autoexpand property.
    The problem is that the OS does not make the LUN expansion visible so that other things (like the filesystems) can use that space. Years and years ago, "disks" were static things that you didn't expect to change size. That assumption is hard-coded into the Solaris disk label mechanics. I would guess that redoing things to remove that assumption isn't the easiest task.
    If you have an EFI label, it's easier (still not great), with fewer steps. But you can't boot from an EFI disk, so you have to solve the problem with a VTOC/SMI label if you want it to work for boot disks.
    Darren
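    For what it's worth, on later Solaris 10 updates, once the bigger LUN is visible to the OS and the slice has been grown with format(1M), the pool side is roughly the following (the device name is an example):

    zpool set autoexpand=on rpool
    zpool online -e rpool c1t0d0s0    # ask ZFS to expand onto the new space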

  • Where can I find the latest research on Solaris 10, zfs and SANs?

    I know Arul and Christian Bilien have done a lot of writing about storage technologies as they relate to Oracle. Where are the latest findings? Obviously there are some exotic configurations that can be implemented to optimize performance, but is there a set of "best practices" that generally works for "most people"? Is there common advice for folks using Solaris 10 and ZFS on SAN hardware (e.g., EMC)? Does double-striping have to be configured with meticulous care, or does it work "pretty well" just by taking some rough guesses?
    Thanks much!

    Hello,
    I have a couple of links that I have used:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
    http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases
    These are not exactly new, so you may have encountered them already.
    List of ZFS blogs follows:
    http://www.opensolaris.org/os/community/zfs/blogs/
    Again, there does not seem to be huge activity on the blogs featured there.
    jason.
    http://jarneil.wordpress.com

  • EBS with ZFS and Zones

    I will post this one again in desperation; I have had a SUN support call open on this subject for some time now, but with no results.
    If I can't get a straight answer soon, I will be forced to port the application over to Windows, a desperate measure.
    Has anyone managed to recover a server and a zone that uses ZFS filesystems for the data partitions?
    I attempted a restore of the server and then the client zone, but it appears to corrupt my ZFS file systems.
    The steps I have taken are listed below:
    Built a server and created a zone, added a ZFS filesystem to this zone and installed the EBS 7.4 client software into the zone, making the host server the EBS server.
    Completed a backup.
    Destroyed the zone and host server.
    Installed the OS and re-created a zone with the same configuration.
    Added the ZFS filesystem and made this available within the zone.
    Installed EBS and carried out a complete restore.
    Logged into the zone and installed the EBS client software then carried out a complete restore.
    After a server reload this leaves the ZFS filesystem corrupt:
    status: One or more devices could not be used because the label is missing
    or invalid. There are insufficient replicas for the pool to continue
    functioning.
    action: Destroy and re-create the pool from a backup source.
    see: http://www.sun.com/msg/ZFS-8000-5E
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    p_1 UNAVAIL 0 0 0 insufficient replicas
    mirror UNAVAIL 0 0 0 insufficient replicas
    c0t8d0 FAULTED 0 0 0 corrupted data
    c2t1d0 FAULTED 0 0 0 corrupted data

    I finally got a solution to the issue, thanks to a SUN tech guy rather than a member of the EBS support team.
    The whole issue revolves around the file /etc/zfs/zpool.cache, which needs to be backed up prior to carrying out a restore.
    Below is a full set of steps to recover a server using EBS7.4 that has zones installed and using ZFS:
    Instructions On How To Restore A Server With A Zone Installed
    Using the server's control guide, re-install the OS from CD, configuring the system disk to the original sizes; do not patch at this stage.
    Create the zpools and the ZFS file systems that existed for both the global and non-global zones.
    Carry out a restore using:
    If you don't have a bootstrap printout, read the backup tape to get the backup indexes.
    cd /usr/sbin/nsr
    Use scanner -B -im <device> to get the ssid number and the record number, e.g.:
    scanner -B -im /dev/rmt/0hbn
    cd /usr/sbin/nsr
    Enter: ./mmrecov
    You will be prompted for the SSID number followed by the file and record number.
    All of this information is on the Bootstrap report.
    After the index has been recovered:
    Stop the backup daemons with: /etc/rc2.d/S95networker stop
    Copy the original res file to res.org and then copy res.R to res.
    Start the backup daemons with: /etc/rc2.d/S95networker start
    Now run nsrck -L7 to reconstruct the indexes.
    You should now have your backup indexes intact and be able to perform standard restores.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.org
    To restore the whole system:
    Shut down any sub-zones
    cd /
    Run /usr/sbin/nsr/nsrmm -m to mount the tape
    Enter: recover
    At the Recover prompt enter: force
    Now enter: add * (to restore the complete server; this will now list out all the files in the backup library selected for restore)
    Now enter: recover to start the whole system recovery, and ensure the backup tape is loaded into the server.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache.org /etc/zfs/zpool.cache
    Reboot the server
    The non-global zone should now be bootable; use zoneadm -z <zonename> boot
    Start an X session into the non-global zone and carry out a selective restore of all the ZFS file systems.
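    (For the "create the zpools" step above, matching the original layout in this example would be something like the following; the filesystem name under the pool is hypothetical:)

    zpool create p_1 mirror c0t8d0 c2t1d0
    zfs create p_1/data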
