ZFS and "Missing" Disk

I have a V240 in my lab.
I previously configured a ZFS pool using one of its two disks.
I needed to rebuild the box for other tests, so I reinstalled Solaris 10 using only one disk.
Now I am unable to see the second disk that was originally configured in the zpool.
What do I need to do to "recover" this disk so I can use it again?
Thanks

That's because zpool/ZFS puts an EFI label on the disk: slices 0-6 plus a reserved slice 8,
while normal disk usage is an SMI label with slices 0-7, where slice 2 conventionally covers the entire disk.
To get the label back to SMI, do the following:
run format -e (expert mode)
select the drive in question
enter "l" (lowercase L) for label
select the entry for SMI and write it to the disk
now exit format
The disk should now be available for use outside of zpool/ZFS.
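For reference, a typical format -e session for this relabel looks roughly like the following transcript. The disk names and menu text are examples and vary between Solaris releases; check format(1M) on your system:

```
# format -e
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN72G ...>
       1. c1t1d0 <SUN72G ...>
Specify disk (enter its number): 1
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Ready to label disk, continue? y
format> quit
```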

Similar Messages

  • ZFS and grown disk space

    Hello,
    I installed Solaris 10 x86 10/09 using ZFS in vSphere, and the disk image was later expanded from 15 GB to 18 GB.
    But Solaris still sees 15 GB.
    How can I convince it to take notice of the expanded disk image? How can I grow the rpool?
    I have searched a lot, but all the documents answer how to add a disk, not what to do when the space is additionally allocated on the same disk.
    -- Nick

    nikitelli wrote:
    "If that is really true what you are saying, then this is really disappointing!
    Solaris can do so many tricks, and in this specific case it drops behind Linux, AIX, and even Windows?
    Not even growfs can help?"
    growfs will expand a UFS filesystem so that it can address additional space in its container (slice, metadevice, volume, etc.). ZFS doesn't need that particular tool; it can expand itself based on the autoexpand property.
    The problem is that the OS does not make the LUN expansion visible so that other things (like the filesystems) can use that space. Years and years ago, "disks" were static things that you didn't expect to change size. That assumption is hard-coded into the Solaris disk label mechanics. I would guess that redoing things to remove that assumption isn't the easiest task.
    If you have an EFI label it's easier (still not great, but fewer steps). However, you can't boot from an EFI-labeled disk, so you have to solve the problem with a VTOC/SMI label if you want it to work for boot disks.
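    Once the slice under the pool has been enlarged with format, the usual ZFS side of the sequence can be sketched as below. This is a dry-run sketch that only prints the commands rather than running them, since they act on live pools; the pool and device names are examples, and autoexpand and "zpool online -e" should be verified against your release's zpool(1M):

    ```shell
    # Dry-run sketch: print the commands that let a pool grow into newly
    # allocated space on the same disk. Nothing is executed against a pool.
    grow_pool() {
      pool="$1"; dev="$2"
      echo "zpool set autoexpand=on $pool"   # let the pool claim grown devices
      echo "zpool online -e $pool $dev"      # re-read the (enlarged) label now
      echo "zpool list $pool"                # SIZE should then show the new capacity
    }

    grow_pool rpool c1t0d0s0
    ```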
    Darren

  • Remove programs and missing disk space

    I just switched to the Mac a few weeks ago and I have a problem removing programs on my MacBook.
    Is it OK to just send a program to the Trash, and that's all there is to removing it? I've seen that some programs, such as "Adobe Reader", come with an uninstall dmg file, but others do not.
    After I remove a program, my disk space still doesn't change, so did I really remove it? My disk is 120 GB, but I only have 60 GB left now even though I did not add a lot of programs, so why has it disappeared?
    MacBook Intel C2D 2.16 GHz   Mac OS X (10.4.10)   30 GB iPod Video (Black)

    Dragging a file to the Trash does not delete it from the drive until you empty the Trash.
    Uninstalling Software: The Basics
    Most OS X applications are completely self-contained "packages" that can be uninstalled by simply dragging the application to the Trash. Most applications create preference files, which are stored in the /Home/Library/Preferences/ folder. Although they do nothing once you delete the associated application, they do take up some disk space. If you want, you can locate them in the above location and delete them, too.
    Some applications may install an uninstaller program that can be used to remove the application. In some cases the uninstaller may be part of the application's installer, and is invoked by clicking on a Customize button that will appear during the install process.
    Some applications may install components in the /Home/Library/Application Support/ folder. Check there to see if the application has created a folder, and delete that folder as well. Again, such items don't do anything but take up disk space once the application is trashed.
    Some applications may install a startup item or a login item. Startup items are usually installed in the /Library/StartupItems/ folder and less often in the /Home/Library/StartupItems/ folder. Login items are set in the Accounts preferences. Open System Preferences, click on the Accounts icon, then click on the Login Items tab. Locate the item in the list for the application you want to remove and click on the "-" button to delete it from the list.
    If an application installs any other files the best way to track them down is to do a Finder search using the application name or the developer name as the search term.
    There are also several shareware utilities that can uninstall applications:
    AppZapper
    CleanApp
    Yank
    SuperPop
    Uninstaller
    Spring Cleaning
    Look for them at www.versiontracker.com or www.macupdate.com.
    For more information visit The XLab FAQs and read the FAQ on removing software.
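    The "Finder search" step can also be done from Terminal. A small sketch (the function name is made up, and "Adobe Reader" is just an example search term; nothing is deleted, so you can review the output before trashing anything):

    ```shell
    # List likely leftovers for an app under the usual per-user locations.
    list_leftovers() {
      app="$1"
      find "$HOME/Library/Preferences" "$HOME/Library/Application Support" \
           -iname "*${app}*" 2>/dev/null
    }

    list_leftovers "Adobe Reader"
    ```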

  • ZFS and fragmentation

    I do not see Oracle on ZFS often; in fact, I was called in to meet my first. The database was experiencing heavy IO problems, both from undersized IOPS capability and from a lack of performance on the backups, specifically the reading part. The IOPS capability was easily extended by adding more LUNs, so I was left with the very poor bandwidth experienced by RMAN reading the datafiles. iostat showed that during a simple datafile copy (both cp and dd with a 1 MiB blocksize), the average IO blocksize was very small and varied wildly. I feared fragmentation, so I set off to test.
    I wrote a small C program that initializes a 10 GiB datafile on ZFS, and repeatedly does:
    1 - 1000 random 8 KiB writes of random data at 8 KiB boundaries (mimicking an 8 KiB database block size)
    2 - a full read of the datafile from start to finish in 128*8 KiB = 1 MiB IOs (mimicking datafile copies, RMAN backups, full table scans, index fast full scans)
    3 - goto 1
    So it's a datafile that receives random writes and is then full-scanned, to see the impact of the random writes on the multiblock read performance. Note that the datafile is not grown; all writes land over existing data.
    Even though I expected fragmentation (it must have come from somewhere), I was appalled by the results. ZFS truly sucks big time in this scenario. Where EXT3, on which I ran the same tests (on the exact same storage), showed stable read timings (around 10 ms for a 1 MiB IO), ZFS started off at 10 ms and went up to 35 ms per 128*8 KiB IO after 100,000 random writes into the file. The test has not finished yet; the service times are still increasing, so it is taking very long. I do expect it to stop somewhere, as the file would eventually be completely fragmented and could not fragment further.
    I started noticing statements that seem to acknowledge this behavior in some Oracle whitepapers, such as the otherwise unexplained advice to copy datafiles regularly. Indeed, copying the file back and forth defragments it. I don't have to tell you that all this means downtime.
    On the production server this issue has gotten so bad that migrating to a different filesystem by copying the files would take much longer than restoring from disk backup; the disk backups are written once and are not fragmented. They are lucky the application does not require full table scans or index fast full scans. Or perhaps unlucky, because otherwise this issue would have become impossible to ignore earlier.
    I observed the fragmentation with all settings for logbias and recordsize that Oracle recommends for ZFS. The ZFS caches were allowed to use 14 GiB of RAM (and mostly did), bigger than the file itself.
    The question is, of course: am I missing something here? Who else has seen this behavior?
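    For anyone who wants to reproduce the pattern without the C program, the same write-then-scan cycle can be sketched in shell with dd, scaled down to a 16 MiB file. The sizes and counts here are arbitrary stand-ins for the 10 GiB original; time the final sequential dd on ZFS versus another filesystem to see the effect:

    ```shell
    # Scaled-down version of the test: random 8 KiB writes at 8 KiB boundaries
    # into a fixed-size file, then one sequential read in 1 MiB IOs.
    F=$(mktemp)
    SZ_MB=16
    dd if=/dev/zero of="$F" bs=1048576 count=$SZ_MB 2>/dev/null

    BLOCKS=$((SZ_MB * 1024 / 8))          # number of 8 KiB slots in the file
    i=0
    while [ "$i" -lt 200 ]; do            # 200 random single-block rewrites
      off=$((RANDOM % BLOCKS))            # bash/ksh $RANDOM; any RNG will do
      dd if=/dev/urandom of="$F" bs=8192 seek="$off" count=1 conv=notrunc 2>/dev/null
      i=$((i + 1))
    done

    # the "full scan" -- on rotating disks under ZFS this is where service
    # times degrade as the file fragments; note the file itself never grows
    dd if="$F" of=/dev/null bs=1048576 2>/dev/null
    SIZE=$(wc -c < "$F" | tr -d ' ')
    ```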

    Stephan,
    "well i got a multi billion dollar enterprise client running his whole Oracle infrastructure on ZFS (Solaris x86) and it runs pretty good."
    For random reads there is almost no penalty, because randomness is not increased by fragmentation. The problem is in scan reads (aka scattered reads). The SAN cache may reduce the impact, and in the case of tiered storage, SSDs obviously do not suffer as much from fragmentation as rotational devices.
    "In fact ZFS introduces a "new level of complexity", but it is worth for some clients (especially the snapshot feature for example)."
    certainly, ZFS has some very nice features.
    "Maybe you hit a sync I/O issue. I have written a blog post about a ZFS issue and its sync I/O behavior with RMAN: [Oracle] RMAN (backup) performance with synchronous I/O dependent on OS limitations.
    Unfortunately you have not provided enough information to confirm this."
    Thanks for that article. In my case it is a simple fact that the datafiles are getting fragmented by random writes. This is easily established by doing large scanning read IOs and observing the average block size during the read. Moreover, fragmentation MUST be happening, because that's what ZFS is designed to do with random writes: it allocates a new block for each write; data is not overwritten in place. I can 'make' test files fragmented simply by doing random writes to them, and this reproduces on both Solaris and Linux. Obviously this ruins scanning read performance on rotational devices (i.e. devices for which the seek time is a function of the distance between consecutive file offsets).
    "How does the ZFS pool layout look like?"
    Separate pools for datafiles, redo+control, archives, disk backups, and oracle_home+diag. There is no separate device for the ZIL (ZFS intent log), but I tested with setups that do have a separate ZIL device; fragmentation still occurs.
    "Is the whole database in the same pool?"
    as in all the datafiles: yes.
    "At first you should separate the log and data files into different pools. ZFS works with "copy on write""
    it's already configured like that.
    "How does the ZFS free space look like? Depending on the free space of the ZFS pool you can delay the "ZFS ganging" or sometimes let (depending on the pool usage) it disappear completely."
    Yes, I have read that; we never surpassed 55% pool usage.
    thanks!

  • Have a macbook pro running osx lion, connected a kindle fire using micro usb to transfer pictures and the disk icon won't show up on the desktop.  When I ran disk utilities the kindle does show up there.  Any thoughts as to why it's not recognizing?

    Have a MacBook Pro running OS X Lion. Connected a Kindle Fire using micro USB, and the disk icon doesn't show up on the desktop. I also sometimes get a message saying the computer doesn't recognize the device. When I opened Disk Utility the Kindle disk does show up there. What am I missing?

    cashworth wrote:
    Kindle is connected directly, not thru a hub.  Also called Amazon Kindle support, tech was baffled when I told her I was using a Mac.  She said, "I'm not really familiar with apple products." 
    Precisely why it's not an iPad, and why it doesn't cost very much.  An Apple rep would tell you how to connect an iPad to every computer.
    Go here:  http://www.amazon.com/gp/help/customer/display.html/ref=sv_kinh_9?ie=UTF8&nodeId=200127470

  • How do I find or replace the missing disk image for my HD

    How do I find or replace the missing disk image for my HD? It is missing when I try to back up with Time Machine.
    I asked for assistance the other day with Time Machine.
    Now I have worked out that this is the reason I cannot back up with TM: there is no disk image on my portable hard drive.
    I saw a post on YouTube explaining that you can copy your HD's disk image from your Time Machine backups, enabling Time Machine to back up again.
    But I cannot find where that backup image is situated in my TM backups.
    Some advice would be greatly appreciated, as I am rather new to Macintosh.
    Thank you
    Aiden

    Hi again.
    Just re-reading your first post, I wonder if you're confusing a disk image backup with Time Machine?
    It's certainly possible to create a disk image of your Macintosh HD and copy it to an external drive.
    However, as a backup that method has a drawback: you can't boot from a disk image directly. It needs to be mounted on a bootable system before it can be used. So it's not really much use if the internal HD fails, at least as far as allowing you to carry on working while the repair is carried out.
    Much better (and simpler) is to clone the whole drive to the external using CarbonCopyCloner or SuperDuper (there are plenty of others, but those appear to be the most popular).
    That will give you a bootable backup which can be kept up to date by incremental backups using the same applications.
    In the event of total internal HD failure all you need to do is replace the drive then boot from the clone and reverse clone to the internal to get up and running again.
    Many of us use both a bootable clone and Time Machine (on separate drives), as each has its different uses.

  • Bridge CS5 - Adding metadata to RAW files is hit and miss

    Can someone help me??? Bridge CS5 will only add the metadata I enter to the files it feels like adding it to. It is hit and miss. Some files will take it, while others get the message "There was an error writing metadata to 100824_Banks_06.NEF". Sometimes it will go through my whole folder of RAW files; other times it will only allow one or two of the files to get the information. I am going crazy! Is this a preference setting issue, or what?

    Tai Lao wrote:
    Bridge is the weakest of all Adobe applications I use.  It suffers from the same issues since its first release, when we were told to forgive the teething problems of any version 1 application.  Now it's in version 4.x.
    I actually spent most of the last two days wrestling with Bridge, and getting pretty annoyed with it. The "Error writing metadata" bug caused me no end of problems in trying to reorganise my keywording of 20,000 photos, turning the air blue in the process.
    I agree that Bridge seems to be the poor relation in the Creative Suite. I call it the Less Glamourous Sister. It needs to be 64-bit to host 64-bit Camera Raw, and it needs to manage resources faster and more efficiently.
    I've spent a fair amount of time watching Bridge's activity with a resources monitor utility, and it's a very busy lady. It seems to poll several drives every second or two, accessing the registry, disk-based settings and library files regularly too, even when it appears to be doing nothing. And, even though I had indexed and cached my whole 20,000 photo collection, Bridge insists on recreating some thumbnails and previews at random intervals.*
    I decided that the anti-virus software suggestion was a red herring in my case (I now realise this is the Mac forum anyway). Photos which gave an error became subsequently locked for several minutes, and I had to edit the sidecar files by hand. In the end, I worked out that, even though there is no on-screen indication of activity, Bridge is quietly working its way through every item in the Content pane, and it is this action which seems to coincide with the Metadata Error. Wait for it to end, and chances are much reduced, but not fully removed.
    Oh, and Adobe's customer treatment might be debatable, but that helpful customer you quoted was a regular abuser of other forum users, so I like to see a silver lining from that post.
    * Prefer Embedded thumbnails was also selected.
    Message was edited by: Yammer P

  • "Drive is offline" message in Missing Disks window

    Hello support person,
    Hello support person,
    When I try to open my project I get a Missing Disks window, and the message says: "To preserve the integrity of the data used by Final Cut Pro HD, it is necessary to ensure the existence of the following path(s): 'Justine's Drive' is offline." The options the window gives are "Quit", "Check Again", and the highlighted one, "Reset Scratch Disks".
    The first window that pops up, before the Missing Disks one, is an External A/V window which says it is unable to locate the following external devices: Apple FireWire NTSC (720 x 480). "Your system configuration may have changed, or your deck/camera may be disconnected or turned off. Please check connections ...." As I have done throughout working on the project after the initial digitizing process with my camera, I clicked "Continue" on this window, but this time the Missing Disks window comes up.
    My external drive called "Justine's Drive" is mounted and appears on the desktop. I have tried to unmount and unplug the drive and then just open Final Cut Pro HD (Academic 4.5) straight from the Applications folder on my laptop's hard drive, but I get the same messages, and Final Cut will not even open independent of the project.
    The last time I worked on the computer (yesterday) everything was fine. The only thing is that I transported the laptop and drive to another location. But I had them well protected and didn't drop them or anything...
    Hope you can help. Thank you.
    G4 667 PowerPC Laptop   Mac OS X (10.4.6)   512 MB RAM

    I found answer in archive! You guys rock! Thanks!!!!

  • Solaris 10 6/06 ZFS and Zones, not quite there yet...

    I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs which drained my excitement:
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    Essentially it says that ZFS should not be used for non-global zone root file systems. I was hoping to do exactly this, and make it easy: global zone root on UFS, and another disk all ZFS, where all non-global whole-root zones would live.
    One can only do so much with only 4 drives that require mirroring! (X4200s, not utilizing an array.)
    Sigh... Maybe in the next release (I'll assume ZFS will be supported as 'bootable' by then)...
    Dave

    "Essentially it says that ZFS should not be used for non-global zone root file systems..."
    Yes. If you can live with the warning it gives (you may not be able to upgrade the system), then you can do it. The problem is that the installer packages (which get run during an upgrade) don't currently handle ZFS.
    "Sigh... Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then)..."
    Certainly one of the items needed for bootable ZFS is awareness in the installer. So yes, it should be fixed by the time full support for ZFS root filesystems is released. However, last I heard, full ZFS root support was being targeted for update 4, not update 3.
    Darren

  • "Missing Disks" Error Message - Unable to open FCP

    Every time I try to open FCP, a "missing disks" error message shows up and prevents me from opening the application. It says, "To preserve the integrity of the data used by Final Cut Pro, it is necessary to ensure the existence of the following path(s):" and then "Macintosh HD is offline." There are two options in the error message, "Check Again" and "Reset Scratch Disks." When I click "Check Again," nothing happens. When I click "Reset Scratch Disks," it takes me to a "Scratch Disks" window, which shows "9.8GB on Macintosh HD is offline." I click "OK," but it only takes me back to the same "missing disks" error message. What should I do to resolve this error and open FCP?
    Thanks in advance.

    Hi -
    If you are mid-project, then first I would find your project file and make a duplicate by Control-clicking on it, and selecting Duplicate. You can rename that new file so it is easy to remember. Once that is done, you have a protection back up of your project file so that whatever happens, you can recover back to this point in your project by opening the duplicated file.
    Selecting (or Re-Selecting) a Scratch Disk does not affect what has been done in a project's past, it will only affect what FCP will do once it is launched and where it will place media and renders as you go forward.
    A bigger concern/interest to me is why the disk Macintosh HD is offline, since Macintosh HD is usually the name of the internal system disk on your computer.
    Have you renamed your internal hard disk (either intentionally or unintentionally)?
    Do you have external disks connected to your computer?
    Have you installed any new software - like Snow Leopard, or Fusion?
    If you have had the disk Macintosh HD selected as your scratch disk all throughout your project up to this point, it is likely that the disk Macintosh HD has your captured footage and renders. So . . . if it is off-line as a scratch disk, once you reset your scratch disk to an available drive and open your project, it is likely that the project will show that all your media, which it previously associated with the missing disk is offline as well.
    If that is the case, you will need to reconnect your media and re-render your renders. This can be done by selecting your offline clips and Control clicking on one of them and selecting Reconnect Media . . .
    *If you are not feeling confident about proceeding, I would recommend a trip to an Apple store to get someone to walk you through the process.*
    Hope this helps.

  • Horrible problem: 10.4.3 and lost disk

    Here it is:
    After hanging around here trying to solve a different problem, I upgraded from 10.4.2 to 10.4.3.
    Repaired permissions.
    Started Logic. Missing samples.
    Checked the disk (2nd internal drive). The folder marked VSO, containing 300 GB of the Vienna Library, is listed, but with no folder icon beside it.
    When I go to select the folder, it disappears. I run EXS Manager; it says 'no samples are present.'
    Restart. Same problem.
    Apple+I on the faulty disk says the disk is 360GB full... and here's the funny thing...
    Every other folder on the disk is useable and accessible except the one holding the samples (the largest folder on the disk)
    All other Apps seem to be normal.
    Any ideas at all would be very gratefully received.

    Now, I don't want to scare you, but I just spent my weekend retrieving all my data from my LaCie FireWire drive, and I had the same error in Disk Utility. I tried DiskWarrior and TechTool Pro, but only Data Rescue from Prosoft was able to help... (http://www.prosofteng.com/products/data_rescue.php)
    Brilliant tool. The interface is $hit, but I was able to get everything back, and the disk wouldn't even mount before that...
    Hope you don't have to go through the same trouble, but just keep Data Rescue in mind...
    Cheers.

  • Missing disk space on T400S

    My T400S has a ~160GB SSD disk - however, there seems to be ~75GB which are permanently unavailable on C: drive, and do not show up in a directory listing. This is NOT the Lenovo or Service partitions, which take up only ~8GB between them. Moreover the Drive Space manager in the latest ThinkVantage Toolbox reports that it cannot scan these files due to access restrictions. I am logged in as administrator!
    Solved!
    Go to Solution.

    It's probably taken up by System Restore's restore points. These are found in C:\System Volume Information, which is inaccessible even to the Super-Administrator. The only way to get rid of them is via the "Create a Restore Point" page, where there is an option to clear all of the restore points. Alternatively, CCleaner's System Restore tab allows you to selectively delete restore points as you desire.
    It may be a good idea to disable System Restore on an SSD to reduce the amount of unnecessary disk writes to the drive.
    Alternatively, the missing disk space may be due to the ThinkVantage Rescue and Recovery restore point(s). I'm not exactly sure how these work, but I think they're found in a folder in  C:\RRbackup or some variation on the spelling thereof.
    W520: i7-2720QM, Q2000M at 1080/688/1376, 21GB RAM, 500GB + 750GB HDD, FHD screen
    X61T: L7500, 3GB RAM, 500GB HDD, XGA screen, Ultrabase
    Y3P: 5Y70, 8GB RAM, 256GB SSD, QHD+ screen

  • Missing disk space Problem

    Hi,
    I recently purchased a T61p from Lenovo, equipped with a 120 GB hard drive, and I have never regretted buying a ThinkPad since. Except that after a couple of weeks I kept losing huge amounts of disk space from my hard drive. I assumed this was due to my downloads, and hence purchased a WD 500 GB hard drive and shifted my downloads folder, including everything I ever downloaded, to it. I then found out I gained only the 20-30 GB that was the size of my downloads folder (I never bothered to check it earlier). Now I was missing a whopping 60 GB. I don't know if a solution already exists for this, as I could not find any search results for missing disk space. (Users who are suffering from the same issue and do not want to read about all the trouble I went through can scroll down to the solution.) As always, I sought the almighty Google's help. There were a million pages on how Vista eats up space for backup and recovery. But after about an hour of surfing over a hundred pages and trying the solutions in vain, I found out that Vista takes at most 15% of disk space, and I had lost 60 GB. Then I found out that the trouble was not with Vista but with the ThinkVantage Rescue and Recovery utility. I remembered a prompt from the ThinkVantage software, not too long ago, stating that my hard drive was too low on space and that a backup was not possible.
    Solution: (Not recommended if you are prone to crashes and always seek R&R's help to regain control over your box.) Delete all the backups and disable R&R. Go to the Rescue and Recovery program, click the advanced menu button, and select delete all backups. Then go to preferences and uncheck the scheduled backups. (That's how it worked on my T61p; I don't know if the interface is the same on others.)
    I always assumed the extra 6 GB partition that ThinkPads usually have held all the restoration data. But I guess 6 GB is too small for that, and it only holds the data to restore the PC back to factory condition. I found it funny for a program to be scheduled to make backups of 30 GB, or perhaps of the total used space of a 120 GB hard drive. What was even funnier was that this program was scheduled to run daily! I don't know if I unwittingly set that option myself or if it was the default. I wanted to post here just so others know of this well in advance, before they go through all the trouble I did. Please delete this post if this is a known issue. If this has been fixed or sorted out, please let me know of that too.
    Solved!
    Go to Solution.

    Try disabling RR. That might work. You might want to enable it again later.
    Note from Moderator: Please update your profile with your correct country location as per the forum rules. Products, options and services vary from market to market. Knowing your location helps us help you.
    Message Edited by nonny on 04-19-2008 11:44 AM

  • Missing disk space - 'get info' differs from 'du'

    Hi there
    I have three harddrives in the server - one 80 GB drive for the system (called Server HD) and two 1TB drives for the home directories and network shares.
    The problem is that, according to 'get info', the 80GB drive is almost full (only 6 G left) and I don't know where all the space is used.
    When I use the applications 'Disk Inventory' or 'WhatSize' (in administrator mode), or the terminal with 'sudo du -shx /Volumes/Server\ HD', they all report a disk usage of about 23 GB, which is more plausible, as the system is just a few weeks old and the home directories are not located on this volume.
    But if I use 'df' to see the free space, it reports the same as 'get info': only about 6 GB left.
    According to the system profiler the smart status is verified, so the disk should be ok (I think).
    Unfortunately I am pretty new to Macs so I don't have the slightest idea what to do now to check and hopefully reclaim the missing disk space.
    Thanks for any tips
    Claudia

    Look in the /Volumes directory; there should only be an alias or mount point for any volumes you have mounted. If a volume didn't mount properly, it is possible to find "name" and "name_1" entries, and for actual files to now reside there.
    Another utility I like is OmniDiskSweeper from OmniGroup,
    http://www.omnigroup.com/omnidisksweeper which should easily show where space is used.
    For books, Missing Manual, and O'Reilly OS X for Unix Geeks
    http://oreilly.com/catalog/9780596520625/index.html
    http://books.slashdot.org/books/08/02/27/1551206.shtml
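    The du-versus-df comparison from the question can also be scripted so the two numbers sit side by side. A sketch (the function name is made up; note that df reports the whole volume while du reports one directory tree, so a large gap points at space du cannot see: other mount points, restricted directories, or deleted-but-still-open files):

    ```shell
    # Print what du sees under a directory next to what df says is used on
    # the volume holding it.
    compare_usage() {
      d="$1"
      used_du=$(du -sk "$d" 2>/dev/null | awk '{print $1}')
      used_df=$(df -k "$d" | awk 'NR==2 {print $3}')
      echo "du: ${used_du} KB under $d / df: ${used_df} KB used on its volume"
    }

    compare_usage /tmp
    ```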

  • How to reclaim the missing disk space

    I have a T61 with a 250 GB HD. When I migrated to Windows XP, the local disk C: showed only 120 GB, and no other local disks appeared on the Windows desktop. Can anyone help me recover the missing disk space? I am not a computer
    expert, so please show it in as much detail as you can. Thank you for your attention.

    You probably installed from an original Windows XP disc [w/o SP2], and original XP only supports disks up to 128 GB. If you install SP2/SP3 later on, the support will be added, but the partition won't magically grow by itself. If you want to resize it without losing data, you have to use 3rd-party tools like the GParted Live CD [free] or Partition Magic [commercial]... http://gparted.sourceforge.net/
    Bored SysAdmin Blog
