Logical volume is 'not' in filesystem?

Hi,
I have installed Arch Linux on a PogoPlug B01 and have used LVM to combine two HDDs into a software RAID 0-style logical volume, as shown below:
lvdisplay
  --- Logical volume ---
  LV Path                /dev/VolGroup00/lvolpink
  LV Name                lvolpink
  VG Name                VolGroup00
  LV UUID                pchxXD-c2j4-phRy-v5q5-3Bfd-R1Vy-TsAete
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                3.64 TiB
  Current LE             953862
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0
However, the LV Path does not exist, and the LV doesn't show up in /dev/mapper; the only thing in that directory is 'control'. If I add the LV UUID to /etc/fstab and run mount -a, I get the usual 'special device does not exist' error. Does anyone know what may be causing this?
Thanks,
Chris
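
A common first check for this symptom is whether the volume group was actually activated: an LV that lvdisplay reports as available but that has no node under /dev/mapper often just needs the VG activated and the device-mapper module loaded. A rough sketch using the names from the lvdisplay output above; the fstab line is only an illustration and the filesystem type is an assumption:

modprobe dm-mod
vgscan
vgchange -ay VolGroup00
ls -l /dev/mapper /dev/VolGroup00
# mount by LV path rather than by the LV UUID in /etc/fstab, for example:
# /dev/VolGroup00/lvolpink  /mnt/pink  ext4  defaults  0  2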

Hi,
If possible, could you try buying without a catalogue, just to rule out a business case.
If the error still occurs for a non-catalogue purchase, then you probably have a customizing error.
If it works, then we have to focus on the catalogue flow side.
Kind regards,
Yann

Similar Messages

  • 10g ASM on Logical Volumes vs. Raw devices and SAN Virtualization

    We are looking at setting up our standards for Oracle 10g non-rac systems. We are looking at the value of Oracle ASM in our environment.
    As per the official Oracle documentation, raw devices are preferred to using Logical Volumes when using ASM.
    From here: http://download.oracle.com/docs/cd/B19306_01/server.102/b15658/appa_aix.htm#sthref723
    "Note: Do not add logical volumes to Automatic Storage Management disk groups. Automatic Storage Management works best when you add raw disk devices to disk groups. If you are using Automatic Storage Management, then do not use LVM for striping. Automatic Storage Management implements striping and mirroring."
    Also, as per Metalink note 452924.1:
    "10) Avoid using a Logical Volume Manager (LVM) because an LVM would be redundant."
    The issue is: if we use raw disk devices presented to ASM, the disks don't show up as used in the unix/AIX system tools (i.e. smit, lspv, etc.). Hence, when looking for raw devices on the system to add to filesystems/volume groups/etc., it's highly possible that a UNIX admin will grab a raw device that is already in use by Oracle ASM.
    Additionally, we are using an IBM DS8300 SAN with an IBM SAN Volume Controller (SVC) in front of it. Hence, we already have storage virtualization and I/O balancing at the SAN/hardware level.
    I'm looking for a little clarification on the following questions, as my understanding of their answers seems to conflict:
    QUESTION #1: Can anyone clarify/provide additional detail as to why Logical volumes are not preferred when using Oracle ASM? Does the argument still hold in a SAN Virtualized environment?
    QUESTION #2: Does virtualization at the software level (ASM) make sense in our environment? As we already have I/O balancing provided at the hardware level via our SVC, what do we gain by adding yet another level of I/O balancing at the ASM level? Or, as in the arguments the Oracle documentation makes against using LVM, is this unnecessary redundant striping (double-striped, or in our case triple-striped/plaid)?
    QUESTION #3: So does SAN virtualization conflict with or complement the virtualization provided by ASM?

    After more research/discussions/SR's, I've come to the following conclusion.
    Basically, in an intelligent storage environment (i.e. SVC), you're not getting 100% bang for the buck by using ASM, which is the cat's meow in a commodity-hardware/unintelligent-storage environment.
    Using ASM in an SVC environment potentially wastes CPU cycles having ASM balance I/O that is already balanced on the backend (sure, shuffling a deck of cards that is already shuffled does no harm, but why shuffle it again?).
    That being said, there may still be some value in using ASM from the standpoint of storage management for multiple instances on a server. For example, one could better minimize space wastage by sharing a "pool" of storage between multiple instances, rather than having to manage space on an instance-by-instance (or filesystem-by-filesystem) level.
    Also, in the case of an unfriendly OS where one is unable to dynamically grow a filesystem (i.e. a database outage is required), there would be a definite benefit in ASM being able to dynamically allocate disks to the "pool". Of course, with most higher-end systems, dynamic filesystem growth is pretty much a given.
    In the case of RAC, regardless of the backend, ASM with raw is a no-brainer.
    In the case of a standalone instance, it's a judgement call. My vote in the case of intelligent storage where one could dynamically grow filesystems, would be to keep ASM out of the picture.
    Your vote may be different....just make sure you're putting in a solution to a problem and not a solution that's looking for a problem(s).
    And there's the whole culture of IT thing as well (i.e. do your storage guys know what you're doing and vice versa).....which can destroy any technological solution, regardless of how great it is.

  • Logical Volume Group and Logical Partition not matching up in free space

    I was dual booting Windows 7 and Mountain Lion. Through Disk Utility, I removed the Windows 7 partition and expanded the HFS+ partition to encompass the entire hard drive. However, the Logical Volume Group does not think that I have that extra free space. The main problem is that I cannot resize my partition. I want to dual boot Ubuntu with this. Any ideas? Any help is appreciated. I will post some screenshots with the details. Furthermore, here is the output of some terminal commands I ran:
    /dev/disk0
       #:                  TYPE NAME          SIZE       IDENTIFIER
       0: GUID_partition_scheme              *250.1 GB   disk0
       1:                   EFI               209.7 MB   disk0s1
       2:     Apple_CoreStorage               249.2 GB   disk0s2
       3:            Apple_Boot Recovery HD   650.0 MB   disk0s3
    /dev/disk1
       #:                  TYPE NAME          SIZE       IDENTIFIER
       0:             Apple_HFS MAC OS X     *248.9 GB   disk1

    Filesystem     1024-blocks      Used  Available Capacity     iused    ifree %iused  Mounted on
    /dev/disk1       243031288 153028624   89746664      64%  38321154 22436666    63%  /
    devfs                  189       189          0     100%       655        0   100%  /dev
    map -hosts               0         0          0     100%         0        0   100%  /net
    map auto_home            0         0          0     100%         0        0   100%  /home

    CoreStorage logical volume groups (1 found)
    |
    +-- Logical Volume Group 52A4D825-B134-4C33-AC8B-39A02BA30522
    =========================================================
    Name: MAC OS X
    Size: 249199587328 B (249.2 GB)
    Free Space: 16777216 B (16.8 MB)
    |
    +-< Physical Volume 6D7A0A36-1D86-4A30-8EB5-755D375369D9
    | ----------------------------------------------------
    | Index: 0
    | Disk: disk0s2
    | Status: Online
    | Size: 249199587328 B (249.2 GB)
    |
    +-> Logical Volume Family FDC4568F-4E25-46AB-885A-CBA6287309B6
    Encryption Status: Unlocked
    Encryption Type: None
    Conversion Status: Converting
    Conversion Direction: backward
    Has Encrypted Extents: Yes
    Fully Secure: No
    Passphrase Required: No
    |
    +-> Logical Volume BB2662B7-58F3-401C-B889-F264D79E68B4
    Disk: disk1
    Status: Online
    Size (Total): 248864038912 B (248.9 GB)
    Size (Converted): 130367356928 B (130.4 GB)
    Revertible: Yes (unlock and decryption required)
    LV Name: MAC OS X
    Volume Name: MAC OS X
    Content Hint: Apple_HFS

    Here is another try via the command line:
    dhcp-10-201-238-248:~ KyleWLawrence$ diskutil coreStorage resizeVolume BB2662B7-58F3-401C-B889-F264D79E68B4 210g
    Started CoreStorage operation
    Checking file system
    Performing live verification
    Checking Journaled HFS Plus volume
    Checking extents overflow file
    Checking catalog file
    Incorrect block count for file 2012.12.11.asl
    (It should be 390 instead of 195)
    Checking multi-linked files
    Checking catalog hierarchy
    Checking extended attributes file
    Checking volume bitmap
    Checking volume information
    Invalid volume free block count
    (It should be 21713521 instead of 21713716)
    The volume MAC OS X was found corrupt and needs to be repaired
    Error: -69845: File system verify or repair failed
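
    The failed resize above is consistent with the live verification finding corruption; the usual sequence is to repair the volume first and then retry the resize. A hedged sketch using the identifiers from the output above (repairing the boot volume may require doing this from Recovery):

    diskutil verifyVolume disk1
    diskutil repairVolume disk1
    diskutil coreStorage resizeVolume BB2662B7-58F3-401C-B889-F264D79E68B4 210g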

  • Finding whole mapping from database file - filesystems - logical volume manager - logical partitions

    Hello,
    Trying to reverse engineer the mapping from database files down to their physical carriers on logical partitions (fdisk).
    I am not able to trace the whole path from the filesystem down to the partitions through the intermediate logical volumes.
    1. select from dba_data_files ...
    2. df -k
    to get the listing of filesystems
    3. vgdisplay
    4. lvdisplay
    5. cat /proc/partitions
    6. fdisk /dev/sda -l
       fdisk /dev/sdb -l
    The problem I have is that I am not able to determine which partitions make up the logical volumes, and then which logical volumes make up each filesystem.
    Thank you for any hint or direction.
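
    One way to shortcut steps 3-6 is to ask LVM directly which devices back each logical volume. A sketch of the kind of commands involved (the datafile path is a placeholder, and the dmsetup output option assumes a reasonably recent lvm2):

    # filesystem -> logical volume (see the Filesystem column)
    df -k /u01/oradata            # path taken from dba_data_files; placeholder
    # logical volume -> physical volumes / partitions
    lvs -o lv_name,vg_name,devices
    pvs -o pv_name,vg_name
    # device-mapper view of the same dependencies
    dmsetup deps -o devname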

    Hello Wadhah,
    Before starting the discussion, let me explain that I am a newcomer to Oracle Linux. My background as an Oracle DBA is on IBM UNIX (AIX 6.1) and Oracle 11gR2.
    The first task is to get the complete picture of one database on Oracle Linux for future maintenance tasks, to make the database more flexible, and to prepare it for more intense work:
    - adding datafiles
    - optimizing/relocating archived redo log files onto a filesystem separate from ORACLE_BASE
    - separating audit log files from $ORACLE_BASE onto their own filesystem
    - separating the diag directory onto its own filesystem (logging, tracing)
    - adding/enlarging the TEMP tablespace
    - adding/enlarging undo
    - enlarging redo for a higher transaction rate (to reduce the number of log switches per unit time seen in alert_SID.log)
    - adding online redo log and control file mirrors
    So in this context I am trying to inspect the disk space from the highest logical level (V$ and DBA views) down to the fdisk partitions.
    The idea was to go in these steps:
    1. select paths of present online redo groups, datafiles, controlfiles, temp, undo
       from V$, dba views
    2. For the paths from step 1,
       locate the filesystems, and for those filesystems inspect which are on logical volumes and which are directly on partitions.
    3. For all logical volumes in use, locate the underlying partitions and their disks: /dev/sda, /dev/sdb, ...

  • [solved] Filesystem check fail - Cannot access LVM Logical Volumes

    I am getting a "File System Check Failed" on startup. I recently did a full system upgrade, but I'm not sure that's the cause of the issue, as I don't reboot very often.
    I get the error right before this line is echo'ed out:
    /dev/mapper/Arch_LVM-Root:
    The super block could not be read or does not describe a correct ext2 filesystem...
    this is odd because the only ext2 filesystem I have is on a non-LVM boot partition...
    I can log-in and mount / as read/write and I can activate LVM with
    modprobe dm-mod
    and
    vgchange -ay Arch_LVM
    and they show up in lvdisplay but their status is "NOT available"
    I just need to mount these logical volumes so I can retrieve some personal data in my home directory, I am also hesitant to use LVM again if I can't retrieve my data.
    any suggestions?
    Last edited by action_owl (2010-08-15 02:15:58)

    I just popped in the install disk and was able to mount and access the LVM groups as expected, something must have been wonky with my filesystem
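
    For anyone who ends up in the same state, the recovery from a live environment generally looks like the following; the volume group name is taken from the post, the LV and mount point are only examples:

    modprobe dm-mod
    vgchange -ay Arch_LVM
    lvdisplay                                # LVs should now show "available"
    mount /dev/mapper/Arch_LVM-Root /mnt     # then copy the data off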

  • Error message when installing Yosemite OS: this core storage operation is not allowed on a sparse logical volume group

    I was updating my MacBook Air (i7, 4 GB RAM, 256 GB) to Yosemite when the process was interrupted. After retrying, my computer shows the following message: "this core storage operation is not allowed on a sparse logical volume group". I have tried to restart several times, but the error persists. Could my computer be damaged?

    If you don't already have a current backup of all data, back up before proceeding. There are ways to back up a computer that isn't fully functional. Ask if you need guidance.
    Start up in Recovery mode. When the OS X Utilities screen appears, select Disk Utility.
    In the Disk Utility window, select the icon of the startup volume from the list on the left. It will be nested below another disk icon, usually with the same name. Click the Unlock button in the toolbar. When prompted, enter the login password of a user authorized to unlock the volume, or the alternate decryption key that was generated when you activated FileVault.
    Then, from the menu bar, select
              File ▹ Turn Off Encryption
    Enter the password again.
    You can then restart as usual, if the system is working. Decryption will be completed in the background. It may take several hours, and during that time performance will be reduced.
    If you can't turn off encryption in Disk Utility because the menu item is grayed out, you'll have to erase the volume and then restore the data from a backup. Select the Erase tab, and then select
              Mac OS Extended (Journaled)
    from the Format menu.
    You can then quit to be returned to the main Recovery screen. Follow these instructions if you back up with Time Machine. If you use other backup software, follow its developer's instructions.
    Don't erase the volume unless you have at least two complete, independent backups. One is not enough to be safe.
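
    If the menu item is unavailable, the unlock and decrypt steps can usually also be done from Terminal in Recovery. A hedged sketch; the UUID placeholder is whatever diskutil cs list reports for the logical volume, and the exact flags vary a little between OS X versions:

    diskutil cs list
    diskutil cs unlockVolume <logicalVolumeUUID> -passphrase <password>
    diskutil cs decryptVolume <logicalVolumeUUID> -passphrase <password>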

  • I need to sudo cs delete an encrypted hard drive but get an error "Not a valid CoreStorage Logical Volume Group UUID"

    OK, so I encrypted an internal HDD with DOE-compliant encryption and forgot the password. I am not using a typical Mac bootloader, so Apple-C while it's booting will not work to delete it before it's mounted. I have to do it through the terminal. The drive I want to delete is HDD2. Here is the screen capture of running diskutil cs list in terminal.
    Then I disconnected everything to avoid problems, leaving my encrypted HDD only, copied the UUID, it's in this format:
    diskutil cs delete XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
    But I am unsure which group or physical UUID I need to use; there are three associated with HDD2: logical volume group, physical volume and logical volume family. I thought before pressing Enter I had better ask first, as I tried it on the physical volume first and got the error in the title, so I don't want to start guessing what to do.
    Thanks                       

    Is it not good form for members to advise about using the console? Surely some members are knowledgeable enough to tell me the answer to this. I know it's easy to bork your system up totally with the console, but I'm desperate; my main drive has 1 GB free!
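
    For what it's worth, diskutil cs delete operates on the Logical Volume Group UUID, i.e. the UUID on the top-level "+-- Logical Volume Group" line of diskutil cs list, not on the physical volume or logical volume family UUIDs. Roughly:

    diskutil cs list
    # copy the UUID from the Logical Volume Group line for HDD2, then:
    diskutil cs delete <logicalVolumeGroupUUID>

    Note that this destroys the whole group and everything in it.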

  • Logical Volumes Not Creating w lvcreate? Install??

    After following the Arch RAID guide I have gotten all the way down to creating logical volumes, and I get this:
    lvcreate -L 20G VolGroupArray -n lvroot
    /dev/VolGroupArray/lvroot: not found: device not cleared
      Aborting. Failed to wipe start of new LV.
      device-mapper: remove ioctl on  failed: Device or resource busy
    here is vgdisplay
    --- Volume group ---
      VG Name               VolGroupArray
      System ID             
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  17
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                0
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               97.75 GiB
      PE Size               4.00 MiB
      Total PE              25024
      Alloc PE / Size       0 / 0   
      Free  PE / Size       25024 / 97.75 GiB
      VG UUID               rP0ooH-VdCy-fMZM-sB0g-90zi-Gv2o-gBZWC1
    Please help, I am now stuck.
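
    A workaround that is often suggested for this particular lvcreate failure in install environments, where the device node is not created in time for the wipe, is to skip the zeroing step or to retrigger udev first. A sketch, with no guarantee that this is the cause here:

    # retry without zeroing the start of the new LV
    lvcreate -L 20G -Z n -n lvroot VolGroupArray
    # or make sure udev has created the device-mapper nodes first
    udevadm trigger
    udevadm settle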

    mattbarszcz wrote:So I went back to the live cd, I ran efibootmgr with no arguments, and sure enough there was only 1 entry for the Windows Boot Manager.  It seems that efibootmgr commands don't seem to take effect.
    Because of bugs in efibootmgr and/or EFI implementations, efibootmgr doesn't always work. Using bcfg from an EFI shell or bcdedit in Windows can be more effective on some systems. That said, if efibootmgr worked for you before, I'm not sure why it would stop working, or why an existing entry might disappear. (Some EFIs do delete entries that they detect are no longer valid. This shouldn't have happened with a whole-disk dd copy, but if you duplicated the partition table and then used dd to copy partitions, it might well have happened to you. I suppose it's conceivable that the firmware detected the change from one physical disk to another, too, and deleted the original entry because of that change.)
    I referenced the rEFInd page here: http://www.rodsbooks.com/refind/installing.html#windows and followed the procedure to install an entry for rEFInd from Windows.  I was able to add an entry no problem through windows.
    Does anyone have any suggestions as to why this doesn't work?  Nothing has changed about the system other than the disk.
    It's unclear from your post whether the entry you created in Windows now works. If it does, then don't sweat it; just keep using that entry, and if you need to make changes and efibootmgr doesn't work in the future, plan to use either Windows or the bcfg command from an EFI version 2 shell.

  • RAID on LVM logical volumes

    Hey,
    I've got a 1 TB and a 500 GB hard drive in my workstation. The 500 GB drive is my media disk. I initialized a software RAID-1 with one missing drive on a 500 GB logical volume on the 1 TB disk. After formatting and copying all the data from the media disk over to the new RAID partition, I added the old 500 GB partition to the RAID.
    After rebooting my machine, the RAID was in inactive state and hence the filesystem could not be mounted.
    I found out that in /etc/rc.sysinit the search for software RAID arrays is performed before the search for LVM volume groups. I tried switching the RAID and LVM sections, but after a reboot nothing changed. I presume it's caused by the LV devices missing from /proc/partitions, which is what md uses to scan for RAID members by default.
    I decided to add some lines to /etc/rc.local (also my RAID device is /dev/md127 Oo):
    mdadm --stop /dev/md127
    mdadm --assemble /dev/md127 /dev/vg00/raid-data /dev/sdb1
    mount /mnt/raid
    It works now, but it is not a clean solution.
    How is one supposed to do RAID on top of LVM logical volumes in Arch Linux? Are there any alternative solutions to this problem (surely there are)?
    Thanks in advance.

    delcypher wrote:
    Hi I've been looking into backing up one of my LVM logical volumes and I came across this (http://www.novell.com/coolsolutions/appnote/19386.html) article.
    I'm a little confused by it as it states that "Nice work, your logical volume is now fault tolerant!".
    Are you sure that's the article? I can't find anything on that page about "nice work" or "fault tolerant"...?
    delcypher wrote:My concern is that because my volume group spans two drives, if any one of them fails I will lose all my data.
    Correct.
    delcypher wrote:Is there any setup by which I can have a perfect mirror of my volume group, so that if one of my drives fails I still have a perfectly functioning volume group?
    Use 2 disks to create a RAID-1 array, then use that as your PV.
    sda1 + sdb1 = md0 => pv
    delcypher wrote:I understand linux supports software raid but would I need two drives identically sized (1TB) or can I just have one drive (i.e. 2TB) that is a mirror of my volumegroup?
    You can use 2 partitions on the same disk to create a RAID array, but that defeats the whole purpose of RAID.
    It sounds like you're merging the meanings of 'redundant' and 'backup' -- they are distinct things, so you need to decide if you want redundancy or backup. Or both ideally.
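
    A minimal sketch of that layout; device names and sizes are examples only:

    # build the RAID-1 array from one partition on each disk
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # then put LVM on top of the array
    pvcreate /dev/md0
    vgcreate vg00 /dev/md0
    lvcreate -L 450G -n media vg00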

  • Corrupt Root Logical Volume

    Hey Folks,
    I'm having a bit of a problem with the Root Logical Volume / Volume Group on one of our Servers (OEL 4 - Release 7 running on Oracle VM 2.1.2). One of our SAN Administrators pulled the disk from the system by accident, and has since re-presented it to the server but now when it boots I'm getting errors on the Root Logical Volume.
    It boots in to maintenance mode and I've tried fsck'ing the root file system but it gives errors about the first SuperBlock...
    "Attempt to read block from filesystem resulted in short read while trying to open /dev/VolGroup00/LogVol00
    Could this be a zero-length partition? "
    I've tried specifying the next SuperBlock and the e2fsck runs through repairing corrupt inodes, but it doesn't make a difference, I still get errors on bootup. Any other LVM commands give an error to say that 'File Descriptor XX not closed'.
    I've booted from an OS DVD in to Rescue Mode and it mounts the Volume Groups and I can still see the files are present.
    Does anyone know of anything I can do? Are there commands I can run in Rescue Mode to fix corrupted file systems? I'm off to do some googling on the subject, but if anyone could offer advice it would be much appreciated.
    Thanks
    James
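
    For reference, checking against a backup superblock (which is what specifying "the next SuperBlock" amounts to) usually looks like the sketch below; the block number is only an example and depends on how the filesystem was created:

    # list superblock locations from the filesystem itself
    dumpe2fs /dev/VolGroup00/LogVol00 | grep -i superblock
    # run the check against one of the backups
    e2fsck -b 32768 /dev/VolGroup00/LogVol00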

    Please mind that corruption inside the filesystem can occur. You should check that in two stages:
    -Non-oracle
    All operating system tools use normal, buffered IO. (that I am aware of, please correct me if somebody has seen something different)
    This means that any file modifications (user-imposed, thus modified configuration files, and system imposed, like installing binaries) are done in memory, and eventually flushed to disk (by a process called 'pdflush'). When a disk is pulled, there is a chance not all modifications are already flushed to disk, thus will be corrupt.
    -Oracle
    This is the same for the oracle database files, with the exception of databases using DIO, direct IO. Direct IO is enabled if the database parameter 'filesystemio_options' is set to 'direct' or 'setall'. Direct IO bypasses the operating system buffercache and does IO (reading and writing) directly from the blockdevice, instead of asking the operating system for the block in normal mode, which caches the requested block(s) in the operating system/linux buffercache.
    That is why normal/cached IO is also said to do 'double buffering', because there are two caches ('buffers') involved.
    -Okay, but it's journalled, isn't it?
    Eh, yes.
    But it depends on the type of journalling. Journalling might not be what you expect it to be.
    Most filesystems, including NTFS and default ext3, only journal metadata transactions. Metadata transactions mean that only changes to the structure of the filesystem are journalled, not data-transactions.
    Please mind that it is possible for ext3 to journal all modifications instead of only meta-data with the 'data=journal' mount option.
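
    For ext3 that is a mount option, for example in /etc/fstab (the filesystem and mount point below are only illustrative, and for the root filesystem it may also have to be passed as rootflags=data=journal on the kernel command line):

    /dev/VolGroup00/LogVol00  /  ext3  defaults,data=journal  1 1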

  • [SOLVED] Volume groups not showing up after update

    nunikasi wrote:
    Hello!
    I run a arch linux setup with LVM volumes on a LUKS encrypted drive /dev/sda. The header resides in the initramfs-linux-ck.img (So kernel and header are on a USB stick and my hard drive is completely encrypted).
    Everything worked nicely with this setup until today. I ran a
    pacman -Syu
    and the kernel was updated as normal and made a new initramfs-linux-ck.img on the usb stick (mounted in /boot). This is the way I always updated the kernel, and it always worked, but for some reason, after the reboot, and after typing my passphrase I got an error
    ERROR: device '/dev/mapper/nunikasi-root' not found. Skipping fsck.
    This was strange, so I booted the latest arch live-cd and did a
    cryptsetup luksOpen /dev/sda nunikasi --header (pathToHeaderFile)
    and entered the passphrase and still there are no volume groups in /dev/mapper. Am I wrong in thinking that this command should load my volume groups as
    /dev/mapper/nunikasi-root
    and
    /dev/mapper/nunikasi-home
    Running
    vgscan
    also reports no volume groups.
    Additional information:
    Last time I updated the kernel was around 24th of June.
    I did run pacman -r linux before the update, assuming there is no need for it since I run linux-ck. I hope this is not relevant.
    Do you know what went wrong?
    What was the problem?
    As I saw that dm-crypt had a problem with finding my logical volumes, I fired up
    cryptsetup luksOpen /dev/sda nunikasi --header (pathToHeaderFile)
    dhex /dev/mapper/nunikasi
    and found that the lvm metadata was cut in half. So I ran
    dhex /dev/mapper/nunikasi /dev/sda
    and quickly saw that there had been an attempt to make an MBR on /dev/sda (I saw the boot signature 55 aa at offset 1FE-1FF). I remembered that I had tried to use GParted some days earlier to format a memory stick, and I probably forgot to change from /dev/sda before creating the MBR. This is most probably what happened, and now I had to restore the metadata in some way.
    What was the solution?
    As I had never made a backup of the metadata, I knew that LVM makes one for you and puts it in /etc/lvm/backup/. So I fired up dhex on /dev/mapper/nunikasi and searched for it. Found the offset and did:
    dd if=/dev/mapper/nunikasi count=2148 skip=8928722944 of=lvm-metadata.txt iflag=skip_bytes,count_bytes
    Now I had the metadata backup and the right UUID for my physical volume. Only steps left were:
    pvcreate -ff -u [UUID] --restorefile lvm-metadata.txt /dev/mapper/nunikasi
    vgcfgrestore -f lvm-metadata.txt nunikasi
    vgchange -ay
    And my setup is working again!
    Have a nice day,
    nunikasi
    Last edited by nunikasi (2014-07-07 20:30:16)
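
    As a follow-up, keeping an explicit copy of the volume group metadata means no hex-editor hunting next time; a sketch, with the backup path being just an example:

    # write a copy of the VG metadata somewhere off the volume group
    vgcfgbackup -f /root/nunikasi-vg.backup nunikasi
    # restore it later with
    vgcfgrestore -f /root/nunikasi-vg.backup nunikasi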

    nunikasi wrote:
    clfarron4 wrote:Hmm... Have you tried breaking out an Arch installation disk and having a look with that?
    Yes, I wrote that in the original post.
    Sorry about that. I've just read it again.
    OK, based on the information you've given us (LVM on LUKS, VG=nunikasi and LVs=root,home, backed up header), I would have thought that the commands you issued in your original post would have unlocked it. I'm not going to make any assumptions about the header of the actual LUKS partition though.
    nunikasi wrote:I remember core/filesystems and some other core packages were updated.
    I've had a look through my pacman.log and I can't think of anything that might upset it (that said, I am quite tired). I run 5 (maybe 6) different kernels (3.10, 3.12, 3.14 and 3.15 branches) and I haven't had any problems with finding PVs, VGs and LVs on either of my systems.
    I'm at a loss as to what could be up right now and I'll come back to this thread tomorrow (because sleep beckons).
    EDIT: Wow. That's quite something.
    Last edited by clfarron4 (2014-07-08 11:21:46)

  • Arch 0.7 Wombat full iso error with logical volumes

    I have a 160 GB IDE HDD as primary master in my system.
    Whenever I create logical partitions through the installer and try to give them a filesystem, I get errors.
    The installer reports being unable to mount /mnt/usr, or not enough space to mount.
    Checking the partition table with Partition Magic 8 for DOS shows that the extended partition starts above cylinder 1024 and should be of type extended-X.
    I let PM8 repair the table and go back to the Arch Linux install, which now works fine.
    It looks to me like the error is in cfdisk 2.12a.

    Nope, cfdisk gave no errors or reboot instructions.
    One time I tested this: I used the install CD to create one primary and one logical partition, rebooted, and tried /arch/setup without success.
    I rebooted with the PM8 rescue CD, repaired things, rebooted again with the Arch CD, and the install went on without problems.
    It looks like cfdisk, large HDs, and logical partitions don't work well together.
    I have switched to using LVM since then, so I don't need physical logical partitions anymore.

  • Volumes could not be started.

    OS: Mac OS X 10.4.11
    Xsan: 1.4.2
    There are three logical volumes (RAIDL1, RAIDL2 and RAIDL3) created by Xsan. RAIDL1 and RAIDL2 could be started normally, but RAIDL3 could not. The following is the error log:
    [0511 10:51:53.447000] 0x1801000 (Debug) sigwait handler starting
    [0511 10:51:53] 0xa000ed88 (Info) Server Revision 2.7.201 Build 7.40 Built for Darwin 8.0 Created on Thu Oct 11 19:05:39 PDT 2007
    [0511 10:51:53] 0xa000ed88 (Info)
    Configuration:
    DiskTypes-4
    Disks-4
    StripeGroups-2
    ForceStripeAlignment-1
    MaxConnections-75
    ThreadPoolSize-128
    StripeAlignSize-256
    FsBlockSize-4096
    BufferCacheSize-32M
    InodeCacheSize-8192
    RestoreJournal-Disabled
    RestoreJournalDir-None
    [0511 10:51:53] 0xa000ed88 (Info) Self (192.168.0.180) IP address is 192.168.0.180 .
    [0511 10:51:53.455119] 0xa000ed88 (Debug) No fsports file - port range enforcement disabled.
    [0511 10:51:53] 0xa000ed88 (Info) Listening on TCP socket 192.168.0.180:64082
    [0511 10:51:53] 0xa000ed88 (Info) Node [0] [192.168.0.180:64082] File System Manager Login.
    [0511 10:51:53] 0xa000ed88 (*FATAL*) PANIC: /Library/Filesystems/Xsan/bin/fsm "Inodeinit_preactivation: InodeInode version mismatch! expected-0x203 or 0x204, received-0x76777776
    " file inode.c, line 5696
    [0511 10:51:53] 0xa000ed88 (*FATAL*) PANIC: aborting threads now.
    A huge amount of useful data is stored in this volume, please help!

    There have been about four previous posts for the same error. It looks like those may help you. You can find those by using the search function on the right and searching the iPad forum for:
    "session could not be started"
    Use the quote marks.

  • [SOLVED] LVM Volume Groups Not Found

    Hi,
    I'm installing Arch on my desktop following the installation in the software raid and LVM section (https://wiki.archlinux.org/index.php/So … ID_and_LVM). I'm using RAID1. I've followed the important instructions on the LVM page (https://wiki.archlinux.org/index.php/LVM#Important). I have a UEFI motherboard so I created a separate boot partition and installed grub2 as my bootloader based on the instructions on the UEFI page (https://wiki.archlinux.org/index.php/Un … _Interface).
    After a few hiccups, I think I have the system installed properly on my hard drive. Since the root partition is on the LVM, it needs to be loaded up pretty early in the boot process and this is where I'm getting a really strange issue. I have my motherboard configured to boot straight into the built-in UEFI shell. From here, I select the filesystem for my hard drive and launch grubx64.efi. So far so good. Now here's the weird part - if the LiveCD I used to install Arch is in the optical drive then everything boots up just fine, I get a prompt and I can log in and generally do as I see fit. But if the LiveCD isn't in the drive, then after I select the Arch Linux option in GRUB I get a message that says No Volume Groups found and dumps me into the rootfs shell.
    As far as I can tell the LiveCD isn't actually being used during boot, but I don't understand why taking it out stops my logical volumes from loading =_=. Does anyone have some idea what's wrong / what I can do to fix it? Is there some config file that has been messed up? This is my second time trying this installation, if it doesn't work I'm just going to drop LVM and stick to just the software RAID. I'm not particularly attached to LVM, I decided to use it mainly because I could (or at least I thought I could).
    Any help would be much appreciated. If there is any extra information I need to post, I can do that as well.
    Update: It doesn't seem like the Arch LiveCD itself has anything to do with this. I inserted a blank CD and was able to boot up just fine but as soon as I took it out, I got the No volume groups found error again. This makes me think that it is a timing issue. Maybe one of the modules is not fully loaded before it is needed? Where can I go to find out what modules those are? Is there any way to force the boot process to wait until all the necessary modules are loaded?
    SOLUTION: https://bbs.archlinux.org/viewtopic.php?id=145714
    Last edited by jynnantonix (2012-09-07 19:30:01)

    What you did has zero effect on the runtime order of the hooks (lsinitcpio would have shown you this).
    https://bbs.archlinux.org/viewtopic.php?id=145714
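
    For context, the runtime order comes from the HOOKS line in /etc/mkinitcpio.conf, where the RAID and LVM hooks have to run before filesystems; a typical line (illustrative, not necessarily this poster's exact configuration) and the rebuild command:

    HOOKS="base udev autodetect modconf block mdadm_udev lvm2 filesystems keyboard fsck"
    mkinitcpio -p linux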

  • Volume encrypt and erase failed; unable to delete core storage logical volume

    I was attempting to slowly migrate [MI-***] from early 2013 MBPRO to New iMac 5K w/Ceiling Level components.
    Kept going through LONG process and then told me it couldn't create [MBPRO HD Home Username] "Jim" on volume or whatever. NO FileVault enabling/ still skittish from White iMac Encrypting Nightmare days... I don't even know -- I guess it's encrypted on Airport, but not on MBPRO.
    Moved over Applications from outside User account fine; anything inside any User account NOT FINE.
    Hooked up Thunderbolt cable between two macs and restarted MBPRO in T mode... displaying the lightning BOLT on screen that moves around to reduce Burn-in.
    Was able to go onto desktop and use windows to drag n drop 190G of movies over to iMac... wondering how I was going to get all right settings over form FCP...
    Bottom Line: I only have 16G left on MBPRO and need to MOVE video editing to be exclusive on 3T Fusion on Maxed out iMac 5K.
    > Have concluded through whole process that I just want to clone the MBPRO and then delete most of the Larger Videos from MBPRO to recover some of my 760G SSD back.
    So, I grabbed my 2T Airport Extreme and Hooked up the Cat5 LAN port to Cat5 input on back of NEW iMac; Now my MBPRO doesn't have to be locked up for days, because i can use TimeMachine backup to restore or clone the two macs... i hope.
    Went into Recovery mode, selected sub-Macintosh HD and attempted to erase; result, time after time after time: "Volume Encrypt and Erase failed." Reason: "Unable to delete the Core Storage logical volume." It dismounts it and I have to restart the computer to get it back. Funny thing is I don't have to use R anymore... which, by the way, Command+R appears to be the same as just plain old R when restarting... why is that?
    This has become a "since Christmas" runaround session for me and I am sick of it.
    Please help. I would've called Apple Care [and I did last night while driving... just to get advice on direction. I'm usually a pretty savvy PowerUser but this is driving me crazy.] but have to get things done for a meeting tomorrow. Can work on it after hours if someone can advise today on this post.
    Thx,
    Jim

    I have taken some screenshots of the error I get and the state of my HDD in Disk Utility. I did have a weird thing happen after trying to repair using single user mode, where I reopened disk utility and the partitions were NOT greyed out and displayed the correct info concerning space available etc, although after verifying it then reverted back to greyed out with no info.
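
    One approach that sometimes works when Disk Utility cannot erase a CoreStorage member is to delete the whole logical volume group from Terminal in Recovery and let Disk Utility repartition the physical disks afterwards. A hedged sketch, and note that this erases everything in the group:

    diskutil cs list
    # use the UUID of the Logical Volume Group shown for the internal drive
    diskutil cs delete <logicalVolumeGroupUUID>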
