2TB limit in ASM on ext3

total newbie question here, so forgive any ignorance I will reveal in the next lines....
What's the best way to get ASM to scale beyond the 2TB limit on ext3? Start with DATA at 2TB and then, when that's full, add a DATA1 at 2TB, then DATA2, and so on? Or is there some supported trick to make ext3 go above 2TB and still be manageable (RHEL v5 AS)? Or is it just crazy to make a file system over 2TB, and would I be better off making another disk group on DATA1?
Thanks for any advice
LJ

Hi Sebastian -
Thanks very much for your reply. I think I understand now that ASM doesn't use the ext3 file system, because it is its own file system and just uses blocks from the disks given to ASM in the diskgroups. Cool, thanks for clarifying that; it makes perfect sense now.
We have a CLARiiON system that holds 15 disks in an enclosure, with 14 of them in RAID 10 and one global spare. This will be for +DATA.
Our System Admin recently quit and everyone looked at me to fill the role until a replacement can be found. So now I'm trying to figure out how to initialize the disks (fdisk, I think) on RHEL v5 AS to make them usable by ASM. I tried to get the article you mentioned above, but it's apparently only for Oracle Partners (which I'm not at the moment). So I'll look through the other documentation on ASM again and see if there are good examples of how to format the disks correctly for ASM.
Thanks again Sebastian for your help.
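
For anyone following along, here is a minimal sketch of the usual RHEL 5 approach, assuming the CLARiiON LUN shows up as /dev/sdb (the device name, the ASM disk name, and the optional ASMLib step are assumptions, not details from this thread):

    fdisk /dev/sdb                # n (new), p (primary), 1, accept defaults, then w (write)
    partprobe /dev/sdb            # re-read the partition table
    grep sdb /proc/partitions     # verify sdb1 now exists
    # If Oracle ASMLib is installed, stamp the partition for ASM:
    /etc/init.d/oracleasm createdisk DATA1 /dev/sdb1

Each disk gets exactly one partition spanning the whole device; those partitions are then offered as the candidate disks when creating the +DATA diskgroup.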

Similar Messages

  • ASM vs ext3 File system(mount point)

    Please suggest which one is better for small databases:
    ASM or an ext3 file system (mount point)?
    Any Metalink note?

    ASM is better if you do not want to play with I/O tuning (if you tune the ext3 file system, it would be about the same from a performance view), but administering database files in ASM is more complicated than in an ordinary file system.
    Oracle recommends using ASM for database file storage.
    I would think that if you have a development database and need a lot of cloning and moving of datafiles, it's better to use an ordinary file system, so you can use OS copy commands; it's not so complicated.
    If you need striping, mirroring, or snapshotting with ext3, you can use LVM on Unix/Linux; see the sketch below.
    I am not sure, but I think striping and mirroring are better in ASM than in LVM, because ASM does them with database I/O in mind.
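
    A minimal LVM striping sketch under Linux; the device names, sizes, and mount point are made up for illustration (two free partitions assumed):
      pvcreate /dev/sdb1 /dev/sdc1                   # initialize physical volumes
      vgcreate datavg /dev/sdb1 /dev/sdc1            # group them into a volume group
      lvcreate -i 2 -I 64 -L 500G -n datalv datavg   # stripe across 2 PVs with 64 KB stripes
      mkfs.ext3 /dev/datavg/datalv                   # ext3 on top of the striped LV
      mount /dev/datavg/datalv /u02/oradata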

  • MacPro2009-Apple Raid Card-2.2TB limit

    Hello,
    I have a MacPro (early 2009) with an Apple Raid Card. Lion 10.7.4
    I just bought a 4 TB SATA hard drive and there is no way to make it work on the Apple RAID Card. When it mounts, it only shows a capacity of 2.2 TB, and Apple Disk Utility is of no use with it on the RAID Card.
    So I managed to hook it in the Optical Drive bay and I have now a 4 TB volume available for Network backup.
    Can you confirm the limit on the Apple RAID card?
    Hardware version: 2.00
    Firmware version: E-1.3.2.0
    Thanks,
    Marc LE BRET

    I have this same issue with a mid-2010 MacPro Server running 10.8.2 / OS X Server (10.8) and the Apple RAID Card, with 3 x 1TB drives and an added 4TB drive (in Bay 4) that is only being seen with the 2.2TB limit. I even initialized it and tested it a bit with my MacPro1,1 and a NewerTech toaster on eSATA. Very nice to see a workaround with the optical bay... thank you for that suggestion! I was going to go to another enclosure or hard drive "toaster" / dock that allows 4TB drives and use eSATA or FW800 as a connection... the 2nd optical drive bay sounds great and I will give it a try.
    The only thing I can figure from all this, and from talking to Tier 2 AppleCare, is that the RAID card only does 32-bit addressing... which seems crazy but makes the most sense. They will more than likely respond with "the drive is unsupported, since it isn't an Apple HD."
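
    For what it's worth, 32-bit block addressing lines up exactly with the 2.2 TB figure: 2^32 addressable 512-byte sectors. A quick check in Terminal:
      echo $((2**32 * 512))    # 2199023255552 bytes, i.e. about 2.2 TB (decimal)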

  • Creating DAS or SAN Disk Partitions for ASM

    Oracle® Database Installation Guide 11g Release 2 (11.2) for Linux (http://docs.oracle.com/cd/E11882_01/install.112/e24321/oraclerestart.htm#CHDBJGEB)
    3.6.3 Step 2: Creating DAS or SAN Disk Partitions for Oracle Automatic Storage Management
    In order to use a DAS or SAN disk in Oracle ASM, the disk must have a partition table. Oracle recommends creating exactly one partition for each disk.
    My question: why does Oracle recommend creating exactly one partition for each disk? For a disk to be used on SUSE Linux, I normally need at least 3 partitions, i.e., boot, swap, and root.
    Scott

    Please read the entire manual - before starting setup and configuration.
    From the very same manual:
    Do not specify multiple partitions on a single physical disk as a disk group device. Oracle ASM expects each disk group device to be on a separate physical disk.
    So why use partitioning? Use it if you need to subdivide a LUN into smaller LUNs and use those instead of the larger LUN. Reasons range from LUNs bigger than 2TB in size (ASM only supports LUNs up to 2TB) to wanting a partition 1 of size x for ASM diskgroup 1 and a partition 2 of size y for ASM diskgroup 2.
    If you do partition, take the extra precaution of marking the partition as a non-file system (partition type da) to safeguard against someone (like a sysadmin looking for space) mounting it as a file system; see the sketch below.
    Generally though, I would not partition LUNs for ASM use.
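
    A minimal sketch of that type-da marking, assuming the LUN is /dev/sdc (the device name is an assumption; on newer util-linux the sfdisk option is --part-type rather than --change-id):
      fdisk /dev/sdc                     # t (change type), partition 1, hex code da (Non-FS data), then w
      sfdisk --change-id /dev/sdc 1 da   # the same change, non-interactively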

  • MB BIOS 2.2TB MBR limitation: Can an add-on drive controller BIOS enable UEFI/GPT on a non-bootable data drive?

    I am working on older hardware that is not UEFI enabled, so it is subject to the 2.2TB volume limitation under Windows Server 2012 R2, resulting in 11.6TB of unusable drive space. I installed an Adaptec Series 7 2274500-R 71605E controller card and created a striped RAID 10 totalling 13.6 TB of drive space. During boot, the Adaptec BIOS is installed on the system, which prompts the following question:
    Since boot utilizes the motherboard BIOS, clearly the boot partition cannot exceed the 2.2TB limit. But what about a separate logical data drive? If I shrink the boot partition to 1TB and create a second 12.6TB volume formatted with GPT/NTFS, would the Adaptec BIOS enable UEFI/GPT (since the MB BIOS would not boot the second logical drive), and would Windows Server see the full 12.6TB second drive volume?
    Yes, I am aware that I can create multiple 2.2TB partitions, but that would be an inefficient option for this installation.
    Any thoughts or suggestions will be greatly appreciated.
    Thanks!

    Hi,
    According to your description, my understanding is that you want to use an add-on drive controller to enable UEFI/GPT on a non-bootable drive.
    In order for an operating system to fully support storage devices with capacities that exceed 2 TB, the device must be initialized using the GUID Partition Table (GPT) partitioning scheme. If the user intends to start the computer from one of these large disks, the system's base firmware interface must use the Unified Extensible Firmware Interface (UEFI), not BIOS. Windows supports hard disks larger than 2 TB, but some prerequisites are needed; a sketch follows at the end of this reply.
    Detailed information reference link:
    https://support.microsoft.com/en-us/kb/2581408
    Best Regards,
    Eve Wang
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
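
    As a concrete illustration of the GPT initialization described above, here is a minimal diskpart sketch for the data volume. The disk number, label, and drive letter are assumptions for illustration, and clean wipes the disk, so only run this against the new, empty array:
      diskpart
      list disk                           rem identify the new data disk first
      select disk 1
      clean                               rem destroys everything on disk 1
      convert gpt
      create partition primary
      format fs=ntfs quick label="Data"
      assign letter=E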

  • Time Machine Limits - Will it support 6TB

    I'd like to set up an 8TB external disk array (probably RAID 5) to back up my Mac Pro with 3TB of storage.
    I plan to use Time Machine for incremental backups. I've had success using TM with a ReadyNAS NV+ NAS, but as I've added more storage to the Mac Pro I'm exceeding the 2TB limit of the NV+ (it only supports the ext3 file system).
    Has anyone set up TM to back up to a 6TB+ external device? If so, any comments on hardware (ReadyNAS, Drobo, QNAP, other...)?
    Does it matter (besides speed) whether I use FireWire or the network connection?
    What else do I need to be aware of?

    Nolie rules wrote:
    Sorry I meant to delete the other post - just marked it as solved.
    I just want to find out if there's any limitation HW or SW that would preclude TM from taking advantage of 6TB of available storage.
    No. As long as it looks like a single volume, TM won't have a problem with it.
    Also interested in any differences in operation whether the connection is via the network or Firewire-800.
    Just speed and reliability.

  • OS platform for BigFile TBS

    Hi Everyone -
    I'm looking to use BigFile Tablespaces, and read in the documentation "Bigfile tablespaces should not be used on platforms with filesize restrictions, which would limit tablespace capacity." Which is good to know, but can anyone tell me which OS platforms out there do not have filesize restrictions?
    I'm on RHEL AS 4 right now, and looking ahead to RHEL AS 5 they state that the ext3 file system has a 2TB limit for filesize.
    How can I create a 16TB Bigfile TBS this way?
    Any help is appreciated.
    LJ

    I think what the Bigfile TBS documentation meant was: use Bigfile tablespaces on operating systems that do not have file size restrictions, so tablespace growth is not limited; all of the operating systems I know of have some limit on file size (maybe I'm wrong).
    On the flip side, I don't see how having a bigfile TBS close to 2TB, let alone 16TB, can be useful in a production environment. It will cause bigger problems than the ones Bigfile TBSs were meant to solve! Maybe in research it might be useful.
    Consider having two or more tablespaces with smaller file sizes.
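
    For illustration: with the datafile on ASM instead of ext3, the 2TB per-file limit does not apply, so a 16TB bigfile tablespace becomes possible. A hedged sketch, assuming an existing diskgroup named +DATA and an 8K database block size (the names and sizes are illustrative):
      sqlplus / as sysdba
      SQL> CREATE BIGFILE TABLESPACE bigtbs
        2  DATAFILE '+DATA' SIZE 100G
        3  AUTOEXTEND ON NEXT 10G MAXSIZE 16T;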

  • IMac 5K - 3TB fusion, bootcamp damaged, removed and now extremely confused in reading answers for fix

    Two days ago I rebooted to Bootcamp Windows 8.1 and it blue screened. I have been trying for two days to fix my system following helpful posts here and I'm just digging myself further into confusion and complications. At this point I'm extremely in over my head. I have removed the Bootcamp partition using Boot Camp Assistant and tried re-installing Windows again and again using BCA…
    At the moment I am back to no Windows partition, just a 3TB OS X partition, but when I try to create a Windows partition it gets created as a logical core... heck, I don't understand enough, just know it's not right.
    How do I get my drive tables all right again and then be able to create a working Bootcamp partition? What screenshots do I need? I appreciate any help I can get.

    Okay, I used Bootcamp to create a EFI-bootable USB drive with Windows 8.1
    I had my drive back to a single 3TB partition.
    I created a Free Disk (empty) partition of 750GB (which was then set as 876.82GB).
    I rebooted.
    Created a new partition, got the notice that Windows will create a smaller 128 MB partition.
    It did create the 128 MB partition, or showed such.
    Then when I go to format the other, last free space partition it seems to work.
    When I go to install Windows 8.1 in that partition I am told Windows cannot find a partition to install into.
    Here is my diskutil list as it is now.
    … and I just saw the "must be under the 2TB limit" again… I don't know how I had Bootcamp working until last week. It was cleanly set up on my new Mac as ~750GB the first time with BCA and worked for months until whatever changes were made with 10.10.3.
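
    One way to sanity-check the layout from Terminal (disk0s4 is just a guess at the Windows slice; run diskutil list first): BIOS-booted Windows needs its partition to start below the 2.2TB boundary.
      diskutil list disk0
      diskutil info disk0s4 | grep -i "Partition Offset"   # must be below 2199023255552 bytes (2^32 * 512)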

  • Boot camp Yosemite Win 8.1 3tb Fusion Drive problems

    Hi,
    As many others have reported on the internet, I am having problems installing Windows on my iMac 27" (late 2014) with a 3TB Fusion Drive: the partition I create falls outside the 2.2TB limit, which prevents Windows from being installed when selecting the disk during the Windows installation (i.e., the same problem as "Win 8.1, iMac Retina 3TB Fusion, Yosemite, Boot Camp fail!").
    Is my understanding correct that this issue was solved in the OS X Mountain Lion update v10.8.3, or did that update only make sure Boot Camp could create the partition, while Windows still couldn't find it?
    I am using Yosemite 10.10.2 and I am wondering if for some reason they didn't include this fix?
    Has anyone successfully installed Windows through Boot Camp on a 3TB Fusion Drive on Yosemite, without 'manually' dividing the 3TB Fusion Drive into two partitions and then creating the Windows partition within the first partition?
    Many thanks
    iMac (Retina 5K, 27-inch, Late 2014), OS X Yosemite (10.10.2), 3tb fusion drive

    You are correct. Please see - iMac (27-inch, Late 2012): Boot Camp alert with 3TB hard drive - Apple Support.
    BCA should work. If you are running into any issues, can you post the output of the following Terminal commands?
    diskutil list
    diskutil cs list
    sudo gpt -vv -r show /dev/disk0
    sudo gpt -vv -r show /dev/disk1
    sudo fdisk /dev/disk0
    sudo fdisk /dev/disk1

  • Boot camp for Win 8.1 on 3tb Fusion drive, Yosemite

    Hi,
    As many others have reported on the internet, I am having problems installing Windows on my iMac 27" (late 2014) with a 3TB Fusion Drive: the partition I create falls outside the 2.2TB limit, which prevents Windows from being installed when selecting the disk during the Windows installation (i.e., the same problem as "Win 8.1, iMac Retina 3TB Fusion, Yosemite, Boot Camp fail!").
    Is my understanding correct that this issue was solved in the OS X Mountain Lion update v10.8.3, or did that update only make sure Boot Camp could create the partition, while Windows still couldn't find it?
    I am using Yosemite 10.10.2 and I am wondering if for some reason they didn't include this fix?
    Has anyone successfully installed Windows through Boot Camp on a 3TB Fusion Drive on Yosemite, without 'manually' dividing the 3TB Fusion Drive into two partitions and then creating the Windows partition within the first partition?
    Many thanks

    There is a Boot Camp forum that I'd recommend you post to rather than a general iMac forum.

  • Can I use a 3T disk on a G5 running Tiger?

    I am trying to upgrade the hard drives in my 2004 2 Ghz Dual Processor PowerPC G5 running Mac OS X 10.4.11.  (I need Tiger to run some older apps.)  I replaced the original Maxtor 250GB SATA with a 3T Hitachi Deskstar.  I'm hoping to install a second 3T drive and mirror all my work using RAID1.  I would like to use this machine to drive my Nikon Slide Scanner and manage large archives of TIF and Raw photographic files.
    Since Tiger would not recognize the larger drive, I used Disk Utility from a Leopard DVD to partition the drive into 2 parts: 1T and 1.8T.  These partitions checked out when verified with the Leopard version of DU.  I have successfully copied my applications and files to the smaller partition (I'm using it now), but the Tiger Disk Utility tells me I have problems with the 1.8T partition.  I get the message: "The underlying task reported failure on exit."  I had successfully repaired the partition with the Leopard version of DU ("Invalid file clump size."), but cannot repair it using Tiger DU.
    Questions:
    Is there a disk size limit under Tiger?
    Is it possible that I'm encountering a timing problem with the bigger faster disk?  Is there a different jumper setting I can use?
    Is this likely a problem with the drive itself?
    Is there another utility that I can use with OSX 10.4 that will fix this problem?
    I've searched discussions and boards for the last day or so, and am not finding any answers.
    Help would be appreciated.
    Dual 2 GHz PowerPC G5
    4 GB DDR SDRAM
    Mac OS X Version 10.4.11

    To promote file contiguity and avoid fragmentation, disk space is typically allocated to files in groups of allocation blocks, or clumps. The clump size is always a multiple of the allocation block size. The default clump size is specified in the volume header.
    http://dubeiko.com/development/FileSystems/HFSPLUS/tn1150.html#HFSPlusBasics
    To use partitions of more than 2.2TB they came up with the GUID Partition Scheme (GPT), as APM had the 2.2TB limit. Drives also now use Advanced Format...
    http://en.wikipedia.org/wiki/Advanced_Format
    I've read that 10.4.11 is supposed to deal with that, but it's not 100% clear to me whether that includes G5 Macs running 10.4.11, or only Intel Macs, or whether there might even be a split between early and late G5s in that capability.
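
    For reference, a whole-disk GPT volume can be created from the command line of a Leopard-or-later installer (disk1 is an assumption; check diskutil list first, and note that this erases the disk):
      diskutil partitionDisk disk1 GPT JHFS+ Archive 100%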

  • Mailbox and public folder backup frequency

    I appreciate this is very much organisation specific, but out of interest;
    1) how often do you do full backups of your mailbox databases..
    2) how often do you do incremental backups of your mailbox database(s)..
    3) how often do you do full backups of your public folder database(s)..
    4) how often do you do incremental backups of your public folder database(s)..

    You are right - this is going to depend highly on your organization, and more on how much redundancy is built into your system than anything else. I have built Exchange solutions for many corporations (5,000 to 200,000+ desktops) and have seen everything from "we will back up daily and delete the backups the next day" (a legal firm who wanted no historic data kept in their system) to "we will back up every day and save two weeks, save a weekly backup for six months, and save a monthly backup for six years" (can't remember who did that one). My previous employer (20,000 seats) had Exchange 2010 with about 100 databases (200GB limit), three copies of each database (two local for HA, one elsewhere for DR), and daily backups that were kept for a week. My current employer (13,000 seats) has Exchange 2010 with 24 databases (2TB limit) and four database copies (three local for HA, one in another datacenter for DR, and two DAGs that mirror each other, so both datacenters host active mailboxes), and we have no backups.
    Oh, and for both of my recent employers, we have no public folders. Just another thing to break that we don't need to worry about. (-:

  • An unexpected error occurred while the job was running. (ID: 104)

    I'm getting this error in the event logs when trying to run a consistency check / sync with one of our file servers.
    This server was working fine, but we needed to migrate the data to a new (GPT) partition to allow it to expand past the 2TB limit of MBR partitions. I added a new 3TB disk, migrated the data, and changed the drive letter to what the old partition had prior. Since then we seem to get this error.
    I have installed the update rollup on both the server and the agent side. I have also removed the server from the protection group and re-added it. I've also removed and reinstalled the agent on the file server.
    Any help is appreciated!  Here's the full error from the DPM server:
    The replica of E:\ on server is inconsistent with the protected data source. All protection activities for data source will fail until the replica is synchronized with consistency check. (ID: 3106)
    An unexpected error occurred while the job was running. (ID: 104)

    Server version is Windows 2012 R2 STD. DPM is 4.2.1235.0 (DPM 2012 R2). I expanded the production file server. All that server does is host file shares.
    I suspected that creating a new volume and changing the drive letter back to the original was the source of the issue. What I ended up doing was blowing away the backups from this server on disk and re-adding it to the protection group.
    It now runs much longer, but still times out at random with the error mentioned above.
    Vijay: the error was copied from Event Viewer. Not sure what else you require? The Event ID is 3106 from DPM-EM.
    The DPM logs say:
    Affected area: E:\
    Occurred since: 2015-01-05 2:04:38 AM
    Description: The replica of Volume E:\ on servername is inconsistent with the protected data source. All protection activities for data source will fail until the replica is synchronized with consistency check. You can recover data from existing recovery
    points, but new recovery points cannot be created until the replica is consistent.
    For SharePoint farm, recovery points will continue getting created with the databases that are consistent. To backup inconsistent databases, run a consistency check on the farm. (ID 3106)
     An unexpected error occurred while the job was running. (ID 104 Details: The semaphore timeout period has expired (0x80070079))
    Date inactivated: 2015-01-05 7:03:04 AM
    Recommended action: No action is required because this alert is inactive.

  • Can I use a beefier disk ?

    Hello !
    I own an E540 ThinkPad (20C600JHFR) which came with a very thin 7200 RPM 500 GB disk.
    As I use Windows 7 and Ubuntu, I'm short of disk space.
    Alas, I can't find 1TB disks in 7 mm thickness, but they are quite common at 9.5 mm. So I was wondering if it is possible to remove the blue foam pads to gain the extra space to mount the drive. Is this foam part of the shock-detection device?
    And if so, will it void the warranty?
    Many thanks in advance for your help.


  • Maximum Storage Size in cDOT 8.2 Simulator???

    Hi All,
    I'm trying to help my customer preserve his 5 years of weekly Snapshots (!). Many of them are 32-bit and will not age out in the near future. He wants/needs to move to 8.3 due to a very dirty SMB bug (currently on cDOT 8.2P3), and because we don't offer 32-bit Snapshot capability in 8.3, I have the following in mind: install the cDOT 8.2 Simulator in VMware, and SnapMirror all data from the current SnapVault destination into the sim to preserve it forever. Then delete the 32-bit Snapshots on the physical system and upgrade the environment to 8.3. Capacity is around 6 TB. Can the 8.2 cDOT Simulator handle this?
    Bringing the data to the cloud is not an option due to personal data, and buying new gear is also not an option because they just bought new gear and are very unhappy about our Snapshot "story"...
    Regards,
    Oli

    That's much too large for the sim. The one we can download is good for about 220 GB raw, though it can be configured up to about 0.5 TB raw.
    Edge would be a much better fit, but you would need a full version key for FDvM200. The evaluation version has a 2 TB limit, but the full version can take 10 TB raw. Even after WAFL reserve and peeling off a root vol, it should comfortably hold your 6 TB archive.
    Cloud ONTAP wouldn't work for you anyway, since it can't run anything older than 8.3RC1.
