Changing BTRFS RAID level

Linux 3.3 was supposed to have introduced support for re-striping between RAID levels. Does anyone know how to go about doing so, or know of any good tutorials/documentation?
I'm thinking of getting a second drive, formatting it with btrfs, copying over my existing drive, and then adding the old drive and converting to RAID1. If I can figure out how, that is.
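A rough sketch of what I think the steps would be, going by the btrfs-progs man pages (device names and mount point are placeholders), in case someone can confirm:
# add the old drive to the new filesystem, then convert data and metadata to RAID1
btrfs device add /dev/sdb /mnt
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt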

I am running a similar setup to the one you describe. Yes, syslinux has a difficult time booting a RAID1 btrfs array (it may be possible, though?). I wound up giving up on syslinux and just using GRUB.
I left a small ext3 partition for GRUB on both of my drives. grub-install <disk> works flawlessly this way. Technically, I don't think you need a dedicated /boot with GRUB now (as it now has full btrfs support?).
For both disks, gdisk -l looks the same:
Number  Start (sector)  End (sector)  Size       Code  Name
   1    2048            6143          2.0 MiB    EF02  BIOS boot partition
   2    6144            1953523021    931.5 GiB  8300  Linux filesystem
I then created my btrfs raid1 array:
mkfs.btrfs -L TANK -d raid1 -m raid1 /dev/sda2 /dev/sdb2
grub-install went smoothly.
grub-install /dev/sda
grub-install /dev/sdb
grub-mkconfig -o /boot/grub/grub.cfg
MAKE sure you add the "btrfs" hook to the HOOKS array in mkinitcpio.conf and regenerate your initramfs-linux.img (something like the lines below). Somehow I forget to do this sometimes...
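A minimal example, assuming a stock /etc/mkinitcpio.conf and the default linux preset (your HOOKS line will differ; the important part is the btrfs entry):
# /etc/mkinitcpio.conf: add the btrfs hook before filesystems
HOOKS="base udev autodetect modconf block btrfs filesystems keyboard fsck"
# then regenerate the image
mkinitcpio -p linux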
The resulting setup can be booted even if one drive fails (in degraded mode, at least). If a drive were to fail, it's a little bit of a pain to set up the partitions on the replacement drive, add the device to the degraded btrfs, and install GRUB to it (roughly the steps below)... but hey. It's relatively bulletproof, even more so with weekly snapshots and backups to a non-btrfs device.
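Off the top of my head, the recovery would look something like this; treat it as a sketch, assuming sda survives, sdb is the replacement, and the array gets mounted at /mnt:
# mount the surviving half of the array degraded
mount -o degraded /dev/sda2 /mnt
# clone the partition table onto the new disk and randomize its GUIDs
sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb
# add the new partition and drop the missing device
btrfs device add /dev/sdb2 /mnt
btrfs device delete missing /mnt
# reinstall the bootloader on the replacement
grub-install /dev/sdb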
Anyway I am really enjoying the btrfs features!  good luck with the install.
Last edited by cteampride (2012-05-20 01:10:35)

Similar Messages

  • Raid Level 5+6 under 10.5.6 ?

    We couldn't use our SCSI RAID Level 6 under 10.5.2/10.5.3/10.5.4/10.5.5:
    heavy CPU load after a while and a crash of the AFP service.
    Now the RAID is for sale!
    Murphy's law: it seems that Apple has fixed some kinds of problems with RAID
    http://support.apple.com/kb/HT3192
    Does anybody already have experience with 10.5.6 and the use of RAID Level 5 or Level 6 on a server?
    Any information would help.
    thanks

    No version of Mac OS X supports RAID 5 or 6. These RAID levels are always implemented in hardware and are therefore not subject to the whims of the OS. In reality the RAID is shrouded by the controller, the OS has no idea of the underlying array format, and therefore there is nothing to fix in the OS.
    I think the pertinent part of your post, though, is the 'SCSI' element. I expect that any problems you have relate to the SCSI card you're using and not the OS. Since there are no specific SCSI fixes in 10.5.6 I doubt anything has changed (but Apple are notorious for not detailing all the low-level changes they make, so you might be lucky).
    You'd have a better chance, though, of looking at the SCSI card you're using (what make/model is it?) and checking the drivers provided by the vendor. Most SCSI problems in my experience have come down to card and drivers, not necessarily the OS.

  • RAID Level Configuration Best Practices

    Hi Guys,
    We are building a new virtual environment for SQL Server and have to define the RAID level configuration for the SQL Server setup.
    Please share your thoughts on RAID configuration for SQL data, log, tempdb, and backup files.
    File              RAID Level
    SQL data files    -->
    SQL log files     -->
    Tempdb data       -->
    Tempdb log        -->
    Backup files      -->
    Any other configuration best practices are more than welcome,
    like memory settings at the OS level and LUN settings.
    Best practices to configure SQL Server in Hyper-V with clustering.
    Thank you

    Hi,
    If you can shell out some extra bucks, you should go for RAID 10 for all files. As a best practice, keeping the database log and data files on different physical drives gives optimum performance. Tempdb can be placed with the data files or on a different drive depending on usage; it's always good to use a dedicated drive for tempdb.
    For memory settings, please refer to the linked article on setting max server memory. You should monitor SQL Server memory usage using the counters below, taken from the linked reference.
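    As a rough illustration of setting max server memory (the server name and the 28 GB value are placeholders, e.g. for a 32 GB host; adjust to your environment):
    sqlcmd -S MYSERVER -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
    sqlcmd -S MYSERVER -E -Q "EXEC sp_configure 'max server memory (MB)', 28672; RECONFIGURE;"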
    SQLServer:Buffer Manager--Buffer Cache hit ratio (BCHR): if your BCHR is high, 90 to 100, it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, BCHR might momentarily drop to 60 or 70, maybe less, but that does not mean there is memory pressure; it means the query requires a lot of memory and will take it. After that query completes you will see BCHR rising again.
    SQLServer:Buffer Manager--Page Life Expectancy (PLE): PLE shows how long a page remains in the buffer pool. The longer it stays, the better. It's a common misconception to take 300 as a baseline for PLE, but it is not; I read in Jonathan Kehayias's book (Troubleshooting SQL Server) that this value was a baseline when SQL Server 2000 was current and the most RAM one could see was 4-6 GB. Now, with 200 GB of RAM in the picture, this value is not correct. He also gave a (tentative) formula for how to calculate it: take the base counter value of 300 presented by most resources, and then determine a multiple of this value based on the configured buffer cache size, which is the 'max server memory' sp_configure option in SQL Server, divided by 4 GB.
    So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has served me well, so I would recommend you use it.
    SQLServer:Buffer Manager--Checkpoint pages/sec: the checkpoint pages/sec counter is important for spotting memory pressure, because if the buffer cache is small then lots of new pages need to be brought into and flushed out of the buffer pool; under load the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high then your SQL Server buffer pool is not able to cope with the incoming requests, and you need to increase it by increasing buffer pool memory, or by increasing physical RAM and then making adequate changes to the buffer pool size. This value should be low; if you are looking at a line graph in perfmon it should stay near the baseline on a stable system.
    SQLServer:Buffer Manager--Free pages: this value should not be low; you always want to see a high value for it.
    SQLServer:Memory Manager--Memory Grants Pending: if you see memory grants pending, your server is facing a SQL Server memory crunch and increasing memory would be a good idea. For memory grants please read this article:
    http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
    SQLServer:Memory Manager--Target Server Memory: this is the amount of memory SQL Server is trying to acquire.
    SQLServer:Memory Manager--Total Server Memory: this is the memory SQL Server has currently acquired. You can also watch all of these counters from the command line; see the sketch below.
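    A small example with typeperf, assuming a default instance (named instances use MSSQL$<name> in the counter path), sampling every 15 seconds:
    typeperf -si 15 ^
      "\SQLServer:Buffer Manager\Buffer cache hit ratio" ^
      "\SQLServer:Buffer Manager\Page life expectancy" ^
      "\SQLServer:Buffer Manager\Checkpoint pages/sec" ^
      "\SQLServer:Memory Manager\Memory Grants Pending" ^
      "\SQLServer:Memory Manager\Total Server Memory (KB)"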
    For other settings I would like you to discuss with the vendor; storage questions IMO should be directed to the vendor.
    Below would surely be a good read
    SAN storage best practice For SQL Server
    SQLCAT best practice for SQL Server storage

  • Hitachi Array Configuration  (Cache - RAID levels ? on Oracle9i/Solaris9)

    I shall install Oracle9i Release 2 on a Sun Enterprise 4500 with 12 processors, using a Hitachi 9200 array (two controllers, 512 MB each, and ten 73 GB drives). IT recommended and decided on no fail-over and/or clustering.
    1) Does anyone have any previous experience in configuring the cache arrays, with specific recommendations? We plan to put the redo logs on that cache as a leading-edge mechanism from that vendor. Are there any known drawbacks?
    2) Hitachi suggests that their technology is fast enough to handle RAID 5. What RAID levels do you recommend for permanent datafiles, in particular for indexes, and also for temporary datafiles? Oracle's listed recommendations have been RAID 0+1 for datafiles, and RAID 1 for redo log files and archivelog files.
    3) Has anyone implemented RAID 5 successfully on Hitachi before? The database server will primarily be an MTS with some hybrid EIS features for primarily overnight processing, and two consumer groups are associated.

    Great work vango44!
    Here are some RAID performance statistics I gathered while testing RAID on my system.  The testing software was Winbench 99.  The hard drives tested were new Seagate ST380013AS drives, formatted NTFS.  Winbench was running on a third drive that is not included in the tests and should not affect the results.
    The drives were reformatted between tests and chkdsk'ed to try and keep things "apples to apples".
    No hardware or software changes other than the RAID setup/connections were made between tests.
    Higher numbers mean better performance.
    I also ran the same tests on the newish WD Raptor 10K drives:
    I couldn't stand all the noise the Raptors made, so I returned them.
    On my motherboard:
    SATA 1 & 2 = Intel RAID controller
    SATA 3 & 4 = Promise RAID controller
    If the test title does not include "RAID", then it was a single drive test.
    Unfortunately, I don't have a spreadsheet version of the above stats. Otherwise I'd create nice bar charts for us and it would be easier to deduce performance.
    Perhaps some kind reader will OCR the pictures, put them into Excel, and make some nice bar charts for us?
    Hope the info helps.

  • Which one raid level is better

    Hi:
    We have a 3310 RAID configured as RAID1 with one logical drive,
    using 10 disks (5 mirrored pairs) and 2 as spares.
    In the last 6 months we have had 5 fatal failures. The first time we changed the controller;
    the second time we changed the chassis; however, that did not fix the problem.
    Data was lost in each fatal failure.
    Is this normal?
    Maybe another RAID level would fix our problem?
    Our system is redundant: it has 5 primary servers and 5 backup servers.
    When a primary fails, the backup takes control.
    Only the 3310 RAID is not redundant.
    Which would be a better configuration?
    Is it possible to make the 3310 redundant?
    thanks

    Hi cmadero33,
    Did the Sun engineers upgrade the firmware on the 3310 when they replaced all the disks and controllers for you?
    I'm assuming this array is in a data center/server room with good cooling etc. (i.e. it's not cooking itself).
    David

  • RAID Levels

    Hello, I know that there are many different ways a database can be set up with regard to RAID levels, but what I have found is that you get told what disks you have available and then try to work around it.
    Our infrastructure guys want to move 2 of my databases off the storage they are on (iSCSI drives on the SAN) to free up some space.
    They have offered me
    2 sets of local mirrored pairs of disks on the server and potentially some RAID 5 on the SAN for datafiles and RAID 10 for logs.
    Now,
    The local disks are big so it is a shame to not use the space.
    Would the following work ok do you think?
    Option 1
    Mirrored Pair 1: C: drive (Windows and Oracle binaries) + redo logs
    Mirrored Pair 2: data files and archive logs
    Can we have redo logs on the same drive as the O/S without potential performance problems? There isn't massive activity on these databases; generally only 0.6 log switches per hour (usually there is a peak of activity for a few hours, perhaps 6 log switches each hour, and then only 1 or 2 for the remaining hours of the day), with 50 MB redo logs.
    I presume having archive logs on the same as the data files storage with the low amount of log switching shouldn't be a performance problem.
    Option 2 (not keen, as these are disks with a few hundred GB that would only have a few hundred MB used)
    Mirrored Pair 1: C: drive (Windows and Oracle binaries) + archive logs
    Mirrored Pair 2: redo logs
    SAN RAID 5: data files
    The info I have found is that archive/redo logs should be on RAID 1,
    and datafiles should be on RAID 10, if not RAID 5 or RAID 1.
    Any advice as to what to do based on what I have available to me?
    Thanks
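    PS: in case it is useful, the log switch rate can be pulled from v$log_history with something like this (a rough sketch, assuming sqlplus access as sysdba):
    sqlplus -s / as sysdba <<'EOF'
    SELECT TO_CHAR(TRUNC(first_time, 'HH24'), 'YYYY-MM-DD HH24') AS hour,
           COUNT(*) AS log_switches
    FROM   v$log_history
    WHERE  first_time > SYSDATE - 1
    GROUP  BY TRUNC(first_time, 'HH24')
    ORDER  BY 1;
    EOF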

    I couldn't tell ya, since I only have a few things on Windows and don't like it. But I'll make these observations:
    Redo is Oracle's Achilles' heel. Lose it and you fail. You want it on multiple devices in addition to mirroring. The slowest device determines one bottleneck (with some possible modern exceptions).
    Modern RAID 5 is OK, except when it's not. The latter includes when a disk fails. Disks fail more often than MTBF implies.
    Check your apps load profile.  For my situation, undo is the most used data file.  YMMV.
    As has been said, more spindles is more better.  But the real determinant is data flow.  If you put everything through one controller, you might have a bottleneck.  If there is a net involved, something else might be on the net.  You need to consider everything from where Oracle requests data to what happens on the spinning rust, or cycling electrons on an SSD, as the case may be.  So if you have Windows randomly deciding to do stuff on a disk, and that interferes with redo, you can have mysterious slowdowns.  I couldn't tell you whether your setup would have this as an issue.  I can tell you sometimes a bored Oracle on Windows will take some time to wake up when you poke at it.
    Redo and archiving redo have sequential read and write characteristics.  Data files have random read and write characteristics.  Best not to mix the two if possible.
    You can worry about this until the cows come home, but in the end, you still have to empirically determine how your app actually works.  This can change over time, sometimes radically with very little provocation.  So try to leave yourself some wiggle room if whatever you decide on changes - and be ready if your organization suddenly can let some more hardware drop from above.
    I've heard good things about SLOB (Silly Little Oracle Benchmark), worth googling.  Good luck.

  • How to change default compatibility level

    How can one change the default setting for Compatibility, under Password Security - Setting. Default in Acrobat 8 is "Acrobat 5.0 and later". I would like to change this to "Acrobat 7.0 and later".

    Hi,
    You cannot change the logging level that appears in the console output.
    The log configuration you modified using EM will change the SOA log configuration. You can access the SOA logs in the $DOMAIN_HOME/servers/soa_server1/logs/soa_server1-diagnostics.log file. This is the file SOA uses to store its component logs.

  • Changing the Dunning level and reports for Dunning

    How can I change the 'Dunning level' for line items in the 'Dunning History/Dunning overview'? I have read in help.sap that I can change it, but I am not able to find where I could do that.
    Link: http://help.sap.com/erp2005_ehp_03/helpdata/EN/bc/2127c4265911d286d6006008bbc8e8/frameset.htm (in the 3rd para).
    Can anyone tell me if there are any reports available to view the 'Dunning list' (other than the Dunning History in F150)?
    Also, Our client needs a report which shows a list of all 'customers' and 'line items' Blocked for Dunning.
    Please give me your suggestions ASAP.
    Thanks
    Edited by: sapfan 20062007 on May 3, 2008 10:39 PM

    Thanks.
    I'm looking at running the ALFS editor. I'm on WebAS 7.00. The themes/templates and brand colours in the left frame do appear, but the BSP page does not. I used the standard BSP page which appears as a default (BSP app it05). The error message is as follows:
    The following error text was processed in the system:
    BSP Exception: The object entrypoint.htm_sap-themeRoot=/sap/public/bc/ur/design2002/themes/alfs1000101F5FFFA2F4F4F2929296F8FAF3F5F7F in the URL /sap/bc/bsp/sap/it05/entrypoint.htm&sap-themeRoot=%2fsap%2fpublic%2fbc%2fur%2fdesign2002%2fthemes%2falfs1000101F5FFFA2F4F4F2929296F8FAF3F5F7F is not valid.
    But if I run the BSP application manually then the right theme does get picked up. I made the entry in the bspthemeroot table and put the ALFS parameter in the above URL (1000101F5FFFA2F4F4F2929296F8FAF3F5F7F). The BSP application works fine but not the ALFS editor. Any ideas?

  • Advice on RAID Sets, Volume Sets, and RAID Levels of the Volume Sets using an Areca Controller

    I have read through a lot of information on disk usage, storage rules for an editing rig, users inquiries/member responses in this forum and I thank each and every one of you – especially Harm.
    In building my new workstation, I purchased five (5) WD 1T, 7k, 64M SATAIII hard drives and an Areca RAID card, ARC-1880ix-16-4G, which I plan to use primarily as my media/data disk array.  The workstation will use a 128GB SATAIII SSD as the OS/program drive and I will transfer two (2) WD Raptor/10k SATA 70GB drives from my current system for pagefile/scratch/render use.  I tentatively plan on using a mobo SATAIII port for the SSD and mobo SATA ports with a software RAID (level 0) for the 10k Raptors.
    In reading the Areca Instruction manual, I am now considering exactly how I should configure the 5 physical 1TB drives in terms of RAID Level(s), Volume Sets, and RAID Sets.  I must admit that I like the opportunity of allowing for a Dedicated Hot Spare as I am generally distrustful of the MTBF data that drive vendors tout and have the bad experience in the past of losing data from a mal-configured RAID array and a single drive hardware failure (admittedly, my fault!).
    In line with the logic that one doesn’t want to perform disk reading while trying to write at the same time (or vice-versa), I am thinking the approach above should work OK in using the mobo disk interface and both software and external hardware RAID controllers without having to create separate RAID level configurations within a Volume Set or further dividing up the physical drives into separate RAID sets.  I know in forum messages that Harm noted that he had 17 drives and I could envision a benefit to having separate RAID sets in that situation, but I am not at that point yet. 
    To some degree I think it might be best to just create one RAID Level on one Volume Set on one RAID Set, but want to solicit thoughts from veteran controller users on their workflows/thoughts in these regards.
    Anyone care to share thoughts/perspectives?  Thanks
    Bill

    Thanks for the speedy feedback Harm - I appreciate it.
    I was thinking RAID level 3 as well.
    Of course, it's always something! I purchased the Caviar Blacks by mistake, which are non-TLER. I will work with EggHead to return the ones I purchased and replace them with the RE4 versions, as I'm not thrilled about the possibility of the controller declaring the volume/disks degraded unnecessarily. And although I have the DOS utility WDTLER, with which one is supposed to be able to enable/disable TLER on WD drives, I suspect WD is way beyond that now anyway with current builds.
    I agree with you about just testing the performance of the options for the raptors - on the mobo and then on the controller.  When I benchmark them I'll post the results in case others are curious.
    Thanks again....off to EggHead!

  • I need to change the input level and I can not open the track info pane in the new garageband. Can you help me?

    I have tried the help menu. It only shows how to open the track info pane in the old GarageBand. It shows the Info button, which is not in GarageBand anymore. I need to open the track info pane to change the input level. If there is another way to do this, please tell me.

    Open the "Smart Controls" for your track, and then press the Info button .
    The "record level" slider is at the top of the Info panel below the track heads.

  • Exporting a project with multiple audio tracks changes the loudness level regardless of export format

    I recently had to submit a TVC and discovered that Premiere was changing the loudness level after export, when the TVC failed the quality check.
    I checked the audio loudness of other formats that I had exported and the same thing had happened: the audio loudness had changed.
    After a bit of troubleshooting I found a workaround: I mix down all of the tracks in Adobe Audition to one track and then import it into Premiere.
    Once I did this, the audio loudness was not changed after exporting.
    Workflow for when it didn't work:
    I used Edit in Adobe Audition for the audio tracks from Premiere,
    matched the audio to the loudness level I wanted, and then saved each track.
    Export from Premiere.
    To check audio loudness:
    Bring the exported file into a new Premiere project.
    Edit the audio in Adobe Audition.
    Drag the audio file into Match Audio to check the loudness level.
    I am using
    Adobe Premiere Pro CC v8.1
    Windows 7 on PC
    Please fix this, Adobe.

    I recently had to submit a TVC and discovered that Premiere was changing the loudness level after export, when the TVC failed the quality check.
    I checked the audio loudness of other formats that I had exported and the same thing had happened: the audio loudness had changed.
    I have never experienced that or had any need for a work around.
    What did your Broadcast Quality Control pick up on?
    Audio Levels and Loudness are two different things in this regard.

  • How to change the source level in java Studio creator

    Hi,
    I am using the Sun Java Studio Creator 2 Update 1 IDE; in this IDE, how do I change the source level to 1.5 for an existing project?
    Thanks in advance,
    Rajesh.

    Hi!
    Unfortunately Sun Java Studio Creator 2 Update 1 doesn't support the 1.5 source level. But you can try downloading the NetBeans IDE (from http://www.netbeans.org); there, with a Visual Web project, you will find the same functionality as in Creator, and the 1.5 source level is supported by NetBeans.
    Thanks,
    Roman.

  • How to change xserve raid-card battery?

    how to change xserve raid-card battery?

    Xserve RAID Card Battery Replacement Instructions:
    http://manuals.info.apple.com/MANUALS/0/MA277/en_US/Xserve_RAID_Card_Battery_DIY Instructions.pdf

  • RAID level for Redo

    Hi,
    My storage admin created RAID10 and RAID5 volumes for the database; I would like to know which RAID level is best for keeping the redo logs. Can someone tell me what's best for redo?
    Thanks

    Hello,
    For redo logs you need both performance and availability; that is why I put this comparison together for you.
    I would suggest RAID 10 for redo, since redo is almost entirely writes and, as the comparison below shows, RAID 10 gives better write performance and redundancy than RAID 5.
    RAID 5 vs RAID10
    Data Loss and Data Recovery
    Let us start off by having RAID 5 explained. In RAID 5, parity information is distributed across all of the disks. If there are 5 disks in the storage system, the capacity of 4 of them is available for data and the equivalent of one disk is used for parity. If one of the disks in the array fails, the data can be recovered, but in the event of a second disk failure, recovery is not possible. RAID 10, on the other hand, is a combination of RAID 0 and RAID 1. In a RAID 10 storage scheme an even number of disks is required: the disks are arranged in mirrored pairs, and data is striped across the pairs. RAID 10 can survive the loss of several disks, as long as both disks of the same mirrored pair are not lost, and in the case of a disk failure all the remaining disks can be used effectively without much impact on the storage scheme. For example, with ten equal disks, RAID 5 yields the capacity of 9 of them and tolerates one failure, while RAID 10 yields the capacity of 5 and can tolerate up to five failures spread across different pairs.
    Performance
    The RAID 5 performance in read operations is quite good, though its write performance is quite slow compared to RAID 10. RAID 10 is thus used for systems which require high write performance. Hence, it is quite obvious that RAID 10, not RAID 5, is the choice for systems like heavy databases which require high-speed write performance.
    Redundancy
    The RAID 10 arrays are more data redundant than the RAID 5 arrays. This makes RAID 10 an ideal option for the cases where high data redundancy is required.
    Architectural Flexibility
    RAID 10 provides more architectural flexibility compared to RAID 5, although more raw disk space is given up to redundancy if you use a RAID 10 data storage scheme.
    Controller Requirement
    RAID 5 demands a high-end card for good storage performance. If RAID 5 is instead handled by the operating system (software RAID), it will slow the computer down. For RAID 10, practically any hardware controller can be used.
    Applications
    RAID 10 finds a wide variety of applications. Systems with RAID 0, RAID 1 or RAID 5 storage schemes are often replaced with a RAID 10 storage scheme. They are mainly used for medium sized databases. RAID 5 disks are primarily used in the processes that require transactions. Relational databases are among the other fields that run very well under a RAID 5 storage scheme.
    With this, I complete the RAID 5 vs RAID 10 comparison. This comparison, I hope, will help you decide on the storage scheme that suits your purpose.
    kind regards
    Mohamed

  • File change for highest level page home page

    File change for the highest-level home page.
    When I set these files up, I would type server/mywork/bkb (the files are on the server under the folders mywork/bkb), and my index page would show up. That's fine.
    Now I have created a page called highestpage.cfm. Its purpose is to hold the link to myindex.cfm, and that works fine.
    But the problem I am really still confused about is this: the page shows up when I type server/mywork/bkb/myindex.cfm, but my goal is not to type that far. I just want to type server/mywork/bkb/ without myindex.cfm and still be taken to the right page, just like before, when server/mywork/bkb led me to myindex.cfm, except that now it should lead me to highestpage.cfm.
    I hope I am expressing myself correctly.
    Thanks for understanding.

    If I understand your situation correctly, you can accomplish this by setting the default web page names in your web server, whatever that might be: Apache, IIS, etc. The web server has a list of default file names (web pages) that it will serve if no specific file is requested; this is often index.html, index.cfm, etc. You want to add myindex.cfm to that list, as in the example below.
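    For instance, if this happens to be Apache and .htaccess overrides are allowed for that directory (the path below is just a guess at your layout; on IIS the equivalent setting lives under Default Document):
    # serve myindex.cfm when the bare directory URL is requested
    echo "DirectoryIndex myindex.cfm index.cfm index.html" >> /var/www/mywork/bkb/.htaccess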
