How to view or list cluster and block size of an ocfs2 filesystem

Hello all...
I have read the ocfs2 user guide and do not see a way to list the block or cluster size of an ocfs2 filesystem, or other properties similar to tune2fs -l.
Could someone post a way they have found?

man tunefs.ocfs2
-Q, --query query-format
Query the file system for its attributes like block size, label, etc. Query formats are modified versions of the standard printf(3) formatting. The format is made up of static strings (which may include standard C character escapes for newlines, tabs, and other special characters) and printf(3) type formatters. The list of type specifiers is as follows:
B Block size in bytes
T Cluster size in bytes
N Number of node slots
R Root directory block number
Y System directory block number
P First cluster group block number
V Volume label
U Volume uuid
M Compat flags
H Incompat flags
O RO Compat flags
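
For example, to print the block and cluster size (the device path below is a placeholder; point it at your actual ocfs2 volume):

    # each type specifier above is prefixed with % in the query format
    tunefs.ocfs2 -Q "Block size: %B\nCluster size: %T\nLabel: %V\n" /dev/sdX1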

Similar Messages

  • How do I use Active Sync to view SharePoint Lists (Contacts and Calendars) on a Mobile Phone?

    We are attempting to use SharePoint 2010 in combination with Exchange 2010 to implement shared calendars and contact lists throughout our organization.  We are able to connect the lists to Outlook 2010, but have been unsuccessful in viewing
    the calendars and contact lists from our mobile phones.  How do we use Active Sync to view SharePoint Lists (Contacts and Calendars) on a Mobile Phone?
    In trying to answer this question, we have come across a few different possibilities, all of them falling just short of a long term solution for us.  After doing research, we found that Active Sync will only show the default folders of the account.  To
    solve this, we downloaded an Add-In for Outlook (CodeTwo FolderSync) to synchronize folders and synchronized our SharePoint list with a new Contact list in the default folder.  The issue we came across with this method is that the Add-In we are using
    is not capable of automatic synchronization.  There is a button and it must be clicked after every update is made, which is not ideal for our solution.  We then went to the company (CodeTwo) and found server side software (Exchange Sync) that they
    offer which will automatically synchronize the folders.  After installing that on the Exchange Server, we now are running into the issue of not being able to locate the SharePoint lists on the Exchange Server.
    Does anyone happen to know how we can get to the SharePoint lists from the Exchange Server?  Has anyone else been able to use shared contacts lists and calendars from SharePoint on their mobile phones using Active Sync?  If so, are we in the right
    direction with what we have found so far?
    Thanks,
    Brad

    You cannot use ActiveSync for that, but there are SharePoint clients for the iPhone. Windows Phone 7 natively supports SharePoint with SharePoint Workspace Mobile, part of Microsoft Office Mobile. Android and BlackBerry may also have some apps.
    Use Microsoft SharePoint Workspace Mobile
    http://www.microsoft.com/windowsphone/en-us/howto/wp7/office/use-office-sharepoint-workspace-mobile.aspx
    iPhone SharePoint Apps Shootout
    http://www.codeproject.com/KB/iPhone/iPhoneSharePointApps.aspx 
    Comparing SharePoint iPhone Apps
    http://blog.praecipio.com/2010/11/02/comparing-sharepoint-iphone-apps/
    MCTS: Messaging | MCSE: S+M

  • How to save a list template and make use of it in another website as webpart ?

    How to save a list template and make use of it in another website as webpart ?
    1. I saved "List A" as an .STP template.
    2. In another site's document workspace, I tried Add Web Part -> Browse -> Upload to upload the .STP file, but after that the List A web part still doesn't appear.
    Any clue as to which part went wrong?

    Hi,
    First you need to upload the .STP file to the site's List Template Gallery; then you can add the list (under a custom name) to the web page.
    see here for how to-
    http://office.microsoft.com/en-us/sharepoint-server-help/copy-or-move-a-list-by-using-a-list-template-HA101782479.aspx
    Hope this helps!
    Ram - SharePoint Architect
    Blog - SharePointDeveloper.in
    Please vote or mark your question answered, if my reply helps you

  • How can I create a list box and dropdown in my report?

    Dear experts,
    How can I create a list box / dropdown in my report?
    Thanks in advance.

    Please see the code given below.
    REPORT Z_LISTBOX.
    * Data declaration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    TYPE-POOLS: VRM.
    DATA: NAME  TYPE VRM_ID,
          LIST  TYPE VRM_VALUES,
          VALUE LIKE LINE OF LIST.
    * Selection screen ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    PARAMETERS: PS_PARM(10) AS LISTBOX VISIBLE LENGTH 10.
    * At selection-screen output ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    AT SELECTION-SCREEN OUTPUT.
      NAME = 'PS_PARM'.
    * Build the value list for the listbox
      VALUE-KEY = '1'.
      VALUE-TEXT = 'Line 1'.
      APPEND VALUE TO LIST.
      VALUE-KEY = '2'.
      VALUE-TEXT = 'Line 2'.
      APPEND VALUE TO LIST.
    * Attach the value list to the listbox parameter
      CALL FUNCTION 'VRM_SET_VALUES'
           EXPORTING
                ID     = NAME
                VALUES = LIST.
    START-OF-SELECTION.
    Regards,
    Joy.

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a data warehouse on Oracle 11g R2, and currently I am trying to work out a storage strategy - unfortunately I have very little experience with that. The question is: what is the general advice regarding tablespaces and block size?
    I did some research and it is hard to find a clear answer. There are resources advising that block size is not important and can be left small (8KB); others state that it is crucial and should be the biggest possible (64KB).
    The other question is what part of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice, as it may lead to decreased performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up, and that is why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions on separate tablespaces?
    Any help highly appreciated and thank you in advance.

    Wojtus-J wrote:
    > We are preparing to implement a data warehouse on Oracle 11g R2, and currently I am trying to work out a storage strategy - unfortunately I have very little experience with that.
    With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
    > The question is: what is the general advice regarding tablespaces and block size?
    If you need to ask about block sizes, use the default (i.e. 8KB).
    > I did some research and it is hard to find a clear answer.
    But if you get contradictory advice from this forum, how would you decide which bits to follow?
    A couple of sensible guidelines when researching on the internet: look for material that is datestamped with recent dates (the last couple of years), or that references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant, and greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    > The other question is what part of the data should be placed where? [...] How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions on separate tablespaces?
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance argument was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons: big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition the data so that a large fraction of it can eventually be made read-only. Using tablespaces to mark time boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary, e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
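    A rough sketch of that aging approach in SQL (all table, partition, and tablespace names below are invented for illustration):

        -- a time-specific tablespace for aged-out data
        CREATE TABLESPACE sales_2011_q1
          DATAFILE '/u01/oradata/dw/sales_2011_q1.dbf' SIZE 10G;

        -- move an old partition into it, rebuild its local index partition,
        -- then freeze the tablespace
        ALTER TABLE sales MOVE PARTITION sales_jan2011 TABLESPACE sales_2011_q1;
        ALTER INDEX sales_pk REBUILD PARTITION sales_jan2011 TABLESPACE sales_2011_q1;
        ALTER TABLESPACE sales_2011_q1 READ ONLY;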
    Regards
    Jonathan Lewis

  • DASYLAB QUERIES on Sampling Rate and Block Size

    HELP!!!! I have been wrestling with DASYLab for a few weeks over some problems and haven't come to any conclusion. I hope someone will be able to help. Lots of thanks!
    1. I need more data points, so I increase the sampling rate (SR). When the sampling rate is increased, the block size (BS) increases correspondingly.
    For a low sampling rate (SR < 100 Hz) and a block size of 1, the recorded time in DASYLab and the real experimental time are the same. But problems start when SR > 100 Hz with BS = 1: I realized that the recorded time in DASYLab differs from the real time. To solve the time difference problem, I decided to use "AUTO" block size.
    Qn1: Is there any way to solve the time difference problem for high SR?
    Qn2: For auto block size, is the recorded result in DASYLab at a given moment the actual value, or has it been overwritten by the value from the previous block?
    2. I have tried getting the result both for BS = 1 and for auto BS. Regardless of the sampling rate, the values obtained when BS = 1 are always larger than those with auto block size.
    Qn1: Which is the actual result of the test?
    Qn2: Is there any best combination of block size and sampling rate that should be used?
    I hope someone is able to help me with the above problems.
    Thanks a million!!!!
    Message Edited by JasTan on 03-24-2008 05:37 AM

    Generally, the DASYLab sampling rate to block size ratio should be between 2:1 and 10:1.
    If your sample rate is 1000, the block size should be 500 to no smaller than 100.
    Very large block sizes that encompass more than 1 second worth of data often cause display delays that frustrate users.
    Very small block sizes that have less than 10 ms of data cause DASYLab to bog down.
    A sample rate of 100 samples/second with a block size of 1 is going to cause DASYLab to bog down.
    There are many factors that contribute to performance, or lack thereof - the speed and on-board buffers of the data acquisition device; the speed, memory, and video capabilities of the computer; and the complexity of the worksheet. As a result, we cannot be more specific, other than to provide you with the rule of thumb above and suggest that you experiment with various settings, as you have done.
    Usually the only reason that you want a small block size is for closed loop control applications. My usual advice is that DASYLab control is around 1 to 10 samples/second. Much faster, and delays start to set in. If you need fast, tight control loops, there are better solutions that don't involve Microsoft Windows and DASYLab.
    Q1 - without knowing more about your hardware, I cannot answer the question, but, see above. Keep the block size ratio between 2:1 and 10:1.
    Q2 - without knowing more about your hardware and the driver, I'm not sure that I can fully answer the question. In general, the DASYLab driver instructs the DAQ device driver to program the DAQ device to a certain sampling rate and buffer size. The DASYLab driver then retrieves the data from the intermediate buffers and feeds it to the DASYLab A/D Input module. If the intermediate buffers are too small, or the sample rate exceeds the capability of the built-in buffers on the hardware, then data might be overwritten. You should have received warning or error messages from the driver.
    Q3 - See above.
    It may be that your hardware driver is not configured correctly. What DAQ device, driver, DASYLab version, and operating system are you using? How much memory do you have? How complex is your worksheet? Are you doing control?
    Have you contacted your DASYLab reseller for more help? They should know your hardware better than I do.
    - cj
    Measurement Computing (MCC) has free technical support. Visit www.mccdaq.com and click on the "Support" tab for all support options, including DASYLab.

  • iOS 5 Update New Features says that iCal now has a week view for iPhone, but after updating last weekend I don't see any week view - just List, Day and Month. How do I get the week view?

    The iOS 5 Update New Features page says that Calendar now has a week view, but after updating my iPhone 4 last weekend I don't see any week view - just List, Day, and Month as in the past. Is the weekly view available and, if so, how do I access it?

    Rotate your phone to landscape.

  • How to view pre-built Dashboards and Reports of Oracle BI Apps?

    Hi All,
    We have successfully installed and configured Oracle BI Apps, we have configured OracleBIAnalyticsApps.rpd, and we are able to view the different subject areas.
    But we are facing a problem viewing any prebuilt dashboard or report.
    Is it possible to view the pre-built Dashboards and Reports of Oracle BI Apps if we don't have any ERP application as a source?
    As we don't have an ERP application, we have loaded the data from the sample flat files into the Analytics warehouse.
    Any help would be highly appreciated.
    Regards,
    Manmohan Sharma

    Hi Damon,
    Thanks a lot.
    Now I understand that we have to have an ERP data source if we want to view the prebuilt Dashboards and reports.
    Could you please provide the path to the prebuilt Dashboards and reports?
    Regards,
    Manmohan Sharma

  • How to get index of cluster and make popup vi for every event

    Hi,
    I have a problem: I have to find out the index of an element in a cluster.
    I have a "mouse down" event on this cluster, and the index should be the value where I put my mouse.
    The second thing is that currently I am only able to display one popup, but I need a popup for each element where I move my mouse.
    Thanks in advance.
    BR

    altenbach wrote:
    Use a "value change" event on the array and do a "not equal" on the old a new value. Only the changed value will result in a  true. Use search array to find the TRUE in the "not equal" array and the resulting index points to the changed element. You can do basically the same with clusters, by using "cluster to array" at the right time.
    Here's what I had in mind (LabVIEW 8.2)
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    DetectElementChange.vi ‏17 KB

  • Transaction execution time and block size

    Hi,
    I have an Oracle Database 11g R2 64-bit database on Oracle Linux 5.6. My system has ONE hard drive.
    Recently I experimented with an 8.5 GB database in a TPC-E test, watching the transaction time for 2K, 4K, and 8K Oracle block sizes. Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
    In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
    The 2K block database had 3 datafiles, each 7GB.
    The 4K block database had 2 datafiles, each 10GB.
    The 8K block database had 1 datafile of 20GB.
    The best transaction execution time was with the 8K block; the 4K block had slightly longer transaction times, but the 2K block definitely had the worst transaction time.
    I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, the largest table in the database (2.9GB), and was executing slowly (the number of executions was low compared to the 8K numbers).
    Now here is my question: is it possible that the multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning the DBA trade, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics.
    Thanks to all.

    It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
    > I have an Oracle Database 11g R2 64-bit database on Oracle Linux 5.6. My system has ONE hard drive.
    A single drive does make it a little too easy for apparently random variation in performance to appear.
    > Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up.
    Did you do anything to ensure that the physical location of the data files was a very close match across databases? Inner tracks vs. outer tracks could make a difference.
    > (each time the SGA and PGA parameters were identical).
    Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to changes in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
    > In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits.
    If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
    > The best transaction execution time was with the 8K block; the 4K block had slightly longer transaction times, but the 2K block definitely had the worst transaction time.
    We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation, and then it has to be consistent and reproducible.
    > I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, the largest table in the database (2.9GB), and was executing slowly.
    Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table? If not, then it consists of one segment, so did you mean to say "blocks" rather than segments? If blocks, which class of blocks?
    > Is it possible that the multiple datafiles are the reason for these slow transaction times?
    On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out, there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
    If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
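    For reference, a minimal sketch of enabling that trace for one session (the SID and SERIAL# below are placeholders; look them up in V$SESSION first):

        -- enable extended SQL trace, including wait events, for one session
        BEGIN
          dbms_monitor.session_trace_enable(
            session_id => 123,   -- placeholder SID
            serial_num => 4567,  -- placeholder SERIAL#
            waits      => TRUE,
            binds      => FALSE);
        END;
        /
        -- run the slow transactions, disable with dbms_monitor.session_trace_disable,
        -- then summarise the resulting trace file with tkprof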
    Regards
    Jonathan Lewis

  • Specifying segments and block size manually

    Hi, just a quick question.
    Could anyone help me understand why someone might manually add segments to a tablespace (or is it a datafile they would be added to)? Doesn't autoextend take care of this?
    And secondly, why would you increase or decrease the block size of a segment? Is this because you may have small or large rows within a table and want a block size to match?
    Any help would be appreciated.

    Hi,
    In Oracle, free space can be managed automatically or manually. You specify automatic segment-space management when you create a locally managed tablespace.
    Free space can be managed automatically inside database segments, with the in-segment free/used space tracked using bitmaps as opposed to free lists. Automatic segment-space management offers the following benefits:
    -Ease of use
    -Better space utilization, especially for objects with highly varying row sizes
    -Better run-time adjustment to variations in concurrent access
    -Better multi-instance behavior in terms of performance/space utilization
    For manually managed tablespaces, two space management parameters, PCTFREE and PCTUSED, enable you to control the use of free space for inserts and updates to the rows in all the data blocks of a particular segment. Specify these parameters when you create or alter a table or cluster (which has its own data segment). You can also specify the storage parameter PCTFREE when creating or altering an index (which has its own index segment).
    See this link:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96524/b_deprec.htm#634923 :)
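    A minimal illustration of the two approaches (tablespace, datafile, and table names are invented):

        -- locally managed tablespace with automatic segment-space management
        CREATE TABLESPACE ts_auto
          DATAFILE '/u01/oradata/db/ts_auto01.dbf' SIZE 1G
          EXTENT MANAGEMENT LOCAL
          SEGMENT SPACE MANAGEMENT AUTO;

        -- manual space management: PCTFREE/PCTUSED control per-block free space
        -- (assumes ts_manual was created with SEGMENT SPACE MANAGEMENT MANUAL)
        CREATE TABLE t_example (id NUMBER, payload VARCHAR2(200))
          TABLESPACE ts_manual
          PCTFREE 20 PCTUSED 40;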

  • How to view browser history off-line, full size

    hello - I am able to search for and locate web pages I visited many months ago; however, I don't know how to view them as they were then. For instance, one was a blog with links I tried to revisit, so I found it, but the blog no longer exists. I figure that since the text and image were saved (obviously I was able to search for the text and view a thumbnail) they are probably available somehow "offline". How can I view pages visited off-line rather than revisiting them? Thanks!

    Assuming you are using Safari as your browser: in its general preferences there's a "Remove history items" popup menu which gives you choices of when to delete history, including "manually" if you so choose. Other browsers have similar options (e.g., Firefox).
    As for what's kept, it's URLs, not whole web pages. So if a URL is old enough, there is a good chance that the page it points to has changed or no longer exists.

  • How do I change the font and graphic size when using Firefox

    I have recently switched from Windows to Mac. When resizing the window on the Mac, the font and graphic size do not follow - they stay the same size, and only more "white space" is created by increasing the window size. How do I get the graphics and fonts to adapt to the new window size, or at least make the whole image bigger? I am using Firefox 3.6.12 and Mac OS X 10.6.5. Thanks for the help.

    You can select whether to zoom the full page or only the text: View > Zoom > Zoom Text Only.
    Firefox 3 can remember the zoom level per site.
    See:
    * http://kb.mozillazine.org/Zoom_text_of_web_pages
    * http://kb.mozillazine.org/browser.zoom.siteSpecific
    * http://kb.mozillazine.org/browser.zoom.full

  • RAID, ASM, and Block Size

    * This was posted in the "Installation" Thread, but I copied it here to see if I can get more responses, Thank you.*
    Hello,
    I am about to set up a new Oracle 10.2 database server. In the past I used RAID 5, since 1) it was a fairly small database, 2) there were not a lot of writes, 3) it gave high availability, and 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow: an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a 16KB block size, versus our normal tablespace with an 8KB block size, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I setup the directory structure? How about partitioning?
    2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!
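
    (For reference: a tablespace with a non-default block size needs a matching buffer cache configured first. A minimal sketch, with invented names and sizes:)

        -- a non-default block size requires its own buffer cache
        ALTER SYSTEM SET db_16k_cache_size = 128M;

        -- then a tablespace can be created with that block size
        CREATE TABLESPACE ts_16k
          DATAFILE '/u01/oradata/db/ts_16k01.dbf' SIZE 4G
          BLOCKSIZE 16K;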

    Hi,
    RAID 0 does indeed offer the best performance; however, if any one drive of the striped set fails, you will lose all your data. If you have not considered a backup strategy, now would be the time to do so. For redundancy, a RAID 1 mirror might be a better option, as it offers a safety net in case of a single drive failure. A RAID is not a backup, and you should always have a workable backup strategy.
    Purchase another 2 x 1TB drives and you could consider RAID 10: two stripes, mirrored.
    Not all your files will be large ones, as I'm guessing you'll also be using this machine for the usual mundane matters such as email etc. Selecting a larger block size with small file sizes usually decreases performance, so you have to consider all your applications and file sizes before deciding whether a 32K block size is best.
    My 2p
    Tony

  • Raid 0 (Stripe) for OS X boot disk? Best Performance and block size

    Hi,
    this is a new thread for an older question of mine, and I would like some feedback.
    I have a new Mac Pro with 4 matched 1TB Caviar Black drives. I WILL be doing full Time Machine backups, as well as a regular independent full-system backup.
    That being said, I have 4 drives open and am looking for suggestions. I am leaning toward 2 striped sets (one for the OS and one for 'work space'), the former with a 32K stripe block size, the latter with 64K (it will hold video, audio, scratch, and, yes, games).
    Does this sound alright? Is there an issue with striping the boot drive? Is a block size of 32K (or 64K) optimal?
    Thanks!
    Dan

    Hi D3 Shooter, regarding your question,
    D3 Shooter wrote:
    > You brought to mind something I did not take into consideration: Time Machine. I really like the simplicity of TM, as it saved me once before. So, could you tell me, for photo files and some video, how much (% wise) striping improves the accessing and filing of such files compared to no striping, using internal drives (7200/WD/1TB/Caviar)? I have not done striping before and want to weigh in because of the backup storage issues now. Thanks.
    Just give it a try and see if it is worth it for you.
    Striping:
    • just enhances (reduces) the access/transfer time, because in practice the access is distributed in parallel across several DDMs (old school, but it works great!). I think for video and file work the advantage is that you can access the whole object sooner (rather than faster).
    • this distribution also reduces a load of old-style queuing on the device over the path. This was resolved in the late 1980s, so no real rocket science here.
    The issues with striping are few, and apply broadly across all the RAID implementations (except JBOD, which of course is not RAID) when compared to a single spindle. The discussions are enormous and plentiful via Google, and experiences and opinions vary widely.
    For the I.T. people, the advantage is access using a smart disk controller that caches goodies like indexes and such, so that they can sustain a zillion trivial transactions/sec (i.e. banking & internet stuff) - stuff that is of no interest to me.
    For creative people, and the many applications that handle BLOBs (like video, film, and remote sensing objects), getting use of the objects sooner (not faster) is of prime importance for workflow efficiency. If you have this need, then striping stuff across disks is for you!
    TimeMachine.app works fine, as it seems fairly agnostic to what is implemented under the disk file system. My issue with Time Machine is that I don't want it looking after my production stuff, only keeping an eye on my admin I.T.-type stuff such as ~/ and data files.
    As posted on this thread:
    • availability is the major concern with any file system (cloud or RAID or other). RAID with parity schemes and double-parity schemes (RAID 1, 3, 5, 6) and implementations such as RAID 6 + LSF (log structured file) are all wonderful for the business workflows that need them.
    • timely access in a workflow is another
    • cost benefit is another
    However, a great benefit for me of consolidating small storage components under one huge file system is that you don't have to COPY anything around. This is marvelous, especially when you think you have to move 2TB of stuff from one place to another. That takes a lot of time with el cheapo disks that don't have fast interfaces such as SATA/SAS or FC, for example.
    As always, and as has been addressed by others on this thread (Hatter), if you lose a component storage device the whole file system is hosed or severely degraded, unless you spend a lot of money on full ranks of DDMs with hot spares and a very good RAID controller card. Again, it's money.
    Yeah, sure, you can carry some parity RAID implementation across 3 disks, but the storage capacity usage is dreadful. This is why the more complex RAID implementations come in groups of 10+ DDMs (yep, people can argue, but this is the mainstream).
    My external disk arrays are merely two LUNs (SAS domains) that have two file systems (4TB each) implemented using 1TB DDMs - all RAID 0, no parity (no availability) - I just want speed. I look after my own "availability" with my archive solution. If the operation dies, I start again. I'm happy with that. RAID 5 has the well-known "update in place" write penalty; RAID 6+ is lousy for huge objects but good for I.T., and is still okay if you lose two disks in a stripe (rank).
    They all have their flaws... and mirroring a RAID 0 (RAID 1/0) seems to be popular with storage vendors because they can sell you more disk, and that's proper business: workflow depends on it.
    However, you can achieve this stuff if you change your workflow slightly.
    Other than these, the rest is tech specs and stuff under the covers.
    So decide what is right for you and your business.
    I don't like spending money on nasty el cheapo FW800 LaCie disk enclosures with their junky components and their ilk, having been burned by several corrupted devices and losing TBs of content - this is why I invested in a high-speed LTO4 Ultrium data tape archive solution.
    Sorry for the long post.
    w
