Max_io_size equivalent in Linux and block/stripe sizes

I'm configuring a Linux Red Hat 7.1 server for Oracle 9i Release 2. I'm trying to determine the best db_block_size and db_file_multiblock_read_count parameters. I know that these Oracle settings depend on the OS block size and the max_io_size of the OS.
Does anyone know what the Linux equivalent of the max_io_size parameter (Solaris) is, and how I set it in Linux? Does resetting it involve reinstalling Linux? Any suggestions on an appropriate range to set it? Is the default Linux 1K block size OK? (The server is a Compaq DL380 with a 1.4 GHz processor and 1 GB RAM.)
Additionally, I have a Compaq 5300 Series RAID controller (5i, integrated) that we plan to configure as RAID 0+1. Our controller only goes up to a stripe size of 256K, with a default of 128K. For a "general"-type database that could hold up to 80 GB of data over 50 or so tables, with possibly equal numbers of full-table scans and indexed scans, would you suggest I set the stripe size to 256K for the most flexibility down the road?
I don't fully understand what it takes to configure Linux and RAID for the best I/O for Oracle, so I'd really appreciate any suggestions, tips, or doc references that can help out.
Thanks,
Deb

The setting applies to both sd and ssd, in sd.conf and in /etc/system.
The following is a TNF report of the I/O size of my process, showing that the kernel is breaking the I/O down.
Sorry I wasn't clear on this part.
62.059582 16.185079 480 1 0x3000338ecc0 0 strategy device: 584115552256 block: 60396848 size: 1048576 buf: 0x30000a78340 flags: 34088209
306.154426 17.819569 480 1 0x3000338ecc0 0 strategy device: 584115552256 block: 60398896 size: 1048576 buf: 0x300035dcc00 flags: 34088209
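Both strategy calls above show size: 1048576, i.e. 1 MB physical reads. As a rough sketch of how that ceiling ties back to the Oracle parameters being discussed (the 8K block size here is an assumption for illustration):

    db_block_size = 8192     # bytes; assumed typical Oracle block size
    max_io = 1_048_576       # bytes; the 1 MB I/Os visible in the TNF trace above

    # db_file_multiblock_read_count is expressed in blocks, so the largest
    # value the OS can honour in one physical read is:
    mbrc = max_io // db_block_size
    print(mbrc)              # -> 128

Setting db_file_multiblock_read_count higher than this buys nothing; the kernel just splits the request into multiple physical I/Os, which is exactly what the trace shows.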

Similar Messages

  • RAID, ASM, and Block Size

* This was posted in the "Installation" thread, but I copied it here to see if I can get more responses. Thank you. *
    Hello,
I am about to set up a new Oracle 10.2 database server. In the past, I used RAID 5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) it gave high availability, and 4) it wasted less space compared to other RAID techniques.
However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow, to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a block size of 16KB versus our normal tablespace's block size of 8KB, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
1) Our server is a DELL PowerEdge 2850 with 4x146GB hard drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID 0 for ASM, since ASM handles mirroring and striping? How should I set up the directory structure? How about partitioning?
2) I am installing this on Linux, and when I tried to use a 32K block size on my old system, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!

    Hi
RAID 0 does indeed offer the best performance; however, if any one drive of the striped set fails, you will lose all your data. If you have not considered a backup strategy, now would be the time to do so. For redundancy, a RAID 1 mirror might be a better option, as it offers a safety net in case of a single drive failure. A RAID is not a backup, and you should always have a workable backup strategy.
Purchase another two 1TB drives and you could consider RAID 10: two stripes, mirrored.
Not all your files will be large ones, as I'm guessing you'll also be using this workstation for the usual mundane matters such as email? Selecting a larger block size with small file sizes usually decreases performance. You have to consider all applications and file sizes; in that case, the best block size would be 32k.
    My 2p
    Tony

  • Transaction execution time and block size

    Hi,
    I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction time for 2K, 4K, and 8K Oracle block sizes. Each time I started a new test with a different block size, I created a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
The 2K Oracle block database had 3 datafiles, each 7GB.
The 4K Oracle block database had 2 datafiles, each 10GB.
The 8K Oracle block database had 1 datafile of 20GB.
The best transaction execution time was with the 8K block; the 4K block had slightly longer transaction times, but the 2K Oracle block definitely had the worst transaction times.
I identified an SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, the largest table in the database (2.9GB), and it executed slowly (the number of executions was low compared to the 8K numbers).
Now here is my question: is it possible that the multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning to be a DBA, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics.
Thanks to all.

It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
"My system has ONE hard drive."
A single drive does make it a little too easy for apparently random variation in performance.
"Each time I started a new test with a different block size, I created a new database from scratch to avoid messing something up."
Did you do anything to ensure that the physical location of the data files was a very close match across databases? Inner tracks vs. outer tracks could make a difference.
"(each time the SGA and PGA parameters were identical)"
Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to a change in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
"In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits."
If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
"The best transaction execution time was with the 8K block; the 4K block had slightly longer transaction times, but the 2K Oracle block definitely had the worst transaction times."
We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation, and then it has to be consistent and reproducible.
"I identified an SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table."
Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table? If not, it consists of one segment, so did you mean to say "blocks" rather than segments? If blocks, which class of blocks?
"Is it possible that the multiple datafiles are the reason for these slow transaction times? How could I identify this multi-datafile problem by looking at the AWR statistics?"
On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out, there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
    Regards
    Jonathan Lewis
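A simplified model of the round-robin extent placement mentioned above (illustrative only; real allocation also depends on extent sizes and free space):

    # Extents of one object cycling over a tablespace's datafiles,
    # e.g. the 2K-block database above, which had 3 datafiles.
    num_files = 3

    def file_for_extent(extent_no):
        return extent_no % num_files

    print([file_for_extent(e) for e in range(8)])   # -> [0, 1, 2, 0, 1, 2, 0, 1]

With small extents spread like this, a full tablescan keeps hopping between files, which on a single spindle means longer head movements.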

  • ASM with block device - stripe size question

Implementing a 2-node RAC system on Linux/RHEL, 10g Release 2. The hardware is an HP StorageWorks SAN storage array. We plan to use external redundancy for the disk groups, and I read in one of the RACSIG Best Practices for ASM documents to select a stripe size of 1MB, or as close to that as possible. The storage array's maximum stripe size is 64K. Can anyone discuss/explain whether this will be an issue if we're using external redundancy?

Hi buddy,
"Can anyone discuss/explain whether this will be an issue if we're using external redundancy?"
The idea is to improve the I/O operations as much as possible. Up to 10g, ASM had two options for allocating space for its files: coarse and fine. For coarse striping, the AU (Allocation Unit) is 1MB; for fine striping, the size is 128K.
Most database files use the 1MB coarse AU; because of this, when ASM allocates 1MB of space, a 1MB array stripe guarantees a better distribution of the I/O across all the disks the LUN belongs to.
    Hope it helps,
    Cerreia
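A quick arithmetic sketch of why the 64K limit is workable: a 1MB coarse AU simply gets laid down as consecutive stripe units, so it still spans the disks in the LUN (the disk count below is a made-up example):

    au_size = 1024 * 1024     # 1 MB coarse ASM allocation unit
    stripe_unit = 64 * 1024   # the array's maximum stripe size
    disks = 8                 # hypothetical number of disks in the LUN

    chunks = au_size // stripe_unit
    print(chunks)             # -> 16 stripe units per AU
    print(chunks // disks)    # -> 2 stripe units per disk: still spread evenly

So each 1MB allocation is split into 16 stripe units instead of landing on one disk; the I/O is still distributed, just in smaller pieces.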

  • Tablespaces and block size in Data Warehouse

We are preparing to implement a data warehouse on Oracle 11g R2, and currently I am trying to work out a storage strategy; unfortunately I have very little experience with that. The question is: what is the general advice on tablespaces and block size? I did some research, and it is hard to find a clear answer; there are resources advising that block size is not important and can be left small (8KB), while others state that it is crucial and should be as big as possible (64KB). The other question is which part of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice, as it may lead to decreased performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up, and that is why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions in separate tablespaces?
    Any help highly appreciated and thank you in advance.

Wojtus-J wrote:
"We are preparing to implement a data warehouse on Oracle 11g R2, and currently I am trying to work out a storage strategy - unfortunately I have very little experience with that."
With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
"What is the general advice on tablespaces and block size?"
If you need to ask about block sizes, use the default (i.e. 8KB).
"I did some research, and it is hard to find a clear answer."
But if you get contradictory advice from this forum, how would you decide which bits to follow?
A couple of sensible guidelines when researching on the internet: look for material that is datestamped with recent dates (the last couple of years), or that references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant, and greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
"The other question is which part of the data should be placed where. [...] Is it a good idea to have "old" (read-only) data partitions in separate tablespaces?"
It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance argument was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons: big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition the data so that a large fraction of it can eventually be made read-only: using tablespaces to mark time boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary, e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
    Regards
    Jonathan Lewis
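To make the "daily partitions in a monthly tablespace" idea concrete, here is a tiny sketch of such a time-boundary mapping (the naming scheme is hypothetical):

    from datetime import date

    def tablespace_for(day):
        # Daily partitions packed into one tablespace per month; the
        # tablespace boundary deliberately does not match the partition boundary.
        return "TS_{:%Y_%m}".format(day)

    print(tablespace_for(date(2011, 3, 14)))   # -> TS_2011_03

Once a month's tablespace holds only aged data, it can be packed and made read-only, so it drops out of the regular backup cycle.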

  • Space utilization and optimal row size in Data Blocks in Oracle 11g

    Hi,
My main concern so far is to find the optimal number of rows/tuples per Oracle data block of size 8192. For that purpose I'm doing different kinds of testing.
    I created one table:
    SQL> create table t5
    2 (x char(2000));
    Table created.
    Inserted 4 rows in it.
    Queried to check in which block no's these 4 rows exist.
    SQL> SELECT x,DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) "Block No."
    2 from t5;
    x Block No.
    A 422
    B 422
    C 422
    D 423
    SQL> analyze table t5 compute statistics;
    Table analyzed.
    SQL> select LOGGING,BACKED_UP,NUM_ROWS,BLOCKS,EMPTY_BLOCKS, AVG_SPACE,CHAIN_CNT, AVG_ROW_LEN
    2 from user_tables
    3 where table_name = 'T5';
LOG B NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_SPACE CHAIN_CNT AVG_ROW_LEN
YES N 4 5 3 6466 0 2006
8 data blocks are initially allocated, but my question is why, even when creating tables of small sizes, I see every time that BLOCKS (used) is 5.
Why are 3 blocks reported as EMPTY_BLOCKS all the time?
AVG_SPACE is the free space in every used block, if I'm not wrong?
In this scenario 3 rows (each around 2006 bytes) are inserted into one block of 8192, i.e. 8192-6006=2186. Is that 2186 the PCTFREE, or what?
How can I see a CHAIN_CNT column value other than 0?
How can I actually find the optimal number of rows/tuples in one data block that will not increase the overhead on the system?
    I'll highly appreciate the genuine help and solid suggestions.
    Thanks in Advance.
    Best Regards,
    Kam

kamy555 wrote:
"If you want to suggest something it'll be nice of you. I can't disclose my main concern :)"
If you can't disclose what you mean by "optimal" and you can't disclose what "overhead" you are concerned about, I'm not sure that anyone could answer your question.
"In this scenario 3 rows (each around 2006 bytes) are inserted into one block of 8192, i.e. 8192-6006=2186. Is that 2186 the PCTFREE, or what?"
PCTFREE is a percentage. You haven't specified what you set it to, but it decreases the space in a block available for inserts to 8192 * (1 - PCTFREE/100).
"How can I see a CHAIN_CNT column value other than 0?"
Why do you believe there would be chained rows? You would need to use the ANALYZE command to populate the CHAIN_CNT column.
"How can I actually find the optimal number of rows/tuples in one data block that will not increase the overhead on the system?"
Since you can't disclose what "optimal" means or what "overhead" you're concerned with, I don't see how this question could be answered.
    Justin
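For what it's worth, the numbers in this thread line up with the PCTFREE formula above. A rough sketch, assuming the default PCTFREE of 10 (the poster never said what it was set to) and ignoring the fixed block header, ITL and row directory overhead:

    block_size = 8192
    pctfree = 10               # assumed default; not stated in the thread
    avg_row_len = 2006         # AVG_ROW_LEN reported by user_tables above

    usable = block_size * (1 - pctfree / 100)   # 7372.8 bytes available for inserts
    print(int(usable // avg_row_len))           # -> 3

Three rows per block is exactly what the ROWID query showed: rows A, B and C in block 422, with row D pushed into block 423.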

  • DASYLAB QUERIES on Sampling Rate and Block Size

HELP!!!! I have been dwelling on DASYLab for a few weeks over certain problems, yet I haven't come to any conclusion. I hope that someone will be able to help. Lots of thanks!
1. I need more data points, so I increase the sampling rate (SR). When the sampling rate is increased, the block size (BS) increases correspondingly.
For low sampling rates (SR < 100Hz) and a block size of 1, the recorded time in DASYLab and the real experimental time are the same. But problems start when SR > 100Hz with BS = 1: I realized that the recorded time in DASYLab differs from the real time. To solve the time difference problem, I decided to use the "AUTO" block size.
Qn 1: Is there any way to solve the time difference problem for high SR?
Qn 2: With auto block size, is the result recorded in DASYLab at a given moment the actual value, or has it been overwritten by the value from the previous block?
2. I have tried getting the result both with BS = 1 and with BS on auto. Regardless of the sampling rate, the values obtained with BS = 1 are always larger than those with auto block size. Qn 1: Which is the actual result of the test?
Qn 2: Is there a best combination of block size and sampling rate to use?
    Hope someone is able to help me with the above problem.
    Thanks-a-million!!!!!

    Generally, the DASYLab sampling rate to block size ratio should be between 2:1 and 10:1.
    If your sample rate is 1000, the block size should be 500 to no smaller than 100.
    Very large block sizes that encompass more than 1 second worth of data often cause display delays that frustrate users.
    Very small block sizes that have less than 10 ms of data cause DASYLab to bog down.
A sample rate of 100 samples/second and a block size of 1 is going to cause DASYLab to bog down.
There are many factors that contribute to performance, or lack thereof - the speed and on-board buffers of the data acquisition device; the speed, memory, and video capabilities of the computer; and the complexity of the worksheet. As a result, we cannot be more specific, other than to provide you with the rule of thumb above and suggest that you experiment with various settings, as you have done.
    Usually the only reason that you want a small block size is for closed loop control applications. My usual advice is that DASYLab control is around 1 to 10 samples/second. Much faster, and delays start to set in. If you need fast, tight control loops, there are better solutions that don't involve Microsoft Windows and DASYLab.
    Q1 - without knowing more about your hardware, I cannot answer the question, but, see above. Keep the block size ratio between 2:1 and 10:1.
Q2 - Without knowing more about your hardware and the driver, I'm not sure that I can fully answer the question. In general, the DASYLab driver instructs the DAQ device driver to program the DAQ device to a certain sampling rate and buffer size. The DASYLab driver then retrieves the data from the intermediate buffers and feeds it to the DASYLab A/D Input module. If the intermediate buffers are too small, or the sample rate exceeds the capability of the built-in buffers on the hardware, then data might be overwritten. You should have received warning or error messages from the driver.
    Q3 - See above.
    It may be that your hardware driver is not configured correctly. What DAQ device, driver, DASYLab version, and operating system are you using? How much memory do you have? How complex is your worksheet? Are you doing control?
    Have you contacted your DASYLab reseller for more help? They should know your hardware better than I do.
    - cj
    Measurement Computing (MCC) has free technical support. Visit www.mccdaq.com and click on the "Support" tab for all support options, including DASYLab.
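A quick sketch of the 2:1 to 10:1 sample-rate-to-block-size rule of thumb given above:

    def block_size_range(sample_rate):
        # Recommended block sizes sit between sample_rate/10 and sample_rate/2.
        return sample_rate / 10, sample_rate / 2

    lo, hi = block_size_range(1000)
    print(lo, hi)    # -> 100.0 500.0, matching "500 to no smaller than 100" above

Separately, keep a block between roughly 10 ms and 1 second worth of samples, per the display guidance above.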

  • RSA key and block size

    Let's say that I have an RSA key pair that has been generated in a keystore using the keytool utility.
    I am now accessing this key pair through some java code (using the Keystore class) and I want to encrypt/decrypt data using this public/private key.
In order to encrypt/decrypt arbitrary-length data, I need to know the maximum block size that I can encrypt/decrypt.
Based on my experiments, this block size seems to be the size of the key divided by 8, minus 11.
But how can I determine all that programmatically when the only thing that I have is the keystore?
I did not find a way to figure out the size of the key from the keystore (unless it can be computed from the RSA exponent or modulus, but this is where my knowledge of RSA keys stops), and I did not find a way to figure out where this "magic" number 11 is coming from.
I can always encrypt 1 byte of data and look at the size of the result. This will give me the block size, and the key size by multiplying it by 8. But it means that I always need the public key around to compute this size (I cannot do it if I have only the private key).
And this is not helping much on the number 11 side.
    Am I missing something obvious?
    Thanks.
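The number 11 is the mandatory overhead of PKCS#1 v1.5 padding: an encryption block is 0x00, 0x02, at least 8 nonzero random padding bytes, then 0x00, then the data. And since both RSAPublicKey and RSAPrivateCrtKey in the JCE expose getModulus(), the key size can be derived from either key alone. A sketch of the arithmetic (the 1024-bit modulus value is a stand-in):

    def max_plaintext_bytes(modulus):
        k = (modulus.bit_length() + 7) // 8   # key size in bytes
        return k - 11                         # PKCS#1 v1.5 padding overhead

    n = (1 << 1023) + 12345                   # hypothetical 1024-bit modulus
    print(max_plaintext_bytes(n))             # -> 117

So for a 1024-bit key the maximum block is 128 - 11 = 117 bytes, matching the keysize/8 - 11 observed in the experiment.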

It is probably a bug. A naive implementation of RSA key generation that would exhibit this bug would work as follows (I'm ignoring the encrypt and decrypt exponents intentionally):
input: an RSA modulus bit size k, where k is even
output: the RSA modulus n
k is even, so let k = 2*l
step 1: generate an l-bit prime p, with 2^(l-1) < p < 2^l
step 2: generate another l-bit prime q, with 2^(l-1) < q < 2^l
step 3: output n = p*q
Now the above might seem reasonable, but when you multiply the inequalities you get
2^(2l-2) < n < 2^(2l)
That lower bound means that n can be 1 bit smaller than you expect. The correct smallest lower bound for generating the primes p and q is 2^l / sqrt(2), rounded up to the nearest integer.
    I'll bet the IBM code implements something like the first algorithm.
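The off-by-one-bit effect is easy to reproduce with small numbers; a sketch with l = 8 (so a requested k = 16 bits):

    l = 8
    p = q = 131                   # primes in (2^(l-1), 2^l) = (128, 256)
    print((p * q).bit_length())   # -> 15: one bit short of the requested 16

    # With the corrected lower bound ceil(2^l / sqrt(2)) = 182, e.g. p = q = 191:
    p = q = 191
    print((p * q).bit_length())   # -> 16, as requested

So a "k-bit" key from such code may really be a (k-1)-bit key.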

• CentOS-based Linux VM running on Hyper-V: checking the root filesystem fails when the kernel switches from the old PV driver (paravirtualized driver based on the 2.6.32 Linux kernel) to the new PV driver (equivalent to Linux Integration Components 3.4)

    hi all,
I am running a CentOS-based VM on top of Hyper-V Server. I upgraded the Hyper-V PV drivers in the 2.6.32 Linux kernel in order to support Windows Server 2012; now I am hitting the issue below on Windows Server 2008 when the kernel switches from the old PV driver (2.6.32-based) to the new PV driver (equivalent to Linux Integration Components 3.4). I am hitting the following filesystem check error messages:
    Setting hostname hostname:
    Checking root filesystem
    fsck.ext3/dev/hda2:
    The superblock could not be read or does not describe correct ext2 filesystem. If the device is valid and it really contains an ext2
    filesystem(and not swap or ufs or something else),then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
    : No such file or directory while trying to open /dev/hda2
    *** An error occurred during the filesystem check.
    *** Dropping you to a shell; the system will reboot
    *** When you leave the shell.
Also, when I go into repair filesystem mode, I find strange behaviour when I run these commands:
    (Repair filesytem) 1 # mount
    /dev/hda2 on / type ext3 (rw)
    proc on /proc type proc (rw)
    (Repair filesystem) 1# cat /etc/mtab
    /dev/hda2 /ext3 rw 0 0
    proc /proc proc rw 0 0
    (Repair filesystem) 1# df
    Filesystem 1K-blocks used Available Use% Mountedon
    /dev/hda2 4%
I think for all the above commands there should be /dev/sda2 instead of /dev/hda2.
Also my fstab and fdisk -l look OK to me.
    (Repair filesystem) 1# cat /etc/fstab
    LABEL=/ / ext3 defaults 1 1
    LABEL=/boot /boot ext3 defaults 1 2
    devpts /dev/pts devpts gid=5,mode=620 0 0
    tmpfs /dev/shm tmpfs defaults 0 0
    proc /proc proc defaults 0 0
    sysfs /sys sysfs defaults 0 0
    LABEL=swap-xvda3 swap swap defults 0 0
    (Repair filesystem) 1# fdisk -l
    Device Boot Start End Block Id System
    /dev/sda1 * 1 49 98535 83 Linux
    Partition 1 does not end with cylinder boundary.
    /dev/sda2 49 19197 39062500 83 Linux
    Partition 2 does not end with cylinder boundary.
    /dev/sda3 ......
    Partition 3 does not ......
    /dev/sda4 ......
    Partition 4 does not end ....
    (Repair filesystem) 1# e2label /dev/sda1
    /boot
    (Repair filesystem) 1# e2label /dev/sda2
    (Repair fielsystem) 1# ls /dev/sd*
    /dev/sda /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4
    (Repair filesyatem) 1# ls /dev/hd*
    ls: /dev/hd*: No such file or directory
Kindly suggest any Windows Server configuration or kernel configs that are missing, or how to resolve this issue.
Many thanks for your reply.
Thanks & regards,
    Ujjwal

I am not able to understand the duplicate UUID, and where is it picking up /dev/hda* from?
VVM:>> Does the output of dmesg | grep ata contain the substring "Hyper-V"?
It doesn't contain "Hyper-V" or any ata-related message, and the output doesn't change with the boot parameter reserve=0x1f0,0x8.
(For the output of dmesg related to "ata" on Ubuntu v13.04 mini.iso with the boot parameter reserve=0x1f0,0x8, see below; for the "good situation" example, see later.)
Disable the legacy ATA driver by adding the following to the kernel command line in /boot/grub/menu.lst:
reserve=0x1f0,0x8
(This option reserves this I/O region and prevents ata_piix from loading.)
Output of dmesg related to "ata" on Ubuntu v13.04 mini.iso (with boot parameter reserve=0x1f0,0x8):
[ 0.176027] libata version 3.00 loaded.
[ 0.713319] ata_piix 0000:00:07.1: version 2.13
[ 0.713397] ata_piix 0000:00:07.1: device not available (can't reserve [io 0x0000-0x0007])
[ 0.713404] ata_piix: probe of 0000:00:07.1 failed with error -22
[ 0.713474] pata_acpi 0000:00:07.1: device not available (can't reserve [io 0x0000-0x0007])
[ 0.713479] pata_acpi: probe of 0000:00:07.1 failed with error -22
As a result: 1) the IDE disk is handled by hv_storvsc, but 2) there is no CD-ROM device.
~ # blkid
/dev/sda1: LABEL="ARCH_BOOT" UUID="009c2043-4b17-4f95-a14d-fb8951f95b5d" TYPE="ext2"
VVM>> Q1: Does the output of blkid contain a duplicate UUID?
-> blkid contains a duplicate UUID; see the output.
This is the classic problem: hv_storvsc is used instead of ata_piix to handle the IDE disk devices (but not for DVD-ROM / CD-ROM device handling).
For comparison, see the "good situation" example: output of dmesg related to "ata" on Ubuntu v13.04 mini.iso (without the boot parameter reserve=0x1f0,0x8):
~ # dmesg | grep ata
[ 0.167224] libata version 3.00 loaded.
[ 0.703109] ata_piix 0000:00:07.1: version 2.13
[ 0.703267] ata_piix 0000:00:07.1: Hyper-V Virtual Machine detected, ATA device ignore set
[ 0.703339] ata_piix 0000:00:07.1: setting latency timer to 64
[ 0.704968] scsi0 : ata_piix
[ 0.705713] scsi1 : ata_piix
[ 0.706191] ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0xffa0 irq 14
[ 0.706194] ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0xffa8 irq 15
[ 0.868844] ata1.00: host indicates ignore ATA devices, ignored
[ 0.869142] ata2.00: ATAPI: Virtual CD, , max MWDMA2
[ 0.871736] ata2.00: configured for MWDMA2
~ # uname -a
Linux ubuntu 3.7.0-7-generic #15-Ubuntu SMP Sat Dec 15 14:13:08 UTC 2012 x86_64 GNU/Linux
~ # lsmod
hv_netvsc 22769 0
hv_storvsc 17496 3
hv_utils 13569 0
hv_vmbus 34432 3 hv_netvsc,hv_storvsc,hv_utils
~ # blkid
/dev/sr0: LABEL="CDROM" TYPE="iso9660"
/dev/sda1: LABEL="ARCH_BOOT" UUID="009c2043-4b17-4f95-a14d-fb8951f95b5d" TYPE="ext2"
(Only the CD-ROM and one IDE disk are connected to ATA.)
Regarding the ata_piix.c patch: as far as I understand it, it ignores ATA devices on Hyper-V when the PV drivers (CONFIG_HYPERV_STORAGE=y) are enabled.
Yes: it ignores the ATA HDD (but does not ignore the ATA CD-ROM) on Hyper-V when the PV drivers (CONFIG_HYPERV_STORAGE=y) are enabled.
These patches need to be backported:
cd006086fa5d ata_piix: defer disks to the Hyper-V drivers by default
and its prerequisite
db63a4c8115a libata: add a host flag to ignore detected ATA devices
P.S.
Did you do this:
As a temporary solution, increase the size of all .vhd files connected to the IDE bus by 1-2 GB (but do not increase the size of the partitions inside the disks).
Does fsck then write a message like "no error in file system"?
2013-01-24 Answer by Ujjwal Kumar: as a temporary solution it looks OK to me, but [VVM: a true solution is needed].
P.P.S.
To Ujjwal Kumar: my e-mail is ZZZZZZZZZZZZZZZ. Please send an e-mail to me; in reply I will send you the patches to ata_piix (and the *.c files before and after the patches), etc. (Done on 2013-01-14.)

  • CD different for Linux and Windows

I have in hand a very strange CD. It is an (old) CD with drivers for a Samsung printer. The CD has drivers for Linux and Windows. But if I mount the CD in Linux, I see only the Linux drivers, and if I access the CD from Windows, only the Windows drivers are seen. How can they do that? How can I mount the CD in Linux in order to see the content seen from Windows?

    In Windows ( DIR D: )
The volume in drive D is SAMSUNG_LBP
Volume serial number is C432-A954
Directory of D:\
28/02/2005 00:53 <DIR> ACROBAT_READER
26/10/2004 06:11 740 AUTORUN.INF
28/02/2005 00:53 <DIR> DATA
28/02/2005 00:53 <DIR> ML-1610
28/02/2005 00:53 <DIR> Manual
09/12/2004 03:55 11 219 SETUP.DAT
20/09/2004 07:29 270 336 SSAuto.Dll
17/09/2004 08:03 253 952 SSEtc.dll
22/09/2004 11:09 225 280 SSFcs.dll
12/03/2004 05:59 1 622 016 SSRes.dll
17/09/2004 08:04 155 648 SSTtp.dll
26/10/2004 05:42 307 200 Setup.exe
28/02/2005 00:54 <DIR> USB
8 file(s) 2 846 391 bytes
5 dir(s) 0 bytes free
    In linux (ls -l -F /mnt/cdrom)
    total 44
    dr-xr-xr-x 22 root root 4096 Feb 28 2005 Manual/
    -r--r--r-- 1 root root 2555 Feb 28 2005 README.txt
    -r-xr-xr-x 1 root root 51 Feb 28 2005 autorun*
    dr-xr-xr-x 3 root root 2048 Feb 28 2005 bin/
    dr-xr-xr-x 8 root root 2048 Feb 28 2005 cups/
    dr-xr-xr-x 4 root root 2048 Feb 28 2005 data/
    dr-xr-xr-x 3 root root 2048 Feb 28 2005 help/
    -r--r--r-- 1 root root 8517 Feb 28 2005 icon.xpm
    dr-xr-xr-x 9 root root 2048 Feb 28 2005 locale/
    dr-xr-xr-x 2 root root 6144 Feb 28 2005 misc/
    dr-xr-xr-x 3 root root 2048 Feb 28 2005 ppd/
    dr-xr-xr-x 2 root root 2048 Feb 28 2005 scripts/
    dr-xr-xr-x 4 root root 2048 Feb 28 2005 setup.data/
    -r-xr-xr-x 1 root root 6603 Feb 28 2005 setup.sh*
Output of isoinfo -d dev=/dev/sr0 (in Linux):
    CD-ROM is in ISO 9660 format
    System id: LINUX
    Volume id: SAMSUNG_LBP
    Volume set id:
    Publisher id:
    Data preparer id:
    Application id: MKISOFS ISO 9660/HFS FILESYSTEM BUILDER & CDRECORD CD-R/DVD CREATOR (C) 1993 E.YOUNGDALE (C) 1997 J.PEARSON/J.SCHILLING
    Copyright File id:
    Abstract File id:
    Bibliographic File id:
    Volume set size is: 1
    Volume set sequence number is: 1
    Logical block size is: 2048
    Volume size is: 239152
    Joliet with UCS level 3 found.
    SUSP signatures version 1 found
    Rock Ridge signatures version 1 found
    Rock Ridge id 'RRIP_1991A'
If I use the Windows port of cdrtools on Windows, then those tools behave as in Linux, showing me the Linux contents. In Linux I can also mount -t udf, and then I see:
    total 32
    -rw-r--r-- 1 root root 32768 Feb 28 2005 Desktop DB
    -rw-r--r-- 1 root root 0 Feb 28 2005 Desktop DF
    drwxr-xr-x 1 root root 22 Feb 28 2005 Manual/
    This was for a Samsung ML-1610 B/W laser printer. This is crazy.
@lolilolicon That explains the result when I mount -t hfs. But the difference between Linux and Windows remains mysterious. How does Windows "mount" the CD? I believed it was the equivalent of mount -t iso9660 in Linux, but apparently it is not. (The isoinfo output above shows both Joliet and Rock Ridge trees on the disc; mkisofs can hide files from one tree or the other, which would explain why Windows, which reads the Joliet tree, sees different content from Linux, which prefers Rock Ridge.)

  • RAID Stripe size for HD Capture

    Hey everyone,
    So I just captured a lot of Uncompressed 8 bit 4:2:2 1080i60.
    The video alone requires ~ 118 MB/s sustained transfer speed for live capture without dropped frames.
I used our new Mac Pro. We built an internal RAID with three 250 GB drives, all striped together. I had to decide which stripe size to use. Since we have proportionally fewer files than the average RAID (200 or so at the most), and because our files are HUGE (2-40 GB), I figured a large stripe size would be appropriate. I set it to the max that Mac OS X software RAID supports, though I must confess I don't remember exactly what that was.
We had little trouble with capture, sustaining over 200 MB/s bandwidth with this setup according to the Blackmagic drive speed test. However, when the drives filled up, they (expectedly) slowed down quite a bit. I frequently ran speed tests, and when the rate got low I had to switch to shooting DVCPRO HD to avoid dropped frames.
    We had similar experiences before, when I had this same setup with the default stripe size, though it seemed overall a bit better with larger stripe sizes.
    My questions are thus:
    1) Is there any way to use a really large stripe (2MB or so) without buying a hardware RAID controller?
    2) Is my thinking correct that if I have a small number of gigantic files I should use a large stripe size?
    3) If I were to wipe out all my drives (including the 250 gig startup drive), and boot off the CD, could I tie all 4 drives together into RAID-0, and then install Mac OS X to that (and presumably partition it for a data storage volume)?
4) Does anyone know of a good 2-4 port eSATA PCI Express card that has drivers for the Mac Pro? I know Sonnet has a 2-port card, but it is one internal SATA port and one eSATA. I need at least 2 eSATA ports for it to be worth my time, because I have a 4-drive eSATA RAID box with four 320GB drives in it. I can use the motherboard's additional 2 SATA connectors through a backplate SATA -> eSATA converter, and with 2 more eSATA ports through a PCIe card, I would be able to use all 4 eSATA drives at once.
    4 ports would be even better as I would like to leave internal ports for a BD-R or HD-DVD-R (or whatever they're called) drive later on.
    Thanks!
    -Derek Prestegard
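The ~118 MB/s figure is easy to verify; a quick sketch of the arithmetic, assuming 2 bytes per pixel for 8-bit 4:2:2 and treating 1080i60 as roughly 30 full frames per second:

    width, height = 1920, 1080
    bytes_per_pixel = 2        # 8-bit 4:2:2
    frames_per_second = 30     # 60 interlaced fields

    rate = width * height * bytes_per_pixel * frames_per_second
    print(rate / 2**20)        # -> ~118.7 MiB/s sustained, before audio and overhead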

1) Is there any way to use a really large stripe (2MB or so) without buying a hardware RAID controller?
    256k is the largest block size available with Disk Utility.
2) Is my thinking correct that if I have a small number of gigantic files I should use a large stripe size?
    128k is the largest size I would ever think about using. I even use 32k in many situations. You can test it and see what works best for you. Large block sizes do not always translate into higher performance. I find more drives in the striped RAID set helps me more than larger block sizes.
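The "more drives helps more than larger block sizes" point is mostly bandwidth scaling: RAID 0 throughput grows roughly with spindle count. A sketch with a hypothetical per-disk rate:

    per_disk_mb_s = 70     # assumed sustained rate for one drive of that era
    for disks in (3, 4, 6):
        print(disks, "drives ->", disks * per_disk_mb_s, "MB/s ideal, minus overhead")

Three such disks give around 210 MB/s, in line with the 200+ MB/s the poster measured, and adding a fourth disk buys more than any stripe-size tweak.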
3) If I were to wipe out all my drives (including the 250 gig startup drive), and boot off the CD, could I tie all 4 drives together into RAID-0, and then install Mac OS X to that (and presumably partition it for a data storage volume)?
You could boot from a FW800 external and use all 4 internals for a RAID, but I think you will be happier with SATA host adapters.
4) Does anyone know of a good 2-4 port eSATA PCI Express card that has drivers for the Mac Pro?
    Here are a few options for SATA host adapters on a Mac Pro that I use. The WiebeTech Tera Card TCES0-2e SATA host adapter provides two SATA ports and works with the Mac Pro using SiI-3132 Mac drivers 1.1.6. You can see a review at amug.org:
    http://www.amug.org/amug-web/html/amug/reviews/articles/wiebetech/tces0/
    The FirmTek SeriTek/2SE2-E 2-port host adapter can be found here:
    http://www.firmtek.com/seritek/seritek-2se2-e/
    It works best on the Quad as it provides boot support and SMART drive support. On the Mac Pro you use the Firmtek cardbus 2SM2-E Mac driver until FirmTek can build new EFI drivers for the Mac Pro. No boot support is provided yet but it does pass SMART data to Mac OS X. Eventually the card will have Mac Pro boot support which will be nice.
    Both cards use the SiI-3132 controller. If you mix the WiebeTech and the FirmTek cards the Silicon Image Mac driver version 1.1.6 will take over and block the SMART data info that the FirmTek card supplies. As this is the case, I would go with one brand or the other but not mix them in the same Mac Pro.
    I have used three cards to provide six external SATA ports. You could use two cards with your external 4-bay enclosure. I would create a striped RAID with 4 external drives and two internal drives. This should provide you with the performance you need to handle 1080HD. If you want more power you could use 7 hard drives = 3 int. and 4 external.
    Have fun!

  • RAID0 Stripe size

I'm using the 3ware 8506-4LP with 4 WD 250 GB Caviar drives (7200rpm). What would be the ideal stripe size for:
-reiserfs
-reiser4
-xfs
Does the file system make a difference? What's the best stripe for Linux in general, for everyday desktop use?
My options run from 64k all the way up to 1MB.
I've read that 64k is ideal for Windows.
With the 4-way RAID, I'm getting 100MB/s in HDTach with a stripe size of 512KB (which seems a little low).
My current plan is to use 2 of the drives with a stripe size of 64k for Windows, and the other two for Linux. I'm a general desktop user: a bit of DivX encoding and lots of compiling. I'm using an AMD FX-55 with 2GB of RAM.
So what would be the ideal Linux stripe size?

    Hi,
Sun support confirmed that you can have the two halves of the mirror with different interlace sizes. This obviously is not the optimum setup, but it will allow me to detach d61 and recreate it with the correct interlace size, reattach d61 and let it resync, then detach d62 and recreate it with the correct interlace size, and finally reattach d62 and let that resync.
    Kevin....

  • Stripe size for scatch disk array

I am building a CS5/64 workstation running on Win 7/64 that will be used to edit 1-4GB images. The scratch disks will consist of a RAID 0 array using 3-4 WD 600GB 10K drives, short-stroked, on an Areca card.
What is the best stripe size for a 3-4 disk array for large images? Does Adobe publish how they read/write to the scratch disk, the block size, etc.?
    Larry

    Right click on the root of your C: drive, and choose Properties.
    Click the Hardware tab, select the drive (array) you'd like to set advanced caching on, and click the [Properties] button.
    Click the Policies tab, and note the setting of the [ ] Turn off Windows write-cache buffer flushing on the device.  This may not be available, depending on the drivers.
    Note, specifically, that this feature can cause quite a lot of disk data to end up in your RAM for a while, if an application gets significantly ahead of the drive's ability to write data.  This is where the warning about having good battery backup comes in.  I'll add my own comment:  Your system should be very stable as well.  You don't want it crashing when a lot of writes are pending.
    -Noel

  • SQL Timeouts and Blocking Locks

    SQL Timeouts and Blocking Locks
Just wanted to check in and see if anyone here has feedback on application settings, ColdFusion settings, JBoss settings, or other settings that could help limit or remove SQL timeouts and blocking locks on SIDs.
We're using MS SQL 2000 with JBoss and IIS 5.
We've been seeing the following error in our logs that starts blocking locks in SQL:
java.sql.SQLException: [newScale] [SQLServer JDBC Driver] [SQLServer] Lock request time out period exceeded.
Once this happens, we're hosed until we remove the blocking SID in SQL. These are the connections to the application.
    Any feedback would be great.  Thanks!

    Hi
    This is your exact solution:
Select a.username, a.sid, a.serial#, b.id1, c.sql_text
  From v$session a, v$lock b, v$sqltext c
 Where b.id1 in (Select distinct e.id1
                   From v$session d, v$lock e
                  Where d.lockwait = e.kaddr)
   And a.sid = b.sid
   And c.hash_value = a.sql_hash_value
   And b.request = 0;
    Thanks
    Sarju
    Oracle DBA
quote: Originally posted by I'm clueless:
"Can someone give me the SQL statement to show if there are any blocking database locks and, if so, which user is locking the database? Thanks in advance."

• Tutorial - How to triple-boot OSX, Linux and Windows 8.1 with a shared Data partition without any third-party Windows / OSX software

This is not a question, but rather a personal guide that has proven to work successfully.
I would like to thank numerous sources, including Christopher Murphy's suggestions at:
    Re: Repairing Boot Camp after creating new partition
Before proceeding, there are certain concepts you need to know:
Why does Boot Camp NOT allow further partitioning of the drive after Windows has been installed?
Answer: because of the way Apple configures the Mac to be recognized as a non-UEFI-capable system in Windows.
    Quote from Christopher Murphy based on the above line:
    However, Windows on Macs right now use CSM-BIOS mode in Mac firmware that presents BIOS to Windows rather than EFI. Windows thinks it's on a BIOS computer, and therefore mandates the use of MBR for boot disks, rather than GPT. So that's why we have this hybrid MBR+GPT approach on Mac with Windows on it. You inherit the limitations of MBR, which is four primary partitions.
So what does that mean?
It means that OSX + EFI + Recovery HD + Boot Camp partition = 4 primary partitions, and thus any attempt to modify the disk will cause booting issues in either system.
For more info on GPT (GUID Partition Table) disks vs. Master Boot Record (MBR) disks, you may visit: http://msdn.microsoft.com/en-us/library/windows/hardware/dn640535%28v=vs.85%29.aspx
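The four-primary-partition ceiling quoted above comes straight from the MBR layout: the partition table is just four 16-byte slots at byte offset 446 of the first sector. A hedged sketch that parses them (the device path is an assumption, and reading it needs root):

    import struct

    with open("/dev/sda", "rb") as disk:       # hypothetical device; requires root
        mbr = disk.read(512)

    assert mbr[510:512] == b"\x55\xaa"         # MBR boot signature

    for i in range(4):                         # exactly 4 primary slots, no more
        entry = mbr[446 + 16*i : 446 + 16*(i+1)]
        ptype = entry[4]                       # 0xEE marks a protective-GPT entry
        lba_start, sectors = struct.unpack_from("<II", entry, 8)
        print("entry %d: type=0x%02x start=%d sectors=%d" % (i, ptype, lba_start, sectors))

On a healthy GPT disk you should see a single entry of type 0xEE, which is the "Protective" MBR that step 14 below checks for with gdisk.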
    So, how to overcome it?
The general guideline is to install ALL the GPT-ready OSes first, then create a Data partition, before installing Windows (which, again, is NOT supported on GPT due to the EFI configuration by Apple, which end users cannot modify).
Interestingly, since the Late 2013 Mac Pro supports only Windows 8 and above, it is not known whether this CSM-BIOS behaviour applies to it or not.
Do take note that GPT disks in Windows can only be booted when the system meets 2 requirements:
http://msdn.microsoft.com/en-us/library/windows/hardware/dn640535%28v=vs.85%29.aspx#gpt_faq_win7_boot
1) A Windows x64 version (a must for newer Macs; if you cannot go to Boot Camp 5, then you need Windows 7 x86, the 32-bit version)
2) A UEFI system. However, Windows sees all Macs (with the possible exception of the Late 2013 Mac Pro; to be determined) as BIOS, or rather non-UEFI, systems.
In short, booting Windows from a GPT disk is not possible on a Mac.
    Summary,
    It is tested that a combination of the following will not work:
    - OSX + Windows + Linux
    - Windows + OSX + Linux
    - Windows + Linux + OSX
Usually it leaves the system un-bootable, or OSX refuses to install because the system does not recognize such partitions and/or Disk Utility refuses to format the free space. An example screenshot is provided below:
    The error message is shown as
    Title: "Failed to erase volume" Message: "Failed to wipe volume, as an error occurred: MediaKit has reported that the device does not have enough free space to execute the requested operations."
    The second thing is about the preparations we need.
    1) 1X Windows 7 or 8 DVD or USB thumbdrive
1A) If you use a DVD to install, you will need another thumbdrive to load the Boot Camp drivers for Windows, and newer Macs may require an external DVD drive
    2) 1X Linux DVD of your choice. Personally I choose Fedora 20.
    So ready? Let's go.
1. Using Disk Utility, shrink the OSX partition to the size needed. For me, I gave OSX 150GB. Do NOT create any new partition.
Disk Utility should show something like below, where only the OSX partition is left at the desired size. The remaining space is to be left as unused disk space for the moment.
Note: Click on the topmost item, which should start with the size of your HDD / SSD. Then click on "Partition" and specify the desired OSX size. Hit "Apply" after that.
    2: Download Boot Camp drivers only via Boot Camp Assistant. The USB thumbdrive shall be used later after Linux's installation.
    Boot Camp Assistant should see this:
    I have only selected "Download latest Windows Support Files from Apple"
3. Insert the Linux DVD and reboot the Mac into EFI mode (the leftmost "EFI mode").
Note 1: Before rebooting, plug in an Ethernet adapter, because the Wi-Fi drivers are not installed.
Note 2: Thunderbolt adapters must be plugged in before the reboot, as hot-swapping is not supported under Linux. More on these tips at the end of this article.
Note 3: Press and hold "Option" after the screen turns black. Release the Option key after you see the image below:

For the unfortunate parts whose images did not make it in on time:
9. Install the Windows support software from your CD/USB drive to gain full functionality of your computer. Reboot and go into Windows again.
Note 1: You may choose to eject the disc at this point. Apple SuperDrive users will need to wait until the drivers (i.e. the Boot Camp support files) are installed and the machine has rebooted before ejecting is reasonably possible (as I failed to figure out how to right-click without the drivers).
Note 2: Unlike with Windows 7 in KBase article TS4599 (Keyboard/trackpad inoperative, black screen, or alert messages when installing Windows 7), the USB stick can be plugged in after the Windows installation is done. This is because Windows 7 (and probably the Windows 7 with SP1 DVD) does not have built-in USB 3 drivers, since it was released back in 2009, before USB 3 had arrived.
    Note 3: Due to TPM, Bitlocker is not supported without the use of thumbdrives.
10. Use Disk Management to determine the drive letter given to the DATA partition (DO NOT DELETE and RECREATE the partition, or else you can say goodbye to booting Linux and OSX). Disk Management will not allow you to format it as exFAT / FAT32 graphically.
Note: You may remove or modify some of the drive letters in Disk Management. However, do NOT remove or modify the drive letter of the 200MB HFS partition. Doing so will prevent Linux from booting, and neither Windows nor OSX can do anything about it EXCEPT reinstall Linux.
11. Open Command Prompt in administrator mode (important!) and key in the following command:
format F: /FS:exFAT
Give this volume a label after it has successfully formatted, before hitting "Enter" again.
Note: My Data partition was assigned the letter F. Please adjust "F:" accordingly should your Data partition be assigned another letter.
    12. After that, Setup your Data partition structure as you like.
    Tip: Minimally create the important folders such as:
    - Music
    - Documents
    - Movie (Videos)
    - Downloads
    - Pictures
All these folders are commonly used by the 3 OSes. I do NOT recommend relocating /home (OSX and/or Linux) or the user home directory (Windows), either partially or as a whole, because of compatibility issues.
On a side note, the iTunes media libraries used in OSX and Windows are NOT interchangeable, due to the hard-coded paths they contain.
    13. Useful troubleshooting in Fedora / Linux:
    With references to these:
    http://chaidarun.com/fedora-mbp
    http://anderson.the-silvas.com/2014/02/14/fedora-20-on-a-macbook-pro-13-late-201 3-retina-display/
    http://unencumberedbyfacts.com/2013/08/16/linux-on-a-macbook-pro-101/
    I would like to highlight a few important points:
    1) Wi-Fi driver:
    http://rpmfusion.org/Configuration
Note 1: The sound driver works out of the box. However, the Wi-Fi driver does not.
Note 2: Install both the free and non-free repositories. By the way, some other software, like VLC, can only be found after the free repository is installed.
Search for "akmod-wl" in Gnome-Package-Installer in order to install the Wi-Fi drivers.
Note 3: For those who do not have an Ethernet adapter and whose Mac does NOT have a built-in Ethernet port, it is recommended to get one, because Fedora 20 does not have good support for iPhone USB tethering. Unsure for Android / BlackBerry / Windows Phone users.
    2) Grub Menu:
It will show several options to boot into OSX, even claiming the capability to boot into x86 or x64 mode. However, none of them is bootable except Linux and the rescue entry.
    Hence, it is recommended to remove the items by hand in this file:
    /boot/efi/EFI/fedora/grub.cfg
    Command to be used:
    "sudo gedit /boot/efi/EFI/fedora/grub.cfg"
    Parts to be removed:
- For any extra kernels, delete the target entry by locating its "menuentry" line under the "/etc/grub.d/10_linux" section, down to one line above the next "menuentry".
It is recommended to keep one main kernel and one recovery entry at a minimum.
- For the other OSes, delete all the entries (since none of them works) under the "/etc/grub.d/30_os-prober" section, without removing the lines that start with ###.
Auto-mounting the exFAT partition:
- After installing the extra packages for exFAT support (it is not supported by a default Fedora 20 installation), you may wish to edit "/etc/fstab" in order to mount the exFAT partition at boot time.
    Command to be used:
    "sudo gedit /etc/fstab"
    Add the following line in gedit:
    UUID=702D-912D /run/media/Samuel/DATA                   exfat    defaults        1 2
Note 1: For the DATA, OSX and Boot Camp partitions, Fedora by default mounts under "/run/media/<username, case sensitive>/<partition label name>".
    Note 2: UUID is unique ID. You can find out the UUID by:
    Step 1: First determine the DATA partition number:
    "sudo gdisk /dev/sda"
    Step 2: Determine the UUID of this partition number:
    "sudo blkid /dev/sda8"
    Reference 1: http://manpages.courier-mta.org/htmlman5/fstab.5.html
    Reference 2: http://liquidat.wordpress.com/2007/10/15/short-tip-get-uuid-of-hard-disks/
    3) Overheating CPU
    Solution is to issue the following command in Linux terminal: su -c "echo -n 1 > /sys/devices/system/cpu/intel_pstate/no_turbo"
    4) System resumes immediately after suspend
    Solution is to issue the following command in Linux terminal: su -c "echo XHC1 > /proc/acpi/wakeup"
5) What does not work well out of the box:
- Both GNOME's and KDE's fonts are too small to be readable out of the box. Additional configuration is needed. (Some of the info can be found under "More Tips" later.)
- Thunderbolt hotplugging is NOT supported under Windows and Linux so far. Neither does the FaceTime HD camera work.
- The red light in the headphone jack is always on. I have had no luck switching the light off without losing the sound.
Note 1: It is determined that the module "snd_hda_intel" is used by both cards (HDMI and normal output).
Note 2: It is also known that blacklisting it switches off the red light at the price of muting the system.
Note: Based on this article, http://support.apple.com/kb/TS1574,
a Mac (except the Mac Pro) needs servicing when there is a red light and the system fails to detect the internal speakers. However, that article does NOT apply to this issue.
    5A) More Tips:
    Install gnome-tweak-tool for more customization
    Search for: "gnome-package" to install:
    Install Gnome Package Installer for advanced package repository
    Install Gnome Package Updater for advanced updates to be install (Whereby Fedora's App Store alike might not show the relevant updates)
14. Verify that the disk is still GPT:
Use gdisk to determine whether the disk is pure GPT:
http://ubuntuforums.org/showthread.php?t=1742682
Command: sudo gdisk -l /dev/sda (the entire hard drive)
You should see that the MBR is "Protective" instead of anything else.
15. Congrats, the system is ready for triple boot. (I forgot to eject my Windows DVD when the photo was taken.)
Note 1: You cannot set the default startup disk in Linux, due to the lack of a Boot Camp control panel for Linux.
Neither is changing the startup disk recommended in Windows, due to its inability to display the options correctly.
For me, I click "Cancel" whenever I am on that tab (feel free to make other Boot Camp adjustments in the other tabs).
Only OSX, as far as I know, shows the startup disk options correctly.
Note 2: For some reason, OSX likes to auto-mount the EFI partition every time it boots. This is not known to cause any issue with ejecting other disks or mounting disks via Disk Utility.
Note 3: It is not determined whether firmware or system upgrades will cause issues. It is only known that regular updates of all 3 OSes should not be an issue.
"System updates" here means things like OSX 10.9.3 to 10.9.4 (I did this on an OSX 10.9.4 Mac) or Windows 8.1 to Windows 8.1 Update 1 (my Windows DVD comes with Update 1).
"System upgrades" refers to OSX Mavericks to Yosemite, Fedora 20 to Fedora 21, or Windows 8.1 Update 1 to Windows 8.2 / Windows 9, for that matter.
Note 4: Resetting the SMC and/or PRAM will NOT affect your ability to boot any of the OSes (OSX, Recovery HD, Fedora & Windows 8).
    Yup, that is it!
