Drives, block size and Raptor 300GB choice

Hi,
Got a MacPro2,1 ('07 flavour) and was planning on upgrading some internal drives. My boot volume currently uses two striped WD 500GBs with a block size of 16k. If I were to replace these with newer drives and change the block size to 32k, would there be any issues to speak of? I'm thinking of SuperDuper backups, Adobe CS3 licensing, Time Machine doing a whole-boot update, etc.
Alternatively I may go for a pared-down boot volume and use one Raptor 300GB, but which one?
WD3000BLFS / WD3000HLFS / WD3000GLFS? The HLFS looks like the one with conventionally placed SATA connectors, but I'm unsure which fits the Mac Pro sleds. Also, is there a link for further isolation solutions, e.g. vibration dampeners?
Do the RAID Edition WD drives still cut the mustard in terms of performance (1TB Western Digital WD1002FBYS RE3, SATA 3Gb/s, 7200 rpm, 32MB cache, 4.20 ms)?
Many thanks
J

16k used to be slightly better for a boot drive. The trouble with Apple's software RAID is that I don't know how to change the block size on the fly the way I can with SoftRAID.
You probably want 4 WD VelociRaptors for scratch, or 3 SSDs.
Any WD Black or RE3 should be just fine, 500GB up to 1TB, and then you get into the 2TB Green RE4 - yes, an RE edition of the Green drive.
The other factor is that you want even more than 16GB of RAM to be used as cache for the primary 'scratch'.
There is a guide to Photoshop acceleration and optimizing up on http://www.macgurus.com - lower left side panel of links to articles.
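
Since Apple's software RAID can't change the stripe block size in place, the usual route is to clone the boot set, destroy it, and recreate the stripe. A minimal sketch with diskutil is below; as far as I know the stripe block size is only selectable in Disk Utility's RAID options pane, not from the command line, so the create command here uses the default. The set name and the disk1/disk2 identifiers are placeholders for your own.

# Inspect the current RAID set (members, status).
diskutil appleRAID list

# Clone the boot volume to a spare disk first (SuperDuper, or asr):
#   sudo asr restore --source /Volumes/Boot --target /Volumes/Spare --erase

# Recreate the stripe with the new drives.
diskutil appleRAID create stripe BootRAID JHFS+ disk1 disk2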

Similar Messages

  • ZFS boot block size and UFS boot block size

    Hi,
    In the Solaris UFS file system, the boot block resides in sectors 1 to 15; each sector is 512 bytes, which makes 7680 bytes in total:
    bash-3.2# pwd
    /usr/platform/SUNW,SPARC-Enterprise-T5220/lib/fs/ufs
    bash-3.2# ls -ltr
    total 16
    -r--r--r--   1 root     sys        7680 Sep 21  2008 bootblk
    For the ZFS file system, the boot block size is:
    bash-3.2# pwd
    /usr/platform/SUNW,SPARC-Enterprise-T5220/lib/fs/zfs
    bash-3.2# ls -ltr
    total 32
    -r--r--r--   1 root     sys        15872 Jan 11  2013 bootblk
    When we install the ZFS bootblk on disk using the installboot command, how many sectors will it use to write the bootblk?
    Thanks,
    SriKanth Muvva

    Thanks for your reply.
    My query is: the ZFS boot block size is 16K, but sectors 1 to 15 on disk (where the boot block gets installed) add up to only about 8K. Does that mean that of the 16K, only 8K is written to disk?
    If you don't mind, will you please explain this in depth?
    I am referring to the doc for UFS, page 108, 'Kernel Bootstrap and Initialization' (it's old and it's for Solaris 8):
    http://books.google.co.in/books?id=r_cecYD4AKkC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false
    Please help me find a doc on kernel bootstrap and initialization for Solaris 10 with ZFS and the boot archive.
    Thanks in advance .
    Srikanth
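
    For what it's worth, the sector arithmetic can be checked directly: the 15,872-byte ZFS bootblk needs 15872 / 512 = 31 sectors, so it cannot fit in sectors 1-15 the way the 7,680-byte UFS bootblk does; as I understand it, ZFS simply reserves a larger boot region in its on-disk layout. A sketch on SPARC Solaris 10 (the slice c0t0d0s0 is a placeholder):

    # Size of the ZFS bootblk and the 512-byte sectors it needs.
    ls -l /usr/platform/`uname -i`/lib/fs/zfs/bootblk
    echo "15872 / 512 = `expr 15872 / 512` sectors"    # 31, not 15

    # Install the ZFS boot block on the boot slice (placeholder device).
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0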

  • Oracle block size and operating system cluster size

    Hello,
    I would like to move our Oracle 10.2 database from a server in a datacenter to a VMware environment, where the net admin will install 64-bit Windows 2008.
    Now, the net admin strongly believes that if Oracle's block size is close to the cluster size of the cooked filesystem, Oracle performance will increase, and he wants to set Oracle's block size to 64K.
    Is that a right idea or a mistake?
    Thanks in advance to anyone who answers.

    Dan_58 wrote:
    Hello,
    I would like to move our Oracle 10.2 database from a server in a datacenter to a VMware environment, where the net admin will install 64-bit Windows 2008.
    Now, the net admin strongly believes that if Oracle's block size is close to the cluster size of the cooked filesystem, Oracle performance will increase, and he wants to set Oracle's block size to 64K.
    Is that a right idea or a mistake?
    I think he's probably had more experience with SQL Server - which is more closely integrated with the operating system, which means this type of thinking can be of some help. SQL Server has a fixed extent size of 64K (8 x 8KB pages) and an algorithm that allows it to use readahead on extents, so it's fairly common practice to synchronise the SQL Server extents with the operating system allocation unit - which is why, historically, SQL Server admins would set up the O/S with a 64KB allocation unit and fuss about aligning the allocation unit properly on the hardware.
    This type of thinking is not quite so important in Oracle systems.
    Regards
    Jonathan Lewis
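
    If you want to see the two numbers the net admin is trying to match, both are quick to check. On the Windows side, "fsutil fsinfo ntfsinfo C:" reports Bytes Per Cluster; on the Oracle side, a sketch from SQL*Plus:

    -- The database block size is fixed at creation time.
    SHOW PARAMETER db_block_size

    -- Block size per tablespace (non-standard sizes show up here too).
    SELECT tablespace_name, block_size FROM dba_tablespaces;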

  • Do tablespace block size and database block size have different meanings?

    At the time of database creation we can define the database block size, which cannot be changed afterwards. For a tablespace, we can also define a block size, which may be the same as or different from the one defined at database creation. If it is different from the block size set at creation time, then what is the actual block size used by the Oracle database?
    Can anyone explain in detail?
    Thanks in Advance
    Regards

    You can't meaningfully name things when there's nothing to compare and contrast them with. If there is no keep or recycle cache, then whilst I can't stop you saying, 'I only have a default cache'... well, you've really just got a cache. By definition, it's the default, because it's the only thing you've got! Saying it's "the default cache" is simply linguistically redundant!
    So if you want to say that, when you set db_nk_cache_size, you are creating a 'default cache for nK blocks', be my guest. But since there's no other bits of nk cache floating around the place (of the same value of n, that is) to be used as an alternate, the designation 'default' is pointless.
    Of course, at some point Oracle may introduce keep and recycle caches for non-standard block sizes, and then the differentiator 'default' becomes meaningful. But it isn't yet.
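
    To make the mechanics concrete: an object created in a tablespace with a non-standard block size is stored and cached in that tablespace's block size, and the matching db_nk_cache_size must be set first. A sketch, run from SQL*Plus as a DBA (datafile path and sizes are placeholders):

    -- A 16K cache must exist before a 16K tablespace can be created
    -- (assuming the database default block size is 8K).
    ALTER SYSTEM SET db_16k_cache_size = 64M;

    -- Objects placed here use 16K blocks regardless of db_block_size.
    CREATE TABLESPACE ts16k
        DATAFILE '/u01/oradata/ORCL/ts16k01.dbf' SIZE 100M
        BLOCKSIZE 16K;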

  • Database block size and swap

    Hi,
    I changed the db_block_size from 4k (Oracle 9.2.0.1) to 8k on my Solaris machine. Swap use has increased dramatically. I am wondering: is there any relation between db_block_size and swap?

    Duplicate of the "Database Block Size" thread; please check our replies in the other post.

  • DASYLab queries on sampling rate and block size

    HELP!!!! I have been dwelling on some DASYLab problems for a few weeks but haven't come to any conclusion. I hope someone will be able to help. Lots of thanks!
    1. I need more data points, so I increase the sampling rate (SR). When the sampling rate is increased, the block size (BS) increases correspondingly.
    For a low sampling rate (SR < 100 Hz) and a block size of 1, the recorded time in DASYLab and the real experimental time are the same. But problems start when SR > 100 Hz with BS = 1: I realized that the recorded time in DASYLab differs from the real time. To solve the time-difference problem, I've decided to use "AUTO" block size.
    Qn 1: Is there any way to solve the time-difference problem at high SR?
    Qn 2: With Auto block size, is the recorded result at a given moment the actual value, or has it been overwritten by the value from the previous block?
    2. I've tried getting results both with BS = 1 and with BS on Auto. Regardless of the sampling rate, the values with BS = 1 are always larger than those with Auto block size. Qn 1: Which is the actual result of the test?
    Qn 2: Is there a best combination of block size and sampling rate to use?
    I hope someone is able to help me with the above problems.
    Thanks a million!

    Generally, the DASYLab sampling rate to block size ratio should be between 2:1 and 10:1.
    If your sample rate is 1000, the block size should be no larger than 500 and no smaller than 100.
    Very large block sizes that encompass more than 1 second worth of data often cause display delays that frustrate users.
    Very small block sizes that have less than 10 ms of data cause DASYLab to bog down.
    Sample rate of 100 samples / second and a block size of 1 is going to cause DASYLab to bog down.
    There are many factors that contribute to performance, or lack thereof: the speed and on-board buffers of the data acquisition device; the speed, memory, and video capabilities of the computer; and the complexity of the worksheet. As a result, we cannot be more specific, other than to provide you with the rule of thumb above and suggest that you experiment with various settings, as you have done.
    Usually the only reason that you want a small block size is for closed loop control applications. My usual advice is that DASYLab control is around 1 to 10 samples/second. Much faster, and delays start to set in. If you need fast, tight control loops, there are better solutions that don't involve Microsoft Windows and DASYLab.
    Q1 - without knowing more about your hardware, I cannot answer the question, but, see above. Keep the block size ratio between 2:1 and 10:1.
    Q2 - Without knowing more about your hardware and the driver, I'm not sure that I can fully answer the question. In general, the DASYLab driver instructs the DAQ device driver to program the DAQ device to a certain sampling rate and buffer size. The DASYLab driver then retrieves the data from the intermediate buffers and feeds it to the DASYLab A/D Input module. If the intermediate buffers are too small, or the sample rate exceeds the capability of the built-in buffers on the hardware, then data might be overwritten. You should have received warning or error messages from the driver.
    Q3 - See above.
    It may be that your hardware driver is not configured correctly. What DAQ device, driver, DASYLab version, and operating system are you using? How much memory do you have? How complex is your worksheet? Are you doing control?
    Have you contacted your DASYLab reseller for more help? They should know your hardware better than I do.
    - cj
    Measurement Computing (MCC) has free technical support. Visit www.mccdaq.com and click on the "Support" tab for all support options, including DASYLab.
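
    The 2:1 to 10:1 rule above is easy to mechanize; a small sketch of the arithmetic (plain Bourne shell, nothing DASYLab-specific):

    #!/bin/sh
    # Block-size range implied by the 2:1 to 10:1 rate-to-block-size rule.
    SR=${1:-1000}                 # samples per second
    BS_MAX=`expr $SR / 2`         # 2:1  -> largest recommended block
    BS_MIN=`expr $SR / 10`        # 10:1 -> smallest recommended block
    echo "Sample rate $SR Hz: use a block size between $BS_MIN and $BS_MAX"
    echo "(each block then holds 100 to 500 ms worth of data)"

    For a sample rate of 1000 this prints the 100-to-500 range quoted above.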

  • Buffer pool keep and multiple db block sizes

    I have a tablespace with an 8k block size (the database default) and a tablespace with a 16k block size. I have db_cache_size and db_16k_cache_size set (obviously).
    I also have a KEEP buffer cache set in the database.
    Question: if a table is placed in the tablespace with the 16k block size and its buffer pool is KEEP, does it end up in the KEEP pool (like tables from the 8k tablespace with the KEEP pool set), or does it end up in the 16k buffer?

    You can find the answer in the following online manual:
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm#i16408
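
    As a quick way to see where things actually land, you can list the pools that exist and check which pool a segment is assigned to; a sketch using the standard dictionary views (the table name is a placeholder):

    -- KEEP and RECYCLE pools exist only for the standard block size;
    -- each non-standard block size gets a single pool of its own.
    SELECT name, block_size, current_size FROM v$buffer_pool;

    -- The pool a given segment is assigned to.
    SELECT segment_name, tablespace_name, buffer_pool
    FROM   dba_segments
    WHERE  segment_name = 'MY_TABLE';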

  • Best Block Size in Raid for Photo files

    I am setting up my two-drive striped RAID 0 and came to a screeching halt at the RAID block size.
    This RAID is strictly for photo scans and PS CS2 photo files, mostly high-res, some medium JPEGs.
    Adobe says PS CS2's default block size is 64K, if I can believe the technical support guy, who said it off the top of his head after not understanding what I was talking about.
    Apple tech support at first knew nothing about it. Then, after checking all over for quite some time, they said 32K is adequate for what I am doing but 64K is alright. In other words, they said nothing.
    One scan file size that I just checked is 135.2MB, another 134.6MB, and that is typical. JPEGs are, of course, smaller, ca. 284KB. Photos from the Canon EOS-1Ds Mk II run 9MB up to 200MB after processing. No other types of files will be on this drive.
    What would be the ideal block size for my purpose, and why?
    Thanks much,
    Mark

    The default 32K is for the small random I/O pattern of a server. Use 128/256K for audio and video files, and 64K for workstation use.
    The larger block size gives the best performance for sequential I/O. Someone mentioned an AMUG review of CS2 tests that showed 64K performing best for this kind of work.
    Because this is probably a scratch volume, you could always test for yourself, rebuild the RAID later, and try a different scheme. Sometimes that is the best way to match your drives, your workflow, and your system. There are a couple of CS2 scripts and benchmark utilities to help get an idea of how long each step or operation takes.
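
    If you do test for yourself, a crude first pass needs nothing more than dd to compare sequential throughput, which is the access pattern these large scan files produce. A sketch (the volume path is a placeholder; the read-back may be served from cache, so use a test file larger than RAM for honest numbers, and rerun after rebuilding the RAID at each candidate block size):

    #!/bin/sh
    # Rough sequential write/read throughput test on the RAID volume.
    VOL=/Volumes/PhotoRAID      # placeholder path

    dd if=/dev/zero of="$VOL/testfile" bs=1m count=1024   # write ~1GB
    dd if="$VOL/testfile" of=/dev/null bs=1m              # read it back
    rm "$VOL/testfile"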

  • What is Optimal Block Size?

    Given the technological improvements in hardware over the past several years, should we stick with the 8 to 100 KB block size recommended in the DBAG? I would think we can go higher.
    But how high is too high?
    We are designing a database, and if I stick with Accounts (dense) and Time (dense) I will have a block size of around 700 KB, and this may grow in the future if more accounts are added.
    What is your recommendation, and what are its pros and cons?
    Server Config:
    OS: Sun Solaris 10
    RAM: 32 GB
    CPUs: 4
    Thanks in advance for your ideas and answers!
    Venu

    If you're in a Planning environment, you really have to benchmark form performance when Accounts and Time are the axes if they're not dense -- sometimes pulling multiple blocks per form is okay, sometimes not. And in turn that performance drives what goes into the block, and thus the block size.
    The "right" answer is based on perspective -- is it calc time or retrieve time that is the performance measure, or both? Where you fall on that multidimensional continuum determines block size. This is the art part of sizing.
    Regards,
    Cameron Lackpour
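
    For reference, the arithmetic behind a figure like 700K: a BSO block holds one 8-byte cell for every combination of stored members across the dense dimensions, so block size = (product of stored dense members) x 8 bytes. A sketch with made-up member counts (not Venu's actual outline):

    #!/bin/sh
    # Essbase BSO block size = stored dense members multiplied out x 8 bytes.
    ACCOUNTS=400      # hypothetical stored members in dense Accounts
    TIME=224          # hypothetical stored members in dense Time
    BYTES=`expr $ACCOUNTS \* $TIME \* 8`
    echo "Block size: $BYTES bytes (`expr $BYTES / 1024` KB)"
    # 400 x 224 x 8 = 716,800 bytes = 700 KB; more accounts grow it linearly.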

  • 8k block size upgrade

    Hello All,
    Our team is quickly approaching an 8k block size upgrade of a fairly large production database (800 GB). This is one step of several intended to improve the performance of a long-running (44-hour) batch cycle.
    My concern is the following, and I am hoping people can tell me whether it should or shouldn't be a concern. We do not have anywhere at the moment to test this upgrade. My fear is that Oracle may in some cases alter the execution plans for the queries in our batch due to the new, larger block size and make bad choices. I am afraid it may choose a table scan instead of an index scan (for example) at the wrong time and cause our batch to run much longer than normal. While we can resolve such issues, having them appear in production the first time we run on 8k would be a big problem. I might be able to deal with a problem here or there, but several issues could cause us to miss our service level agreement.
    Should I be concerned about this with a 4k to 8k upgrade? Should I cancel the upgrade for now? -- Oracle 8i.
    Is there anything else that should keep me up at night with this upgrade?
    Thanks,
    Jeff Vacha

    If all you are doing in this upgrade is changing the database block size, and you don't already have compelling evidence that changing the block size is going to improve performance, I wouldn't bother. Changing the database block size is a lot of work and very rarely has a noticeable impact.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
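
    If the worry is specifically about plan changes, one low-tech safeguard that works even on 8i is to capture EXPLAIN PLAN output for the critical batch queries before the change and diff it afterwards. A sketch (the query is a placeholder; PLAN_TABLE comes from @?/rdbms/admin/utlxplan.sql):

    EXPLAIN PLAN SET STATEMENT_ID = 'batch_q1' FOR
    SELECT * FROM orders WHERE order_date > SYSDATE - 30;

    -- 8i-era display query (utlxpls.sql works too); spool this before
    -- and after the block-size change and compare the two files.
    SELECT LPAD(' ', 2*LEVEL) || operation || ' ' || options || ' ' || object_name AS plan
    FROM   plan_table
    WHERE  statement_id = 'batch_q1'
    CONNECT BY PRIOR id = parent_id
    START WITH id = 0;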

  • How to view images full size and then import selected ones to MacBook

    I want to view my images stored on an external drive at full size and download selected ones to my MacBook Pro.

    Use a photo viewer like Preview (this has nothing to do with iPhoto)
    LN

  • Will block size affect calc script performance?

    Hi Experts,
    I have a cube called RCI_LA:RCI_LA. I have created calc scripts and they work, but those calc scripts are taking much more time than expected (normally they should not take more than 15 minutes, but some are taking nearly an hour or more).
    In the database properties I found that the block size is 155,896 B, i.e. about 152 KB, whereas it should be 8 to 100 KB; block density is 0.72%.
    If the block size exceeds 100 KB, will it impact calc script performance?
    I think the answer to the above question is "yes". In that case, what should I do to improve calc script performance?
    Could you please share your experience here to help me out of this problem?
    Thanks in advance.
    Ram

    I believe Sandeep was trying to say "Dynamic" rather than "Intelligent".
    The ideal block size is a factor in all calcs, but the contributing reasons are many (The main three are CPU caching, Data I/O overhead, Index I/O overhead).
    Generally speaking, the ideal block size is achieved when you can minimize the combination of data I/O overhead and index I/O overhead. For this reason a block size that is too large will incur too much data I/O, while a block size that is too small will incur too much index I/O. If your index file is small, increasing your block size may help; although the commonly acceptable block size is between 8K and 64K, this is just a guideline.
    In other words, if you test it with something right in the middle and your index file is tiny, you might want to test it with a smaller block size. If your index file is very large (i.e. 400 MB or more), you may want to increase the block size and retest.
    Ways to increase/decrease it are also many. Obviously, changing the dense/sparse settings is the main way, but there are considerations that make this a touchy process. Another way is to use Dynamic Calc members in the dense dimensions. I say start at the top of your smallest dense dimension, and keep the number of dimensions that you use D-C on limited. Using D-C members in a dense dimension does NOT increase the index file, so it could be considered a "free" reduction in block size -- the penalty is paid on the retrieve side (there is no free ride).
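
    To put numbers on that: the stored block is (stored dense cells) x 8 bytes, so tagging dense members Dynamic Calc shrinks it in direct proportion. A hypothetical sketch against Ram's 155,896-byte block (the 40% figure is invented purely for illustration):

    #!/bin/sh
    CELLS=`expr 155896 / 8`       # 19,487 stored dense cells today
    echo "Stored dense cells: $CELLS"
    # If Dynamic Calc tags removed 40% of the stored dense combinations:
    NEW=`expr $CELLS \* 60 / 100`
    echo "After tagging: $NEW cells = `expr $NEW \* 8` bytes (`expr $NEW \* 8 / 1024` KB)"
    # ~93,536 bytes (~91 KB) -- back inside the 8-100 KB guideline.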

  • Tablespace - Block size

    Is it possible to have tablespaces with different block sizes in a given database? If so, what is the advantage/usage? Thanks!

    Possible? Yes.
    A smart thing to do? In almost every conceivable case it is a terrible idea.
    Unfortunately our profession has a few self-anointed talking heads promoting the idea, but as a beta tester I can tell you that Oracle develops ONLY using 8K blocks, and beta testing, in almost every case, is done with 8K blocks. If there are bugs related to other block sizes, and they are found from time to time, you should not be surprised. Additionally, it can make a bit of a mess of memory management.
    The only two cases where I would ever consider using block sizes other than 8K are with Advanced Compression in 11g where there are some advantages to 32K blocks and in Real Application Clusters where there are some possible advantages in improving node affinity with 2K or 4K blocks.
    The experts on this subject are Oak Table members and ACE Directors Jonathan Lewis and Richard Foote. I would advise you to google them for their comments if you wish to investigate further.

  • Database block size vs. tablespace blocksize

    Dear colleagues,
    The block size of my database is 8192 (db_block_size=8192), and all the tablespaces I have use the same block size. Now I want to create a new tablespace with a different block size, and when I do so I receive the following error:
    ORA-29339: tablespace block size 4096 does not match configured block size
    Please help me solve this problem.
    Cheers,
    Asif

    $ oerr ora 29339
    29339, 00000, "tablespace block size %s does not match configured block sizes"
    // *Cause:  The block size of the tablespace to be plugged in or
    //          created does not match the block sizes configured in the
    //          database.
    // *Action:Configure the appropriate cache for the block size of this
    //         tablespace using one of the various (db_2k_cache_size,
    //         db_4k_cache_size, db_8k_cache_size, db_16k_cache_size,
    //         db_32K_cache_size) parameters.
    $
    You have to configure db_4k_cache_size to a value different from zero. See http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams037.htm#sthref161
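
    Concretely, the sequence that clears ORA-29339 looks like this; a sketch run as SYSDBA, with the datafile path and sizes as placeholders:

    -- Give the instance a cache for 4K blocks first ...
    ALTER SYSTEM SET db_4k_cache_size = 16M;

    -- ... then the 4K tablespace can coexist with the 8K default.
    CREATE TABLESPACE ts4k
        DATAFILE '/u01/oradata/ORCL/ts4k01.dbf' SIZE 100M
        BLOCKSIZE 4K;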

  • Multi Block Size

    Oracle 10.2.0.4:
    We are creating a tablespace with a 32K block size to hold a BLOB table. I need some suggestions on the following:
    1. Currently our DB_CACHE_SIZE is set to 0. Do I need to set DB_nk_CACHE_SIZE to 0 as well? Would Oracle auto-tune this parameter if I set it to 0? What's preferable?
    2. If I enable CACHE on the LOB column, would it affect other online users? I am assuming there will be a separate buffer pool for the 32K block size, which will probably not affect online users.

    1. Say your DB_BLOCK_SIZE (for the LOB segment tablespace) is 8KB, and your CHUNK size is 32KB.
    Say you insert a LOB of 10KB: Oracle's write will be one 32KB chunk.
    Say you insert a LOB of 100KB: Oracle's writes will be four 32KB chunks.
    2. For a normal table, multiple rows will fit into a block. The "free" space is initial reserved based on PCTFREE. However, actual usage will vary based on the pattern of INSERTs and DELETEs. ASSM manages the candidacy of a block for new rows based on the free space.
    3. Before you create a 32KB tablespace, you have to allocate a 32K cache. That requires a restart!
    4. If you use an SPFILE, the parameter can be "unset" by using the
    "ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';" command.
    TEST TEST TEST !!
    You must test the impact of 32KB blocks on your LOBs and other operations. Note that this means you'd be setting up a separate cache. Whatever space the LOB was using in the DEFAULT cache earlier is now "released" to other tables/indexes etc. However, if the new 32KB cache is much smaller than the space the LOB used to take in the DEFAULT cache, then LOB operations may be slower!
    Also, your I/Os are obviously now larger, with the larger CHUNK size.
    TEST TEST TEST !!
    Test for the impact of unsetting db_file_multiblock_read_count and relying on SYSTEM statistics.
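
    A sketch tying the pieces above together (32K cache, 32K tablespace, cached LOB); all names, paths and sizes are placeholders:

    ALTER SYSTEM SET db_32k_cache_size = 128M;

    CREATE TABLESPACE ts32k
        DATAFILE '/u01/oradata/ORCL/ts32k01.dbf' SIZE 1G
        BLOCKSIZE 32K;

    -- CACHE routes LOB reads through the 32K buffer pool instead of
    -- direct path, so other sessions' DEFAULT pool is not disturbed.
    CREATE TABLE docs (
        id   NUMBER PRIMARY KEY,
        body BLOB
    )
    LOB (body) STORE AS (
        TABLESPACE ts32k
        CHUNK 32768
        CACHE
    );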
