SSD erase block size

I've just had a new T410 delivered with a 128 GB solid state drive. Linux reports the drive model as a "SAMSUNG MMCRE28G". Can someone tell me what the erase block size is, please, so I can get things aligned properly? I can't find any mention of this drive at all, let alone its specs, on Samsung's rather useless website. It's easy enough to align the partitions on 1 MiB multiples to be safe, but I'm not sure what "stripe-width" to use to tell the ext4 filesystem to align files appropriately. It's too bad this basic information doesn't seem to be published anywhere...
Thanks!
James.
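For what it's worth, once the erase block size is known the ext4 numbers follow directly: stride and stripe-width are just the erase block divided by the filesystem block size. A minimal sketch, assuming a 512 KiB erase block purely for illustration (not a published figure for the MMCRE28G; /dev/sda2 is a placeholder):

    # 512 KiB (assumed) erase block / 4 KiB ext4 block = 128 filesystem blocks
    mkfs.ext4 -b 4096 -E stride=128,stripe-width=128 /dev/sda2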

Similar Messages

  • Really still need EBS (Erase Block Size) for partitioning an SSD?

    The Arch wiki mentions that when partitioning an SSD you need to align the partitions to the erase block size (see https://wiki.archlinux.org/index.php/Pa … ate_drives ). There are a lot of how-to-partition-an-SSD-under-Linux guides that also mention this value, and even an online calculator.
    Those values are known for a lot of the serious models, among them the Samsung 840 Pro and EVO. As I already asked in an older thread, I would like to know the value for the current model, the Samsung 850 Pro. Unfortunately Samsung Germany's support refuses to provide me with those values.
    Therefore I ask myself: is it still necessary to align partitions to a multiple of the EBS, or is this unnecessary if the device supports TRIM?
    Thanks!

    Did you read the section immediately after the one you linked? https://wiki.archlinux.org/index.php/Pa … ng_tools_2
    In the past, proper alignment required manual calculation and intervention when partitioning. Many of the common partitioning tools now handle partition alignment automatically:
    To verify a partition is aligned, query it using /usr/bin/blockdev as shown below; if '0' is returned, the partition is aligned.
    Is that not sufficient for your purposes?
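    For example (a sketch; /dev/sda1 is a placeholder for whichever partition you created):
    blockdev --getalignoff /dev/sda1    # prints the alignment offset in bytes; 0 means aligned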
    Last edited by 2ManyDogs (2014-11-04 18:55:37)

  • SSD RAID 0 block size?

    I just installed two OCZ Vertex 2 60 GB SSDs into my Mac Pro and I'm about to set them up as a RAID 0 array, but I'm wondering what the best setting for "block size" is on these? The volume will be used primarily for system/apps/Lightroom catalog. Thanks!

    I've seen this same question here a few times, and the answer is almost always "leave it at the default".
    From what I've read, nobody could ever notice the difference between the different sizes.
    I have a RAID 0 with 2 WD Black drives (1 TB each) at the standard block size (32K, I think - I can't remember exactly how it's measured), I use it for PS, Aperture etc., and it is lightning fast.
    Regards

  • Anyone using an SSD? Did you align partitions to erase blocks?

    I'm waiting on the 2nd gen Intel SSDs to copy my / and /home to the faster I/O device.  In the meantime, I'm reading up on these things and have found numerous sources that suggest aligning partitions on SSD's to the erase block of the device.  I found two major sources detailing the process.  This one shows a setup using parted and explains the process.
    I'm just posting to solicit feedback from the community on this issue.  Does Aloisio have it right in that blog post?

    Thanks for the link... most of that went over my head, though. I'm not too sure about the whole 'tricky offset' for /boot that you described.
    My partition scheme for the 80 GB SSD will be pretty simplistic (a possible parted layout is sketched below):
    20 gigs of NTFS as partition #1
    15 gigs of ext4 as partition #2
    200 meg of ext3 for /boot as partition #3
    rest of the drive for /home as partition #4
    I'll keep /var and /data partitions on my HDD.
    Glad to hear more comments about this so I can do it right the first time
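    Something like this would lay that scheme out on 1 MiB boundaries, which covers any plausible erase block size (a sketch only - /dev/sdb and the exact sizes are placeholders, and the fs-type arguments are just partition-table hints; the real filesystems still get made with mkfs/Windows afterwards):
    parted -s /dev/sdb mklabel msdos
    parted -s /dev/sdb mkpart primary ntfs     1MiB 20481MiB    # ~20 GiB for Windows
    parted -s /dev/sdb mkpart primary ext4 20481MiB 35841MiB    # ~15 GiB for /
    parted -s /dev/sdb mkpart primary ext3 35841MiB 36041MiB    # ~200 MiB for /boot
    parted -s /dev/sdb mkpart primary ext4 36041MiB 100%        # the rest for /home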
    Last edited by graysky (2009-08-14 21:35:29)

  • Mirrored RAID:  MediaKit reports block size error

    I am trying to create a 2nd set of backup drives for my photos. I have two new iomega 2 TB drives, which look essentially identical to the drives I'm currently using as my primary backups in a mirrored RAID set.
    I can start the process with freshly erased and reformatted drives (with the default mac format, extended, journaled, unencrypted, not case-sensitive).  And after a minute or three, I see
    "MediaKit reports block size error, usually caused by not being a multiple of 512."
    The RAID options are Mirrored RAID, Mac extended journaled, and options settings are default.
    I see several series of posts with complaints about encrypted RAIDs and disk block sizes, but none about this error with unencrypted sets. I actually started out trying to do this with the 2006 MBP running 10.6.8 and got a different error: "POSIX reports: the operation couldn't be completed. Operation not permitted." I wasn't sure whether the 2TB RAID I already have was set up with the older or newer computer (it was definitely before I put Lion on this one), so I tried this one and now have a different error.
    Any idea what the problem might be? 

    Update: I spent some time on the phone with an Apple support RAID expert, and we couldn't figure out what the error was; we couldn't bypass it by playing with partitions on the drives, or with a couple of other maneuvers that I've already forgotten. He noted that his own searches were showing a lot of mentions of similar problems, but only with Iomega drives, and he was finding the same links I found earlier about problems creating encrypted drives. Now trying to decide if it's worth throwing more good money after bad for a call with Iomega support, and waiting to see if the iomega forum is at all helpful.
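    One quick thing to check before spending more money on support calls: what sector size the iomega enclosures actually report to the OS, since the MediaKit message is about a multiple of 512. A sketch (disk2 and disk3 are placeholders for the identifiers diskutil list shows for the two iomega drives):
    diskutil list                                   # find the device identifiers for the two iomega drives
    diskutil info disk2 | grep -i "block size"      # should report 512 bytes
    diskutil info disk3 | grep -i "block size"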

  • Drives, block size and raptor300 choice

    Hi,
    Got a MacPro2,1 ('07 flavour) and was planning on upgrading some internal drives. My boot volume is currently two striped WD 500 GB drives with a block size of 16K. If I were to replace these with newer drives and change the block size to 32K, would there be any issues to speak of? Thinking of SuperDuper backups, Adobe CS3 licensing, Time Machine doing a whole-boot update, etc.
    Alternatively I may go for a pared-down boot and use one 300 GB Raptor, but which one?
    WD3000BLFS / WD3000HLFS / WD3000GLFS? The HLFS looks like the one with conventionally placed SATA connectors, but I'm unsure which fits the Mac Pro sleds. Also, is there a link for further isolation solutions, e.g. vibration dampeners?
    Do the RAID Editions of WD drives still cut the mustard in terms of performance (1 TB Western Digital WD1002FBYS RE3, SATA 3 Gb/s, 7200 rpm, 32 MB cache, 4.20 ms)?
    Many thanks
    J

    16K used to be slightly better for a boot drive. The trouble with Apple's RAID is that I don't know how to change the block size on the fly like I can with SoftRAID.
    Okay, so you probably want 4 WD VelociRaptors for scratch, or 3 SSDs.
    Any WD Black or RE3 should be just fine, 500 GB up to 1 TB, and beyond that you get into the 2 TB Green RE4 (yes, an RE edition of the Green drive).
    The other factor is you want even more than 16GB RAM to be used as cache for primary 'scratch'.
    There is a guide to photoshop acceleration and optimizing up on
    http://www.macgurus.com - lower left side panel of links to articles.

  • Solaris 10g RAC Clusterware Raw device dd comand block size, sizing

    Hi
    We are in the process of installing Clusterware for RAC on Solaris 10. I have 4 LUNs of 500 MB each; we want to use them for the voting disks and OCR.
    1. Which one do we use, /dev/dsk or /dev/rdsk?
    2) Also, how do you erase the data to clear them out? I think you use the dd command, correct? But what is the full syntax to clear the voting disks and the OCR, and do I need to specify a block size?
    3) Somebody was saying I should label the disks, just in case the controllers change the numbering if they ever get reconfigured. Can this be done in Sun Solaris? If so, can you give me an example, and do I specify the label as the OCR path?
    Thanks
    For all the info.

    1. Which one do we use, /dev/dsk or /dev/rdsk?
    $$
    Please use the character device (/dev/rdsk); this is general practice. However, from 10.2.0.1 block devices can also work.
    # chown oracle:dba /dev/rdsk/cxtydzs6
    # chmod 660 /dev/rdsk/cxtydzs6
    2) Also, how do you erase the data to clear them out? I think you use the dd command, correct? But what is the full syntax to clear the voting disks and the OCR, and do I need to specify a block size?
    $$
    # loop over the four raw LUNs; adjust the count and the /dev/ocr_disk* names to your setup
    for i in 1 2 3 4
    do
      dd if=/dev/zero of=/dev/ocr_disk$i bs=8192 count=25000 &
    done
    3) Somebody was saying I should label the disks, just in case the controllers change the numbering if they ever get reconfigured. Can this be done in Sun Solaris? If so, can you give me an example, and do I specify the label as the OCR path?
    $$
    Oracle recommends that you use a file name similar to dbname_raw.conf for this file.
    The following example shows a sample mapping file for a two-instance RAC cluster. Some of the partitions use alternative symbolic link names. Make sure that the partition device file name that you specify identifies the same partition on all nodes.
    system=/dev/rdsk/c2t1d1s3
    sysaux=/dev/rdsk/c2t1d1s4
    example=/dev/rdsk/c2t1d1s5
    users=/dev/rdsk/c2t1d1s6
    temp=/dev/rdsk/c2t1d2s3
    undotbs1=/dev/rdsk/c2t1d2s4
    undotbs2=/dev/rdsk/c2t1d2s5
    redo1_1=/dev/rdsk/c2t1d2s6
    redo1_2=/dev/rdsk/c2t1d3s3
    redo2_1=/dev/rdsk/c2t1d3s4
    redo2_2=/dev/rdsk/c2t1d3s5
    control1=/dev/rdsk/c2t1d4s3
    control2=/dev/rdsk/c2t1d4s4
    spfile=/dev/rdsk/dbname_spfile_raw_5m
    pwdfile=/dev/rdsk/dbname_pwdfile_raw_5m
    Hope this will help..
    Bharat
    NS

  • HT2559 Help with setting raid block size after the fact

    I screwed up and created my RAID 1 with the block size set at 32. I need 256... it won't let me change it. What do I do? Do I delete and re-configure it?

    Thanks for the reply. I am editing huge photo files (HDR panos) off the drive. Doesn't that mean I need 256? Anyway, when I go to erase it, it says "Deleting a mirrored RAID set changes each of its slices into a partition that contains a complete copy of the data from the deleted RAID set". Is that a problem?

  • ORA-27046: file size is not a multiple of logical block size

    Hi All,
    Getting the below error while creating the control file after a database restore. Permissions and ownership of the CONTROL.SQL file are 777 and ora<sid>:dba.
    ERROR -->
    SQL> !pwd
    /oracle/SID/sapreorg
    SQL> @CONTROL.SQL
    ORACLE instance started.
    Total System Global Area 3539992576 bytes
    Fixed Size                  2088096 bytes
    Variable Size            1778385760 bytes
    Database Buffers         1744830464 bytes
    Redo Buffers               14688256 bytes
    CREATE CONTROLFILE SET DATABASE "SID" RESETLOGS  ARCHIVELOG
    ERROR at line 1:
    ORA-01503: CREATE CONTROLFILE failed
    ORA-01565: error in identifying file
    '/oracle/SID/sapdata5/p11_19/p11.data19.dbf'
    ORA-27046: file size is not a multiple of logical block size
    Additional information: 1
    Additional information: 1895833576
    Additional information: 8192
    Checked the target system's init<SID>.ora and found that db_block_size is 8192. Also checked the source system's init<SID>.ora and db_block_size is 8192 there as well.
    /oracle/SID/102_64/dbs$ grep -i block initSID.ora
    Kindly look into the issue.
    Regards,
    Soumya

    Please check the following things:
    1. SPFILE corruption:
    Start the DB in nomount using the pfile (i.e. init<sid>.ora), run "create spfile from pfile;", then restart the instance in nomount state.
    Then create the control file from the script.
    2. Check the ulimit settings on the target server; the file size limit should be unlimited.
    3. Has the db_block_size parameter been changed in the init file by any chance?
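    One more check worth doing, since the error reports the exact sizes: 1895833576 is not a multiple of 8192 (it is 24 bytes short of a whole block), which usually means the restored datafile was truncated or corrupted in transit. A quick way to confirm, as a sketch (the path is the one from the error above):
    ls -l /oracle/SID/sapdata5/p11_19/p11.data19.dbf    # compare the size on disk with the one in the error
    expr 1895833576 % 8192                              # prints 8168, i.e. not a whole number of 8K blocks
    # if it does not divide evenly, re-copy that datafile in binary mode and rerun CONTROL.SQL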
    Regards
    Kausik

  • Mac Pro RAID block size recommendations for working with audio in Logic Pro

    I have recently ordered a Mac Pro and plan to do a RAID configuration across 3 HDDs.
    The RAID type I am going to use is RAID 0 (striped).
    The computer is going to be used primarily for audio post-production, working with 20+ 24-bit audio files at any one time within a Logic project.
    I want to know the best block size to use when configuring the RAID.
    I understand that a higher block size is best for working with large files, but do I need that in my case, or will the default 32K block size be enough?
    Thanks in advance

    Use 64K. Things like databases like 32K blocks because of all the small files. Audio files are pretty small even at 24-bit/192 kHz. Go to 128K if all you are doing is streaming and no samples. But 20+ 24-bit tracks is really not that much anyway, considering most modern HDDs can stream 100 MB/s off one spindle. You'll probably be fine regardless of the block size you choose, but most audio pros choose 64K.
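    A rough bandwidth check backs that up (a sketch, assuming mono 24-bit/192 kHz streams; adjust for your actual sample rate and channel count):
    # 3 bytes per sample * 192000 samples/s * 20 tracks
    echo $(( 3 * 192000 * 20 )) bytes/s    # = 11520000, about 11.5 MB/s, well under one spindle's ~100 MB/s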

  • ORA-00349: failure obtaining block size for '+Z'  in Oracle XE

    Hello,
    I am attempting to move the online redo log files to a new flash recovery area location created on network drive "Z" ( Oracle Database 10g Express Edition Release 10.2.0.1.0).
    When I run @?/sqlplus/admin/movelogs; in SQL*Plus as a local sysdba, I get the following errors:
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Please let me know how to go about resolving this issue.
    Thank you.
    See below for detail:
    Connected.
    SQL> @?/sqlplus/admin/movelogs;
    SQL> Rem
    SQL> Rem $Header: movelogs.sql 19-jan-2006.00:23:11 banand Exp $
    SQL> Rem
    SQL> Rem movelogs.sql
    SQL> Rem
    SQL> Rem Copyright (c) 2006, Oracle. All rights reserved.
    SQL> Rem
    SQL> Rem NAME
    SQL> Rem movelogs.sql - move online logs to new Flash Recovery Area
    SQL> Rem
    SQL> Rem DESCRIPTION
    SQL> Rem This script can be used to move online logs from old online
    log
    SQL> Rem location to Flash Recovery Area. It assumes that the database
    SQL> Rem instance is started with new Flash Recovery Area location.
    SQL> Rem
    SQL> Rem NOTES
    SQL> Rem For use to rename online logs after moving Flash Recovery
    Area.
    SQL> Rem The script can be executed using following command
    SQL> Rem sqlplus '/ as sysdba' @movelogs.sql
    SQL> Rem
    SQL> Rem MODIFIED (MM/DD/YY)
    SQL> Rem banand 01/19/06 - Created
    SQL> Rem
    SQL>
    SQL> SET ECHO ON
    SQL> SET FEEDBACK 1
    SQL> SET NUMWIDTH 10
    SQL> SET LINESIZE 80
    SQL> SET TRIMSPOOL ON
    SQL> SET TAB OFF
    SQL> SET PAGESIZE 100
    SQL> declare
    2 cursor rlc is
    3 select group# grp, thread# thr, bytes/1024 bytes_k
    4 from v$log
    5 order by 1;
    6 stmt varchar2(2048);
    7 swtstmt varchar2(1024) := 'alter system switch logfile';
    8 ckpstmt varchar2(1024) := 'alter system checkpoint global';
    9 begin
    10 for rlcRec in rlc loop
    11 stmt := 'alter database add logfile thread ' ||
    12 rlcRec.thr || ' size ' ||
    13 rlcRec.bytes_k || 'K';
    14 execute immediate stmt;
    15 begin
    16 stmt := 'alter database drop logfile group ' || rlcRec.grp;
    17 execute immediate stmt;
    18 exception
    19 when others then
    20 execute immediate swtstmt;
    21 execute immediate ckpstmt;
    22 execute immediate stmt;
    23 end;
    24 execute immediate swtstmt;
    25 end loop;
    26 end;
    27 /
    declare
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Can someone point me in the right direction as to what I may be doing wrong here - Thank you!

    888442 wrote:
    I am trying to drop and recreate ONLINE redo logs on my STANDBY DATABASE (11.1.0.7), but I am getting the below error.
    On the primary we have made the changes, i.e. we added new logfiles with a bigger size and 3 members. When trying to do the same on the standby we get this error.
    Our database is in Active Data Guard read-only mode and the Oracle version is 11.1.0.7.
    I have deferred the log apply and cancelled the managed recovery, and DG is in manual mode.
    SQL> alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M;
    alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+DT_DG1'
    First, why are you dropping & recreating online redo log files on the standby?
    On a standby, only standby redo log files will be used. I'm not sure what you are trying to do.
    Here is an example of how to create online redo log files. Check that the diskgroup is mounted and has sufficient space.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    sys@ORCL> alter database add logfile group 4 (
      2     'C:\ORACLE\ORADATA\ORCL\redo_g01a.log',
      3     'C:\ORACLE\ORADATA\ORCL\redo_g01b.log',
      4     'C:\ORACLE\ORADATA\ORCL\redo_g01c.log') size 10m;
    Database altered.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01A.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01B.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01C.LOG
    6 rows selected.
    sys@ORCL>
    Your profile:
    Handle: 888442
    Status Level: Newbie
    Registered: Sep 29, 2011
    Total Posts: 12
    Total Questions: 8 (7 unresolved)
    Close the threads if answered - keep the forum clean.

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a data warehouse on Oracle 11g R2 and currently I am trying to work out a storage strategy - unfortunately I have very little experience with that. The question is: what is the general advice regarding tablespaces and block size? I did some research and it is hard to find a clear answer; some resources advise that block size is not important and can be left small (8 KB), others state that it is crucial and should be the biggest possible (64 KB). The other thing is what part of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice, as it may decrease performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that's why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions on separate tablespaces?
    Any help highly appreciated and thank you in advance.

    Wojtus-J wrote:
    "We are preparing to implement a data warehouse on Oracle 11g R2 and currently I am trying to work out a storage strategy - unfortunately I have very little experience with that."
    With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
    "The question is: what is the general advice regarding tablespaces and block size?"
    If you need to ask about block sizes, use the default (i.e. 8KB).
    "I did some research and it is hard to find a clear answer."
    But if you get contradictory advice from this forum, how would you decide which bits to follow?
    A couple of sensible guidelines when researching on the internet: look for material that is datestamped with recent dates (the last couple of years), or that references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant, and greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    "The other thing is what part of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice, as it may decrease performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that's why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have 'old' (read-only) data partitions on separate tablespaces?"
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance argument was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons: big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition the data so that a large fraction of it can eventually be made read-only: using tablespaces to mark time-boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary, e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
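    As a sketch of that "pack and make read-only" step (the table, partition and tablespace names below are placeholders, not anything from the original post, and local index partitions would need a rebuild after the move):
    sqlplus / as sysdba <<'EOF'
    -- move an aged partition into its time-specific tablespace, then freeze that tablespace
    alter table sales move partition sales_2011_09 tablespace dw_2011_09;
    alter tablespace dw_2011_09 read only;
    EOF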
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
    Regards
    Jonathan Lewis

  • Raid storage usage and block size

    We have two XServe RAID units Raid 5 and we are adding a new 16 bay ACNC raid with 16 1.5TB drives in Raid 6 + Hot Spare. I initialized the Raid 6 with 128K block size. The total data moving from the older raid volumes is around 5.7TB, but on the new Raid it is taking around 7.4TB of space. Is this due to the 128K block size? This is a prepress server so most of the files are quite large, but there may be lots of small files as well.
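    For block-size overhead alone to explain that, you would need an enormous number of small files, since the average waste is about half an allocation block per file. A back-of-envelope check (a sketch; 1.7 TB is just the difference quoted above):
    # average slack at 128K blocks is ~64 KiB per file; files needed to waste ~1.7 TB:
    echo $(( 1700000000000 / (128 * 1024 / 2) ))    # prints 25939941, i.e. roughly 26 million files
    So unless the volume really holds tens of millions of tiny files, it may be worth checking how the two systems report sizes before blaming the block size.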

    Hi
    RAID 0 does indeed offer the best performance; however, if any one drive in the striped set fails you will lose all your data. If you have not considered a backup strategy, now would be the time to do so. For redundancy, RAID 1 (mirror) might be a better option, as it offers a safety net in case of a single drive failure. A RAID is not a backup, and you should always have a workable backup strategy.
    Purchase another two 1 TB drives and you could consider RAID 10 - two stripes, mirrored.
    Not all your files will be large ones, as I'm guessing you'll be using this workstation for the usual mundane matters such as email, etc. Selecting a larger block size with small files usually decreases performance. You have to consider all applications and file sizes, in which case the best block size would be 32K.
    My 2p
    Tony

  • Can't change default block size in dbca

    10.1.0.3
    Solaris
    I am using DBCA to create a database. When I go to the sizing screen and try to change the default block size, the option is always greyed out at 8K.
    Does anyone know why? This happens even when I pick a data warehouse template.

    There is a reason Oracle uses 8K as the default database block size for their warehouse template. As far as I know, the option is greyed out because the seeded templates (including the data warehouse one) ship with pre-created datafiles whose block size is fixed; only a custom template without datafiles lets you pick a different size. In any case, changing the default block size to a larger size generally does not result in better performance when both databases are allocated exactly the same SGA memory.
    HTH -- Mark D Powell --

  • Change block size for several log-files simultaneously?

    Hi,
    I'm using SignalExpress to record and analyze data.
    Sometimes I want to analyze the recorded data both over a short period of time and over a longer one.
    (Imagine creating an average of every second first and then an average of every 10 seconds.)
    Then I need to change all the log files, and also the specific parts of each log file. See attachment.
    I sometimes have up to 1000 log files containing signals from 4 different modules; that makes 4000 adjustments to change from block size 10000 to block size 1000.
    Is there any way to adjust all the log-files block size at once?
    Many thanks!
    Anders Hansson
    Engineer
    Attachments:
    NI.JPG (95 KB)

    Hi,
    Isn't anyone else interested in a solution for this operation?
    I reported this to the NI feedback service and they advised me to post the request here to get a quicker reply.
    So...
    Best regards
    Engineer Hansson
