Is my OS block size 4096 or 512 bytes?

Hi!
I am working on RHEL 4.5 ES edition. I have an installation of an Oracle 10g database whose files are spread across two separate partitions, /u01 and /u02, both of which show a block size of 4096 according to dumpe2fs.
dumpe2fs /dev/sdb1 | grep 'size'
dumpe2fs /dev/sda7 | grep 'size'
Block size: 4096
Fragment size: 4096
Inode size: 128
Using a different method of deriving the OS block size, querying v$archived_log, I get 512.
Is the v$archived_log method of finding the OS block size incorrect?
Thanks and Regards,
Aniruddha

What does
/sbin/tune2fs -l /dev/xxx
return? 4096 only? If yes, then your block size is 4096.
From http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_1016.htm#sthref3455
The online log logical block size is a platform-specific value that is not adjustable by the user.
So maybe it is fixed at 512 bytes.
Others: Please comment.
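For what it's worth, both numbers can be "right" because they measure different things: dumpe2fs and tune2fs report the ext3 filesystem block size (4096 here), while v$archived_log.block_size reports the redo log block size, which on Linux matches the 512-byte disk sector size. A quick sketch to see all three values side by side (assuming /dev/sdb1 from the post above and a working "/ as sysdba" login):
/sbin/tune2fs -l /dev/sdb1 | grep 'Block size'   # ext3 filesystem block size
/sbin/blockdev --getss /dev/sdb                  # logical sector size of the disk
echo "select block_size from v\$archived_log where rownum = 1;" | sqlplus -s / as sysdba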

Similar Messages

  • Database block size vs. tablespace blocksize

    dear colleagues
    the block size of my database is 8192 (db_block_size=8192), and all the tablespaces I have use that same block size. Now I want to create a new tablespace with a different block size, and when I do so I receive the following error:
    ORA-29339: tablespace block size 4096 does not match configured block size
    please help solving the problems
    cheers
    asif

    $ oerr ora 29339
    29339, 00000, "tablespace block size %s does not match configured block sizes"
    // *Cause:  The block size of the tablespace to be plugged in or
    //          created does not match the block sizes configured in the
    //          database.
    // *Action:Configure the appropriate cache for the block size of this
    //         tablespace using one of the various (db_2k_cache_size,
    //         db_4k_cache_size, db_8k_cache_size, db_16k_cache_size,
    //         db_32K_cache_size) parameters.
    $
    You have to configure db_4k_cache_size to a value different from zero. See http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams037.htm#sthref161
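    A minimal sketch of the fix in SQL*Plus as SYSDBA (the tablespace name, datafile path, and cache size below are made-up examples):
    SQL> ALTER SYSTEM SET db_4k_cache_size = 16M;
    SQL> CREATE TABLESPACE ts_4k DATAFILE '/u01/oradata/ts_4k_01.dbf' SIZE 100M BLOCKSIZE 4K;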

  • OS block size

    Hi
    How do I check the OS block size in Linux and Windows?
    Regards

    But as an Oracle DBA one should know how to find the OS block size in order to set a proper value for the db_block_size initialization parameter. http://www.lmgtfy.com/?q=ext3+block+size
    Once you know the OS block size, what formula is used "to set a proper value for the db_block_size initialization parameter"?
    bcm@bcm-laptop:~$ sudo tune2fs -l /dev/sda1 | grep -i 'block size'
    [sudo] password for bcm:
    Block size: 4096
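    On Windows, a rough equivalent (a sketch; the drive letter is an example) is to read the NTFS values from fsutil: "Bytes Per Cluster" is the allocation unit and "Bytes Per Sector" is the sector size.
    C:\> fsutil fsinfo ntfsinfo c: | findstr /C:"Bytes Per"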

  • How to find block size of my OS

    Hi, I am using Red Hat 9. I would like to know how to find out my OS block size, as I need to set the database block size accordingly...

    I would rather adjust the OS block size to the required database block size, not vice versa, if possible.
    You can check the FILESYSTEM block size using dumpe2fs command (replace /dev/sda5 with your filesystem blocks device):
    [root@localhost ~]# dumpe2fs /dev/sda5 | grep "Block size"
    dumpe2fs 1.39 (29-May-2006)
    Block size: 4096
    If you want to know the minimum OS block device I/O block size (which is not the same thing as the filesystem block size), you can find it easily with an Oracle query (if your db is in archivelog mode):
    select block_size from v$archived_log where rownum = 1;
    Tanel.
    http://blog.tanelpoder.com
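    If the database is not in archivelog mode, you can also ask the block device directly; a sketch using util-linux's blockdev (the device names are examples):
    blockdev --getss /dev/sda    # logical sector size (the 512 the Oracle query reports)
    blockdev --getbsz /dev/sda5  # block size the kernel uses for I/O on this device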

  • What USB storage devices have a block size of 512 bytes?

    After pulling my hair out for weeks trying to get a usb hard drive to work with my new AirPort Extreme (802.11n), I ran across this
    http://docs.info.apple.com/article.html?artnum=305038
    AirPort Extreme (802.11n): USB storage device supported formats and protocols
    You can connect USB-based storage devices to an AirPort Extreme (802.11n). Learn which formats and protocols are supported.
    The AirPort Extreme (802.11n) supports USB storage devices that have a block size of 512 bytes, and are formatted as Mac OS Extended (HFS-plus), FAT16, or FAT32. Not all USB storage devices use a block size of 512 bytes.
    The AirPort Extreme (802.11n) shares storage devices based on the format used to initialize the storage device. For example, if HFS-plus formatting was used, AFP and SMB/CIFS protocols are used to share the device on the network. If FAT16 or FAT32 was used, SMB/CIFS protocols are used.
    The AirPort Extreme (802.11n) works with disks that have a single partition and are not software RAID volumes (no more than one volume per physical disk). If the disk is a self-contained RAID that presents itself to a computer as a single volume requiring no software support, then it is supported.
    Note: Use AirPort Disk Utility to discover and mount AirPort Extreme-based volumes over the network.
    Now, this information is not easily obtainable while shopping for a new USB hard drive. How do I find out which ones support this 512-byte block size?
    It would have been nice to know that not all USB hard drives are supported by the AirPort Extreme (802.11n) before I purchased it.
    Thanks
    J Riley

    Duane posted a link to an unofficial 802.11n Airport Extreme Hard Drive Compatibility List.
    http://www.ifelix.co.uk/tech/8014.html
    Still not enough information to make an informed purchase that will work.

  • Force Logical Block size for HDD pulled from external enclosure

    So I pulled a 3TB Hitachi drive from an external enclosure, and the USB-to-SATA board in the enclosure forced a logical sector size of 4K for the drive. But connecting directly to the drive exposes a logical sector size of 512 bytes, with the result that I am unable to mount the partition. So what I am asking is: is there a way to force the logical sector size for a drive to be different from what the drive exposes, or am I going to have to move everything off the drive and reformat it?
    The enclosure in question:
    http://www.newegg.com/Product/Product.a … 6822145510
    Direct:
    [73828.076447] sd 0:0:1:2: [sdl] Synchronizing SCSI cache
    [73838.912744] scsi 0:0:1:2: Direct-Access Hitachi HDS723030ALA640 R001 PQ: 0 ANSI: 5
    [73838.912889] sd 0:0:1:2: [sdo] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
    [73838.912892] sd 0:0:1:2: Attached scsi generic sg12 type 0
    [73838.912995] sd 0:0:1:2: [sdo] Write Protect is off
    [73838.912999] sd 0:0:1:2: [sdo] Mode Sense: cb 00 00 08
    [73838.913042] sd 0:0:1:2: [sdo] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [73838.949343] sdo: sdo1
    [73838.949614] sd 0:0:1:2: [sdo] Attached SCSI disk
    USB:
    Mar 5 18:52:35 localhost kernel: [ 4.711438] sd 8:0:0:0: [sdl] 732566016 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    Mar 5 18:52:35 localhost kernel: [ 4.712191] sd 8:0:0:0: [sdl] Write Protect is off
    Mar 5 18:52:35 localhost kernel: [ 4.712193] sd 8:0:0:0: [sdl] Mode Sense: 4b 00 10 08
    Mar 5 18:52:35 localhost kernel: [ 4.712938] sd 8:0:0:0: [sdl] No Caching mode page present
    Mar 5 18:52:35 localhost kernel: [ 4.713978] sd 8:0:0:0: [sdl] Assuming drive cache: write through
    Mar 5 18:52:35 localhost kernel: [ 4.714720] sd 8:0:0:0: [sdl] 732566016 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    Mar 5 18:52:35 localhost kernel: [ 4.716186] sd 8:0:0:0: [sdl] No Caching mode page present
    Mar 5 18:52:35 localhost kernel: [ 4.717215] sd 8:0:0:0: [sdl] Assuming drive cache: write through
    Mar 5 18:52:35 localhost kernel: [ 4.735573] sdl: sdl1
    Mar 5 18:52:35 localhost kernel: [ 4.736557] sd 8:0:0:0: [sdl] 732566016 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    Mar 5 18:52:35 localhost kernel: [ 4.738058] sd 8:0:0:0: [sdl] No Caching mode page present
    Mar 5 18:52:35 localhost kernel: [ 4.739085] sd 8:0:0:0: [sdl] Assuming drive cache: write through
    Mar 5 18:52:35 localhost kernel: [ 4.740238] sd 8:0:0:0: [sdl] Attached SCSI disk
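    Not an answer from this thread, but a workaround sometimes suggested for exactly this mismatch, assuming a reasonably recent util-linux that supports losetup --sector-size: wrap the directly-attached disk (/dev/sdo in the logs above) in a loop device that re-exposes it with the 4K logical sectors the enclosure used, then mount that.
    # re-expose the disk with 4096-byte logical sectors; -P rescans the partition table
    losetup -f --show -P --sector-size 4096 /dev/sdo   # prints e.g. /dev/loop0
    mount /dev/loop0p1 /mnt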

    Hi ebrian,
    You can't perform a migration from Linux to Windows in that manner with 9i. The plan can be changed to migrate from 9.2.0.4 to 10g and then move from 32-bit Linux to 32-bit Windows.
    It should not matter; the end result for the OP is the same: 10g on 32-bit Windows.
    It just depends on what technology the OP wants to use (export/import or RMAN convert).
    Regards,
    Tycho

  • Mirrored RAID:  MediaKit reports block size error

    I am trying to create a 2nd set of backup drives for my photos. I have two new Iomega 2TB drives, which look essentially identical to the drives I'm currently using as my primary backups in a mirrored RAID set.
    I can start the process with freshly erased and reformatted drives (with the default mac format, extended, journaled, unencrypted, not case-sensitive).  And after a minute or three, I see
    "MediaKit reports block size error, usually caused by not being a multiple of 512."
    The RAID options are Mirrored RAID, Mac extended journaled, and options settings are default.
    I see several series of posts with complaints about encrypting RAIDs and disk block sizes, but not about unencrypted errors. I actually started out trying to do this with the 2006 MBP running 10.6.8 and got a different error: "POSIX reports: the operation couldn't be completed. Operation not permitted." I wasn't sure whether the 2TB RAID I already have was set up with the older or newer computer (it was definitely before I put Lion on this one), so I tried this one and now have a different error.
    Any idea what the problem might be? 

    Update: I spent some time on the phone with an Apple support RAID expert, and we couldn't figure out what the error was; we couldn't bypass it by playing with partitions on the drives, or any of a couple of other maneuvers that I've already forgotten. He noted that his own searches were showing a lot of mentions of similar problems, but only with Iomega drives, and he was finding the same links I found earlier about problems creating encrypted drives. Now I'm trying to decide whether it's worth throwing more good money after bad on a call with Iomega support, and waiting to see if the Iomega forum is at all helpful.
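    One thing worth trying before more phone calls: create the mirror from Terminal, which sometimes reports a more specific error than Disk Utility does (a sketch; the disk identifiers and set name are examples, and this erases both disks):
    # list the disks to find the right identifiers
    diskutil list
    # create a mirrored RAID set named "PhotoBackup" as Journaled HFS+
    diskutil appleRAID create mirror PhotoBackup JHFS+ disk2 disk3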

  • OSD-04001: invalid logical block size (OS 2800189884)

    My Windows 2003 machine, which was running Oracle XE, crashed.
    I installed Oracle XE on Windows XP on another machine.
    I copied the D:\oracle\XE10g\oradata folder from the Win2003 machine to the same location on the WinXP machine.
    When I start the database on WinXP using SQL*Plus I get the following message:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 146800640 bytes
    Fixed Size 1286220 bytes
    Variable Size 62918580 bytes
    Database Buffers 79691776 bytes
    Redo Buffers 2904064 bytes
    ORA-00205: error in identifying control file, check alert log for more info
    In my D:\oracle\XE10g\app\oracle\admin\XE\bdump\alert_xe I found the following errors:
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 4 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    Wed Apr 25 18:38:36 2007
    ALTER DATABASE MOUNT
    Wed Apr 25 18:38:36 2007
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Wed Apr 25 18:38:36 2007
    ORA-205 signalled during: ALTER DATABASE MOUNT...
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Please help.
    Regards,
    Zulqarnain

    Hi Zulqarnain,
    Error OSD-04001 is a Windows NT-specific Oracle message. It means that the logical block size is not a multiple of 512 bytes, or that it is too large.
    So what can you do? Well, you should try changing the value of DB_BLOCK_SIZE in the initialization parameter file.
    Regards

  • Change default block size on tftp server

    Is there a way to change the default block size on the TFTP server so that it defaults to 1024 instead of 512?
    I am aware that the client can accomplish this by:
    tftp> tsize
    tftp> blksize 1024
    but I don't want the client to have to enter these additional commands.

    Were you ever able to get an answer on this? I have the same question.
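    As far as I know the server cannot force it: per RFC 2348 the block size stays at 512 unless the client negotiates a larger one, so a server can only set the permitted ceiling. As a sketch, if the server happens to be tftpd-hpa on Debian/Ubuntu (an assumption), that ceiling goes in /etc/default/tftpd-hpa:
    # /etc/default/tftpd-hpa (hypothetical example; clients must still request blksize)
    TFTP_OPTIONS="--secure --blocksize 1024"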

  • Buffer data before charting it, and block size

    I hope you can help me with this situation, because I have been stuck on it for two days and I don't think I will see the light any time soon, and time is a scarce resource.
    I want to use an NI DAQCard-AI-16XE-50 (20 kS/s according to the specifications). To acquire data in DASYLab I have been using the OPC DA system, but when I try to get a chart from the signal I get awful results.
    I guess the origin of the problem is that the PC is not powerful enough to generate a chart in real time. Is there a block to buffer the data and then graph it, without using the "Write Data" block, so as to avoid writing data to disk?
    Another cause of the problem could be an incorrect block size setting, but in my view, at 10 kHz a block size of 4096 is more than enough to acquire a 26 Hz signal (shown in the photo). If I reduce the block size to 1, the signal shown in the graph is a constant at the first acquired value. Why could this be?
    Thanks in advance for your answers, 
    Attachments: data from DAQcard.PNG (95 KB)

    Is there someone who can help me?
    I connected a CN606TC2 and DASYLab 11 using the RS232 cable, and have followed the CN606TC2 instruction manual.
    In the DASYLab RS232 module, there are two boxes:
    1) Measurement data request,
    2) Measurement data format...
    What should I write in these boxes so that the DASYLab digital meter modules can read from the CN606TC2?
    To start communication, the Command Module must send alert code ASCII [L] hex 4C.
    Commands requesting data from the scanner (CN606TC2):
     ASCII [A] hex 41 = Zones/ Alarms/ Scan time
     ASCII [M] hex 4D = Model/ Password/ ID#/ # of zones
     ASCII [S] hex 53 = Setpoints
     ASCII [T] hex 54 = Temperature
    I do not understand the program and the ASCII codes.
    I sent [T] in the RS232 monitor and DASYLab returned 54.
    I am very grateful for your help

  • Optimal NTFS block size for Oracle 11G on Windows 2008 R2 (OLTP)

    Hi All,
    We are currently setting up an Oracle 11g instance on a Windows 2008 R2 server and were looking to see if there is an optimal NTFS block size. I've read the following: http://docs.oracle.com/cd/E11882_01/win.112/e10845/specs.htm
    But it only mentions the database block sizes that can be used (2k to 16k). Basically, what I got out of it was that the different block sizes affect the maximum number of database files possible for each database.
    Is there an optimal NTFS block size for Oracle 11G OLTP system on Windows?
    Thanks in advance

    Is there an optimal NTFS block size for Oracle 11G OLTP system on Windows?
    Ideally the FS block size should be equal to the Oracle tablespace block size, or at least divide evenly into the Oracle block size.
    For example, if the Oracle BS is 8K, then the NTFS BS is best at 8K, but it can also be 4K or 2K.
    Also, both must be a whole multiple of the disk sector size. Older disks had 512-byte sectors; contemporary HDDs usually have an internal sector size of 4K.
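    A sketch of checking and setting the NTFS allocation unit to match an 8K Oracle block size (the drive letter is an example, and format erases the volume):
    :: show the current allocation unit ("Bytes Per Cluster")
    fsutil fsinfo ntfsinfo o:
    :: reformat with an 8K allocation unit (destroys all data on O:)
    format o: /FS:NTFS /A:8192 /Q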

  • ZFS boot block size and UFS boot block size

    Hi,
    In the Solaris UFS file system, the boot block resides in sectors 1 to 15, and each sector is 512 bytes, which makes 7680 bytes in total:
    bash-3.2# pwd
    /usr/platform/SUNW,SPARC-Enterprise-T5220/lib/fs/ufs
    bash-3.2# ls -ltr
    total 16
    -r--r--r--   1 root     sys        7680 Sep 21  2008 bootblk
    For the ZFS file system, the boot block size is:
    bash-3.2# pwd
    /usr/platform/SUNW,SPARC-Enterprise-T5220/lib/fs/zfs
    bash-3.2# ls -ltr
    total 32
    -r--r--r--   1 root     sys        15872 Jan 11  2013 bootblk
    When we install the ZFS bootblk on disk using the installboot command, how many sectors will it use to write the bootblk?
    Thanks,
    SriKanth Muvva
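    For reference, the boot block is written with installboot; a sketch for SPARC (the disk slice is an example):
    # install the ZFS bootblk on slice 0 of the boot disk
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0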

    Thanks for your reply.
    My query is: the ZFS boot block is about 16K, but sectors 1 to 15 on disk (where the boot block is to be installed) make only about 8K. Does that mean that of the 16K, only 8K is written to disk?
    If you don't mind, will you please explain this in depth?
    I am referring to the UFS doc, page 108, kernel bootstrap and initialization (it's old, and it's for Solaris 8):
    http://books.google.co.in/books?id=r_cecYD4AKkC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false
    Please help me find a doc on kernel bootstrap and initialization for Solaris 10 with ZFS and the boot archive.
    Thanks in advance.
    Srikanth

  • Software RAID0 for Video Editing+Storage: What Stripe Block Size?

    Hi
    I'm planning to set up a RAID0 with 2x 2TB HDDs to store and edit the videos from my digital camera. Most files will be 5 to 15 GB in size. In the past I had RAID0s on RHEL with the (old) default stripe block size of 64 KB. Since the new array will only contain very big files, would it be better to go with a 512 KB stripe block size? 512 is also the default setting now in the GNOME Disk Utility, which I use for my partitions.
    Another question: does the so-called "RAID lag" exist? I think I've seen occasional stuttering in movies when they're played from a RAID0/5 without cache enabled in the player (with cache it's fine). Games also seem to freeze occasionally when installed on RAID (I had this problem in the past with Wine games installed on RAID0; they sometimes froze when loading data, which never happened on a normal HDD).
    Many thanks in advance for your suggestions
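    For reference, the stripe (chunk) size is fixed at creation time; a minimal mdadm sketch with the 512 KB chunk you mention (the device names are examples, and this destroys their contents):
    # create a 2-disk RAID0 with a 512 KB chunk size and put ext4 on it
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=512 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    mdadm --detail /dev/md0 | grep -i 'chunk size'   # verify
    For large sequential files a big chunk size is generally a reasonable choice, since each file still spans many stripes.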

    That is a hard question to answer... nothing is best for everyone. However, if I am to generalize, I would put it this way: if you want to cut everything from short promos to Hollywood pictures, you need:
    A high-end Windows PC (only because the Mac Pro hasn't been updated in ages)
    Avid Media Composer with Nitrus DX
    Two monitors
    Broadcast monitor
    HD Deck
    pimping 5.1 speakers
    A good mixer
    You are looking at over 70,000 or 80,000, and it could approach even more. HD decks alone run at least 15k.
    If price is not an issue, then there you go...
    However, this is not realistic for most people, nor the best solution by any means... I run a MacBook Pro with Avid (as primary), Final Cut 7, Final Cut X (for practice; I haven't had to use it for a job yet), and Premiere (just in case).
    I am a Final Cut child who grew up on it and love it; however, everything I have done in the last few years is on Avid...
    Have a second monitor..
    I am very portable, and the rest of the gear I usually get wherever I work. I am looking into getting a good broadcast monitor connected with an AJA Thunderbolt interface.
    Like I said, this is a very open question; there is no (BEST), it all depends on what you will be doing. If you get Avid (which can do everything, but is clunky as **** and counter-intuitive) and you are only cutting wedding videos and short-format stuff, it would be overkill galore. Just get FCP X in that case... simple, easy, one app...
    Be more specific and you will get clearer answers.

  • ORA-00374 - Block Size issue

    I've already searched and researched this quite a bit. I am not using 9i like the other post about this issue from many years ago. Before you ask "Why a db_block_size of 32k?", this is for a test case. Simple as that.
    My system:
    Quad DC Opteron, 32GB RAM, 4x 15k SAS disks in hardware RAID10.
    Windows Server 2008 R2 Standard 64bit (I much prefer Linux, but this test requires Win2008)
    Oracle 11g R2 64-bit EE
    The Disk with the O/S and ORACLE_HOME is formatted with the default 4k size allocation units.
    The allocated database file storage IS formatted with 32k-sized allocation units and is on a SAN.
    I know it's 32k because when I presented the LUN to the server, I formatted it with 32k allocation units. See the output below, paying attention to the "Bytes Per Cluster" line:
    C:\Users\********************>fsutil fsinfo ntfsinfo o:
    NTFS Volume Serial Number : 0x60d245e2d245bcd2
    Version : 3.1
    Number Sectors : 0x00000000397fc7ff
    Total Clusters : 0x0000000000e5ff1f
    Free Clusters : 0x0000000000e5f424
    Total Reserved : 0x0000000000000000
    Bytes Per Sector : 512
    Bytes Per Cluster : 32768
    Bytes Per FileRecord Segment : 1024
    Clusters Per FileRecord Segment : 0
    Mft Valid Data Length : 0x0000000000040000
    Mft Start Lcn : 0x0000000000018000
    Mft2 Start Lcn : 0x0000000000000001
    Mft Zone Start : 0x0000000000018000
    Mft Zone End : 0x0000000000019920
    RM Identifier: 35506ECB-7F9E-11DF-99F3-001EC92FDE3F
    So, my db file storage is formatted with a 32k allocation size.
    My issue is this:
    Oracle shows me the 32k block size when running DBCA using the Custom template. I choose it, the other required options are configured, and when it starts building the DB, I get this:
    ORA-00374: parameter db_block_size = 32768 invalid; must be a multiple of 512 in the range [2048..16384].
    Other responses I've seen to this say "Windows doesn't support an allocation size above 8k or 16k," which is utterly absurd, since I run SQL 2008 on a few machines and it DOES support up to a 64k allocation size, which is what I run. I know this for a FACT.
    Windows DOES support up to a 64k allocation size. Does anyone know why Oracle is giving me a hard time about it?
    I saw Metalink note 794842.1, but I'd like to know the reasoning/logic for this limitation?

    user6517483 wrote:
    I saw Metalink note 794842.1, but I'd like to know the reasoning/logic for this limitation?
    A WAG... Oracle is written to run on a wide variety of operating systems. As operating systems differ, one typically designs something equivalent in functionality to a Windows HAL: an abstraction layer between the kernel services/calls needed by the s/w and the actual implementation of those by the kernel itself.
    So despite the kernel supporting feature X, it could be different from similar features supported on other kernels and difficult to implement via this s/w HAL-like interface. The Windows kernel has a lot of differences from the Linux kernel, for example, and somehow Oracle needs to make the same core db s/w run on both. It is not unrealistic to expect that some kernel features will be supported better than others, as there is a common denominator in terms of design and implementation in the core s/w.
    And this is the case with block sizes... not that critical IMO. I have played with different block sizes in a 20+TB storage system with Oracle RAC (10.2g) on Linux (as part of testing the combination of storage system, cluster, and technologies used). Larger block sizes made zero difference to raw I/O performance. The impact was instead a more logical one: fewer db blocks can be cached as they are larger, and more data can be written into a single data block. And as numerous experienced Oracle professionals have commented, Oracle decided that the default 8KB size is the best fit at this layer.
    So extensive and very accurate testing needs to be done, IMO, to determine whether a larger block size is justified... and the effort to do that may just outweigh the little gains achieved by finding the "perfect" block size. Why not focus all that effort instead on correctly using Oracle? Application design? Data modeling? Development and coding? These are the factors that play the most dominant roles at the end of the day in determining performance.

  • Increasing TFTP block size for 2960 series switches

    I have read that some Cisco components can increase the default TFTP block size to values greater than 512 bytes by using the command:
    ip tftp blocksize xxxx
    This doesn't seem to be available on Cisco 2960 series switches. Is there a way to do this with the 2960s?

    I have moved a WS-C2960-24LC-S running LanLite to 12.2(55)SE9 (the current end of the line for this switch) and indeed the command is not present. Sooo... this appears to be a limitation of LanLite.
    My predecessor implemented about 70 switches with LanLite. I put a stop to this about a year ago but it is going to take some time to flush them out of the inventory.
    Thanks for your response.
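    For what it's worth, on IOS images that do have the command, it is a global configuration setting (8192 here is just an example value; the accepted range is platform-dependent):
    switch# configure terminal
    switch(config)# ip tftp blocksize 8192
    switch(config)# end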
