OSD-04001: invalid logical block size (OS 2800189884)

My Windows 2003 machine, which was running Oracle XE, crashed.
I installed Oracle XE on Windows XP on another machine.
I copied the D:\oracle\XE10g\oradata folder from the Windows 2003 machine to the same location on the Windows XP machine.
When I start the database on Windows XP using SQL*Plus I get the following message:
SQL> startup
ORACLE instance started.
Total System Global Area 146800640 bytes
Fixed Size 1286220 bytes
Variable Size 62918580 bytes
Database Buffers 79691776 bytes
Redo Buffers 2904064 bytes
ORA-00205: error in identifying control file, check alert log for more info
In D:\oracle\XE10g\app\oracle\admin\XE\bdump\alert_xe I found the following errors:
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 4 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
Wed Apr 25 18:38:36 2007
ALTER DATABASE MOUNT
Wed Apr 25 18:38:36 2007
ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
ORA-27047: unable to read the header block of file
OSD-04001: invalid logical block size (OS 2800189884)
Wed Apr 25 18:38:36 2007
ORA-205 signalled during: ALTER DATABASE MOUNT...
ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
ORA-27047: unable to read the header block of file
OSD-04001: invalid logical block size (OS 2800189884)
Please help.
Regards,
Zulqarnain

Hi Zulqarnain,
Error OSD-04001 is a Windows-specific Oracle message. It means that the logical block size is not a multiple of 512 bytes, or that it is too large.
So what can you do? You should try to change the value of DB_BLOCK_SIZE in the initialization parameter file.
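For reference, a minimal SQL*Plus sketch for checking the value before mounting (the pfile path below is hypothetical - adjust it to your XE installation):
SQL> startup nomount
SQL> show parameter db_block_size
SQL> shutdown immediate
REM edit DB_BLOCK_SIZE in the pfile (hypothetical path), then:
SQL> startup pfile='D:\oracle\XE10g\initXE.ora'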
Regards

Similar Messages

  • DB Cloning: file size is not a multiple of logical block size

    Dear All,
    I am trying to create a database on Windows XP from the database files of a database running on Linux.
    When I try to create the control file, I get the following errors:
    ORA-01503: CREATE CONTROLFILE failed
    ORA-01565: error in identifying file
    'D:\oracle\orcl\oradata\orcl\system01.dbf'
    ORA-27046: file size is not a multiple of logical block size
    OSD-04012: file size mismatch (OS 367009792)
    Please tell me the workarounds.
    Thanks
    Sathis.

    Hi ,
    I created the database service with oradim. Now I am trying to create the control file after editing the control file script with the locations of the Windows datafiles (copied from Linux).
    Thanks,
    Sathis.

  • ORA-27046: file size is not a multiple of logical block size

    Hi All,
    Getting the below error while creating the control file after a database restore. Permissions and ownership of the CONTROL.SQL file are 777 and ora<sid>:dba.
    ERROR -->
    SQL> !pwd
    /oracle/SID/sapreorg
    SQL> @CONTROL.SQL
    ORACLE instance started.
    Total System Global Area 3539992576 bytes
    Fixed Size                  2088096 bytes
    Variable Size            1778385760 bytes
    Database Buffers         1744830464 bytes
    Redo Buffers               14688256 bytes
    CREATE CONTROLFILE SET DATABASE "SID" RESETLOGS  ARCHIVELOG
    ERROR at line 1:
    ORA-01503: CREATE CONTROLFILE failed
    ORA-01565: error in identifying file
    '/oracle/SID/sapdata5/p11_19/p11.data19.dbf'
    ORA-27046: file size is not a multiple of logical block size
    Additional information: 1
    Additional information: 1895833576
    Additional information: 8192
    Checked the target system's init<SID>.ora and found the parameter db_block_size is 8192. Also checked the source system's init<SID>.ora and found db_block_size is also 8192.
    /oracle/SID/102_64/dbs$ grep -i block initSID.ora
    Kindly look into the issue.
    Regards,
    Soumya

    Please check the following things:
    1. SPFILE corruption:
    Start the DB in nomount using the pfile (i.e. init<sid>.ora), create the spfile from the pfile, and restart the instance in nomount state (see the sketch below).
    Then create the control file from the script.
    2. Check the ulimit of the target server; the filesize parameter for ulimit should be unlimited.
    3. Has the db_block_size parameter been changed in the init file by any chance?
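    For step 1, the sequence would look roughly like this (a sketch only; the pfile path follows the usual init<SID>.ora convention in the dbs directory shown above):
    SQL> startup nomount pfile='/oracle/SID/102_64/dbs/initSID.ora'
    SQL> create spfile from pfile;
    SQL> shutdown immediate
    SQL> startup nomount
    SQL> @CONTROL.SQL
    And for step 2, the current limits can be checked from the shell with:
    $ ulimit -a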
    Regards
    Kausik

  • Force logical block size for HDD pulled from external enclosure

    So I pulled a 3 TB Hitachi drive from an external enclosure, and the USB-to-SATA board in the enclosure forced a logical sector size of 4K for the drive. But connecting directly to the drive exposes a logical sector size of 512 bytes, with the result that I am unable to mount the partition. So what I am asking is: is there a way to force the logical sector size for a drive to be different from what the drive exposes, or am I going to have to move everything off the drive and reformat it?
    The enclosure in question:
    http://www.newegg.com/Product/Product.a … 6822145510
    Direct:
    [73828.076447] sd 0:0:1:2: [sdl] Synchronizing SCSI cache
    [73838.912744] scsi 0:0:1:2: Direct-Access Hitachi HDS723030ALA640 R001 PQ: 0 ANSI: 5
    [73838.912889] sd 0:0:1:2: [sdo] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
    [73838.912892] sd 0:0:1:2: Attached scsi generic sg12 type 0
    [73838.912995] sd 0:0:1:2: [sdo] Write Protect is off
    [73838.912999] sd 0:0:1:2: [sdo] Mode Sense: cb 00 00 08
    [73838.913042] sd 0:0:1:2: [sdo] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [73838.949343] sdo: sdo1
    [73838.949614] sd 0:0:1:2: [sdo] Attached SCSI disk
    USB:
    Mar 5 18:52:35 localhost kernel: [ 4.711438] sd 8:0:0:0: [sdl] 732566016 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    Mar 5 18:52:35 localhost kernel: [ 4.712191] sd 8:0:0:0: [sdl] Write Protect is off
    Mar 5 18:52:35 localhost kernel: [ 4.712193] sd 8:0:0:0: [sdl] Mode Sense: 4b 00 10 08
    Mar 5 18:52:35 localhost kernel: [ 4.712938] sd 8:0:0:0: [sdl] No Caching mode page present
    Mar 5 18:52:35 localhost kernel: [ 4.713978] sd 8:0:0:0: [sdl] Assuming drive cache: write through
    Mar 5 18:52:35 localhost kernel: [ 4.714720] sd 8:0:0:0: [sdl] 732566016 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    Mar 5 18:52:35 localhost kernel: [ 4.716186] sd 8:0:0:0: [sdl] No Caching mode page present
    Mar 5 18:52:35 localhost kernel: [ 4.717215] sd 8:0:0:0: [sdl] Assuming drive cache: write through
    Mar 5 18:52:35 localhost kernel: [ 4.735573] sdl: sdl1
    Mar 5 18:52:35 localhost kernel: [ 4.736557] sd 8:0:0:0: [sdl] 732566016 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    Mar 5 18:52:35 localhost kernel: [ 4.738058] sd 8:0:0:0: [sdl] No Caching mode page present
    Mar 5 18:52:35 localhost kernel: [ 4.739085] sd 8:0:0:0: [sdl] Assuming drive cache: write through
    Mar 5 18:52:35 localhost kernel: [ 4.740238] sd 8:0:0:0: [sdl] Attached SCSI disk

    Hi ebrian,
    You can't perform a migration from Linux to Windows in that manner with 9i. The plan can be changed to migrate from 9.2.0.4 to 10g and then move from 32-bit Linux to 32-bit Windows.
    It should not matter; the end result for the OP is the same: 10g on 32-bit Windows.
    It just depends on what technology the OP wants to use (export/import or RMAN convert).
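    For the RMAN route, the conversion is typically run on the destination platform along these lines (a sketch only; the paths are examples, and the platform string must match a value from v$transportable_platform):
    RMAN> CONVERT DATAFILE '/u01/oradata/orcl/system01.dbf'
          FROM PLATFORM 'Linux IA (32-bit)'
          FORMAT 'D:\oracle\oradata\orcl\system01.dbf';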
    Regards,
    Tycho

  • Is my OS block size 4096 or 512 bytes?

    Hi!
    I am working on RHEL 4.5 ES edition. I have an installation of an Oracle 10g database whose files are spread over two separate partitions, /u01 and /u02, that show a block size of 4096 using dumpe2fs.
    dumpe2fs /dev/sdb1 | grep 'size'
    dumpe2fs /dev/sda7 | grep 'size'
    Block size: 4096
    Fragment size: 4096
    Inode size: 128
    Using a different method of deriving the OS block size, by querying v$archived_log, I get an output of 512.
    Is the v$archived_log method of finding the OS block size incorrect?
    Thanks and Regards,
    Aniruddha

    What does
    /sbin/tune2fs -l /dev/xxx
    return? 4096 only? If yes, then your block size is 4096.
    From http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_1016.htm#sthref3455
    The online log logical block size is a platform-specific value that is not adjustable by the user.
    So maybe it is fixed at 512 bytes.
    Others: Please comment.
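    For reference, the 512 figure most likely comes from the BLOCK_SIZE column of v$archived_log, which reports the redo log block size rather than the filesystem block size; a sketch of the query:
    SQL> select distinct block_size from v$archived_log;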

  • Mac Pro RAID block size recommendations for working with audio in Logic Pro

    I have recently ordered a Mac Pro and plan to do a RAID configuration across 3 HDDs.
    The RAID type I am going to use is RAID 0 (striped).
    The computer is going to be used primarily for audio post production, working with 20+ 24-bit audio files at any one time within a Logic project.
    I want to know the best block size to use when configuring the RAID.
    I understand that a higher block size is best for working with large files, but do I need to do this in my case or will the default 32k block size be enough?
    Thanks in advance

    Use 64k. Things like databases like having 32k blocks because of all the small files. Audio files are pretty small even at 24-bit 192 kHz. Go to 128k if all you are doing is streaming and no samples. But 20+ 24-bit files is really not too large anyway, considering most modern HDDs can stream 100 MB/s off one spindle. You'll probably be fine regardless of the block size you choose. But most audio pros choose 64k.

  • Buffer I/O error on device hda1, logical block 81934

    I am getting the following error message when I session into the CUE of the UC520. All advice appreciated. I read the similar post; however, I'm not given any prompts to make a selection.
    Buffer I/O error on device hda1, logical block 81934
    Processing manifests . Error processing file exceptions.IOError [Errno 5] Input/output error
    . . . . . . Error processing file zlib.error Error -3 while decompressing: invalid distances set
    . . . . . . . . Error processing file zlib.error Error -3 while decompressing: invalid distances set
    . . complete
    ==> Management interface is eth0
    ==> Management interface is eth0
    malloc: builtins/evalfile.c:138: assertion botched
    free: start and end chunk sizes differ
    Stopping myself.../etc/rc.d/rc.aesop: line 478:  1514 Aborted                 /bin/runrecovery.sh
    Serial Number:
    INIT: Entering runlevel: 2
    ********** rc.post_install ****************
    INIT: Switching to runlevel: 4
    INIT: Sending processes the TERM signal
    STARTED: cli_server.sh
    STARTED: ntp_startup.sh
    STARTED: LDAP_startup.sh
    STARTED: SQL_startup.sh
    STARTED: dwnldr_startup.sh
    STARTED: HTTP_startup.sh
    STARTED: probe
    STARTED: superthread_startup.sh
    STARTED: ${ROOT}/usr/bin/products/herbie/herbie_startup.sh
    STARTED: /usr/wfavvid/run-wfengine.sh
    STARTED: /usr/bin/launch_ums.sh
    Waiting 5 ...Buffer I/O error on device hda1, logical block 70429
    Waiting 6 ...hda: no DRQ after issuing MULTWRITE
    hda: drive not ready for command
    Buffer I/O error on device hda1, logical block 2926
    Buffer I/O error on device hda1, logical block 2927
    Buffer I/O error on device hda1, logical block 2928
    Buffer I/O error on device hda1, logical block 2929
    Buffer I/O error on device hda1, logical block 2930
    Buffer I/O error on device hda1, logical block 2931
    Buffer I/O error on device hda1, logical block 2932
    Buffer I/O error on device hda1, logical block 2933
    Buffer I/O error on device hda1, logical block 2934
    REISERFS: abort (device hda1): Journal write error in flush_commit_list
    REISERFS: Aborting journal for filesystem on hda1
    Jun 17 16:36:11 localhost kernel: REISERFS: abort (device hda1): Journal write error in flush_commit_list
    Jun 17 16:36:11 localhost kernel: REISERFS: Aborting journal for filesystem on hda1
    Waiting 8 ...MONITOR EXITING...
    SAVE TRACE BUFFER
    Jun 17 16:36:13 localhost err_handler:   CRASH appsServices startup startup.sh System has crashed. The trace buffer information is stored in the file "atrace_save.log". You can upload the file using "copy log" command
    /bin/startup.sh: line 262: /usr/bin/atr_buf_save: Input/output error
    Waiting 9 ...Buffer I/O error on device hda1, logical block 172794
    INIT: Sending processes the TERM signal
    INIT: cannot execute "/etc/rc.d/rc.reboot"
    INIT: no more processes left in this runlevel

    The flash card for CUE might be corrupt. Try to reinstall CUE and restore from backup to see if that fixes it. If it doesn't, try a different flash card.
    Cole

  • ORA-00374 - Block Size issue

    I've already searched and researched this quite a bit. I am not using 9i like the other post about this issue from many years ago. Before you ask "Why a db_block_size of 32k?", this is for a test case. Simple as that.
    My system:
    Quad DC Opteron, 32 GB RAM, 4x 15k SAS disks in hardware RAID10.
    Windows Server 2008 R2 Standard 64bit (I much prefer Linux, but this test requires Win2008)
    Oracle 11g R2 64-bit EE
    The disk with the O/S and ORACLE_HOME is formatted with the default 4k allocation units.
    The allocated database file storage IS formatted with 32k allocation units and is on a SAN.
    I know it's 32k because when I presented the LUN to the server, I formatted it with 32k allocation units. See the output below, paying attention to the Bytes Per Cluster line:
    C:\Users\********************>fsutil fsinfo ntfsinfo o:
    NTFS Volume Serial Number : 0x60d245e2d245bcd2
    Version : 3.1
    Number Sectors : 0x00000000397fc7ff
    Total Clusters : 0x0000000000e5ff1f
    Free Clusters : 0x0000000000e5f424
    Total Reserved : 0x0000000000000000
    Bytes Per Sector : 512
    Bytes Per Cluster :               32768
    Bytes Per FileRecord Segment : 1024
    Clusters Per FileRecord Segment : 0
    Mft Valid Data Length : 0x0000000000040000
    Mft Start Lcn : 0x0000000000018000
    Mft2 Start Lcn : 0x0000000000000001
    Mft Zone Start : 0x0000000000018000
    Mft Zone End : 0x0000000000019920
    RM Identifier: 35506ECB-7F9E-11DF-99F3-001EC92FDE3F
    So, my db file storage is formatted with a 32k allocation size.
    My issue is this:
    Oracle shows me the 32k block size when running DBCA using the Custom template. I choose it and the other required options are configured, and when it starts building the DB, I get this:
    ORA-00374: parameter db_block_size = 32768 invalid; must be a multiple of 512 in the range [2048..16384].
    Other responses I've seen to this say "Windows doesn't support an allocation size above 8k or 16k", which is utterly absurd since I run SQL 2008 on a few machines and it DOES support up to a 64k allocation size, which is what I run. I know this for a FACT.
    Windows DOES support up to a 64k allocation size. Does anyone know why Oracle is giving me a hard time about it?
    I saw Metalink note 794842.1, but I'd like to know the reasoning/logic for this limitation?
    Edited by: user6517483 on Jun 24, 2010 9:21 PM

    user6517483 wrote:
    I saw Metalink note 794842.1, but I'd like to know the reasoning/logic for this limitation?
    A WAG: Oracle is written to run on a wide variety of operating systems. As operating systems differ, one typically designs something that is equivalent in functionality to a Windows HAL - this provides an abstraction layer between the kernel services/calls needed by the s/w and the actual implementation of such by the kernel itself.
    So despite the kernel supporting feature X, it could be different from similar features supported on other kernels and difficult to implement via this s/w HAL-like interface. The Windows kernel has a lot of differences from the Linux kernel, for example... and somehow Oracle needs to make the same core db s/w run on both. Not unrealistic to expect that some kernel features will be supported better than others, as there is a common denominator in terms of design and implementation in the core s/w.
    As this is the case with block sizes... not that critical IMO. I have played with different block sizes in a 20+TB storage system and Oracle RAC (10.2) on Linux (part of testing the combo of storage system and cluster and technologies used). Larger block sizes made zero difference to raw I/O performance. The impact was instead more a logical one. Fewer db blocks can be cached as these are larger... more data can be written into a data block. And as numerous experienced Oracle professionals have commented, Oracle decided that the default 8KB size is a best fit at this layer.
    So extensive and very accurate testing needs to be done IMO to determine whether a larger block size is justified... and the effort to do that may just outweigh the little gains achieved by finding the "perfect" block size. Why not focus all that effort instead on correctly using Oracle? Application design? Data modeling? Development and coding? These are the factors that play the most dominant roles at the end of the day and determine performance.

  • Finding appropriate block size?

    Hi All,
    I believe this might be a basic question: how do you find the appropriate block size when building a database for a specific application?
    I have always seen the default 8K block size used everywhere (around 300-350 databases I have seen till now)... but why, and how is this block size estimated before creating the production database?
    Also, in the same way, how are memory settings finalized before creating the database?
    -Yasser

    Yasser,
    I have been very fortunate to buy and read several very high quality Oracle books which not only correctly state the way something works, but also manage to provide a logical, reasoned explanation for why things happen as they do, when it is appropriate, and when it is not. While not the first book I read on the topic of Oracle, the book “Oracle Performance Tuning 101” by Gaja Vaidyanatha marked the start of logical reasoning in performance tuning exercises for me. A couple years later I learned that Gaja was a member of the Oaktable Network.
    I read the book “Expert Oracle One on One” by Tom Kyte and was impressed with the test cases presented in the book which help readers understand the logic of why Oracle behaves as it does, and I also enjoyed the performance tuning stories in the book. A couple years later I found Tom Kyte’s “Expert Oracle Database Architecture” book at a book store and bought it without a second thought; some repetition from his previous book, fewer performance tuning stories, but a lot of great, logically reasoned information. A couple years later I learned that Tom was a member of the Oaktable Network.
    I read the book “Optimizing Oracle Performance” by Cary Millsap, a book that once again marked a distinct turning point in the method I used for performance tuning - the logic made all of the book easy to understand. A couple years later I learned that Cary was a member of the Oaktable Network. I read the book “Cost-Based Oracle Fundamentals” by Jonathan Lewis, a book whose title made it seem too much of a beginner’s book until I read the review by Tom Kyte. Needless to say, the book also marked a turning point in the way I approach problem solving through logical reasoning, asking and answering the question - “What is Oracle thinking”. Jonathan is a member of the Oaktable Network; a pattern is starting to develop here.
    At this point I started looking for anything written in book or blog form by members of the Oaktable Network. I found Richard Foote’s blog, which somehow managed to make Oracle indexes interesting for me - probably through the use of logic and test cases which allowed me to reproduce what I was reading about. I found Jonathan Lewis’ blog, which covers so many interesting topics about Oracle, all of which leverage logical approaches to help understanding. I also found the blogs of Kevin Closson, Greg Rahn, Tanel Poder, and a number of other members of the Oaktable Network. The draw to the performance tuning side of Oracle administration was primarily a search for the elusive condition known as Compulsive Tuning Disorder, which was coined in the book written by Gaja. There were, of course, many other books which contributed to my knowledge - I reviewed at least 8 of the Oracle related books on the amazon.com website.
    Motivation… it is interesting to read what people write about Oracle. Sometimes what is written directly contradicts what one knows about Oracle. In such cases, it may be a fun exercise to determine if what was written is correct (and why it is logically correct), or why it is wrong (and why it is logically incorrect). Take, for example, the “Top 5 Timed Events” seen in this book (no, I have not read this book, I bumped into it a couple times when performing Google searches):
    http://books.google.com/books?id=bxHDtttb0ZAC&pg=PA17#v=onepage&q=&f=false
    The text of the book states that the “Top 5 Timed Events” shown indicates a CPU Constrained Database (side note: if a database is a series of files stored physically on a disk, can it ever be CPU constrained?). From the “Top 5 Timed Events”, we see that there were 4,851 waits on the CPU for a total time of 4,042 seconds, and this represented 55.76% of the wait time. Someone reading the book might be left thinking one of:
    * “That obviously means that the CPU is overwhelmed!”
    * “Wow 4,851 wait events on the CPU, that sure is a lot!”
    * “Wow wait events on the CPU, I didn’t know that was possible?”
    * “Hey, something is wrong with this ‘Top 5 Timed Events’ output as Oracle never reports the number of waits on CPU.”
    * “Something is really wrong with this ‘Top 5 Timed Events’ output as we do not know the number of CPUs in the server (what if there are 32 CPUs), the time range of the statistics, and why the average time for a single block read is more than a second!”
    A Google search then might take place to determine if anyone else reports the number of waits for the CPU in an Oracle instance:
    http://www.google.com/search?num=100&q=Event+Waits+Time+CPU+time+4%2C851+4%2C042
    So, it must be correct… or is it? What does the documentation show?
    Another page from the same book:
    http://books.google.com/books?id=bxHDtttb0ZAC&pg=PA28#v=onepage&q=&f=false
    Shows the command:
    alter system set optimizer_index_cost_adj=20 scope = pfile;
    Someone reading the book might be left thinking one of:
    * That looks like an easy to implement solution.
    * I thought that it was only possible to alter parameters in the spfile with an ALTER SYSTEM command, neat.
    * That command will never execute, and should return an “ORA-00922: missing or invalid option” error.
    * Why would the author suggest a value of 20 for OPTIMIZER_INDEX_COST_ADJ and not 1, 5, 10, 12, 50, or 100? Are there any side effects? Why isn’t the author recommending the use of system (CPU) statistics to correct the cost of full table scans?
    A Google search finds this book (I have not read this book either, just bumped into it during a search) by a different author which also shows that it is possible to alter the pfile through an ALTER SYSTEM command:
    http://books.google.com/books?id=ufz5-hXw2_UC&pg=PA158#v=onepage&q=&f=false
    So, it must be correct… or is it? What does the documentation show?
    Regarding the question of updating my knowledge, I read a lot of books on a wide range of subjects including Oracle, programming, Windows and Linux administration, ERP systems, Microsoft Exchange, telephone systems, etc. I also try to follow Oracle blogs and answer questions in this and other forums (there are a lot of very smart people out there contributing to forums, and I feel fortunate to learn from those people). As long as the book or blog offers logical reasoning, it is fairly easy to tie new material into one’s pre-existing knowledge.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Buffer I/O error on device sdd1, logical block, HDD failure? [SOLVED]

    Hello!
    I'm a bit puzzled here, to be honest. Granted, I'm not using Linux as much as I used to (not since Windows 7). I have Arch Linux running on my HTPC. Never had any issues before that were this severe, unless I upgraded and forgot to read the news section. Booted the HTPC today to be greeted by "Buffer I/O error on device sdd1, logical block" with a massive wall of text, and a few seconds later "welcome to emergency mode."
    *This is NOT the hdd where the Linux kernel resides. What logical purpose would it serve for the kernel/userspace to abort everything just because fsck fails or something? If this was indeed my Linux partition I would fully understand.
    Anyways, I used Parted Magic and ran fsck and SMART. Sure enough, fsck warned me about a bad/missing superblock. Restored the superblock using e2fsck. I had over 10,000 "incorrect size" chunks. Ran 2-3 SMART tests after that. fsck says okay, SMART gives a 100% status report with no errors.
    Oh yeah, I have turned off fsck completely in my fstab; thinking about at least turning it on for my bigger hdds.
    Questions:
    *Is SMART reliable? If it says it's alright, does that mean I'm safe? Would physically broken sectors show up in SMART?
    *I know SMART warns the user in Windows 7 if hdd failure is imminent. Is this possible within Linux as well? Since I'm NOT using a GUI, is it possible to send this through a terminal/email?
    *Sometimes the HTPC has been forcefully shut down (power outage); could this be one of the causes of the I/O error?
    As always, thank you for your support.
    Last edited by greenfish (2013-10-23 13:23:21)

    graysky wrote:Any reallocated sectors in smartmontools?  If you run 'e2fsck -fv /dev/sdd1' does it complete wo/ errors?  Probably best to repeat for all linux partitions on that disk.
    Sorry for the late reply guys. Been busy with my other hdd that decided to screw with me. e2fsck first complained about bad sectors and wrong sizes. Now it says all clean. I've decided to remove this HDD from the server and mark it "damaged".
    Thank you again for your help
    alphaniner wrote:
    greenfish wrote:*Is SMART reliable? If it says it's alright, does that mean i'm safe? Would physical broken sectors turn up by SMART?
    *I know SMART warns the user in windows 7 if hdd failure is imment. Is this possible within linux as well? Since i'm NOT using a GUI, is this possible to send through a terminal/email?
    1) Don't trust the 'SMART overall-health self-assessment test result', run the diagnostics (short, long, conveyance, offline). The short and conveyance tests are quick so start with them. If they both pass run the long test. The offline test is supposed to update SMART attributes, but it generally takes longer than the long test, so save it for last if at all. Usually when I see bad drives the short or long tests pick them up.
    2) Look into smartd.service.
    greenfish wrote:What logical purpose would it serve for the kernel/userspace to abort everything just because fsck fails or something?
    Systemd craps itself if an fs configured to mount during boot can't be mounted, even if the fs isn't necessary for the system to boot. Not sure about how it handles fsck failures. This 'feature' can be disabled by putting nofail in the fstab options. I add it to every non-essential automounting fs.
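    For reference, the self-tests mentioned above can be run with smartmontools from a terminal; a minimal sketch (the device name is an example):
    $ sudo smartctl -t short /dev/sdd       # run the short self-test
    $ sudo smartctl -t conveyance /dev/sdd
    $ sudo smartctl -t long /dev/sdd
    $ sudo smartctl -a /dev/sdd             # view attributes and the self-test log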
    Thank you for the useful information. I will save this post for future references.
    Will definitely look into smartd.service, especially when I have so much data running 24/7.
    Will also update my fstab with "nofail" like you suggested.
    Thank You!

  • 8K Database Block Size in 11.5.2

    The Release Notes for 11.5.2 state that the minimum database block size is 8K. However, this was not a requirement for 11.5.1; instead, patch 1301168 was created for customers with a database block size less than 8K.
    In 11.5.2, is it MANDATORY that the database block size be 8K or greater, or can patch 1301168 be applied in lieu of migrating to a larger block size?

    4K to 8K:
    Use 7.3.4 Export through a Unix pipe to create the export (see the sketch below). Files may have to be split.
    Rebuild the 8.1.6 database. Good opportunity to consolidate files.
    Import through a Unix pipe with the 8.1.6 import utility.
    Set data files to autoextend, unlimited, including SYSTEM. Do not autoextend RBS, TEMP and log.
    Use COMMIT and IGNORE on import.
    Use COMPRESS on export.
    Use very large buffers on both export and import.
    Ignore messages on the SYS and SYSTEM tablespace and other definitions.
    If you are using 10.7 with 8.1.6, use the interop patch, and other module patches (see release notes). Run adprepdb.sql after import.
    Compile all invalid modules till they match the 10.7 invalid modules. Fix invalid modules like the ones documented in the release notes.
    If you are doing it for just 11.5.2, just do export/import in 8.1.6 with large file support set on your file system and you should be fine. AIX supports large file systems.
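    The Unix pipe technique mentioned above looks roughly like this (a sketch; file names and credentials are examples):
    $ mknod /tmp/exp_pipe p
    $ gzip < /tmp/exp_pipe > full_export.dmp.gz &
    $ exp system/manager full=y buffer=10000000 file=/tmp/exp_pipe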

  • How to change existing database block size in all tablespaces

    Hi,
    Need help changing the block size of my existing database, which has an 8KB block size.
    I have read that we can only set the block size during database creation, but I want to change it after installation,
    because for some reason I don't want to change the database installation script.
    Can anyone list the steps to change the database block size for all existing tablespaces (except SYSTEM and TEMP)?
    I want to change it to 32KB.
    -Rushang Kansara

    > We are facing more and more physical reads, I thought by using 32K block size
    we would resolve that..
    A physical read reported by Oracle may not be one - it could well be a logical read from the o/s file system cache and not a true physical read. With raw devices, for example, a physical I/O reported by Oracle is indeed one, as there is no o/s cache for raw devices. So one needs to be careful how one interprets numbers like physical reads.
    Lots of physical reads may not necessarily be a bad thing. In contrast, a high percentage of "good/fast" logical reads (i.e. a high % buffer cache hit ratio) may indicate a serious problem with application design - as the application is churning through the exact same data again and again and again. Applications should typically only make a single pass through a data set.
    The best way to deal with physical reads is to make them fewer. Simple example: a database deals with a lot of inserts. Some bright developer decided to over-index a table. Numerous indexes for the same columns exist in different physical column orders.
    Oracle now spends a lot of time dealing (reading) with these indexes when inserting (or updating) a row. A single write I/O may incur a 100 read I/Os as a result of these indexes needing to be maintained.
    The bottom line is that "more and more physical I/O" is merely a symptom of a problem. Trying to speed these up could well be a wasted exercise. Besides, the optimal approach to "lots of I/O" is to tune it to do less I/O.
    I/O is the most expensive operation for an RDBMS. It is very difficult to make this expense less (i.e. make I/Os faster). It is more effective to make sure that you use this expensive resource in an optimal way.
    Simple example: a single very large table with 4 indexes is not a very efficient design I/O wise. A single very large partitioned table with local indexes can reduce I/O on that table by up to 80% in my experience.
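    As a starting point for that kind of analysis, the instance-wide counters can be compared with a query along these lines (a sketch):
    SQL> select name, value from v$sysstat
      2  where name in ('physical reads', 'session logical reads');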

  • Database Block Size Smaller Than Operating System Block Size

    Finding that your database block size should be in multiples of your operating system block size is easy...
    But what if the reverse of the image below were the case?
    What happens when you store an Oracle Data Block that is 2 KB in an 8 KB Operating System Block?  Does it waste 6 KB or are there 4 Oracle Data Blocks stored in 1 Operating System Block?
    Is it different if you use ASM?
    I'd like to introduce a 2 KB block size into a RAC Exadata environment for a small set of highly transactional tables and indexes to reduce contention on blocks being requested in the Global Cache.  I've witnessed horrendous wait times for a plethora of sessions when a block was highly active.
    One index in particular has a column that indicates the "state" of the record, it is a very dense index.  Records will flood in, and then multiple processes will poll, do work, and change the state of the record.  The record eventually reaches a final state and is never updated again.
    I know that I can fill up the block with fluff by adjusting the percent free, percent used, and initrans, but that seems like a lazy hack to me and I'd like to do it right if possible.
    Any thoughts or wisdom is much appreciated.
    "The database requests data in multiples of data blocks, not operating system blocks."
    "In contrast, an Oracle block is a logical storage structure whose size and structure are not known to the operating system."
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#BABDCGIB

    You could have answered your own questions if you had just read the top of the page in that doc you posted the link for
    >
    At the finest level of granularity, Oracle Database stores data in data blocks. One logical data block corresponds to a specific number of bytes of physical disk space, for example, 2 KB. Data blocks are the smallest units of storage that Oracle Database can use or allocate.
    An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In Figure 12-2, the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
    >
    There isn't any 'wasted' space using 2KB Oracle blocks for 8KB OS blocks. As the doc says Oracle allocates 'extents' and an extent, depending on your space management, is going to be a substantial multiple of blocks. You might typically have extents that are multiples of 64 KB and that would be 8 OS blocks for your example. Yes - it is possible that the very first OS block and the very last block might not map exactly to the Oracle blocks  but for a table of any size that is unlikely to be much of an issue.
    The single-block reads used for some index accesses could affect performance since the read of a 2K Oracle block will result in an 8K OS block being read but that 8K block is also likely to be part of the same index.
    The thing is though that an index entry that is 'hot' is going to be hot whether the block it is in is 2K or 8K so any 'contention' for that entry will exist regardless of the block size.
    You will need to conduct tests using a 2K (or other) block and cache size for your index tablespaces and see which gives you the best results for your access patterns.
    You should use the standard block size for ALL tablespaces unless you can substantiate the need for a non-standard size. Indexes and LOB storage are indeed the primary use cases for uses non-standard block sizes for one or more tablespaces. Don't forget that you need to allocate the appropriate buffer cache.
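    That last point - a non-standard block size needs its own buffer cache - looks like this in practice (a sketch; the cache size, tablespace name and datafile clause are examples):
    SQL> alter system set db_2k_cache_size = 64m;
    SQL> create tablespace hot_idx_2k
      2  datafile '+DATA' size 1g
      3  blocksize 2k;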

  • Invalid free block count

    Hi,
    Earlier today I wanted to use the Boot Camp utility to install WinXP on my MBP. I decided to set the partition to about 60 GB of space for Windows, but the utility crashed for some reason. (The screen went slightly dim grey and the message "please restart your computer now" appeared.)
    So I restarted the computer, but now the 60 GB I allocated for the new partition has been interpreted by the OS as used space on the Mac HD. I tried running Disk Utility to verify what was going on, and I get the message "Disk repair failed, invalid free block count".
    How can I get my free space back? Do I really need to use the Leopard DVD to run Disk Utility again? I hope not, because I left the DVD at my other house, which is pretty far from where I am right now. Any help would be appreciated! Thanks.

    I don't know that Disk Utility can help since it will not repair the corrupted Boot Camp partition. You can run Boot Camp Assistant and see if it will remove the partition. If it cannot, then the next steps can be a gamble so you should backup your OS X partition in case you may need to repartition the drive.
    If you cannot remove the partition with Boot Camp, then you can try this:
    Open Disk Utility, select the drive entry (mfgr.'s ID and drive size) from the left side bar, then click on the Partition tab in the DU main window. You should now see the partition sizing graphic. Hopefully you should see two partitions. The top partition will be your OS X system volume. The bottom partition should be the one Boot Camp tried to create.
    Click in the bottom partition to select it (you will see it will be outlined in blue) and click on the "-" button in the lower left corner. The partition should then be removed. Click on the Apply button and wait for the process to complete. If the process fails then you will have to start from scratch and repartition the entire drive. Your OS X installation will be lost. This is why you need to make the backup in advance.
    If it does work then select the sizing gadget in the lower right corner of the remaining partition and drag it down to the bottom of the sizing window, then click on the Apply button again. Wait for the repartitioning to complete.
    Just to be on the safe side I would repair the drive:
    Extended Hard Drive Preparation
    1. Open Disk Utility in your Utilities folder. If you need to reformat your startup volume, then you must boot from your OS X Installer Disc. After the installer loads select your language and click on the Continue button. When the menu bar appears select Disk Utility from the Installer menu (Utilities menu for Tiger or Leopard.)
    2. After DU loads select your hard drive (this is the entry with the mfgr.'s ID and size) from the left side list. Note the SMART status of the drive in DU's status area. If it does not say "Verified" then the drive is failing or has failed and will need replacing. SMART info will not be reported on external drives. Otherwise, click on the Partition tab in the DU main window.
    3. Click on the Options button, set the partition scheme to GUID (only required for Intel Macs) then click on the OK button. Set the number of partitions from the dropdown menu (use 1 partition unless you wish to make more.) Set the format type to Mac OS Extended (Journaled.) Click on the Partition button and wait until the volume(s) mount on the Desktop.
    4. Select the volume you just created (this is the sub-entry under the drive entry) from the left side list. Click on the Erase tab in the DU main window.
    5. Set the format type to Mac OS Extended (Journaled.) Click on the Options button, check the button for Zero Data and click on OK to return to the Erase window.
    6. Click on the Erase button. The format process can take up to several hours depending upon the drive size.
    Steps 4-6 are optional but should be used on a drive that has never been formatted before, if the format type is not Mac OS Extended, if the partition scheme has been changed, or if a different operating system (not OS X) has been installed on the drive.

  • ORA-00349: failure obtaining block size for '+Z'  in Oracle XE

    Hello,
    I am attempting to move the online redo log files to a new flash recovery area location created on network drive "Z" (Oracle Database 10g Express Edition Release 10.2.0.1.0).
    When I run @?/sqlplus/admin/movelogs; in SQL*Plus as a local sysdba, I get the following errors:
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Please let me know how to go about resolving this issue.
    Thank you.
    See below for detail:
    Connected.
    SQL> @?/sqlplus/admin/movelogs;
    SQL> Rem
    SQL> Rem $Header: movelogs.sql 19-jan-2006.00:23:11 banand Exp $
    SQL> Rem
    SQL> Rem movelogs.sql
    SQL> Rem
    SQL> Rem Copyright (c) 2006, Oracle. All rights reserved.
    SQL> Rem
    SQL> Rem NAME
    SQL> Rem movelogs.sql - move online logs to new Flash Recovery Area
    SQL> Rem
    SQL> Rem DESCRIPTION
    SQL> Rem This script can be used to move online logs from old online log
    SQL> Rem location to Flash Recovery Area. It assumes that the database
    SQL> Rem instance is started with new Flash Recovery Area location.
    SQL> Rem
    SQL> Rem NOTES
    SQL> Rem For use to rename online logs after moving Flash Recovery Area.
    SQL> Rem The script can be executed using following command
    SQL> Rem sqlplus '/ as sysdba' @movelogs.sql
    SQL> Rem
    SQL> Rem MODIFIED (MM/DD/YY)
    SQL> Rem banand 01/19/06 - Created
    SQL> Rem
    SQL>
    SQL> SET ECHO ON
    SQL> SET FEEDBACK 1
    SQL> SET NUMWIDTH 10
    SQL> SET LINESIZE 80
    SQL> SET TRIMSPOOL ON
    SQL> SET TAB OFF
    SQL> SET PAGESIZE 100
    SQL> declare
    2 cursor rlc is
    3 select group# grp, thread# thr, bytes/1024 bytes_k
    4 from v$log
    5 order by 1;
    6 stmt varchar2(2048);
    7 swtstmt varchar2(1024) := 'alter system switch logfile';
    8 ckpstmt varchar2(1024) := 'alter system checkpoint global';
    9 begin
    10 for rlcRec in rlc loop
    11 stmt := 'alter database add logfile thread ' ||
    12 rlcRec.thr || ' size ' ||
    13 rlcRec.bytes_k || 'K';
    14 execute immediate stmt;
    15 begin
    16 stmt := 'alter database drop logfile group ' || rlcRec.grp;
    17 execute immediate stmt;
    18 exception
    19 when others then
    20 execute immediate swtstmt;
    21 execute immediate ckpstmt;
    22 execute immediate stmt;
    23 end;
    24 execute immediate swtstmt;
    25 end loop;
    26 end;
    27 /
    declare
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Can someone point me in the right direction as to what I may be doing wrong here - Thank you!

    888442 wrote:
    I am trying to drop and recreate ONLINE redo logs on my STANDBY DATABASE (11.1.0.7), but I am getting the below error.
    On primary, we have made the changes, i.e. we added a new logfile with a bigger size and 3 members. When trying to do the same on the standby we are getting this error.
    Our database is in Active DG read-only mode and the Oracle version is 11.1.0.7.
    I have deferred the log apply and cancelled the managed recovery, and DG is in manual mode.
    SQL> alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M;
    alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+DT_DG1'
    First, why are you dropping and recreating online redo log files on the standby?
    On a standby, only standby redo log files will be used. Not sure what you are trying to do.
    Here is an example of how to create online redo log files. Check that the diskgroup is mounted and has sufficient space for the creation.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    sys@ORCL> alter database add logfile group 4 (
      2     'C:\ORACLE\ORADATA\ORCL\redo_g01a.log',
      3     'C:\ORACLE\ORADATA\ORCL\redo_g01b.log',
      4     'C:\ORACLE\ORADATA\ORCL\redo_g01c.log') size 10m;
    Database altered.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01A.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01B.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01C.LOG
    6 rows selected.
    sys@ORCL>
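    To confirm that the diskgroup is mounted and has free space before adding the logs, a query along these lines can be used (a sketch):
    SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;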
    Your profile:
    Handle: 888442
    Status Level: Newbie
    Registered: Sep 29, 2011
    Total Posts: 12
    Total Questions: 8 (7 unresolved)
    Close the threads if answered. Keep the forum clean.
