ORA-00374 - Block Size issue

I've already searched and researched this quite a bit. I am not using 9i, unlike the other post about this issue from many years ago. Before you ask "Why a db_block_size of 32k?": this is for a test case. Simple as that.
My system:
Quad DC Opteron, 32GB RAM, 4x 15k SAS disks in hardware RAID10.
Windows Server 2008 R2 Standard 64bit (I much prefer Linux, but this test requires Win2008)
Oracle 11g R2 64-bit EE
The Disk with the O/S and ORACLE_HOME is formatted with the default 4k size allocation units.
The allocated database file storage IS formatted with 32k sized allocation units and is on a SAN.
I know it's 32k because when I presented the LUN to the server, I formatted it with 32k allocation units. See the output below, paying attention to the Bytes Per Cluster value:
C:\Users\********************>fsutil fsinfo ntfsinfo o:
NTFS Volume Serial Number : 0x60d245e2d245bcd2
Version : 3.1
Number Sectors : 0x00000000397fc7ff
Total Clusters : 0x0000000000e5ff1f
Free Clusters : 0x0000000000e5f424
Total Reserved : 0x0000000000000000
Bytes Per Sector : 512
Bytes Per Cluster : 32768
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x0000000000040000
Mft Start Lcn : 0x0000000000018000
Mft2 Start Lcn : 0x0000000000000001
Mft Zone Start : 0x0000000000018000
Mft Zone End : 0x0000000000019920
RM Identifier: 35506ECB-7F9E-11DF-99F3-001EC92FDE3F
So, my db file storage is formatted with a 32k allocation size.
My issue is this:
Oracle shows me the 32k block size when running DBCA with the Custom template. I choose it, the other required options are configured, and when it starts building the DB, I get this:
ORA-00374: parameter db_block_size = 32768 invalid; must be a multiple of 512 in the range [2048..16384].
Other responses I've seen to this say "Windows doesn't support an allocation size above 8k or 16k", which is utterly absurd since I run SQL 2008 on a few machines and it DOES support up to a 64k allocation size, which is what I run. I know this for a FACT.
Windows DOES support up to a 64k allocation size. Does anyone know why Oracle is giving me a hard time about it?
I saw Metalink note 794842.1, but I'd like to know the reasoning/logic behind this limitation.
Edited by: user6517483 on Jun 24, 2010 9:21 PM

user6517483 wrote:
I saw Metalink note 794842.1, but I'd like to know the reasoning/logic for this limitation?
A WAG... Oracle is written to run on a wide variety of operating systems. As operating systems differ, one typically designs something equivalent in functionality to a Windows HAL - this provides an abstraction layer between the kernel services/calls needed by the software and the actual implementation of those calls by the kernel itself.
So despite the kernel supporting feature X, that feature could differ from similar features on other kernels and be difficult to implement via this HAL-like interface. The Windows kernel has a lot of differences from the Linux kernel, for example, and somehow Oracle needs to make the same core database software run on both. It is not unrealistic to expect that some kernel features will be supported better than others, as there is a common denominator in terms of design and implementation in the core software.
As this is the case with block sizes... not that critical IMO. I have played with different block sizes on a 20+ TB storage system with Oracle RAC (10.2) on Linux (as part of testing the combination of storage system, cluster and technologies used). Larger block sizes made zero difference to raw I/O performance. The impact was instead more a logical one: fewer database blocks can be cached as these are larger, and more data can be written into a data block. And as numerous experienced Oracle professionals have commented, Oracle decided that the default 8KB size is the best fit at this layer.
So extensive and very accurate testing needs to be done, IMO, to determine whether a larger block size is justified... and the effort to do that may just outweigh the little gains achieved by finding the "perfect" block size. Why not focus all that effort instead on correctly using Oracle? Application design? Data modelling? Development and coding? These are the factors that play the most dominant roles at the end of the day in determining performance.
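For anyone hitting the same wall: the error text itself gives the supported range on this port, so a hedged sketch of the pragmatic workaround is simply to build the test database at the top of that range and push the I/O size through multiblock reads instead. This is an assumption based on the ORA-00374 message and MOS note 794842.1, not a statement that 32k will ever work on this platform, and the values below are illustrative only.
# init.ora sketch - stay inside the range reported by ORA-00374
db_block_size=16384                  # 16k is the top of the [2048..16384] range on this port
db_file_multiblock_read_count=64     # 64 x 16k = 1MB multiblock reads (illustrative value)
db_cache_size=2G                     # size the buffer cache explicitly for the larger block (illustrative)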

Similar Messages

  • ORA-27046: file size is not a multiple of logical block size

    Hi All,
    I am getting the error below while creating the control file after a database restore. Permissions and ownership of the CONTROL.SQL file are 777 and ora<sid>:dba.
    ERROR -->
    SQL> !pwd
    /oracle/SID/sapreorg
    SQL> @CONTROL.SQL
    ORACLE instance started.
    Total System Global Area 3539992576 bytes
    Fixed Size                  2088096 bytes
    Variable Size            1778385760 bytes
    Database Buffers         1744830464 bytes
    Redo Buffers               14688256 bytes
    CREATE CONTROLFILE SET DATABASE "SID" RESETLOGS  ARCHIVELOG
    ERROR at line 1:
    ORA-01503: CREATE CONTROLFILE failed
    ORA-01565: error in identifying file
    '/oracle/SID/sapdata5/p11_19/p11.data19.dbf'
    ORA-27046: file size is not a multiple of logical block size
    Additional information: 1
    Additional information: 1895833576
    Additional information: 8192
    I checked init<SID>.ora on the target system and found that db_block_size is 8192. The source system's init<SID>.ora also has db_block_size set to 8192.
    /oracle/SID/102_64/dbs$ grep -i block initSID.ora
    Kindly look into the issue.
    Regards,
    Soumya

    Please check the following things:
    1. SPFILE corruption: start the DB in NOMOUNT using the pfile (i.e. init<sid>.ora), run CREATE SPFILE FROM PFILE, restart the instance in NOMOUNT state, and then create the control file from the script (a SQL*Plus sketch of these steps follows at the end of this reply).
    2. Check the ulimit on the target server; the filesize limit should be unlimited.
    3. Has the db_block_size parameter been changed in the init file by any chance?
    Regards
    Kausik
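    To make step 1 concrete, here is a minimal SQL*Plus sketch; the pfile path is the one visible earlier in this thread and the instance name is a placeholder, so adjust both to your environment:
    SQL> shutdown immediate
    SQL> -- start on the plain-text parameter file so any corrupt spfile is bypassed
    SQL> startup nomount pfile='/oracle/SID/102_64/dbs/initSID.ora'
    SQL> -- rebuild the spfile from the known-good pfile, then restart on it
    SQL> create spfile from pfile='/oracle/SID/102_64/dbs/initSID.ora';
    SQL> shutdown immediate
    SQL> startup nomount
    SQL> -- now run the CREATE CONTROLFILE script
    SQL> @CONTROL.SQL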

  • ORA-00349: failure obtaining block size for '+Z'  in Oracle XE

    Hello,
    I am attempting to move the online redo log files to a new flash recovery area location created on network drive "Z" ( Oracle Database 10g Express Edition Release 10.2.0.1.0).
    When I run @?/sqlplus/admin/movelogs; in SQL*Plus as a local sysdba, I get the following errors:
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Please let me know how to go about resolving this issue.
    Thank you.
    See below for detail:
    Connected.
    SQL> @?/sqlplus/admin/movelogs;
    SQL> Rem
    SQL> Rem $Header: movelogs.sql 19-jan-2006.00:23:11 banand Exp $
    SQL> Rem
    SQL> Rem movelogs.sql
    SQL> Rem
    SQL> Rem Copyright (c) 2006, Oracle. All rights reserved.
    SQL> Rem
    SQL> Rem NAME
    SQL> Rem movelogs.sql - move online logs to new Flash Recovery Area
    SQL> Rem
    SQL> Rem DESCRIPTION
    SQL> Rem This script can be used to move online logs from old online log
    SQL> Rem location to Flash Recovery Area. It assumes that the database
    SQL> Rem instance is started with new Flash Recovery Area location.
    SQL> Rem
    SQL> Rem NOTES
    SQL> Rem For use to rename online logs after moving Flash Recovery Area.
    SQL> Rem The script can be executed using following command
    SQL> Rem sqlplus '/ as sysdba' @movelogs.sql
    SQL> Rem
    SQL> Rem MODIFIED (MM/DD/YY)
    SQL> Rem banand 01/19/06 - Created
    SQL> Rem
    SQL>
    SQL> SET ECHO ON
    SQL> SET FEEDBACK 1
    SQL> SET NUMWIDTH 10
    SQL> SET LINESIZE 80
    SQL> SET TRIMSPOOL ON
    SQL> SET TAB OFF
    SQL> SET PAGESIZE 100
    SQL> declare
    2 cursor rlc is
    3 select group# grp, thread# thr, bytes/1024 bytes_k
    4 from v$log
    5 order by 1;
    6 stmt varchar2(2048);
    7 swtstmt varchar2(1024) := 'alter system switch logfile';
    8 ckpstmt varchar2(1024) := 'alter system checkpoint global';
    9 begin
    10 for rlcRec in rlc loop
    11 stmt := 'alter database add logfile thread ' ||
    12 rlcRec.thr || ' size ' ||
    13 rlcRec.bytes_k || 'K';
    14 execute immediate stmt;
    15 begin
    16 stmt := 'alter database drop logfile group ' || rlcRec.grp;
    17 execute immediate stmt;
    18 exception
    19 when others then
    20 execute immediate swtstmt;
    21 execute immediate ckpstmt;
    22 execute immediate stmt;
    23 end;
    24 execute immediate swtstmt;
    25 end loop;
    26 end;
    27 /
    declare
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Can someone point me in the right direction as to what I may be doing wrong here - Thank you!


  • ORA-01144: File size (4194304 blocks) exceeds maximum of 4194303 blocks

    Hello all,
    When I try to add a new 32GB datafile to a tablespace I get the error below. I have space available on the disk, so why am I not able to add the new datafile to the tablespace?
    ERROR at line 1:
    ORA-01144: File size (4194304 blocks) exceeds maximum of 4194303 blocks
    here is my db_block_size information:
    NAME TYPE VALUE
    db_block_size integer 8192
    How can I add a new datafile without any issues?
    Regards,
    RHK

    Thanks,
    I reduced the size and am now able to add the new datafile.
    For a long time now I have been getting the errors below:
    ORA-1653: unable to extend table PQB_ADMIN.RPT_TR by 128 in tablespace USERS
    ORA-1653: unable to extend table PQB_ADMIN.RPT_TR by 8192 in tablespace USERS
    To avoid this error I added the new datafile. Will the data now sync automatically? (It may be nearly 2 months of data.)
    Regards,
    RHK
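    For reference, a hedged sketch of how the limit works out with an 8k block size (the file name below is invented for illustration): a smallfile datafile can hold at most 4194303 blocks, i.e. 4194303 * 8192 bytes, which is just under 32GB, so a 32768M file is over the limit while 32767M fits.
    -- 32767M = 4194176 blocks at 8k, safely under the 4194303-block ceiling
    ALTER TABLESPACE users ADD DATAFILE '/u01/oradata/db/users02.dbf' SIZE 32767M;
    -- asking for the full 32768M (4194304 blocks) raises ORA-01144, as in the original post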

  • ORA-00349: failure obtaining block size

    I am trying to drop and recreate ONLINE redo logs on my STANDBY DATABASE (11.1.0.7), but I am getting the error below.
    On the primary we have already made the change, i.e. we added new log file groups with a bigger size and 3 members. When trying to do the same on the standby we get this error.
    Our database is in Active Data Guard read-only mode and the Oracle version is 11.1.0.7.
    I have deferred log apply and cancelled managed recovery, and DG is in manual mode.
    SQL> alter database Add LOGFILE GROUP 1 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M;
    alter database Add LOGFILE GROUP 1 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+DT_DG1'

    First, why are you dropping and recreating online redo log files on the standby?
    On a standby only standby redo log files will be used. I am not sure what you are trying to do.
    Here is an example of how to create online redo log files. Check that the diskgroup is mounted and has sufficient space:
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    sys@ORCL> alter database add logfile group 4 (
      2     'C:\ORACLE\ORADATA\ORCL\redo_g01a.log',
      3     'C:\ORACLE\ORADATA\ORCL\redo_g01b.log',
      4     'C:\ORACLE\ORADATA\ORCL\redo_g01c.log') size 10m;
    Database altered.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01A.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01B.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01C.LOG
    6 rows selected.
    sys@ORCL>
    Your profile: 888442 (Newbie), registered Sep 29, 2011 - 12 posts, 8 questions (7 unresolved).
    Close your threads once they are answered and keep the forum clean.
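    Since the advice above is that a standby uses standby redo log files rather than online ones, here is a minimal sketch of adding those instead; the diskgroup names are simply the ones from the post, the group number is invented, and the size should match the primary's online redo logs (1024M here):
    SQL> -- on the standby, with managed recovery cancelled
    SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M;
    SQL> -- repeat per thread; the usual recommendation is one more standby group than online groups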

  • "ORA-01144: File size (7680000 blocks) exceeds maximum of 4194303 blocks.

    Hi Team,
    While increasing the tablespace I am getting the error below. Can anyone please suggest how to handle this?
    SQL> set lin 300
    SQL> col TABLESPACE_NAME for a25
    SQL> col FILE_NAME for a65
    SQL> select TABLESPACE_NAME,FILE_ID,FILE_NAME,AUTOEXTENSIBLE,sum(BYTES/1024/1024) MB
    2 from dba_data_files where TABLESPACE_NAME='SYSAUX' group by TABLESPACE_NAME,FILE_ID,FILE_NAME,AUTOEXTENSIBLE order by sum(BYTES/1024/1024) DESC,file_name;
    TABLESPACE_NAME FILE_ID FILE_NAME AUT MB
    SYSAUX 3 /ora2/oradata/dbname/sysaux_01.dbf NO 300
    SQL> Alter database datafile 3 RESIZE 60000M;
    Alter database datafile 3 RESIZE 60000M
    ERROR at line 1:
    ORA-01144: File size (7680000 blocks) exceeds maximum of 4194303 blocks
    Regards,

    You must know that it is really important to mention your DB version and other details so that we can answer in a proper manner. Since you haven't mentioned your db size and block size, here is a generic reply. If you are using an 8kb block size, a single datafile can go up to about 32GB (8192 * 4194303 / 1024 / 1024 => 32G). So your solution would be either to go for a different (additional) file or to use a bigfile tablespace (if you are on 10g and above).
    HTH
    Aman....
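    A short sketch of the two options mentioned above, assuming an 8k block size; the file names and sizes are illustrative only, and note that an existing smallfile tablespace such as SYSAUX cannot simply be converted to bigfile:
    -- option 1: add another smallfile datafile, each one capped at roughly 32GB
    ALTER TABLESPACE sysaux ADD DATAFILE '/ora2/oradata/dbname/sysaux_02.dbf' SIZE 30000M;
    -- option 2 (10g and above, for new tablespaces): a bigfile tablespace whose single datafile can grow far beyond 32GB
    CREATE BIGFILE TABLESPACE big_data DATAFILE '/ora2/oradata/dbname/big_data01.dbf' SIZE 100G;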

  • TABLE --- BLOCK SIZE - HELP

    Hi,
    Good day to all.
    There are 4 tables in total (EMP, DEPT, STORE_INFO, WAREHOUSE_INFO) which are range partitioned. (The partition key is the current date, i.e. a new partition is created automatically every day.)
    A timer concept has been applied so that the purge is done automatically based on the number of days we pass in.
    But I can see the status in my table as "11 - which indicates that the table is overloaded".
    To my surprise, NUM_ROWS and BLOCKS are NULL for all the tables except the EMP table, for which NUM_ROWS is NULL and BLOCKS is 1,540,356.
    1) Will this be an issue during the purge?
    2) Why is NUM_ROWS NULL while BLOCKS is showing such a high value?
    Please help....

    Thanks Hemant for your prompt reply.
    Sorry, I forgot to mention that this status comes from our own records: "11 - status which it moves to when the table is overloaded".
    Actually I was under the assumption that the table-overloaded issue was being raised because of the BLOCK SIZE.
    Is the code below right? When I run it, it reports this error:
    ORA-20000: TABLE "MOQ"."EMP" does not exist or insufficient privileges
    ORA-06512: at "SYS.DBMS_STATS", line 2105
    ORA-06512: at "SYS.DBMS_STATS", line 5210
    ORA-06512: at "SYS.DBMS_STATS", line 5243
    ORA-06512: at line 8
    DECLARE
       num_rows      NUMBER;
       num_blocks    NUMBER;
       avg_row_len   NUMBER;
    BEGIN
       -- retrieve the values of table statistics on MOQ.EMP
       -- statistics table name: EMP    statistics ID: TEST1
       -- MOQ - SCHEMA_NAME
       DBMS_STATS.get_table_stats ('MOQ'
                                  ,'EMP'
                                  ,NULL
                                  ,'EMP'
                                  ,'TEST1'
                                  ,num_rows
                                  ,num_blocks
                                  ,avg_row_len);
       -- print the values
       DBMS_OUTPUT.put_line (   'num_rows='
                             || num_rows
                             || ',num_blocks='
                             || num_blocks
                             || ',avg_row_len='
                             || avg_row_len);
    END;
    /
    Or does only a DBA have the privilege?
    Please help
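    On the NUM_ROWS question: in the dictionary views that column is normally NULL simply because optimizer statistics have never been gathered (or were deleted) for the table, and it stays NULL until a gather runs. A hedged sketch using the MOQ.EMP names from the post - it assumes the user owns the table or has the ANALYZE ANY privilege, which is one of the two conditions the ORA-20000 above points at (the other being that the table does not exist in that schema):
    BEGIN
       -- gather current statistics so NUM_ROWS / BLOCKS in the dictionary are populated
       DBMS_STATS.gather_table_stats (ownname => 'MOQ',
                                      tabname => 'EMP',
                                      cascade => TRUE);   -- gather index statistics as well
    END;
    /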

  • OSD-04001: invalid logical block size (OS 2800189884)

    My Windows 2003 machine, which was running Oracle XE, crashed.
    I installed Oracle XE on Windows XP on another machine.
    I copied the D:\oracle\XE10g\oradata folder from the Win2003 machine to the same location on the WinXP machine.
    When I start the database on WinXP using SQL*Plus I get the following messages:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 146800640 bytes
    Fixed Size 1286220 bytes
    Variable Size 62918580 bytes
    Database Buffers 79691776 bytes
    Redo Buffers 2904064 bytes
    ORA-00205: error in identifying control file, check alert log for more info
    In my D:\oracle\XE10g\app\oracle\admin\XE\bdump\alert_xe log I found the following errors:
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 4 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    Wed Apr 25 18:38:36 2007
    ALTER DATABASE MOUNT
    Wed Apr 25 18:38:36 2007
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Wed Apr 25 18:38:36 2007
    ORA-205 signalled during: ALTER DATABASE MOUNT...
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Please help.
    Regards,
    Zulqarnain

    Hi Zulqarnain,
    Error OSD-04001 is a Windows-specific Oracle message. It means that the logical block size is not a multiple of 512 bytes, or that it is too large.
    So what can you do? You could try changing the value of DB_BLOCK_SIZE in the initialization parameter file.
    Regards

  • Data block size

    I have just started reading the Concepts guide and learned that Oracle database data is stored in data blocks. The standard block size is specified by the
    DB_BLOCK_SIZE init parameter. Additionally, we can configure up to four other block sizes using the DB_nK_CACHE_SIZE parameters.
    Let us say I define in the init.ora
    DB_BLOCK_SIZE = 8K
    DB_CACHE_SIZE = 4G
    DB_4K_CACHE_SIZE=1G
    DB_16K_CACHE_SIZE=1G
    Questions:
    a) Does this mean I can create tablespaces with 8K, 4K and 16K block sizes only?
    b) Whenever I query data from these tablespaces, will the blocks be cached in the corresponding buffer caches?
    Thanks in advance.
    Neel

    Yes, you will get an error message if you create a tablespace with a non-standard block size without specifying the matching db_nk_cache_size parameter in the init parameter file.
    Use the BLOCKSIZE clause of the CREATE TABLESPACE statement to create a tablespace with a block size different from the database standard block size. In order for the BLOCKSIZE clause to succeed, you must have already set the DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE initialization parameter, and the integer you specify in the BLOCKSIZE clause must correspond with the setting of one DB_nK_CACHE_SIZE parameter. Although redundant, specifying a BLOCKSIZE equal to the standard block size, as specified by the DB_BLOCK_SIZE initialization parameter, is allowed.
    The following statement creates tablespace lmtbsb, but specifies a block size that differs from the standard database block size (as specified by the DB_BLOCK_SIZE initialization parameter):
    CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
    BLOCKSIZE 8K;
    reference:-http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tspaces003.htm
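    Putting that together with the parameters from the question (DB_BLOCK_SIZE=8K plus DB_4K_CACHE_SIZE and DB_16K_CACHE_SIZE), a minimal sketch of what does and does not work - the datafile names are invented for illustration:
    -- works: a 16k buffer cache is configured in the init parameters
    CREATE TABLESPACE ts_16k DATAFILE '/u02/oracle/data/ts_16k01.dbf' SIZE 100M BLOCKSIZE 16K;
    -- works: a 4k cache is configured as well
    CREATE TABLESPACE ts_4k DATAFILE '/u02/oracle/data/ts_4k01.dbf' SIZE 100M BLOCKSIZE 4K;
    -- fails (typically ORA-29339) until the matching cache is set, e.g. ALTER SYSTEM SET db_2k_cache_size = 64M;
    CREATE TABLESPACE ts_2k DATAFILE '/u02/oracle/data/ts_2k01.dbf' SIZE 100M BLOCKSIZE 2K;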

  • Drives, block size and raptor300 choice

    Hi,
    Got a MacPro2,1 ('07 flavour) and was planning on upgrading some internal drives. My boot is currently using two striped WD 500GBs with a block size of 16k. If I were to replace these with newer drives and changed the block size to 32k, would there be any issues to speak of? Thinking of SuperDuper backups, Adobe CS3 licensing, Time Machine doing a whole boot update, etc.
    Alternatively I may go for a pared-down boot and use one 300GB Raptor, but which one?
    WD3000BLFS / WD3000HLFS / WD3000GLFS? - the HLFS looks like the one with regularly placed SATA connections, but I am unsure which fits the MacPro sleds. Also, is there a link for further isolation solutions, e.g. vibration dampeners?
    Do the RAID Editions of WD drives still cut the mustard in terms of performance (1TB Western Digital WD1002FBYS RE3, SATA 3Gb/s, 7200 rpm, 32MB cache, 4.20 ms)?
    Many thanks
    J

    16k used to be slightly better for a boot drive. The trouble with Apple's RAID is that I don't know how to change it on the fly like I can with SoftRAID.
    Okay, so you probably want 4 WD VelociRaptors for scratch, or 3 SSDs.
    Any WD Black or RE3 should be just fine, 500GB up to 1TB, and then you get into the 2TB Green RE4 - yes, an RE Green drive edition.
    The other factor is that you want even more than 16GB of RAM to be used as cache for primary 'scratch'.
    There is a guide to Photoshop acceleration and optimizing up on
    http://www.macgurus.com - lower left side panel of links to articles.

  • DB Cloning.file size is not a multiple of logical block size

    Dear All,
    I am trying to create a database on Windows XP from database files that were running on Linux.
    When I try to create the control file, I get the following errors:
    ORA-01503: CREATE CONTROLFILE failed
    ORA-01565: error in identifying file
    'D:\oracle\orcl\oradata\orcl\system01.dbf'
    ORA-27046: file size is not a multiple of logical block size
    OSD-04012: file size mismatch (OS 367009792)
    Please tell me the workarounds.
    Thanks
    Sathis.

    Hi,
    I created the database service with oradim. Now I am trying to create the control file after editing the control file script with the locations of the Windows datafiles (copied from Linux).
    Thanks,
    Sathis.

  • Raid 0 (Stripe) for OS X boot disk? Best Performance and block size

    Hi,
    so this is a new thread for an older question I had, and I would like some feedback on it;
    I have a new Mac Pro with 4 matched 1TB Caviar Black drives. I WILL be doing full Time Machine backups, as well as an independent full-system backup regularly.
    That being said, I have 4 drives open and am looking for suggestions. I am leaning toward 2 sets of stripes (one for the OS and one for 'work space', the former with a 32k stripe block size, the latter with 64k; it will hold video, audio, scratch and, yes, games).
    Does this sound alright? Is there an issue with striping the boot drive? Is a block size of 32 (or 64) optimal?
    Thanks!
    Dan

    Hi D3 Shooter, regarding your question,
    D3 Shooter wrote:
    You brought to mind something I did not take into consideration, Time Machine. I really like the simplicity of TM as it saved me once before. So, could you tell me, for photo files and some video, how much (percentage-wise) does the striping improve the accessing and filing of such files compared to no striping, using internal drives (7200/WD/1TB/Caviar)? I have not done striping before and want to weigh in because of the backup storage issues now. Thanks.
    Just give it a try and see if it is worth it for you.
    Striping:
    • just enhances (reduces) the access/transfer time, because in practice the access is distributed in parallel across several DDMs (old school, but it works great!). I think for video and file work the advantage is that you can access the whole object sooner (rather than faster).
    • this distribution also reduces a lot of old-style queuing on the device over the path. This was resolved in the late 1980s, so no real rocket science here.
    The issues with striping are few and apply broadly across all the RAID implementations (except JBOD, which of course is not RAID) when compared to a single spindle. The discussions are enormous and plentiful via Google, and experiences and opinions vary widely.
    For the I.T. people it is the advantage they get for access using a smart disk controller that caches goodies like indexes and such, so that they can sustain a zillion trivial transactions/sec (i.e. banking and internet stuff)... stuff that is of no interest to me.
    For the creative people and many applications that are BLOBs (like video, film and remote sensing objects), getting use of the objects sooner (not faster) is of prime importance for workflow efficiency. If you have this need then striping stuff across disks is for you!
    Time Machine works fine, as it seems fairly agnostic to what is implemented under the disk file system. My issue with Time Machine is that I don't want it looking after my production stuff, only keeping an eye on my admin I.T.-type stuff such as ~/ and data files.
    As posted on this thread:
    • availability is the major concern with any file system (cloud, RAID or other). RAID with parity and double-parity schemes (RAID 1, 3, 5, 6) and implementations such as RAID 6 + LSF (log-structured file) are all wonderful for the business workflows that need them.
    • timely access in a workflow is another.
    • cost benefit is another.
    However, a great benefit for me of consolidating small storage components under one huge file system is that you don't have to COPY anything around. This is marvelous, especially when you think you have to move 2TB of stuff from one place to another. That takes a lot of time with el cheapo disks that don't have fast interfaces such as SATA/SAS or FC, for example.
    As always, and as has been addressed by others on this thread (Hatter), if you lose a component storage device the whole file system is hosed or severely degraded unless you spend a lot of money on full ranks of DDMs with hot spares and a very good RAID controller card. Again, it's money.
    Yeah, sure, you can carry some parity RAID implementation across 3 disks, but the storage capacity usage is dreadful. This is why more complex RAID implementations use groups of 10+ DDMs (yep, people can argue, but this is the mainstream).
    My external disk arrays are merely two LUNs (SAS domains) holding two file systems implemented as 2 x 4TB from 1TB DDMs - all RAID 0 - no parity (no availability) - I just want speed. I look after my own "availability" with my archive solution. If the operation dies, I start again. I'm happy with that. RAID 5 has write-penalty performance hits (the well-known update in place), and RAID 6+ is lousy for huge objects but good for I.T., though it is OK if you lose two disks in a stripe (rank).
    They all have their flaws... and mirroring a RAID 0 (RAID 1+0) seems to be popular with storage vendors because they can sell you more disk, and proper business workflows depend on that availability.
    However you can achieve this stuff if you change your workflow slightly.
    Other than these, the rest is tech specs and stuff under the covers.
    So do what is right for you and your business.
    I don't like spending money on nasty el cheapo FW800 LaCie disk enclosures with their junky components and their ilk, having been burned badly by several corrupted devices and losing TBs of content - this is why I invested in a high-speed LTO-4 Ultrium data tape archive solution.
    sorry for long post..
    w

  • How to find block size in OS

    I know that Oracle's db_block_size and db_file_multiblock_read_count should be multiples of the OS (operating system) block size.
    Can anybody suggest how I can find out the block size in Windows or Linux?
    Thanks in advance
    Tinku

    SQL> show parameter db_block_size
    This shows the setting from your init.ora file. This parameter is set at database creation time and cannot be altered.
    On a Linux system use
    dumpe2fs -fh /dev/hdb
    to get information about your file system block size.
    In my case it was 4k, so my db_block_size will be a multiple of 4k.
    http://www.dizwell.com/html/db_block_size.html
    Thanks
    Gopal
    visit
    http://dba.shilpatech.com/
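    For the Windows half of the question, the NTFS allocation unit (cluster) size can be checked with fsutil, the same command shown at the top of this page; the drive letter here is just an example:
    C:\> fsutil fsinfo ntfsinfo D:
    The "Bytes Per Cluster" line in the output is the allocation unit size (for example 4096 or 32768).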

  • Oracle Block Size - question for experts

    Hi ,
    For years I thought that my file system block size was 8K.
    Lately, due to an HP-UX bug, I found that the file system block size is just ... 1K
    (HP DocId: DCLKBRC00006913 fstyp(1m) returns unexpected block size (f_bsize) for VXFS).
    My instance is currently 10.2.0.4 but was previously 7.3 --> 8 --> 8.1.7.4 --> 10.2.0.4.
    Since it is an old instance, its block size is just 4KB.
    We are planning to create a new file system with an 8K block size.
    The instance size is about 2TB.
    Recreating the whole database with an 8KB block size is impossible since it is a 24*7 instance.
    Do you think I should move just a few important tables to a new tablespace with an 8K block size, or should I leave it at 4KB?
    Thanks

    Given that your Oracle Database Block_Size (4K) is a multiple of the FileSystem Block_Size (1K), there should be no inherent significant issue, as such.
    Yes, it would have been nice to have an 8KB Oracle Database Block_Size but whether you should recreate your FileSystems to 8KB is a difficult question. There would be implications on PreFetch that the OS does and on how the underlying Storage (must be a SAN, I presume) handles those requests.
    A thorough test would be well advised (if you can set up a test environment for 2TB such that it does NOT share the same hardware, so that it does not complicate prefetches in the existing SAN).
    Else, check with HP and Veritas support whether there are known issues and/or any desupport plans for this combination.
    Oracle, obviously, would have issues with Index Key Length sizes if the Block Size is 4KB. Presumably you do not need to add any new indexes with very large keys.
    Having said that, you would have read all those posts about how Oracle doesn't (or really does ?) test every different block-size ! However, Oracle had, before 8i, been using 2K and 4K block sizes. Except that the new features (LMT, ASSM etc) may not have been well tested.
    Since you upgraded from 7.3 in place without changing the Block_Size, I would venture to say that your database is still using Dictionary Managed and Manual Allocation and Segment Space Management Manual ?
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
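    If you do go the route of moving just a few important tables, here is a minimal sketch of what that involves, assuming the standard block size stays 4K; the tablespace, file and object names are invented for illustration, and dependent indexes become UNUSABLE after the move until they are rebuilt:
    -- an 8K buffer cache must exist before an 8K tablespace can coexist with the 4K standard block size
    ALTER SYSTEM SET db_8k_cache_size = 256M;
    CREATE TABLESPACE ts_8k DATAFILE '/u01/oradata/db/ts_8k01.dbf' SIZE 10G BLOCKSIZE 8K;
    -- relocate one table and rebuild its indexes
    ALTER TABLE app.important_table MOVE TABLESPACE ts_8k;
    ALTER INDEX app.important_table_pk REBUILD TABLESPACE ts_8k;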

  • Database Block Size Smaller Than Operating System Block Size

    Finding that your database block size should be in multiples of your operating system block size is easy...
    But what if the reverse were the case - a database block size smaller than the operating system block size?
    What happens when you store an Oracle Data Block that is 2 KB in an 8 KB Operating System Block?  Does it waste 6 KB or are there 4 Oracle Data Blocks stored in 1 Operating System Block?
    Is it different if you use ASM?
    I'd like to introduce a 2 KB block size into a RAC Exadata environment for a small set of highly transactional tables and indexes to reduce contention on blocks being requested in the Global Cache.  I've witnessed horrendous wait times for a plethora of sessions when a block was highly active.
    One index in particular has a column that indicates the "state" of the record, it is a very dense index.  Records will flood in, and then multiple processes will poll, do work, and change the state of the record.  The record eventually reaches a final state and is never updated again.
    I know that I can fill up the block with fluff by adjusting the percent free, percent used, and initrans, but that seems like a lazy hack to me and I'd like to do it right if possible.
    Any thoughts or wisdom is much appreciated.
    "The database requests data in multiples of data blocks, not operating system blocks."
    "In contrast, an Oracle block is a logical storage structure whose size and structure are not known to the operating system."
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#BABDCGIB

    You could have answered your own questions if you had just read the top of the page in the doc you posted the link for:
    >
    At the finest level of granularity, Oracle Database stores data in data blocks. One logical data block corresponds to a specific number of bytes of physical disk space, for example, 2 KB. Data blocks are the smallest units of storage that Oracle Database can use or allocate.
    An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In Figure 12-2, the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
    >
    There isn't any 'wasted' space using 2KB Oracle blocks for 8KB OS blocks. As the doc says Oracle allocates 'extents' and an extent, depending on your space management, is going to be a substantial multiple of blocks. You might typically have extents that are multiples of 64 KB and that would be 8 OS blocks for your example. Yes - it is possible that the very first OS block and the very last block might not map exactly to the Oracle blocks  but for a table of any size that is unlikely to be much of an issue.
    The single-block reads used for some index accesses could affect performance since the read of a 2K Oracle block will result in an 8K OS block being read but that 8K block is also likely to be part of the same index.
    The thing is though that an index entry that is 'hot' is going to be hot whether the block it is in is 2K or 8K so any 'contention' for that entry will exist regardless of the block size.
    You will need to conduct tests using a 2K (or other) block and cache size for your index tablespaces and see which gives you the best results for your access patterns.
    You should use the standard block size for ALL tablespaces unless you can substantiate the need for a non-standard size. Indexes and LOB storage are indeed the primary use cases for using non-standard block sizes for one or more tablespaces. Don't forget that you need to allocate the appropriate buffer cache.
