ASM instance's block size
Hello.
I have an Oracle 10.2 database running on Linux. Its block size is 8KB. I created an ASM instance using DBCA. I see its block size is 4KB, but I didn't specify it anywhere during creation.
Why is that?
Thx.
A.
As with databases, changing it later is not possible. I'm not sure what happens if you choose another block size at creation time. The ASM block size is only for metadata; it's not related to the actual database data (which uses the database block size) stored in ASM disk groups.
Werner
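One way to see the two sizes side by side (a minimal sketch; run the first query against the database instance and the second against the ASM instance):

```sql
-- On the database instance: the data block size used for table/index blocks
SELECT value FROM v$parameter WHERE name = 'db_block_size';

-- On the ASM instance: the metadata block size and allocation unit per disk group
SELECT name, block_size, allocation_unit_size
FROM   v$asm_diskgroup;
```

The BLOCK_SIZE column here is the metadata block size Werner describes; it is independent of any database stored in the disk group.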
Similar Messages
-
Easy questions for Pro about ASM block size
hi
how can we change DB_BLOCK_SIZE in ASM (during installation)?
we have a default of 4096. As with databases, changing it later is not possible. I'm not sure what happens if you choose another block size at creation time. The ASM block size is only for metadata; it's not related to the actual database data (which uses the database block size) stored in ASM disk groups.
These are the blocks used for extent maps in the shared pool, so you don't need to worry about this block-size difference.
here is the actual link that I copied the above from:
ASM instance's block size -
Oracle Block Size - question for experts
Hi ,
For years I thought that my system block size was 8K.
Lately, due to an HP-UX bug, I found that the file system block size is just .... 1K
(HP DocId: DCLKBRC00006913 fstyp(1m) returns unexpected block size (f_bsize) for VXFS)
My instance is currently 10.2.0.4 but previously went 7.3 --> 8 --> 8.1.7.4 --> 10.2.0.4.
Since it's an old instance, its block size is just 4KB.
We are planning to create a new file system with a block size of 8K.
The instance size is about 2 TB.
Recreating the whole database with 8KB blocks is impossible since it's a 24*7 instance.
Do you think I should move just a few important tables to a new tablespace with an 8K block size, or should I leave it at 4KB?
Thanks
Given that your Oracle Database Block_Size (4K) is a multiple of the FileSystem Block_Size (1K), there should be no significant inherent issue, as such.
Yes, it would have been nice to have an 8KB Oracle Database Block_Size but whether you should recreate your FileSystems to 8KB is a difficult question. There would be implications on PreFetch that the OS does and on how the underlying Storage (must be a SAN, I presume) handles those requests.
A thorough test (if you can set up a test environment for 2TB such that it does NOT share the same HW and doesn't complicate PreFetches in the existing SAN) would be well advised.
Else, check with HP and Veritas support whether there are known issues and/or any desupport plans for this combination.
Oracle, obviously, would have issues with Index Key Length sizes if the Block Size is 4KB. Presumably you do not need to add any new indexes with very large keys.
Having said that, you would have read all those posts about whether Oracle doesn't (or really does?) test every different block size! However, Oracle had, before 8i, been using 2K and 4K block sizes; it's just that the newer features (LMT, ASSM etc.) may not have been well tested with them.
Since you upgraded from 7.3 in place without changing the Block_Size, I would venture to say that your database is still using Dictionary Managed tablespaces with Manual Allocation and Manual Segment Space Management?
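Moving a few important tables into an 8K tablespace, as the question asks, would look roughly like this (a sketch with hypothetical object names and paths; a buffer cache for the non-standard block size must exist first):

```sql
-- Allocate a buffer cache for the non-standard 8K block size
ALTER SYSTEM SET db_8k_cache_size = 64M SCOPE=BOTH;

-- Create the 8K tablespace (datafile path and sizes are examples only)
CREATE TABLESPACE data_8k
  DATAFILE '/u01/oradata/prod/data_8k_01.dbf' SIZE 2G
  BLOCKSIZE 8K;

-- Move a table; its indexes become UNUSABLE and must be rebuilt
ALTER TABLE important_tab MOVE TABLESPACE data_8k;
ALTER INDEX important_tab_pk REBUILD;
```

The move takes a lock and rewrites the segment, so schedule it for a quiet window on a 24*7 instance.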
Hemant K Chitale
http://hemantoracledba.blogspot.com -
RAID, ASM, and Block Size
* This was posted in the "Installation" Thread, but I copied it here to see if I can get more responses, Thank you.*
Hello,
I am about to set up a new Oracle 10.2 Database server. In the past, I used RAID 5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) high availability, 4) it wasted less space compared to other RAID techniques.
However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow, to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a 16KB block size versus our normal tablespace with an 8KB block size, we almost cut the update time in half.
So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I setup the directory structure? How about partitioning?
2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
Thanks!
Hi
RAID 0 does indeed offer the best performance; however, if any one drive of the striped set fails you will lose all your data. If you have not considered a backup strategy, now would be the time to do so. For redundancy, RAID 1 mirroring might be a better option as this will offer a safety net in case of a single drive failure. A RAID is not a backup, and you should always consider a workable backup strategy.
Purchase another two 1TB drives and you could consider RAID 10: two stripes, mirrored.
Not all your files will be large ones, as I'm guessing you'll be using this workstation for the usual mundane matters such as email etc.? Selecting a larger block size with small file sizes usually decreases performance. You have to consider all applications and file sizes before deciding whether the best block size really would be 32K.
My 2p
Tony -
Install Recommendations (RAID, ASM, Block Size etc)
Hello,
I am about to set up a new Oracle 10.2 Database server. In the past, I used RAID 5 since 1) it was a fairly small database, 2) there were not a lot of writes, 3) high availability, 4) it wasted less space compared to other RAID techniques.
However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow, to the point where an update that used to take about an hour now takes 10-12 hours or more. One thing we noticed is that if we created another tablespace with a 16KB block size versus our normal tablespace with an 8KB block size, we almost cut the update time in half.
So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I setup the directory structure? How about partitioning?
2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
Thanks!
The way I usually handle databases of that size, if you don't feel like migrating to ASM redundancy, is to use RAID-10. RAID 5 is HORRIBLY slow (your redo logs will hate you), and if your controller is any good, a RAID-10 will be the same speed as a RAID-0 on reads, and almost as fast on writes. Also, when you create your array, make the stripe blocks as close to 1MB as you can. Modern disks can usually cache 1MB pretty easily, and that will speed up the performance of your array by a lot.
I just never got into ASM, not sure why. But I'd say build your array as a RAID-10 (you have the capacity) and you'll notice a huge difference.
16k block size should be good enough. If you have recordsets that are that large, you might want to consider tweaking your multiblock read count.
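The multiblock read count mentioned above is an ordinary init parameter; a hedged sketch of checking and changing it (the value 64 is only an example, not a recommendation):

```sql
-- Current setting: number of blocks read per I/O during full scans
SELECT value FROM v$parameter WHERE name = 'db_file_multiblock_read_count';

-- Example change: 64 x 8K blocks = 512K per multiblock read
ALTER SYSTEM SET db_file_multiblock_read_count = 64 SCOPE=BOTH;
```

Larger values favor full scans, so test against your mixed workload before committing to one.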
~Jer -
ASM instances on 2 node Oracle RAC 10g r2 on Red Hat 4 u1
Hi all
I'm experiencing a problem configuring diskgroups under +ASM instances on a two-node Oracle RAC.
I followed the official guide and also official documents from the Metalink site, but I'm stuck on the visibility of the ASM disks.
I created fake disks on NFS with NetApp certified storage, binding them to block devices with the usual trick "losetup /dev/loopX /nfs/disk1",
run "oracleasm createdisk DISKX /dev/loopX" on one node and
"oracleasm scandisks" on the other one.
With "oracleasm listdisks" I can see the disks at OS level on both nodes. But when I try to create and mount the diskgroup in the ASM instances, on the instance where I create the diskgroup all is well, but the other one doesn't see the disks at all, and the diskgroup mount fails with:
ERROR: no PST quorum in group 1: required 2, found 0
Tue Sep 20 16:22:32 2005
NOTE: cache dismounting group 1/0x6F88595E (DG1)
NOTE: dbwr not being msg'd to dismount
ERROR: diskgroup DG1 was not mounted
any help would be appreciated
thanks a lot.
Antonello
I'm having this same problem. Did you ever find a solution?
-
ORA-27046: file size is not a multiple of logical block size
Hi All,
Getting the below error while creating the control file after a database restore. Permissions and ownership of the CONTROL.SQL file are 777 and ora<sid>:dba.
ERROR -->
SQL> !pwd
/oracle/SID/sapreorg
SQL> @CONTROL.SQL
ORACLE instance started.
Total System Global Area 3539992576 bytes
Fixed Size 2088096 bytes
Variable Size 1778385760 bytes
Database Buffers 1744830464 bytes
Redo Buffers 14688256 bytes
CREATE CONTROLFILE SET DATABASE "SID" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01565: error in identifying file
'/oracle/SID/sapdata5/p11_19/p11.data19.dbf'
ORA-27046: file size is not a multiple of logical block size
Additional information: 1
Additional information: 1895833576
Additional information: 8192
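The "Additional information" lines above actually contain the diagnosis: 1895833576 (the file size in bytes) is not an even multiple of 8192 (the logical block size), which you can confirm directly:

```sql
-- 1895833576 bytes is 231424 full 8K blocks plus 8168 leftover bytes,
-- so the datafile was truncated or is otherwise damaged
SELECT MOD(1895833576, 8192) AS leftover_bytes FROM dual;
-- leftover_bytes = 8168
```

A datafile whose size is not a whole number of database blocks usually points at an incomplete restore or copy of that file rather than at a parameter mismatch.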
Checked in the target system's init<SID>.ora and found the parameter db_block_size is 8192. Also checked in the source system's init<SID>.ora and found db_block_size is also 8192.
/oracle/SID/102_64/dbs$ grep -i block initSID.ora
Kindly look into the issue.
Regards,
Soumya
Please check the following things:
1. SPfile corruption:
Start the DB in NOMOUNT using the pfile (i.e. init<sid>.ora); create spfile from pfile; restart the instance in NOMOUNT state.
Then create the control file from the script.
2. Check the ulimit of the target server; the filesize parameter for ulimit should be unlimited.
3. Has the db_block_size parameter been changed in the init file by any chance?
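Step 1 above, spelled out as a SQL*Plus sequence (the pfile path follows the dbs directory shown earlier; substitute your actual SID):

```sql
-- Start from the text pfile to rule out spfile corruption
STARTUP NOMOUNT PFILE='/oracle/SID/102_64/dbs/initSID.ora'

-- Rebuild the spfile from the known-good pfile, then restart on it
CREATE SPFILE FROM PFILE='/oracle/SID/102_64/dbs/initSID.ora';
SHUTDOWN IMMEDIATE
STARTUP NOMOUNT

-- Re-run the control file creation script
@CONTROL.SQL
```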
Regards
Kausik -
ORA-00349: failure obtaining block size for '+Z' in Oracle XE
Hello,
I am attempting to move the online redo log files to a new flash recovery area location created on network drive "Z" ( Oracle Database 10g Express Edition Release 10.2.0.1.0).
When I run @?/sqlplus/admin/movelogs; in SQL*Plus as a local sysdba, I get the following errors:
ERROR at line 1:
ORA-00349: failure obtaining block size for '+Z'
ORA-06512: at line 14
Please let me know how to go about resolving this issue.
Thank you.
See below for detail:
Connected.
SQL> @?/sqlplus/admin/movelogs;
SQL> Rem
SQL> Rem $Header: movelogs.sql 19-jan-2006.00:23:11 banand Exp $
SQL> Rem
SQL> Rem movelogs.sql
SQL> Rem
SQL> Rem Copyright (c) 2006, Oracle. All rights reserved.
SQL> Rem
SQL> Rem NAME
SQL> Rem movelogs.sql - move online logs to new Flash Recovery Area
SQL> Rem
SQL> Rem DESCRIPTION
SQL> Rem This script can be used to move online logs from old online log
SQL> Rem location to Flash Recovery Area. It assumes that the database
SQL> Rem instance is started with new Flash Recovery Area location.
SQL> Rem
SQL> Rem NOTES
SQL> Rem For use to rename online logs after moving Flash Recovery Area.
SQL> Rem The script can be executed using following command
SQL> Rem sqlplus '/ as sysdba' @movelogs.sql
SQL> Rem
SQL> Rem MODIFIED (MM/DD/YY)
SQL> Rem banand 01/19/06 - Created
SQL> Rem
SQL>
SQL> SET ECHO ON
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL> declare
2 cursor rlc is
3 select group# grp, thread# thr, bytes/1024 bytes_k
4 from v$log
5 order by 1;
6 stmt varchar2(2048);
7 swtstmt varchar2(1024) := 'alter system switch logfile';
8 ckpstmt varchar2(1024) := 'alter system checkpoint global';
9 begin
10 for rlcRec in rlc loop
11 stmt := 'alter database add logfile thread ' ||
12 rlcRec.thr || ' size ' ||
13 rlcRec.bytes_k || 'K';
14 execute immediate stmt;
15 begin
16 stmt := 'alter database drop logfile group ' || rlcRec.grp;
17 execute immediate stmt;
18 exception
19 when others then
20 execute immediate swtstmt;
21 execute immediate ckpstmt;
22 execute immediate stmt;
23 end;
24 execute immediate swtstmt;
25 end loop;
26 end;
27 /
declare
ERROR at line 1:
ORA-00349: failure obtaining block size for '+Z'
ORA-06512: at line 14
Can someone point me in the right direction as to what I may be doing wrong here - Thank you!
888442 wrote:
I am trying to drop and recreate ONLINE redo logs on my STANDBY DATABASE (11.1.0.7), but I am getting the below error.
On the primary we have made the changes, i.e. we added new logfiles with a bigger size and 3 members. When trying to do the same on the standby we are getting this error.
Our database is in Active DG Read only mode and the oracle version is 11.1.0.7.
I have deferred the log apply and cancelled managed recovery, and DG is in manual mode.
SQL> alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M;
alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M
ERROR at line 1:
ORA-00349: failure obtaining block size for '+DT_DG1'
First, why are you dropping & recreating online redo log files on the standby?
On a standby, only standby redo log files will be used. Not sure what you are trying to do.
Here is an example of how to create online redo log files. Check that the diskgroup is mounted and has sufficient space.
sys@ORCL> select member from v$logfile;
MEMBER
C:\ORACLE\ORADATA\ORCL\REDO03.LOG
C:\ORACLE\ORADATA\ORCL\REDO02.LOG
C:\ORACLE\ORADATA\ORCL\REDO01.LOG
sys@ORCL> alter database add logfile group 4 (
2 'C:\ORACLE\ORADATA\ORCL\redo_g01a.log',
3 'C:\ORACLE\ORADATA\ORCL\redo_g01b.log',
4 'C:\ORACLE\ORADATA\ORCL\redo_g01c.log') size 10m;
Database altered.
sys@ORCL> select member from v$logfile;
MEMBER
C:\ORACLE\ORADATA\ORCL\REDO03.LOG
C:\ORACLE\ORADATA\ORCL\REDO02.LOG
C:\ORACLE\ORADATA\ORCL\REDO01.LOG
C:\ORACLE\ORADATA\ORCL\REDO_G01A.LOG
C:\ORACLE\ORADATA\ORCL\REDO_G01B.LOG
C:\ORACLE\ORADATA\ORCL\REDO_G01C.LOG
6 rows selected.
sys@ORCL>
How does 1 ASM instance in a 10-node RAC cause all 10 ASM instances to hang
Linux RHEL4
11.2.0.1.0 GI
11.2.0.1.0 RDBMS
11.1.0.7.0 RDBMS
10.2.0.4.4 RDBMS
Yesterday we had one of the ASM instances on our 10-node cluster hang on a latch, "ASM file allocation latch". Oracle Support has stated we are hitting a bug which is fixed in the latest PSU.
What I am trying to understand is how a latch on one ASM instance caused all 10 ASM instances to hang?
Oracle Support's explanation still does not answer how all 10 ASM instances were affected, other than the faulty code that the patch fixes. Below is info out of the SAR.
Data Guard Site:
================
ARC1 PID 21508 reports ORA-240 'control file enqueue held for more than 120 seconds' at Thu Oct 28 11:35:54 2010. This message means ARC1 PID 21508 was holding the enqueue past the maximum time limit of 120 seconds.
Next, ARC0 PID 17398 reports ORA-16146: 'control file enqueue unavailable' at Thu Oct 28 11:36:52 2010. This means ARC0 PID 17398 tried to access the controlfile and could not get the lock.
ARC1 PID 21508 is then killed at Thu Oct 28 11:49:57 2010.
Killing enqueue blocker (pid=21508) on resource CF-00000000-00000000 by (pid=23078) by killing session 5.3
We see that RSM has been restarted by Broker. This matches the alert.log and messages showing RSM being restarted several times, e.g. RSM0 started with pid=84, OS id=27915.
Since this started happening around 11:14, the problem may have really started then instead of w/ the CF enqueue block.
The primary appears to be shipping redo to the standby using asynchronous mode. Unless there was a gap, I do not know if ARC1 should have been doing any remote network I/O. Because a log switch to thread 1 sequence 5235 had just happened at 11:33am, I suspect that at least a local disk I/O was started. The ORA-240 is reported at 11:35, exactly 120 sec (or 2 min) from this log switch. Notice also that the entry showing log thread 1 sequence 5234 being registered in the controlfile hadn't happened yet.
ASM Side:
============
From the systemstate dump on ASM, multiple processes are waiting on 'ASM file allocation latch' and we are kind of stuck on stack:
ksedsts()+461<-ksdxfstk()+32<-ksdxcb()+1782<-sspuser()+112<-0000003218E0C5B0<-kfuhInsert()+175<-kffilCreate()+601<-kfnsUFG()+3691<-kfnsBackground()+4382<-kfnDispatch()+527<-opiodr()+1149<-ttcpip()+1251<-opitsk()+1633<-opiino()+958<-opiodr()+1149<-opidrv()+570<-sou2o()+103<-opimai_real()+133<-ssthrdmain()+214<-main()+201<-__libc_start_main()+219
This caused the hang on the ASM side and hence ASM was not responding to the database. The CF Enqueue issue on the database was a side effect of this.
This is a direct <BUG:9232266>, which is a duplicate of <BUG:8974548>
Thanks Murali,
I was a little reluctant to open a TAR since my experience says it's better to troubleshoot the issue yourself instead of spending time with Oracle Support :)
Well, I got some more information on this: the SQL for the process of the ASM instance is:
Alter Diskgroup mount all;
It looks like it is unable to mount the diskgroup and gets hung at that point. Is it due to the new disk added to the system, or did we miss something after adding the new disk?
Also, the ASM process cannot be killed from the unix box while ASM is down.
Will appreciate your time if we can move in any direction from here ..
Thanks ,
Ankur -
Error while starting ASM instance
When I was trying to start up my ASM instance, I got the following error.
SQL> startup
ASM instance started
Total System Global Area 125829120 bytes
Fixed Size 2019000 bytes
Variable Size 98644296 bytes
ASM Cache 25165824 bytes
ORA-15032: not all alterations performed
ORA-15003: diskgroup "DBDATA" already mounted in another lock name space
thnx in advance
Hi,
I am not very familiar with ASM, but I can give some more details on this.
There was already an instance named +ASM on the host, which had the diskgroup 'DBDATA' created.
But now I have created a new instance and I am trying to connect to the same diskgroup.
I don't have root permission, but I do have the oracle login.
Could you please help me in this regard...
I need to connect to the diskgroup. If I stop the other instance, I don't get the error, but the diskgroup gets dismounted.
thnx a lot for the replies
OSD-04001: invalid logical block size (OS 2800189884)
My Windows 2003 machine, which was running Oracle XE, crashed.
I installed Oracle XE on Windows XP on another machine.
I copied my D:\oracle\XE10g\oradata folder from the Win2003 machine to the same location on the WinXP machine.
When I start the database on WinXP using SQL*Plus I get the following message
SQL> startup
ORACLE instance started.
Total System Global Area 146800640 bytes
Fixed Size 1286220 bytes
Variable Size 62918580 bytes
Database Buffers 79691776 bytes
Redo Buffers 2904064 bytes
ORA-00205: error in identifying control file, check alert log for more info
In my D:\oracle\XE10g\app\oracle\admin\XE\bdump\alert_xe I found the following errors
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 4 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
Wed Apr 25 18:38:36 2007
ALTER DATABASE MOUNT
Wed Apr 25 18:38:36 2007
ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
ORA-27047: unable to read the header block of file
OSD-04001: invalid logical block size (OS 2800189884)
Wed Apr 25 18:38:36 2007
ORA-205 signalled during: ALTER DATABASE MOUNT...
ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
ORA-27047: unable to read the header block of file
OSD-04001: invalid logical block size (OS 2800189884)
Please help.
Regards,
Zulqarnain
Hi Zulqarnain,
Error OSD-04001 is a Windows NT specific Oracle message. It means that the logical block size is not a multiple of 512 bytes, or it is too large.
So what can you do? Well, you could try changing the value of DB_BLOCK_SIZE in the initialization parameter file.
Regards -
ASM instance won't mount diskgroup..
HI, I have 10g release 2 installed on CentOS 4.4; I use ASM striping with 4 raw disks.
I had a system crash due to a power failure and now ASM won't mount the diskgroup.
export ORACLE_SID=+ASM
SQL> startup mount;
ASM instance started
Total System Global Area 130023424 bytes
Fixed Size 2071000 bytes
Variable Size 102786600 bytes
ASM Cache 25165824 bytes
ORA-15110: no diskgroups mounted
SQL> alter diskgroup RESEARCH1 mount;
alter diskgroup RESEARCH1 mount
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15063: ASM discovered an insufficient number of disks for diskgroup
"RESEARCH1"
now when I use /etc/init.d/oracleasm listdisks I can see all my disks:
DISK1
DISK2
DISK3
DISK4
then I tried to change asm_diskstring to point at the mount point; here is my ora file:
*.asm_diskgroups='RESEARCH1'
+ASM.asm_diskgroups='RESEARCH1' #Manual Dismount
*.asm_diskstring='/dev/oracleasm/disks'
*.background_dump_dest='/home/oracle/product/10.2.0/db_1/admin/+ASM/bdump'
*.core_dump_dest='/home/oracle/product/10.2.0/db_1/admin/+ASM/cdump'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
*.user_dump_dest='/home/oracle/product/10.2.0/db_1/admin/+ASM/udump'
any ideas?
Thanks
Assaf
Hi,
with the oracleasm lib utility you can configure it as below:
# /etc/init.d/oracleasm configure
Default user to own the driver interface [oracle]: oracle
Default group to own the driver interface [dba]: dba
Start Oracle ASM library driver on boot (y/n) [y]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Scanning system for ASM disks: [ OK ]
# /etc/init.d/oracleasm enable
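One more thing worth checking, based on the pfile posted above (this is an assumption, not a confirmed fix): asm_diskstring points at the directory itself rather than at the disk device files inside it, so disk discovery may find nothing. Something like:

```sql
-- In the pfile, change the discovery string to match the device files:
--   *.asm_diskstring='/dev/oracleasm/disks/*'
-- then restart the ASM instance and retry the mount:
ALTER DISKGROUP research1 MOUNT;
```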
Thanks -
Using DBCA to create ASM instance; no member disks found
OK, we've been struggling with this for two days and we can't seem to find the magic mojo to get it working.
I've got two Redhat 64-bit machines running Oracle 10.2.0.3 on them. Clusterware is installed fine and communicating properly. We've installed ASMLib and created our ASM disks fine and they list correctly using the oracleasm utility.
Our problem is that when I go to create the ASM instance (using DBCA) the member disks do not show up. I have tried changing the "Disk Discovery Path", I've created listeners, started DBCA from different Oracle homes, re-installed, etc... nothing seems to work.
I've even seen a few threads where people have experienced the same problem but have not seen solutions. Anyone have ideas how I can resolve this issue?
Actually, it needs to be root:dba, sorry for the confusion:
$>ls -l
total 0
crw-rw---- 1 root dba 162, 1 May 15 22:49 raw1
crw-rw---- 1 root dba 162, 10 May 15 22:49 raw10
crw-rw---- 1 root dba 162, 11 May 15 22:49 raw11
crw-rw---- 1 root dba 162, 12 May 15 22:49 raw12
crw-rw---- 1 root dba 162, 13 May 15 22:49 raw13
crw-rw---- 1 root dba 162, 14 May 15 22:49 raw14
crw-rw---- 1 root dba 162, 15 May 15 22:49 raw15
crw-rw---- 1 root dba 162, 16 May 15 22:49 raw16
crw-rw---- 1 root dba 162, 17 May 15 22:49 raw17
crw-rw---- 1 root dba 162, 18 May 15 22:49 raw18
crw-rw---- 1 root dba 162, 19 May 15 22:49 raw19
crw-rw---- 1 root dba 162, 2 May 15 22:49 raw2
crw-rw---- 1 root dba 162, 20 May 15 22:49 raw20
crw-rw---- 1 root dba 162, 21 May 15 22:49 raw21
crw-rw---- 1 root dba 162, 22 May 15 22:49 raw22
crw-rw---- 1 root dba 162, 23 May 15 22:49 raw23
crw-rw---- 1 root dba 162, 24 May 15 22:49 raw24
crw-rw---- 1 root dba 162, 25 May 15 22:49 raw25
crw-rw---- 1 root dba 162, 26 May 15 22:49 raw26
crw-rw---- 1 root dba 162, 27 May 15 22:49 raw27
crw-rw---- 1 root dba 162, 28 May 15 22:49 raw28
crw-rw---- 1 root dba 162, 29 May 15 22:49 raw29
crw-rw---- 1 root dba 162, 3 May 15 22:49 raw3
crw-rw---- 1 root dba 162, 30 May 15 22:49 raw30
crw-rw---- 1 root dba 162, 31 May 15 22:49 raw31
crw-rw---- 1 root dba 162, 32 May 15 22:49 raw32
crw-rw---- 1 root dba 162, 33 May 15 22:49 raw33
crw-rw---- 1 root dba 162, 4 May 15 22:49 raw4
crw-rw---- 1 root dba 162, 5 May 15 22:49 raw5
crw-rw---- 1 root dba 162, 6 May 15 22:49 raw6
crw-rw---- 1 root dba 162, 7 May 15 22:49 raw7
crw-rw---- 1 root dba 162, 8 May 15 22:49 raw8
crw-rw---- 1 root dba 162, 9 May 15 22:49 raw9
ADPRD2@oracle15 oracle Last rc=0 /dev/raw
$>
Also, I see that you're using block devices while I am using raw devices. I know that Oracle 11g can utilize block devices, especially on RHEL 5, but I am not sure that Oracle 10g can do that.
Specifying segments and block size manually
Hi, just a quick question.
Could anyone help me understand why someone might manually add segments to a tablespace (or is it a data file they would be added to)? Doesn't autoextend take care of this?
And secondly ... why would you increase or decrease the block size of a segment? Is it because you may have small or large rows within a table and want a block size to accommodate this?
Any help would be appreciated
Hi,
In Oracle, free space can be managed automatically or manually. You specify automatic segment space management when you create a locally managed tablespace.
Free space can be managed automatically inside database segments. The in-segment free/used space is tracked using bitmaps, as opposed to free lists. Automatic segment-space management offers the following benefits:
-Ease of use
-Better space utilization, especially for the objects with highly varying size rows
-Better run-time adjustment to variations in concurrent access
-Better multi-instance behavior in terms of performance/space utilization
For manually managed tablespaces, two space management parameters, PCTFREE and PCTUSED, enable you to control the use of free space for inserts and updates to the rows in all the data blocks of a particular segment. Specify these parameters when you create or alter a table or cluster (which has its own data segment). You can also specify the storage parameter PCTFREE when creating or altering an index (which has its own index segment).
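The two modes described above can be sketched in DDL (hypothetical names and paths; automatic segment space management requires a locally managed tablespace):

```sql
-- Locally managed tablespace with automatic segment space management (bitmaps)
CREATE TABLESPACE data_auto
  DATAFILE '/u01/oradata/db/data_auto01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- Manual segment space management: free space tracked via free lists
CREATE TABLESPACE data_manual
  DATAFILE '/u01/oradata/db/data_man01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT MANUAL;

-- PCTFREE/PCTUSED control in-block free space for manual mode
CREATE TABLE orders (id NUMBER, status VARCHAR2(10))
  TABLESPACE data_manual
  PCTFREE 20 PCTUSED 40;  -- PCTUSED is ignored in ASSM tablespaces
```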
see this link
http://download.oracle.com/docs/cd/B10500_01/server.920/a96524/b_deprec.htm#634923 :) -
Database Block Size Smaller Than Operating System Block Size
Finding that your database block size should be a multiple of your operating system block size is easy...
But what if the reverse of the image below were the case?
What happens when you store an Oracle Data Block that is 2 KB in an 8 KB Operating System Block? Does it waste 6 KB or are there 4 Oracle Data Blocks stored in 1 Operating System Block?
Is it different if you use ASM?
I'd like to introduce a 2 KB block size into a RAC Exadata environment for a small set of highly transactional tables and indexes to reduce contention on blocks being requested in the Global Cache. I've witnessed horrendous wait times for a plethora of sessions when a block was highly active.
One index in particular has a column that indicates the "state" of the record, it is a very dense index. Records will flood in, and then multiple processes will poll, do work, and change the state of the record. The record eventually reaches a final state and is never updated again.
I know that I can fill up the block with fluff by adjusting the percent free, percent used, and initrans, but that seems like a lazy hack to me and I'd like to do it right if possible.
Any thoughts or wisdom is much appreciated.
"The database requests data in multiples of data blocks, not operating system blocks."
"In contrast, an Oracle block is a logical storage structure whose size and structure are not known to the operating system."
http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#BABDCGIB
You could have answered your own questions if you had just read the top of the page in that doc you posted the link for
>
At the finest level of granularity, Oracle Database stores data in data blocks. One logical data block corresponds to a specific number of bytes of physical disk space, for example, 2 KB. Data blocks are the smallest units of storage that Oracle Database can use or allocate.
An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In Figure 12-2, the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
>
There isn't any 'wasted' space using 2KB Oracle blocks on 8KB OS blocks. As the doc says, Oracle allocates 'extents', and an extent, depending on your space management, is going to be a substantial multiple of blocks. You might typically have extents that are multiples of 64 KB, and that would be 8 OS blocks in your example. Yes, it is possible that the very first OS block and the very last block might not map exactly to Oracle blocks, but for a table of any size that is unlikely to be much of an issue.
The single-block reads used for some index accesses could affect performance since the read of a 2K Oracle block will result in an 8K OS block being read but that 8K block is also likely to be part of the same index.
The thing is though that an index entry that is 'hot' is going to be hot whether the block it is in is 2K or 8K so any 'contention' for that entry will exist regardless of the block size.
You will need to conduct tests using a 2K (or other) block and cache size for your index tablespaces and see which gives you the best results for your access patterns.
You should use the standard block size for ALL tablespaces unless you can substantiate the need for a non-standard size. Indexes and LOB storage are indeed the primary use cases for non-standard block sizes for one or more tablespaces. Don't forget that you need to allocate the appropriate buffer cache.
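If the 2K experiment does go ahead, the mechanics would look something like this (a sketch with hypothetical names; on RAC the non-standard cache must be set on every instance):

```sql
-- A buffer cache for the non-standard block size must exist first
ALTER SYSTEM SET db_2k_cache_size = 128M SCOPE=BOTH SID='*';

-- Non-standard block size tablespace on ASM
CREATE TABLESPACE hot_idx_2k
  DATAFILE '+DATA' SIZE 4G
  BLOCKSIZE 2K;

-- Rebuild the contended index into it (index name is hypothetical)
ALTER INDEX state_idx REBUILD TABLESPACE hot_idx_2k;
```

As the answer above stresses, test this against your actual access patterns before rolling it out; the smaller block reduces rows per block but does not by itself remove contention on a hot entry.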