Physical disk I/O size smaller than filesystem fragment/block size?
Hello,
A default UFS filesystem has an 8K block size (bsize) and a 1K fragment size (fsize). In this scenario I assumed all filesystem I/O would be 8K (or larger), and never smaller than the fragment size (1K). Since a UFS fragment/block is always several ADJACENT sectors on disk (on a disk with a 512B sector), I expected all physical disk I/O, like filesystem I/O, to be at least 1K.
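A minimal sketch of the size relationships assumed here (numbers taken from the default UFS layout above):

```python
# Sketch of the UFS size hierarchy described above: sector < fragment < block.
bsize = 8192    # UFS block size (8K)
fsize = 1024    # UFS fragment size (1K)
sector = 512    # disk sector size (512B)

frags_per_block = bsize // fsize      # 8 fragments per block
sectors_per_frag = fsize // sector    # 2 sectors per fragment

# The assumption: even the smallest allocation unit (one fragment) spans
# two adjacent sectors, so no filesystem data I/O should be a bare 512B.
print(frags_per_block, sectors_per_frag)  # 8 2
```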
But with the bitesize.d script from the DTrace Toolkit I can see I/O of 512B.
What is wrong in my assumptions, or what is the explanation?
Thank you very much in advance!!
rar wrote:
As Jim indicated to me in the unix.com forum, that cross-post thread happens to be:
http://www.unix.com/unix-advanced-expert-users/215823-physical-disk-io-size-smaller-than-fragment-block-filesystem-size.html
You could have pasted the URL to be polite ...
Similar Messages
-
How to determine physical disk size on Solaris
I would like to know whether there is a simple method available for determining physical hard disk sizes on Sun SPARC machines. On HP-based machines it is simple:
1. run "ioscan -fnC disk" - to find all disk devices and their raw device target addresses, e.g. /dev/rdsk/c0t2d2
2. run "diskinfo /dev/rdsk/c0t2d2" - to display the attributes of the physical disk, including size in Kbytes.
This simple process allows me to create simple scripts that I can use to automate the collation of audit data for a large number of HP machines.
On Sun-based machines I've looked at the prtvtoc, format, and devinfo commands and have had no joy. Methods and suggestions will be well appreciated.
OK,
format should say, e.g.:
type format ...
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
If this is not a Sun disk and you do not get the info,
select the required disk, then select partition, and then print. This will display what you need.
Hope this helps -
Database Block Size Smaller Than Operating System Block Size
Finding that your database block size should be in multiples of your operating system block size is easy...
But what if the reverse of the image below were the case?
What happens when you store an Oracle Data Block that is 2 KB in an 8 KB Operating System Block? Does it waste 6 KB or are there 4 Oracle Data Blocks stored in 1 Operating System Block?
Is it different if you use ASM?
I'd like to introduce a 2 KB block size into a RAC Exadata environment for a small set of highly transactional tables and indexes to reduce contention on blocks being requested in the Global Cache. I've witnessed horrendous wait times for a plethora of sessions when a block was highly active.
One index in particular has a column that indicates the "state" of the record, it is a very dense index. Records will flood in, and then multiple processes will poll, do work, and change the state of the record. The record eventually reaches a final state and is never updated again.
I know that I can fill up the block with fluff by adjusting the percent free, percent used, and initrans, but that seems like a lazy hack to me and I'd like to do it right if possible.
Any thoughts or wisdom is much appreciated.
"The database requests data in multiples of data blocks, not operating system blocks."
"In contrast, an Oracle block is a logical storage structure whose size and structure are not known to the operating system."
http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#BABDCGIB
You could have answered your own questions if you had just read the top of the page in that doc you posted the link for.
>
At the finest level of granularity, Oracle Database stores data in data blocks. One logical data block corresponds to a specific number of bytes of physical disk space, for example, 2 KB. Data blocks are the smallest units of storage that Oracle Database can use or allocate.
An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In Figure 12-2, the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
>
There isn't any 'wasted' space using 2 KB Oracle blocks with 8 KB OS blocks. As the doc says, Oracle allocates 'extents', and an extent, depending on your space management, is going to be a substantial multiple of blocks. You might typically have extents that are multiples of 64 KB, and that would be 8 OS blocks for your example. Yes, it is possible that the very first OS block and the very last one might not map exactly to the Oracle blocks, but for a table of any size that is unlikely to be much of an issue.
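As a rough sketch of that arithmetic (using the 64 KB extent mentioned above as the example):

```python
# Sketch of the extent-to-OS-block mapping described above.
oracle_block = 2 * 1024      # 2 KB Oracle block
os_block = 8 * 1024          # 8 KB OS block
extent = 64 * 1024           # example 64 KB extent

print(extent // oracle_block)    # 32 Oracle blocks per extent
print(extent // os_block)        # 8 OS blocks per extent
print(os_block // oracle_block)  # 4 Oracle blocks fit in one OS block
```

Because the extent is a whole multiple of both block sizes, the 2 KB Oracle blocks tile the 8 KB OS blocks exactly, which is why no space is wasted in the interior of an extent.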
The single-block reads used for some index accesses could affect performance since the read of a 2K Oracle block will result in an 8K OS block being read but that 8K block is also likely to be part of the same index.
The thing is though that an index entry that is 'hot' is going to be hot whether the block it is in is 2K or 8K so any 'contention' for that entry will exist regardless of the block size.
You will need to conduct tests using a 2K (or other) block and cache size for your index tablespaces and see which gives you the best results for your access patterns.
You should use the standard block size for ALL tablespaces unless you can substantiate the need for a non-standard size. Indexes and LOB storage are indeed the primary use cases for using non-standard block sizes for one or more tablespaces. Don't forget that you need to allocate the appropriate buffer cache. -
Causes of ORA-01200: actual file size of x is smaller than correct size n?
Hello everyone
We are running Oracle 11.2.0.3 64-bit E/E on Oracle Linux 6.2 with UEK R2 on X64.
Using Grid and ASM 11.2.0.3 and OMF names.
The database files are all on a SAN; the SAN vendor's name is not disclosed here to protect the innocent/guilty 8^)
I have a Test database MYDB (in NOARCHIVELOG mode) and after a normal server reboot, not a crash, the following error occured on Oracle database startup.
srvctl start database -d MYDB
PRCR-1079 : Failed to start resource ora.mydb.db
CRS-5017: The resource action "ora.mydb.db start" encountered the following error:
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: '+ASMDATA/mydb/datafile/system.256.787848913'  <-- corrupt file on an ASM disk, the SYSTEM tablespace this time
ORA-01200: actual file size of 94720 is smaller than correct size of 98560 blocks  <-- the error message
The ASM disks are all up and disk groups are mounted OK. The ASM protection level is EXTERNAL.
My understanding is that the only proper recovery from the above error is to use RMAN Restore Database/File/Tablespace, etc. (and then RMAN Recover, when in ARCHIVELOG mode).
I do have RMAN disk backups, so I don't need to "patch" the database to recover.
This is not my question at this point in time.
My Question is this : what are the most likely causes of such error?
Oracle Database bug? OS bug? Disk driver error? Server hardware failure (bus, memory, etc)? Or a SAN bug?
I expect that Oracle 11g R2 will always come up with the database "clean" if the server reboots or if server crashes (i.e. due to complete power failure) provided the actual storage is not physically damaged.
Our SAN vendor (no names!) says they are of the opinion that it's most likely an Oracle Database or Oracle Linux 6.x/UEK software bug, or possibly an Oracle ASM 11.2 bug.
We have opened a support call with Oracle.....
My personal experience dealing with similar database errors on more recent releases of Oracle (9i R2, 10g R2, 11g R2) and also MS-SQL 2005 and 2008 R2 suggests this kind of problem is most likely related to errors/bugs in storage/drivers/firmware/BIOS and the SAN, and not likely to be a database or O/S bug.
Perhaps you, good people on this forum, can share your experiences, as unbiased as you can?
Many thanks
I've seen ORA-01200 twice, I think, over the years; both times there were disk problems which led to write issues which caused file problems. You've reported no such issues on your side, though, so if that's actually true, I'm thinking bug.
-
ORA-01200: actual file size of 437759 is smaller than correct size of 437760
Hi,
I am getting the following unexpected errors while trying to create the control file after successful completion of an offline/online Oracle backup restore (of the PRD system) on the Quality system. We are following the database-specific system copy method.
All the required pre- and post-restore activities were carried out. The same restore has even been performed with several different online/offline backups of the PRD system. But the process is stuck at the control file creation step, with the same error seen again and again after every DB restore operation:
SQL> @/oracle/AEQ/saptrace/usertrace/CONTROL.SQL
ORACLE instance started.
Total System Global Area 4714397696 bytes
Fixed Size 2050336 bytes
Variable Size 2365589216 bytes
Database Buffers 2332033024 bytes
Redo Buffers 14725120 bytes
CREATE CONTROLFILE REUSE SET DATABASE "AEQ" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01200: actual file size of 437759 is smaller than correct size of 437760 blocks
ORA-01110: data file 4: '/oracle/AEQ/sapdata1/sr3_1/sr3.data1'
At the OS level the file size of sr3.data1 is found to be 3586129920 bytes (= 437760 * 8192 bytes).
host1:oraaeq 20> cd /oracle/AEQ/sapdata1/sr3_1
host1:oraaeq 21> ll
total 7004176
-rw-r--r-- 1 oraaeq dba 3586129920 May 11 02:26 sr3.data1
The above-mentioned error is coming for all 294 data files. The reported file size difference is only 1 block in each data file. The DB block size is 8192 bytes.
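A hedged sketch of the one-block discrepancy (this assumes the usual one-OS-block file header that Oracle keeps at the start of each datafile, which is why the OS size and Oracle's block count differ by exactly one block):

```python
# Why a file whose OS size equals 437760 * 8192 can still be "one block
# short" for Oracle: the file includes one extra header block, so only
# (os_blocks - 1) of them are data blocks.
BLOCK_SIZE = 8192
CORRECT_BLOCKS = 437760            # size the control file expects

os_size = 3586129920               # size reported by ls at the OS level
os_blocks = os_size // BLOCK_SIZE  # 437760 OS-sized blocks in the file
data_blocks = os_blocks - 1        # subtract the Oracle file header block

print(data_blocks)                 # 437759 -> the "actual" size in ORA-01200
print((CORRECT_BLOCKS + 1) * BLOCK_SIZE - os_size)  # 8192 bytes missing
```

In other words, each file is exactly one 8192-byte block shorter than the restore should have produced, which matches the error text.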
Environment: (for SAP QUALITY & PRD systems)
OS: HP_UX ia64 B.11.23
SAP System : SAP ECC 6.0
Database: Oracle 10.2.0.2.0
Your help for this reported issue will be highly appreciated.
Regards,
Bhavik G. Shroff
Hi,
Thanks for your response.
We have already tried everything you mentioned as suggestions in your last post.
We already tried to extend all 294 data files as mentioned in that Oracle forum link.
It's not the recommended way to play with data files in such a manner, as it can lead to other unnecessary errors.
We saw the following errors after successfully creating the control file by manually extending all those 294 files (it was around a 10-hour job):
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
ORA-00332: archived log is too small - may be incompletely archived
ORA-00334: archived log: '/oracle/AEQ/oraarch/AEQarch1_268984_629943661.dbf'
Have you also tried restoring the init<SID>.ora file from PRD to the new system?
I don't think it has any relationship to control file generation. Both systems have the same init files with their respective SIDs.
Did you find any other points in your further investigation?
I am thinking of performing a fresh SAP system installation with the same SID (AEQ) and then trying the database restore again with the last offline backup of the AEQ system.
Regards,
Bhavik G. Shroff -
How do I play a Keynote 09v5.3 slideshow on my iMac at a size smaller than fullscreen?
How do I play a Keynote09v5.3 slideshow on my iMac at a size smaller than fullscreen?
Hi Gary
Thanks for a conclusive answer.
Regards
Orchardhouse -
Hi. I am not technically proficient. I have somehow hit something that is making my email forwards and text sent to the printer much smaller than what appears on my screen. I have looked into everything I can think of in settings and Outlook Express. If you have any ideas, I would greatly appreciate them. I assume that one click and I'll be back to "normal", but I can't figure out which darned click that would be. Thanks.
Have you checked Outlook's font preferences? You can change the size under General Preferences > Fonts. You can also change font sizes by using ⌘shift+
-
Font Size problem : Swing font looks smaller than pdf font
Hi
I'm facing a problem with font size in my Swing application. I'm showing some text with a certain predefined font size. Using the same font size and type on the same machine, I'm generating some PDF text.
The problem is that the text in Swing looks smaller than the same text in the PDF. Is there a way to achieve consistency in font sizes?
Please don't cross-post. Pick the appropriate forum and use it. Close your cross-thread.
-
Exported video size smaller than original video size. How do I keep the original size?
I exported a video from Adobe Premiere CS6 today, and I noticed the exported video was about half the size of the original video I put in. I then uploaded the video to YouTube, and the size was still the same. I tried changing the preset to 1080p 29.97, 1080p 25, etc., and the same with 720p and 480p, but the size only changed slightly.
How do I keep the original size after exporting videos?
More information is needed for someone to help... please click below and provide the requested information
-Information FAQ http://forums.adobe.com/message/4200840
Also, exactly what are you editing, and what are your export settings?
Also, the tutorial list in message #3 http://forums.adobe.com/message/2276578 may help -
Image size of iPhone 5S is smaller than screen size
When I take any picture with my iPhone 5s, the image size is always 20-30% smaller than the screen size. It's like a picture taken with an iPhone 4. Is this an issue, or is it designed that way? Images taken in portrait mode are always cropped 20-30%, and I don't like it.
If auto adjust to fit with the Default Full Zoom extension isn't working then the pages may be using absolute values for elements and in such a case you would have to zoom the pages manually.
Can you post a link to a few pages where it isn't working properly? -
Why is the flashback log size smaller than the archived log?
hi, all. Why is the flashback log size smaller than the archived log?
Lonion wrote:
hi, all. Why is the flashback log size smaller than the archived log?
They are different things.
Flashback log size depends on the parameter DB_FLASHBACK_RETENTION_TARGET, i.e. how much you want to keep.
An archived log file is a dump of an online redo log file; it can be the size of the online redo log or smaller, depending on how full the redo log was when the switch occurred.
Some more information:-
Flashback log files can be created only under the Flash Recovery Area (which must be configured before enabling the Flashback Database functionality). RVWR creates flashback log files in a directory named "FLASHBACK" under the FRA. The size of every generated flashback log file is again under Oracle's control. In the current Oracle environment, during normal database activity flashback log files have a size of 8200192 bytes, which is very close to the current redo log buffer size. The size of a generated flashback log file can differ during shutdown and startup activities, and flashback log file sizes can also differ during high-intensity write activity.
Source:- http://dba-blog.blogspot.in/2006/05/flashback-database-feature.html
Edited by: CKPT on Jun 14, 2012 7:34 PM -
ORA-13044: the specified tile size is smaller than the tolerance
Hi,
I get the Oracle error ORA-13044: the specified tile size is smaller than the tolerance when I do the following point cloud clip:
sdo_geometry(2003, 29903, null,
  mdsys.sdo_elem_info_array(1,1003,3),
  mdsys.sdo_ordinate_array(316504,316510,234084,234090)),
However, the bounds of my point cloud are
316500 to 316511 and 234080 to 234092, and my tolerance during point cloud creation is set to 0.000015 (I have altered this value before to make it smaller, but nothing seems to fit).
Where do I set the "tile size"?
The set consists of 100 points and blk_capacity is set to 50.
Any help is really greatly appreciated.
Cheers,
F.
Hi BKazar,
thanks for responding.
Yes, I tried out all sorts of things and swapped the numbers to see if that would fix the issue. It did not.
I have now found out what the issue was, though. When I loaded the data file into Oracle, the guy who gave me the file said that the first column was the longitude, but it wasn't. After switching sdo_point.x and y the error disappeared.
I suppose the sdo_srid was expecting a different range and got confused because my long and lat were swapped.
Cheers,
F. -
Error Code - client cache is smaller than the size of the requested content
Even though we have increased the size of the ccmcache via Control Panel > Configuration Manager, we still get the error code 0x87D01202 (-2016407038): "the content download cannot be performed because the total size of the client cache is smaller than the size of the requested content". The CCMEXEC service and the computer have both been restarted after increasing the ccmcache size. Which local log file under C:\Windows\CCM\Logs should we check for more information?
Thanks
So when you're deploying the client, go into your settings and set the variable below:
smscachesize=10240
note:
SMSCACHESIZE
Specifies the size of the client cache folder in megabytes (MB), or as a percentage when used with the PERCENTDISKSPACE or PERCENTFREEDISKSPACE property. If this property is not set, the folder defaults to a maximum size of 5120 MB. The lowest value that you can specify is 1 MB.
Note
If a new package that must be downloaded would cause the folder to exceed the maximum size, and if the folder cannot be purged to make sufficient space available, the package download fails, and the program or application will not run.
This setting is ignored when you upgrade an existing client and when the client downloads software updates.
Example: CCMSetup.exe SMSCACHESIZE=100
Note
If you reinstall a client, you cannot use the SMSCACHESIZE or SMSCACHEFLAGS installation properties to set the cache size smaller than it was previously. If you try, your value is ignored and the cache size is automatically set to its previous size.
For example, if you install the client with the default cache size of 5120 MB and then reinstall the client with a cache size of 100 MB, the cache folder size on the reinstalled client is set to 5120 MB.
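As an illustrative sketch only (a simplified model, not the actual Configuration Manager implementation), the cache rule quoted above behaves like this:

```python
# Simplified model of the documented rule: a download fails outright when
# the requested content is larger than the total configured cache, even if
# the cache could be purged; otherwise it succeeds if free + purgeable
# space can hold the content.
def can_download(content_mb, cache_max_mb, used_mb, purgeable_mb):
    if content_mb > cache_max_mb:
        return False  # error 0x87D01202: content exceeds total cache size
    free_mb = cache_max_mb - used_mb
    return content_mb <= free_mb + purgeable_mb

print(can_download(6000, 5120, 0, 0))     # False: exceeds the 5120 MB default
print(can_download(6000, 10240, 500, 0))  # True after SMSCACHESIZE=10240
```

This also shows why restarting CCMEXEC alone doesn't help: if the cache maximum itself is smaller than the content, the first check fails regardless of how empty the cache is.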
Twitter: @dguilloryjr LinkedIn: http://www.linkedin.com/in/dannyjr Facebook: http://www.facebook.com/#!/dguilloryjr -
Why is my "Combined PDF" file size smaller than the original files?
Hello!
I am trying to combine two individual PDF files into a single PDF. Each file is 32 MB, yet when I use Acrobat to combine them, the newly created "combined" file is only 19 MB. I believe I've taken the necessary steps to ensure no degradation happens (i.e. selecting Large File Size in the options panel), but I am still puzzled as to how two files can be put together as one and be smaller than the two separate files without any compression. What am I missing?
Thanks in advance!
When you combine files, Acrobat does a "Save As" operation. This re-writes all of the PDF objects into the single file and is supposed to clean up the file, whereas the individual files may have had multiple incremental saves, which, if you look at the internals of a PDF file, simply append to the end of the file. In other words, you get a more cleanly written and optimized file that is also saved for Fast Web View.
-
When I try to print a photo downloaded from Facebook in iPhoto, I can't get the size I specify to print correctly. Example: 5x7 prints smaller than 5x7. What do I do to solve this problem?
Those pixel dimensions have aspect ratios that are not the same as a 5 x 7 print:
1360 x 1360 = 1.0
790 x 640 = 1.23
2048 x 1366 = 1.5
A 7 x 5 image = 1.4.
So you'll need to crop the images to 5 x 7 before printing. The first two images are a little light on pixels to produce a high-resolution 5 x 7 print.
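A quick sketch that verifies those ratios (width divided by height, rounded to two places):

```python
# Aspect ratios of the image dimensions listed above, vs. a 7 x 5 print.
dims = [(1360, 1360), (790, 640), (2048, 1366), (7, 5)]
ratios = [round(w / h, 2) for w, h in dims]
print(ratios)  # [1.0, 1.23, 1.5, 1.4]
```

Only the last value matches the print's 1.4 ratio, which is why the other images must be cropped before printing at 5 x 7.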
Happy Holidays