Operating System's I/O Buffer Size

Hi There,
Can anybody tell me how to find the operating system's I/O buffer size?
I have Windows 2000 Server with Oracle 9.2 on it.
Thanks in advance!

Hi,
Run some Statspack snapshots, then take some reports; the "average blocks per read" figure in the datafile section will indicate whether this parameter is set appropriately for your application.
db_file_multiblock_read_count=16
Take care: a high value encourages full table scans over index usage.
Nicolas.

Similar Messages

  • Get I/O buffer size of operating system...

Hi,
Is there any script in Oracle that will display the I/O buffer size of the operating system I use?
I need it in order to set the parameter DB_FILE_MULTIBLOCK_READ_COUNT appropriately.
Thanks a lot,
Simon

Hi,
I found a script to determine the I/O buffer size of the operating system. It is as follows:
create tablespace tester datafile 'C:\oracle\product\10.2.0\oradata\EPESY\test.dbf' size 10M reuse
default storage (initial 1M next 1M pctincrease 0);
create table testing tablespace tester
as select * from all_objects
where rownum < 50000;
select relative_fno from dba_data_files
where tablespace_name = 'TESTER';
select phyrds, phyblkrd from v$filestat where file# = <#relative_fno#>;
select count(*) from testing;
select phyrds, phyblkrd from v$filestat where file# = <#relative_fno#>;
In an example explaining the above script, there were the following figures:
select phyrds, phyblkrd from v$filestat where file# = <#relative_fno#>;
PHYRDS PHYBLKRD
154 1220
The test ends by dividing PHYBLKRD by PHYRDS. In the above example it yields a result of 7.92, which is close to 8, so the effective multiblock read count is 8.
NOTE: The underlined words are the author's, not mine.
I ran the above script and found a figure of about 10.85. So what might the effective multiblock read count be?
Thanks a lot,
Simon
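The arithmetic behind this test can be sketched as follows (the class and method names are mine, not from the thread; a real run would subtract the "before" figures from the "after" figures taken around the full scan):

```java
public class MultiblockRead {
  // Effective multiblock read count = physical blocks read / physical reads,
  // using before/after deltas from v$filestat.
  static double effectiveCount(long phyrdsBefore, long phyblkrdBefore,
                               long phyrdsAfter, long phyblkrdAfter) {
    return (double) (phyblkrdAfter - phyblkrdBefore)
         / (phyrdsAfter - phyrdsBefore);
  }

  public static void main(String[] args) {
    // Figures from the example above: 154 reads fetched 1220 blocks in total.
    System.out.println(effectiveCount(0, 0, 154, 1220)); // ~7.92, close to 8
  }
}
```

A result like 10.85 simply means the average read fetched about 11 blocks; the OS or storage layer may split or merge requests, so the figure rarely lands exactly on a power of two.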

  • Database Block Size Smaller Than Operating System Block Size

    Finding that your database block size should be in multiples of your operating system block size is easy...
    But what if the reverse of the image below were the case?
    What happens when you store an Oracle Data Block that is 2 KB in an 8 KB Operating System Block?  Does it waste 6 KB or are there 4 Oracle Data Blocks stored in 1 Operating System Block?
    Is it different if you use ASM?
    I'd like to introduce a 2 KB block size into a RAC Exadata environment for a small set of highly transactional tables and indexes to reduce contention on blocks being requested in the Global Cache.  I've witnessed horrendous wait times for a plethora of sessions when a block was highly active.
    One index in particular has a column that indicates the "state" of the record, it is a very dense index.  Records will flood in, and then multiple processes will poll, do work, and change the state of the record.  The record eventually reaches a final state and is never updated again.
    I know that I can fill up the block with fluff by adjusting the percent free, percent used, and initrans, but that seems like a lazy hack to me and I'd like to do it right if possible.
    Any thoughts or wisdom is much appreciated.
    "The database requests data in multiples of data blocks, not operating system blocks."
    "In contrast, an Oracle block is a logical storage structure whose size and structure are not known to the operating system."
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#BABDCGIB

You could have answered your own question if you had just read the top of the page in the doc you posted the link for:
    >
    At the finest level of granularity, Oracle Database stores data in data blocks. One logical data block corresponds to a specific number of bytes of physical disk space, for example, 2 KB. Data blocks are the smallest units of storage that Oracle Database can use or allocate.
    An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In Figure 12-2, the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
    >
    There isn't any 'wasted' space using 2KB Oracle blocks for 8KB OS blocks. As the doc says Oracle allocates 'extents' and an extent, depending on your space management, is going to be a substantial multiple of blocks. You might typically have extents that are multiples of 64 KB and that would be 8 OS blocks for your example. Yes - it is possible that the very first OS block and the very last block might not map exactly to the Oracle blocks  but for a table of any size that is unlikely to be much of an issue.
    The single-block reads used for some index accesses could affect performance since the read of a 2K Oracle block will result in an 8K OS block being read but that 8K block is also likely to be part of the same index.
    The thing is though that an index entry that is 'hot' is going to be hot whether the block it is in is 2K or 8K so any 'contention' for that entry will exist regardless of the block size.
    You will need to conduct tests using a 2K (or other) block and cache size for your index tablespaces and see which gives you the best results for your access patterns.
You should use the standard block size for ALL tablespaces unless you can substantiate the need for a non-standard size. Indexes and LOB storage are indeed the primary use cases for using non-standard block sizes for one or more tablespaces. Don't forget that you need to allocate the appropriate buffer cache.
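The extent arithmetic above can be sketched in a few lines (the numbers are the ones from the discussion; the class and method names are mine):

```java
public class BlockMath {
  // How many OS blocks a uniform extent spans, given sizes in bytes.
  static long osBlocksPerExtent(long extentBytes, long osBlockBytes) {
    return (extentBytes + osBlockBytes - 1) / osBlockBytes; // round up
  }

  public static void main(String[] args) {
    long extent   = 64 * 1024; // 64 KB uniform extent
    long osBlock  = 8 * 1024;  // 8 KB OS block
    long oraBlock = 2 * 1024;  // 2 KB Oracle block
    System.out.println("Oracle blocks per extent: " + extent / oraBlock); // 32
    System.out.println("OS blocks per extent: "
        + osBlocksPerExtent(extent, osBlock)); // 8
    // The extent is an exact multiple of the OS block, so no space is wasted
    // inside it; only the first and last OS blocks of a file can misalign.
  }
}
```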

I am using your software: CS6 InDesign Suite on a PC using a Windows 7 operating system.     Due to eyesight issues, I need to have the menu bars of programs in easy-to-read font and picture size.  Specifically, the menu bar

    Hi,
I am using your software: CS6 InDesign Suite on a PC using a Windows 7 operating system.
    Due to eyesight issues, I need to have the menu bars of programs in easy-to-read font and picture size.  Specifically, the menu bar across the top (File, Edit, View, etc.), and the menu bar on the left side with the graphic depiction of options.
    In earlier versions of Windows (e.g. XP), whenever I changed the screen resolution on my computer to a lesser resolution in order to show the link icons on my desktop in a larger, more readable size, all the software programs, including yours, appeared on my screen with the menu bars in the larger font size that I needed.
    However, in Windows 7, this is not the case.  Even though I have selected the lowest resolution, making the icons on my desktop extremely large, I cannot read the options across the top menu bar of your program, nor the pull-down menu items that they contain.  I cannot see the graphic depictions of options on the left side of the screen. They are all too small.  How can I make your program increase the size?

CS6 is not high-DPI compatible/enabled and that can't be changed. If you cannot make it work with your operating system's means, then short of joining Creative Cloud and using newer versions there is nothing you can do.
    Mylenium

  • I am using CS6 InDesign suite on a PC using a Windows 7 operating system.     Due to eyesight issues, I need to have the menu bars of programs in easy-to-read font and picture size.  Specifically, the menu bar across the top (File, Edit, View, etc.), and

    I am using CS6 InDesign on a PC using a Windows 7 operating system.
    Due to eyesight issues, I need to have the menu bars of programs in easy-to-read font and picture size.  Specifically, the menu bar across the top (File, Edit, View, etc.), and the menu bar on the left side with the graphic depiction of options.
    In earlier versions of Windows (e.g. XP), whenever I changed the screen resolution on my computer to a lesser resolution in order to show the link icons on my desktop in a larger, more readable size, all the software programs, including yours, appeared on my screen with the menu bars in the larger font size that I needed.
    However, in Windows 7, this is not the case.  Even though I have selected the lowest resolution, making the icons on my desktop extremely large, I cannot read the options across the top menu bar of your program, nor the pull-down menu items that they contain.  I cannot see the graphic depictions of options on the left side of the screen. They are all too small.  How can I make your program increase the size?

    NO way.
    Mylenium

  • Operating system update made all icons and fonts on my screen smaller.  how to get back to original size?

I just upgraded the operating system on my Mac desktop.  All the icons and fonts are smaller and hard to read.  How do I get them back to their pre-upgrade size?

When posting in Apple Communities/Forums/Message Boards, it would help us to know which Mac model you have, which OS & version you're using, how much RAM, etc. You can have this info displayed at the bottom of every post by completing your system profile and filling in the information asked for.
    CLICKY CLICK-----> Help us to help you on these forums
    ***This will help in providing you with the proper and/or correct solutions.***

  • Oracle's block size and operative system cluster

Hello,
I would like to move our Oracle 10.2 database from its server to a datacenter with a VMware environment, where the net admin will install Windows 2008 64-bit.
Now, the net admin strongly believes that if the Oracle block size is close to the cluster size of the cooked filesystem, Oracle performance will increase, and he wants to set the Oracle block size to 64 K.
Is this a good idea or a mistake?
Thanks in advance to anyone who answers.

    Dan_58 wrote:
Hello,
I would like to move our Oracle 10.2 database from its server to a datacenter with a VMware environment, where the net admin will install Windows 2008 64-bit.
Now, the net admin strongly believes that if the Oracle block size is close to the cluster size of the cooked filesystem, Oracle performance will increase, and he wants to set the Oracle block size to 64 K.
Is this a good idea or a mistake?
    I think he's probably had more experience with SQL Server - which is more closely integrated with the operating system, which means this type of thinking can be of some help. SQL Server has a fixed extent size of 64K (8 x 8KB pages) and an algorithm that allows it to use readahead on extents, so it's fairly common practice to synchronise the SQL Server extents with the operating system allocation unit - which is why, historically, SQL Server admins would set up the O/S with a 64KB allocation unit and fuss about aligning the allocation unit properly on the hardware.
    This type of thinking is not quite so important in Oracle systems.
    Regards
    Jonathan Lewis

  • Heap Size in 64 bit operating systems

    Hi,
I have written a Java application which needs a very large heap. I tried to run the application under a 32-bit Windows operating system, but with a 32-bit OS I got a maximum heap size of 1.5 GB. To get more heap I installed Windows 2003 Standard Edition, 64-bit version, and the 64-bit JDK 1.7 for Windows. I am using the NetBeans IDE 6.7, which I have configured for the 64-bit JVM. With this configuration, when I tried to set a heap size of 1.6 GB, it gave the error "Could not create Java Virtual machine...". Also, if I check the process in Task Manager, I see a java.exe *32 process. Does this mean the JVM being used is still a 32-bit JVM?
Please let me know how to get a larger heap under a 64-bit OS.
    Regards,
    -Suresh Shirgave

    From a cmd window run a simple program using a 64-bit jvm (HelloWorld or similar) with different -Xmx values to establish the limit at which the program will run.
<pathTo64-bitJVM>java -Xmxnn HelloWorld
If that's different from NB, then it's a setting of some kind in NB. Maybe you are pointing NB to a 32-bit JDK? Check the value in <NBInstallDirectory>\etc\netbeans.conf. If that's not it, check NB Help. You might want to ask the question on a NetBeans forum; since this is a new version, there may be a problem. Note that these forums are for Java language topics.
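A minimal probe class for such a test might look like this (my own sketch, not from the thread; note that sun.arch.data.model is a HotSpot-specific property and may be null on other JVMs):

```java
public class HeapProbe {
  // Heap the JVM actually granted, in megabytes.
  static long maxHeapMb() {
    return Runtime.getRuntime().maxMemory() / (1024 * 1024);
  }

  public static void main(String[] args) {
    System.out.println("Max heap (MB): " + maxHeapMb());
    // "32" or "64" on HotSpot JVMs; may be null elsewhere.
    System.out.println("Data model: " + System.getProperty("sun.arch.data.model"));
    System.out.println("os.arch: " + System.getProperty("os.arch"));
  }
}
```

Run it as `java -Xmx1600m HeapProbe`: if the reported heap is far below what you asked for, or the data model says 32, the launcher is finding a 32-bit JVM.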

  • Operating system block size

What's the operating system block size?
And where can I change it?

    Hi,
    Maybe the following might be helpful;
    Re: How to know the OS block size ?
    Adith

What is the recommended size of the system drive to keep operating system files, paging files and the memory dump of a Hyper-V host?

    Hi ,
I want to set up a Hyper-V host with 128 GB RAM and Windows 2012 R2.
What is the recommended size of the system drive to keep operating system files, paging files and the memory dump?
I tested using 150 GB, but when the server crashed, there was no free space left to keep the memory dump file.
    Ramy

    Hi Ramy,
For Server 2012 R2 the absolute minimum system drive is 32 GB, but this assumes you have limited RAM or have your page file located on another drive. It used to be best practice to set up a small page file, but MS PFE now suggest leaving Windows to manage the page file size.
Obviously this is not always possible depending on the amount of RAM in the system, but base the system drive around this, or offload the page file to another drive. On top of this you also need space for the memory dump to be written, potentially again up to the size of the RAM. Assuming you fire the machine back up after a crash, you need space for the OS and the page file, plus space for the associated dump file.
There is a nice little article here that may be of assistance: http://social.technet.microsoft.com/wiki/contents/articles/13383.best-practices-for-page-file-and-minimum-drive-size-for-os-partition-on-windows-servers.aspx
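The sizing rule above (OS footprint + page file + full memory dump, the latter two each up to RAM size in the worst case) can be sketched as follows; the 40 GB OS footprint is my own working assumption, not a figure from the thread:

```java
public class DriveSizing {
  // Rough worst-case system-drive estimate, all sizes in GB:
  // OS footprint + page file (up to RAM) + complete memory dump (up to RAM).
  static int minSystemDriveGb(int ramGb, int osFootprintGb) {
    return osFootprintGb + ramGb /* page file */ + ramGb /* memory dump */;
  }

  public static void main(String[] args) {
    // The 128 GB RAM host from the question: 150 GB is indeed too small.
    System.out.println(minSystemDriveGb(128, 40)); // 296
  }
}
```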
    Kind Regards
    Michael Coutanche

  • Leopard operating system size?

    Hi,
I am completely new to Mac and a little confused. My MacBook is brand new, but there are already about 18 GB taken on its hard drive. Is it the operating system that takes up all that space? That looks huge!
I've tried to partition the drive, but then changed my mind and now it's a single volume again. So I'm just wondering where all the space went...

    My brother paid $10,000 (?!?!?!) for a one gigabyte external Apple SCSI hard drive. At the time, he had a small recording studio and a one gig SCSI drive was "state of the art" equipment.

  • Linux Serial NI-VISA - Can the buffer size be changed from 4096?

I am communicating with a serial device on Linux, using LV 7.0 and NI-VISA. About a year and a half ago I asked customer support if it was possible to change the buffer size for serial communication. At that time I was using NI-VISA 3.0. In my program the VISA function for setting the buffer size would send back an error of 1073676424, and the buffer would always remain at 4096, no matter what value was input into the buffer size control. The answer to this problem was that the error code was just a warning, letting you know that you could not change the buffer size on a Linux machine, and 4096 bytes was the preset buffer size (unchangeable). According to the person who was helping me: "The reason that it doesn't work on those platforms (Linux, Solaris, Mac OS X) is that it is simply unavailable in the POSIX serial API that VISA uses on these operating systems."
    Now I have upgraded to NI-VISA 3.4 and I am asking the same question. I notice that an error code is no longer sent when I input different values for the buffer size. However, in my program, the bytes returned from the device max out at 4096, no matter what value I input into the buffer size control. So, has VISA changed, and it is now possible to change the buffer size, but I am setting it up wrong? Or, have the error codes changed, but it is still not possible to change the buffer size on a Linux machine with NI-VISA?
    Thanks,
    Sam

    The buffer size still can't be set, but it seems that we are no longer returning the warning. We'll see if we can get the warning back for the next version of VISA.
    Thanks,
    Josh

  • What's the optimum buffer size?

    Hi everyone,
I'm having trouble with my unzipping method. The thing is, when I unzip a smaller file, say 200 KB, it unzips fine. But when it comes to large files, say 10,000 KB, it doesn't unzip at all!
I'm guessing it has something to do with buffer size... or does it? Could someone please explain what is wrong?
    Here's my code:
import java.io.*;
import java.util.zip.*;
/**
  * Utility class with methods to zip/unzip and gzip/gunzip files.
  */
public class ZipGzipper {
  public static final int BUF_SIZE = 8192;
  public static final int STATUS_OK          = 0;
  public static final int STATUS_OUT_FAIL    = 1; // No output stream.
  public static final int STATUS_ZIP_FAIL    = 2; // No zipped file
  public static final int STATUS_GZIP_FAIL   = 3; // No gzipped file
  public static final int STATUS_IN_FAIL     = 4; // No input stream.
  public static final int STATUS_UNZIP_FAIL  = 5; // No decompressed zip file
  public static final int STATUS_GUNZIP_FAIL = 6; // No decompressed gzip file
  private static String fMessages [] = {
    "Operation succeeded",
    "Failed to create output stream",
    "Failed to create zipped file",
    "Failed to create gzipped file",
    "Failed to open input stream",
    "Failed to decompress zip file",
    "Failed to decompress gzip file"
  };
  /**
    *  Unzip the files from a zip archive into the given output directory.
    *  It is assumed the archive file ends in ".zip".
    */
  public static int unzipFile (File file_input, File dir_output) {
    // Create a buffered zip stream to the archive file input.
    ZipInputStream zip_in_stream;
    try {
      FileInputStream in = new FileInputStream (file_input);
      BufferedInputStream source = new BufferedInputStream (in);
      zip_in_stream = new ZipInputStream (source);
    }
    catch (IOException e) {
      return STATUS_IN_FAIL;
    }
    // Need a buffer for reading from the input file.
    byte[] input_buffer = new byte[BUF_SIZE];
    int len = 0;
    // Loop through the entries in the ZIP archive and read
    // each compressed file.
    do {
      try {
        // Need to read the ZipEntry for each file in the archive
        ZipEntry zip_entry = zip_in_stream.getNextEntry ();
        if (zip_entry == null) break;
        // Use the ZipEntry name as that of the compressed file.
        File output_file = new File (dir_output, zip_entry.getName ());
        // Create a buffered output stream.
        FileOutputStream out = new FileOutputStream (output_file);
        BufferedOutputStream destination =
          new BufferedOutputStream (out, BUF_SIZE);
        // Reading from the zip input stream will decompress the data
        // which is then written to the output file.
        while ((len = zip_in_stream.read (input_buffer, 0, BUF_SIZE)) != -1)
          destination.write (input_buffer, 0, len);
        destination.flush (); // Ensure all the data is output
        out.close ();
      }
      catch (IOException e) {
        return STATUS_GUNZIP_FAIL;
      }
    } while (true); // Continue reading files from the archive
    try {
      zip_in_stream.close ();
    }
    catch (IOException e) {}
    return STATUS_OK;
  } // unzipFile
} // ZipGzipper

Thanks!!!!

Any more hints on how to fix it? I've been fiddling around with it for an hour, and throwing more exceptions. But I'm still no closer to debugging it!
Thanks

Did you add:
e.printStackTrace();
to your catch blocks?
Didn't you in that case get an exception which says something similar to:
java.io.FileNotFoundException: C:\TEMP\test\com\blah\icon.gif (The system cannot find the path specified)
     at java.io.FileOutputStream.open(Native Method)
     at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
     at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
     at Test.unzipFile(Test.java:68)
     at Test.main(Test.java:10)
Which says that the error is thrown here:
     // Create a buffered output stream.
     FileOutputStream out = new FileOutputStream(output_file);
Kaj
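The stack trace points at the real problem: the archive contains entries in subdirectories (like com/blah/icon.gif), but the unzip code never creates those parent directories before opening the FileOutputStream, so the open fails. A sketch of the fix (the helper name is mine; directory entries are skipped and parent directories created before writing):

```java
import java.io.File;
import java.util.zip.ZipEntry;

public class UnzipFix {
  // Resolve where a zip entry should be written; create its parent
  // directories first, and return null for directory entries.
  static File prepareOutput(File dirOutput, ZipEntry entry) {
    File outputFile = new File(dirOutput, entry.getName());
    if (entry.isDirectory()) {
      outputFile.mkdirs();
      return null;               // nothing to write for a directory entry
    }
    File parent = outputFile.getParentFile();
    if (parent != null) parent.mkdirs(); // no-op if they already exist
    return outputFile;
  }

  public static void main(String[] args) {
    File tmp = new File(System.getProperty("java.io.tmpdir"), "unzipfix-demo");
    File out = prepareOutput(tmp, new ZipEntry("com/blah/icon.gif"));
    System.out.println(out.getParentFile().exists()); // directories now exist
  }
}
```

In the original unzipFile, the call would go right before `new FileOutputStream (output_file)`. The failure having nothing to do with file size matches the symptom: small test archives tend to be flat, larger real ones have nested paths.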

  • How to set buffer size mac?

    Video streaming is intermittent, BBC iPlayer.
    Cannot get info about router but have latest Apple Extreme modem in MacBook Air.

    A DAQboard is an IOtech device. I use a DAQbook in one of my programs.
You set the buffer size by calling Acquisition Scan Configuration.vi, assuming you've downloaded the IOtech enhanced LabVIEW drivers. The buffer size input to this VI is in number of scans. There's no real rule for the size you should use. Just set it to a size that's big enough to hold as many scans as you will need buffered before you read them. For example, if the board is set up to sample at 100 Hz and you read the buffer once a second, make sure the buffer is at least 100 scans.
    As far as your other question, like Dennis said, you don't have to do anything to reserve the memory. The operating system will take care of it for you.
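The sizing rule above is just sample rate times read interval; a sketch (names and the safety factor are mine, not from the thread):

```java
public class ScanBuffer {
  // Minimum scans the buffer must hold: sample rate times the interval
  // between reads, rounded up, times a headroom factor of your choosing.
  static int minBufferScans(double sampleRateHz, double readIntervalSec,
                            double safetyFactor) {
    return (int) Math.ceil(sampleRateHz * readIntervalSec * safetyFactor);
  }

  public static void main(String[] args) {
    // 100 Hz sampling, read once per second, 2x headroom
    System.out.println(minBufferScans(100.0, 1.0, 2.0)); // 200
  }
}
```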

  • CRMD_ORDER  failed with operating system recv call failed 10054

Hi,
Our functional folks are trying to create a sales order in CRM using the CRMD_ORDER transaction. In the menu of the sales order creation, after selecting a product, the GUI abnormally terminates with the error message "CR1: SAP system message: Work process restarted; session terminated".
In the system log I found that the operating system recv call failed, as follows:
    =========================
    A1 0 Initialization complete
    Q0 I Operating system call recv failed (error no. 10054)
    Q0 Q Start Workproc 0, Pid 6380
    R4 7 Delete session 001 after error 023
    A1 0 Initialization complete
    A1 4 > in program , line ??? , event
    ================
In the work process trace file I found the following error:
    =========
    ThJAttachVmContainer2: found eventBits 0x40 for V1
    TH_VMC_EVENT_ROLL_IN
    ThJAttachVm: vm V1 already attached
    ThJAttachAll: return 0
    ERROR => VMCErrInfo 1 [thxxvmc.c 6049]
    msgArea=14
    B dbcalbuf: Buffer CALE (addr: 0000000010D20050, size: 500000, end:
    0000000010D9A170)
    M CCMS: AlInitGlobals : alert/use_sema_lock = TRUE.
    I *** ERROR => OpenProcess PID 3692 failed for checking semaphore 12
    ERROR_INVALID_PARAMETER: The parameter is incorrect.
    [semnt.c 1920]
    S *** init spool environment
    S TSPEVJOB updates inside critical section: event_update_nocsec = 0
    S initialize debug system
    T Stack direction is downwards.
    T debug control: prepare exclude for printer trace
    T new memory block 000000000D1B0B60
    ========
I have updated to the latest kernel, NW701 patch no. 48, and tried, but no luck. Also, as per note 559119, I changed the parameter gw/gw_disconnect to the value 0, but still no luck.
I also tried the following: to prevent firewall idle timeouts, I set a low value for the parameter "gw/keepalive" in the instances; and since users are being disconnected, I tried setting rdisp/keepalive to a low value (20).
I then deactivated both parameters with the value 0, but still no luck.
Appreciate any help.

    Glen,
What OS/DB combination are you on?  We are seeing something similar on NW 7.0/CRM 2007 on Windows 2003/Oracle, where the work process dies in similar fashion but with an error 050.
    Take care,
    Stephen
