Block Sizes on UIX Table

Hello,
I am using JDev 10.1.2 and UIX version 2.2.16. I have a UIX search page, and I'd like to allow users to specify how many results are returned by a query. Currently, my table is very simple:
<table width="100%"
model="${bindings.FindPresubmissionsView1}"
id="FindPresubmissionsView10"
partialRenderMode="multiple"
partialTargets="_uixState">
This is obviously followed by a number of columns, etc. I know that normally the number of rows displayed is determined by the iterator range size for the view object in my page's UI model. I played around a little with blockSize, but it doesn't seem to actually affect the number of rows displayed, and I have a feeling I'm going about this the wrong way.
Has anyone done anything similar? Or can anyone point me toward a working example of this? The UIX developer's guide seems to discuss what I'm looking for, but I'm a little confused about how to implement it. Is there any place to download demos of the examples in the developer's guide, perhaps? Thanks, all!
John

Hi
Oops, I had no idea that you could dynamically change the iterator rangeSize on the Java backend, which basically makes any changes to my UIX page unnecessary :) Thanks!
John

Similar Messages

  • Using large block sizes for index and table spaces

    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes? I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
    How to find the current block size used for tables and index? is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
    What about tables, what is the success criterion for tables. can we use the same flat tree str criterion? Is there a query for this?

    user3390467 wrote:
    > "You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes."
    > Is this a generic statement that I can use for all tables or indexes?
    This is a generic statement used by some consultants. Unfortunately, it is riddled with exceptions and other considerations.
    One consultant in particular seems to have anecdotal evidence that using different block sizes for indexes (big) and data (small) can yield almost miraculous improvements. However, that cannot be backed up due to NDA. Many of the rest of us cannot duplicate the improvements, and indeed some find situations where it results in a degradation (especially with high insert/update rates from separate transactions).
    > I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
    > How do I find the current block size used for tables and indexes? Is there a v$parameter query?
    > What is an optimal block size value for batch?
    > How do I know when a flatter tree structure has been achieved? Is there a query to determine this?
    > What about tables? What is the success criterion for tables? Is there a query for this?
    I'd strongly recommend that you:
    1) stop using generic tools to analyze specific problems
    2) define your problem in detail (what are you really trying to accomplish? It seems like performance tuning, but you never actually say so)
    3) define the OS and DB version - in detail. Give rev levels and patch levels.
    If you are having a serious performance issue, I strongly recommend you look at some performance tuning specialists like "http://www.method-r.com/", "http://www.miracleas.dk/", "http://www.hotsos.com/", "http://www.pythian.com/", or even Oracle's Performance Tuning consultants. Definitely worth the price of admission.
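    As for the side question (how to find the current block size used for tables and indexes): block size is an attribute of the tablespace, not of the table or index, so a minimal sketch would look like the following (the schema name is a placeholder; the same join against dba_indexes answers it for indexes):
    SELECT t.table_name, ts.tablespace_name, ts.block_size
    FROM   dba_tables t
    JOIN   dba_tablespaces ts ON ts.tablespace_name = t.tablespace_name
    WHERE  t.owner = 'YOUR_SCHEMA';   -- placeholder: use the real owner
    -- and the database-wide default, which is the v$parameter query you asked about:
    SELECT value FROM v$parameter WHERE name = 'db_block_size';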

  • DB2 Block Size

    Dear Experts,
    How to know the block size of a table in DB2?
    Currently, we are performing a 'move table' to a new tablespace. The process has taken over 30 hours for just one table, which is large and has big blocks. Before running the 'move table' job via DB6CONV, we did not know what the block size was.
    We need to know the block size in order to estimate the time the system will take in the future, so we can predict the downtime; from our monitoring, one block takes approximately 20 to 30 seconds.
    We use ECC6, DB2 ver 8.2 and AIX 5.3.
    Need your quick response.
    Thanks and Regards,
    Rudi

    Hi Diane,
    which DB2 version do you use? Please post db2level.
    Did you copy the right Version (32-bit x86 folder ntintel, 64-bit x64 ntamd64)?
    Can you catalogue the stored procedure manually? Try:
    db2 "CREATE PROCEDURE SAPTOOLS.ONLINE_TABLE_MOVE(
          IN  TABSCHEMA    VARCHAR(128),
          IN  TABNAME      VARCHAR(128),
          IN  DATA_TBSP    VARCHAR(128),
          IN  INDEX_TBSP   VARCHAR(128),
          IN  LOB_TBSP     VARCHAR(128),
          IN  MDC_COLUMNS  VARCHAR(32672),
          IN  PARTKEY_COLS VARCHAR(32672),
          IN  OPERATION    VARCHAR(128))
        SPECIFIC ONLINE_TABLE_MOVE
        DYNAMIC RESULT SETS 1
        MODIFIES SQL DATA
        NOT DETERMINISTIC
        CALLED ON NULL INPUT
        LANGUAGE C
        EXTERNAL NAME 'online_table_move_sp!online_table_move'
        FENCED THREADSAFE
        PARAMETER STYLE SQL
        PROGRAM TYPE SUB
        DBINFO"
    If you still get problems, please open a separate thread. This one is DB2 Block Size related.
    regards Siegfried
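    Coming back to the original question (how to see the block/page size of a table in DB2): in DB2 LUW the page size is an attribute of the tablespace, so a minimal sketch against the catalog, assuming the table lives in your SAP schema, is:
    db2 "SELECT t.tabname, t.tbspace, ts.pagesize
         FROM syscat.tables t
         JOIN syscat.tablespaces ts ON ts.tbspace = t.tbspace
         WHERE t.tabschema = 'SAPR3'"
    The schema name 'SAPR3' is a placeholder; PAGESIZE is reported in bytes (4096, 8192, 16384 or 32768).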

  • TABLE --- BLOCK SIZE - HELP

    Hi,
    Good day to all.
    There are in total 4 tables (EMP, DEPT, STORE_INFO, WAREHOUSE_INFO) which have range partitioning applied. (The partitioning is on the date range for the current date, i.e. for every 1 day a partition is created automatically.)
    There is a timer concept applied such that the purge is done automatically based on the number of days we pass in.
    But I can see the information in my table as "11 - which indicates that the table is overloaded".
    To my surprise, for all the tables NUM_ROWS and BLOCKS are NULL, except for the "EMP" table, for which NUM_ROWS is NULL and BLOCKS is 1,540,356.
    1)     Will this be an issue during the purge?
    2)     Why is NUM_ROWS NULL while BLOCKS is showing such a high value?
    Please help....

    Thanks Hemant for your prompt reply.
    Sorry, I forgot to mention that this information is from our records: "11 - status which moves when the table is overloaded".
    Actually, I was under the assumption that the table-overloaded issue was being raised "because of the BLOCK SIZE".
    Is the below code right?
    When I run it, it gives the following error:
    Error
    ===
    ORA-20000: TABLE "MOQ"."EMP" does not exist or insufficient privileges
    ORA-06512: at "SYS.DBMS_STATS", line 2105
    ORA-06512: at "SYS.DBMS_STATS", line 5210
    ORA-06512: at "SYS.DBMS_STATS", line 5243
    ORA-06512: at line 8
    DECLARE
       num_rows      NUMBER;
       num_blocks    NUMBER;
       avg_row_len   NUMBER;
    BEGIN
       -- retrieve the values of table statistics on MOQ.EMP
       -- statistics table name: EMP    statistics ID: TEST1
       -- MOQ - SCHEMA_NAME
       DBMS_STATS.get_table_stats ('MOQ'
                                  ,'EMP'
                                  ,NULL
                                  ,'EMP'
                                  ,'TEST1'
                                  ,num_rows
                                  ,num_blocks
                                  ,avg_row_len);
       -- print the values
       DBMS_OUTPUT.put_line (   'num_rows='
                             || num_rows
                             || ',num_blocks='
                             || num_blocks
                             || ',avg_row_len='
                             || avg_row_len);
    END;
    Or does only a DBA have the privilege?
    Please help
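    One note: NUM_ROWS and BLOCKS in the *_TABLES views stay NULL until optimizer statistics are gathered, which may be part of what you are seeing. A minimal sketch (this needs suitable privileges on MOQ.EMP, and missing privileges are one common cause of that ORA-20000):
    BEGIN
       DBMS_STATS.gather_table_stats (ownname => 'MOQ', tabname => 'EMP');
    END;
    /
    After that, GET_TABLE_STATS (or a simple query on ALL_TABLES) should return populated values.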

  • "Next" and "Previous" functionality on UIX tables not working with 10g

    With the new release of JDeveloper (10g), the "Next" and "Previous" navigation buttons/links on UIX tables are not working. I tried different approaches:
    1. It does not work with Data Controls built from Java Beans or from TopLink.
    2. I tried without using Data Controls and it still does not work.
    It shows the "Next" and "Previous" buttons on the page, but it shows all records rather than a limited set. For example, if the block size is 5 and the total number of records is 15, it shows all 15 records in the table, but the "Next" and "Previous" control says "1-5 of 15".
    Did any of you observe the same behaviour?

    Hi Shital -
    Thanks for the additional info...
    > When I said that the total number of records is 15, I meant that my tableData's DataObjectList contains 15 entries. (In the case of DataControls you don't even use DataObjectList, but for my non data control applications I used DataObjectList.) You are saying that if I want to display only 5 records per page then I will need to provide a DataObjectList with five items. Then for the next five records, 6-10, I will have to program in such a way that my method call returns records 6-10.
    That's correct. In the case where you are explicitly providing data to the table via a DataObjectList, you need to feed the data to the table in page-size blocks - and you also need to handle the table's goto event to scroll the table to the next/previous block of data.
    > In the previous version of UIX (2.1.7) I never had to program for the next and previous buttons. UIX tables used to take care of that. That's why I am so surprised.
    It sounds like you must have been using the <bc4j:table> component. Is that the case?
    Getting back to your original issue...
    > 1. It does not work with Data Controls built from Java Beans or from TopLink.
    I believe that this is a bug in the preview release - and I'm fairly sure this will be addressed by production. In production, ADF should automatically handle wiring up table scrolling for you when binding your table to a data control - whether the data control is implemented via JavaBeans, TopLink, or BC4J. I believe that in the preview release, scrolling only works when binding to a BC4J data control.
    Andy

  • Need help in analyzing size of a table

    Hi,
    We have a table DUMMY_TABLE which has a CLOB column. We need to reduce the size of the table by updating the CLOB column. The table statistics before updating the CLOB column show a size of 2MB, as listed below. After we run the update script on the CLOB column, the table size increases to 33MB, as also shown below. Can you please let us know why the table size increases after the update script?
    Note: before the update script, the CLOB column held even bigger data.
    Before running the update script:
    Table Size - 2 MB
    Blocks - 256
    Extents - 17
    After
    running the update script:
    Table Size - 33 MB
    Blocks - 4224
    Extents - 48
    The query which we run to get the statistics is: “SELECT * FROM user_segments WHERE segment_name LIKE 'DUMMY_TABLE'”
    Update script for updating clob column is:
    DECLARE
    data1 clob;
    BEGIN
    data1 :='Dear <%UserFirstName%>,
    Below is the password you requested for the <%Company Name%>.
    Password: <%xxx%>
    To access the login page of the <%Company Name%>, please click on the following web-address:
    <%ApplicationURL%><%UserIdentificationValue%>
    If your email software does not support web links (i.e. clicking on the web-address does not take you directly to the <%Company Name%>), copy the above web-address into the "Address" or "Location" bar of your Internet web-browser in order to access the CareerTracker.
    Sincerely,
    Human Resource Department
    <%Company Name%>
    This is an automated email; please do not reply to this email address.';
    UPDATE DUMMY_TABLE SET data = data1;
    END;

    Gather statistics on the table and check again.
    begin
    DBMS_STATS.gather_table_stats(ownname=>'syslog',tabname=>'logs');
    end;
    Regards
    Asif Kabir
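    Also note that a CLOB is stored in its own LOB segment, so the table segment alone understates the real size. A minimal sketch that picks up the LOB segment as well (USER_LOBS maps the column to its segment):
    SELECT segment_name, segment_type, bytes/1024/1024 AS mb
    FROM   user_segments
    WHERE  segment_name = 'DUMMY_TABLE'
       OR  segment_name IN (SELECT segment_name FROM user_lobs
                            WHERE  table_name = 'DUMMY_TABLE');
    Updating every row also keeps old LOB versions around for consistent reads (PCTVERSION/RETENTION), which is one reason the segments can grow well beyond the size of the current data.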

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a Data Warehouse on Oracle 11g R2, and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that. The question is: what is the general advice regarding tablespaces and block size? I did some research and it is hard to find a clear answer; some resources advise that block size is not important and can be left small (8 KB), while others state that it is crucial and should be as big as possible (64KB). The other question is what part of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice that may even decrease performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that is why they should be split out. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions on separate tablespaces?
    Any help highly appreciated and thank you in advance.

    Wojtus-J wrote:
    > We are preparing to implement Data Warehouse on Oracle 11g R2 and currently I am trying to set up some storage strategy - unfortunately I have very little experience with that.
    With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
    > The question is what is the general advice regarding tablespaces and block size?
    If you need to ask about block sizes, use the default (i.e. 8KB).
    > I did some research and it is hard to find a clear answer.
    But if you get contradictory advice from this forum, how would you decide which bits to follow?
    A couple of sensible guidelines when researching on the internet: look for material that is datestamped with recent dates (the last couple of years), or that references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant; give greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    > The other thing is what part of the data should be placed where? Many resources state that keeping indexes apart from their data is a myth and a bad practice that may even decrease performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up and that is why they should be split out. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions on separate tablespaces?
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance argument was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons: big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition the data so that a large fraction of it can eventually be made read-only: using tablespaces to mark time-boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary, e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
    Regards
    Jonathan Lewis
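    A minimal sketch of that aging pattern; every object and tablespace name below is made up for illustration:
    -- move an aged partition into its time-specific tablespace, packing it
    ALTER TABLE sales MOVE PARTITION sales_2010_q1 TABLESPACE ts_2010 COMPRESS;
    -- local index partitions go unusable after the move; rebuild them there too
    ALTER INDEX sales_loc_idx REBUILD PARTITION sales_2010_q1 TABLESPACE ts_2010;
    -- once the tablespace holds only closed periods, freeze it
    ALTER TABLESPACE ts_2010 READ ONLY;
    A read-only tablespace then only needs to be backed up once.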

  • How to get the size of the table

    Hi All,
    How to get the size of the table in Oracle 10g?
    Is there any script which needs to be run?
    Regards,
    Apoorv

    Hi All,
    Sorry, but somehow the USER_SEGMENTS view is not populated in my case. However, we have another view, SYS.ALL_TABLES, whose structure is given below. Would I be able to calculate the table size based on the columns given below?
    ColumnName     Data Type
    OWNER     VARCHAR2 (30 Byte)
    TABLE_NAME     VARCHAR2 (30 Byte)
    TABLESPACE_NAME     VARCHAR2 (30 Byte)
    CLUSTER_NAME     VARCHAR2 (30 Byte)
    IOT_NAME     VARCHAR2 (30 Byte)
    STATUS     VARCHAR2 (8 Byte)
    PCT_FREE     NUMBER
    PCT_USED     NUMBER
    INI_TRANS     NUMBER
    MAX_TRANS     NUMBER
    INITIAL_EXTENT     NUMBER
    NEXT_EXTENT     NUMBER
    MIN_EXTENTS     NUMBER
    MAX_EXTENTS     NUMBER
    PCT_INCREASE     NUMBER
    FREELISTS     NUMBER
    FREELIST_GROUPS     NUMBER
    LOGGING     VARCHAR2 (3 Byte)
    BACKED_UP     VARCHAR2 (1 Byte)
    NUM_ROWS     NUMBER
    BLOCKS     NUMBER
    EMPTY_BLOCKS     NUMBER
    AVG_SPACE     NUMBER
    CHAIN_CNT     NUMBER
    AVG_ROW_LEN     NUMBER
    AVG_SPACE_FREELIST_BLOCKS     NUMBER
    NUM_FREELIST_BLOCKS     NUMBER
    DEGREE     VARCHAR2 (10 Byte)
    INSTANCES     VARCHAR2 (10 Byte)
    CACHE     VARCHAR2 (5 Byte)
    TABLE_LOCK     VARCHAR2 (8 Byte)
    SAMPLE_SIZE     NUMBER
    LAST_ANALYZED     DATE
    PARTITIONED     VARCHAR2 (3 Byte)
    IOT_TYPE     VARCHAR2 (12 Byte)
    TEMPORARY     VARCHAR2 (1 Byte)
    SECONDARY     VARCHAR2 (1 Byte)
    NESTED     VARCHAR2 (3 Byte)
    BUFFER_POOL     VARCHAR2 (7 Byte)
    ROW_MOVEMENT     VARCHAR2 (8 Byte)
    GLOBAL_STATS     VARCHAR2 (3 Byte)
    USER_STATS     VARCHAR2 (3 Byte)
    DURATION     VARCHAR2 (15 Byte)
    SKIP_CORRUPT     VARCHAR2 (8 Byte)
    MONITORING     VARCHAR2 (3 Byte)
    CLUSTER_OWNER     VARCHAR2 (30 Byte)
    DEPENDENCIES     VARCHAR2 (8 Byte)
    COMPRESSION     VARCHAR2 (8 Byte)
    DROPPED     VARCHAR2 (3 Byte)
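    Yes, with the caveat that those columns are statistics and stay NULL until stats are gathered. A minimal sketch, assuming the default 8K block size:
    SELECT owner, table_name,
           blocks * 8192 / 1024 / 1024           AS allocated_mb_approx,  -- 8192 = assumed block size
           num_rows * avg_row_len / 1024 / 1024  AS row_data_mb_approx
    FROM   all_tables
    WHERE  table_name = 'YOUR_TABLE';   -- placeholder
    This counts only blocks below the high water mark and ignores indexes and LOB segments, so DBA_SEGMENTS/USER_SEGMENTS remains the more reliable source if you can get access to it.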

  • Buffer pool keep and multiple db block sizes

    I have a tablespace with an 8K block size (the database default) and a tablespace with a 16K block size. I have db_cache_size and db_16k_cache_size set (obviously).
    I also have the KEEP buffer pool configured in the database.
    Question: if a table is placed in a tablespace with a 16K block size and its buffer pool is KEEP, does it end up in the KEEP pool (like tables from an 8K tablespace with the KEEP pool set), or does it end up in the 16K buffer cache?

    You can find the answer in the following online manual:
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm#i16408
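    The short answer from that chapter, if I read it correctly: the KEEP and RECYCLE pools exist only for the standard block size, so a table in a 16K tablespace is cached in the 16K buffer cache regardless of its BUFFER_POOL KEEP setting. A minimal sketch of the parameters involved (sizes are placeholders):
    ALTER SYSTEM SET db_keep_cache_size = 256M;   -- KEEP pool, default (8K) block size only
    ALTER SYSTEM SET db_16k_cache_size  = 512M;   -- the only cache used by 16K-block segments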

  • How can we find size of a table having blob columns

    I have a problem in which I need to get the size of a table that has a BLOB column in it.
    How do I find the size?
    For example, I have a table DocBlob which has a BLOB column DocObject. How do I get the size of the DocBlob table?
    Also, how can one see the data in the BLOB column? In Toad or PL/SQL Developer we cannot see anything. We use it to store images; how can we verify what image or data is stored in these columns?
    thanks..

    Tables are segments. LOB Segments are segments.
    SELECT blocks or bytes FROM user_segments
    You can also use dbms_lob.getlength to retrieve the length of a LOB.
    http://www.morganslibrary.org/reference/dbms_lob.html
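    Putting that together for the DocBlob example: the LOB data and its LOB index are segments of their own, and USER_LOBS maps the column to both. A minimal sketch:
    SELECT segment_name, segment_type, bytes
    FROM   user_segments
    WHERE  segment_name = 'DOCBLOB'
       OR  segment_name IN (SELECT segment_name FROM user_lobs WHERE table_name = 'DOCBLOB')
       OR  segment_name IN (SELECT index_name   FROM user_lobs WHERE table_name = 'DOCBLOB');
    -- per-row size of the BLOB column, in bytes:
    SELECT dbms_lob.getlength(docobject) FROM docblob;
    To verify the image contents themselves you need a client that can render the binary data; getlength at least confirms that something is stored.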

  • Index size increases than table size

    Hi All,
    Could you let me know the possible reasons why an index can be larger than its table, and why in some cases the index is smaller than the table?
    Thanks in advance
    sherief

    Hi,
    The size of an index depends on how inserts and deletes occur.
    With sequential indexes, when records are deleted randomly the space will not be reused, as all inserts go into the leading leaf block.
    When all the records in a leaf block have been deleted, the leaf block is freed (put on the index freelist) for reuse, reducing the overall percentage of free space.
    This means that if you are deleting aged sequential records at the same rate as you are inserting, then the number of leaf blocks will stay approximately constant with a constant low percentage of free space. In this case it is most probably hardly ever worth rebuilding the index.
    With records being deleted randomly, the inefficiency of the index depends on how the index is used.
    If numerous full index (or range) scans are being done, then it should be rebuilt to reduce the leaf blocks read. This should be done before it significantly affects the performance of the system.
    If single index accesses are being done, then it only needs to be rebuilt to stop the branch depth increasing or to recover the unused space.
    Here is an example of how an index can become larger than its table:
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as admin
    SQL> create table rich as select rownum c1,'Verde' c2 from all_objects;
    Table created
    SQL> create index rich_i on rich(c1);
    Index created
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1179648 144 9
    INDEX 1179648 144 9
    SQL> delete from rich where mod(c1,2)=0;
    29475 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1179648 144 9
    INDEX 1179648 144 9
    SQL> insert into rich select rownum+100000, 'qq' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1703936 208 13
    INDEX 2097152 256 16
    SQL> insert into rich select rownum+200000, 'aa' from all_objects;
    58952 rows inserted
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 2752512 336 21
    INDEX 3014656 368 23
    SQL> delete from rich where mod(c1,2)=0;
    58952 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 2752512 336 21
    INDEX 3014656 368 23
    SQL> insert into rich select rownum+300000, 'hh' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 3014656 368 23
    INDEX 4063232 496 31
    SQL> alter index rich_i rebuild;
    Index altered
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 3014656 368 23
    INDEX 2752512 336 21
    SQL>

  • Transaction execution time and block size

    Hi,
    I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
    Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction time for 2K, 4K, and 8K Oracle block sizes. Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
    In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
    2K Oracle block database had 3 datafiles, each 7GB.
    4K Oracle block database had 2 datafiles, each 10GB.
    8K Oracle block database had 1 datafile of 20GB.
    The best transaction execution time was on the 8K block, the 4K block had a slightly longer transaction time, but the 2K Oracle block definitely had the worst transaction time.
    I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, which is the largest table in the database (2.9GB), and that executed slowly (the number of executions was low compared to the 8K numbers).
    Now here is my question. Is it possible that multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning about being a DBA, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics.
    THX to all.

    It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
    > I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
    A single drive does make it a little too easy to get apparently random variation in performance.
    > Each time I started a new test on a different block size, I created a new database from scratch to avoid messing something up.
    Did you do anything to ensure that the physical location of the data files was a very close match across databases? Inner tracks vs. outer tracks could make a difference.
    > (each time the SGA and PGA parameters were identical)
    Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to changes in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
    > In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits ...
    If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
    > The best transaction execution time was on the 8K block ... the 2K Oracle block definitely had the worst transaction time.
    We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation, and then it has to be consistent and reproducible.
    > I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table ...
    Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table? If not, then it consists of one segment, so did you mean to say "blocks" rather than segments? If blocks, which class of blocks?
    > Is it possible that multiple datafiles are the reason for these slow transaction times? ... how could I identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics?
    On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
    If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
    Regards
    Jonathan Lewis
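    For reference, a minimal sketch of the extended tracing suggested above; the SID and serial# are placeholders for the session running the test:
    EXEC dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
    -- ... run the slow transactions, then:
    EXEC dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
    The resulting trace file can then be summarized with tkprof, e.g. "tkprof <tracefile> out.txt sort=exeela".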

  • Specifying segments and block size manually

    Hi, just a quick question.
    Could anyone help me understand why someone would manually add segments to a tablespace (or is it a datafile they would be added to)? Does autoextend not take care of this?
    And secondly, why would you increase or decrease the block size of a segment? Is this because you may have small or large rows within a table and want a block size to accompany this?
    Any help would be appreciated.

    Hi,
    In Oracle, free space can be managed automatically or manually. You specify automatic segment space management when you create a locally managed tablespace.
    Free space can be managed automatically inside database segments. The in-segment free/used space is tracked using bitmaps, as opposed to free lists. Automatic segment space management offers the following benefits:
    -Ease of use
    -Better space utilization, especially for the objects with highly varying size rows
    -Better run-time adjustment to variations in concurrent access
    -Better multi-instance behavior in terms of performance/space utilization
    For manually managed tablespaces, two space management parameters, PCTFREE and PCTUSED, enable you to control the use of free space for inserts and updates to the rows in all the data blocks of a particular segment. Specify these parameters when you create or alter a table or cluster (which has its own data segment). You can also specify the storage parameter PCTFREE when creating or altering an index (which has its own index segment).
    see this link
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96524/b_deprec.htm#634923 :)
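    A minimal sketch of the two styles; the file path and sizes are made up:
    -- automatic segment space management is chosen at tablespace creation:
    CREATE TABLESPACE ts_assm
      DATAFILE '/u01/oradata/ts_assm01.dbf' SIZE 100M
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT AUTO;
    -- in a manually managed tablespace, PCTFREE/PCTUSED control block reuse per segment:
    CREATE TABLE hist_orders (id NUMBER, note VARCHAR2(100))
      TABLESPACE ts_manual
      PCTFREE 20 PCTUSED 40;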

  • How to change existing database block size in all tablespaces

    Hi,
    I need help changing the block size of my existing database, which currently has an 8KB block size.
    I have read that the block size can only be chosen at database creation, but I want to change it after installation, because for certain reasons I don't want to change the database installation script.
    Can anyone list the steps to change the database block size for all existing tablespaces (except SYSTEM and TEMP)?
    I want to change it to 32KB.
    Thank you for your time.
    -Rushang Kansara

    > We are facing more and more physical reads. I thought that by using a 32K block size we would resolve that.
    A physical read reported by Oracle may not be one - it could well be a logical read from the o/s file system cache and not a true physical read. With raw devices, for example, a physical I/O reported by Oracle is indeed one, as there is no o/s cache for raw devices. So one needs to be careful how one interprets numbers like physical reads.
    Lots of physical reads may not necessarily be a bad thing. In contrast, a high percentage of "good/fast" logical reads (i.e. a high % buffer cache hit ratio) may indicate a serious problem with application design - as the application is churning through the exact same data again and again and again. Applications should typically only make a single pass through a data set.
    The best way to deal with physical reads is to make fewer of them. Simple example: a database deals with a lot of inserts. Some bright developer decided to over-index a table. Numerous indexes for the same columns exist in different physical column orders.
    Oracle now spends a lot of time dealing (reading) with these indexes when inserting (or updating a row). A single write I/O may incur a 100 read I/Os as a result of these indexes needing to be maintained.
    The bottom line is that "more and more physical I/O" is merely a symptom of a problem. Trying to speed these reads up could well be a wasted exercise. The optimal approach to "lots of I/O" is to tune the application to do less I/O.
    I/O is the most expensive operation for a RDBMS. It is very difficult to make this expense less (i.e. make I/Os faster). It is more effective to make sure that you use this expensive resource in an optimal way.
    Simple example: a single very large table with 4 indexes is not a very efficient design I/O-wise. A single very large partitioned table with local indexes can reduce I/O on that table by up to 80% in my experience.
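    For completeness, a minimal sketch of what "moving to a larger block size" actually involves, since it cannot be done in place (names, sizes and the 32K limit are platform-dependent placeholders):
    -- a buffer cache for the non-default block size must exist first:
    ALTER SYSTEM SET db_32k_cache_size = 512M;
    CREATE TABLESPACE ts32k
      DATAFILE '/u01/oradata/ts32k01.dbf' SIZE 10G
      BLOCKSIZE 32K;
    -- then move segments one at a time; indexes go unusable and need a rebuild:
    ALTER TABLE big_table MOVE TABLESPACE ts32k;
    ALTER INDEX big_table_idx REBUILD TABLESPACE ts32k;
    But as argued above, reducing the amount of I/O is usually a better investment than changing the block size.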

  • Oracle Block Size - question for experts

    Hi ,
    For years I thought that my system block size was 8K.
    Lately, due to an HP-UX bug, I found that the file system block size is just... 1K.
    (HP DocId: DCLKBRC00006913 fstyp(1m) returns unexpected block size (f_bsize) for VXFS)
    My instance is currently 10204, but previously it was 7.3 --> 8 --> 8174 --> 10204.
    Since it is an old instance, its block size is just 4KB.
    We are planning to create a new file system block size of 8K.
    The instance size is about 2 TB.
    Creating the whole database with 8KB is impossible since it is a 24*7 instance.
    Do you think I should move just a few important tables to a new tablespace with an 8K block size, or should I leave it at 4KB?
    Thanks

    Given that your Oracle database block size (4K) is a multiple of the file system block size (1K), there should be no inherently significant issue as such.
    Yes, it would have been nice to have an 8KB Oracle database block size, but whether you should recreate your file systems at 8KB is a difficult question. There would be implications for the prefetch that the OS does and for how the underlying storage (must be a SAN, I presume) handles those requests.
    A thorough test would be well advised (if you can set up a test environment for 2TB such that it does NOT share the same hardware and doesn't complicate prefetches in the existing SAN).
    Else, check with HP and Veritas support whether there are known issues and/or any desupport plans for this combination.
    Oracle, obviously, would have issues with index key length limits if the block size is 4KB. Presumably you do not need to add any new indexes with very large keys.
    Having said that, you would have read all those posts about how Oracle doesn't (or really does?) test every different block size! However, Oracle had, before 8i, been using 2K and 4K block sizes. Except that the new features (LMT, ASSM etc.) may not have been well tested.
    Since you upgraded from 7.3 in place without changing the block size, I would venture to say that your database is still using dictionary-managed tablespaces with manual allocation and manual segment space management?
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
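    If you want to check that last guess, a minimal sketch:
    SELECT tablespace_name, extent_management, allocation_type, segment_space_management
    FROM   dba_tablespaces;
    EXTENT_MANAGEMENT = 'DICTIONARY' and SEGMENT_SPACE_MANAGEMENT = 'MANUAL' would confirm the pre-8i style settings.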
