Table size wrong in Internet Explorer - Nate Miller

When I preview my site, www.el-ojito.com, my tables and content are fine in every browser except Internet Explorer. I used the same techniques to apply the background image for the page title on every page. However, some pages, like accommodations, do not stay the correct size. Can someone please help me?

The best place to start with any layout issue is the W3C validator:
http://validator.w3.org/check?verbose=1&uri=http%3A%2F%2Fwww.el-ojito.com%2F
There are a number of errors there that should be fixed:
1. There is a mismatch between the Doctype you have used on the page (XHTML) and the code syntax you have used (HTML).
2. There are some deprecated attributes specified (hspace/vspace).
3. There are IE-proprietary attributes used (bordercolor, background).
The errors relating to your use of Flash can be ignored.
Also, I note that you have used an image with a space in the filename (barbed wire.png). That's a bad idea for web use. Never use spaces or punctuation other than hyphens and underscores in the filenames on your pages.
Fix those first and let's see how the page looks then.
Murray --- ICQ 71997575
Adobe Community Expert
(If you *MUST* email me, don't LAUGH when you do so!)
==================
http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
http://www.dwfaq.com - DW FAQs, Tutorials & Resources
==================
"Nate2551" <[email protected]> wrote in
message
news:g23re3$ht2$[email protected]..
> When I preview my site, www.el-ojito.com, my tables and
content are fine
> in
> every browser except internet explorer. I used the same
techniques to
> apply the
> background image for the page title in every page.
However some pages,
> like
> accommodations, do not stay the correct size. Can
someone please help me?
>

Similar Messages

  • NAT table size understanding

    I was wondering if someone could explain the NAT table to me in this article LINK about NAT table size. I have a Rev E router. My other post is about a PSN network error, and I'm wondering what the heck 30,000 means? 30,000 connections, 30,000 KBs? I don't get NAT at all, except I thought it might be interfering with me being able to get my 12GB download done from PSN.
    How many connections does this router allow? Does a 12GB file from PSN use a lot of connections? I had BitTorrent on my computer a while back and never had a problem with it. It is no longer on my computer, but if NAT is coming into play, why would it affect PSN and not BitTorrent?
    Thanks

    I saw this post over at
    http://www.dslreports.com/forum/r28481442-Billing-NAT-table-size-understanding
    and I see what the answer is.

  • NAT table size

    Who knows what the NAT table size is on the latest AirPort Extreme n? That is, how many connections can it handle?

    The maximum number of simultaneous wireless clients is 50.
    The maximum number of total clients (Ethernet + wireless) is presumably 254 but Apple has never published a number.
    I think that you will experience bandwidth issues before you get anywhere near either of those numbers.

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications started performing badly last week.
    After some analysis, it was found that this was due to data growth in a table that was stored in the KEEP pool.
    After the data increase, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases the KEEP pool would still be used, with the remaining data brought in from the table as needed.
    But I ran some tests and found that is not the case. If the table size exceeds db_keep_cache_size, the KEEP pool is not used at all.
    Is my inference correct here?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    Setup
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
    PL/SQL procedure successfully completed.
    DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Only with a 20M db_keep_cache_size do I see no physical reads.
    Does it mean that if db_keep_cache_size is smaller than the table size, there is no caching for that table at all?
    Or am I missing something?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
           992
    I think my inference is wrong; as you said, I am indeed seeing the effect of the tail end of the table flushing the start of the table out of the cache.
    Rgds,
    Gokul

  • Change table size and headers in type def cluster

    Is it possible to change the size and headers of a table that is inside a type def cluster?
    I have a VI that loads test parameters from a CSV file. The original program used an AC load, so there was a column for power factor. I now have to convert this same program to be used with a DC load, so there is no power factor column.
    I have modified the VI to adjust the "test table" dynamically based on the input file, but the "test table" in the cluster does not update its size or column headers.
    The "test table" in the cluster is used throughout the main program to set the values for each test step and to display the current step by highlighting the row.
    Attachments:
    Load Test Parms.JPG ‏199 KB
    Table Cluster.JPG ‏122 KB

    Never mind, I figured it out...
    I was doing it wrong from the start. In an effort to save time writing the original program, I simply copied the "test table" into my type def cluster. This worked but was not really as universal as I thought it would be, as the table was now engraved in stone since the cluster is a type def.
    I should not have done that, but rather used an array in the cluster and only used the table in the top-level VI where it's displayed on the screen.

  • Give me the SQL query which calculates the table size in Oracle 10g ECC 6.0

    Hi expert,
    Please give me the SQL query which calculates the table size in Oracle 10g ECC 6.0.
    Regards

    Orkun Gedik wrote:
    select segment_name, sum(bytes)/(1024*1024) from dba_segments where segment_name = '<TABLE_NAME>' group by segment_name;
    Hi,
    This can deliver wrong data in MCOD installations.
    Depending on Oracle version and patch level, dba_segments does not always have correct data, especially for indexes right after a parallel rebuild (even in DB02, because it uses USER_SEGMENTS).
    It takes a day for the data to get back in line (I never found out who does the correction at night; it could be RSCOLL00?).
    Use the above statement with "OWNER =" in the WHERE clause for MCOD, or connect as the schema owner and use USER_SEGMENTS.
    Use it with
    segment_name LIKE '<TABLE_NAME>%'
    if you would like to see the related indexes as well.
    For partitioned objects, a join from dba_tables/dba_indexes to dba_tab_partitions/dba_ind_partitions to dba_segments might be needed, especially for hash-partitioned tables, depending on how they have been created (partition names SYS_xxxx).
    Volker
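    A minimal sketch combining those points (the owner and table name are placeholders to replace with your own): table size in MB plus related segments from dba_segments; switch to user_segments and drop the OWNER predicate when connected as the schema owner.
    -- Size of a table plus related segments (indexes, LOB segments) in MB.
    SELECT owner,
           segment_name,
           segment_type,
           ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
    FROM   dba_segments
    WHERE  owner = 'SAPSR3'                  -- placeholder schema owner
    AND    segment_name LIKE 'MARA%'         -- placeholder table name
    GROUP  BY owner, segment_name, segment_type
    ORDER  BY size_mb DESC;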

  • Enqueue Replication Server - Lock Table Size

    Note: I think I posted this wrongly under ABAP Development, hence I request the moderator to kindly delete this post. Thanks.
    Dear Experts,
    If the Enqueue Replication Server is configured, can you tell me how to check the lock table size value, which we set using the profile parameter enque/table_size?
    If the enqueue server is configured on the same host as the CI, it can be checked using
    ST02 --> Detail Analysis Menu --> Storage --> Shared Memory Detail --> Enque Table
    As it is a standalone Enqueue Server, I don't know where to check this value.
    Thanking you in anticipation.
    Best Regards
    L Raghunahth

    Hi
    Raghunath
    Check the following links
    http://help.sap.com/saphelp_nw2004s/helpdata/en/37/a2e3ab344411d3acb00000e83539c3/content.htm
    http://help.sap.com/saphelp_nw04s/helpdata/en/44/5efc11f3893672e10000000a114a6b/content.htm
    Regards
    Bhaskar

  • Actual table size is different from tablespace/datafile size

    Hi,
    I had created 10 tables with minimum extents of 256M in the same tablespace. The total size was 2560M. After 3 months of running, none of the table sizes had increased beyond 256M, but the datafile size for that tablespace had increased sharply to 20G.
    I spent a lot of time on it and could not find anything wrong.
    Please help.
    Thanks,

    The Member Feedback forum is for suggestions and feedback for OTN Developer Services. This forum is not monitored by Oracle support or product teams and so Oracle product and technology related questions will not be answered. We recommend that you post this thread to the Oracle Technology Network (OTN) > Products > Database > Database - General forum.
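    In the meantime, a hedged diagnostic sketch (the tablespace name is a placeholder) to see which segments are actually occupying the tablespace and how much of its datafiles is still unallocated:
    -- Which segments are consuming space in the tablespace?
    SELECT owner, segment_name, segment_type,
           ROUND(SUM(bytes)/1024/1024) AS mb
    FROM   dba_segments
    WHERE  tablespace_name = 'MY_TBS'        -- placeholder tablespace name
    GROUP  BY owner, segment_name, segment_type
    ORDER  BY mb DESC;
    -- How much of the datafiles is still free?
    SELECT ROUND(SUM(bytes)/1024/1024) AS free_mb
    FROM   dba_free_space
    WHERE  tablespace_name = 'MY_TBS';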

  • Problem with table size (initial extent)

    Hi,
    I have imported a table from my client's database, which shows the following size parameters as displayed in the user_segments view:
    bytes : 33628160
    blocks : 4105
    extents : 1
    initial_extent : 33611776
    next_extent : 65536
    The number of rows in the table is 0 (zero). I am wondering how the table size could become so large, while other tables in the same schema and tablespace have a normal initial extent size.
    I then created a tablespace with an initial and next extent of 64K each and imported the data into it, after which the table size and the initial extent for the table remained 33611776. This is the problem with 4-5 other tables out of a total of 500 tables.
    Of course, if I drop and recreate the table, there is no problem, and the initial extent size and the table size become 64K, the same as the tablespace.
    Any suggestions? I do not want to drop the tables and recreate them.
    Because of this problem, even an attempt to import a blank database consumes 2 GB of hard disk space.
    Thanks in advance
    DSG

    I don't think you can stop the extent from being allocated when you import the table.
    Even if you try to let the table inherit storage parameters from the tablespace, it will still allocate as many 64K extents as it needs to get to the 33M size in the table's (imported) storage parameter. I have also seen that, when trying to change storage during an import like that, you can look in dba_tables and see the table has an initial setting of 33M, even though when you look in dba_segments you'll see that every extent allocated was in fact 64K. The dba_tables view is populated directly from the import and will therefore report the wrong number.
    Perhaps you can import and then "create table as ..." to put the tables into a better storage setup. (Letting tables inherit from the tablespace is the best way to go... no fragmentation that way.) You might want to get the client to let you revamp the storage, since there's no good reason to have one huge extent like that.
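    If dropping the tables is not an option, one possible approach (a sketch only; the table and index names are placeholders) is to rebuild the affected segments in place with ALTER TABLE ... MOVE, which re-creates the segment using new storage settings; any indexes on the table must then be rebuilt, because MOVE marks them UNUSABLE.
    -- Rebuild the segment so it picks up small extents again.
    ALTER TABLE my_table MOVE STORAGE (INITIAL 64K NEXT 64K);
    -- MOVE marks the table's indexes UNUSABLE, so rebuild each of them.
    ALTER INDEX my_table_pk REBUILD;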

  • "Convert Text to Table" Size limit issue?

    Alphabetize a List
    I’ve been using this well-known workaround for years.
    Select your list and in the Menu bar click Format>Table>Convert Text to Table
    Select one of the column’s cells (1st click selects entire table, 2nd click selects individual cell)
    Open “Table Inspector” (Click Table icon at top of Pages document)
    Make sure “table” button is selected, not “format” button
    Choose Sort Ascending from the Edit Rows & Columns pop-up menu
    Finally, click Format>Table>Convert Table to Text.
    A few days ago I added items & my list was 999 items long, ~22 pages.
    Tonight, I added 4 more items. Still the same # pages but now 1,003 items long.
    Unable to Convert Text to Table! Tried for 45 minutes. I think there is a list length limit, perhaps 999 items?
    I tried closing the document without any changes, re-opening Pages, and re-adding my new items to the end of the list as always, and once again when I highlight the list and choose Format>Table>Convert Text to Table ..... nothing happens! I could highlight part of the list, up to 999 items, leave the 4 new items unhighlighted, and it works. I pasted the list into a new doc, copied a few items from the middle of the list, and added them to the end of my new 999-item list to make it 1,003 items long (but different items), and it did NOT work. I even attempted to add a single new item, making the list an even 1,000 items long, and nope, not working. Even restarted the iMac, no luck.
    I can get it to work with 999 or fewer items easily as always but no way when I add even a single new item.
    Anyone else have this problem? It should be easy to test out. If you have a list of say, 100 items, just copy & repeatedly paste into a new document multiple times to get over 1,000 & see if you can select all & then convert it from text to table.
    Thanks!
    Pages 08 v 3.03
    OS 10.6.8

    G,
    Yes, Pages has a table size limit, as you have discovered. Numbers has a much greater capacity for table length, so if you do your sort in Numbers you won't have any practical limitation.
    A better approach than switching to Numbers for the sort would be to download, install and activate Devon Wordservice. Then you could sort your list without converting it to a table.
    Jerry

  • Table size not reducing after delete

    The table size in dba_segments is not reducing after we delete the data from the table. How can I regain the space after deleting the data from a table?
    Regards,
    Natesh

    I think when you do DELETE it removes the data but it's not releasing any used space and it's still marked as used space. I think reorganizing would help to compress and pack all blocks and release any unused space in blocks.
    Why do you think that? Deleting data will create space that can be reused by subsequent insert/update operations. It is not going to release space back to the tablespace to make it available for inserts into other tables in the tablespace, but that's not generally an issue unless you are permanently decreasing the size of a table, which is pretty rare.
    Would you also please explain the difference between LOB and LONG? Or point me to any link which explains it.
    From the Oracle Concepts manual's section on the LONG data type:
    "Note:
    Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
    Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."
    LONG was a very badly implemented solution to storing large amounts of data. LOBs are a much, much better designed solution -- you should always be using LOBs.
    Justin
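    On 10g and later, with the table in an ASSM tablespace, a segment shrink is one way to actually give space back and lower the high-water mark. A minimal sketch (the table name is a placeholder; this does not work on tables with LONG columns):
    -- Allow Oracle to relocate rows, then compact the segment and lower the HWM.
    ALTER TABLE my_table ENABLE ROW MOVEMENT;
    ALTER TABLE my_table SHRINK SPACE;
    -- Alternative: ALTER TABLE my_table MOVE; rebuilds the segment,
    -- but every index on the table must be rebuilt afterwards.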

  • TABLE SIZE NOT DECREASING AFTER DELETION. BLOCKS NOT BEING RE-USED

    Hi ,
    Problem:
    Table size before deletion: 40GB
    Total rows before deletion: over 200000
    Rows deleted=190000 rows
    Table size after deletion is more (as new data was inserted meanwhile).
    Purpose of table:
    This table is a sort of transaction table.
    Whenever an SR is raised by CSR, data gets inserted into this table and is removed when the status is cleared.
    So there is constant insertion, and purging happens on this table.
    We are using ASSM and tablespace is LOCAL.
    This table also has a LONG column.
    Is this problem because of the LONG column?
    So here there are 2 problems.
    1) INSERTs are not using the space created by DELETE.
    2) New INSERTs are taking much more space than expected.
    Let me have your suggestions.
    Thanks,

    I think when you do DELETE it removes the data but it's not releasing any used space and it's still marked as used space. I think reorganizing would help to compress and pack all blocks and release any unused space in blocks.
    Why do you think that? Deleting data will create space that can be reused by subsequent insert/update operations. It is not going to release space back to the tablespace to make it available for inserts into other tables in the tablespace, but that's not generally an issue unless you are permanently decreasing the size of a table, which is pretty rare.
    Would you also please explain the difference between LOB and LONG? Or point me to any link which explains it.
    From the Oracle Concepts manual's section on the LONG data type:
    "Note:
    Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
    Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."
    LONG was a very badly implemented solution to storing large amounts of data. LOBs are a much, much better designed solution -- you should always be using LOBs.
    Justin
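    Since this table has a LONG column, note that SHRINK SPACE is not supported for it. A hedged sketch of the usual path (the table, column, and index names are placeholders) is to first convert the LONG column to a CLOB and then reorganize the segment:
    -- Convert the LONG column to a CLOB (LONG-to-LOB migration via ALTER TABLE MODIFY).
    ALTER TABLE my_table MODIFY (long_col CLOB);
    -- Reorganize the segment to release space below the old high-water mark,
    -- then rebuild the indexes that MOVE marks UNUSABLE.
    ALTER TABLE my_table MOVE;
    ALTER INDEX my_table_idx REBUILD;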

  • Table size difference?

    We have two DBs, called UT and ST, with the same setup and the same data,
    running on HP-UX Itanium 11.23 with the same 9.2.0.6 binaries.
    One schema, called ARB, contains only materialized views in both DBs, with the same DB link name connecting to the same remote server in both DBs.
    In that schema, one table called RATE has a table size of 323 MB in the UT DB, while the same RATE table has a table size of 480 MB in the ST DB. I found the difference by querying the bytes in dba_segments for the table; the query is as follows.
    In UT db
    select sum(bytes)/1024/1024 from dba_segments where segment_name='RATE'
    output
    323
    In ST db
    select sum(bytes)/1024/1024 from dba_segments where segment_name='RATE'
    output
    480mb
    It's quite strange; both tables have the same DDL, the same record counts, the same initial and next extents, all the same storage parameters, and the same 160K uniform-extent tablespace in both DBs.
    DDL of the table in the UT environment:
    SQL> select dbms_metadata.get_ddl('TABLE','RATE','ARB') from dual;
    CREATE TABLE "ARB"."RATE"
    ( "SEQNUM" NUMBER(10,0) NOT NULL ENABLE,---------- ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 163840 NEXT 163840 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "AB_DATA"
    DDL of the table in the ST environment:
    CREATE TABLE "ARB"."RATE"
    ( "SEQNUM" NUMBER(10,0) NOT NULL ENABLE,---------- ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 163840 NEXT 163840 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "AB_DATA"..
    Tablespace of the ST DB:
    SQL> select dbms_metadata.get_ddl('TABLESPACE','AB_DATA') from dual;
    CREATE TABLESPACE "AB_DATA" DATAFILE
    '/koala_u11/oradata/ORST31/ab_data01ORST31.dbf' SIZE 1598029824 REUSE
    LOGGING ONLINE PERMANENT BLOCKSIZE 8192
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 163840 SEGMENT SPACE MANAGEMENT MANUAL
    Tablespace of the UT DB:
    SQL> select dbms_metadata.get_ddl('TABLESPACE','AB_DATA') from dual;
    CREATE TABLESPACE "AB_DATA" DATAFILE
    '/koala_u11/oradata/ORDV32/ab_data01ORDV32.dbf' SIZE 1048576000 REUSE
    LOGGING ONLINE PERMANENT BLOCKSIZE 8192
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 163840 SEGMENT SPACE MANAGEMENT MANUAL
    Why is the table size different?

    If everything is the same as you stated, I would guess the bigger table might have some free blocks. If you truncate the bigger one and insert /*+ append */ into bigger (select * from smaller), then check the size of the bigger table and see what you can find. By the way, dba_segments or dba_extents only gives the usage at extent-level granularity; within an extent, there are blocks that might not be fully occupied. In order to get the exact bytes of space used, you'll need to use the dbms_space package.
    You may get some idea from the extreme example I created below:
    SQL>create table big (c char(2000));
    Table created.
    SQL>select sum(bytes)/1024 kb from user_segments
    SQL>where segment_name='BIG';
    KB
    128               -- my tablespace is LMT uniform sized 128KB
    1 row selected.
    SQL>begin
    SQL> for i in 1..100 loop
    SQL> insert into big values ('A');
    SQL> end loop;
    SQL>end;
    SQL>/
    PL/SQL procedure successfully completed.
    SQL>select sum(bytes)/1024 kb from user_segments
    SQL>where segment_name='BIG';
    KB
    256               -- 2 extents after loading 100 records, 2KB+ each record
    1 row selected.
    SQL>commit;
    Commit complete.
    SQL>update big set c='B' where rownum=1;
    1 row updated.
    SQL>delete big where c='A';
    99 rows deleted.          -- remove 99 records at the end of extents
    SQL>commit;
    Commit complete.
    SQL>select sum(bytes)/1024 kb from user_segments
    SQL>where segment_name='BIG';
    KB
    256               -- same 2 extents 256KB since the HWM is not changed after DELETE
    1 row selected.
    SQL>select count(*) from big;
    COUNT(*)
    1               -- however, only 1 record occupies 256KB space(lots of free blocks)
    1 row selected.
    SQL>insert /*+ append */ into big (select 'A' from dba_objects where rownum<=99);
    99 rows created.          -- insert 99 records ABOVE HWM by using /*+ append */ hint
    SQL>commit;
    Commit complete.
    SQL>select count(*) from big;
    COUNT(*)
    100
    1 row selected.
    S6UJAZ@dor_f501>select sum(bytes)/1024 kb from user_segments
    S6UJAZ@dor_f501>where segment_name='BIG';
    KB
    512               -- same 100 records, same uniformed extent size, same tablespace LMT, same table
                        -- now takes 512 KB space(twice as much as what it took originally)
    1 row selected.
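    To follow up on the dbms_space suggestion, here is a minimal sketch (the owner and segment names are taken from this thread; the tablespace uses manual segment space management, so UNUSED_SPACE applies) that reports allocated blocks versus blocks never used above the high-water mark. Note it does not count formatted-but-empty blocks below the HWM.
    SET SERVEROUTPUT ON
    DECLARE
      l_total_blocks  NUMBER;
      l_total_bytes   NUMBER;
      l_unused_blocks NUMBER;
      l_unused_bytes  NUMBER;
      l_lue_file_id   NUMBER;
      l_lue_block_id  NUMBER;
      l_last_used_blk NUMBER;
    BEGIN
      DBMS_SPACE.UNUSED_SPACE(
        segment_owner             => 'ARB',
        segment_name              => 'RATE',
        segment_type              => 'TABLE',
        total_blocks              => l_total_blocks,
        total_bytes               => l_total_bytes,
        unused_blocks             => l_unused_blocks,
        unused_bytes              => l_unused_bytes,
        last_used_extent_file_id  => l_lue_file_id,
        last_used_extent_block_id => l_lue_block_id,
        last_used_block           => l_last_used_blk);
      DBMS_OUTPUT.PUT_LINE('Total blocks  : ' || l_total_blocks);
      DBMS_OUTPUT.PUT_LINE('Unused blocks : ' || l_unused_blocks);
    END;
    /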

  • MySQL lock table size Exception

    Hi,
    Our users get random error pages from vibe/tomcat (Error 500).
    If the user tries it again, it works without an error.
    Here are some errors from catalina.out:
    Code:
    2013-07-31 06:23:12,225 WARN [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:23:12,225 ERROR [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:23:12,242 WARN [http-8080-8] [org.kablink.teaming.web.portlet.handler.LogContextInfoInterceptor] - Action request URL [http://vibe.*******.ch/ssf/a/do?p_name=ss_forum&p_action=1&entryType=4028828f3f0ed66d013f0f3ff208013d&binderId=2333&action=add_folder_entry&vibeonprem_url=1] for user [kablink,ro]
    2013-07-31 06:23:12,245 WARN [http-8080-8] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
    org.springframework.dao.InvalidDataAccessApiUsageException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry; nested exception is org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry
    at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:654)
    at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
    at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
    2013-07-31 06:23:36,474 ERROR [Sitescape_QuartzSchedulerThread] [org.quartz.core.ErrorLogger] - An error occured while scanning for the next trigger to fire.
    org.quartz.JobPersistenceException: Couldn't acquire next trigger: The total number of locks exceeds the lock table size [See nested exception: java.sql.SQLException: The total number of locks exceeds the lock table size]
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2794)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport$36.execute(JobStoreSupport.java:2737)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3768)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2733)
    at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:264)
    Caused by: java.sql.SQLException: The total number of locks exceeds the lock table size
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1573)
    at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1169)
    2013-07-31 06:27:12,463 WARN [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.jbpm.graph.def.GraphElement] - action threw exception: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
    org.springframework.jdbc.UncategorizedSQLException: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
    at org.springframework.orm.hibernate3.HibernateAccessor.convertJdbcAccessException(HibernateAccessor.java:424)
    2013-07-31 06:27:22,393 INFO [CT-kablink] [org.kablink.teaming.lucene.LuceneProvider] - (kablink) Committed, firstOpTimeSinceLastCommit=1375251142310, numberOfOpsSinceLastCommit=12. It took 82.62174 milliseconds
    2013-07-31 06:28:22,686 INFO [Sitescape_Worker-9] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252102500
    2013-07-31 06:29:51,309 INFO [Sitescape_Worker-10] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252191099
    2013-07-31 06:32:08,820 WARN [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:08,820 ERROR [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:10,775 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:10,775 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:12,305 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:12,305 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:14,605 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:14,606 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:16,056 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:16,056 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:24,166 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:24,166 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:24,167 WARN [http-8080-1] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
    org.springframework.jdbc.UncategorizedSQLException: Hibernate flushing: could not insert: [org.kablink.teaming.domain.AuditTrail]; uncategorized SQLException for SQL [insert into SS_AuditTrail (zoneId, startDate, startBy, endBy, endDate, entityType, entityId, owningBinderId, owningBinderKey, description, transactionType, fileId, applicationId, deletedFolderEntryFamily, type, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 'A', ?)]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
    at org.springframework.orm.hibernate3.HibernateTransactionManager.convertJdbcAccessException(HibernateTransactionManager.java:805)
    at org.springframework.orm.hibernate3.HibernateTransactionManager.convertHibernateAccessException(HibernateTransactionManager.java:791)
    at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:664)
    It always logs the MySQL error code 1206:
    MySQL :: MySQL 5.4 Reference Manual :: 13.6.12.1 InnoDB Error Codes
    1206 (ER_LOCK_TABLE_FULL)
    The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size.
    The value of innodb_buffer_pool_size is set to 8388608 (8MB) on my server.
    In the documentation (MySQL :: MySQL 5.4 Reference Manual :: 13.6.3 InnoDB Startup Options and System Variables) it says that the default is 128MB.
    Can I set the value to 134217728 (128MB), or will this cause other problems? Will this setting solve my problem?
    Thanks for your help.

    I already found an entry from Kablink:
    https://kablink.org/ssf/a/c/p_name/s...beonprem_url/1
    But I think this can't be a permanent solution...
    Our MySQL server version is 5.0.95 running on SLES 11.
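    For what it's worth, a sketch of how this is usually checked and changed on MySQL 5.0, where innodb_buffer_pool_size is not a dynamic variable and therefore has to be set in my.cnf followed by a restart of mysqld (128M is only the documented default, not a tuned recommendation):
    -- Check the current setting (the value is reported in bytes).
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
    -- innodb_buffer_pool_size cannot be changed at runtime in MySQL 5.0.
    -- To raise it to 128MB, add this under the [mysqld] section of my.cnf
    -- and restart the server:
    --   innodb_buffer_pool_size = 128M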

  • Index size keep growing while table size unchanged

    Hi Guys,
    I've got some simple and standard B-tree indexes that keep on acquiring new extents (e.g. 4MB per week) while the base table size has remained unchanged for years.
    The base tables are working tables with DML activity and nearly the same number of records daily.
    I've analysed the schema in the test environment.
    Those indexes do not fulfil the criteria for a rebuild, namely:
    - deleted entries represent 20% or more of the current entries
    - the index depth is more than 4 levels
    May I know what causes the index size to keep growing, and will the size of the index be reduced after a rebuild?
    Grateful if someone can give me some advice.
    Thanks a lot.
    Best regards,
    Timmy

    Please read the documentation. COALESCE is available in 9.2.
    Here is a demo for coalesce in 10G.
    YAS@10G>truncate table t;
    Table truncated.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                         65536
    TIND                      65536
    YAS@10G>insert into t select level from dual connect by level<=10000;
    10000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     196608
    We have 10,000 rows now. Let's delete half of them and insert another 5,000 rows with higher keys.
    YAS@10G>delete from t where mod(id,2)=0;
    5000 rows deleted.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>insert into t select level+10000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    Table size is the same but the index size got bigger.
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................               6
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              29
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.
    We have 29 full blocks. Let's coalesce.
    YAS@10G>alter index tind coalesce;
    Index altered.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................              13
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              22
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.
    The index size is still the same, but now we have 22 full and 13 empty blocks.
    Insert another 5000 rows with higher key values.
    YAS@10G>insert into t select level+15000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        262144
    TIND                     327680
    Now the index did not get bigger because it could use the free blocks for the new rows.
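    Before deciding on a rebuild or coalesce in production, a hedged way to check how much of an index consists of deleted entries (the index name is from the demo above; VALIDATE STRUCTURE locks the underlying table, so run it in a quiet window):
    ANALYZE INDEX tind VALIDATE STRUCTURE;
    -- INDEX_STATS holds one row for the index just analysed in this session.
    SELECT lf_rows,
           del_lf_rows,
           ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 1) AS pct_deleted
    FROM   index_stats;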
