NAT table size understanding
I was wondering if someone could explain the NAT table to me in this article LINK about NAT table size. I have a Rev E router. My other post is about a PSN network error, and I'm wondering what the heck 30,000 means: 30,000 connections? 30,000 KB? I don't get NAT at all, though I thought it might be interfering with me being able to get my 12GB download done from PSN.
How many connections does this router allow? Does a 12GB file from PSN use a lot of connections? I had BitTorrent on my computer a while back and never had a problem with it. It is no longer on my computer, but if NAT is coming into play, why would it affect PSN and not BitTorrent?
Thanks
I saw this post over at
http://www.dslreports.com/forum/r28481442-Billing-NAT-table-size-understanding
and I see what the answer is.
Similar Messages
-
Who knows what the NAT table size is on the latest AirPort Extreme n? That is, how many connections can it handle?
The maximum number of simultaneous wireless clients is 50.
The maximum number of total clients (Ethernet + wireless) is presumably 254 but Apple has never published a number.
I think that you will experience bandwidth issues before you get anywhere near either of those numbers. -
Help understanding NAT table for SIP
Hi Folks,
I've been battling SIP and NAT for nearly 3 months now, and I've got to the point where I think I may well have found a bug in IOS. Incoming calls are not getting to my CME; outbound calls are fine. The sip-ua is registered fine.
2611xm-1#show sip-ua register status
Line peer expires(sec) registered
================================ ========== ============ ==========
01xxxxx8882 -1 86 yes
Without a static NAT entry, the NAT translation table on the 877 that connects to the internet looks like this:
877va-1#show ip nat translations
Pro Inside global Inside local Outside local Outside global
udp 82.70.85.118:1024 192.168.1.254:5060 212.23.7.228:5060 212.23.7.228:5060
No incoming calls come through, but looking at the table, I understand why, as the ITSP is sending calls to port 5060, not 1024.
With this static NAT entry:
ip nat inside source static udp 192.168.1.254 5060 82.70.85.118 5060 extendable
The table looks like this:
877va-1#show ip nat translations
Pro Inside global Inside local Outside local Outside global
udp 82.70.85.118:5060 192.168.1.254:5060 212.23.7.228:5060 212.23.7.228:5060
udp 82.70.85.118:5060 192.168.1.254:5060 --- ---
Incoming calls come through, which makes sense as the port is now open!
If I delete the static NAT entry, clear the translation table and re-register the sip-ua, the table looks like this:
877va-1#show ip nat translations
Pro Inside global Inside local Outside local Outside global
udp 82.70.85.118:5060 192.168.1.254:5060 212.23.7.228:5060 212.23.7.228:5060
Calls come through for around 12 hours (probably until something times out and the sip-ua re-registers, back to the first scenario on port 1024).
Why is the 877 setting up the translation to port 1024 - what can I do to fix this?
My 877 settings are:
no ip nat service sip udp port 5060
ip nat inside source list 1 interface Dialer1 overload
ip route 0.0.0.0 0.0.0.0 Dialer1
ip route 192.168.2.0 255.255.255.0 192.168.1.254
My CME SIP settings are:
sip-ua
credentials username 01xxxxx8882 password xxxxxxxxxxxxxxxxxxx realm voip.zen.co.uk
authentication username 01xxxxx8882 password 7 xxxxxxxxxxxxxxxxxxxxxxxxx
nat symmetric role active
nat symmetric check-media-src
retry invite 3
retry register 10
timers register 150
registrar dns:voip.zen.co.uk expires 120
sip-server dns:voip.zen.co.uk
connection-reuse
host-registrar
permit hostname dns:voip.zen.co.uk
permit hostname dns:asterisk01.voip.zen.co.uk
permit hostname dns:asterisk02.voip.zen.co.uk
Thanks for the reply, Daniele.
IOS is 15.3(3)M, but I've updated this today from an earlier version hoping to fix the problem.
If I do the following:
877va-1#show ip nat translations | include 5060
udp 82.70.85.118:1029 192.168.1.254:5060 212.23.7.228:5060 212.23.7.228:5060
877va-1#clear ip nat trans
877va-1#clear ip nat translation *
877va-1#clear ip nat statistics
Then on the CME box, re-register the sip-ua.
877va-1#show ip nat translations | include 5060
udp 82.70.85.118:1030 192.168.1.254:5060 212.23.7.228:5060 212.23.7.228:5060
It takes the next port. It's never using 5060 in the first place, from what I can tell.
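One thing worth noting: the 877 config above disables the SIP ALG (no ip nat service sip udp port 5060), and with the ALG off, plain PAT overload on Dialer1 seems free to allocate the next available source port (1024, 1029, 1030...) instead of preserving 5060. Two possible workarounds, sketched from the commands already shown in this thread; verify them against your IOS version:
! Option 1: keep a permanent static PAT entry so inbound SIP always lands on 5060.
! The interface form follows the Dialer1 address, unlike a hard-coded public IP.
ip nat inside source static udp 192.168.1.254 5060 interface Dialer1 5060
! Option 2: re-enable the SIP ALG so NAT preserves the well-known port.
ip nat service sip udp port 5060
Option 1 matches the extendable entry that already made incoming calls work; Option 2 trades that for ALG processing of the SIP payload, which may or may not be desirable with the nat symmetric handling on the CME. -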
Checking HANA Table sizes in SAP PO
Hello All,
We have SAP PO 7.4 deployed on SAP HANA SP 09, and the SAP PO database is growing fast; it grew by 200 GB over the last 2 days. The current size of the data volume is showing almost 500 GB just for the PO system. This is a HANA tenant database installation.
The total memory used by this system shows about 90 GB of RAM. However, I just don't know how to get the list of tables with their sizes. I looked at the view M_CS_TABLES, which shows all the tables that are using memory, but that still does not add up. I need to get the sizes of all the physical tables so I can see which table is growing fast, and try to come up with something that would explain why we are seeing about 500 GB of database size for an SAP Java PO system.
Thanks for all the help.
Kumar
Hello,
Here's a very simple bit of SQL that you can adapt to your needs.
select table_name, round(table_size/1024/1024) as MB, table_type from SYS.M_TABLES where table_size/1024/1024 > 1000 order by table_size desc;
select * from M_CS_TABLES where memory_size_in_total/1024/1024 > 1000 order by memory_size_in_total desc;
It's just a basic way of looking at things but at least it will give you all tables greater than 1GB.
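If it helps, the same idea can be extended to spot which schema is growing; a sketch (view and column names as in the queries above, the rounding is arbitrary):
-- per-schema totals for column-store tables, largest first
select schema_name,
       round(sum(memory_size_in_total)/1024/1024/1024, 2) as mem_gb,
       sum(record_count) as records
from M_CS_TABLES
group by schema_name
order by mem_gb desc;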
I would imagine others will probably come up with something a bit more elegant and perhaps better adapted to your needs.
Cheers,
A. -
[Schema Design]: How to reduce inventory snapshot table size
We are planning to store a periodic snapshot of inventory levels at the end of each day. We have close to 50k different products.
But on a given day, only 5-6k products' inventory changes.
As I understand it, if I start inserting just the products whose inventory has changed, analysis around the semi-additive measure (Quantity) doesn't work properly.
For better understanding, let's say the fact table looks like:
product_id  time_id  quantity
1           1        100
2           1        130
3           1        100
1           30       200
So basically, it says products 1, 2 and 3 have no update in quantity from time_id 1 to time_id 29, but product 1's inventory changes to 200 at time_id 30.
This approach reduces the fact table size by approx 90 rows (versus storing all 3 products for all 30 days).
My question is: is this a good idea? Would this semi-additive measure still give the same result (I doubt it, though)?
If not, what other approaches can I take?
Thanks in advance.
Another option is to capture just the net changes (sometimes referred to as a journalized fact table). Then you can create a calculated measure that sums all the changes from the beginning of time up to the current time-slice.
This may seem like an inefficient solution, but there are ways to reduce the problem by limiting the history for which inventory snapshots are available. For example, if the business only needs snapshots for the past 90 days, then you can grab a snapshot
of inventory for all products on day 0, and then capture the net changes for each product for days 1-90. Then you can calculate the snapshot in time by adding the baseline snapshot to the sum of all net changes.
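As a rough illustration of the journalized approach, the point-in-time quantity can be rebuilt with a running sum over the net changes; a sketch, where fact_inventory_change and its columns are hypothetical names:
-- running inventory level per product, derived from net changes only
select product_id,
       time_id,
       sum(quantity_change) over (partition by product_id order by time_id) as quantity_on_hand
from fact_inventory_change
order by product_id, time_id;
The same logic carries over to a calculated measure in the cube that sums the change measure from the first period up to the current time-slice.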
-
"Convert Text to Table" Size limit issue?
Alphabetize a List
I’ve been using this well-known workaround for years.
Select your list and in the Menu bar click Format>Table>Convert Text to Table
Select one of the column’s cells (1st click selects entire table, 2nd click selects individual cell)
Open “Table Inspector” (Click Table icon at top of Pages document)
Make sure “table” button is selected, not “format” button
Choose Sort Ascending from the Edit Rows & Columns pop-up menu
Finally, click Format>Table>Convert Table to Text.
A few days ago I added items & my list was 999 items long, ~22 pages.
Tonight, I added 4 more items. Still the same # pages but now 1,003 items long.
Unable to Convert Text to Table! Tried for 45 minutes. I think there is a list length limit, perhaps 999 items?
I tried closing the document w/o any changes, re-opening Pages & re-adding my new items to the end of the list as always, & once again when I highlight the list & Format>Table>Convert Text to Table ... nothing happens! I can highlight part of the list up to 999 items & leave the 4 new items unhighlighted & it works. I pasted the list into a new doc, copied a few items from the middle of the list & added them to the end of my new 999-item list to make it 1,003 items long (but different items) & it did NOT work. I even attempted to add a single new item, making the list an even 1,000 items long & nope, not working. Even restarted the iMac, no luck.
I can get it to work with 999 or fewer items easily as always but no way when I add even a single new item.
Anyone else have this problem? It should be easy to test out. If you have a list of, say, 100 items, just copy & repeatedly paste into a new document multiple times to get over 1,000, & see if you can select all & then convert it from text to table.
Thanks!
Pages '08 v3.03
OS X 10.6.8
Yes, Pages has a table size limit, as you have discovered. Numbers has a much greater capacity for table length, so if you do your sort in Numbers you won't have any practical limitation.
A better approach than switching to Numbers for the sort would be to download, install and activate DEVONtechnologies' WordService. Then you could sort your list without converting it to a table.
Jerry -
Table size exceeds Keep Pool Size (db_keep_cache_size)
Hello,
We have a situation where one of our applications started performing badly last week.
After some analysis, it was found this was due to data growth in a table that was stored in the KEEP POOL.
After the data increase, the table size exceeded db_keep_cache_size.
I was of the opinion that in such cases the KEEP POOL will still be used, with the remaining data brought in as needed from the table.
But I ran some tests and found that is not the case. If the table size exceeds db_keep_cache_size, the KEEP POOL is not used at all.
Is my inference correct here?
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Setup
SQL> show parameter keep
NAME TYPE VALUE
buffer_pool_keep string
control_file_record_keep_time integer 7
db_keep_cache_size big integer 4M
SQL>
SQL>
SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
Table created.
SQL> set autotrace on
SQL>
SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
PL/SQL procedure successfully completed.
SQL> set serveroutput on
SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
SEGMENT_NAME : T1
PARTITION_NAME :
SEGMENT_TYPE : TABLE
SEGMENT_SUBTYPE : ASSM
TABLESPACE_NAME : HR_TBS
BYTES : 16777216
BLOCKS : 2048
EXTENTS : 31
INITIAL_EXTENT : 65536
NEXT_EXTENT : 1048576
MIN_EXTENTS : 1
MAX_EXTENTS : 2147483645
MAX_SIZE : 2147483645
RETENTION :
MINRETENTION :
PCT_INCREASE :
FREELISTS :
FREELIST_GROUPS :
BUFFER_POOL : KEEP
FLASH_CACHE : DEFAULT
CELL_FLASH_CACHE : DEFAULT
PL/SQL procedure successfully completed.
DB_KEEP_CACHE_SIZE=4M
SQL> select count(*) from t1;
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
9 recursive calls
0 db block gets
2006 consistent gets
2218 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> /
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1940 consistent gets
1937 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
DB_KEEP_CACHE_SIZE=10M
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter system set db_keep_cache_size=10M scope=both;
System altered.
SQL>
SQL> connect hr/hr@orcl
Connected.
SQL>
SQL> show parameter keep
NAME TYPE VALUE
buffer_pool_keep string
control_file_record_keep_time integer 7
db_keep_cache_size big integer 12M
SQL>
SQL> set autotrace on
SQL>
SQL> select count(*) from t1;
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1940 consistent gets
1937 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> /
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1940 consistent gets
1937 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
DB_KEEP_CACHE_SIZE=20M
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter system set db_keep_cache_size=20M scope=both;
System altered.
SQL>
SQL> connect hr/hr@orcl
Connected.
SQL>
SQL> show parameter keep
NAME TYPE VALUE
buffer_pool_keep string
control_file_record_keep_time integer 7
db_keep_cache_size big integer 20M
SQL> set autotrace on
SQL> select count(*) from t1;
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1943 consistent gets
1656 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> /
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1943 consistent gets
0 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Only with 20M db_keep_cache_size do I see no physical reads.
Does it mean that if db_keep_cache_size < table size, there is no caching for that table?
Or am I missing something?
Rgds,
Gokul
Hello Jonathan,
Many thanks for your response.
Here is the test I ran;
SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
BUFFER_ BLOCKS
KEEP 1977
SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
COUNT(*)
1939
SQL> show parameter db_keep_cache_size
NAME TYPE VALUE
db_keep_cache_size big integer 20M
SQL>
SQL> alter system set db_keep_cache_size = 5M scope=both;
System altered.
SQL> select count(*) from hr.t1;
COUNT(*)
135496
SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
COUNT(*)
992
I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end flushing the start of the table.
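For anyone repeating this test, one more check that makes the tail-end flushing visible is to look at which block range of T1 survives in the cache; a sketch using the same v$bh lookup as above:
-- lowest/highest cached block and total cached blocks for HR.T1
select min(block#) as first_block,
       max(block#) as last_block,
       count(*) as cached_blocks
from v$bh
where objd = (select data_object_id from dba_objects
              where owner = 'HR' and object_name = 'T1')
  and status != 'free';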
Rgds,
Gokul -
Change table size and headers in type def cluster
Is it possible to change the size and headers of a table that is inside a type def cluster?
I have a vi that loads test parameters from a csv file. The original program used an AC load so there was a column for power factor. I now have to convert this same program to be used with a DC load, so there is no power factor column.
I have modified the vi to adjust the "test table" dynamically based on the input file. But the "test table" in the cluster does not update its size or column headers.
The "test table" in the cluster is used through out the main program to set the values for each test step and display the current step by highlighting the row.
Attachments:
Load Test Parms.JPG 199 KB
Table Cluster.JPG 122 KB
Nevermind, I figured it out...
I was doing it wrong from the start. In an effort to save time writing the original program, I simply copied the "test table" to my type def cluster. This worked but was not really as universal as I thought it would be, as the table was now engraved in stone since the cluster is a type def.
I should not have done that, but rather used an array in the cluster and only used the table in the top level VI where it's displayed on the screen. -
I was thinking about buying the EA3500, but it has only 64MB RAM + it supports only 1024 Maximum Simultaneous Connections (is that a joke from Linksys or what?) and only 224 Total Simultaneous Throughput,
so 1 or 2 torrents and the NAT table will be filled.
+ Linksys killed 3rd-party firmware by using a Marvell SoC.
http://www.smallnetbuilder.com/lanwan/router-charts/bar/77-max-simul-conn
LOL, the funny thing is the X2000 gateway supports 34,925 Maximum Simultaneous Connections and it's in the same price range or less, and the EA2700 supports 34,925 Maximum Simultaneous Connections too and its price is about half of the EA3500 O_o
Lisa Larson wrote: Hello, I have an RV082 10/100 8-Port VPN Router and have configured the NAT table to allow for remote users; however, I've run into an issue. It seems like there is a limited number of entries that you can put in the table, 10, and I need to configure about 5 more IPs.
If you are using the 1-to-1 NAT feature of RV082, the 1-to-1 NAT table supports 10 entries, each of which can be a range of IP addresses. -
Table size not reducing after delete
The table size in dba_segments is not reducing after we delete data from the table. How can I regain the space after deleting the data from a table?
Regards,
Natesh
I think when you do DELETE it removes the data but it's not releasing any used space and it's still marked as used space. I think reorganizing would help to compress and pack all blocks and release any unused space in blocks.
Why do you think that? Deleting data will create space that can be reused by subsequent insert/update operations. It is not going to release space back to the tablespace to make it available for inserts into other tables in the tablespace, but that's not generally an issue unless you are permanently decreasing the size of a table, which is pretty rare.
Would you also please explain the difference between LOB and LONG, or point me to any link which explains it?
From the Oracle Concepts manual's section on the LONG data type:
"Note:
Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."
LONG was a very badly implemented solution to storing large amounts of data. LOBs are a much, much better designed solution-- you should always be using LOBs.
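If the goal is actually to hand space back to the tablespace (rather than letting new inserts reuse it), the usual options are a shrink or a move; a sketch, where t and t_idx are placeholder names, and note that shrink requires an ASSM tablespace and does not work on tables with LONG columns:
-- Option 1: online shrink (needs row movement enabled; not valid with LONG columns)
alter table t enable row movement;
alter table t shrink space;
-- Option 2: rebuild the segment below a new HWM (indexes go unusable; rebuild them)
alter table t move;
alter index t_idx rebuild;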
Justin -
TABLE SIZE NOT DECREASING AFTER DELETION. BLOCKS NOT BEING RE-USED
Hi ,
Problem:
Table size before deletion: 40GB
Total rows before deletion: over 200000
Rows deleted=190000 rows
Table size after deletion is more (as new data was inserted meanwhile).
Purpose of table:
This table is a sort of transaction table.
Whenever an SR is raised by a CSR, data gets inserted into this table and is removed when the status is cleared.
So there is constant insertion and purging will happen on this table.
We are using ASSM and tablespace is LOCAL.
This table also has a LONG column.
Is this problem because of the LONG column?
So here there are 2 problems.
1) INSERTs are not using the space created by DELETE.
2) New INSERTs are taking much more space than expected?
Let me have your suggestions.
Thanks,
I think when you do DELETE it removes the data but it's not releasing any used space and it's still marked as used space. I think reorganizing would help to compress and pack all blocks and release any unused space in blocks.
Why do you think that? Deleting data will create space that can be reused by subsequent insert/update operations. It is not going to release space back to the tablespace to make it available for inserts into other tables in the tablespace, but that's not generally an issue unless you are permanently decreasing the size of a table, which is pretty rare.
Would you also please explain the difference between LOB and LONG, or point me to any link which explains it?
From the Oracle Concepts manual's section on the LONG data type:
"Note:
Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."
LONG was a very badly implemented solution to storing large amounts of data. LOBs are a much, much better designed solution-- you should always be using LOBs.
Justin -
Give me the SQL query which calculates the table size in Oracle 10g / ECC 6.0
Hi expert,
Please give me the SQL query which calculates the table size in Oracle 10g / ECC 6.0.
Regards
Orkun Gedik wrote:
select segment_name, sum(bytes)/(1024*1024) from dba_segments where segment_name = '<TABLE_NAME>' group by segment_name;
Hi,
This possibly delivers wrong data in MCOD installations.
Depending on Oracle version and patch level, dba_segments does not always have the correct data,
especially for indexes right after being rebuilt in parallel (even in DB02, because it uses USER_SEGMENTS).
It takes a day to get the data back in line (never found out who did the correction at night; could be RSCOLL00?).
Use the above statement with "OWNER = " in the WHERE clause for MCOD, or connect as the schema owner and use USER_SEGMENTS.
Use with
segment_name LIKE '<TABLE_NAME>%'
if you like to see the related indexes as well.
For partitioned objects, a join from dba_tables/dba_indexes to dba_tab_partitions/dba_ind_partitions to dba_segments
might be needed, especially for hash-partitioned tables, depending on how they have been created (partition names SYS_xxxx).
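Putting that together, a sketch of the combined statement; SAPSR3 and MSEG are placeholder owner/table names, and the LIKE also picks up the related indexes (MSEG~0 etc.) thanks to SAP's naming convention:
select owner, segment_name, segment_type,
       round(sum(bytes)/1024/1024) as mb
from dba_segments
where owner = 'SAPSR3'
  and segment_name like 'MSEG%'
group by owner, segment_name, segment_type
order by mb desc;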
Volker -
Enqueue Replication Server - Lock Table Size
Note: I think I had posted this wrongly under ABAP Development, hence I request the moderator to kindly delete this post. Thanks.
Dear Experts,
If the Enqueue Replication Server is configured, can you tell me how to check the lock table size value, which we set using the profile parameter enque/table_size?
If the enqueue server is configured on the same host as the CI, it can be checked using
ST02 --> Detail Analysis Menu --> Storage --> Shared Memory Detail --> Enque Table
As it is a standalone Enqueue Server, I don't know where to check this value.
Thanking you in anticipation.
Best Regards
L Raghunahth
Hi
Raghunath
Check the following links
http://help.sap.com/saphelp_nw2004s/helpdata/en/37/a2e3ab344411d3acb00000e83539c3/content.htm
http://help.sap.com/saphelp_nw04s/helpdata/en/44/5efc11f3893672e10000000a114a6b/content.htm
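If nothing else, the configured value can be read straight from the standalone enqueue server's instance profile, since enque/table_size is an ordinary profile parameter; a sketch, with SID, instance number and host as placeholders:
# on the ASCS host, as <sid>adm
grep -i "enque/table_size" /usr/sap/<SID>/SYS/profile/<SID>_ASCS<NN>_<ascs-host>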
Regards
Bhaskar -
We have 2 DBs called UT & ST, with the same setup and the same data,
running on HP-UX Itanium 11.23 with the same binary, 9.2.0.6.
One schema, called ARB, contains only materialized views in both DBs, and the same-named db link connects to the same remote server in both DBs.
In that schema, one table called RATE has a table size of 323 MB in the UT db, while the same RATE table in the ST db has 480 MB. I found the difference by querying the bytes in dba_segments for the table. The query is as follows:
In UT db
select sum(bytes)/1024/1024 from dba_segments where segment_name='RATE'
output
323
In ST db
select sum(bytes)/1024/1024 from dba_segments where segment_name='RATE'
output
480
It's quite strange. Both tables have the same DDL, the same record counts, the same initial and next extents; all storage parameters are the same, and both DBs use the same uniform-size 160K tablespace.
DDL of the table in the UT environment:
SQL> select dbms_metadata.get_ddl('TABLE','RATE','ARB') from dual;
CREATE TABLE "ARB"."RATE"
( "SEQNUM" NUMBER(10,0) NOT NULL ENABLE,---------- ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 163840 NEXT 163840 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "AB_DATA"
DDL of the table in the ST environment:
CREATE TABLE "ARB"."RATE"
( "SEQNUM" NUMBER(10,0) NOT NULL ENABLE,---------- ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 163840 NEXT 163840 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "AB_DATA"..
tablespace of st db
SQL> select dbms_metadata.get_ddl('TABLESPACE','AB_DATA') from dual;
CREATE TABLESPACE "AB_DATA" DATAFILE
'/koala_u11/oradata/ORST31/ab_data01ORST31.dbf' SIZE 1598029824 REUSE
LOGGING ONLINE PERMANENT BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 163840 SEGMENT SPACE MANAGEMENT MANUAL
Tablespace of the UT db:
SQL> select dbms_metadata.get_ddl('TABLESPACE','AB_DATA') from dual;
CREATE TABLESPACE "AB_DATA" DATAFILE
'/koala_u11/oradata/ORDV32/ab_data01ORDV32.dbf' SIZE 1048576000 REUSE
LOGGING ONLINE PERMANENT BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 163840 SEGMENT SPACE MANAGEMENT MANUAL
Why is the table size different?
If everything is the same as you stated, I would guess the bigger table might have some free blocks. If you truncate the bigger one and insert /*+ append */ into bigger (select * from smaller), then check the size of the bigger table and see what you find. By the way, dba_segments or dba_extents only gives usage at extent-level granularity; within an extent, there are blocks that might not be fully occupied. In order to get the exact bytes of space used, you'll need to use the dbms_space package.
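A minimal dbms_space sketch for the table in question (run as a DBA or as the ARB owner); unused_space separates formatted blocks below the high-water mark from never-used blocks above it:
set serveroutput on
declare
  l_total_blocks number; l_total_bytes number;
  l_unused_blocks number; l_unused_bytes number;
  l_file_id number; l_block_id number; l_last_block number;
begin
  dbms_space.unused_space(
    segment_owner => 'ARB', segment_name => 'RATE', segment_type => 'TABLE',
    total_blocks => l_total_blocks, total_bytes => l_total_bytes,
    unused_blocks => l_unused_blocks, unused_bytes => l_unused_bytes,
    last_used_extent_file_id => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block => l_last_block);
  dbms_output.put_line('Blocks below HWM: ' || (l_total_blocks - l_unused_blocks));
  dbms_output.put_line('Never-used blocks above HWM: ' || l_unused_blocks);
end;
/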
You may get some idea from the extreme example I created below:
SQL>create table big (c char(2000));
Table created.
SQL>select sum(bytes)/1024 kb from user_segments
SQL>where segment_name='BIG';
KB
128 -- my tablespace is LMT uniform sized 128KB
1 row selected.
SQL>begin
SQL> for i in 1..100 loop
SQL> insert into big values ('A');
SQL> end loop;
SQL>end;
SQL>/
PL/SQL procedure successfully completed.
SQL>select sum(bytes)/1024 kb from user_segments
SQL>where segment_name='BIG';
KB
256 -- 2 extents after loading 100 records, 2KB+ each record
1 row selected.
SQL>commit;
Commit complete.
SQL>update big set c='B' where rownum=1;
1 row updated.
SQL>delete big where c='A';
99 rows deleted. -- remove 99 records at the end of extents
SQL>commit;
Commit complete.
SQL>select sum(bytes)/1024 kb from user_segments
SQL>where segment_name='BIG';
KB
256 -- same 2 extents 256KB since the HWM is not changed after DELETE
1 row selected.
SQL>select count(*) from big;
COUNT(*)
1 -- however, only 1 record occupies 256KB space(lots of free blocks)
1 row selected.
SQL>insert /*+ append */ into big (select 'A' from dba_objects where rownum<=99);
99 rows created. -- insert 99 records ABOVE HWM by using /*+ append */ hint
SQL>commit;
Commit complete.
SQL>select count(*) from big;
COUNT(*)
100
1 row selected.
S6UJAZ@dor_f501>select sum(bytes)/1024 kb from user_segments
S6UJAZ@dor_f501>where segment_name='BIG';
KB
512 -- same 100 records, same uniform extent size, same tablespace LMT, same table
-- now takes 512 KB space(twice as much as what it took originally)
1 row selected. -
MySQL lock table size Exception
Hi,
Our users get random error pages from vibe/tomcat (Error 500).
If the user tries it again, it works without an error.
here are some errors from catalina.out:
Code:
2013-07-31 06:23:12,225 WARN [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:23:12,225 ERROR [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:23:12,242 WARN [http-8080-8] [org.kablink.teaming.web.portlet.handler.LogContextInfoInterceptor] - Action request URL [http://vibe.*******.ch/ssf/a/do?p_name=ss_forum&p_action=1&entryType=4028828f3f0ed66d013f0f3ff208013d&binderId=2333&action=add_folder_entry&vibeonprem_url=1] for user [kablink,ro]
2013-07-31 06:23:12,245 WARN [http-8080-8] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
org.springframework.dao.InvalidDataAccessApiUsageException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry; nested exception is org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry
at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:654)
at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
2013-07-31 06:23:36,474 ERROR [Sitescape_QuartzSchedulerThread] [org.quartz.core.ErrorLogger] - An error occured while scanning for the next trigger to fire.
org.quartz.JobPersistenceException: Couldn't acquire next trigger: The total number of locks exceeds the lock table size [See nested exception: java.sql.SQLException: The total number of locks exceeds the lock table size]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2794)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$36.execute(JobStoreSupport.java:2737)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3768)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2733)
at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:264)
Caused by: java.sql.SQLException: The total number of locks exceeds the lock table size
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1573)
at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1169)
2013-07-31 06:27:12,463 WARN [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.jbpm.graph.def.GraphElement] - action threw exception: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
org.springframework.jdbc.UncategorizedSQLException: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
at org.springframework.orm.hibernate3.HibernateAccessor.convertJdbcAccessException(HibernateAccessor.java:424)
2013-07-31 06:27:22,393 INFO [CT-kablink] [org.kablink.teaming.lucene.LuceneProvider] - (kablink) Committed, firstOpTimeSinceLastCommit=1375251142310, numberOfOpsSinceLastCommit=12. It took 82.62174 milliseconds
2013-07-31 06:28:22,686 INFO [Sitescape_Worker-9] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252102500
2013-07-31 06:29:51,309 INFO [Sitescape_Worker-10] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252191099
2013-07-31 06:32:08,820 WARN [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:08,820 ERROR [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:10,775 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:10,775 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:12,305 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:12,305 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:14,605 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:14,606 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:16,056 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:16,056 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:24,166 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:24,166 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:24,167 WARN [http-8080-1] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
org.springframework.jdbc.UncategorizedSQLException: Hibernate flushing: could not insert: [org.kablink.teaming.domain.AuditTrail]; uncategorized SQLException for SQL [insert into SS_AuditTrail (zoneId, startDate, startBy, endBy, endDate, entityType, entityId, owningBinderId, owningBinderKey, description, transactionType, fileId, applicationId, deletedFolderEntryFamily, type, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 'A', ?)]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
at org.springframework.orm.hibernate3.HibernateTransactionManager.convertJdbcAccessException(HibernateTransactionManager.java:805)
at org.springframework.orm.hibernate3.HibernateTransactionManager.convertHibernateAccessException(HibernateTransactionManager.java:791)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:664)
It always logs the MySQL error code 1206:
MySQL :: MySQL 5.4 Reference Manual :: 13.6.12.1 InnoDB Error Codes
1206 (ER_LOCK_TABLE_FULL)
The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size.
The value of innodb_buffer_pool_size is set to 8388608 (8MB) on my server.
In the documentation (MySQL :: MySQL 5.4 Reference Manual :: 13.6.3 InnoDB Startup Options and System Variables) it says that the default is 128MB.
Can I set the value to 134217728 (128MB), or will this cause other problems? Will this setting solve my problem?
Thanks for your help.
I already found an entry from Kablink:
https://kablink.org/ssf/a/c/p_name/s...beonprem_url/1
But I think this can't be a permanent solution...
Our MySQL Server version is 5.0.95 running on SLES 11.
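For reference, a sketch of the change; innodb_buffer_pool_size is not dynamic in MySQL 5.0, so it takes a my.cnf edit and a mysqld restart (128M is the documented default discussed above):
# /etc/my.cnf
[mysqld]
innodb_buffer_pool_size = 128M
After the restart, the active value can be confirmed with:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
Whether 128M is enough depends on the RAM available on the host, especially if Tomcat/Vibe runs on the same server as MySQL.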