Difference table size values
Hello Experts,
I want to know the sizes of some tables (e.g. TCURR). In different threads the transaction db02old and the function module 'DB_GET_TABLE_SIZE' are mentioned for that purpose.
I noticed that these two methods return different sizes:
Using transaction DB02OLD I get a size of 81.920 for TCURR.
Using function module 'DB_GET_TABLE_SIZE' I get a size of 79.937 for TCURR. Why is there a difference between the two values?
Thanks for your help.
Regards,
Tobias
DB02OLD gives the correct information, as it reads the values directly from the database.
Function module 'DB_GET_TABLE_SIZE' depends on the variables you have set and is useful for CCMS operations.
regards
nag
Edited by: welcomenag on Jul 6, 2009 4:15 PM
Similar Messages
-
Enqueue Replication Server - Lock Table Size
Note : I think I had posted it wrongly under ABAP Development, hence request moderator to kindly delete this post. Thanks
Dear Experts,
If Enqueue Replication server is configured, can you tell me how to check the Lock Table size value, which we set using profile parameter enque/table_size.
If enque server is configured in the same host as CI, it can be checked using
ST02 --> Detail Analysis Menu --> Storage --> Shared Memory Detail --> Enque Table
As it is a Standalone Enqueue Server, I don't know where to check this value.
Thanking you in anticipation.
Best Regards
L Raghunahth
Hi Raghunath,
Check the following links
http://help.sap.com/saphelp_nw2004s/helpdata/en/37/a2e3ab344411d3acb00000e83539c3/content.htm
http://help.sap.com/saphelp_nw04s/helpdata/en/44/5efc11f3893672e10000000a114a6b/content.htm
Regards
Bhaskar -
We have two databases, UT and ST, with the same setup and the same data,
running on HP-UX Itanium 11.23 with the same 9.2.0.6 binaries.
One schema, ARB, contains only materialized views in both databases, and the same-named db link connects to the same remote server in both.
In that schema, a table called RATE has a table size of 323 MB in the UT db, while the same RATE table in the ST db has 480 MB. I found the difference by querying the bytes in DBA_SEGMENTS; the query is as follows:
In UT db
select sum(bytes)/1024/1024 from dba_segments where segment_name='RATE'
output
323
In ST db
select sum(bytes)/1024/1024 from dba_segments where segment_name='RATE'
output
480
It's quite strange: both tables have the same DDL, the same record counts, the same INITIAL and NEXT extents, all storage parameters the same, and the same 160K uniform-size tablespace in both dbs.
DDL of the table in the UT environment:
SQL> select dbms_metadata.get_ddl('TABLE','RATE','ARB') from dual;
CREATE TABLE "ARB"."RATE"
( "SEQNUM" NUMBER(10,0) NOT NULL ENABLE,---------- ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 163840 NEXT 163840 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "AB_DATA"
DDL of the table in the ST environment:
CREATE TABLE "ARB"."RATE"
( "SEQNUM" NUMBER(10,0) NOT NULL ENABLE,---------- ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 163840 NEXT 163840 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "AB_DATA"
Tablespace of the ST db:
SQL> select dbms_metadata.get_ddl('TABLESPACE','AB_DATA') from dual;
CREATE TABLESPACE "AB_DATA" DATAFILE
'/koala_u11/oradata/ORST31/ab_data01ORST31.dbf' SIZE 1598029824 REUSE
LOGGING ONLINE PERMANENT BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 163840 SEGMENT SPACE MANAGEMENT MANUAL
Tablespace of the UT db:
SQL> select dbms_metadata.get_ddl('TABLESPACE','AB_DATA') from dual;
CREATE TABLESPACE "AB_DATA" DATAFILE
'/koala_u11/oradata/ORDV32/ab_data01ORDV32.dbf' SIZE 1048576000 REUSE
LOGGING ONLINE PERMANENT BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 163840 SEGMENT SPACE MANAGEMENT MANUAL
Why is the table size different?
If everything is the same as you stated, I would guess the bigger table has some free blocks. If you truncate the bigger one and INSERT /*+ APPEND */ INTO bigger (SELECT * FROM smaller), then check the size of the bigger table and see what you find. By the way, DBA_SEGMENTS and DBA_EXTENTS only give usage at extent-level granularity; within an extent there are blocks that might not be fully occupied. To get the exact bytes of used space, you'll need the DBMS_SPACE package.
You may get some idea from the extreme example I created below:
SQL>create table big (c char(2000));
Table created.
SQL>select sum(bytes)/1024 kb from user_segments
SQL>where segment_name='BIG';
KB
128 -- my tablespace is LMT uniform sized 128KB
1 row selected.
SQL>begin
SQL> for i in 1..100 loop
SQL> insert into big values ('A');
SQL> end loop;
SQL>end;
SQL>/
PL/SQL procedure successfully completed.
SQL>select sum(bytes)/1024 kb from user_segments
SQL>where segment_name='BIG';
KB
256 -- 2 extents after loading 100 records, 2KB+ each record
1 row selected.
SQL>commit;
Commit complete.
SQL>update big set c='B' where rownum=1;
1 row updated.
SQL>delete big where c='A';
99 rows deleted. -- remove 99 records at the end of extents
SQL>commit;
Commit complete.
SQL>select sum(bytes)/1024 kb from user_segments
SQL>where segment_name='BIG';
KB
256 -- same 2 extents 256KB since the HWM is not changed after DELETE
1 row selected.
SQL>select count(*) from big;
COUNT(*)
1 -- however, only 1 record occupies 256KB space(lots of free blocks)
1 row selected.
SQL>insert /*+ append */ into big (select 'A' from dba_objects where rownum<=99);
99 rows created. -- insert 99 records ABOVE HWM by using /*+ append */ hint
SQL>commit;
Commit complete.
SQL>select count(*) from big;
COUNT(*)
100
1 row selected.
S6UJAZ@dor_f501>select sum(bytes)/1024 kb from user_segments
S6UJAZ@dor_f501>where segment_name='BIG';
KB
512 -- same 100 records, same uniformed extent size, same tablespace LMT, same table
-- now takes 512 KB space(twice as much as what it took originally)
1 row selected. -
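The extent/high-water-mark arithmetic in the demo above can be captured in a toy Python model. The capacities mirror the demo's 128 KB uniform extents and ~2 KB rows (CHAR(2000)); Oracle's real space allocation is more involved, so this is only a first-order sketch:

```python
# Toy model of an Oracle segment: fixed-size extents, a high-water mark
# (HWM) that DELETE never lowers, and APPEND inserts that always allocate
# above the HWM instead of reusing freed slots.
class Segment:
    EXTENT_KB = 128          # uniform extent size, as in the demo tablespace
    ROWS_PER_EXTENT = 64     # ~2 KB per row (CHAR(2000)) in a 128 KB extent

    def __init__(self):
        self.rows = 0        # live rows
        self.hwm_rows = 0    # row slots ever formatted (the HWM)

    def size_kb(self):
        # Allocated size is driven by the HWM, rounded up to whole extents.
        extents = max(1, -(-self.hwm_rows // self.ROWS_PER_EXTENT))
        return extents * self.EXTENT_KB

    def insert(self, n):
        # Conventional insert: reuse free slots below the HWM first.
        reuse = min(n, self.hwm_rows - self.rows)
        self.rows += n
        self.hwm_rows += n - reuse

    def insert_append(self, n):
        # /*+ APPEND */ direct-path insert: always above the HWM.
        self.rows += n
        self.hwm_rows += n

    def delete(self, n):
        # DELETE frees rows but never moves the HWM down.
        self.rows -= n

seg = Segment()
seg.insert(100)
print(seg.size_kb())   # 256: two 128 KB extents for 100 rows
seg.delete(99)
print(seg.size_kb())   # 256: HWM unchanged by DELETE
seg.insert_append(99)
print(seg.size_kb())   # 512: APPEND ignored the free space below the HWM
```

With a conventional insert instead of APPEND, the model reuses the freed slots and the segment stays at 256 KB, matching the reasoning in the answer above.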
Difference between a value table and a check table?
What is the difference between a value table and a check table?
Value Table
This is maintained at Domain Level.
Whenever you create a domain, you can enter allowed values. For example, look at domain SHKZG - Debit/credit indicator. Here the only allowed values are H or S.
Whenever you use this domain, the system will force you to enter only these values.
This is a sort of master check, to be maintained as a customization object. This means that if you want to enter values into this table, you have to create a development request and transport it.
Check table
For example, you have an Employee master table and an Employee transaction table.
Whenever an employee transacts, we need to check whether that employee exists, so we refer to the Employee master table.
This is nothing but a parent-child relationship. Here data can be maintained at client level; no development is involved.
What a DBMS calls a foreign key table is called a check table in SAP.
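The DBMS parallel drawn in the last line can be made concrete with a minimal sketch using Python's sqlite3 as the stand-in database. The table and column names here are made up for illustration; in SAP the same relationship is declared in SE11 rather than written as SQL:

```python
import sqlite3

# Generic DBMS analogue of SAP's check table: a foreign key from the
# transaction table to the master ("check") table. The database rejects a
# transaction row whose employee does not exist in the master table.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # sqlite enforces FKs only when enabled
con.execute("CREATE TABLE emp_master (empno INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE emp_trans (
                 id INTEGER PRIMARY KEY,
                 empno INTEGER REFERENCES emp_master(empno))""")

con.execute("INSERT INTO emp_master VALUES (100)")
con.execute("INSERT INTO emp_trans VALUES (1, 100)")      # employee exists: OK
try:
    con.execute("INSERT INTO emp_trans VALUES (2, 999)")  # no such employee
except sqlite3.IntegrityError as e:
    print("rejected:", e)                                 # FK check fired
```

The check-table behaviour described above is exactly this kind of referential check, performed by the SAP application layer at field level.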
Reward points for the answer -
How to insert into two differents tables at the same time
Hi
I'm new to JDev (version 3.1.1.2, because OAS seems to support only JSP 1.0),
and I want to insert into two different tables at the same time using one view.
How can I do that ?
TIA
Edgar
Oracle 8i supports 'INSTEAD OF' triggers on object views, so you could use a process similar to the following:
1. Create an object view that joins your two tables: 'CREATE OR REPLACE VIEW test AS SELECT d.deptno, d.deptname, e.empname FROM dept d, emp e WHERE d.deptno = e.deptno'.
2. Create an INSTEAD OF trigger on the view.
3. Put code in the trigger that looks at the :NEW values being processed and determines which columns should be used to INSERT or UPDATE for each table. Crude pseudo-code might be:
IF :NEW.deptno NOT IN (SELECT deptno FROM dept) THEN
INSERT INTO dept VALUES (:NEW.deptno, :NEW.deptname);
INSERT INTO emp VALUES (:NEW.deptno, :NEW.empname);
ELSE
IF :NEW.deptname IS NOT NULL THEN
UPDATE dept SET deptname = :NEW.deptname
WHERE deptno = :NEW.deptno;
END IF;
IF :NEW.empname IS NOT NULL THEN
UPDATE emp SET empname = :NEW.empname
WHERE deptno = :NEW.deptno;
END IF;
END IF;
Try something along those lines.
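The branch logic of that trigger can be exercised generically. Below is a sketch in Python using sqlite3 in place of Oracle (the real solution in the thread is the INSTEAD OF trigger described above; the function name `save` and the single-column join are illustrative, following the DEPT/EMP example):

```python
import sqlite3

# Generic sketch of the trigger's insert-or-update branching: one entry
# point takes (deptno, deptname, empname), creates both rows when the dept
# is new, and updates only the non-NULL columns otherwise.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY, deptname TEXT)")
con.execute("CREATE TABLE emp (deptno INTEGER, empname TEXT)")

def save(deptno, deptname, empname):
    exists = con.execute("SELECT 1 FROM dept WHERE deptno = ?",
                         (deptno,)).fetchone()
    if not exists:
        # New department: insert into both tables (trigger's IF branch).
        con.execute("INSERT INTO dept VALUES (?, ?)", (deptno, deptname))
        con.execute("INSERT INTO emp VALUES (?, ?)", (deptno, empname))
    else:
        # Existing department: update whichever columns were supplied.
        if deptname is not None:
            con.execute("UPDATE dept SET deptname = ? WHERE deptno = ?",
                        (deptname, deptno))
        if empname is not None:
            con.execute("UPDATE emp SET empname = ? WHERE deptno = ?",
                        (empname, deptno))

save(10, "SALES", "SMITH")    # new dept: both rows inserted
save(10, "MARKETING", None)   # existing dept: name updated, emp untouched
print(con.execute("SELECT deptname FROM dept WHERE deptno = 10").fetchone()[0])
```

In the Oracle version the same branching lives inside the trigger body, so a plain INSERT against the view drives both base tables.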
-
Table size exceeds Keep Pool Size (db_keep_cache_size)
Hello,
We have a situation where one of our applications started performing bad since last week.
After some analysis, it was found this was due to data increase in a table that was stored in KEEP POOL.
After the data increase, the table size exceeded db_keep_cache_size.
I was of the opinion that in such cases KEEP POOL will still be used but the remaining data will be brought in as needed from the table.
But, I ran some tests and found it is not the case. If the table size exceeds db_keep_cache_size, then KEEP POOL is not used at all.
Is my inference correct here ?
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Setup
SQL> show parameter keep
NAME TYPE VALUE
buffer_pool_keep string
control_file_record_keep_time integer 7
db_keep_cache_size big integer 4M
SQL>
SQL>
SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
Table created.
SQL> set autotrace on
SQL>
SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
PL/SQL procedure successfully completed.
SQL> set serveroutput on
SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
SEGMENT_NAME : T1
PARTITION_NAME :
SEGMENT_TYPE : TABLE
SEGMENT_SUBTYPE : ASSM
TABLESPACE_NAME : HR_TBS
BYTES : 16777216
BLOCKS : 2048
EXTENTS : 31
INITIAL_EXTENT : 65536
NEXT_EXTENT : 1048576
MIN_EXTENTS : 1
MAX_EXTENTS : 2147483645
MAX_SIZE : 2147483645
RETENTION :
MINRETENTION :
PCT_INCREASE :
FREELISTS :
FREELIST_GROUPS :
BUFFER_POOL : KEEP
FLASH_CACHE : DEFAULT
CELL_FLASH_CACHE : DEFAULT
PL/SQL procedure successfully completed.
DB_KEEP_CACHE_SIZE=4M
SQL> select count(*) from t1;
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
9 recursive calls
0 db block gets
2006 consistent gets
2218 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> /
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1940 consistent gets
1937 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
DB_KEEP_CACHE_SIZE=10M
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter system set db_keep_cache_size=10M scope=both;
System altered.
SQL>
SQL> connect hr/hr@orcl
Connected.
SQL>
SQL> show parameter keep
NAME TYPE VALUE
buffer_pool_keep string
control_file_record_keep_time integer 7
db_keep_cache_size big integer 12M
SQL>
SQL> set autotrace on
SQL>
SQL> select count(*) from t1;
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1940 consistent gets
1937 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> /
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1940 consistent gets
1937 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
DB_KEEP_CACHE_SIZE=20M
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter system set db_keep_cache_size=20M scope=both;
System altered.
SQL>
SQL> connect hr/hr@orcl
Connected.
SQL>
SQL> show parameter keep
NAME TYPE VALUE
buffer_pool_keep string
control_file_record_keep_time integer 7
db_keep_cache_size big integer 20M
SQL> set autotrace on
SQL> select count(*) from t1;
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1943 consistent gets
1656 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> /
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1943 consistent gets
0 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Only with a 20M db_keep_cache_size do I see no physical reads.
Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
Or am I missing something ?
Rgds,
Gokul
Hello Jonathan,
Many thanks for your response.
Here is the test I ran;
SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
BUFFER_ BLOCKS
KEEP 1977
SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
COUNT(*)
1939
SQL> show parameter db_keep_cache_size
NAME TYPE VALUE
db_keep_cache_size big integer 20M
SQL>
SQL> alter system set db_keep_cache_size = 5M scope=both;
System altered.
SQL> select count(*) from hr.t1;
COUNT(*)
135496
SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
COUNT(*)
992
I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end of the scan flushing the start of the table.
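That "tail flushing the head" effect is what a plain LRU cache does whenever the scanned object is bigger than the cache. A small Python model illustrates it (Oracle's buffer cache actually uses a touch-count scheme rather than strict LRU, so this is only a first-order illustration; the block counts are arbitrary):

```python
from collections import OrderedDict

# Why repeated full scans of a table larger than its buffer pool get ~no
# cache hits under plain LRU: the tail of each scan evicts the head, so the
# next scan misses from block 0 again.
def scan(cache, cache_blocks, table_blocks):
    hits = 0
    for block in range(table_blocks):
        if block in cache:
            hits += 1
            cache.move_to_end(block)          # mark most-recently-used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)     # evict least-recently-used
    return hits

small = OrderedDict()
print(scan(small, cache_blocks=500, table_blocks=2000))  # 0 hits: cold cache
print(scan(small, cache_blocks=500, table_blocks=2000))  # still 0: LRU thrash

big = OrderedDict()
scan(big, cache_blocks=2500, table_blocks=2000)          # warm it up
print(scan(big, cache_blocks=2500, table_blocks=2000))   # 2000: pool > table
```

This matches the observation in the thread: once db_keep_cache_size exceeds the table size, the second scan does no physical reads; below that, each scan keeps flushing the start of the table.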
Rgds,
Gokul -
Change table size and headers in type def cluster
Is it possible to change a table's size and headers when the table is inside a type def cluster?
I have a vi that loads test parameters from a csv file. The original program used an AC load, so there was a column for power factor. I now have to convert this same program to be used with a DC load, so there is no power factor column.
I have modified the vi to adjust the "test table" dynamically based on the input file. But the "test table" in the cluster does not update its size or column headers.
The "test table" in the cluster is used throughout the main program to set the values for each test step and to display the current step by highlighting the row.
Attachments:
Load Test Parms.JPG 199 KB
Table Cluster.JPG 122 KB
Nevermind, I figured it out...
I was doing it wrong from the start: in an effort to save time writing the original program, I simply copied the "test table" into my type def cluster. This worked but was not really as universal as I thought it would be, as the table was now engraved in stone since the cluster is a type def.
I should not have done that, but rather used an array in the cluster and only used the table in the top level VI where it's displayed on the screen. -
Check table and value table -Example
Hi Experts
Please give me the step by step procedure to create the check table and value table, and how to work on it.
Thanks in advance.
Regards
Rajaram
Hi,
A check table is for field-level validation, whereas a value table is for domain-level validation.
The value table is proposed as the default check table.
I think you are clear with this.
To elaborate:
Check Table
The check table is the table used by the system to check whether data exists.
While creating a table, if you want to be sure that a field can have only certain values,
and that those values come from a certain table, you can assign that table as the CHECK TABLE.
Value Table
This is maintained at Domain Level.
Whenever you create a domain, you can enter allowed values. For example, look at domain SHKZG - Debit/credit indicator.
Here the only allowed values are H or S.
Whenever you use this domain, the system will force you to enter only these values.
This is a sort of master check,
to be maintained as a customization object.
This means that if you want to enter values into this table, you have to create a development request and transport it.
Differences:
1) The check table carries out the check on input values entered for the table field in any application,
while the value table provides the values in the F4 help for that table field.
2)The check table defines the foreign keys and is part of the table definition.
The value table is part of the domain definition.
check table is validation at field level.
value table is at domain level.
Value table is defined at the domain level and is used to provide F4 help for all the fields which refer to that domain.
A check table is defined against a field in SE11 if you want the values in that field to be checked against a list of valid values. For example, if you are using the field MATNR in a table, you could define MARA as the check table.
Also, while defining a check table, SAP proposes the value table as the check table by default. Referring to the previous example: if you tried to define a check table for the MATNR field, SAP would propose MARA as the check table.
1. What is the purpose/use? So that the user can select values from some master table for that field.
2. This is done by the CHECK TABLE (foreign key concept), not the value table.
3. When we create a check table for a field, some default table is proposed.
4. That default table is nothing but the one picked up from the domain of that field, i.e. shown from the value of the VALUE TABLE.
CHECK TABLE - it is a parent table.
For example, I have two tables, ZTAB1 and ZTAB2, with one common field in both. I can make either Z table the check table. If I make ZTAB1 the check table, then whenever an entry is made in ZTAB2, the system checks whether ZTAB1 has that value.
It is also field-level checking.
Value table - it is nothing but the default check table.
One parent can have n number of child tables. For example, for a Z table we have the child tables ZCHILD1 and ZCHILD2.
It is domain-level checking. When ZCHILD2 uses the same domain as ZCHILD1, the system automatically generates a popup saying a check table already exists and asking whether you want to maintain it.
Go to the domain and press the value range tab; you can see the value table at the end.
Please refer the links below,
Difference between check and value table?
What is the exact difference between check table and value table
What is the check table and value table
Check table and value table
Re: What is the exact difference between check table and value table
http://www.sap-img.com/abap/difference-between-a-check-table-and-a-value-table.htm -
SQL query to get the Datetime 06 hours prior to the table Datetime value
Hi Experts,
I'm trying to create a SQL query that picks up rows 06 hours prior to my table column value (ExecutionTime).
E.g., if my ExecutionTime value is 07:00 AM, the query should fetch the first VMName from the table at 01:00 AM.
SQL Table Name: TestTable
Columns: VMName (varchar), status (varchar), ExecutionTime (datetime)
SQL Query:
Select Top 1 VMName from TestTable
where convert(date, ExecutionTime) = convert(date, getdate())
and status = '0' and ExecutionTime > dateadd(hour, 6, getdate())
Request someone to alter this Query to my requirement or give me the new one.
Regards,
Sundar
Hi All,
Thanks for your prompt response. I tried the queries below, but still no luck. The queries return a value before the condition is met (i.e., when the time difference is more than 06 hours). I want the query to return rows exactly at a 06-hour difference or less.
Query 01:
Select Top 1 VMName from TestTable
where convert(date, ExecutionTime) = convert(date, getdate())
and status = '0'
and ExecutionTime > dateadd(hour, -6, getdate())
Query 02:
Select Top 1 VMName from TestTable
where status = '0'
and ExecutionTime > dateadd(hour, -6, getdate())
Query 03:
Select Top 1 VMName from TestTable
where status = '0'
and ExecutionTime > dateadd(hour, -6, ExecutionTime)
Can someone point out the mistake please.
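For what it's worth, reading the requirement as "pick the row once we are within 6 hours before its ExecutionTime", the window logic can be sketched in Python. This illustrates only the predicate, not a tested T-SQL fix; the equivalent T-SQL shape would be `ExecutionTime BETWEEN GETDATE() AND DATEADD(hour, 6, GETDATE())`:

```python
from datetime import datetime, timedelta

# A row qualifies once "now" is within the 6-hour window *before* its
# ExecutionTime: a 07:00 job is picked up from 01:00 onwards, but not
# earlier and not after it has already run.
def due_within(execution_time, now, hours=6):
    return now <= execution_time <= now + timedelta(hours=hours)

now = datetime(2014, 1, 10, 1, 0)                      # 01:00 AM
print(due_within(datetime(2014, 1, 10, 7, 0), now))    # 07:00 job: True
print(due_within(datetime(2014, 1, 10, 8, 0), now))    # 08:00 job: False (7 h away)
print(due_within(datetime(2014, 1, 9, 23, 0), now))    # already past: False
```

Query 03 in the thread compares ExecutionTime against itself, so its predicate is always true; the window has to be anchored on the current time on one side and the column on the other, as above.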
Regards,
Sundar
MySQL lock table size Exception
Hi,
Our users get random error pages from vibe/tomcat (Error 500).
If the user tries it again, it works without an error.
Here are some errors from catalina.out:
Code:
2013-07-31 06:23:12,225 WARN [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:23:12,225 ERROR [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:23:12,242 WARN [http-8080-8] [org.kablink.teaming.web.portlet.handler.LogContextInfoInterceptor] - Action request URL [http://vibe.*******.ch/ssf/a/do?p_name=ss_forum&p_action=1&entryType=4028828f3f0ed66d013f0f3ff208013d&binderId=2333&action=add_folder_entry&vibeonprem_url=1] for user [kablink,ro]
2013-07-31 06:23:12,245 WARN [http-8080-8] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
org.springframework.dao.InvalidDataAccessApiUsageException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry; nested exception is org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry
at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:654)
at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
2013-07-31 06:23:36,474 ERROR [Sitescape_QuartzSchedulerThread] [org.quartz.core.ErrorLogger] - An error occured while scanning for the next trigger to fire.
org.quartz.JobPersistenceException: Couldn't acquire next trigger: The total number of locks exceeds the lock table size [See nested exception: java.sql.SQLException: The total number of locks exceeds the lock table size]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2794)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$36.execute(JobStoreSupport.java:2737)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3768)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2733)
at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:264)
Caused by: java.sql.SQLException: The total number of locks exceeds the lock table size
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1573)
at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1169)
2013-07-31 06:27:12,463 WARN [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.jbpm.graph.def.GraphElement] - action threw exception: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
org.springframework.jdbc.UncategorizedSQLException: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
at org.springframework.orm.hibernate3.HibernateAccessor.convertJdbcAccessException(HibernateAccessor.java:424)
2013-07-31 06:27:22,393 INFO [CT-kablink] [org.kablink.teaming.lucene.LuceneProvider] - (kablink) Committed, firstOpTimeSinceLastCommit=1375251142310, numberOfOpsSinceLastCommit=12. It took 82.62174 milliseconds
2013-07-31 06:28:22,686 INFO [Sitescape_Worker-9] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252102500
2013-07-31 06:29:51,309 INFO [Sitescape_Worker-10] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252191099
2013-07-31 06:32:08,820 WARN [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:08,820 ERROR [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:10,775 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:10,775 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:12,305 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:12,305 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:14,605 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:14,606 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:16,056 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:16,056 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:24,166 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:24,166 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:24,167 WARN [http-8080-1] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
org.springframework.jdbc.UncategorizedSQLException: Hibernate flushing: could not insert: [org.kablink.teaming.domain.AuditTrail]; uncategorized SQLException for SQL [insert into SS_AuditTrail (zoneId, startDate, startBy, endBy, endDate, entityType, entityId, owningBinderId, owningBinderKey, description, transactionType, fileId, applicationId, deletedFolderEntryFamily, type, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 'A', ?)]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
at org.springframework.orm.hibernate3.HibernateTransactionManager.convertJdbcAccessException(HibernateTransactionManager.java:805)
at org.springframework.orm.hibernate3.HibernateTransactionManager.convertHibernateAccessException(HibernateTransactionManager.java:791)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:664)
It always logs MySQL error code 1206:
MySQL :: MySQL 5.4 Reference Manual :: 13.6.12.1 InnoDB Error Codes
1206 (ER_LOCK_TABLE_FULL)
The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size.
The value of innodb_buffer_pool_size is set to 8388608 (8MB) on my server.
In the documentation (MySQL :: MySQL 5.4 Reference Manual :: 13.6.3 InnoDB Startup Options and System Variables) it says that the default is 128MB.
Can I set the value to 134217728 (128 MB), or will this cause other problems? Will this setting solve my problem?
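For reference, the setting in question lives in the MySQL server configuration file. A sketch follows; the path is typical for SLES but may differ per distribution, the value is illustrative, and on MySQL 5.0 this variable is not dynamic, so mysqld must be restarted after the change:

```ini
# /etc/my.cnf -- illustrative value; size the pool to your workload and RAM.
# innodb_buffer_pool_size is not dynamic in MySQL 5.0: restart mysqld after
# changing it. InnoDB's lock table lives inside this pool, which is why the
# 1206 errors point at it.
[mysqld]
innodb_buffer_pool_size = 128M
```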
Thanks for your help.
I already found an entry from Kablink:
https://kablink.org/ssf/a/c/p_name/s...beonprem_url/1
But I think this can't be a permanent solution...
Our MySQL Server version is 5.0.95, running on SLES 11 -
Index size keep growing while table size unchanged
Hi Guys,
I've got some simple, standard b-tree indexes that keep acquiring new extents (e.g. 4 MB per week) while the base table size has stayed unchanged for years.
The base tables are working tables with DML operations and nearly the same number of records daily.
I've analysed the schema in the test environment.
Those indexes do not fulfil the criteria for a rebuild, namely:
- deleted entries represent 20% or more of the current entries
- the index depth is more than 4 levels
May I know what causes the index size to keep growing, and will the size of the index be reduced after a rebuild?
Grateful if someone can give me some advice.
Thanks a lot.
Best regards,
Timmy
Please read the documentation. COALESCE is available in 9.2.
Here is a demo for coalesce in 10G.
YAS@10G>truncate table t;
Table truncated.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME BYTES
T 65536
TIND 65536
YAS@10G>insert into t select level from dual connect by level<=10000;
10000 rows created.
YAS@10G>commit;
Commit complete.
YAS@10G>
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME BYTES
T 196608
TIND 196608
We have 10,000 rows now. Let's delete half of them and insert another 5,000 rows with higher keys.
YAS@10G>delete from t where mod(id,2)=0;
5000 rows deleted.
YAS@10G>commit;
Commit complete.
YAS@10G>insert into t select level+10000 from dual connect by level<=5000;
5000 rows created.
YAS@10G>commit;
Commit complete.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME BYTES
T 196608
TIND 327680
Table size is the same but the index size got bigger.
YAS@10G>exec show_space('TIND',user,'INDEX');
Unformatted Blocks ..................... 0
FS1 Blocks (0-25) ..................... 0
FS2 Blocks (25-50) ..................... 6
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 0
Full Blocks ..................... 29
Total Blocks............................ 40
Total Bytes............................. 327,680
Total MBytes............................ 0
Unused Blocks........................... 0
Unused Bytes............................ 0
Last Used Ext FileId.................... 4
Last Used Ext BlockId................... 37,001
Last Used Block......................... 8
PL/SQL procedure successfully completed.

We have 29 full blocks. Let's coalesce.
YAS@10G>alter index tind coalesce;
Index altered.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME BYTES
T 196608
TIND 327680
YAS@10G>exec show_space('TIND',user,'INDEX');
Unformatted Blocks ..................... 0
FS1 Blocks (0-25) ..................... 0
FS2 Blocks (25-50) ..................... 13
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 0
Full Blocks ..................... 22
Total Blocks............................ 40
Total Bytes............................. 327,680
Total MBytes............................ 0
Unused Blocks........................... 0
Unused Bytes............................ 0
Last Used Ext FileId.................... 4
Last Used Ext BlockId................... 37,001
Last Used Block......................... 8
PL/SQL procedure successfully completed.

The index size is still the same, but now we have 22 full and 13 empty blocks.
Insert another 5000 rows with higher key values.
YAS@10G>insert into t select level+15000 from dual connect by level<=5000;
5000 rows created.
YAS@10G>commit;
Commit complete.
YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
SEGMENT_NAME BYTES
T 262144
TIND 327680

Now the index did not get bigger, because it could use the free blocks for the new rows. -
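The mechanism behind this demo can be sketched in a toy model (Python purely as illustration; the 4-entry block capacity and the simplified "rightmost block" insert rule are assumptions for the sketch, not Oracle internals):

```python
# Toy model: an index on an ever-increasing key grows on the right while
# deletes leave sparse blocks behind; COALESCE packs entries so the freed
# blocks become reusable. Illustration only, not Oracle block management.

CAPACITY = 4  # hypothetical entries per leaf block

def insert(blocks, keys):
    """Ascending keys always land in the rightmost block."""
    for k in keys:
        if blocks and len(blocks[-1]) < CAPACITY:
            blocks[-1].append(k)
        else:
            for b in blocks:
                if not b:               # a freed block exists:
                    blocks.remove(b)    # relink it to the right end
                    blocks.append(b)
                    break
            else:
                blocks.append([])       # no free block: allocate a new one
            blocks[-1].append(k)

def delete_even(blocks):
    """Delete every even key; the emptied space stays allocated."""
    for b in blocks:
        b[:] = [k for k in b if k % 2 != 0]

def coalesce(blocks):
    """Pack entries into the leading blocks, freeing the trailing ones."""
    keys = [k for b in blocks for k in b]
    packed = [keys[i:i + CAPACITY] for i in range(0, len(keys), CAPACITY)]
    blocks[:] = packed + [[] for _ in range(len(blocks) - len(packed))]

blocks = []
insert(blocks, range(1, 17))       # 16 keys -> 4 full blocks
delete_even(blocks)                # 8 keys left, still 4 blocks
insert(blocks, range(17, 25))      # 8 higher keys -> 2 extra blocks
print(len(blocks))                 # 6: the "index" grew, like TIND did
coalesce(blocks)                   # pack entries; 2 blocks become free
insert(blocks, range(25, 33))      # 8 more keys reuse the freed blocks
print(len(blocks))                 # still 6: no growth after coalesce
```

As in the TIND demo, the segment grows only while deletes strand half-empty blocks to the left of the insertion point; once coalesced, the freed blocks are reused by new inserts.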
How will we know that the dimension size is more than the fact table size?
Hi,
Let us assume that we combine Division and Distribution Channel in one dimension, and that we have 20 distinct values for Division in R/3 and 30 distinct values for Distribution Channel. Then at most we can get 20 * 30 = 600 records in the dimension table, and we can make a rough estimate of the number of records in the cube by observing the raw data in the source system.
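That upper-bound arithmetic can be sketched quickly (Python purely for illustration; the characteristic names and the 20/30 distinct-value counts are taken from the example):

```python
# Upper bound on dimension-table rows: the product of the distinct
# value counts of the characteristics combined into the dimension.
from math import prod

distinct_values = {"division": 20, "distribution_channel": 30}

max_dimension_rows = prod(distinct_values.values())
print(max_dimension_rows)   # 20 * 30 = 600 possible combinations at most
```

Comparing this bound with the expected fact-table row count gives a quick feel for whether the dimension will stay small relative to the cube.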
With rgds,
Anil Kumar Sharma .P -
Lock table size change in instance profile RZ10
I need your help. I changed the lock table size from 10000 to 17000 and then to 20000, but I still have the same table size as before. I used RZ10 to change the parameter enque/table_size.
The steps I followed are the ones given in all the documents I can find:
1. Change the parameter value.
2. Save it (parameter and instance).
3. Activate it.
4. Restart the instance (I just left this for the offline backup to do).
Regarding the 4th step: is that enough? After the system came back I checked the parameter in RZ11 and its current value is still 10000 (owner entries and granule entries still 12557, as before).
Am I missing something?
vinaka
epeli

Hi,
It could be that the offline backup did in fact not restart the instance. From Oracle I know that there is a so-called "reconnect status", in which the SAP instance tries for a defined period of time to log on to the database again after the work processes have lost their connection to the database processes. In this timeframe the instance is not to be considered restarted.
If you check ST02, you can see the point in time when the instance really was last restarted. If this date is before your offline backup, you need to do the restart manually.
Best regards, Alexander -
Estimate table size for last 4 years
Hi,
I am on Oracle 10g
I need to estimate the size of a table for the last 4 years. What I plan to do is get the row count for the last 4 years and multiply that value by AVG_ROW_LEN to get the total size. Is this technique correct, or do I need to add some overhead?
Thanks

Yes, the technique is correct, but it is better to account for some overhead. I usually multiply the result by 10 :)
The most important thing to check is whether there is any trend in data volumes. Was the record count 4 years ago more or less equal to last year's? Is the business growing or steady? How fast is it growing? What are the prospects for the future? The last year is not always 25% of the last 4 years; it happens that the last year is more than the other 3 years added together.
The other, technical issue is the internal organisation of data in Oracle data files: the famous PCTFREE. If you expect the data to be updated, it is much better to keep some unused space in each database block in case some of your records get larger. This matters for performance: for example, you leave 10% of each database block free, and when you update a record with a longer value (like replacing a NULL column with an actual 25-character string), the record still fits into the same block. You should account for this and add it to your estimates.
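A back-of-the-envelope version of this calculation, with the PCTFREE padding and a growth trend folded in (Python purely for illustration; every number below is invented for the sketch, so substitute your own row counts and the AVG_ROW_LEN from DBA_TABLES):

```python
# Table size estimate: row count x average row length, grossed up for
# the space PCTFREE keeps free plus an assumed misc. overhead factor.

def estimate_size_bytes(row_count, avg_row_len, pctfree=10, overhead=1.15):
    """Rows x average row length, padded for PCTFREE and block overhead."""
    raw = row_count * avg_row_len
    padded = raw / (1 - pctfree / 100)   # space kept free by PCTFREE
    return int(padded * overhead)        # assumed misc. overhead factor

# Growth trend matters: last year is not always 25% of the last 4 years.
# Here a hypothetical 20% yearly growth instead of a flat count.
rows_year_1 = 400_000
rows_4_years = sum(round(rows_year_1 * 1.2 ** year) for year in range(4))

print(rows_4_years)
print(estimate_size_bytes(rows_4_years, avg_row_len=120))
```

The 10% PCTFREE and 1.15 overhead factor are placeholders; the point is that the raw count-times-row-length figure is a floor, not the final answer.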
On the other hand, if your records never get updated and you load them in batch, then maybe they can be ORDERed before insert and you can set up the table with the COMPRESS clause. Oracle's COMPRESS clause has very little in common with zip/gzip utilities, but it can bring you significant space savings.
Finally, there is no point in making the estimates too precise. They are only estimates, and reality will almost always differ. In general, it is better to overestimate and leave some disk space unused than to underestimate and need people to deal with the issue. Disks are cheap; people on the project are expensive.
Hi
I've set up an old 1121 access point as a WGB on our unified (i.e. lightweight) wireless network. It works fine, but the VLAN that
the wireless side connects to has between 200 and 300 clients at any one time. The bridge table size seems to be fixed at 300 entries, and I'm concerned that at some point this may overflow. I've put this in the wireless config:
interface Dot11Radio0
no bridge-group 1 source-learning
bridge-group 1 block-unknown-source
to try to keep the size of the table down, but it seems to make no difference. Is my only option to move the WGB to a VLAN which has fewer clients?
Thanks
Max Caines
University of Wolverhampton

Hi, I fetched this for you:
Total number of forwarding database elements in the system. The memory to hold bridge entries is allocated in blocks of memory sufficient to hold 300 individual entries. When the number of free entries falls below 25, another block of memory sufficient to hold another 300 entries is allocated. Thus, the total number of forwarding elements in the system is expanded dynamically, as needed, limited by the amount of free memory in the router.
Now, this documentation is for routers, but since both APs and routers run IOS code, the logic should still be the same.
So either:
1- keep me posted on what happens when you actually reach 300 entries in the current table, to see whether the AP allocates an extra 300 when the 25-free-entry threshold is reached
or
2- move the WGB to a less busy VLAN
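The allocation rule in the quoted passage can be sketched as follows (Python purely as illustration; the 300-entry block and 25-entry low-water mark come from the documentation quoted above, while the step-by-step simulation of the rule is my own assumption about how it plays out):

```python
# Sketch of the bridge-table growth rule: entries are allocated in blocks
# of 300, and another block is added once free entries fall below 25.

BLOCK = 300       # entries per allocated block (from the quoted doc)
LOW_WATER = 25    # allocate another block below this many free entries

def table_capacity(n_clients):
    """Total allocated bridge-table slots after learning n_clients entries."""
    capacity = BLOCK                      # first block is pre-allocated
    for used in range(1, n_clients + 1):
        if capacity - used < LOW_WATER:   # free slots dropped under 25
            capacity += BLOCK             # grab another block of 300
    return capacity

print(table_capacity(200))   # 300: well inside the first block
print(table_capacity(290))   # 600: crossing 276 entries triggers block 2
```

If the rule behaves the same on the 1121, the table should grow past 300 on demand rather than overflow, limited only by free memory on the AP.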
Thanks
Serge