Locked table stats
Hi All,
We locked the statistics on a table (TABLE1) with the intention of preventing the nightly auto gather stats job from including it in stats gathering.
Our understanding is that the nightly gather stats job kicks off at 10pm daily (on weekdays).
The manual stats gathering below runs for about 4 hours.
If we trigger it at 8.30pm, will the nightly gather stats job (10pm) pick the table up for stats gathering?
The first SQL unlocks the stats, and they are only locked again after completion, at about 12.30am.
exec dbms_stats.unlock_table_stats('AC', 'TABLE1');
exec dbms_stats.gather_table_stats(ownname=>'ac',tabname=>'TABLE1',estimate_percent => 50,cascade=>true, method_opt => 'for all columns size 254');
exec dbms_stats.lock_table_stats('AC', 'TABLE1');
thanks
You didn't mention your version, but there may be no need to unlock the stats if you are updating them manually. Just add the force parameter to your call to gather_table_stats, if your version has that option.
From the doc:
force
Gather statistics of table even if it is locked
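A minimal sketch of that approach, assuming the same AC.TABLE1 from the question and a version whose gather_table_stats supports the force parameter:

```sql
-- Stats stay locked the whole time; FORCE bypasses the lock for
-- this one call only, so no unlock/lock window is opened for the
-- nightly job to sneak through.
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'AC',
    tabname          => 'TABLE1',
    estimate_percent => 50,
    cascade          => TRUE,
    method_opt       => 'for all columns size 254',
    force            => TRUE);
END;
/
```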
Similar Messages
-
Locked table stats on volatile IOT result in suboptimal execution plan
Hello,
Since upgrading to 10gR2 we have been seeing weird behaviour in the execution plans of queries that join tables with a volatile IOT on which we deleted and locked statistics.
Execution plan of the example query running ok (SYS_IOT... is the volatile IOT):
0 SELECT STATEMENT Optimizer Mode=ALL_ROWS (Cost=12 Card=1 Bytes=169)
1 0 SORT AGGREGATE (Card=1 Bytes=169)
2 1 NESTED LOOPS OUTER (Cost=12 Card=1 Bytes=169)
3 2 NESTED LOOPS OUTER (Cost=10 Card=1 Bytes=145)
4 3 NESTED LOOPS (Cost=6 Card=1 Bytes=121)
5 4 NESTED LOOPS OUTER (Cost=5 Card=1 Bytes=100)
6 5 NESTED LOOPS (Cost=5 Card=1 Bytes=96)
7 6 INDEX FAST FULL SCAN ...SYS_IOT_TOP_76973 (Cost=2 Card=1 Bytes=28)
8 6 TABLE ACCESS BY INDEX ROWID ...VSUC (Cost=3 Card=1 Bytes=68)
9 8 INDEX RANGE SCAN ...VSUC_VORG (Cost=2 Card=1)
Since 10gR2 the index on the joined table is not used:
0 SELECT STATEMENT Optimizer Mode=ALL_ROWS (Cost=857 Card=1 Bytes=179)
1 0 SORT AGGREGATE (Card=1 Bytes=179)
2 1 NESTED LOOPS OUTER (Cost=857 Card=1 Bytes=179)
3 2 NESTED LOOPS OUTER (Cost=855 Card=1 Bytes=155)
4 3 NESTED LOOPS (Cost=851 Card=1 Bytes=131)
5 4 NESTED LOOPS OUTER (Cost=850 Card=1 Bytes=110)
6 5 NESTED LOOPS (Cost=850 Card=1 Bytes=106)
7 6 TABLE ACCESS FULL ...VSUC (Cost=847 Card=1 Bytes=68)
8 6 INDEX RANGE SCAN ...SYS_IOT_TOP_76973 (Cost=3 Card=1 Bytes=38)
I did an UNLOCK_TABLE_STATS and GATHER_TABLE_STATS on the IOT and everything worked fine - the database used the first execution plan.
Also, setting OPTIMIZER_FEATURES_ENABLE to 10.1.0.4 results in the correct execution plan, whereas 10.2.0.2 (the default on 10gR2) doesn't use the index - so I suppose it's an optimizer problem/bug/whatever.
I've also tried forcing the index with a hint - it scans the index, but the costs are extremely high.
Any help would be greatly appreciated,
regards
-sds
deng,
The first thing you should do is to switch to using the dbms_xplan package for generating execution plans. Amongst other things, this will give you the filter and access predicates as they were when Oracle produced the execution plan. It will also report comments like: 'dynamic sampling used for this statement'.
If you have deleted and locked stats on the IOT, then 10gR2 will (by default) be using dynamic sampling on that object - which means (in theory) it gets a better picture of how many rows really are there, and how well they might join to the next table. This may be enough to explain the change in plan.
What you might try, if the first plan is guaranteed to be good, is to collect stats on the IOT when there is NO data in the IOT, then lock the stats. (Alternatively, fake some stats that say the table is empty if it never is really empty).
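The second suggestion can be sketched as follows; the owner and table names are placeholders, and SET_TABLE_STATS is one way to fake "empty" statistics:

```sql
-- Option A: gather stats while the IOT is genuinely empty, then lock them.
-- Option B (below): fake stats that describe an empty table, then lock them.
BEGIN
  dbms_stats.set_table_stats(
    ownname => 'MYSCHEMA',
    tabname => 'MY_IOT',
    numrows => 0,      -- pretend the table is empty
    numblks => 1);
  dbms_stats.lock_table_stats('MYSCHEMA', 'MY_IOT');
END;
/
```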
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
LOCK TABLE vs select for update
Hello All,
If the requirement is to lock an entire huge table to prevent any users from performing any UPDATE statement, which statement gives the better performance, and why: LOCK TABLE or SELECT ... FOR UPDATE NOWAIT?
Is there any overhead to using the LOCK TABLE statement?
Thanks.
The reason I said to revoke the update privilege is that I do not understand the requirement. Why do you want to prevent users from updating the table? I am assuming that users should never be allowed to update the table; in that case, locking the table and SELECT FOR UPDATE would be no good. If you want to stop users from updating while someone else is updating - why? All that LOCK TABLE or SELECT FOR UPDATE will do is cause their sessions to wait (hang) until the locking process commits or rolls back. This could generate a few (sic) complaints that the user application is slow/freezing.
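For reference, the approaches under discussion look like this (a sketch; the table and user names are placeholders):

```sql
-- Approach 1: lock the whole table; fails immediately if anyone holds a lock.
LOCK TABLE big_table IN EXCLUSIVE MODE NOWAIT;

-- Approach 2: lock every row; also fails immediately if any row is held,
-- but first has to visit and lock each row, which is costly on a huge table.
SELECT * FROM big_table FOR UPDATE NOWAIT;

-- If the real requirement is "users must never update", a grant change
-- is cheaper and more permanent than either lock:
REVOKE UPDATE ON big_table FROM some_user;
```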
If you can state the business problem, perhaps we can offer a solution. -
Lock Table and permit only SELECT-Statement
Hi all,
can I lock a Table and permit only SELECT-statements on a Table?
Regards
Leonid
Hi Kamal,
I would like to configure it in such a way that while the SELECT statement is running, another user can't insert
data into the table.
If possible, I would also like to configure the lock so that the user's entries go into a buffer, and once the table is unlocked, all rows from the buffer are written back into the table.
I do it like this:
SQL Script script_test.sql:
set echo off
set verify off
set pagesize 0
set termout off
set heading off
set feedback off
lock table mytable in share row exclusive mode;
spool c:\Temp\script_info.lst
select id||'|'||dae||'|'||name||'|'||name1||'|'||hiredate
||'|'||street||'|'||nr||'|'||plznum||'|'||city
||'|'||email||'|'||telephon||'|'||cddfeas||'|'||number
||'|'||why||'|'||fgldwer||'|'||wahl||'|'||adress
||'|'||las
from mytable
where las is null;
spool off
spool c:\Temp\select_from_all_tables.lst
select *
from all_tables
order by owner;
spool off
spool c:\Temp\select1_from_all_tables.lst
select *
from all_tables
order by owner;
spool off
update mytable
set las = 'x';
commit;
set feedback on
set heading on
set termout on
set pagesize 24
set verify on
set echo on
Afterwards I start another session:
insert into briefwahl
values(38,'11.06.2003 09:37','Test','Test','01.01.1990',
'Test','12','90000','Test',
'[email protected]','12345657','test','123',
'test','test','test','test',
null);
Then I go into the first session and start the script, and I immediately go into the second session and do a commit. And although I have locked the table, the new entries still end up in the spooled .lst file. Why, I do not understand. And all rows in the table get updated.
Regards
Leonid
P.S. Sorry for my English. I wrote this with a translator. -
Timeout when inserting row in a locked table
How can I set the timeout before an INSERT statement fails with error ORA-02049 (timeout: distributed transaction waiting for lock) when the entire table has been locked with LOCK TABLE?
Documentation says to modify DISTRIBUTED_LOCK_TIMEOUT parameter, but it is obsolete in Oracle 8i.
Any ideas?
You could set an alarm() with a signal handler. Then on return (for whatever reason) you clear the alarm, inspect the return code of the SQL execute call, and determine what happened (i.e. did the transaction complete, or did the alarm get you).
Hope it helps.
-Lou
[email protected]
-
MySQL lock table size Exception
Hi,
Our users get random error pages (Error 500) from Vibe/Tomcat.
If the user tries again, it works without an error.
Here are some errors from catalina.out:
Code:
2013-07-31 06:23:12,225 WARN [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:23:12,225 ERROR [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:23:12,242 WARN [http-8080-8] [org.kablink.teaming.web.portlet.handler.LogContextInfoInterceptor] - Action request URL [http://vibe.*******.ch/ssf/a/do?p_name=ss_forum&p_action=1&entryType=4028828f3f0ed66d013f0f3ff208013d&binderId=2333&action=add_folder_entry&vibeonprem_url=1] for user [kablink,ro]
2013-07-31 06:23:12,245 WARN [http-8080-8] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
org.springframework.dao.InvalidDataAccessApiUsageException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry; nested exception is org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry
at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:654)
at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
2013-07-31 06:23:36,474 ERROR [Sitescape_QuartzSchedulerThread] [org.quartz.core.ErrorLogger] - An error occured while scanning for the next trigger to fire.
org.quartz.JobPersistenceException: Couldn't acquire next trigger: The total number of locks exceeds the lock table size [See nested exception: java.sql.SQLException: The total number of locks exceeds the lock table size]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2794)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$36.execute(JobStoreSupport.java:2737)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3768)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2733)
at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:264)
Caused by: java.sql.SQLException: The total number of locks exceeds the lock table size
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1573)
at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1169)
2013-07-31 06:27:12,463 WARN [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.jbpm.graph.def.GraphElement] - action threw exception: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
org.springframework.jdbc.UncategorizedSQLException: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
at org.springframework.orm.hibernate3.HibernateAccessor.convertJdbcAccessException(HibernateAccessor.java:424)
2013-07-31 06:27:22,393 INFO [CT-kablink] [org.kablink.teaming.lucene.LuceneProvider] - (kablink) Committed, firstOpTimeSinceLastCommit=1375251142310, numberOfOpsSinceLastCommit=12. It took 82.62174 milliseconds
2013-07-31 06:28:22,686 INFO [Sitescape_Worker-9] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252102500
2013-07-31 06:29:51,309 INFO [Sitescape_Worker-10] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252191099
2013-07-31 06:32:08,820 WARN [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:08,820 ERROR [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:10,775 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:10,775 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:12,305 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:12,305 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:14,605 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:14,606 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:16,056 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:16,056 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:24,166 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:24,166 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:24,167 WARN [http-8080-1] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
org.springframework.jdbc.UncategorizedSQLException: Hibernate flushing: could not insert: [org.kablink.teaming.domain.AuditTrail]; uncategorized SQLException for SQL [insert into SS_AuditTrail (zoneId, startDate, startBy, endBy, endDate, entityType, entityId, owningBinderId, owningBinderKey, description, transactionType, fileId, applicationId, deletedFolderEntryFamily, type, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 'A', ?)]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
at org.springframework.orm.hibernate3.HibernateTransactionManager.convertJdbcAccessException(HibernateTransactionManager.java:805)
at org.springframework.orm.hibernate3.HibernateTransactionManager.convertHibernateAccessException(HibernateTransactionManager.java:791)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:664)
It always logs MySQL error code 1206:
MySQL :: MySQL 5.4 Reference Manual :: 13.6.12.1 InnoDB Error Codes
1206 (ER_LOCK_TABLE_FULL)
The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size.
The value of innodb_buffer_pool_size is set to 8388608 (8MB) on my server.
The documentation (MySQL :: MySQL 5.4 Reference Manual :: 13.6.3 InnoDB Startup Options and System Variables) says that the default is 128MB.
Can I set the value to 134217728 (128MB), or will this cause other problems? Will this setting solve my problem?
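For reference, the change being considered would look like this in my.cnf (a sketch; the file location and section layout vary by distribution, and the server must be restarted for it to take effect):

```ini
[mysqld]
# Raise the InnoDB buffer pool from the 8MB currently configured.
# InnoDB's lock table lives inside this pool, so a larger pool also
# allows more row locks, which is what error 1206 is complaining about.
innodb_buffer_pool_size = 128M
```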
Thanks for your help.
I already found an entry from Kablink:
https://kablink.org/ssf/a/c/p_name/s...beonprem_url/1
But I think this can't be a permanent solution...
Our MySQL server version is 5.0.95, running on SLES 11. -
Command object locks table (TX)
I'm having locking problems with ODP.NET. I'm updating the same table with many connections in many threads. I get table locks for hours (over 24 hours) and I can't find any timeout to set. The CommandTimeout on the Command object is not implemented.
In the V$LOCK view I can see two locks on the same table made by two session IDs from the same computer.
I'm using transactions on the Connection object.
How do I set a timeout for the update so that the lock will disappear?
String connString = "Data Source=" + dsn + ";"
+ "User ID=" + user + ";"
+ "Password=" + pwd;
OracleConnection conn = new OracleConnection(connString);
String sql = "UPDATE ABDATA2 SET ABNAVN='SOLNA STAD ' WHERE OBJID = -21805738";
OracleCommand command = new OracleCommand(sql, conn);
//command.CommandTimeout = 60000; Exists but not supported
command.ExecuteNonQuery();
Is there anyone who has a clue?
Hi Neo, thanks for responding;
I did not publish the report with saved data, and I'm not sure what you mean about appending the string value? Could you explain that part?
The parameter I created is only declared as a string, with no default value, with a name of "CodeTableName". I used the Create Parameter button in the Command object window to make it.
I then added the parameter name to my SQL statement as listed above.
The actual code table names in the database are longer than what the parameter calls for. They all start with "TIBURON.ZZ_" and end with "_Codes". I didn't want the users to have to remember the full names, so that's why the SQL statement shows those additional parts.
The report works perfectly when I run it from Crystal Reports 9 or CR11 itself. It's only when I upload the report to our web server that users aren't given a prompt to enter a parameter. They only get a button labeled "Run Report".
Any ideas?
Thanks,
Joe -
Querying v$lock table causes instance CPU to jump
Hi all. I run several queries to gather metrics on my Oracle instance for monitoring purposes. One of the queries I tried, to give me a percent lock utilization, is:
SELECT round(NVL((count(b.sid)/a.value)*100,0.0),3) "Pct Lock Utilization" FROM v$lock b, v$parameter a WHERE a.name = 'dml_locks' GROUP BY a.value
Every time I run this query, however, the CPU usage of the Oracle instance spikes about 10 percentage points. At this point I am assuming it is something in the query causing this, but I am not sure what. My question is whether there is a better query to use to get lock utilization stats via a SQL statement. Any and all help is appreciated.
You must be coming from a Sybase/SQL Server background :-)
Lock utilization is not an issue in Oracle, since row locks are effectively unlimited in number, and most locks (or latches or enqueues) are held for only a very short period of time. Having said that, the jump in CPU is not surprising.
The v$ views are all complex memory structures, not physical tables. It is very expensive to query them, and it also slows the system, since Oracle needs to acquire latches (lightweight locks) to prevent changes to these structures, and do a lot of memory manipulation to get at the values.
TTFn
John -
Lock table: How to notify others
Hi,
I 've created a Stored Procedure that inserts into a table.
The first and last statements of the BEGIN...END part of the stored procedure lock the table successfully.
BEGIN
LOCK TABLE MY_TABLE IN EXCLUSIVE MODE NOWAIT;
COMMIT;
LOCK TABLE MY_TABLE IN SHARE MODE NOWAIT;
EXCEPTION
WHEN OTHERS THEN
dbms_output.PUT_LINE(SQLERRM);
ROLLBACK;
END;
/
The purpose is to lock the table so that if another user accidentally tries to execute the script, s/he won't be able to modify the data in the table.
However, if the procedure is already running when another user tries to execute it, the second user simply sees the procedure terminate very quickly (it normally takes around 15-20 minutes, depending on the data). There is no message that the table is locked or that the procedure is in progress for another user.
How can I notify the second user that the procedure is already in progress in another session, or that the table is locked? What should I put in the EXCEPTION part, maybe, to do that?
Thank you in advance.
Regards,
John.
> PRAGMA EXCEPTION_INIT(already_lock, -54); is the -54 value meant for the built-in "table locked" error? If so, even without having a custom exception, Oracle by itself would have thrown it in the second session, right? Please do correct me if I am wrong.
-54, and yes it is.
> What will be the impact if I have WAIT instead of NOWAIT? Will my second session just be on hold, waiting for the lock to be released?
Second session will wait... but WAIT doesn't exist; just write LOCK TABLE t3 IN EXCLUSIVE MODE;
I advise you to read this for the LOCK command: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_914a.htm#2064408
and this for exceptions : http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/07_errs.htm#784
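Putting those pieces together, a minimal sketch of the suggested approach (the table name and message text are placeholders):

```sql
DECLARE
  already_lock EXCEPTION;
  PRAGMA EXCEPTION_INIT(already_lock, -54);  -- ORA-00054: resource busy
BEGIN
  LOCK TABLE my_table IN EXCLUSIVE MODE NOWAIT;
  -- ... do the long-running work here ...
  COMMIT;  -- commit (or rollback) releases the table lock
EXCEPTION
  WHEN already_lock THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Procedure is already running in another session - please try again later.');
END;
/
```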
Nicolas. -
Issue: Lock table is out of available object entries
Hi all,
We have a method that adds records into BDB. Once there are more than 10000 records, if we continue adding records (for example, adding 400 records and then doing another update/add operation on BDB), the operation fails.
The error message is "Lock table is out of available object entries".
How do we resolve it?
Thanks.
Jane.
First, the BDB stats are as below:
1786 Last allocated locker ID
0x7fffffff Current maximum unused locker ID
9 Number of lock modes
2000 Maximum number of locks possible
2000 Maximum number of lockers possible
2000 Maximum number of lock objects possible
52 Number of current locks
1959 Maximum number of locks at any one time
126 Number of current lockers
136 Maximum number of lockers at any one time
26 Number of current lock objects
1930 Maximum number of lock objects at any one time
21M Total number of locks requested (21397151)
21M Total number of locks released (21397099)
0 Total number of lock requests failing because DB_LOCK_NOWAIT was set
0 Total number of locks not immediately available due to conflicts
0 Number of deadlocks
0 Lock timeout value
0 Number of locks that have timed out
0 Transaction timeout value
0 Number of transactions that have timed out
736KB The size of the lock region
0 The number of region locks that required waiting (0%)
Then I run the method to insert 29 records into BDB. The BDB isn't locked yet, and the stats are:
1794 Last allocated locker ID
0x7fffffff Current maximum unused locker ID
9 Number of lock modes
2000 Maximum number of locks possible
2000 Maximum number of lockers possible
2000 Maximum number of lock objects possible
52 Number of current locks
1959 Maximum number of locks at any one time
134 Number of current lockers
136 Maximum number of lockers at any one time
26 Number of current lock objects
1930 Maximum number of lock objects at any one time
22M Total number of locks requested (22734514)
22M Total number of locks released (22734462)
0 Total number of lock requests failing because DB_LOCK_NOWAIT was set
0 Total number of locks not immediately available due to conflicts
0 Number of deadlocks
0 Lock timeout value
0 Number of locks that have timed out
0 Transaction timeout value
0 Number of transactions that have timed out
736KB The size of the lock region
0 The number of region locks that required waiting (0%)
Then I run the method again to insert records; the issue "Lock table is out of available locks" occurs, and the BDB stats are:
1795 Last allocated locker ID
0x7fffffff Current maximum unused locker ID
9 Number of lock modes
2000 Maximum number of locks possible
2000 Maximum number of lockers possible
2000 Maximum number of lock objects possible
52 Number of current locks
2000 Maximum number of locks at any one time
135 Number of current lockers
137 Maximum number of lockers at any one time
27 Number of current lock objects
1975 Maximum number of lock objects at any one time
26M Total number of locks requested (26504607)
26M Total number of locks released (26504553)
0 Total number of lock requests failing because DB_LOCK_NOWAIT was set
0 Total number of locks not immediately available due to conflicts
0 Number of deadlocks
0 Lock timeout value
0 Number of locks that have timed out
0 Transaction timeout value
0 Number of transactions that have timed out
736KB The size of the lock region
0 The number of region locks that required waiting (0%)
Why does this issue occur, and how can we resolve it?
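The stats above show "Maximum number of locks at any one time" hitting the configured limit of 2000. One likely fix (a sketch; the right values depend on your workload) is to raise the lock-region limits in the environment's DB_CONFIG file and then re-open the environment:

```
set_lk_max_locks   10000
set_lk_max_objects 10000
set_lk_max_lockers 10000
```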
Thanks very much.
Jane -
URGENT : select from table statement in ABAP OO
Hi all,
I am an absolute ABAP OO beginner and need some quick help from an expert. How can I make a selection from an existing table (e.g. MARA) in a BAdI which is programmed according to ABAP OO principles?
In the old ABAP school you could put a TABLES statement at the beginning and then do a SELECT * FROM, but this does not work in ABAP OO.
How should I define such simple selections from existing tables? Anyone?
Thanks a lot,
Eric Hassenberg
* define internal table
data: i_mara like standard table of mara.
*select to this table
select * from mara into table i_mara.
Also, you have to define a work area for this internal table in order to use it:
data: w_mara like line of i_mara.
Is there any way to lock tables when I insert data with SQL*Loader, or does Oracle do it for me automatically?
how can i do this?
Thanks a lot for your help.
> Are there any problems if, in the middle of my load (and commits), a user updates or queries data?
The only problem that I see is that you may run short of undo space (rollback segment space) if your undo space is limited and a user is running a long SELECT query, for example: but this problem would only trigger ORA-1555 for the SELECT query, or (less likely, since you have several COMMITs) an ORA-16XX because the load transaction could not find enough undo space.
Data is not visible to other sessions unless the session which is loading the data commits it. That's the way Oracle handles the read committed isolation level for transactions.
See http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c21cnsis.htm#2689
> Or what happens if, when I want to insert data, someone has the table busy?
You will get blocked if you try to insert data that has the same primary key as a row being inserted by a concurrent transaction. -
How to get the number of hits ("returned rows") in read table statement
Hi Experts
I have the statement shown below, which seems not to work as I would like it to. My problem is that I would like to do two different things depending on whether a READ TABLE statement results in 0 hits or in 1 (or more) hits.
READ TABLE g_ship_item
WITH KEY
l_ztknum = DATA_PACKAGE-/bic/ztknum
BINARY SEARCH.
IF sy-subrc IS INITIAL.
* no hits found
DATA_PACKAGE-/BIC/ZSTAGEERR = 1.
ELSE.
* hits were found and we will do something else...
DATA_PACKAGE-/BIC/ZSTAGEERR = 0.
ENDIF.
Hope someone can help me out of my problem...
Thanks in advance, regards
Torben
Hi,
As you are using the READ statement with BINARY SEARCH, check whether the internal table g_ship_item is sorted by field /bic/ztknum or not. If it is not sorted, the result of this READ statement is not reliable.
Below is the corrected code:
sort g_ship_item by /bic/ztknum.
READ TABLE g_ship_item WITH KEY l_ztknum = DATA_PACKAGE-/bic/ztknum BINARY SEARCH.
Thanks,
Satya -
Creating a better update table statement
Hello,
I have the following update-table statement that I would like to make more efficient. This thing is taking forever. A little background: the source table/views are not indexed, and the larger of the two only has 150k records. Any ideas on making it more efficient would be appreciated.
Thanks.
Ryan
Script:
DECLARE
V_EID_CIV_ID SBI_EID_W_VALID_ANUM_V.SUBJECT_KEY%TYPE;
V_EID_DOE DATE;
V_EID_POE SBI_EID_W_VALID_ANUM_V.POINT_OF_ENTRY%TYPE;
V_EID_APPR_DATE DATE;
V_CASE_CIV_ID SBI_DACS_CASE_RECORDS.CASE_EID_CIV_ID%TYPE;
V_CASE_DOE DATE;
V_CASE_POE SBI_DACS_CASE_RECORDS.CASE_CODE_ENTRY_PLACE%TYPE;
V_CASE_APPR_DATE DATE;
V_CASE_DEPART_DATE DATE;
V_SBI_UPDATE_STEP SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP%TYPE;
V_SBI_CIV_ID SBI_DACS_CASE_RECORDS.SBI_CIV_ID%TYPE;
CURSOR VALID_CIV_ID_FROM_EID IS
SELECT EID.SUBJECT_KEY,
TO_DATE(EID.PROCESS_ENTRY_DATE),
EID.POINT_OF_ENTRY,
TO_DATE(EID.APPREHENSION_DATE),
DACS.CASE_EID_CIV_ID,
TO_DATE(DACS.CASE_DATE_OF_ENTRY,'YYYYMMDD'),
DACS.CASE_CODE_ENTRY_PLACE,
TO_DATE(DACS.CASE_DATE_APPR,'YYYYMMDD'),
TO_DATE(DACS.CASE_DATE_DEPARTED,'YYYYMMDD'),
DACS.SBI_UPDATE_STEP,
DACS.SBI_CIV_ID
FROM SBI_EID_W_VALID_ANUM_V EID,
SBI_DACS_CASE_RECORDS DACS
WHERE DACS.CASE_NBR_A = EID.ALIEN_FILE_NUMBER;
BEGIN
OPEN VALID_CIV_ID_FROM_EID;
SAVEPOINT A;
LOOP
FETCH VALID_CIV_ID_FROM_EID INTO V_EID_CIV_ID, V_EID_DOE, V_EID_POE,V_EID_APPR_DATE,V_CASE_CIV_ID, V_CASE_DOE,V_CASE_POE,V_CASE_APPR_DATE,V_CASE_DEPART_DATE,V_SBI_UPDATE_STEP,V_SBI_CIV_ID;
DBMS_OUTPUT.PUT_LINE('BEFORE');
EXIT WHEN VALID_CIV_ID_FROM_EID%FOUND;
DBMS_OUTPUT.PUT_LINE('AFTER');
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_CASE_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 1
WHERE V_CASE_CIV_ID IS NOT NULL
AND V_CASE_CIV_ID <> 0;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 2
WHERE V_SBI_CIV_ID IS NULL AND V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE
AND V_EID_APPR_DATE = V_CASE_DEPART_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 3
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 4
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) > -4
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) < 4 ;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 5
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE <> V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 6
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_POE = V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 7
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 8
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) > -4
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) < 4;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 9
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 10
WHERE V_SBI_UPDATE_STEP = 0
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) > -4
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) < 4;
END LOOP;
CLOSE VALID_CIV_ID_FROM_EID;
COMMIT;
END;
That's it. Thanks for your help.
Ryan
Please use [ code] or [ pre] tags to format code before posting:
DECLARE
V_EID_CIV_ID SBI_EID_W_VALID_ANUM_V.SUBJECT_KEY%TYPE;
V_EID_DOE DATE;
V_EID_POE SBI_EID_W_VALID_ANUM_V.POINT_OF_ENTRY%TYPE;
V_EID_APPR_DATE DATE;
V_CASE_CIV_ID SBI_DACS_CASE_RECORDS.CASE_EID_CIV_ID%TYPE;
V_CASE_DOE DATE;
V_CASE_POE SBI_DACS_CASE_RECORDS.CASE_CODE_ENTRY_PLACE%TYPE;
V_CASE_APPR_DATE DATE;
V_CASE_DEPART_DATE DATE;
V_SBI_UPDATE_STEP SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP%TYPE;
V_SBI_CIV_ID SBI_DACS_CASE_RECORDS.SBI_CIV_ID%TYPE;
CURSOR VALID_CIV_ID_FROM_EID IS
SELECT EID.SUBJECT_KEY,
TO_DATE(EID.PROCESS_ENTRY_DATE),
EID.POINT_OF_ENTRY,
TO_DATE(EID.APPREHENSION_DATE),
DACS.CASE_EID_CIV_ID,
TO_DATE(DACS.CASE_DATE_OF_ENTRY,'YYYYMMDD'),
DACS.CASE_CODE_ENTRY_PLACE,
TO_DATE(DACS.CASE_DATE_APPR,'YYYYMMDD'),
TO_DATE(DACS.CASE_DATE_DEPARTED,'YYYYMMDD'),
DACS.SBI_UPDATE_STEP,
DACS.SBI_CIV_ID
FROM SBI_EID_W_VALID_ANUM_V EID,
SBI_DACS_CASE_RECORDS DACS
WHERE DACS.CASE_NBR_A = EID.ALIEN_FILE_NUMBER;
BEGIN
OPEN VALID_CIV_ID_FROM_EID;
SAVEPOINT A;
LOOP
FETCH VALID_CIV_ID_FROM_EID INTO V_EID_CIV_ID, V_EID_DOE,
V_EID_POE,V_EID_APPR_DATE,V_CASE_CIV_ID, V_CASE_DOE,
V_CASE_POE,V_CASE_APPR_DATE,V_CASE_DEPART_DATE,V_SBI_UPDATE_STEP,V_SBI_CIV_ID;
DBMS_OUTPUT.PUT_LINE('BEFORE');
EXIT WHEN VALID_CIV_ID_FROM_EID%NOTFOUND; -- %NOTFOUND exits after the last row; %FOUND would exit after the first fetch
DBMS_OUTPUT.PUT_LINE('AFTER');
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_CASE_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 1
WHERE V_CASE_CIV_ID IS NOT NULL
AND V_CASE_CIV_ID <> 0;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 2
WHERE V_SBI_CIV_ID IS NULL AND V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE
AND V_EID_APPR_DATE = V_CASE_DEPART_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 3
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 4
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) > -4
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) < 4 ;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 5
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE <> V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 6
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_POE = V_CASE_POE
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 7
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_APPR_DATE = V_CASE_APPR_DATE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 8
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) > -4
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) < 4;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 9
WHERE V_SBI_UPDATE_STEP = 0
AND V_EID_DOE = V_CASE_DOE
AND V_EID_POE = V_CASE_POE;
UPDATE SBI_DACS_CASE_RECORDS
SET SBI_DACS_CASE_RECORDS.SBI_CIV_ID = V_EID_CIV_ID,
SBI_DACS_CASE_RECORDS.SBI_UPDATE_STEP = 10
WHERE V_SBI_UPDATE_STEP = 0
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) > -4
AND (V_EID_APPR_DATE - V_CASE_APPR_DATE) < 4;
END LOOP;
CLOSE VALID_CIV_ID_FROM_EID;
COMMIT;
END;
Peter D.
-
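One more point worth flagging in the block above: every UPDATE's WHERE clause compares only PL/SQL variables (V_EID_DOE = V_CASE_DOE and so on), never a column of SBI_DACS_CASE_RECORDS, so whenever those conditions hold the UPDATE rewrites every row of the table, not just the row the cursor fetched. A hypothetical reworking (a sketch, not the poster's code) would carry each row's ROWID through the cursor and key the UPDATE to it:

```sql
-- Sketch only: restrict each UPDATE to the row the cursor just fetched.
-- Table and column names are taken from the post; the ROWID plumbing is added.
DECLARE
  CURSOR valid_civ_id_from_eid IS
    SELECT dacs.ROWID                                   AS case_rid,
           eid.subject_key                              AS eid_civ_id,
           TO_DATE(eid.process_entry_date)              AS eid_doe,
           TO_DATE(dacs.case_date_of_entry, 'YYYYMMDD') AS case_doe
      FROM sbi_eid_w_valid_anum_v eid,
           sbi_dacs_case_records  dacs
     WHERE dacs.case_nbr_a = eid.alien_file_number;
BEGIN
  FOR r IN valid_civ_id_from_eid LOOP
    IF r.eid_doe = r.case_doe THEN
      UPDATE sbi_dacs_case_records
         SET sbi_civ_id      = r.eid_civ_id,
             sbi_update_step = 3
       WHERE ROWID = r.case_rid;  -- only the fetched row, not the whole table
    END IF;
  END LOOP;
  COMMIT;
END;
```

The cursor FOR loop also sidesteps the OPEN/FETCH/EXIT bookkeeping entirely.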
How to specify tablespace for a primary key index in create table statement
How to specify the tablespace for a primary key index in a create table statement?
Is the following statement right?
CREATE TABLE 'GPS'||TO_CHAR(SYSDATE+1,'YYYYMMDD')
("ID" NUMBER(10,0) NOT NULL ENABLE,
"IP_ADDRESS" VARCHAR2(32 BYTE),
"EQUIPMENT_ID" VARCHAR2(32 BYTE),
"PACKET_DT" DATE,
"PACKET" VARCHAR2(255 BYTE),
"PACKET_FORMAT" VARCHAR2(32 BYTE),
"SAVED_TIME" DATE DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "UDP_LOG_PK" PRIMARY KEY ("ID") TABLESPACE "INDEX_DATA"
TABLESPACE "SBM_DATA";
Thank you
Edited by: qkc on 09-Nov-2009 13:42
As orafad indicated, you'll have to use the USING INDEX clause from the documentation, i.e.
SQL> ed
Wrote file afiedt.buf
1 CREATE TABLE GPS
2 ("ID" NUMBER(10,0) NOT NULL ENABLE,
3 "IP_ADDRESS" VARCHAR2(32 BYTE),
4 "EQUIPMENT_ID" VARCHAR2(32 BYTE),
5 "PACKET_DT" DATE,
6 "PACKET" VARCHAR2(255 BYTE),
7 "PACKET_FORMAT" VARCHAR2(32 BYTE),
8 "SAVED_TIME" DATE DEFAULT CURRENT_TIMESTAMP,
9 CONSTRAINT "UDP_LOG_PK" PRIMARY KEY ("ID") USING INDEX TABLESPACE "USERS"
10 )
11* TABLESPACE "USERS"
SQL> /
Table created.
Justin
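For completeness, the USING INDEX TABLESPACE clause also works on an inline column-level constraint; the tablespace names below are illustrative placeholders, not from the thread:

```sql
-- Sketch: primary-key index and table data placed in separate tablespaces.
-- "index_ts" and "data_ts" are hypothetical tablespace names.
CREATE TABLE gps (
  id NUMBER(10)
     CONSTRAINT gps_pk PRIMARY KEY
     USING INDEX TABLESPACE index_ts
) TABLESPACE data_ts;
```

Note also that a date-suffixed name like the original 'GPS'||TO_CHAR(SYSDATE+1,'YYYYMMDD') cannot appear directly in DDL; it would need dynamic SQL, i.e. building the statement as a string and running it with EXECUTE IMMEDIATE.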