Response.write after insert/update
Hi
Could anyone please show me how to do a Response.Write message on the page after an insert/update?
I'm wondering especially about the placement and the syntax of the response.
DWCS5, Access, Asp
Christian
I'm sorry to say I don't know how to test values in ASP; I've only just started using it.
I didn't get it to work, so I threw away the code and went for an ordinary text response on a second page.
Might as well get a book and read up on it
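For what it's worth, a minimal classic ASP sketch of the idea (table, column, and connection names here are made up; a Dreamweaver insert page usually redirects via MM_editRedirectUrl, so to show a message on the same page you can leave that redirect empty and place the Response.Write after the code that executes the insert):

```asp
<%
' Hypothetical example: run the insert, then write a confirmation.
Dim conn
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open "your-Access-connection-string"   ' placeholder
conn.Execute "INSERT INTO tblGuestbook (GuestName) VALUES ('Christian')"
conn.Close
Set conn = Nothing

' Placement: this line runs only after the insert above has
' succeeded, so the message appears once the record is saved.
Response.Write "<p>Your record has been saved.</p>"
%>
```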
Similar Messages
-
Bc4j bug at refresh after insert / update
I have a table with a trigger on it that fills an attribute, in a 9i DB. One can take the scott/tiger EMP table and use the
following trigger:
create or replace trigger t_bri_emp
before insert on emp
for each row
declare
  v_no number(7,2);
begin
  select user_SEQ.nextval * 100
    into v_no
    from dual;
  :new.sal := v_no;
end;
Then create a BC4J project with an entity object based on this table (EMP),
and a view object based on the created entity object.
Note: when you create the BC4J project, you must set the SQL Flavor to SQL92 or OLite.
When you set the attribute settings for the attribute the trigger is on (SAL)
to "refresh after insert / update" (or both) in the entity object, you get a
NullPointerException when you try to insert a new row and commit.
java.lang.NullPointerException
oracle.jdbc.ttc7.TTIoac oracle.jdbc.ttc7.TTCAdapter.newTTCType(oracle.jdbc.dbaccess.DBType)
oracle.jdbc.ttc7.NonPlsqlTTCColumn[] oracle ....................
All of this can be reproduced just by using the wizard and starting the BC4J project with the tester.
Any workaround?

Sven:
I looked into your issue. It turns out your problem is caused by a bug in the system (bug #2409955).
The bug is that for non-Oracle SQLBuilder, we are not processing refresh-on-insert and refresh-on-update attributes correctly. We end up forming an invalid SQL statement and the JDBC driver gives the obscure NullPointerException.
Thus, until 9.0.3, you should not use refresh-on-insert/update attributes on a non-Oracle SQLBuilder.
If you need the refresh-on-insert/update attribute, here is a possible workaround:
1. Whenever you invoke the tester, on the first panel, click on the 'Properties' tab. In the list of properties, you should find one for 'jbo.SQLBuilder'. It will say 'SQL92'. Remove it, so that it is empty, and run the tester. The tester will then use the Oracle SQLBuilder, which handles refresh-on-insert/update attributes correctly for you.
2. For switching between Oracle SQLBuilder and SQL92 SQLBuilder, you can try the following:
2a) Locate bc4j.xcfg and your Project??.jpx under your 'src' directory.
2b) Make copies of these files.
2c) Edit them so that you have one set for Oracle SQLBuilder and one set for SQL92 SQLBuilder. For Oracle SQLBuilder, make sure these files do NOT have entries like:
<jbo.SQLBuilder>..</jbo.SQLBuilder> in bc4j.xcfg and
<Attr Name="_jbo.SQLBuilder" Value="..." /> in Project??.jpx
When there is no jbo.SQLBuilder entry, BC4J will default to Oracle.
For SQL92, make sure you have
<jbo.SQLBuilder>SQL92</jbo.SQLBuilder> in bc4j.xcfg and
<Attr Name="_jbo.SQLBuilder" Value="SQL92" /> in Project??.jpx.
Before you run, decide which SQLBuilder to use (for 9.0.2, with retrieve-on-insert/update attrs, you have no choice but to use Oracle SQLBuilder) and copy these files into your class path, e.g.,
<jdev-install-dir>\jdev\mywork\Workspace1\Project1\classes
When you get 9.0.3, then you should be able to switch between Oracle and SQL92 SQLBuilders freely.
Thanks.
Sung -
(Automatic) Refresh of Cached object after insert/update
Hi,
(I am using Toplink 9.0.3 against an Oracle Database)
I am inserting and updating records in the database through objects registered in a TopLink UnitOfWork. I happen to know that certain database columns will get (changed) values because of database triggers.
Is it true that the only way to get these changed values reflected back to the TopLink cache is by explicitly executing a
session.refreshObject();
call for every object changed in the UnitOfWork?
Is there no way to inform TopLink (for example, through the descriptors for the relevant classes) that for certain classes an automatic synchronization with the database must be performed after the insert/update?
I have not been able to find such a setting, but I may have overlooked it - I hope I did.
Thanks for your help,
Lucas Jellema (AMIS)

In this case use a postMerge event -- it will get called after the merge into the cache, and then you could update the object explicitly.
Ultimately, the way to achieve the behavior you're looking for is events or refreshing.
- Don -
Refreshing the RowSetBrowser webbean after insert/update(?)
How do I refresh the RowSetBrowser web bean after inserting or updating and committing a record thru the EditCurrentRecord web bean?
I am unable to see the new record added in the RowSetBrowser unless I close the browser and re-enter.
Any advice would be appreciated!
After your call to initialize() in the DWB, you can refresh the cache by calling:
<%
dbw.getRowset().executeQuery();
%> -
Create trigger for after insert update
I have a table tt (id number(10), name varchar2(20), status varchar2(1), stage varchar2(1)).
When I insert data into table tt:
id  name  status  stage
1   anil          a
then the trigger should fire and copy the value of stage into status, like:
1   anil  a       a
Please give an example.

Hi,
please try this code:
CREATE OR REPLACE TRIGGER <triggername>
BEFORE INSERT OR UPDATE
ON tt
FOR EACH ROW
BEGIN
  -- Optionally record the change in an audit table
  -- (orders_audit must already exist with matching columns)
  INSERT INTO orders_audit
    ( id,
      name,
      stage )
  VALUES
    ( :new.id,
      :new.name,
      :new.stage );
  -- Assign the new stage value to status directly. A BEFORE row
  -- trigger may modify :new values, and querying or updating tt
  -- from its own row trigger would raise a mutating-table error
  -- (ORA-04091).
  :new.status := :new.stage;
END; -
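Assuming a trigger along those lines is in place (one that copies :new.stage into :new.status in a BEFORE row trigger), a quick sanity check could look like this:

```sql
-- Hypothetical check against the tt table from the question
INSERT INTO tt (id, name, status, stage) VALUES (1, 'anil', NULL, 'a');

SELECT id, name, status, stage FROM tt;
-- 1  anil  a  a   (status has been filled in from stage)
```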
ORA-29532 exception from after insert update trigger
Trigger code:
CREATE OR REPLACE TRIGGER SAMS.PERS_ALG_AIU_TRG
AFTER INSERT OR UPDATE
ON PERS_ALG
FOR EACH ROW
DECLARE
trans_type VARCHAR2(1);
event_id CONSTANT VARCHAR2(7) := 'A31_ZA1';
data_src CONSTANT VARCHAR2(15) := 'SI_A31_ZA1_VW';
p_row_id ROWID;
select_stmt VARCHAR2(5000);
XMLString CLOB;
BEGIN
IF INSERTING THEN
trans_type := 'I';
ELSE
trans_type := 'U';
END IF;
SELECT ROWID
INTO p_row_id
FROM personnel
WHERE pers_seq = :new.pers_seq;
select_stmt := 'SELECT * FROM si_a31_ZA1_vw WHERE start_date = MAX(start_date) AND ROW_ID =chartorowid('''||p_row_id||''')';
-- Produce the XML
XMLString := si_lib.get_XML(select_stmt);
-- Insert the transaction
INSERT INTO si_transaction (transaction_type, event_id, status_id, timestamp, transaction_xml)
VALUES ( trans_type, event_id, 'CR', SYSDATE, XMLString);
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR (-20040, sqlerrm);
END;
... is causing "ORA-29532 Java call terminated by uncaught Java exception: java.lang.NullPointerException".
Any ideas? (get_XML actually calls dbms_xmlquery.getXML.)

When in doubt, break it into pieces and test the pieces. The following is not valid:
SELECT * FROM si_a31_ZA1_vw WHERE start_date = MAX(start_date)
Try replacing the above with:
SELECT * FROM si_a31_ZA1_vw WHERE start_date = (SELECT MAX(start_date) FROM si_a31_ZA1_vw) -
Need user response(confirmation) before Insert, update or delete
Hi,
I want an alert to be shown to the user before an insert, update or delete; if the user clicks OK, the action is performed on the database, otherwise not. I wrote the following JavaScript in a user interface template and then called this function, but it did not work:
function confirm_Modification_delete()
{
  if (confirm("Are you sure you want to remove this record?")) {
    return true;
  } else {
    alert("You clicked Cancel");
    return false;
  }
}
In the same way I also created insert and update alerts, but when I clicked Cancel they still performed the insert, update or delete. How can I correct these scripts or get this functionality?
Thanks
Muhammad Ejaz
You can add the following JavaScript code under the "onClick" event for the insert/update/delete buttons...
var x = window.confirm('Do you want to proceed?');
if (!x)
  return false;
It works for me.
I hope it helps. -
I need to write a trigger so that if some columns of the master table have changed, the trigger first inserts the change into a copy table. Later on, if there are more changes to the same row of the master, it compares the new value to the old value, deletes the matching row in the copy table, and inserts the latest change. I have written the following trigger; somehow, when I run it, it has a compilation error. Does anyone have any idea how I should approach this?
create or replace trigger TRITON.AFTER_INSERT_UPDATE_ITM001100
after insert or update on TRITON.TTIITM001100
for each row
declare
  v_reflag char(1);
begin
  v_reflag := '0';
  if (:new.T$ITEM != :old.T$ITEM or :new.T$DSCA != :old.T$DSCA) then
    delete from ticcrm.ttiitm001100_copy;
    insert /*+ append */ into ticcrm.ttiitm001100_copy
      (T$ITEM, T$DSCA, T$DSCB, T$DSCC, T$DSCD, T$WGHT, T$SEAK, REFLAG)
    values
      (:new.T$ITEM, :new.T$DSCA, :new.T$DSCB, :new.T$DSCC, :new.T$DSCD, :new.T$WGHT, :new.T$SEAK, v_reflag);
  end if;
end;
/
delete from ticcrm.ttiitm001100_copy;
insert /*+ append */ into ticcrm.ttiitm001100_copy
  (T$ITEM, T$DSCA, T$DSCB, T$DSCC, T$DSCD, T$WGHT, T$SEAK, REFLAG)
values
  (:new.T$ITEM, :new.T$DSCA, :new.T$DSCB, :new.T$DSCC, :new.T$DSCD, :new.T$WGHT, :new.T$SEAK, v_reflag);
These are plain SQL statements, not PL/SQL.
Use EXECUTE IMMEDIATE to execute them.
P.S. Please don't forget to format your code properly before posting it. -
No database update after insert / update
Dear all..
I have created a small report for storing additional invoice texts in a Z-table. These invoice texts can be up to 2560 characters long and are needed for EDI transmission.
Directly after saving, I can view them. After some hours I always get empty fields when I try to view the data. ... to me it looks like the table entries are buffered but not really stored in the database. Would a COMMIT WORK solve this problem?
Thanks in advance
Jürgen

hi,
you say :
Directly after saving - I can view them.
how? with se16 or program?
-> try to work with offset
A. -
Using Database Change Notification instead of After Insert Trigger
Hello guys! I have an after-insert trigger that calls a procedure, which in turn does an update or insert on another table. Due to mutating-table errors I declared the trigger and the procedure as autonomous transactions. The problem is that old values from my main table are inserted into the subtable, since the after insert/update trigger fires before the commit.
My question is: how can I solve that, and how could I use the change-notification package to call my procedure? I know that this notification is only started after a DML/DDL action has been committed on a table.
If you could show me how to carry out the following code with a Database Change Notification I'd be delighted. Furthermore, I need to know whether it suffices to set up this notification only once, or for each client separately?
Many thanks for your help and expertise!
Regards,
Sebastian
declare
cnumber number (6);
begin
select count(*) into cnumber from (
select case when (select date_datum
from
(select f.date_datum,
row_number() over (order by f.objectid desc) rn
from borki.fangzahlen f
where lng_falle = :new.lng_falle
and int_fallennummer = :new.int_fallennummer
and lng_schaedling = :new.lng_schaedling
and date_datum > '31.03.2010'
where rn=1) < (select date_datum
from
(select f.date_datum,
row_number() over (order by f.objectid desc) rn
from borki.fangzahlen f
where lng_falle = :new.lng_falle
and int_fallennummer = :new.int_fallennummer
and lng_schaedling = :new.lng_schaedling
and date_datum > '31.03.2010'
where rn=2) then 1 end as action from borki.fangzahlen
where lng_falle = :new.lng_falle
and int_fallennummer = :new.int_fallennummer
and lng_schaedling = :new.lng_schaedling
and date_datum > '31.03.2010') where action = 1;
if cnumber != 0 then
delete from borki.tbl_test where lng_falle = :new.lng_falle
and int_fallennummer = :new.int_fallennummer
and lng_schaedling = :new.lng_schaedling
and date_datum > '31.03.2010';
commit;
pr_fangzahlen_tw_sync_sk(:new.lng_falle, :new.int_fallennummer, :new.lng_schaedling);

It looks like you have an error in line 37 of your code. Once you fix that, the problem should be resolved.
-
Updating row after insert that causes trigger
Hi,
I have a table which one of the columns is a sequence number which each time a new row is written I want to automatically write to this sequence column the latest number (which is maintained using an Oracle sequence). I decided to use a trigger which works great:
CREATE OR REPLACE TRIGGER pre_insert_trg before insert ON INCOMING
for each row
BEGIN
SELECT INCOMING_SEQ.NEXTVAL
INTO :new.SEQ
FROM DUAL;
END;
The problem is that I have many multi-threaded processes writing to this table, and the sequence number gets allocated before the row is committed. This means that although thread 1 may write first, it may commit after thread 2, which has a later sequence number, with the result that sequence number 2 may be shown before 1. The problem is much larger when dealing with many threads and hundreds of messages a second, so I would prefer the seq to be added AFTER the insert (once committed), e.g.
CREATE OR REPLACE TRIGGER post_insert_trg after insert ON INCOMING
for each row
BEGIN
SELECT INCOMING_SEQ.NEXTVAL
INTO :new.SEQ
FROM DUAL;
END;
Trying to create this trigger results in "ORA-04084: cannot change NEW values for this trigger type". Is there a way to update a column for the row which caused the trigger? If I try to remove the :new, I then get mutating-table errors when inserting (I understand the DML locking, but need a way around it).
Many thanks,
John

Off hand I think the only way you can do what you say you want to do is to replace the sequence with a code table. A transaction locks the code table, gets the next value from it, and unlocks the code table at the end of the transaction. This ensures that sequence numbers are always assigned in transaction-completion order (or rather, the transactions always complete in sequence-assigned order). It is basically what Java does with its synchronized statement.
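A sketch of that code-table idea (the seq_ctl table and all names here are invented; the SELECT ... FOR UPDATE holds a row lock until commit, so numbers are issued in commit order at the price of serializing all writers):

```sql
-- One-time setup: a hypothetical single-row control table
CREATE TABLE seq_ctl (next_val NUMBER);
INSERT INTO seq_ctl VALUES (1);

-- In each writing transaction, instead of INCOMING_SEQ.NEXTVAL:
DECLARE
  v_seq NUMBER;
BEGIN
  -- Blocks here until any other writer commits or rolls back
  SELECT next_val INTO v_seq FROM seq_ctl FOR UPDATE;
  UPDATE seq_ctl SET next_val = next_val + 1;

  INSERT INTO incoming (seq /* , other columns */) VALUES (v_seq);
  COMMIT;  -- releases the row lock; the next writer proceeds
END;
/
```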
The only drawback with this approach is that it completely shags the performance of the sort of multi-threaded environment you've just described. :P
So you need to decide what's more important: having stuff inserted into a queue in sequence order, or having a multi-threaded system running quickly. I really don't see how you can achieve what you're attempting in a genuinely concurrent environment; you'll have to introduce an artificial serialism (i.e. a bottleneck) somewhere.
If you do crack this please tell me how: I would be interested in ordering multiple AQs in the fashion you described.
Cheers, APC -
Extremely slow inserts/updates reported after HW upgrade...
Hi guys,
I'll try to be as descriptive as I can. It's this project in which we have to move circa 6 mission-critical (24x7) and mostly OLTP databases from MS Windows 2003 (DB on local disks) to HP-UX IA (CA metrocluster, HP XP 12000 disk array); all Oracle 10gR2 (10.2.0.4). And everything was perfect until we moved this XYZ database...
Almost immediately users reported "considerable" performance degradation. According to the 3rd-party application log, they get almost 40 secs instead of the previously recorded 10.
We, and I mean both the Oracle and HP specialists, haven't noticed/recorded any significant peaks/bottlenecks (RAM, CPU, disk I/O).
Feel free to check 3 AWR reports and the init.ora at [http://www.mediafire.com/?sharekey=0269c9bc606747b47f7ec40ada4772a6e04e75f6e8ebb871]
1_awrrpt_standard.txt - standard workload during 8 hours (peak hours are 8-12 AM)
2_awrrpt_2hrs_ca.txt - standard workload during 2 peak hours (8-10)
3_awrrpt_2hrs_noca.txt - standard workload during 2 peak hours (10-12) with CA disk mirroring disabled
Of course, I've checked the ADDM reports - and first, I'd like to ask why ADDM keeps on reporting the following (on all database instances on this
cluster node):
FINDING 1: 100% impact (310 seconds)
Significant virtual memory paging was detected on the host operating system.
RECOMMENDATION 1: Host Configuration, 100% benefit (310 seconds)
Is it just some kind of false alarm (like we used to get on MS Windows)? Both nodes are running with 32 GB of RAM,
with roughly more than 10 GB constantly free.
Second, as ADDM reported:
FINDING 2: 44% impact (135 seconds)
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
were consuming significant database time.
we've tried splitting the CA disk mirroring, using RAID 10 for the redo log file disks, etc. No substantial performance gain was reported by users (though I've noticed some in the AWR reports).
Despite the confusing feedback from app users, I'm nearly sure that our bottleneck is the redo log file disks. Why? Previously (old HW) we had 1-3 ms avg wait on log file sync and log file parallel write, and now (new HW; we've tested both RAID 5 and RAID 10) it's 8 ms or even more. We were able to get 2 ms only with CA switched off (HP Metrocluster
disk array mirroring).
And that brings up two new questions:
1. Does redo log group mirroring (2 members on 2 separate disks vs. 1 member on 1 disk) have any significant impact on the abovementioned wait events? I mean, what performance gain could I expect if I drop all "secondary" redo log members?
2. Why do we get almost identical response times when we run bulk insert/update tests (say 1,000,000 rows) against the old and the new DB/HW?
Thanks in advance,
tOPsEEK
Edited by: smutny on 1.11.2009 17:39
Edited by: smutny on 1.11.2009 17:46

Hi again,
so here's the actual AWR report (a 1-minute window while the most problematic operation took place). I think it's becoming clear that we have to deal with slow redo log writes...
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
... 294736156 ... 1 10.2.0.4.0 NO ...
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 1254 02-Nov-09 10:07:45 91 46.4
End Snap: 1255 02-Nov-09 10:08:47 91 46.4
Elapsed: 1.04 (mins)
DB Time: 0.51 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 576M 576M Std Block Size: 8K
Shared Pool Size: 912M 912M Log Buffer: 14,372K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 163,781.94 4,575.45
Logical reads: 1,677.15 46.85
Block changes: 1,276.32 35.66
Physical reads: 1.99 0.06
Physical writes: 1.16 0.03
User calls: 426.41 11.91
Parses: 20.74 0.58
Hard parses: 0.19 0.01
Sorts: 2.38 0.07
Logons: 0.00 0.00
Executes: 386.76 10.80
Transactions: 35.80
% Blocks changed per Read: 76.10 Recursive Call %: 31.51
Rollback per transaction %: 0.00 Rows per Sort: 369.98
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.88 In-memory Sort %: 100.00
Library Hit %: 99.90 Soft Parse %: 99.07
Execute to Parse %: 94.64 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 112.50 % Non-Parse CPU: 99.11
Shared Pool Statistics Begin End
Memory Usage %: 88.90 88.87
% SQL with executions>1: 98.74 99.39
% Memory for SQL w/exec>1: 95.35 97.75
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
log file parallel write 2,228 21 10 69.4 System I/O
log file sync 2,220 21 10 69.2 Commit
CPU time 20 65.5
SQL*Net break/reset to client 2,106 1 1 3.4 Applicatio
db file sequential read 131 0 4 1.5 User I/O
Time Model Statistics DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> Total time in database user-calls (DB Time): 30.9s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
DB CPU 20.2 65.5
sql execute elapsed time 9.1 29.4
PL/SQL execution elapsed time 0.3 1.0
parse time elapsed 0.2 .5
hard parse elapsed time 0.1 .5
repeated bind elapsed time 0.0 .0
DB time 30.9 N/A
background elapsed time 22.2 N/A
background cpu time 0.4 N/A
Wait Class DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
System I/O 3,213 .0 22 7 1.4
Commit 2,220 .0 21 10 1.0
Application 2,106 .0 1 1 0.9
User I/O 134 .0 0 4 0.1
Network 29,919 .0 0 0 13.4
Wait Events DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 2,228 .0 21 10 1.0
log file sync 2,220 .0 21 10 1.0
SQL*Net break/reset to clien 2,106 .0 1 1 0.9
db file sequential read 131 .0 0 4 0.1
control file parallel write 44 .0 0 9 0.0
db file parallel write 44 .0 0 4 0.0
SQL*Net more data to client 3,397 .0 0 0 1.5
SQL*Net message to client 26,509 .0 0 0 11.9
control file sequential read 897 .0 0 0 0.4
SQL*Net more data from clien 13 .0 0 0 0.0
direct path write 3 .0 0 0 0.0
SQL*Net message from client 26,510 .0 1,493 56 11.9
Streams AQ: qmn slave idle w 2 .0 55 27412 0.0
Streams AQ: qmn coordinator 4 50.0 55 13706 0.0
PL/SQL lock timer 10 100.0 49 4897 0.0
Background Wait Events DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 2,228 .0 21 10 1.0
control file parallel write 44 .0 0 9 0.0
db file parallel write 44 .0 0 4 0.0
control file sequential read 71 .0 0 0 0.0
rdbms ipc message 2,412 7.7 525 218 1.1
pmon timer 20 100.0 59 2929 0.0
Streams AQ: qmn slave idle w 2 .0 55 27412 0.0
Streams AQ: qmn coordinator 4 50.0 55 13706 0.0
Operating System Statistics DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
Statistic Total
AVG_BUSY_TIME 847
AVG_IDLE_TIME 5,362
AVG_IOWAIT_TIME 2,692
AVG_SYS_TIME 295
AVG_USER_TIME 549
BUSY_TIME 3,396
IDLE_TIME 21,457
IOWAIT_TIME 10,776
SYS_TIME 1,190
USER_TIME 2,206
LOAD 0
OS_CPU_WAIT_TIME 192,401,000
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 40,960
VM_OUT_BYTES 335,872
PHYSICAL_MEMORY_BYTES 34,328,276,992
NUM_CPUS 4
NUM_CPU_SOCKETS 4
Instance Activity Stats DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
Statistic Total per Second per Trans
CPU used by this session 984 15.8 0.4
CPU used when call started 344 5.5 0.2
CR blocks created 4 0.1 0.0
Cached Commit SCN referenced 208 3.3 0.1
Commit SCN cached 0 0.0 0.0
DB time 2,589 41.6 1.2
DBWR checkpoint buffers written 69 1.1 0.0
DBWR checkpoints 0 0.0 0.0
DBWR object drop buffers written 0 0.0 0.0
DBWR tablespace checkpoint buffe 0 0.0 0.0
DBWR thread checkpoint buffers w 0 0.0 0.0
DBWR transaction table writes 8 0.1 0.0
DBWR undo block writes 15 0.2 0.0
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 1,156 18.6 0.5
IMU Redo allocation size 996,048 16,017.2 447.5
IMU commits 2,100 33.8 0.9
IMU contention 0 0.0 0.0
IMU ktichg flush 0 0.0 0.0
IMU pool not allocated 0 0.0 0.0
IMU recursive-transaction flush 0 0.0 0.0
IMU undo allocation size 22,402,560 360,250.9 10,064.0
IMU- failed to get a private str 0 0.0 0.0
Misses for writing mapping 0 0.0 0.0
SQL*Net roundtrips to/from clien 26,480 425.8 11.9
active txn count during cleanout 34 0.6 0.0
application wait time 106 1.7 0.1
background checkpoints completed 0 0.0 0.0
background checkpoints started 0 0.0 0.0
background timeouts 199 3.2 0.1
branch node splits 0 0.0 0.0
buffer is not pinned count 13,919 223.8 6.3
buffer is pinned count 19,483 313.3 8.8
bytes received via SQL*Net from 884,016 14,215.7 397.1
bytes sent via SQL*Net to client 9,602,642 154,418.1 4,313.9
calls to get snapshot scn: kcmgs 13,641 219.4 6.1
calls to kcmgas 3,029 48.7 1.4
calls to kcmgcs 56 0.9 0.0
change write time 8 0.1 0.0
cleanout - number of ktugct call 42 0.7 0.0
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 0 0.0 0.0
cluster key scan block gets 1,100 17.7 0.5
cluster key scans 1,077 17.3 0.5
commit batch/immediate performed 1 0.0 0.0
commit batch/immediate requested 1 0.0 0.0
commit cleanout failures: block 0 0.0 0.0
commit cleanout failures: buffer 0 0.0 0.0
commit cleanout failures: callba 4 0.1 0.0
commit cleanout failures: cannot 0 0.0 0.0
commit cleanouts 9,539 153.4 4.3
commit cleanouts successfully co 9,535 153.3 4.3
commit immediate performed 1 0.0 0.0
commit immediate requested 1 0.0 0.0
commit txn count during cleanout 26 0.4 0.0
concurrency wait time 0 0.0 0.0
consistent changes 264 4.3 0.1
consistent gets 48,659 782.5 21.9
consistent gets - examination 26,952 433.4 12.1
consistent gets direct 2 0.0 0.0
consistent gets from cache 48,657 782.4 21.9
cursor authentications 2 0.0 0.0
data blocks consistent reads - u 4 0.1 0.0
db block changes 79,369 1,276.3 35.7
db block gets 55,636 894.7 25.0
db block gets direct 3 0.1 0.0
db block gets from cache 55,633 894.6 25.0
deferred (CURRENT) block cleanou 4,768 76.7 2.1
dirty buffers inspected 0 0.0 0.0
enqueue conversions 15 0.2 0.0
enqueue releases 9,967 160.3 4.5
enqueue requests 9,967 160.3 4.5
enqueue timeouts 0 0.0 0.0
enqueue waits 0 0.0 0.0
execute count 24,051 386.8 10.8
failed probes on index block rec 0 0.0 0.0
frame signature mismatch 0 0.0 0.0
free buffer inspected 680 10.9 0.3
free buffer requested 1,297 20.9 0.6
heap block compress 11 0.2 0.0
hot buffers moved to head of LRU 1,797 28.9 0.8
immediate (CR) block cleanout ap 0 0.0 0.0
immediate (CURRENT) block cleano 2,274 36.6 1.0
index crx upgrade (positioned) 47 0.8 0.0
index fast full scans (full) 0 0.0 0.0
index fetch by key 10,326 166.1 4.6
index scans kdiixs1 6,071 97.6 2.7
leaf node 90-10 splits 14 0.2 0.0
leaf node splits 18 0.3 0.0
lob reads 0 0.0 0.0
lob writes 198 3.2 0.1
lob writes unaligned 176 2.8 0.1
logons cumulative 0 0.0 0.0
messages received 2,272 36.5 1.0
messages sent 2,272 36.5 1.0
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 21,083 339.0 9.5
opened cursors cumulative 1,290 20.7 0.6
parse count (failures) 0 0.0 0.0
parse count (hard) 12 0.2 0.0
parse count (total) 1,290 20.7 0.6
parse time cpu 18 0.3 0.0
parse time elapsed 16 0.3 0.0
physical read IO requests 124 2.0 0.1
physical read bytes 1,015,808 16,335.0 456.3
physical read total IO requests 1,030 16.6 0.5
physical read total bytes 15,785,984 253,851.1 7,091.6
physical read total multi block 0 0.0 0.0
physical reads 124 2.0 0.1
physical reads cache 122 2.0 0.1
physical reads cache prefetch 0 0.0 0.0
physical reads direct 2 0.0 0.0
physical reads direct (lob) 0 0.0 0.0
physical reads direct temporary 0 0.0 0.0
physical reads prefetch warmup 0 0.0 0.0
physical write IO requests 47 0.8 0.0
physical write bytes 589,824 9,484.8 265.0
physical write total IO requests 4,591 73.8 2.1
physical write total bytes 25,374,720 408,045.5 11,399.3
physical write total multi block 4,461 71.7 2.0
physical writes 72 1.2 0.0
physical writes direct 3 0.1 0.0
physical writes direct (lob) 3 0.1 0.0
physical writes direct temporary 0 0.0 0.0
physical writes from cache 69 1.1 0.0
physical writes non checkpoint 18 0.3 0.0
pinned buffers inspected 1 0.0 0.0
prefetch warmup blocks aged out 0 0.0 0.0
prefetched blocks aged out befor 0 0.0 0.0
process last non-idle time 0 0.0 0.0
recursive calls 12,197 196.1 5.5
recursive cpu usage 747 12.0 0.3
redo blocks written 11,398 183.3 5.1
redo buffer allocation retries 0 0.0 0.0
redo entries 6,920 111.3 3.1
redo log space requests 0 0.0 0.0
redo log space wait time 0 0.0 0.0
redo ordering marks 96 1.5 0.0
redo size 10,184,944 163,781.9 4,575.5
redo subscn max counts 811 13.0 0.4
redo synch time 2,190 35.2 1.0
redo synch writes 2,220 35.7 1.0
redo wastage 1,377,920 22,158.0 619.0
redo write time 2,192 35.3 1.0
redo writer latching time 0 0.0 0.0
redo writes 2,228 35.8 1.0
rollback changes - undo records 24 0.4 0.0
rollbacks only - consistent read 4 0.1 0.0
rows fetched via callback 1,648 26.5 0.7
session connect time 0 0.0 0.0
session cursor cache hits 1,242 20.0 0.6
session logical reads 104,295 1,677.2 46.9
session pga memory 2,555,904 41,101.0 1,148.2
session pga memory max 0 0.0 0.0
session uga memory 123,488 1,985.8 55.5
session uga memory max 0 0.0 0.0
shared hash latch upgrades - no 66 1.1 0.0
sorts (disk) 0 0.0 0.0
sorts (memory) 148 2.4 0.1
sorts (rows) 54,757 880.5 24.6
sql area evicted 86 1.4 0.0
sql area purged 0 0.0 0.0
summed dirty queue length 0 0.0 0.0
switch current to new buffer 596 9.6 0.3
table fetch by rowid 9,173 147.5 4.1
table fetch continued row 0 0.0 0.0
table scan blocks gotten 982 15.8 0.4
table scan rows gotten 154,079 2,477.7 69.2
table scans (cache partitions) 0 0.0 0.0
table scans (long tables) 0 0.0 0.0
table scans (short tables) 59 1.0 0.0
total number of times SMON poste 0 0.0 0.0
transaction rollbacks 1 0.0 0.0
undo change vector size 3,990,136 64,164.5 1,792.5
user I/O wait time 49 0.8 0.0
user calls 26,517 426.4 11.9
user commits 2,226 35.8 1.0
user rollbacks 0 0.0 0.0
workarea executions - onepass 0 0.0 0.0
workarea executions - optimal 204 3.3 0.1
write clones created in backgrou 0 0.0 0.0
write clones created in foregrou 0 0.0 0.0
-------------------------------------------------------------
... and what's even more interesting is the report I've got using Tanel Poder's great session snapper script. Take a look at these numbers (excerpt):
SID USERNAME TYPE STATISTIC DELTA HDELTA/SEC %TIME
668 ROVE_EDA WAIT log file sync 2253426 450.69ms 45.1%
668 ROVE_EDA WAIT log file sync 2140618 428.12ms 42.8%
668 ROVE_EDA WAIT log file sync 2088327 417.67ms 41.8%
668 ROVE_EDA WAIT log file sync 2184408 364.07ms 36.4%
668 ROVE_EDA WAIT log file sync 2117470 352.91ms 35.3%
668 ROVE_EDA WAIT log file sync 2051280 341.88ms 34.2%
668 ROVE_EDA WAIT log file sync 1595019 265.84ms 26.6%
668 ROVE_EDA WAIT log file sync 612034 122.41ms 12.2%
668 ROVE_EDA WAIT log file sync 2162980 432.6ms 43.3%
668 ROVE_EDA WAIT log file sync 2071811 345.3ms 34.5%
668 ROVE_EDA WAIT log file sync 2004571 334.1ms 33.4%
668 ROVE_EDA WAIT db file sequential read 28401 5.68ms .6%
668 ROVE_EDA WAIT db file sequential read 29028 4.84ms .5%
668 ROVE_EDA WAIT db file sequential read 24846 4.14ms .4%
668 ROVE_EDA WAIT db file sequential read 24323 4.05ms .4%
668 ROVE_EDA WAIT db file sequential read 17026 3.41ms .3%
668 ROVE_EDA WAIT db file sequential read 6736 1.35ms .1%
668 ROVE_EDA WAIT db file sequential read 33028 5.5ms .6%
764 (LGWR) WAIT log file parallel write 2236748 447.35ms 44.7%
764 (LGWR) WAIT log file parallel write 2150825 430.17ms 43.0%
764 (LGWR) WAIT log file parallel write 2139532 427.91ms 42.8%
764 (LGWR) WAIT log file parallel write 2119086 423.82ms 42.4%
764 (LGWR) WAIT log file parallel write 2134938 355.82ms 35.6%
764 (LGWR) WAIT log file parallel write 2083649 347.27ms 34.7%
764 (LGWR) WAIT log file parallel write 2034998 339.17ms 33.9%
764 (LGWR) WAIT log file parallel write 1996050 332.68ms 33.3%
764 (LGWR) WAIT log file parallel write 1797057 299.51ms 30.0%
764 (LGWR) WAIT log file parallel write 555403 111.08ms 11.1%
764 (LGWR) WAIT log file parallel write 277875 46.31ms 4.6%
764 (LGWR) WAIT log file parallel write 2067591 344.6ms 34.5%
Where SID=668 is the session we've been looking for... OK, we've got to get back to monitoring the disk array and the corresponding network components.
tOPsEEK -
SQL merge and after insert or update on ... for each row fires too often?
Hello,
there is a base table, which has a companion history table
- lets say USER_DATA & USER_DATA_HIST.
For each update on USER_DATA, the old state of the USER_DATA record has to be recorded in USER_DATA_HIST (by inserting a new record)
- to keep the history of changes to USER_DATA.
The first approach was to do the insert from a row trigger:
trigger user_data_tr_aiu after insert or update on user_data for each row
But the performance was bad, because a bulk update to USER_DATA resulted in individual inserts per record.
So i tried a trick:
Instead of doing the real insert into USER_DATA_HIST, I first collect the USER_DATA_HIST data into a PL/SQL collection.
Later I do a bulk insert of the collection into the USER_DATA_HIST table from a statement trigger:
trigger user_data_tr_ra after insert or update on user_data
But sometimes I notice that the list of entries saved in the PL/SQL collection contains more entries than the number of USER_DATA records being updated.
(BTW, for the update I use SQL MERGE, because it's driven by another table.)
As there is a unique tracking_id in each USER_DATA record, I could see that there are duplicates.
If I sort by tracking_id and remove the duplicates, I get exactly the number of records updated by the SQL MERGE.
So how come there are duplicates?
I can try to make a sample sqlplus program, but it will take some time.
But maybe somebody already knows about some issues here (?!)
- many thanks!
best regards,
Frank
Hello
Not sure really. Although it shouldn't take long to do a test case - it only took me 10 mins....
SQL>
SQL> create table USER_DATA
2 ( id number,
3 col1 varchar2(100)
4 )
5 /
Table created.
SQL>
SQL> CREATE TABLE USER_DATA_HIST
2 ( id number,
3 col1 varchar2(100),
4 tmsp timestamp
5 )
6 /
Table created.
SQL>
SQL> CREATE OR REPLACE PACKAGE pkg_audit_user_data
2 IS
3
4 PROCEDURE p_Init;
5
6 PROCEDURE p_Log
7 ( air_UserData IN user_data%ROWTYPE
8 );
9
10 PROCEDURE p_Write;
11 END;
12 /
Package created.
SQL> CREATE OR REPLACE PACKAGE BODY pkg_audit_user_data
2 IS
3
4 TYPE tt_UserData IS TABLE OF user_data_hist%ROWTYPE INDEX BY BINARY_INTEGER;
5
6 pt_UserData tt_UserData;
7
8 PROCEDURE p_Init
9 IS
10
11 BEGIN
12
13
14 IF pt_UserData.COUNT > 0 THEN
15
16 pt_UserData.DELETE;
17
18 END IF;
19
20 END;
21
22 PROCEDURE p_Log
23 ( air_UserData IN user_data%ROWTYPE
24 )
25 IS
26 ln_Idx BINARY_INTEGER;
27
28 BEGIN
29
30 ln_Idx := pt_UserData.COUNT + 1;
31
32 pt_UserData(ln_Idx).id := air_UserData.id;
33 pt_UserData(ln_Idx).col1 := air_UserData.col1;
34 pt_UserData(ln_Idx).tmsp := SYSTIMESTAMP;
35
36 END;
37
38 PROCEDURE p_Write
39 IS
40
41 BEGIN
42
43 FORALL li_Idx IN INDICES OF pt_UserData
44 INSERT
45 INTO
46 user_data_hist
47 VALUES
48 pt_UserData(li_Idx);
49
50 END;
51 END;
52 /
Package body created.
SQL>
SQL> CREATE OR REPLACE TRIGGER preu_s_user_data BEFORE UPDATE ON user_data
2 DECLARE
3
4 BEGIN
5
6 pkg_audit_user_data.p_Init;
7
8 END;
9 /
Trigger created.
SQL> CREATE OR REPLACE TRIGGER preu_r_user_data BEFORE UPDATE ON user_data
2 FOR EACH ROW
3 DECLARE
4
5 lc_Row user_data%ROWTYPE;
6
7 BEGIN
8
9 lc_Row.id := :NEW.id;
10 lc_Row.col1 := :NEW.col1;
11
12 pkg_audit_user_data.p_Log
13 ( lc_Row
14 );
15
16 END;
17 /
Trigger created.
SQL> CREATE OR REPLACE TRIGGER postu_s_user_data AFTER UPDATE ON user_data
2 DECLARE
3
4 BEGIN
5
6 pkg_audit_user_data.p_Write;
7
8 END;
9 /
Trigger created.
SQL>
SQL>
SQL> insert
2 into
3 user_data
4 select
5 rownum,
6 dbms_random.string('u',20)
7 from
8 dual
9 connect by
10 level <=10
11 /
10 rows created.
SQL> select * from user_data
2 /
ID COL1
1 GVZHKXSSJZHUSLLIDQTO
2 QVNXLTGJXFUDUHGYKANI
3 GTVHDCJAXLJFVTFSPFQI
4 CNVEGOTDLZQJJPVUXWYJ
5 FPOTZAWKMWHNOJMMIOKP
6 BZKHAFATQDBUVFBCOSPT
7 LAQAIDVREFJZWIQFUPMP
8 DXFICIPCBCFTPAPKDGZF
9 KKSMMRAQUORRPUBNJFCK
10 GBLTFZJAOPKFZFCQPGYW
10 rows selected.
SQL> select * from user_data_hist
2 /
no rows selected
SQL>
SQL> MERGE
2 INTO
3 user_data a
4 USING
5 ( SELECT
6 rownum + 8 id,
7 dbms_random.string('u',20) col1
8 FROM
9 dual
10 CONNECT BY
11 level <= 10
12 ) b
13 ON (a.id = b.id)
14 WHEN MATCHED THEN
15 UPDATE SET a.col1 = b.col1
16 WHEN NOT MATCHED THEN
17 INSERT(a.id,a.col1)
18 VALUES (b.id,b.col1)
19 /
10 rows merged.
SQL> select * from user_data_hist
2 /
ID COL1 TMSP
9 XGURXHHZGSUKILYQKBNB 05-AUG-11 10.04.15.577989
10 HLVUTUIFBAKGMXBDJTSL 05-AUG-11 10.04.15.578090
SQL> select * from v$version
2 /
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
HTH
David -
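A side note on David's test case: on Oracle 11g and later (his box is 10.2, which predates the feature), the p_Init / p_Log / p_Write package plus the three simple triggers can be collapsed into a single compound trigger, whose collection state is reset automatically for each triggering statement. A sketch against the same two tables - the trigger name is my own choice:

```sql
-- Sketch only, assuming Oracle 11g+. One compound trigger replaces the
-- package and the three simple triggers; gt_hist is fresh per statement,
-- so no separate init step is needed.
CREATE OR REPLACE TRIGGER compu_user_data
FOR UPDATE ON user_data
COMPOUND TRIGGER

  TYPE tt_hist IS TABLE OF user_data_hist%ROWTYPE INDEX BY BINARY_INTEGER;
  gt_hist tt_hist;

  BEFORE EACH ROW IS
    l_idx BINARY_INTEGER;
  BEGIN
    -- collect the row being updated, as p_Log did
    l_idx := gt_hist.COUNT + 1;
    gt_hist(l_idx).id   := :NEW.id;
    gt_hist(l_idx).col1 := :NEW.col1;
    gt_hist(l_idx).tmsp := SYSTIMESTAMP;
  END BEFORE EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- bulk insert once per statement, as p_Write did
    FORALL li_idx IN 1 .. gt_hist.COUNT
      INSERT INTO user_data_hist VALUES gt_hist(li_idx);
  END AFTER STATEMENT;

END compu_user_data;
/
```

The main win over the package approach is that there is no state to forget to reset between statements.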
Capturing value in after insert or update row level trigger
Hi Experts,
I'm working on EBS 11.5.10 and database 11g. I have trigger-A on the wip_discrete_jobs table and trigger-B on the wip_requirement_operations table. Whenever I create a discrete job, it inserts a record into both wip_discrete_jobs and wip_requirement_operations.
Note: the two tables have a master-child relation.
Trigger-A: After Insert, Row Level trigger on wip_discrete_jobs
Trigger-B: After Insert, Row Level trigger on wip_requirement_operations
In Trigger A:
I'm capturing wip_entity_id and holding it in a global variable:
package.variable := :new.wip_entity_id
In Trigger B:
I'm using the above global variable.
Issue: Let's say I have created a discrete job whose wip_entity_id is 27, but the global variable is holding the previous wip_entity_id (26), not the current value. It looks like trigger B is already in process before trigger A's event completes; I think this could be why the current wip_entity_id is not stored in the global variable.
I need your help to get the current value into the global variable so that I can use it in trigger B.
Awaiting response at the earliest.
Thanks
My head hurts just thinking about how this is being implemented.
What's stopping you from creating a nice and simple procedure to perform all this magic?
Continue with the global/trigger magic at your own peril, as you can hopefully already see ... nothing good will come from it.
Cheers, -
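For anyone wondering what the suggested "nice and simple procedure" might look like, here is a hypothetical sketch - every name in it (tables, columns, sequence, procedure) is a simplified stand-in, not the real EBS schema or API. The point is the RETURNING clause: the parent key flows explicitly into the child insert, so there is no trigger ordering to worry about and no package global:

```sql
-- Hypothetical sketch with stand-in tables. The RETURNING clause hands the
-- generated parent key straight to the child insert in the same procedure.
CREATE OR REPLACE PROCEDURE create_job_with_requirement
( p_job_name IN VARCHAR2,
  p_item     IN VARCHAR2
) IS
  l_wip_entity_id NUMBER;
BEGIN
  INSERT INTO discrete_jobs (wip_entity_id, job_name)
  VALUES (job_seq.NEXTVAL, p_job_name)
  RETURNING wip_entity_id INTO l_wip_entity_id;

  INSERT INTO requirement_operations (wip_entity_id, item)
  VALUES (l_wip_entity_id, p_item);
END;
/
```

And if the child table already carries the parent key as a foreign-key column - which the master-child relation suggests - trigger B may not need the global at all: it could read :NEW.wip_entity_id from its own row.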
2-finger scroll doesn't work in Finder after the Mavericks update. MacBook Pro seems slow and less responsive after the update.
nateshns wrote:
2 finger scroll doesnt work in Finder after Mavericks update.
In any particular app, or all of them?