Insert/update logic
Hi,
I am unable to create the insert/update logic for the scenario below.
Below is the target table structure and the records that exist at target level:
Create Table Target (
  SEQ_GUID   Varchar2(20),
  Actno      Number,
  Seq_Num    Number,
  Boxtype    Varchar2(20),
  REQAddress Varchar2(20),
  Boxnum     Number,
  Boxprop    Varchar2(10),
  Pno        Number,
  Con_Switch Number);
Insert Into Target Values('XYZ1',732,10,'CUBE','06:20:33:12',1900,'SWITCH',760,1);
Insert Into Target Values('XYZ2',732,10,'SQUARE','90:12:43:23',1200,'PORT',760,1);
Insert Into Target Values('XYZ3',732,10,'PYRAMID','12:23:43:12',908,'PLUG',760,1);
Insert Into Target Values('XYZ4',732,10,'RECTANGLE','12:32:76:89',1920,'PORT',760,1);
Insert Into Target Values('XYZ5',732,11,'HOST','432:123:12:65',8076,'SWITCH',709,1);
Insert Into Target Values('XYZ6',732,12,'SQUARE','12:23:41',2212,'PLUG',909,6);
Insert Into Target Values('XYZ7',732,12,'CUBE','100:234:23',121,'PORT',909,6);
COMMIT;
Below is the source table, which has the records at source level:
Create Table SOURCE (
  Actno      Number,
  Seq_Num    Number,
  Boxtype    Varchar2(20),
  REQAddress Varchar2(20),
  Boxnum     Number,
  Boxprop    Varchar2(10),
  Pno        Number,
  Con_Switch Number);
Insert Into SOURCE Values(732,10,'CUBE','06:20:33:12',1900,'SWITCH',760,1);
Insert Into SOURCE Values(732,10,'SQUARE','909:125:43:23',2350,'PORT',760,1);
Insert Into SOURCE Values(732,10,'SQUARE','1223:43:12',1212,'PLUG',760,1);
Insert Into SOURCE Values(732,10,'PYRAMID','1232:76:89',100,'PORT',760,1);
Insert Into SOURCE Values(732,11,'HOST','432:123:12:65',8076,'SWITCH',709,1);
Insert Into SOURCE Values(732,12,'Square_2','1253:53:00',2212,'Plug_1',909,6);
Insert Into SOURCE Values(732,12,'CUBE_2','120:90:087',211,'PORT_1',909,6);
I need to update the target table based on the columns Actno, Seq_Num, Pno and Con_Switch.
The output should be in the format below; after the load, the TARGET table should hold these records:
Insert Into Target Values('XYZ1',732,10,'CUBE','06:20:33:12',1900,'SWITCH',760,1);
Insert Into Target Values('XYZ2',732,10,'SQUARE','909:125:43:23',2350,'PORT',760,1);
Insert Into Target Values('XYZ3',732,10,'SQUARE','1223:43:12',1212,'PLUG',760,1);
Insert Into Target Values('XYZ4',732,10,'PYRAMID','1232:76:89',100,'PORT',760,1);
Insert Into Target Values('XYZ5',732,11,'HOST','432:123:12:65',8076,'SWITCH',709,1);
Insert Into Target Values('XYZ6',732,12,'Square_2','1253:53:00',2212,'Plug_1',909,6);
Insert Into Target Values('XYZ7',732,12,'CUBE_2','120:90:087',211,'PORT_1',909,6);
Below is the final output; these are the column values after the update:
'XYZ1',732,10,'CUBE','06:20:33:12',1900,'SWITCH',760,1
'XYZ2',732,10,'SQUARE','909:125:43:23',2350,'PORT',760,1
'XYZ3',732,10,'SQUARE','1223:43:12',1212,'PLUG',760,1
'XYZ4',732,10,'PYRAMID','1232:76:89',100,'PORT',760,1
'XYZ5',732,11,'HOST','432:123:12:65',8076,'SWITCH',709,1
'XYZ6',732,12,'Square_2','1253:53:00',2212,'Plug_1',909,6
'XYZ7',732,12,'CUBE_2','120:90:087',211,'PORT_1',909,6
Please help me out in solving this.
Thanks
To clarify: at target level I look up on Actno, Seq_Num, Pno and Con_Switch. If I find a match, I update the columns. The problem is that we have multiple rows with the same key combination, and this is where I am having difficulty updating the target.
Please help me out.
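One hedged way to handle the duplicate key combinations, as a sketch only: it assumes the rows in each (Actno, Seq_Num, Pno, Con_Switch) group can be paired by their relative position, and the ORDER BY columns used here are stand-ins you would replace with a real ordering column. A plain MERGE on these four columns alone would fail with ORA-30926 because the join is not one-to-one.

```sql
-- Sketch: number the rows on both sides within each key group, pair the
-- Nth target row with the Nth source row, then drive the update by ROWID.
MERGE INTO Target t
USING (
  SELECT tg.rid, s.Boxtype, s.REQAddress, s.Boxnum, s.Boxprop
  FROM  (SELECT ROWID AS rid, Actno, Seq_Num, Pno, Con_Switch,
                ROW_NUMBER() OVER (PARTITION BY Actno, Seq_Num, Pno, Con_Switch
                                   ORDER BY SEQ_GUID) AS rn
         FROM Target) tg
  JOIN  (SELECT src.*,
                ROW_NUMBER() OVER (PARTITION BY Actno, Seq_Num, Pno, Con_Switch
                                   ORDER BY ROWID) AS rn   -- stand-in ordering
         FROM SOURCE src) s
    ON  tg.Actno      = s.Actno
    AND tg.Seq_Num    = s.Seq_Num
    AND tg.Pno        = s.Pno
    AND tg.Con_Switch = s.Con_Switch
    AND tg.rn         = s.rn
) m
ON (t.ROWID = m.rid)
WHEN MATCHED THEN UPDATE SET
  t.Boxtype    = m.Boxtype,
  t.REQAddress = m.REQAddress,
  t.Boxnum     = m.Boxnum,
  t.Boxprop    = m.Boxprop;
```

Rows with no matching source row are left untouched; a WHEN NOT MATCHED branch would need new SEQ_GUID values, which the question does not specify.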
Similar Messages
-
Inserts/updates on replicated tables in a Logical Standby Database?
Hello all,
We have a logical standby database on 10.2.0.5. Can you please suggest whether there is a way to do data inserts/updates on replicated tables?
Can this be done with ALTER DATABASE GUARD NONE; or ALTER SESSION DISABLE GUARD; / ALTER SESSION ENABLE GUARD;? Even if it completes successfully, will this have any effect on replication later? -
SQL Server bulk fetch and insert/update?
Hi All,
I am currently working with C and SQL Server 2012. My requirement is to bulk fetch records and insert/update them in another table with some business logic.
How do I do this?
Thanks in Advance.
Regards
Yogesh.B

> Is there a possibility that I can do a bulk fetch and place it in an array, even inside a stored procedure?
You can use temporary tables or table variables, and have them indexed as well.
> After I have processed my records, tell me a way that I will NOT go record by record, even inside a stored procedure?
As I said earlier, you can perform UPDATEs on these temporary tables or table variables and finally INSERT/UPDATE your base table.
> Arrays are used just to minimize the traffic between the server and the program area. They are used for efficient processing.
In your case you would first have to populate the array (using some of your queries from the server), do some updates, and then send the rows back to the server, which means extra network round trips.
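The temp-table suggestion above can be sketched in T-SQL; all table and column names here are hypothetical stand-ins:

```sql
-- Stage the bulk-fetched rows, apply the business logic set-based,
-- then upsert the base table in one statement instead of row by row.
SELECT KeyCol, Val
INTO   #stage
FROM   dbo.SourceTable;

CREATE INDEX ix_stage_key ON #stage (KeyCol);

UPDATE #stage SET Val = Val * 2;   -- placeholder for the business logic

MERGE dbo.BaseTable AS t
USING #stage        AS s ON t.KeyCol = s.KeyCol
WHEN MATCHED     THEN UPDATE SET t.Val = s.Val
WHEN NOT MATCHED THEN INSERT (KeyCol, Val) VALUES (s.KeyCol, s.Val);
```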
So I just gave you some thoughts I feel could be useful for your implementation. As we say, there are many ways, so pick the one that works well for you in the long run with good scalability.
Good luck! Please mark this as answer if it solved your issue. Please vote this as helpful if it helped to solve your issue. -
Oracle RAC - Not getting performance(TPS) as we expect on insert/update
Hi All,
We have a problem while executing insert/update/delete queries on an Oracle RAC system: we are not getting the TPS we expected. The insert/update/delete TPS of Oracle RAC is lower than that of a single Oracle instance.
But while executing select queries, we get almost double the TPS of a single Oracle system.
We have done server-side and client-side load balancing.
Does anyone know how to resolve this strange behaviour? Do we need to change any other settings in ASM or on the Oracle nodes for better insert/update/delete performance?
The following is the Oracle RAC configuration
OS & Hardware: Windows 2008 R2, Core 2 Duo 2.66 GHz, 4 GB RAM
Software: Oracle 11g R2 64-bit, Oracle Clusterware & ASM, Microsoft iSCSI initiator
Storage simulation: Xeon, 4 GB RAM, 240 GB, Windows 2008 R2, Microsoft iSCSI Target
Please help me to solve this. We are almost stuck with this situation.
Thanks
Roy

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ ------------------ ----------------- ----------- -----------
DB time(s): 48.3 0.3 0.26 0.10
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 523,787.9 3,158.4
Logical reads: 6,134.6 37.0
Block changes: 3,247.1 19.6
Physical reads: 3.5 0.0
Physical writes: 50.7 0.3
User calls: 497.6 3.0
Parses: 182.0 1.1
Hard parses: 0.1 0.0
W/A MB processed: 0.1 0.0
Logons: 0.1 0.0
Executes: 184.0 1.1
Rollbacks: 0.0 0.0
Transactions: 165.8
Instance Efficiency Indicators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 93.74 Redo NoWait %: 99.96
Buffer Hit %: 99.99 Optimal W/A Exec %: 100.00
Library Hit %: 100.19 Soft Parse %: 99.96
Execute to Parse %: 1.09 Latch Hit %: 99.63
Parse CPU to Parse Elapsd %: 16.44 % Non-Parse CPU: 84.62
Shared Pool Statistics Begin End
Memory Usage %: 75.89 77.67
% SQL with executions>1: 71.75 69.88
% Memory for SQL w/exec>1: 75.63 71.38 -
How can I perform insert/update/delete in one single mapping?
Hi,
I want to know whether there is any way to build two or three pipelines in one mapping, where the pipelines handle insert/update/delete, or store rejected data, according to a conditional flag.
I tried it in a mapping, but the problem is the target load order: insert, then update, then delete/reject. When a new record comes, control passes through the insert target; but when a record needs to be updated or deleted, control again goes to the insert target, not to the update/delete target.
We have already set all the conditional flags in a filter after the lookup and before the target.
We checked all the possibilities but did not succeed.
The last option is to separate the mappings for insert/update/delete, etc.
Is there any solution for this type of problem?
Please reply if anybody has a solution.
--- Umesh

Hi Umesh,
I understand from your query that you want to load the target with insert, update and delete rows after running the mapping.
If so, you can use one of the Oracle features, Oracle Streams: Change Data Capture.
The URL is:
http://www.oracle.com/technology/products/bi/db/10g/pdf/twp_cdc_cookbook_0206.pdf
If any other help is required, do reply.
Regards
Tarang Jain -
Array Insert/Update/Select in JDBC
I've been using array processing in ODBC, OCI, and Pro*C for years and desperately need to do so in JDBC. I've thus far been unsuccessful and am beginning to doubt that it's supported, which to me is unfathomable. I've likewise found many similar inquiries on the net, none of which were addressed. Many respondents don't understand what array processing is and mistake it for batched SQL statements. They are not the same: batched SQL statements are separate statements which are executed in a single call to the database engine. What I'm talking about is using a prepared statement and binding primitive arrays to the statement; a single statement is executed and the contents of the arrays passed to the database. I conducted tests years ago, and array insert/update is many times faster than batched statements. Does anyone know whether or not JDBC supports array processing? If anyone's interested, I can provide snippets via email which illustrate how this is done in ODBC and other APIs.
Thanks.

You referred me to http://java.sun.com/products/jdbc/download.html. The only reference I found to arrays was SQL arrays, which is not what I'm talking about. See the prior C/ODBC snippet. Do you know how to do this in JDBC?

You are talking about passing an array as a single parameter to and from the underlying database, correct? (And that has nothing to do with batch processing.)
If so, see section 16.4, "Array Object"; looking at that section gives the following reference:
The Array object returned to an application by the ResultSet.getArray and
CallableStatement.getArray methods is a logical pointer to the SQL ARRAY
value in the database; it does not contain the contents of the SQL ARRAY value.
The above has nothing to do with batch processing (although presumably one could use it in a batch process, but then one can use String as well).
Of course, perhaps there is something in there that says it only applies to batch processing. If so, could you please point out the section and quote the text?
Extremely slow inserts/updates reported after HW upgrade...
Hi guys,
I'll try to be as descriptive as I can. In this project we have to move circa 6 mission-critical (24x7), mostly OLTP databases from MS Windows 2003 (DB on local disks) to HP-UX IA (CA Metrocluster, HP XP 12000 disk array), all Oracle 10gR2 10.2.0.4. And everything was perfect until we moved this XYZ database...
Almost immediately users reported "considerable" performance degradation. According to the 3rd-party application log they get almost 40 seconds instead of the previously recorded 10.
We (I mean the Oracle and HP specialists) haven't noticed or recorded any significant peaks/bottlenecks (RAM, CPU, disk I/O).
Feel free to check 3 AWR reports and the init.ora at [http://www.mediafire.com/?sharekey=0269c9bc606747b47f7ec40ada4772a6e04e75f6e8ebb871]
1_awrrpt_standard.txt - standard workload during 8 hours (peak hours are from 8-12 AM)
2_awrrpt_2hrs_ca.txt - standard workload during 2 peak hours (8-10)
3_awrrpt_2hrs_noca.txt - standard workload during 2 peak hours (10-12) with CA disk mirroring disabled
Of course, I've checked the ADDM reports. First, I'd like to ask why ADDM keeps reporting the following (on all database instances on this cluster node):
FINDING 1: 100% impact (310 seconds)
Significant virtual memory paging was detected on the host operating system.
RECOMMENDATION 1: Host Configuration, 100% benefit (310 seconds)
Is it just some kind of false alarm (like we used to get on MS Windows)? Both nodes are running with 32 GB of RAM, of which roughly more than 10 GB is constantly free.
Second, as ADDM reported:
FINDING 2: 44% impact (135 seconds)
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
were consuming significant database time.
we've tried splitting the CA disk mirroring, using RAID 10 for the redo log file disks, etc. No substantial performance gain was reported by users (though I noticed some in the AWR reports).
Despite the confusing feedback from application users, I'm nearly sure our bottleneck is the redo log file disks. Why? Previously (old HW) we had 1-3 ms average wait on log file sync and log file parallel write, and now (new HW; we tested both RAID 5 and RAID 10) it's 8 ms or even more. We were able to get 2 ms only with CA switched off (HP Metrocluster disk array mirroring).
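A quick way to keep watching those two wait events going forward, as a sketch using the standard dynamic performance view (instance-wide cumulative figures, so compare deltas between samples):

```sql
-- Average wait per event since instance startup; sample twice and diff
-- the totals to get the current rate.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 1) AS avg_ms
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');
```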
And that brings up two new questions:
1. Does redo log group mirroring (2 members on 2 separate disks vs. 1 member on 1 disk) have any significant impact on the above-mentioned wait events? I mean, what performance gain could I expect if I drop all "secondary" redo log members?
2. Why do we get almost identical response times when we run bulk insert/update tests (say 1,000,000 rows) against the old and the new DB/HW?
Thanks in advance,
tOPsEEK
Edited by: smutny on 1.11.2009 17:39
Edited by: smutny on 1.11.2009 17:46

Hi again,
so here's the actual AWR report (a 1-minute window while the most problematic operation took place). I think it's becoming clear we have to deal with slow redo log writes...
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
... 294736156 ... 1 10.2.0.4.0 NO ...
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 1254 02-Nov-09 10:07:45 91 46.4
End Snap: 1255 02-Nov-09 10:08:47 91 46.4
Elapsed: 1.04 (mins)
DB Time: 0.51 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 576M 576M Std Block Size: 8K
Shared Pool Size: 912M 912M Log Buffer: 14,372K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 163,781.94 4,575.45
Logical reads: 1,677.15 46.85
Block changes: 1,276.32 35.66
Physical reads: 1.99 0.06
Physical writes: 1.16 0.03
User calls: 426.41 11.91
Parses: 20.74 0.58
Hard parses: 0.19 0.01
Sorts: 2.38 0.07
Logons: 0.00 0.00
Executes: 386.76 10.80
Transactions: 35.80
% Blocks changed per Read: 76.10 Recursive Call %: 31.51
Rollback per transaction %: 0.00 Rows per Sort: 369.98
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.88 In-memory Sort %: 100.00
Library Hit %: 99.90 Soft Parse %: 99.07
Execute to Parse %: 94.64 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 112.50 % Non-Parse CPU: 99.11
Shared Pool Statistics Begin End
Memory Usage %: 88.90 88.87
% SQL with executions>1: 98.74 99.39
% Memory for SQL w/exec>1: 95.35 97.75
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
log file parallel write 2,228 21 10 69.4 System I/O
log file sync 2,220 21 10 69.2 Commit
CPU time 20 65.5
SQL*Net break/reset to client 2,106 1 1 3.4 Applicatio
db file sequential read 131 0 4 1.5 User I/O
Time Model Statistics DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> Total time in database user-calls (DB Time): 30.9s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
DB CPU 20.2 65.5
sql execute elapsed time 9.1 29.4
PL/SQL execution elapsed time 0.3 1.0
parse time elapsed 0.2 .5
hard parse elapsed time 0.1 .5
repeated bind elapsed time 0.0 .0
DB time 30.9 N/A
background elapsed time 22.2 N/A
background cpu time 0.4 N/A
Wait Class DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
System I/O 3,213 .0 22 7 1.4
Commit 2,220 .0 21 10 1.0
Application 2,106 .0 1 1 0.9
User I/O 134 .0 0 4 0.1
Network 29,919 .0 0 0 13.4
Wait Events DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 2,228 .0 21 10 1.0
log file sync 2,220 .0 21 10 1.0
SQL*Net break/reset to clien 2,106 .0 1 1 0.9
db file sequential read 131 .0 0 4 0.1
control file parallel write 44 .0 0 9 0.0
db file parallel write 44 .0 0 4 0.0
SQL*Net more data to client 3,397 .0 0 0 1.5
SQL*Net message to client 26,509 .0 0 0 11.9
control file sequential read 897 .0 0 0 0.4
SQL*Net more data from clien 13 .0 0 0 0.0
direct path write 3 .0 0 0 0.0
SQL*Net message from client 26,510 .0 1,493 56 11.9
Streams AQ: qmn slave idle w 2 .0 55 27412 0.0
Streams AQ: qmn coordinator 4 50.0 55 13706 0.0
PL/SQL lock timer 10 100.0 49 4897 0.0
Background Wait Events DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 2,228 .0 21 10 1.0
control file parallel write 44 .0 0 9 0.0
db file parallel write 44 .0 0 4 0.0
control file sequential read 71 .0 0 0 0.0
rdbms ipc message 2,412 7.7 525 218 1.1
pmon timer 20 100.0 59 2929 0.0
Streams AQ: qmn slave idle w 2 .0 55 27412 0.0
Streams AQ: qmn coordinator 4 50.0 55 13706 0.0
Operating System Statistics DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
Statistic Total
AVG_BUSY_TIME 847
AVG_IDLE_TIME 5,362
AVG_IOWAIT_TIME 2,692
AVG_SYS_TIME 295
AVG_USER_TIME 549
BUSY_TIME 3,396
IDLE_TIME 21,457
IOWAIT_TIME 10,776
SYS_TIME 1,190
USER_TIME 2,206
LOAD 0
OS_CPU_WAIT_TIME 192,401,000
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 40,960
VM_OUT_BYTES 335,872
PHYSICAL_MEMORY_BYTES 34,328,276,992
NUM_CPUS 4
NUM_CPU_SOCKETS 4
Instance Activity Stats DB/Inst: ROVEPP/rovepp Snaps: 1254-1255
Statistic Total per Second per Trans
CPU used by this session 984 15.8 0.4
CPU used when call started 344 5.5 0.2
CR blocks created 4 0.1 0.0
Cached Commit SCN referenced 208 3.3 0.1
Commit SCN cached 0 0.0 0.0
DB time 2,589 41.6 1.2
DBWR checkpoint buffers written 69 1.1 0.0
DBWR checkpoints 0 0.0 0.0
DBWR object drop buffers written 0 0.0 0.0
DBWR tablespace checkpoint buffe 0 0.0 0.0
DBWR thread checkpoint buffers w 0 0.0 0.0
DBWR transaction table writes 8 0.1 0.0
DBWR undo block writes 15 0.2 0.0
IMU CR rollbacks 0 0.0 0.0
IMU Flushes 1,156 18.6 0.5
IMU Redo allocation size 996,048 16,017.2 447.5
IMU commits 2,100 33.8 0.9
IMU contention 0 0.0 0.0
IMU ktichg flush 0 0.0 0.0
IMU pool not allocated 0 0.0 0.0
IMU recursive-transaction flush 0 0.0 0.0
IMU undo allocation size 22,402,560 360,250.9 10,064.0
IMU- failed to get a private str 0 0.0 0.0
Misses for writing mapping 0 0.0 0.0
SQL*Net roundtrips to/from clien 26,480 425.8 11.9
active txn count during cleanout 34 0.6 0.0
application wait time 106 1.7 0.1
background checkpoints completed 0 0.0 0.0
background checkpoints started 0 0.0 0.0
background timeouts 199 3.2 0.1
branch node splits 0 0.0 0.0
buffer is not pinned count 13,919 223.8 6.3
buffer is pinned count 19,483 313.3 8.8
bytes received via SQL*Net from 884,016 14,215.7 397.1
bytes sent via SQL*Net to client 9,602,642 154,418.1 4,313.9
calls to get snapshot scn: kcmgs 13,641 219.4 6.1
calls to kcmgas 3,029 48.7 1.4
calls to kcmgcs 56 0.9 0.0
change write time 8 0.1 0.0
cleanout - number of ktugct call 42 0.7 0.0
cleanouts and rollbacks - consis 0 0.0 0.0
cleanouts only - consistent read 0 0.0 0.0
cluster key scan block gets 1,100 17.7 0.5
cluster key scans 1,077 17.3 0.5
commit batch/immediate performed 1 0.0 0.0
commit batch/immediate requested 1 0.0 0.0
commit cleanout failures: block 0 0.0 0.0
commit cleanout failures: buffer 0 0.0 0.0
commit cleanout failures: callba 4 0.1 0.0
commit cleanout failures: cannot 0 0.0 0.0
commit cleanouts 9,539 153.4 4.3
commit cleanouts successfully co 9,535 153.3 4.3
commit immediate performed 1 0.0 0.0
commit immediate requested 1 0.0 0.0
commit txn count during cleanout 26 0.4 0.0
concurrency wait time 0 0.0 0.0
consistent changes 264 4.3 0.1
consistent gets 48,659 782.5 21.9
consistent gets - examination 26,952 433.4 12.1
consistent gets direct 2 0.0 0.0
consistent gets from cache 48,657 782.4 21.9
cursor authentications 2 0.0 0.0
data blocks consistent reads - u 4 0.1 0.0
db block changes 79,369 1,276.3 35.7
db block gets 55,636 894.7 25.0
db block gets direct 3 0.1 0.0
db block gets from cache 55,633 894.6 25.0
deferred (CURRENT) block cleanou 4,768 76.7 2.1
dirty buffers inspected 0 0.0 0.0
enqueue conversions 15 0.2 0.0
enqueue releases 9,967 160.3 4.5
enqueue requests 9,967 160.3 4.5
enqueue timeouts 0 0.0 0.0
enqueue waits 0 0.0 0.0
execute count 24,051 386.8 10.8
failed probes on index block rec 0 0.0 0.0
frame signature mismatch 0 0.0 0.0
free buffer inspected 680 10.9 0.3
free buffer requested 1,297 20.9 0.6
heap block compress 11 0.2 0.0
hot buffers moved to head of LRU 1,797 28.9 0.8
immediate (CR) block cleanout ap 0 0.0 0.0
immediate (CURRENT) block cleano 2,274 36.6 1.0
index crx upgrade (positioned) 47 0.8 0.0
index fast full scans (full) 0 0.0 0.0
index fetch by key 10,326 166.1 4.6
index scans kdiixs1 6,071 97.6 2.7
leaf node 90-10 splits 14 0.2 0.0
leaf node splits 18 0.3 0.0
lob reads 0 0.0 0.0
lob writes 198 3.2 0.1
lob writes unaligned 176 2.8 0.1
logons cumulative 0 0.0 0.0
messages received 2,272 36.5 1.0
messages sent 2,272 36.5 1.0
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 21,083 339.0 9.5
opened cursors cumulative 1,290 20.7 0.6
parse count (failures) 0 0.0 0.0
parse count (hard) 12 0.2 0.0
parse count (total) 1,290 20.7 0.6
parse time cpu 18 0.3 0.0
parse time elapsed 16 0.3 0.0
physical read IO requests 124 2.0 0.1
physical read bytes 1,015,808 16,335.0 456.3
physical read total IO requests 1,030 16.6 0.5
physical read total bytes 15,785,984 253,851.1 7,091.6
physical read total multi block 0 0.0 0.0
physical reads 124 2.0 0.1
physical reads cache 122 2.0 0.1
physical reads cache prefetch 0 0.0 0.0
physical reads direct 2 0.0 0.0
physical reads direct (lob) 0 0.0 0.0
physical reads direct temporary 0 0.0 0.0
physical reads prefetch warmup 0 0.0 0.0
physical write IO requests 47 0.8 0.0
physical write bytes 589,824 9,484.8 265.0
physical write total IO requests 4,591 73.8 2.1
physical write total bytes 25,374,720 408,045.5 11,399.3
physical write total multi block 4,461 71.7 2.0
physical writes 72 1.2 0.0
physical writes direct 3 0.1 0.0
physical writes direct (lob) 3 0.1 0.0
physical writes direct temporary 0 0.0 0.0
physical writes from cache 69 1.1 0.0
physical writes non checkpoint 18 0.3 0.0
pinned buffers inspected 1 0.0 0.0
prefetch warmup blocks aged out 0 0.0 0.0
prefetched blocks aged out befor 0 0.0 0.0
process last non-idle time 0 0.0 0.0
recursive calls 12,197 196.1 5.5
recursive cpu usage 747 12.0 0.3
redo blocks written 11,398 183.3 5.1
redo buffer allocation retries 0 0.0 0.0
redo entries 6,920 111.3 3.1
redo log space requests 0 0.0 0.0
redo log space wait time 0 0.0 0.0
redo ordering marks 96 1.5 0.0
redo size 10,184,944 163,781.9 4,575.5
redo subscn max counts 811 13.0 0.4
redo synch time 2,190 35.2 1.0
redo synch writes 2,220 35.7 1.0
redo wastage 1,377,920 22,158.0 619.0
redo write time 2,192 35.3 1.0
redo writer latching time 0 0.0 0.0
redo writes 2,228 35.8 1.0
rollback changes - undo records 24 0.4 0.0
rollbacks only - consistent read 4 0.1 0.0
rows fetched via callback 1,648 26.5 0.7
session connect time 0 0.0 0.0
session cursor cache hits 1,242 20.0 0.6
session logical reads 104,295 1,677.2 46.9
session pga memory 2,555,904 41,101.0 1,148.2
session pga memory max 0 0.0 0.0
session uga memory 123,488 1,985.8 55.5
session uga memory max 0 0.0 0.0
shared hash latch upgrades - no 66 1.1 0.0
sorts (disk) 0 0.0 0.0
sorts (memory) 148 2.4 0.1
sorts (rows) 54,757 880.5 24.6
sql area evicted 86 1.4 0.0
sql area purged 0 0.0 0.0
summed dirty queue length 0 0.0 0.0
switch current to new buffer 596 9.6 0.3
table fetch by rowid 9,173 147.5 4.1
table fetch continued row 0 0.0 0.0
table scan blocks gotten 982 15.8 0.4
table scan rows gotten 154,079 2,477.7 69.2
table scans (cache partitions) 0 0.0 0.0
table scans (long tables) 0 0.0 0.0
table scans (short tables) 59 1.0 0.0
total number of times SMON poste 0 0.0 0.0
transaction rollbacks 1 0.0 0.0
undo change vector size 3,990,136 64,164.5 1,792.5
user I/O wait time 49 0.8 0.0
user calls 26,517 426.4 11.9
user commits 2,226 35.8 1.0
user rollbacks 0 0.0 0.0
workarea executions - onepass 0 0.0 0.0
workarea executions - optimal 204 3.3 0.1
write clones created in backgrou 0 0.0 0.0
write clones created in foregrou 0 0.0 0.0
-------------------------------------------------------------

... and what's even more interesting is the report I got using Tanel Poder's great session snapper script. Take a look at these numbers (excerpt):
SID USERNAME TYPE STATISTIC DELTA HDELTA/SEC %TIME
668 ROVE_EDA WAIT log file sync 2253426 450.69ms 45.1%
668 ROVE_EDA WAIT log file sync 2140618 428.12ms 42.8%
668 ROVE_EDA WAIT log file sync 2088327 417.67ms 41.8%
668 ROVE_EDA WAIT log file sync 2184408 364.07ms 36.4%
668 ROVE_EDA WAIT log file sync 2117470 352.91ms 35.3%
668 ROVE_EDA WAIT log file sync 2051280 341.88ms 34.2%
668 ROVE_EDA WAIT log file sync 1595019 265.84ms 26.6%
668 ROVE_EDA WAIT log file sync 612034 122.41ms 12.2%
668 ROVE_EDA WAIT log file sync 2162980 432.6ms 43.3%
668 ROVE_EDA WAIT log file sync 2071811 345.3ms 34.5%
668 ROVE_EDA WAIT log file sync 2004571 334.1ms 33.4%
668 ROVE_EDA WAIT db file sequential read 28401 5.68ms .6%
668 ROVE_EDA WAIT db file sequential read 29028 4.84ms .5%
668 ROVE_EDA WAIT db file sequential read 24846 4.14ms .4%
668 ROVE_EDA WAIT db file sequential read 24323 4.05ms .4%
668 ROVE_EDA WAIT db file sequential read 17026 3.41ms .3%
668 ROVE_EDA WAIT db file sequential read 6736 1.35ms .1%
668 ROVE_EDA WAIT db file sequential read 33028 5.5ms .6%
764 (LGWR) WAIT log file parallel write 2236748 447.35ms 44.7%
764 (LGWR) WAIT log file parallel write 2150825 430.17ms 43.0%
764 (LGWR) WAIT log file parallel write 2139532 427.91ms 42.8%
764 (LGWR) WAIT log file parallel write 2119086 423.82ms 42.4%
764 (LGWR) WAIT log file parallel write 2134938 355.82ms 35.6%
764 (LGWR) WAIT log file parallel write 2083649 347.27ms 34.7%
764 (LGWR) WAIT log file parallel write 2034998 339.17ms 33.9%
764 (LGWR) WAIT log file parallel write 1996050 332.68ms 33.3%
764 (LGWR) WAIT log file parallel write 1797057 299.51ms 30.0%
764 (LGWR) WAIT log file parallel write 555403 111.08ms 11.1%
764 (LGWR) WAIT log file parallel write 277875 46.31ms 4.6%
764 (LGWR) WAIT log file parallel write 2067591 344.6ms 34.5%

Where SID=668 is the session we've been looking at... OK, we have to get back to monitoring the disk array and the corresponding network components.
tOPsEEK -
Cannot insert/update data in a table which is created from a view
Hi all
I'm using Oracle database 11g
I've created table from view as the following command:
Create table table_new as select * from View_Old
I can insert/update data in table_new from the command line.
But I cannot insert/update data in table_new with the SI Object Browser tool or Oracle SQL Developer (read only).
Can anybody tell me what happened, and why?
Thank you,
thiensu
Edited by: user8248216 on May 5, 2011 8:54 PM
Edited by: user8248216 on May 5, 2011 8:55 PM

> I can insert/update data in table_new from the command line. But I cannot insert/update data in table_new with the SI Object Browser tool or Oracle SQL Developer (read only).

So what is wrong with the GUI tools, and why post to the DATABASE forum when the database itself works OK?
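One quick check that might help here, as a sketch (it assumes the table lives in the connected user's own schema):

```sql
-- CREATE TABLE ... AS SELECT produces an ordinary table, so a GUI
-- "read only" flag usually comes from the tool, not from the object.
SELECT object_name, object_type
FROM   user_objects
WHERE  object_name = 'TABLE_NEW';
```

If this returns TABLE (not VIEW), the object itself is updatable, and the restriction is in the tool's settings or the connected user's privileges.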
BDLS updating logical systems in CIF models?
Hi
The BDLS tool can be used to update logical system names. As I understand it, this tool is often used when copying systems (production system to quality system, for instance).
In a situation where you copy the productive system to the Q system:
Does anyone know if BDLS is capable of updating the logical systems in the generated CIF models that get copied from the productive system? In this case you'd get a big chunk of models that point to the wrong logical name. If you could get the target system changed in the generated models, you could save a lot of time in generating and activating models.
Any comments much appreciated
Simon

Hi,
I am not too sure if this thread is still followed. I was looking for some info: we had a similar issue recently when we did a production refresh to quality, and all our integration model variants were pointing to our production system.
Our Basis team took all the necessary steps (BDLS etc.), but we still had this issue. I dropped a message to SAP and they gave me a report name, RCIFVARIANTCHANGE, to change the logical system name from the old one to the new one for CIF variants.
Thanks,
Murali -
Insert/update data in a table through a DBLINK (Oracle)
I am trying to insert/update a table in one Oracle database instance from another one through an Oracle dblink, and I get the following error:
java.sql.SQLException: ORA-01008: not all variables bound
ORA-02063: preceding line from MYLINK
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:582)
at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1986)
at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1144)
at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2152)
at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:2035)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2876)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:609)
The same code inserting/updating the exact same table in the local instance works fine, with no binding problem. So I am pretty sure every ? placeholder in the SQL is set to some value before the statement is sent to Oracle.
Can someone please advise what the possible problem is? Is the DB link not set up correctly, or can we not update a remote table through a dblink?
By the way, I can do insert/update from TOAD to the remote table through the DBLINK. The problem happens only in the Java code.
thanks!
Gary
A dblink links from one database instance to another.
So it is certainly a possible source of problems when something works on one database and not on another.
You should start by looking at the dblink and, if possible, testing it in the database directly rather than via Java.
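As a sanity check, something along these lines can be run directly in SQL*Plus or SQL Developer (a minimal sketch: MYLINK is the link name from the error message, while remote_table, its columns, and the bind value are hypothetical):

```sql
-- Confirm the link resolves and a round trip to the remote instance works
SELECT * FROM dual@MYLINK;

-- Try the same kind of DML with an explicit bind variable
VARIABLE v_id NUMBER
EXEC :v_id := 1
UPDATE remote_table@MYLINK SET col1 = 'x' WHERE id = :v_id;
ROLLBACK;
```

If this fails as well, the problem lies in the link definition or remote grants rather than in the Java parameter binding.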
Note as well that the error suggests it is coming from the Oracle database. I believe that if you had a bind-parameter problem in your Java code, the error would come from the driver. But that is a guess on my part. -
Hello,
I am using ORACLE DATABASE 11g (EE) and RHEL 5.
I want to insert/update Japanese-language data in a column which has the datatype VARCHAR2(256).
I tried to change the NLS_LANGUAGE and NLS_TERRITORY parameters with the 'ALTER SESSION SET ...' command, but it had no effect.
I tried bouncing (shutdown and startup) the DB, but still no effect.
I tried setting NLS_LANGUAGE and NLS_TERRITORY in the init.ora file, but still no use.
If anybody knows the detailed steps for what I have mentioned above, let me know. It might be that my method is wrong.
Can you please guide me on how to change the language of the DB to Japanese for a particular session?
Thanks in advance...
Edited by: VJ4 on May 9, 2011 6:21 PM
VJ4 wrote:
Thanks for the info.
Yes, I tried the UNISTR function and was able to insert the data successfully.
But the point is that we can't remember the Unicode code point for each letter. Is there any method by which we can insert Japanese characters directly using a plain INSERT?
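For reference, the UNISTR workaround mentioned above looks roughly like this (the table and column names are hypothetical; \65E5\672C\8A9E are the UTF-16 code units for 日本語):

```sql
-- Works regardless of the client character set, because the characters
-- are spelled as Unicode escapes rather than as literal bytes.
INSERT INTO test_jp (txt) VALUES (UNISTR('\65E5\672C\8A9E'));
COMMIT;
```

Inserting the characters literally, without UNISTR, requires a database character set (and client NLS_LANG) that can actually represent them, which is what the character-set conversion below addresses.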
As you said :-
Note that changing database character set is something complicated that requires many steps.
Can you please provide me some links or other material to study the detailed steps of changing the database character set?
I have gone through the Oracle online documentation; if you can pinpoint any good link in it, please do, or else provide me some other material.
Thanks.
You will need to convert your database character set to AL32UTF8. This is not a trivial exercise if your database already has data in it. See these MOS documents:
Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) (Doc ID 260192.1)
AL32UTF8 / UTF8 (Unicode) Database Character Set Implications (Doc ID 788156.1)
http://download.oracle.com/docs/cd/E11882_01/server.112/e10729/ch11charsetmig.htm#g1011430
HTH
Srini -
I am trying to simulate the robot voice-synthesizer sound produced by the electronic voice simulator used after someone has had their voice box removed. The Vocal Transformer insert in Logic Pro doesn't quite do it. Any suggestions?
Try one of the Audio Voice Effects like Alien / Cosmic / Robot… etc. to start with…
Adjust the Settings in the Inspector to your liking… -
Can you create a nested condition in a merge (i.e. insert/update) using OWB?
Hi,
Does OWB 9iR2 allow you to build a nested condition into a merge, such as:
if no match on col1 and col2 then
    if col3 matches then insert with no new sequence value   <---
    else insert a new sequence value;
else (there is a match on col1 and col2)
    update col3 and the sequence.
I have an incremental load for a lookup table where insert/update is used. There are two match columns, and a surrogate key is used. When there is no match on the two match columns, it shall not insert a new sequence value if there is a match on the third column. I cannot use the 3rd column in the original match condition because it must be updated where there is a match on the two match columns.
I am trying to avoid using a transformation because of the performance impact. Thanks
Hi, I think the misleading thing is that in PL/SQL you can use Booleans, which is not possible in SQL. So in a PL/SQL transformation, this is OK:
a := case when not orgid_lkup( INGRP1.ORG_ID )
          then get_supid(..)
          else ...
     end;
but the following SQL does not work:
select case when not orgid_lkup( INGRP1.ORG_ID )
            then get_supid(..)
            else ...
       end
  into a
  from dual;
I ended up using only 0/1 as boolean return values for these reasons;
so I can have:
select
case when orgid_lkup( INGRP1.ORG_ID ) = 0 then ...
though true booleans are better if you don't have to embed them in SQL.
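Applying that 0/1 pattern to the original question, the MERGE could take a shape like this (a hedged sketch only: lookup_tab, src_tab, the columns, and the helper functions key_exists/existing_key are all hypothetical, with key_exists returning 0/1 as suggested above):

```sql
MERGE INTO lookup_tab t
USING src_tab s
   ON (t.col1 = s.col1 AND t.col2 = s.col2)
WHEN MATCHED THEN
  UPDATE SET t.col3 = s.col3
WHEN NOT MATCHED THEN
  INSERT (col1, col2, col3, surr_key)
  VALUES (s.col1, s.col2, s.col3,
          CASE WHEN key_exists(s.col3) = 1
               THEN existing_key(s.col3)   -- reuse the existing surrogate key
               ELSE NULL                   -- new key left to a row-level trigger
          END);
```

Note that Oracle restricts sequence calls (seq.NEXTVAL) inside CASE expressions in some versions, which is why the new-key branch is left to a trigger in this sketch rather than calling the sequence inline.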
Antonio -
Pre Database Insert / Update Validation
Hi,
I have a maintenance view that allows users to insert/update.
Let's say, z_my_table is my table
key1
key2
field1
field2
field3
I want to validate whether field1, field2 and field3 already exist in my table before insert/update.
However, due to some restrictions, I cannot make them part of the key.
My question is: how can I capture or validate this before the data gets inserted into the table?
I checked some online help and found that the suggestion is to write a function module or an enhancement point.
I just wonder what the common practice is and how I should do it? Thanks.
I don't agree with the design, but if you want to do it anyway it can be done using Events in the Maintenance View.
After you've generated the Maintenance View of your Z table, in SE11 choose menubar Utilities->Table Maintenance Generator, then menubar Environment->Modification->Events. Create New Entries; I think for your requirement you would need 2 entries - one at event 05 and another at event 18. Input the name of your routine and write the code by clicking its corresponding button.
The following is an example of a scenario where text of an object is retrieved and filled into the view display-only column according to user input of its related object id - for your requirement you would validate the user input after checking the input values in your existing table:
form fill_text_05.
  data begin of w_total.
          include structure zfv_fbt_acct_rel.
  data: action,
        mark,
        end of w_total.
  data: w_extract type zfv_fbt_acct_rel.

  clear: zfv_fbt_acct_rel-fbt_rel_acc_txt,
         zfv_fbt_acct_rel-fbt_cost_grp_txt.

* populate GL a/c text
  select txt50 up to 1 rows
    from skat
    into zfv_fbt_acct_rel-fbt_rel_acc_txt
   where spras = sy-langu
     and saknr = zfv_fbt_acct_rel-fbt_rel_acc.
  endselect.

* populate Cost Group text
  select single fbt_cost_grp_txt
    from zft_fbt_cost_grp
    into zfv_fbt_acct_rel-fbt_cost_grp_txt
   where company_code = zfv_fbt_acct_rel-company_code
     and fbt_cost_grp = zfv_fbt_acct_rel-fbt_cost_grp.
  if sy-subrc <> 0.
    message e074(zf_enhancements) with zfv_fbt_acct_rel-company_code.
  endif.
endform.                    "fill_text_05
Hope this helps.
Cheers,
Sougata. -
Inserting/updating data in control block based on view
Hi!
I've created a block based on a view to display data.
I want this block to be insertable and updatable; that is, I will use ON-INSERT/ON-UPDATE triggers to call insert/update procedures located in the database.
When trying to change/insert a value in the block, the error message "Error: Cannot insert into or update data in a view" pops up. I've tried to get rid of this error, without success.
How can I make a data block based on a view insertable and updatable?
My guess is that this has something to do with locking the records (there is no rowid in the view)... but I'm not sure.
Please advise!!
Morten
As well as the ON-UPDATE, ON-INSERT and ON-DELETE triggers, you also need an ON-LOCK trigger
(even though it might just contain NULL;), otherwise the form will try to lock the view and fail.
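A minimal sketch of the trigger bodies (my_pkg and the block/item names are hypothetical; the stored procedures are the ones you would write in the database):

```sql
-- ON-INSERT trigger: hand the DML to a stored procedure
BEGIN
  my_pkg.insert_row(:blk.col1, :blk.col2);
END;

-- ON-UPDATE trigger
BEGIN
  my_pkg.update_row(:blk.col1, :blk.col2);
END;

-- ON-LOCK trigger: do nothing, so Forms does not try to lock the view
BEGIN
  NULL;
END;
```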
Actually, your terminology is wrong: a block based on a table or view is not a control block. A control block is not based on anything and has no default functionality for communicating with the database. If it were a control block, the ON- triggers would not fire.