Multi-table insert and a tricky validation problem
I have the following account-related tables:
--staging table where all raw records exist
acc_event_stage
(acc_id NUMBER(22) NOT NULL,
event_code varchar2(50) NOT NULL,
event_date date not null);
--production table where valid records are moved from the staging table
acc_event
(acc_id NUMBER(22) NOT NULL,
event_code varchar2(50) NOT NULL,
event_date date not null);
--error records from staging are moved here
err_file
(acc_id NUMBER(22) NOT NULL,
error_code varchar2(50) NOT NULL);
--summary records from production account table
acc_event_summary
(acc_id NUMBER(22) NOT NULL,
event_date date NOT NULL,
instance_flag char(1) not null);
Records in the staging table may look like this (written in plain English for readability):
open account A on June 12 8 am
close account A on June 12 9 am
open account A on June 12 11 am
Rules:
An account cannot be closed if no open account exists.
An account cannot be updated if no open account exists.
An account cannot be opened if an open account already exists.
Since I am using INSERT ALL, something like:
INSERT ALL
WHEN (open-account record and an open account already exists) THEN
INTO ...error table
WHEN (close-account record and no open account exists) THEN
INTO ...error table
ELSE
INTO acc_event
SELECT * FROM acc_event_stage ORDER BY ...
I am wondering: if the staging table has records like this
open account A on June 12 8 am
close account A on June 12 9 am
open account A on June 12 11 am
then how can I validate some of the records given the rules above? I can do the validation against existing records (those present before the staging data arrived) without any problem. The tricky part is validating the new records in the staging table against each other.
This can easily be achieved with a cursor loop, but doing it as a multi-table insert looks like a problem.
Any opinion?
thx
m
In short, as a simple example: if, through a multi-table insert, I insert records into the production account event table from the staging account event table, making sure each record doesn't already exist in the production table, this will break if two open-account records exist in the current staging batch. It will also break if an open-account record is followed by a close-account record, and so on.
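For what it's worth, one set-based sketch (untested, and assuming event_code uses literal 'OPEN'/'CLOSE' values, which the post does not confirm) is to compute each row's prior open/close balance with an analytic window, combining the production state with the earlier rows of the same staging batch, and then route on that derived column:

```sql
-- Sketch only: validates each staging row against the production state
-- PLUS the staging rows that precede it, in a single statement.
INSERT ALL
  WHEN event_code = 'OPEN'  AND open_balance > 0 THEN
    INTO err_file (acc_id, error_code) VALUES (acc_id, 'ALREADY_OPEN')
  WHEN event_code = 'CLOSE' AND open_balance <= 0 THEN
    INTO err_file (acc_id, error_code) VALUES (acc_id, 'NOT_OPEN')
  ELSE
    INTO acc_event (acc_id, event_code, event_date)
    VALUES (acc_id, event_code, event_date)
SELECT acc_id, event_code, event_date,
       -- open/close balance BEFORE this row: production events plus all
       -- earlier staging events for the same account
       prod_balance
         + NVL(SUM(CASE event_code WHEN 'OPEN' THEN 1
                                   WHEN 'CLOSE' THEN -1 END)
                 OVER (PARTITION BY acc_id ORDER BY event_date
                       ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0)
         AS open_balance
FROM (SELECT s.acc_id, s.event_code, s.event_date,
             (SELECT NVL(SUM(CASE e.event_code WHEN 'OPEN' THEN 1
                                               WHEN 'CLOSE' THEN -1 END), 0)
                FROM acc_event e
               WHERE e.acc_id = s.acc_id) AS prod_balance
        FROM acc_event_stage s);
```

One caveat: a rejected row still contributes to the running balance here (an invalid second OPEN pushes the balance to 2), whereas the cursor-loop version would skip it. Matching the loop's semantics exactly would need something stateful, such as a MODEL clause.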
Similar Messages
-
Issue with trigger, multi-table insert and error logging
I find that if I try to perform a multi-table insert with error logging on a table that has a trigger, then some constraint violations result in an exception being raised as well as logged:
<pre>
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> create table t1 (id integer primary key);
Table created.
SQL> create table t2 (id integer primary key, t1_id integer,
2 constraint t2_t1_fk foreign key (t1_id) references t1);
Table created.
SQL> exec dbms_errlog.create_error_log ('T2');
PL/SQL procedure successfully completed.
SQL> insert all
2 into t2 (id, t1_id)
3 values (x, y)
4 log errors into err$_t2 reject limit unlimited
5 select 1 x, 2 y from dual;
0 rows created.
SQL> create or replace trigger t2_trg
2 before insert or update on t2
3 for each row
4 begin
5 null;
6 end;
7 /
Trigger created.
SQL> insert all
2 into t2 (id, t1_id)
3 values (x, y)
4 log errors into err$_t2 reject limit unlimited
5 select 1 x, 2 y from dual;
insert all
ERROR at line 1:
ORA-02291: integrity constraint (EOR.T2_T1_FK) violated - parent key not found
</pre>
This doesn't appear to be a documented restriction. Does anyone know if it is a bug?

Tony Andrews wrote:
This doesn't appear to be a documented restriction. Does anyone know if it is a bug?

Check "The Execution Model for Triggers and Integrity Constraint Checking" in the documentation.
SY. -
ODI - IKM Oracle Multi Table Insert
Hi All,
I am new to ODI. I tried to use "IKM Oracle Multi Table Insert"; one interface generates a query like
insert all
when 1=1 then
into BI.EMP_TOTAL_SAL
*(EMPNO, ENAME, JOB, MGR, HIREDATE, TOTAL_SAL, DEPTNO)*
values
*(C1_EMPNO, C2_ENAME, C3_JOB, C4_MGR, C5_HIREDATE, C6_TOTAL_SAL, C7_DEPTNO)*
select
C1_EMPNO EMPNO,
C2_ENAME ENAME,
C3_JOB JOB,
C4_MGR MGR,
C5_HIREDATE HIREDATE,
C6_TOTAL_SAL TOTAL_SAL,
C7_DEPTNO DEPTNO
from BI.C$_0EMP_TOTAL_SAL
where (1=1)
Because of the aliases this insert fails. Could anyone please explain what exactly happens and how to control the query generation?
Thanks & Regards
M Thiyagu

What David is asking is for you to go to Operator and review the failed task, copy the SQL and paste it up here, then run the SQL in your SQL client (Toad / SQL Developer) and try to ascertain what objects you're missing that cause your SQL error.
Have you followed the link posted above? Have you placed the interfaces in a package in the right order? Are you running the package as a whole or individual interfaces? I don't think the individual interfaces will work with this IKM, as it's designed for one to feed the other.
Please detail the steps you've taken, how many interfaces you have, and what options you have chosen in the IKM options for each interface. It's tricky to diagnose your problem, and when you say "I can't understand what to do and how to do...
So please give the step wise solution to do that interface.. or please give with an example.." it means a lot of people will ignore your post, as we can't see any evidence of you trying!
p.s. I see you have resurrected a thread from 2009. 1) I don't think the multi-table insert KM was available with ODI at that time (10g). 2) The thread is answered / closed, so not many people will look at it. 3) Procedures should only really be used when you can't do it with an interface; you lose all the lovely lineage between objects that you get with an interface.
Hope this helps - please post your setup, your error, and how you have configured the interfaces and package so far. -
Hi Experts,
How can I load data from a single source table into multiple target tables using IKM Oracle Multi Table Insert?
Please help me with an example.
Regards -
Any general tips on getting better performance out of multi table insert?
I have been struggling with coding a multi-table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general. Now that the query is built and works fine, I am sad to see it's quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario to see if you agree that my performance is slow...
It's an INSERT ALL command which ends up inserting into 5 separate tables conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I myself have done. Why 4? I don't know; it's pretty arbitrary, to be honest.
Indexes?
Table 1 has 3 index + PK.
Table 2 has 0 index + FK + PK.
Table 3 has 4 index + FK + PK
Table 4 has 3 index + FK + PK
Table 5 has 4 index + FK + PK
None of the indexes are anything crazy, maybe 3 or 4 of all of them are on multiple columns, 2-3 max. The rest are on single columns.
The query itself looks something like this:
insert /*+ append */ all
when 1=1 then
into table1 (...) values (...)
into table2 (...) values (...)
when a=b then
into table3 (...) values (...)
when a=c then
into table3 (...) values (...)
when p=q then
into table4(...) values (...)
when x=y then
into table5(...) values (...)
select .... from source_table
Hints I tried are with append, without append, and parallel (though adding parallel seemed to make the query run serially, according to my session browser).
Now for the performance:
It does about 8,000 rows per minute on table1. So that means it should also have that much in table2, table3 and table4, and then a subset of that in table5.
Does that seem normal or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
If it seems my performance is slow, what else do you think I should try? Is there any information I could gather to see if maybe it's a poorly configured database for this?
P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ORA-25402: transaction must roll back" after it had been running for 3.5 hours, so I lost all the progress it made and have to start over. Plus, I wonder if the sheer amount of data being queued for commit/rollback is causing some of the problem?
Edited by: trant on Jun 27, 2011 9:29 PM

Looks like there are about 54 sessions on my database; 7 of the sessions belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows; if I filter to the SIDs of my current sessions and order by micro_wait_time desc:
510 events in waitclass Other 30670 9161 329759 10.75 196 3297590639 1736664284 1893977003 0 Other
512 events in waitclass Other 32428 10920 329728 10.17 196 3297276553 1736664284 1893977003 0 Other
243 events in waitclass Other 21513 5 329594 15.32 196 3295935977 1736664284 1893977003 0 Other
223 events in waitclass Other 21570 52 329590 15.28 196 3295898897 1736664284 1893977003 0 Other
241 row cache lock 1273669 0 42137 0.03 267 421374408 1714089451 3875070507 4 Concurrency
241 events in waitclass Other 614793 0 34266 0.06 12 342660764 1736664284 1893977003 0 Other
241 db file sequential read 13323 0 3948 0.3 13 39475015 2652584166 1740759767 8 User I/O
241 SQL*Net message from client 7 0 1608 229.65 1566 16075283 1421975091 2723168908 6 Idle
241 log file switch completion 83 0 459 5.54 73 4594763 3834950329 3290255840 2 Configuration
241 gc current grant 2-way 5023 0 159 0.03 0 1591377 2685450749 3871361733 11 Cluster
241 os thread startup 4 0 55 13.82 26 552895 86156091 3875070507 4 Concurrency
241 enq: HW - contention 574 0 38 0.07 0 378395 1645217925 3290255840 2 Configuration
512 PX Deq: Execution Msg 3 0 28 9.45 28 283374 98582416 2723168908 6 Idle
243 PX Deq: Execution Msg 3 0 27 9.1 27 272983 98582416 2723168908 6 Idle
223 PX Deq: Execution Msg 3 0 25 8.26 24 247673 98582416 2723168908 6 Idle
510 PX Deq: Execution Msg 3 0 24 7.86 23 235777 98582416 2723168908 6 Idle
243 PX Deq Credit: need buffer 1 0 17 17.2 17 171964 2267953574 2723168908 6 Idle
223 PX Deq Credit: need buffer 1 0 16 15.92 16 159230 2267953574 2723168908 6 Idle
512 PX Deq Credit: need buffer 1 0 16 15.84 16 158420 2267953574 2723168908 6 Idle
510 direct path read 360 0 15 0.04 4 153411 3926164927 1740759767 8 User I/O
243 direct path read 352 0 13 0.04 6 134188 3926164927 1740759767 8 User I/O
223 direct path read 359 0 13 0.04 5 129859 3926164927 1740759767 8 User I/O
241 PX Deq: Execute Reply 6 0 13 2.12 10 127246 2599037852 2723168908 6 Idle
510 PX Deq Credit: need buffer 1 0 12 12.28 12 122777 2267953574 2723168908 6 Idle
512 direct path read 351 0 12 0.03 5 121579 3926164927 1740759767 8 User I/O
241 PX Deq: Parse Reply 7 0 9 1.28 6 89348 4255662421 2723168908 6 Idle
241 SQL*Net break/reset to client 2 0 6 2.91 6 58253 1963888671 4217450380 1 Application
241 log file sync 1 0 5 5.14 5 51417 1328744198 3386400367 5 Commit
510 cursor: pin S wait on X 3 2 2 0.83 1 24922 1729366244 3875070507 4 Concurrency
512 cursor: pin S wait on X 2 2 2 1.07 1 21407 1729366244 3875070507 4 Concurrency
243 cursor: pin S wait on X 2 2 2 1.06 1 21251 1729366244 3875070507 4 Concurrency
241 library cache lock 29 0 1 0.05 0 13228 916468430 3875070507 4 Concurrency
241 PX Deq: Join ACK 4 0 0 0.07 0 2789 4205438796 2723168908 6 Idle
241 SQL*Net more data from client 6 0 0 0.04 0 2474 3530226808 2000153315 7 Network
241 gc current block 2-way 5 0 0 0.04 0 2090 111015833 3871361733 11 Cluster
241 enq: KO - fast object checkpoint 4 0 0 0.04 0 1735 4205197519 4217450380 1 Application
241 gc current grant busy 4 0 0 0.03 0 1337 2277737081 3871361733 11 Cluster
241 gc cr block 2-way 1 0 0 0.06 0 586 737661873 3871361733 11 Cluster
223 db file sequential read 1 0 0 0.05 0 461 2652584166 1740759767 8 User I/O
223 gc current block 2-way 1 0 0 0.05 0 452 111015833 3871361733 11 Cluster
241 latch: row cache objects 2 0 0 0.02 0 434 1117386924 3875070507 4 Concurrency
241 enq: TM - contention 1 0 0 0.04 0 379 668627480 4217450380 1 Application
512 PX Deq: Msg Fragment 4 0 0 0.01 0 269 77145095 2723168908 6 Idle
241 latch: library cache 3 0 0 0.01 0 243 589947255 3875070507 4 Concurrency
510 PX Deq: Msg Fragment 3 0 0 0.01 0 215 77145095 2723168908 6 Idle
223 PX Deq: Msg Fragment 4 0 0 0 0 145 77145095 2723168908 6 Idle
241 buffer busy waits 1 0 0 0.01 0 142 2161531084 3875070507 4 Concurrency
243 PX Deq: Msg Fragment 2 0 0 0 0 84 77145095 2723168908 6 Idle
241 latch: cache buffers chains 4 0 0 0 0 73 2779959231 3875070507 4 Concurrency
241 SQL*Net message to client 7 0 0 0 0 51 2067390145 2000153315 7 Network
(yikes, is there a way to wrap that in equivalent of other forums' tag?)
v$session_wait;
223 835 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 10 WAITING
241 22819 row cache lock cache id 13 000000000000000D mode 0 00 request 5 0000000000000005 3875070507 4 Concurrency -1 0 WAITED SHORT TIME
243 747 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 7 WAITING
510 10729 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 2 WAITING
512 12718 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 4 WAITING
v$sess_io:
223 0 5779 5741 0 0
241 38773810 2544298 15107 27274891 0
243 0 5702 5688 0 0
510 0 5729 5724 0 0
512 0 5682 5678 0 0 -
Splitter operator doesn't use multi-table inserts in OWB... very urgent
Hi,
I am using OWB 9i to carry out transformations. I want to copy the same sequence numbers into the two target tables.
Scenario:
I have a source table, source_table, which is connected to a splitter, and the splitter is used to dump the records into two target tables, namely target1_table and target2_table. I have a sequence which is also an input to the splitter, so that I can have the same sequence number in the two output groups of the splitter. I then map the sequence number from the two output groups to the two target tables, expecting to have the same sequence number in both. But when I look at the generated code, it creates two procedures and effectively inserts sequence numbers into the target tables that are not consistent. Please help me get the same, consistent sequence numbers in both target tables.
The above works in row-based operating mode but not in set-based mode. Please give me a valid explanation.
The OWB PDF says that the splitter uses multi-table inserts for multiple targets. After seeing the generated code for set-based operations, I don't agree.
It's very urgent.
Thanks a lot in advance.
-Sharat

Hi Mark,
You got me wrong; let me explain the problem again.
RDBMS oracle 9.2.0.4
OWB 9.2.0.2.8
I have three tables T1,T2 and T3.
T1 is the source table and the remaining two tables T2 and T3 are target tables.
Following are the contents of table T1 -
SQl>select * from T1;
DEPTNAME LOCATION
COMP PUNE
MECH BOMBAY
ELEC A.P
Now I want to populate the two destination tables T2 and T3 with the records in T1.
For this I am using the splitter operator in OWB, which is supposed to generate multi-table inserts, but unfortunately it is not doing so when I generate the SQL. There is no "insert all" command in the SQL it generates.
What I want is, when I populate T2 and T3, to use a sequence generator and get the same sequences in T2 and T3, e.g.
SQl>select * from T2;
NEXT_VAL DEPTNAME LOCATION
1 COMP PUNE
2 MECH BOMBAY
3 ELEC A.P
SQl>select * from T3;
NEXT_VAL DEPTNAME LOCATION
1 COMP PUNE
2 MECH BOMBAY
3 ELEC A.P
I am able to achieve this when I set the operating mode to ROW BASED, but I am not getting the same result when I set it to SET BASED.
Help me....
-Sharat -
Number of record with multi-table insert
Hi,
I need to insert into 3 different tables from one big source table, so I decided to use a multi-table insert. I could use three insert statements, but the source table is an external table and I would prefer not to read it three times.
I wonder if there is a way to get the exact number of records inserted in each one of the tables?
I tried using SQL%ROWCOUNT, but all I get is the number of rows inserted across all three tables. Is there a way to get this info without having to execute a "select count" on each table afterward?
Thanks
INSERT /*+ APPEND */
WHEN RES_ENS='PU' THEN
INTO TABLE1
VALUES(CD_ORGNS, NO_ORGNS, DT_DEB, SYSDATE)
WHEN RES_ENS='PR' THEN
INTO TABLE2
VALUES(CD_ORGNS, UNO_ORGNS, DT_DEB, SYSDATE)
ELSE
INTO TABLE3
VALUES(CD_ORGNS, NO_ORGNS, DT_DEB, SYSDATE)
SELECT ES.CD_ORGNS CD_ORGNS, ES.RES_ENS RES_ENS, ES.DT_DEB DT_DEB, ES.NO_ORGNS NO_ORGNS
FROM ETABL_ENSGN_SUP ES

I have a large number of rows to load into those three tables. I do not want to use PL/SQL (with loops and lines of code) because I can do the insert directly from the source table and it saves a lot of time.
INSERT /*+APPEND*/
WHEN condition1 THEN
INTO table1
WHEN condition2 THEN
INTO table2
SELECT xx FROM my_table
For example, if my_table has 750000000 rows, I only need to read it once, and the INSERT ... SELECT is a really fast way to load data.
As I was saying, the only problem I've got is that I cannot get the number of rows in each table. Do you know a way to get it? -
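There is no per-branch row count exposed by a multi-table insert (the statement-level count covers all targets together). If one extra read of the source is acceptable, a sketch of the usual workaround is to aggregate the same routing conditions the WHEN clauses use:

```sql
-- Counts how many source rows each WHEN branch routes, using the same
-- conditions as the multi-table insert above. The ELSE branch also
-- receives NULL res_ens values, so it is counted accordingly.
SELECT COUNT(CASE WHEN res_ens = 'PU' THEN 1 END) AS table1_rows,
       COUNT(CASE WHEN res_ens = 'PR' THEN 1 END) AS table2_rows,
       COUNT(CASE WHEN res_ens IS NULL
                    OR res_ens NOT IN ('PU', 'PR') THEN 1 END) AS table3_rows
FROM etabl_ensgn_sup;
```

This does cost a second pass over the external table, which the poster wanted to avoid, but it avoids a "select count" on each of the three (much larger) targets.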
VLD-1119: Unable to generate Multi-table Insert statement for some or all t
Hi All -
I have a map in OWB 10.2.0.4 which is ending with following error: -
VLD-1119: Unable to generate Multi-table Insert statement for some or all targets.*
Multi-table insert statement cannot be generated for some or all of the targets due to upstream graphs of those targets are not identical on "active operators" such as "join".*
The map is created with the following logic in mind. Let me know if you need more info. Any directions are highly appreciated, and many thanks for your input in advance:
I have two source tables, say T1 and T2. They are full outer joined in a joiner, and the output of this join is passed to an expression to evaluate column values based on business logic, i.e. if T1 is available then take T1.C1, else take T2.C1, and so on.
A flag is also evaluated in the expression because these intermediate results need to be joined to a third source table, say T3, with a different condition.
Based on the value taken, a flag is set in the expression, which is used in a splitter to get results into three intermediate tables based on the flag value evaluated earlier.
These three intermediate tables are all truncate/insert, and they are unioned to fill a final target table.
Visually it is something like this: -
T1 -- T3 -- JOINER1
| -->Join1 (FULL OUTER) --> Expression -->SPLITTER -- JOINER2 UNION --> Target Table
| JOINER3
T2 --
Please suggest.

I verified that there is a limitation with the splitter operator which will not let you generate a multi-split having more than 999 columns in all.
I had to use two separate splitters to achieve what I was trying to do.
So the situation is now: -
Source -> Split -> Split 1 -> Insert into table -> Union1 -------- Final table A
Source -> Split -> Split 2 -> Insert into table -> Union1 -
Multi-table INSERT with PARALLEL hint on 2 node RAC
A multi-table INSERT statement with parallelism set to 5 works fine and spawns multiple parallel servers to execute. It's just that it sticks to only one instance of a 2-node RAC. The code I used is given below.
create table t1 ( x int );
create table t2 ( x int );
insert /*+ APPEND parallel(t1,5) parallel (t2,5) */
when (dummy='X') then into t1(x) values (y)
when (dummy='Y') then into t2(x) values (y)
select dummy, 1 y from dual;
I can see multiple sessions using the query below, but on one instance only. This happens not only for the above statement but also for statements where real tables (tables with more than 20 million records) are used.
select p.server_name,ps.sid,ps.qcsid,ps.inst_id,ps.qcinst_id,degree,req_degree,
sql.sql_text
from Gv$px_process p, Gv$sql sql, Gv$session s , gv$px_session ps
WHERE p.sid = s.sid
and p.serial# = s.serial#
and p.sid = ps.sid
and p.serial# = ps.serial#
and s.sql_address = sql.address
and s.sql_hash_value = sql.hash_value
and qcsid=945
Won't parallel servers be spawned across instances for multi-table insert with parallelism on RAC?
Thanks,
Mahesh

Please take a look at these 2 articles below:
http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
thanks
http://swervedba.wordpress.com -
Hello boys,
I would like to do an insert into 3 tables:
insert into t1
if found insert into t2
if not found insert into t3.
So 2 out of 3 tables will end up with data.
I would like to do it in 1 multi table insert.
I can't use selects from the destination tables because they are in the terabytes. I tried a number of combinations of primary keys and error logging tables ("log errors reject limit unlimited") but without any success.
Any ideas?

Hello Exor,
You said,
insert into t1
if found insert into t2
if not found insert into t3.
What is the "found" condition? What are you checking, and have you looked into the INSERT ALL clause, if that is what you need? Remember you can have a complex join in your select and insert into 3 or 4 tables based on your "found" condition. This might be the fastest way to move data into 3 tables, instead of using a cursor or bulk collect.
INSERT ALL
WHEN order_total <= 1000000 THEN
INTO small_orders
WHEN order_total > 1000000 AND order_total <= 2000000 THEN
INTO medium_orders
WHEN order_total > 2000000 THEN
INTO large_orders
SELECT order_id, order_total, sales_rep_id, customer_id
FROM orders;

Regards -
I have a set of data:
ID Name Address
1 Tom 1 The Walk
1 Tom 2 The Run
1 Tom 3 The Sprint
I was hoping to use a multi-table insert for this, putting the distinct ID and Name in one table and the 3 address rows in the other table. I don't think this is possible. Is there some syntax I am missing that would enable me to do this?
Richard

How about the following?
SQL> select * from t1 ;
ID NAME ADDRESS
1 Tom 1 The Walk
1 Tom 2 The Run
1 Tom 3 The Sprint
3 rows selected.
SQL> select * from t11 ;
no rows selected
SQL> select * from t12 ;
no rows selected
SQL> desc t11 ;
Name Null? Type
ID NUMBER(38)
NAME VARCHAR2(10)
SQL> desc t12 ;
Name Null? Type
ID NUMBER(38)
NAME VARCHAR2(10)
ADDRESS VARCHAR2(20)
SQL> insert all when (pid > 0) then into t11 values(id, name)
when (pid >= 0) then into t12 values(id, name, address)
select id, name, address, case when id = lag(id) over (order by id,name)
and name = lag(name) over (partition by id order by id,name) then 0 else id end pid
from t1 ;
4 rows created.
SQL> select * from t11 ;
ID NAME
1 Tom
1 row selected.
SQL> select * from t12 ;
ID NAME ADDRESS
1 Tom 1 The Walk
1 Tom 2 The Run
1 Tom 3 The Sprint
3 rows selected. -
Hello,
Does ODI allow using multi table insert feature? For example
INSERT FIRST
WHEN expr THEN
INTO ...
WHEN expr THEN
INTO ..
SELECT ..
FROM ...
WHERE ...
I could build my own LKM but I can't define an appropriate Meta Model. In an interface, ODI allows a single Target Datastore only.

Hi simonova,
I guess you can do this with an ODI procedure (write the PL/SQL code for it).
But an interface supports only one target data store, so you couldn't do it with the available KMs.
Thanks,
Madha -
Multi-table INSERT with a "twist"
Hello:
I'm having a problem writing a query which will allow me to enter data into two tables.
Both tables require inserting a value derived by grabbing the max of a column and adding 1.
I'm first testing the max part on a single table. When I run the insert statement for this single table, I receive an error message stating that a group function is not allowed here. This is my query:
insert into trait_map (name, trait, id) values ('belle', 30000, max(id)+1);

The max(id)+1 does give me the correct value when I tested it in the following query:
select max(dwkey_traittype)+1 as next_id from trait_map order by dwkey_traittype desc;

What I ultimately need is to insert data into two tables, trait_map and traittype_dim. Both of these tables contain the field dwkey_traittype. In both of these tables, I need to get max(dwkey_traittype)+1 and insert the result into the column dwkey_traittype.
Can someone help me?
Thanks.

First off, why not use a sequence instead of calculating max(value)+1? Think about what would happen if you had two sessions attempting to insert data into your tables at the same time: session 1 performs an insert using max(value)+1, but before session 1 commits, session 2 attempts a similar operation and gets the same value for max(value)+1 that session 1 got.
Now, as far as your question of how to get MAX to work: you need to use it in the context of a select statement, not an insert statement:
insert into a_table (id, col, list) select max(id)+1, 'const', 'list' from a_table;

However, this would be better:
insert into a_table (id, col, list) select seq.nextval, 'const', 'list' from dual -
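Since the poster needs the same generated key to land in both trait_map and traittype_dim, it is worth noting that in a multi-table INSERT ALL, Oracle increments a sequence once per row of the subquery, so every INTO clause sees the same NEXTVAL for a given row. A sketch (the sequence name and exact column lists are made up, since the thread mixes two naming schemes):

```sql
-- Hypothetical sequence replacing the max(...)+1 pattern
CREATE SEQUENCE traittype_seq;

-- NEXTVAL is evaluated once per subquery row, so both tables
-- receive the same dwkey_traittype value for each row.
INSERT ALL
  INTO trait_map     (dwkey_traittype, name)  VALUES (traittype_seq.NEXTVAL, name)
  INTO traittype_dim (dwkey_traittype, trait) VALUES (traittype_seq.NEXTVAL, trait)
SELECT 'belle' AS name, 30000 AS trait FROM dual;
```

This gets both the concurrency safety of a sequence and the matching keys across the two tables in one statement.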
Multi table inheritance and performance
I really like the idea of multi-table inheritance, since I have a main
class and three subclasses which each just add one integer to the main class.
It would be a waste to spend 4 tables on this, so I decided to put them
all into one.
My problem now is, that when I query for a specific class, kodo will build
SQL like:
select ... from table where
JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
this is pretty slow when the table grows, because string comparisons are
awful, and even worse: the database has to compare nearly the whole
string, because it differs only in the last letters.
Indexing would help a bit but wouldn't outperform integer comparisons.
Is it possible to get Kodo to do one more step of normalization?
Having an extra table containing all classnames and ids for them (with
references in the original table) would improve the performance of
multi-tables quite a lot!
Even with standard classes it would save a lot of memory to not have the full
classname in each row.

Stefan-
Thanks for the feedback. Note that 3.0 does make this simpler: we have
extensions that allow you to define the mechanism for subclass
identification purely in the metadata file(s). See:
http://solarmetric.com/Software/Documentation/3.0.0RC1/docs/manual.html#ref_guide_mapping_classind
The idea for having a separate table mapping numbers to class names is
good, but we prefer to have as few Kodo-managed tables as possible. It
is just as easy to do this in the metadata file.
In article <[email protected]>, Stefan wrote:
First of all: thanks for the fast help; this one (IntegerProvider) helped and
solves my problem.
Kodo is really amazing with all its places where customization can be
done!
Anyway, as a wish for future releases: exactly this technique of using
integers as class identifiers rather than the full class names is what I
meant by "normalization".
The only thing missing is a table containing information on how classIDs
are mapped to classnames (which is now contained as an explicit statement
in the .jdo file). This table would not be mapped to the primary key of the main
table (as you suggested), but to the classID integer, which acts as a
foreign key.
A query for a specific class would be solved with a query like:
select * from classValues, classMapping where
classValues.JDOCLASSX=classmapping.IDX and
classmapping.CLASSNAMEX='de.company.whatever'
This table should be managed by Kodo, of course!
Imagine a table with 300,000 rows containing only 3 different derived
classes. You would have an extra table with 4 rows (base class + 3 derived types).
Searching for the classID is done in that 4-row table, while searching the
actual class instances would be done over an indexed integer classID
field.
This is much faster than having the database do 300,000 string
comparisons (even when indexed).
(By the way, it would save a lot of memory as well, even on classes which
are not derived.)
If this technique were done by Kodo transparently, maybe turned on with an
extra option... that would be great, since you wouldn't need to take care
of different "subclass-indicator-values", could carry on as usual, and have
far better performance...
Stephen Kim wrote:
You could push off fields to separate tables (as long as the pk column
is the same); however, I doubt that would add much performance benefit
in this case, since we'd simply add a join (e.g. select data.name,
info.jdoclassx, info.jdoidx where data.jdoidx = info.jdoidx and
info.jdoclassx = 'foo'). One could turn off the default fetch group for
fields stored in data, but now you're adding a second select to load one
"row" of data.
However, we DO provide an integer subclass provider which can speed
these sorts of queries a lot if you need to constrain your queries by
class, esp. with indexing, at the expense of simple legibility:
http://solarmetric.com/Software/Documentation/2.5.3/docs/ref_guide_meta_class.html#meta-class-subclass-provider
Stefan wrote:
I really like the idea of multi-table inheritance, since a have a main
class and three subclasses which just add one integer to the main class.
It would be a waste to spend 4 tables on this, so I decided to put them
all into one.
My problem now is, that when I query for a specific class, kodo will build
SQL like:
select ... from table where
JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
this is pretty slow, when the table grows because string comparisons are
awefull - and even worse: the database has to compare nearly the whole
string because it differs only in the last letters.
indexing would help a bit but wouldn't outperforming integer comparisons.
Is it possible to get kodo to do one more step of normalization ?
Having an extra table containing all classnames und id's for them (and
references in the original table) would improve performance of
multi-tables quite a lot !
Even with standard classes it would save a lot memory not having the full
classname in each row.
Steve Kim
[email protected]
SolarMetric Inc.
http://www.solarmetric.com
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com -
Oracle Forms 10g, multiple insert and update problem
Hi,
I have a tabular form (4 records displayed) with one data block (based on a view).
After querying, the form could not update all the records; only the first record's value can be selected from the LOV.
I called a procedure in on-insert and on-update.
The procedure is like this:
PACKAGE BODY MAPPING IS
PROCEDURE INSERT_ROW(EVENT_NAME IN VARCHAR2)
IS
BEGIN
IF (EVENT_NAME = 'ON-INSERT') THEN
INSERT INTO XX_REC_MAPPING
(BRANCH_CODE,COLLECTION_ID,PAY_MODE_ID,RECEIPT_METHOD,CREATED_BY,
CREATION_DATE,LAST_UPDATED_BY,LAST_UPDATE_DATE,LAST_UPDATE_LOGIN)
VALUES
( :XX_REC_MAPPING.OFFICE_CODE,
:XX_REC_MAPPING.COLLECTION_ID,
:XX_REC_MAPPING.PAY_MODE_ID,
:XX_REC_MAPPING.RECEIPT_METHOD,
:XX_REC_MAPPING.CREATED_BY,
:XX_REC_MAPPING.CREATION_DATE,
:XX_REC_MAPPING.LAST_UPDATED_BY,
:XX_REC_MAPPING.LAST_UPDATE_DATE,
:XX_REC_MAPPING.LAST_UPDATE_LOGIN);
ELSIF (EVENT_NAME = 'ON-UPDATE') THEN
UPDATE XX_REC_MAPPING
SET BRANCH_CODE=:XX_REC_MAPPING.OFFICE_CODE,
COLLECTION_ID=:XX_REC_MAPPING.COLLECTION_ID,
PAY_MODE_ID=:XX_REC_MAPPING.PAY_MODE_ID,
RECEIPT_METHOD=:XX_REC_MAPPING.RECEIPT_METHOD,
LAST_UPDATED_BY=:XX_REC_MAPPING.LAST_UPDATED_BY,
LAST_UPDATE_DATE=:XX_REC_MAPPING.LAST_UPDATE_DATE,
LAST_UPDATE_LOGIN=:XX_REC_MAPPING.LAST_UPDATE_LOGIN
WHERE ROWID=:XX_REC_MAPPING.ROW_ID;
END IF;
END INSERT_ROW;
END MAPPING;
Does the table get locked, or should I use some other trigger or loops?
Can someone suggest how to edit this code for multiple updates?
Thanks
sat33

I called a procedure in in-insert and on-update

Why are you writing this code in the first place? If you have a block based on an updatable view, just let Forms do the inserts and updates.
If it's not an updatable view, use instead of triggers on the view.
See this current thread too:
INSTEAD of Trigger View for an Oracle EBS New form development