Conditional multi-row table insert
Hello,
I have a multi-insert table with one new row (_1B). In some cases I don't want the new row, because enough rows have already been created. How can I prevent the iterator from adding the new row?
Does anyone have an idea?
Regards,
Ruben Spekle
Wouldn't it be easier to just omit the _1B suffix when your condition occurs?
model = "${ui:conc('yourtablename', ui:cond(yourcondition, '_1B', ''))}"
Anton
Similar Messages
-
How to set Destination URI of a column in a multi row table
Hi,
I need to programmatically set the destinationURI property of a 'messageStyleText' column in a multi-row table.
I have used the below code in the processRequest of the Controller of the page:
OAViewObject viewObject = (OAViewObject)am.findViewObject("IntSummBackOrdDetVO");
String url = (viewObject.getCurrentRow().getAttribute("ErrorCode")).toString();
url = "/oiphtml/o2c22_"+url+".htm";
OAStaticStyledTextBean errorlink = (OAStaticStyledTextBean) webBean.findChildRecursive("ErrorCode1");
errorlink.setDestination(url);
But this code is not working. Could you please help me correct it?
user594528,
What you're trying to do is only conceptually possible through bound values, because there are many rows in the table. Read the bound values section in the Dev Guide to understand the fundamentals.
You can refer to this thread to correct your code:
Re: Unable to set Destination URI to URL stored in a VO attribute
--Mukul -
Multi Row Madness: Insert?
Hey Guys,
I've been hitting my head against a cobbled wall trying to figure this one out.
I'm creating a form that needs to support versioning - so instead of ever using an update the form always inserts a copy of the data into the database with an incremented version number.
Frustratingly enough, the form requires a dynamic number of parts, best edited and represented by a tabular form. But MRU and MRD... where is Multi-Row Insert?
I've tried everything I can to load the information and then change the version, but doing this results in an ORA-20001 error in the MRU, because the data in the database differs and it cannot "update" the row for old records (when I just want it to copy the row with a new version number).
The tables consist of a header table with id and version attributes forming a combined PK, and a parts table with id and version referencing the header table, plus a parts_id to uniquely identify each part.
Are there any suggestions on this issue? I was thinking about adding an extra value and a trigger to do the inserting instead, but I fear this approach may change the information in the current version (I'm not that familiar with Oracle databases, so I'd prefer to find a solution in APEX).
I'm using 2.2.
cheers,
Alex
But MRU and MRD... where is Multi-Row Insert?
The MRU process does an update for existing rows and insert for new rows (that were added using the Add Row button)
I was thinking about adding an extra value and adding a trigger to do the inserting instead
Yes, a row-level trigger on the underlying table would be the best way to approach this problem. Let the APEX MRU and MRD processes do their job and your row-level trigger can keep inserting rows into a separate audit/history table with structure identical to the main table (plus sequence generated version number).
Something like
create table mytable_hist as select 'U' dml_action, 1 version_no,a.* from mytable a where 1=2;
create or replace trigger mytrig
after insert or update or delete on mytable
for each row
declare
l_action varchar2(1);
begin
if inserting then l_action := 'I';
elsif updating then l_action := 'U';
elsif deleting then l_action := 'D';
end if;
insert into mytable_hist
values
(l_action,
version_no_seq.nextval,
nvl(:new.col1,:old.col1),
nvl(:new.col2,:old.col2),
nvl(:new.col3,:old.col3));
end;
/ -
Hello guys, I want to fill a block with multiple rows before inserting them. The row data comes from two other blocks in the form. I tried to use next_record and down, but they are all restricted. Here is an example of what I want to do:
I have block1, block2 and block3, and I want to fill block3 with the first row of each block before inserting the data into block3.
What should I do?
Thanks in advance, guys.
-
ODI Multi Target tables insert using single interface
Hi
I am facing an issue in ODI with multiple target tables insert.
I have a source table in Sybase ASE and want to insert the data in two different target tables which are in Sybase IQ.
Any suggestion would be appreciated !
Thanks in advance.
Best Regards
Sanchit Aggarwal
Have you tried using IKM Oracle Multi Table Insert?
-
Transposing column names to row for a multi row table
Hi
I have a table something like this that needs to be transposed:
FIELD_POPULATED   FIRST_NM   LAST_NM   MI_NM
A_NULL                   0         0       0
A_NOT_NULL             120       120     120
B_NULL                   0         0       0
B_NOT_NULL               0         0       0
The above table has to be transposed as:
COLUMN_NAME   A_NULL   A_NOT_NULL   B_NULL   B_NOT_NULL
FIRST_NM           0          120        0            0
LAST_NM            0          120        0            0
MI_NM              0          120        0            0
I am working on Oracle 11g. Any help is greatly appreciated.
Hi,
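Since the poster is on 11g, an UNPIVOT-then-PIVOT combination is one possible sketch. This is only illustrative, not tested against the real data: the table name MY_COUNTS and the column names are assumptions based on the sample above.

```sql
-- Turn the name columns into rows, then turn the FIELD_POPULATED
-- values into columns (MY_COUNTS is an assumed table name).
select *
from (
  select field_populated, column_name, cnt
  from   my_counts
  unpivot (cnt for column_name in (first_nm as 'FIRST_NM',
                                   last_nm  as 'LAST_NM',
                                   mi_nm    as 'MI_NM'))
)
pivot (max(cnt) for field_populated in ('A_NULL'     as a_null,
                                        'A_NOT_NULL' as a_not_null,
                                        'B_NULL'     as b_null,
                                        'B_NOT_NULL' as b_not_null))
order by column_name;
```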
See this thread:
Re: Help with PIVOT query (or advice on best way to do this) -
Multi Row Update set fields disabled
Hi
I have a multi-row table and the functionality I need is when the user changes a value in a drop down list it either enables or disables certain fields.
I have set the depends on value for my fields to the drop down list and checked the clear/refresh box. On each field I have tried
#{bindings.dropdown.inputValue=='codeA'} and also
#{row.dropdown=='codeA'}
Each of these works when the form is first loaded: with the first row selected, the correct fields are disabled. However, if I add a new row or change the dropdown, nothing happens.
Any ideas?
Thanks
Rich
Steven,
I have tried this. It simply adds in row.MyList. Here is the field that it generates:
<af:selectOneChoice id="MinimumEntryFtTerritoryCode" value="#{row.FtTerritoryCode}"
partialTriggers = "MinimumEntryMetCode" required="#{(bindings.MinimumEntryFtTerritoryCode.mandatory) and (!MinimumEntryCollectionModel.newRow)}" readOnly="#{!(data.StudentsPageDef.StudentsCustomerStatus.inputValue=='ACTIVE')}" disabled="#{row.MetCode!='PRE_ASS'}" valign="top">
<af:selectItem value="" label="" rendered="#{!((bindings.MinimumEntryFtTerritoryCode.mandatory) and (!MinimumEntryCollectionModel.newRow))}" />
<!-- DEBUG:BEGIN:DYNAMIC_DOMAIN_OPTIONS : default/item/dynamicDomainOptions.vm -->
<af:forEach var="row2" items="#{bindings.FndTerritoriesVOLookup.rangeSet}" >
<af:selectItem label="#{row2.TerritoryShortName}" value="#{row2.TerritoryCode}"/>
</af:forEach>
<!-- DEBUG:END:DYNAMIC_DOMAIN_OPTIONS : default/item/dynamicDomainOptions.vm-->
</af:selectOneChoice>
It all looks like it should work but when I change MetCode in the list, everything stays disabled.
Cheers
Rich -
Multi-row insert in master-detail tables
Hi, I'm using jdev 10.1.3.2 with jheadstart and my problem is:
I have a master-detail structure, both are tables, and my goal is a multi-row insert (exactly 3 rows) in the master and detail tables when the user makes a new order (a business scenario of ours). I cannot create rows in the master or detail VO by overriding the create() method, because their entities have composite primary keys and part of each key is populated by the user through an LOV. So in JHS I set new rows to 3 and checked "multi-row insert allowed", but the problem is that I can only create rows in the master table after I submit the form. I want to create a row in the master table, fill rows in the detail table, and after that have the opportunity to create a second (or even third) master row and fill its detail rows.
thanks for help.
Piotr
See the JHS Developer's Guide, section 3.2.1, Review Database Design:
If you are in the position to create or modify the database design, make sure all
tables have a non-updateable primary key, preferably consisting of only one
column. If you have updateable and/or composite primary keys, introduce a
surrogate primary key by adding an ID column that is automatically populated.
See section 3.2.4 Generating Primary Key Values for more info. Although ADF
Business Components can handle composite and updateable primary keys, this
will cause problems in ADF Faces pages. For example, an ADF Faces table
manages its rows using the key of the underlying row. If this key changes,
unexpected behavior can occur in your ADF Faces page. In addition, if you want
to provide a drop-down list on a lookup table to populate a foreign key, the
foreign key can only consist of one column, which in turn means the referenced
table must have a single primary key column.
Groeten,
HJH -
Issues in Table with Multi-Row Insert
I have created master-detail screens using JHeadstart on 2 separate pages: the master in the form layout and the detail in the table layout, with the multi-row insert, update and delete flags ON. I have set the New Rows count to 2.
Issue 1
If I try to delete any existing rows, it raises an error for the new rows saying a value is required for the mandatory fields. It should just ignore the new rows if I have not entered any values for any attributes in those rows (as it does for a non-master-detail table layout). I guess this might be happening because the JHeadstart code sets the foreign key for the new detail rows but does not reset the status of the rows back to INITIALIZED.
I also noticed that the create() of the underlying EO is called for those blank rows when I click the 'Save' button, even if I have not changed any data in them.
Issue 2
When I try to select the new rows also for deletion, I am getting a '500 Internal Server Error' with following stack trace... This is also happening for normal (non Master-Detail) Table layout.
java.lang.IllegalStateException: AdfFacesContext was already released or had never been attached. at oracle.adf.view.faces.context.AdfFacesContext.release(AdfFacesContext.java:342) at oracle.adfinternal.view.faces.webapp.AdfFacesFilterImpl.doFilter(AdfFacesFilterImpl.java:253) at oracle.adf.view.faces.webapp.AdfFacesFilter.doFilter(AdfFacesFilter.java:87)
Issue 3
I have put some validation code in the validate() method in the MyEntityImpl.java class.
The validate() method seems to be getting called many times (20 times in my case, when there are just 2 new rows).
Environment:
Jdeveloper 10.1.3, JHeadStart 10.1.3 build 78, Windows XP
thanks
Thanks for the reply.
Issue 1:
What I have observed is that in the case of multi-row-select-enabled tables, the blank rows do not have any data. This is because the EO's create() method is called only when we post the data using the 'Save' button, so the foreign keys are also not set up. This is correct behavior, since create() and FK setup should happen only if the user has entered a value in the new rows and thus intends to insert new data into the table.
I was able to find the exact cause of this issue. It is happening because the detail table has a column that needs to be shown as a checkbox. Since we can only bind a checkbox to a Boolean attribute in the VO, I have created a transient Boolean attribute, which basically calls the getter/setter of the actual attribute, doing the String "Y"/"N" to true/false conversion. Here is the code for the transient attribute's getter/setter:
public Boolean getDisplayOnWebBoolean() {
    return "Y".equals(getDisplayOnWeb()) ? Boolean.TRUE : Boolean.FALSE;
}
public void setDisplayOnWebBoolean(Boolean value) {
    if (Boolean.TRUE.equals(value)) {
        setDisplayOnWeb("Y");
    } else {
        setDisplayOnWeb("N");
    }
}
Now when I click the "Save" button, the setter for the Boolean field is called with value = false, and this results in the row being marked as dirty, so the validation for the required attributes runs and fails.
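That dirty-row behavior can be sidestepped by guarding the setter so it only writes when the value actually changes. The following is a standalone sketch, not the real ADF EntityImpl: the class, the dirty flag, and all names are illustrative stand-ins for the framework machinery.

```java
// Minimal standalone model of the problem: any attribute write marks the
// row dirty, so posting an untouched new row triggers validation.
class RowSketch {
    private String displayOnWeb = "N";   // underlying "Y"/"N" attribute
    private boolean dirty = false;       // stands in for the EO's row state

    public String getDisplayOnWeb() { return displayOnWeb; }

    private void setDisplayOnWeb(String v) {
        displayOnWeb = v;
        dirty = true;                    // every write dirties the row
    }

    public Boolean getDisplayOnWebBoolean() {
        return "Y".equals(getDisplayOnWeb());
    }

    // Guarded setter: skip the write when nothing actually changed,
    // so a blank new row posted with the default value stays clean.
    public void setDisplayOnWebBoolean(Boolean value) {
        String newVal = Boolean.TRUE.equals(value) ? "Y" : "N";
        if (!newVal.equals(getDisplayOnWeb())) {
            setDisplayOnWeb(newVal);
        }
    }

    public boolean isDirty() { return dirty; }

    public static void main(String[] args) {
        RowSketch row = new RowSketch();
        row.setDisplayOnWebBoolean(Boolean.FALSE); // same as default "N"
        System.out.println(row.isDirty());         // prints false
        row.setDisplayOnWebBoolean(Boolean.TRUE);  // real change
        System.out.println(row.isDirty());         // prints true
    }
}
```

The same guard applied inside the real setDisplayOnWebBoolean should stop the Save button from dirtying rows the user never touched.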
Issue 2:
Confirmed that correct filter-mapping entries are present in the web.xml.
Now when I select the new blank rows for deletion and click Save, the following exception is thrown:
java.lang.ClassCastException: oracle.jheadstart.controller.jsf.bean.NewTableRowBean at oracle.jheadstart.controller.jsf.bean.JhsCollectionModel.getRowsToRemove(JhsCollectionModel.java:412) at oracle.jheadstart.controller.jsf.bean.JhsCollectionModel.doModelUpdate(JhsCollectionModel.java:604) at oracle.jheadstart.controller.jsf.lifecycle.JhsPageLifecycle.processModelUpdaters(JhsPageLifecycle.java:541) at oracle.jheadstart.controller.jsf.lifecycle.JhsPageLifecycle.validateModelUpdates(JhsPageLifecycle.java:571)
thanks - rutwik -
ADF Editable Table : CANNOT Insert Multi Rows , Please hellppppssss
Hi All,
Our customer's requirement is to be able to insert multiple rows in an editable table and submit ALL of them with just one click.
Is it really possible?
I have tried: even though I can insert three empty rows and fill them all with values, when I press Submit only ONE row is submitted and the other TWO become BLANK again. Is my code wrong?
Or is this a limitation of the ADF editable table?
Here are my code samples. I have tried two ways, both with the same problem:
(1)
public String create_action() {
    BindingContainer bindings = getBindings();
    OperationBinding operationBinding =
        bindings.getOperationBinding("Create");
    Object result = operationBinding.execute();
    if (!operationBinding.getErrors().isEmpty()) {
        return null;
    }
    return null;
}
public String myCreate_action() {
    create_action();
    create_action();
    create_action();
    return null;
}
(2)
public String createMultiRows() {
    DCBindingContainer dcbc = (DCBindingContainer) getBindings();
    // get iterator binding
    DCIteratorBinding ib = (DCIteratorBinding) dcbc.get("DeptView1Iterator");
    // get view object
    ViewObject vo = ib.getViewObject();
    System.out.println(vo.getCurrentRowIndex());
    vo.clearCache();
    int currentRowIndex = vo.getRowCount() - 1;
    Row row1 = vo.createRow();
    row1.setNewRowState(Row.STATUS_NEW);
    vo.insertRowAtRangeIndex(++currentRowIndex, row1);
    Row row2 = vo.createRow();
    row2.setNewRowState(Row.STATUS_NEW);
    vo.insertRow(row2);
    Row row3 = vo.createRow();
    row3.setNewRowState(Row.STATUS_NEW);
    vo.insertRow(row3);
    return null;
}
Please help... I am stuck ...
Thank you very much,
xtanto
Xtanto,
Could you try changing
bindings.getOperationBinding("Create");
to
bindings.getOperationBinding("CreateInsert");
in your first example to see if it fixes your problem?
Regards,
John -
Multi-row update with an insert into another table [I think]
Folks,
I'm trying to build a multi-row region with a checkbox against each row. When a user checks a box, I want to insert a corresponding row into another table. I've tried to simplify the question and distill the problem down using the EMP table.
So, here goes:
I have table EMPS, which looks like this:
EMPNO Number
FIRSTNAME Varchar2(9)
LASTNAME Varchar2(10)
HIREDATE Date
I have table EMP_CANDIDATES, which looks like this:
EMPNO Number
SELECTION_DATE Date
NOTES Varchar2(53)
- I want to create a multi-row region based on EMPS, with a checkbox on each employee row.
- The user should be able to select any number of employees in the region and then press submit.
- on-submit there needs to be a process, which inserts a record into EMP_CANDIDATES for each checked employee.
I've tried pre-populating a collection with the EMPS records and using apex_item.checkbox to produce a checkbox, using this code:
=============
if apex_collection.collection_exists(p_collection_name=>'EMPS') then
apex_collection.delete_collection(p_collection_name=>'EMPS');
end if;
apex_collection.create_collection_from_query(
p_collection_name=>'EMPS',
p_query=>'select
p.empno,
p.hiredate,
p.firstname,
p.lastname,
null selection
from emps p');
=========
I can create a report region on this using the following SQL:
select c001 empno
, c002 hiredate
, c003 firstname
, c004 lastname
,apex_item.checkbox(1,c005) selection
from apex_collections
where collection_name ='EMPS'
======
So how do I now get an MRU that will insert a row into EMP_CANDIDATES for each checked row in my region? Or have I gone about this the wrong way?
TFH
Derek
Hi Derek,
Firstly, your checkbox should be on the c001 field as this is the one that contains your empno.
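A hedged version of the report query with the checkbox moved onto c001 (the collection column that holds EMPNO in the original query) might look like this:

```sql
-- Checkbox now carries the EMPNO, so G_F01 returns employee
-- numbers on submit (column names as in the original collection).
select apex_item.checkbox(1, c001) selection
     , c001 empno
     , c002 hiredate
     , c003 firstname
     , c004 lastname
from   apex_collections
where  collection_name = 'EMPS'
```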
Then, you need a page process that can be triggered by a button. The process should be set to run "On submit (After computations and validations)" and the PL/SQL code would be something like:
DECLARE
v_empno NUMBER;
BEGIN
IF HTMLDB_APPLICATION.G_F01.COUNT = 0 THEN
raise_application_error(-20001, 'Please select at least one employee!');
END IF;
FOR i IN 1.. HTMLDB_APPLICATION.G_F01.COUNT LOOP
v_empno := TO_NUMBER(HTMLDB_APPLICATION.G_F01(i));
INSERT INTO EMP_CANDIDATES VALUES (v_empno, whateverdate, whatevernotes);
END LOOP;
END;
This will first count the items that have been ticked; if there aren't any, the user gets an error message. If at least one item is ticked, the code loops through them, gets the empno relating to each ticked box, and inserts a record into the EMP_CANDIDATES table. Please note that no account is taken here of any validation on this second table. If you need to ensure, for example, uniqueness of records in this table, you will have to extend the above to perform that validation.
Regards
Andy -
Any general tips on getting better performance out of multi table insert?
I have been struggling with coding a multi-table insert, which is the first one I have ever written, and my Oracle skills are pretty poor in general. Now that the query is built and works, I am sad to see it is quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario, to see if you agree that my performance is slow...
It's an INSERT ALL command, which conditionally inserts into 5 separate tables (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I have done myself. Why 4? I don't know; it's pretty arbitrary, to be honest.
Indexes?
Table 1 has 3 index + PK.
Table 2 has 0 index + FK + PK.
Table 3 has 4 index + FK + PK
Table 4 has 3 index + FK + PK
Table 5 has 4 index + FK + PK
None of the indexes are anything crazy, maybe 3 or 4 of all of them are on multiple columns, 2-3 max. The rest are on single columns.
The query itself looks something like this:
insert /*+ append */ all
when 1=1 then
into table1 (...) values (...)
into table2 (...) values (...)
when a=b then
into table3 (...) values (...)
when a=c then
into table3 (...) values (...)
when p=q then
into table4(...) values (...)
when x=y then
into table5(...) values (...)
select .... from source_table
Hints I tried are append, no append, and parallel (though adding parallel seemed to make the query run in serial, according to my session browser).
Now for the performance:
It does about 8,000 rows per minute on table1. So that means it should also have that much in table2, table3 and table4, and then a subset of that in table5.
Does that seem normal or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
If it seems my performance is slow, what else do you think I should try? Is there any information I may try to get to see if maybe its a poorly configured database for this?
P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ORA-25402: transaction must roll back" after it had been running for 3.5 hours. So I lost all the progress it made and have to start over. Plus, I wonder if the sheer amount of data queued for commit/rollback is causing some of the problem?
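One hedged way to get periodic commits is to drive the INSERT ALL in key-range batches from PL/SQL. This is a sketch only, not something from the original post: SRC_ID, COL1, the batch size, and the table names are illustrative assumptions.

```sql
-- Batch the multi-table insert by a numeric key on the source so each
-- batch can commit; a single SQL statement can never commit part-way.
declare
  c_batch constant pls_integer := 500000;  -- rows per batch (illustrative)
  v_lo    number;
  v_hi    number;
begin
  select min(src_id), max(src_id) into v_lo, v_hi from source_table;
  while v_lo <= v_hi loop
    insert /*+ append */ all
      when 1 = 1 then into table1 (src_id, col1) values (src_id, col1)
      when a = b  then into table3 (src_id, col1) values (src_id, col1)
    select src_id, col1, a, b
    from   source_table
    where  src_id between v_lo and v_lo + c_batch - 1;
    -- direct-path (append) inserts require a commit before the next
    -- statement can touch the same target tables
    commit;
    v_lo := v_lo + c_batch;
  end loop;
end;
/
```

Each committed batch survives a mid-run failure, at the cost of the load no longer being atomic; to restart cleanly you would need to record the last committed key range.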
Edited by: trant on Jun 27, 2011 9:29 PM
Looks like there are about 54 sessions on my database; 7 of them belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows. If I filter to the SIDs of my current sessions and order by micro_wait_time desc:
510 events in waitclass Other 30670 9161 329759 10.75 196 3297590639 1736664284 1893977003 0 Other
512 events in waitclass Other 32428 10920 329728 10.17 196 3297276553 1736664284 1893977003 0 Other
243 events in waitclass Other 21513 5 329594 15.32 196 3295935977 1736664284 1893977003 0 Other
223 events in waitclass Other 21570 52 329590 15.28 196 3295898897 1736664284 1893977003 0 Other
241 row cache lock 1273669 0 42137 0.03 267 421374408 1714089451 3875070507 4 Concurrency
241 events in waitclass Other 614793 0 34266 0.06 12 342660764 1736664284 1893977003 0 Other
241 db file sequential read 13323 0 3948 0.3 13 39475015 2652584166 1740759767 8 User I/O
241 SQL*Net message from client 7 0 1608 229.65 1566 16075283 1421975091 2723168908 6 Idle
241 log file switch completion 83 0 459 5.54 73 4594763 3834950329 3290255840 2 Configuration
241 gc current grant 2-way 5023 0 159 0.03 0 1591377 2685450749 3871361733 11 Cluster
241 os thread startup 4 0 55 13.82 26 552895 86156091 3875070507 4 Concurrency
241 enq: HW - contention 574 0 38 0.07 0 378395 1645217925 3290255840 2 Configuration
512 PX Deq: Execution Msg 3 0 28 9.45 28 283374 98582416 2723168908 6 Idle
243 PX Deq: Execution Msg 3 0 27 9.1 27 272983 98582416 2723168908 6 Idle
223 PX Deq: Execution Msg 3 0 25 8.26 24 247673 98582416 2723168908 6 Idle
510 PX Deq: Execution Msg 3 0 24 7.86 23 235777 98582416 2723168908 6 Idle
243 PX Deq Credit: need buffer 1 0 17 17.2 17 171964 2267953574 2723168908 6 Idle
223 PX Deq Credit: need buffer 1 0 16 15.92 16 159230 2267953574 2723168908 6 Idle
512 PX Deq Credit: need buffer 1 0 16 15.84 16 158420 2267953574 2723168908 6 Idle
510 direct path read 360 0 15 0.04 4 153411 3926164927 1740759767 8 User I/O
243 direct path read 352 0 13 0.04 6 134188 3926164927 1740759767 8 User I/O
223 direct path read 359 0 13 0.04 5 129859 3926164927 1740759767 8 User I/O
241 PX Deq: Execute Reply 6 0 13 2.12 10 127246 2599037852 2723168908 6 Idle
510 PX Deq Credit: need buffer 1 0 12 12.28 12 122777 2267953574 2723168908 6 Idle
512 direct path read 351 0 12 0.03 5 121579 3926164927 1740759767 8 User I/O
241 PX Deq: Parse Reply 7 0 9 1.28 6 89348 4255662421 2723168908 6 Idle
241 SQL*Net break/reset to client 2 0 6 2.91 6 58253 1963888671 4217450380 1 Application
241 log file sync 1 0 5 5.14 5 51417 1328744198 3386400367 5 Commit
510 cursor: pin S wait on X 3 2 2 0.83 1 24922 1729366244 3875070507 4 Concurrency
512 cursor: pin S wait on X 2 2 2 1.07 1 21407 1729366244 3875070507 4 Concurrency
243 cursor: pin S wait on X 2 2 2 1.06 1 21251 1729366244 3875070507 4 Concurrency
241 library cache lock 29 0 1 0.05 0 13228 916468430 3875070507 4 Concurrency
241 PX Deq: Join ACK 4 0 0 0.07 0 2789 4205438796 2723168908 6 Idle
241 SQL*Net more data from client 6 0 0 0.04 0 2474 3530226808 2000153315 7 Network
241 gc current block 2-way 5 0 0 0.04 0 2090 111015833 3871361733 11 Cluster
241 enq: KO - fast object checkpoint 4 0 0 0.04 0 1735 4205197519 4217450380 1 Application
241 gc current grant busy 4 0 0 0.03 0 1337 2277737081 3871361733 11 Cluster
241 gc cr block 2-way 1 0 0 0.06 0 586 737661873 3871361733 11 Cluster
223 db file sequential read 1 0 0 0.05 0 461 2652584166 1740759767 8 User I/O
223 gc current block 2-way 1 0 0 0.05 0 452 111015833 3871361733 11 Cluster
241 latch: row cache objects 2 0 0 0.02 0 434 1117386924 3875070507 4 Concurrency
241 enq: TM - contention 1 0 0 0.04 0 379 668627480 4217450380 1 Application
512 PX Deq: Msg Fragment 4 0 0 0.01 0 269 77145095 2723168908 6 Idle
241 latch: library cache 3 0 0 0.01 0 243 589947255 3875070507 4 Concurrency
510 PX Deq: Msg Fragment 3 0 0 0.01 0 215 77145095 2723168908 6 Idle
223 PX Deq: Msg Fragment 4 0 0 0 0 145 77145095 2723168908 6 Idle
241 buffer busy waits 1 0 0 0.01 0 142 2161531084 3875070507 4 Concurrency
243 PX Deq: Msg Fragment 2 0 0 0 0 84 77145095 2723168908 6 Idle
241 latch: cache buffers chains 4 0 0 0 0 73 2779959231 3875070507 4 Concurrency
241 SQL*Net message to client 7 0 0 0 0 51 2067390145 2000153315 7 Network
(yikes, is there a way to wrap that in the equivalent of other forums' code tags?)
v$session_wait;
223 835 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 10 WAITING
241 22819 row cache lock cache id 13 000000000000000D mode 0 00 request 5 0000000000000005 3875070507 4 Concurrency -1 0 WAITED SHORT TIME
243 747 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 7 WAITING
510 10729 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 2 WAITING
512 12718 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 4 WAITING
v$sess_io:
223 0 5779 5741 0 0
241 38773810 2544298 15107 27274891 0
243 0 5702 5688 0 0
510 0 5729 5724 0 0
512 0 5682 5678 0 0 -
VLD-1119: Unable to generate Multi-table Insert statement for some or all targets
Hi All -
I have a map in OWB 10.2.0.4 which is failing with the following error:
VLD-1119: Unable to generate Multi-table Insert statement for some or all targets.
A multi-table insert statement cannot be generated for some or all of the targets, because the upstream graphs of those targets are not identical on "active operators" such as "join".
The map is created with following logic in mind. Let me know if you need more info. Any directions are highly appreciated and many thanks for your inputs in advance: -
I have two source tables, say T1 and T2. They are full-outer-joined in a joiner, and the output of this join is passed to an expression to evaluate column values based on
business logic, i.e. if T1 is available then take T1.C1, else take T2.C1, and so on.
A flag is also evaluated in the expression, because these intermediate results need to be joined to a third source table, say T3, with a different condition.
Based on the value taken, a flag is set in the expression, which is used in a splitter to route results into three intermediate tables based on the flag value evaluated earlier.
These three intermediate tables are all truncate/insert, and they are unioned to fill a final target table.
Visually it is something like this: -
T1 -- T3 -- JOINER1
| -->Join1 (FULL OUTER) --> Expression -->SPLITTER -- JOINER2 UNION --> Target Table
| JOINER3
T2 --
Please suggest.
I verified that there is a limitation with the splitter operator: it will not let you generate a multi-split having more than 999 columns in all.
I had to use two separate splitters to achieve what I was trying to do.
So the situation is now:
Source -> Split -> Split 1 -> Insert into table -> Union1 -> Final table
Source -> Split -> Split 2 -> Insert into table -> Union1 -
Is there a way to determine which rows are inserted in JDT1 table by the system automatically?
Hi,
Is there a way to determine which rows are inserted into the JDT1 table by the system automatically? For example, many G/L accounts mapped in G/L Account Determination, like Foreign Exchange Gain/Loss, Rounding Account, etc., are inserted by the system automatically into the journal entry in various scenarios, e.g. during an incoming payment.
Which SQL query can give me those rows? Basically the WHERE condition should be based on which column or multiple columns in JDT1 table?
Thanks.
Hi Rajesh,
I'm not entirely sure, but I think the TransId is the same as in the header document the transaction belongs to:
OINV.TransId = JDT1.TransId
Best regards,
Pedro Magueija -
Hello boys,
I would like to do an insert into 3 tables:
insert into t1
if found insert into t2
if not found insert into t3.
So 2 out of 3 tables will end up with data.
I would like to do it in 1 multi table insert.
I can't use selects from the destination tables because they are in the terabytes. I tried a number of combinations of primary keys and error-logging tables ("log errors reject limit unlimited"), but without any success.
Any ideas?
Hello Exor,
You said,
insert into t1
if found insert into t2
if not found insert into t3.
What is the "found" condition? What are you checking? Have you looked into the INSERT ALL clause, if that is what you need? Remember you can have a complex join in your SELECT and insert into 3 or 4 tables based on your "found" condition. This might be the fastest way to move data into 3 tables, instead of using a cursor or bulk collect.
INSERT ALL
WHEN order_total < 1000000 THEN
INTO small_orders
WHEN order_total > 1000000 AND order_total < 2000000 THEN
INTO medium_orders
WHEN order_total > 2000000 THEN
INTO large_orders
SELECT order_id, order_total, sales_rep_id, customer_id
FROM orders;
Regards