Join tables and replicate (multiple rows)
Hi,
I have two tables, TAB1 (A, B, C) and TAB2 (C, D).
I want to join the tables and replicate them such that the resultant table TAB3 has columns A, B, D:
select x.a, x.b, y.d from tab1 x join tab2 y on x.c=y.c where y.d=1;
The problem here is that the above command returns multiple rows, and I need all of them to be in the target table. When I use SQLEXEC with a QUERY in the mapping, only the first search result is entered into TAB3. I then found out that SQLEXEC will return only one value, but I need all the search results to be replicated.
What alternatives are there, and what does GG recommend in such a use case?
Any helpful advice is much appreciated.
thanks,
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
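For what it's worth, the multi-row behaviour of the join is easy to see outside GoldenGate. Below is a small sketch in Python with SQLite (table and column names taken from the question; the data is invented) showing that a set-based INSERT ... SELECT captures every row the join produces, which a one-row-at-a-time SQLEXEC lookup cannot do:

```python
import sqlite3

# In-memory sketch of the TAB1/TAB2/TAB3 layout from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab1 (a INTEGER, b TEXT, c INTEGER);
    CREATE TABLE tab2 (c INTEGER, d INTEGER);
    CREATE TABLE tab3 (a INTEGER, b TEXT, d INTEGER);
    INSERT INTO tab1 VALUES (1, 'x', 10), (2, 'y', 10), (3, 'z', 20);
    INSERT INTO tab2 VALUES (10, 1), (20, 2);
""")

# A row-at-a-time lookup would keep only the first hit; a set-based
# INSERT ... SELECT keeps every row the join produces.
conn.execute("""
    INSERT INTO tab3 (a, b, d)
    SELECT x.a, x.b, y.d
    FROM tab1 x JOIN tab2 y ON x.c = y.c
    WHERE y.d = 1
""")
rows = conn.execute("SELECT a, b, d FROM tab3 ORDER BY a").fetchall()
print(rows)  # both TAB1 rows with c=10 land in TAB3
```

This only illustrates the set semantics (e.g. for an initial load); keeping TAB3 in sync continuously still needs a replication-side mechanism.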
The problem with using an SP to check every row before insert/update is that there will be around 15 tables, not 2, and there will be lakhs of rows in each table; an SP that checks every insert will cause a lot of overhead and lag. I cannot afford that.
Moreover, there will be around 60 values of D, and for each value of D there will be lakhs of rows associated in TAB1.
I need to set up different extract and replicat processes for each set of rows associated with each value of D.
For example, say the values of column D are 1-30:
for D=1 there are 1.5 lakh rows in TAB1, with EXT1/REP1 for this set of rows;
for D=2, 2 lakh rows, with EXT2/REP2;
for D=3, 1 lakh rows, with EXT3/REP3; and so on.
This is to ensure that if I want to stop replicating the data associated with D=1 (which I need to do), I can stop only REP1 and keep the rest of the data (D=2-30) in sync.
As far as materialized views are concerned: I need to do this with GoldenGate only.
Thanks
Similar Messages
-
Adding and Deleting Multiple Rows or Columns
How do you add or delete more than one row or column at a time?
Robby! wrote:
That's a great finding!
I just wrote a feedback requesting a shortcut like this.
It is a pity, though, that they designed it to work only if you select the header of the row/column. It should be enabled to work from within any cell in the table.
Who wrote such an error ?
These interesting shortcuts behave flawlessly even if the cursor was in D18 for instance.
(a) I never saw them in the delivered resources.
(b) they aren't responding to the OP's question which was about "Adding and Deleting Multiple Rows and Columns"
Yvan KOENIG (from FRANCE, Friday 3 October 2008 18:39:01) -
Join two source tables and replicat into a target table with BLOB
Hi,
I am working on an integration to source transaction data from legacy application to ESB using GG.
What I need to do is join two source tables (to de-normalize the area_id) to form the transaction detail, then transform it by concatenating the transaction detail fields into a values-only CSV, and replicate that into the target ESB IN_DATA table's BLOB content field.
Based on what I have researched, a lookup that joins two source tables requires SQLEXEC, which doesn't support BLOB.
What alternatives are there and what GG recommend in such use case?
Any helpful advice is much appreciated.
thanks,
Xiaocun
Xiaocun,
Not sure what your data looks like, but it's possible that the comma-separated value (CSV) requirement may be solved by something like this in your MAP statement:
colmap (usedefaults,
my_blob = @STRCAT (col02, ",", col03, ",", col04));
Since this is not 1:1, you'll be using a sourcedefs file, which is nice because it does the datatype conversion for you under the covers (also a nice trick when migrating LONG RAWs to BLOBs). So col02 can be a VARCHAR2, col03 a NUMBER, and col04 a CLOB, and they'll be converted in real time.
Mapping two tables to one is simple enough with two MAP statements; the harder challenge is joining operations from separate transactions, because OGG is operation-based and doesn't work on aggregates. You could end up using a combination of built-in parameters and functions with SQLEXEC and SQL/PL/SQL for more complicated scenarios, all depending on the design of the target table. But you have several scenarios to address.
For example, is the target table really a history table, or are you actually going to delete from it? If just the child is deleted but you don't want to delete the whole row yet, you may want to use NOCOMPRESSDELETES and UPDATEDELETES and COLMAP a new flag column to denote that it was deleted. It's likely that an insert on the child may really mean an update to the target (see UPDATEINSERTS).
If you need to update the LOB by appending or prepending new data then that's going to require some custom work, staging tables and a looping script, or a user exit.
Some parameters you may want to become familiar with if not already:
COLS | COLSEXCEPT
COLMAP
OVERRIDEDUPS
INSERTDELETES
INSERTMISSINGUPDATES
INSERTUPDATES
GETDELETES | IGNOREDELETES
GETINSERTS | IGNOREINSERTS
GETUPDATES | IGNOREUPDATES
Good luck,
-joe -
SQL query for join table and multiple values
Trying to join two tables, EmpHours and EmpStatus, to get a result that gives each employee's hours worked each day over, say, the past year, together with the status at the time. I need a result similar to Table 3; hours can also be grouped per week.
All I need is each employee's hours in each week, plus his status and position at that time, if possible.
Any help will be highly appreciated. Thank you.
Note: payday is every other Friday; the week runs from Saturday through Friday.
The EmpStatus table tracks when an employee's status changed.
EmpHours
employee  workday    payday     hours  position
101       1/1/2014   1/3/2014   8      assistant
101       1/3/2014   1/3/2014   8      assistant
101       1/4/2014   1/17/2014  8      assistant
101       1/5/2014   1/17/2014  8      assistant
101       1/7/2014   1/17/2014  8      assistant
101       1/8/2014   1/17/2014  8      assistant
101       1/9/2014   1/17/2014  8      assistant
101       1/11/2014  1/17/2014  8      assistant
101       1/13/2014  1/17/2014  8      assistant
101       1/14/2014  1/17/2014  8      assistant
101       1/18/2014  2/14/2014  8      assistant
102       1/1/2014   1/3/2014   7      manager
102       1/25/2014  1/31/2014  7      manager
102       1/26/2014  1/31/2014  7      manager
102       1/28/2014  1/31/2014  7      manager
102       1/31/2014  1/31/2014  7      manager
103       1/1/2014   1/3/2014   5      intern
103       1/31/2014  1/31/2014  6      intern
104       1/14/2014  1/17/2014  5      supervisor
104       1/30/2014  1/31/2014  6      supervisor
EmpStatus
employee  start_date  status
101       1/1/2014    parttime
101       1/18/2014   fulltime
102       1/1/2014    seasonal
102       1/18/2014   fulltime
103       1/1/2014    partime
103       1/18/2014   fulltime
104       1/4/2014    parttime
104       1/18/2014   fulltime
Table 3
employee  status    hours  position    workday    weekend    payday
101       parttime  8      assistant   1/1/2014   1/3/2014   1/3/2014
101       parttime  8      assistant   1/3/2014   1/3/2014   1/3/2014
101       parttime  8      assistant   1/4/2014   1/10/2014  1/17/2014
101       parttime  8      assistant   1/5/2014   1/10/2014  1/17/2014
101       parttime  8      assistant   1/7/2014   1/10/2014  1/17/2014
101       parttime  8      assistant   1/8/2014   1/10/2014  1/17/2014
101       parttime  8      assistant   1/9/2014   1/10/2014  1/17/2014
101       parttime  8      assistant   1/11/2014  1/17/2014  1/17/2014
101       parttime  8      assistant   1/13/2014  1/17/2014  1/17/2014
101       parttime  8      assistant   1/14/2014  1/17/2014  1/17/2014
101       fulltime  8      assistant   1/18/2014  1/24/2014  2/14/2014
102       seasonal  7      manager     1/1/2014   1/3/2014   1/3/2014
102       fulltime  7      manager     1/25/2014  1/25/2014  2/14/2014
102       fulltime  7      manager     1/26/2014  1/26/2014  2/14/2014
102       fulltime  7      manager     1/28/2014  1/28/2014  2/14/2014
102       fulltime  7      manager     1/31/2014  1/31/2014  2/14/2014
103       parttime  5      intern      1/1/2014   1/3/2014   1/3/2014
103       fulltime  6      intern      1/31/2014  1/31/2014  2/14/2014
104       parttime  5      supervisor  1/14/2014  1/17/2014  1/17/2014
104       fulltime  6      supervisor  1/30/2014  1/31/2014  1/31/2014
Hello David,
Try this query
set dateformat mdy;
declare @EmpHours table
(Employee int,workday date,payday date,hours int,position varchar(50));
insert into @EmpHours values
(101,'1/1/2014','1/3/2014',8,'assistant'),
(101,'1/3/2014','1/3/2014',8,'assistant'),
(101,'1/4/2014','1/17/2014',8,'assistant'),
(101,'1/5/2014','1/17/2014',8,'assistant'),
(101,'1/7/2014','1/17/2014',8,'assistant'),
(101,'1/8/2014','1/17/2014',8,'assistant'),
(101,'1/9/2014','1/17/2014',8,'assistant'),
(101,'1/11/2014','1/17/2014',8,'assistant'),
(101,'1/13/2014','1/17/2014',8,'assistant'),
(101,'1/14/2014','1/17/2014',8,'assistant'),
(101,'1/18/2014','2/14/2014',8,'assistant'),
(102,'1/1/2014','1/3/2014',7,'manager'),
(102,'1/25/2014','1/31/2014',7,'manager'),
(102,'1/26/2014','1/31/2014',7,'manager'),
(102,'1/28/2014','1/31/2014',7,'manager'),
(102,'1/31/2014','1/31/2014',7,'manager'),
(103,'1/1/2014','1/3/2014',5,'intern'),
(103,'1/31/2014','1/31/2014',6,'intern'),
(104,'1/14/2014','1/17/2014',5,'supervisor'),
(104,'1/30/2014','1/31/2014',6,'supervisor');
--select * from @EmpHours
declare @EmpStatus table
(employee int,start_date date,status varchar(20));
insert into @EmpStatus values
(101,'1/1/2014','parttime'),
(101,'1/18/2014','fulltime'),
(102,'1/1/2014','seasonal'),
(102,'1/18/2014','fulltime'),
(103,'1/1/2014','partime'),
(103,'1/18/2014','fulltime'),
(104,'1/4/2014','parttime'),
(104,'1/18/2014','fulltime');
WITH C AS (
SELECT es.employee, es.start_date, es.status, ROW_NUMBER() OVER(PARTITION BY employee ORDER BY start_date) AS rownum
FROM @EmpStatus ES
),
CTE_RANGES AS (
SELECT cur.employee, cur.start_date start_range, cur.status, case when nxt.start_date is null then '2099-12-31' else dateadd(d, -1, nxt.start_date) end AS end_range
FROM C AS Cur
LEFT JOIN C AS Nxt
ON Nxt.rownum = Cur.rownum + 1 and cur.employee = nxt.employee
)
select eh.*, es.status from @EmpHours EH join CTE_RANGES ES on EH.Employee = es.employee and EH.workday between es.start_range and es.end_range
--where es.employee=101
You will need a calendar table too, which can be joined to the output of the above query to get the weekend dates.
You can find the T-SQL code to generate the calendar here:
http://stackoverflow.com/questions/19191577/t-sql-function-to-generate-calendar-table
Also, posting questions with the necessary DDL and DML (like I have posted) would help us a lot.
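As a sanity check of the same ranges technique outside SQL Server, here is a minimal sketch in Python using SQLite (which has had window functions since 3.25). It uses an invented two-row cut-down of the data, ISO dates so string comparison orders correctly, and a half-open range (>= start, < next start) instead of DATEADD(d, -1, ...), but the idea is identical:

```python
import sqlite3

# Minimal cut-down of the EmpStatus/EmpHours data from the question.
conn = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
conn.executescript("""
    CREATE TABLE emp_status (employee INT, start_date TEXT, status TEXT);
    INSERT INTO emp_status VALUES
        (101, '2014-01-01', 'parttime'),
        (101, '2014-01-18', 'fulltime');
    CREATE TABLE emp_hours (employee INT, workday TEXT, hours INT);
    INSERT INTO emp_hours VALUES
        (101, '2014-01-14', 8),
        (101, '2014-01-18', 8);
""")

# Number status rows per employee, pair each with its successor to get a
# half-open [start, next_start) range, then join hours rows into ranges.
rows = conn.execute("""
    WITH c AS (
        SELECT employee, start_date, status,
               ROW_NUMBER() OVER (PARTITION BY employee
                                  ORDER BY start_date) AS rn
        FROM emp_status
    ),
    ranges AS (
        SELECT cur.employee, cur.status,
               cur.start_date AS start_range,
               COALESCE(nxt.start_date, '2099-12-31') AS end_range
        FROM c AS cur
        LEFT JOIN c AS nxt
          ON nxt.employee = cur.employee AND nxt.rn = cur.rn + 1
    )
    SELECT h.employee, h.workday, h.hours, r.status
    FROM emp_hours AS h
    JOIN ranges AS r
      ON h.employee = r.employee
     AND h.workday >= r.start_range
     AND h.workday <  r.end_range
    ORDER BY h.workday
""").fetchall()
print(rows)
```

The 1/14 row falls in the parttime range and the 1/18 row in the fulltime range, which is exactly the pairing Table 3 shows.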
Satheesh
My Blog -
Updating and deleting multiple rows
Hi!
I have a form on a table with report page where I can filter data through columns.
1. Is it possible to create a button that will delete all the filtered data?
2. Also, I would like to be able to update any column of the filtered dataset to a certain value (that needs to be input somehow). Is that possible?
So far I can only update or delete one row at a time, which isn't useful as I sometimes need to change 100 rows at a time with the same value or delete them.
When I use tabular form, I can't filter rows, but I can delete multiple rows...
Also if there are similar examples, could you please send me a link; I can't seem to find any.
I'm using Apex 4.2.2.
Best Regards,
Ivan
Deleting multiple rows - [url https://forums.oracle.com/forums/thread.jspa?threadID=2159983]common question
Best answered with Martin's example
http://www.talkapex.com/2009/01/apex-report-with-checkboxes-advanced.html
Depends how you filter your data. You could identify all your limiting variables, and reverse the where clause in a delete process.
As for point 2, you can define a button that redirects to the page, and you can define all the item values you like using the Action link.
There is likely an example in the supplied package applications.
Scott -
Display only one row for distinct columns, with multiple rows for their values
Hi,
I have a table having some rows that are identical in some columns and different in other columns, i.e.:
o_mobile_no o_doc_date o_status d_mobile_no d_doc_date d_status
9825000111 01-jan-06 'a' 980515464 01-feb-06 c
9825000111 01-jan-06 'a' 991543154 02-feb-06 d
9825000111 01-jan-06 'a' 154845545 10-mar-06 a
What I want is to display only one row for the distinct columns above, along with the multiple non-distinct columns, i.e.:
o_mobile_no o_doc_date o_status d_mobile_no d_doc_date d_status
9825000111 01-jan-06 'a' 980515464 01-feb-06 c
991543154 02-feb-06 d
154845545 10-mar-06 a
regards,
Kumar
Re: SQL Help
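No worked answer survives in the thread, so here is one possible approach sketched in Python (data from the question): keep the first row of each group intact and blank the repeated columns on the rest, the same effect as SQL*Plus's BREAK ON or a CASE WHEN ROW_NUMBER() = 1 expression in a query. Note this is display formatting, not a relational result:

```python
from itertools import groupby

# Rows from the question, already sorted by the repeating columns.
rows = [
    ('9825000111', '01-jan-06', 'a', '980515464', '01-feb-06', 'c'),
    ('9825000111', '01-jan-06', 'a', '991543154', '02-feb-06', 'd'),
    ('9825000111', '01-jan-06', 'a', '154845545', '10-mar-06', 'a'),
]

# Blank the first three columns on every row after the first in each group.
out = []
for _, grp in groupby(rows, key=lambda r: r[:3]):
    for i, r in enumerate(grp):
        head = r[:3] if i == 0 else ('', '', '')
        out.append(head + r[3:])

for r in out:
    print(r)
```

In Oracle SQL the equivalent would wrap each repeating column in a CASE on ROW_NUMBER() partitioned by o_mobile_no, o_doc_date, o_status.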
-
When primary table is also join table and you have NOT NULL constraints
Hi,
Me again. This is similar to the message titled "Problem with an optional 1 to 1 relationship modelled using a link table". What's different about this case is that we are dealing with a one-to-many relationship.
Given this SQL:
create table person (
pid INTEGER(10) NOT NULL,
language_code VARCHAR(3) NOT NULL
);
create table language_person (
pid INTEGER(10) NOT NULL REFERENCES person(pid),
language_code VARCHAR(3) NOT NULL,
first_name VARCHAR(20) NOT NULL
);
I wrote these classes (abbreviated)
Person:
* @jdo:persist
* @jdo:identity-type application
* @jdo:objectid-class PersonId
* @jdo:requires-extent false
* @jdo:extension vendor-name="kodo" key="table"
* value="PERSON"
* @jdo:extension vendor-name="kodo" key="lock-column"
* value="none"
* @jdo:extension vendor-name="kodo" key="class-column"
* value="none"
public class Person {
* @jdo:primary-key true
* @jdo:extension vendor-name="kodo" key="data-column"
* value="PID"
private int pid;
* @jdo:extension vendor-name="kodo" key="data-column"
* value="LANGUAGE_CODE"
private String languageCode;
* @jdo:collection element-type="LanguagePerson"
* @jdo:extension vendor-name="kodo" key="pid-data-column"
* value="PID"
* @jdo:extension vendor-name="kodo" key="table"
* value="LANGUAGE_PERSON"
* @jdo:extension vendor-name="kodo" key="pid-ref-column"
* value="PID"
* @jdo:extension vendor-name="kodo"
key="languageCode-data-column"
* value="LANGUAGE_CODE"
* @jdo:extension vendor-name="kodo"
key="languageCode-ref-column"
* value="LANGUAGE_CODE"
private Set languagePersons = new HashSet();
public Person(int pid, String languageCode) {
this.pid = pid;
this.languageCode = languageCode;
}
public void addLanguagePerson(LanguagePerson languagePerson) {
languagePersons.add(languagePerson);
}
public Set getLanguagePersons() {
return languagePersons;
}
}
LANGUAGE_PERSON
* @jdo:persist
* @jdo:identity-type application
* @jdo:objectid-class LanguagePersonId
* @jdo:requires-extent false
* @jdo:extension vendor-name="kodo" key="table"
* value="LANGUAGE_PERSON"
* @jdo:extension vendor-name="kodo" key="lock-column"
* value="none"
* @jdo:extension vendor-name="kodo" key="class-column"
* value="none"
public class LanguagePerson {
* @jdo:primary-key true
* @jdo:extension vendor-name="kodo" key="data-column"
* value="PID"
private int pid;
* @jdo:primary-key true
* @jdo:extension vendor-name="kodo" key="data-column"
* value="LANGUAGE_CODE"
private String languageCode;
* @jdo:extension vendor-name="kodo" key="data-column"
* value="FIRST_NAME"
private String firstName;
public LanguagePerson(int pid, String languageCode, String firstName) {
this.pid = pid;
this.languageCode = languageCode;
this.firstName = firstName;
}
}
And then I do this:
PersistenceManager pm = JDOFactory.getPersistenceManager();
pm.currentTransaction().begin();
final Person person = new Person(1,"EN");
final LanguagePerson languagePerson = new
LanguagePerson(1,"EN","Mike");
person.addLanguagePerson(languagePerson);
pm.makePersistent(person);
pm.currentTransaction().commit();
The SQL that issues forth is this:
1125 [main] INFO jdbc.SQL - [ C:6588476; T:6166426; D:2891371 ]
preparing statement <17089909>: INSERT INTO PERSON(LANGUAGE_CODE, PID)
VALUES (?, ?)
1125 [main] INFO jdbc.SQL - [ C:6588476; T:6166426; D:2891371 ]
executing statement <17089909>: [reused=1;params={(String)EN,(int)1}]
1125 [main] INFO jdbc.SQL - [ C:6588476; T:6166426; D:2891371 ]
preparing statement <9818046>: INSERT INTO
LANGUAGE_PERSON(LANGUAGE_CODE, PID) VALUES (?, ?)
1125 [main] INFO jdbc.SQL - [ C:6588476; T:6166426; D:2891371 ]
executing statement <9818046>: [reused=1;params={(String)EN,(int)1}]
1140 [main] INFO jdbc.SQL - [ C:6588476; T:6166426; D:2891371 ]
preparing statement <24763620>: INSERT INTO LANGUAGE_PERSON(FIRST_NAME,
LANGUAGE_CODE, PID) VALUES (?, ?, ?)
1140 [main] INFO jdbc.SQL - [ C:6588476; T:6166426; D:2891371 ]
executing statement <24763620>:
[reused=1;params={(String)Mike,(String)EN,(int)1}]
And the second INSERT fails on Oracle because FIRST_NAME is null, and
the table definition requires it to be NOT NULL.
Is there any way I can get Kodo to figure out that it's dealing with the same
table for inserting both the link columns and the full row, and optimize
accordingly, i.e. do one INSERT for LANGUAGE_PERSON?
I guess my only other options are a) introduce an explicit link table or
b) define a custom mapping?
Thanks,
Mike.
There are examples of 1-Many mappings in the documentation:
http://www.solarmetric.com/Software/Documentation/latest/docs/
ref_guide_meta_examples.html
The important point I think you've missed is that right now, 1-many
mappings always require an inverse 1-1 mapping. Again, see the docs
above.
So your LanguagePerson needs a field of type Person, and whenever you add
a LanguagePerson to a Person, make sure to set that LanguagePerson's
Person too. LanguagePerson.person will use the same PID column as
LanguagePerson.pid. Kodo has no problem with having 2 mappings
mapped to the same column.
Kodo 3.0 will allow 1-Many relations without an inverse 1-1. -
Exception raised while trying to join Table and Stream
I have written a sample Project to fetch the data from a Table and join it with the Input Stream. Followed the same procedure specified at http://download.oracle.com/docs/cd/E17904_01/doc.1111/e14301/processorcql.htm#CIHCCADG
I am getting the exception:
<Error> <Deployment> <BEA-2045016> <The application context "Plugin" could not be started. Could not initialize component "<unknown>":
Invalid statement: "select PROMOTIONAL_ORDER.ORDER_ID as orderId ,PROMOTIONAL_ORDER.UFD_ID as ufdId, PROMOTIONAL_ORDER.WEB_USER_ID as webUserId
from helloworldInputChannel [now] as dataStream, PROMOTIONAL_ORDER where >>PROMOTIONAL_ORDER.ORDER_ID = dataStream.ORDER_ID<<"
Cause: wrong number or types of arguments in call to et
Action: Check the spelling of the registered function. Also confirm that its call is correct and its parameters are of correct datatypes.>
If the where condition is removed then the application runs fine fetching the data from the Tables.
Following is the config.xml for processor:
======================================
<?xml version="1.0" encoding="UTF-8"?>
<n1:config xmlns:n1="http://www.bea.com/ns/wlevs/config/application">
<processor>
<name>helloworldProcessor</name>
<rules>
<query id="dummyRule"> <![CDATA[
select PROMOTIONAL_ORDER.ORDER_ID as orderId ,PROMOTIONAL_ORDER.UFD_ID as ufdId, PROMOTIONAL_ORDER.WEB_USER_ID as webUserId
from helloworldInputChannel [now] as dataStream, PROMOTIONAL_ORDER where PROMOTIONAL_ORDER.ORDER_ID = dataStream.ORDER_ID
]]></query>
</rules>
</processor>
</n1:config>
Following is the assembly file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:osgi="http://www.springframework.org/schema/osgi"
xmlns:wlevs="http://www.bea.com/ns/wlevs/spring"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/osgi
http://www.springframework.org/schema/osgi/spring-osgi.xsd
http://www.bea.com/ns/wlevs/spring
http://www.bea.com/ns/wlevs/spring/spring-wlevs-v11_1_1_3.xsd">
<wlevs:event-type-repository>
<wlevs:event-type type-name="CorpInterfaceEvent">
<wlevs:class>com.bea.wlevs.event.example.helloworld.CorpInterfaceEvent</wlevs:class>
</wlevs:event-type>
<wlevs:event-type type-name="PromotionalOrderEvent">
<wlevs:properties>
<wlevs:property name="ORDER_ID" type="bigint" />
<wlevs:property name="UFD_ID" type="bigint"/>
<wlevs:property name="WEB_USER_ID" type="bigint" />
</wlevs:properties>
</wlevs:event-type>
<wlevs:event-type type-name="DummyEvent">
<wlevs:properties>
<wlevs:property name="ORDER_ID" type="bigint" />
<wlevs:property name="UFD_ID" type="bigint"/>
<wlevs:property name="WEB_USER_ID" type="bigint" />
</wlevs:properties>
</wlevs:event-type>
</wlevs:event-type-repository>
<!--
Adapter can be created from a local class, without having to go
through a adapter factory
-->
<wlevs:adapter id="helloworldAdapter"
class="com.bea.wlevs.adapter.example.helloworld.HelloWorldAdapter">
<wlevs:instance-property name="message"
value="HelloWorld - the current time is:" />
</wlevs:adapter>
<wlevs:channel id="helloworldInputChannel" event-type="CorpInterfaceEvent">
<wlevs:listener ref="helloworldProcessor" />
<wlevs:source ref="helloworldAdapter" />
</wlevs:channel>
<!-- The default processor for OCEP 11.0.0.0 is CQL -->
<wlevs:processor id="helloworldProcessor">
<wlevs:table-source ref="PROMOTIONAL_ORDER" />
</wlevs:processor>
<wlevs:channel id="helloworldOutputChannel" event-type="CorpInterfaceEvent"
advertise="true">
<wlevs:listener>
<bean class="com.bea.wlevs.example.helloworld.HelloWorldBean" />
</wlevs:listener>
<wlevs:source ref="helloworldProcessor" />
</wlevs:channel>
<wlevs:table id="PROMOTIONAL_ORDER" event-type="PromotionalOrderEvent"
data-source="wlevsDatasource" />
</beans>
CorpInterfaceEvent.java:
package com.bea.wlevs.event.example.helloworld;
public class CorpInterfaceEvent {
private Long orderId;
public Long ORDER_ID;
private Long ufdId;
private Long webUserId;
public CorpInterfaceEvent() {
super();
}
public Long getOrderId() {
return orderId;
}
public void setOrderId(Long orderId) {
this.orderId = orderId;
}
public Long getORDER_ID() {
return ORDER_ID;
}
public void setORDER_ID(Long oRDERID) {
ORDER_ID = oRDERID;
}
public Long getUfdId() {
return ufdId;
}
public void setUfdId(Long ufdId) {
this.ufdId = ufdId;
}
public Long getWebUserId() {
return webUserId;
}
public void setWebUserId(Long webUserId) {
this.webUserId = webUserId;
}
}
Adapter:
/* (c) 2006-2009 Oracle. All rights reserved. */
package com.bea.wlevs.adapter.example.helloworld;
import java.math.BigDecimal;
import java.text.DateFormat;
import java.util.Date;
import com.bea.wlevs.ede.api.RunnableBean;
import com.bea.wlevs.ede.api.StreamSender;
import com.bea.wlevs.ede.api.StreamSource;
import com.bea.wlevs.event.example.helloworld.CorpInterfaceEvent;
public class HelloWorldAdapter implements RunnableBean, StreamSource {
private static final int SLEEP_MILLIS = 300;
private DateFormat dateFormat;
private String message;
private boolean suspended;
private StreamSender eventSender;
public HelloWorldAdapter() {
super();
dateFormat = DateFormat.getTimeInstance();
}
/* (non-Javadoc)
* @see java.lang.Runnable#run()
*/
public void run() {
suspended = false;
while (!isSuspended()) { // Generate messages forever...
generateHelloMessage();
suspend(); // This would generate the messages only once..
try {
synchronized (this) {
wait(SLEEP_MILLIS);
}
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
public void setMessage(String message) {
this.message = message;
}
private void generateHelloMessage() {
String message = this.message + dateFormat.format(new Date());
CorpInterfaceEvent event = new CorpInterfaceEvent();
//event.setOrderId(1);
event.setORDER_ID(Long.valueOf(1));
eventSender.sendInsertEvent(event);
}
/* (non-Javadoc)
* @see com.bea.wlevs.ede.api.StreamSource#setEventSender(com.bea.wlevs.ede.api.StreamSender)
*/
public void setEventSender(StreamSender sender) {
eventSender = sender;
}
/* (non-Javadoc)
* @see com.bea.wlevs.ede.api.SuspendableBean#suspend()
*/
public synchronized void suspend() {
suspended = true;
}
private synchronized boolean isSuspended() {
return suspended;
}
}
Kindly let me know if you need further info.
Issue identified. The datatypes of the stream order id and the one from the tables differ.
The Long could not be cast to the bigint format of CQL.
On changing the datatype of ORDER_ID in the CorpInterfaceEvent to int, the join is successful. -
Call tp EP and sending multiple rows of records
Hello friends,
I am creating a screen and am sending back to EP a user ID.
However, I would like to send multiple user IDs.
Below is the code of what I am doing:
str = userid. " NOTE - I would like to send Multiple IDS.
lv_value_label->set_string_struct_element( element = str
label_name = 'MAINTID' ).
it_result_state-result_state = 'APPROVED'.
APPEND it_result_state TO it_result_states.
CALL METHOD eup_structure_factory=>pack_value_label_to_xml
EXPORTING
iv_value_label = lv_value_label
IMPORTING
ev_xml = str.
CALL FUNCTION 'EUP_STORE_BSP_OUTPUT_DATA'
EXPORTING
iv_process_id = process_id
iv_task_id = task_id
iv_output_value = str
TABLES
it_result_states = it_result_states.Dear Ster,
I think you have all the user IDs in an internal table.
Why don't you loop over your internal table and pass those values to the standard FM you mentioned ('EUP_STORE_BSP_OUTPUT_DATA')?
I'm not aware whether there are any standard FMs for sending multiple user IDs. I will check and let you know.
Hope this will be helpful.
Regards,
Gokul.N -
Finding and averaging multiple rows
Hello,
I am having a little problem and was wondering if anyone had any ideas on how to best solve it.
Here is the problem:
- I have a large file 6000 rows by 2500 columns.
- First I sort the file by columns 1 and 2
- then I find that various rows in these two columns (1 and 2) have duplicate values, sometimes only twice, but sometimes three or four, or five or up to 9 times.
- this duplication occurs in only the first two columns, but we don't know in which rows and we don't know how much duplication there is. The remaining columns, i.e. column 3 to column 2500, for the corresponding rows contain data.
- Programmatically, I would like to find the duplicated rows by searching columns 1 and 2, and when I find them, average the respective data for these rows in columns 3 to 2500.
- So, once this is done I want to save the averaged data to file. In each row this file should have the name of colunm 1 and 2 and the averaged row values for columns 3 to 2500. So the file will have n rows by 2500 columns, where n will depend on how many duplicated rows there are in the original file.
I hope that this makes sense. I have outlined the problem in a simple example below:
In the example below we have two duplicates in rows 1 and 2 and four duplicates in rows 5 to 8.
Example input file:
Col1 Col2 Col3 ... Col2500
3 4 0.2 ... 0.5
3 4 0.4 ... 0.8
8 5 0.1 ... 0.4
7 9 0.7 ... 0.9
2 8 0.1 ... 0.5
2 8 0.5 ... 0.8
2 8 0.3 ... 0.2
2 8 0.6 ... 0.7
6 9 0.9 ... 0.1
So, based on the above example, the first two rows need averaging (two duplicates) as do rows 5 to 8 (four duplicates). The output file should look like this:
Col1 Col2 Col3 ... Col2500
3 4 0.3 ... 0.65
8 5 0.1 ... 0.4
7 9 0.7 ... 0.9
2 8 0.375 ... 0.55
6 9 0.9 ... 0.1
Solved!
Go to Solution.
Well, here's an initial crack at it. The premise behind this solution is to not even bother with the sorting. Also, trying to read the whole file at once just leads to memory problems. The approach taken is to read the file in chunks (as lines) and then, for each line, create a lookup key to see if that particular line has the same first and second columns as one we've previously encountered. A shift register is used to keep track of the unique "keys".
This is only an initial attempt and has known issues. Since a Build Array is used to create the resulting output array, the loop will slow down over time, though it may slow down, speed up, and slow down again as LabVIEW performs internal memory management to allocate more memory for the resultant array. On the large 6000 x 2500 array it took several minutes on my computer. I did this on LabVIEW 8.2, and I know that LabVIEW 8.6 has better memory management, so the performance will likely be different.
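The grouping logic itself is language-independent; here is the same single-pass, no-sort idea sketched in Python, where a dict plays the role of the shift register's key lookup (column count cut down to 4 for readability):

```python
from collections import defaultdict

# First two columns are the key; the rest is data to average.
rows = [
    (3, 4, 0.2, 0.5),
    (3, 4, 0.4, 0.8),
    (8, 5, 0.1, 0.4),
    (2, 8, 0.1, 0.5),
    (2, 8, 0.5, 0.8),
    (2, 8, 0.3, 0.2),
    (2, 8, 0.6, 0.7),
]

# Single pass: bucket each row's data columns under its two-column key.
groups = defaultdict(list)
for r in rows:
    groups[r[:2]].append(r[2:])

# Average each bucket column-wise (zip(*vals) transposes the rows).
averaged = [
    key + tuple(sum(col) / len(col) for col in zip(*vals))
    for key, vals in groups.items()
]
for row in averaged:
    print(row)
```

Because there is no sort and no growing array rebuild, this stays roughly linear in the number of rows, which is the property the LabVIEW answer was after.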
Attachments:
Averaging rows.vi 30 KB -
Return and combine multiple rows in one record
Hi friends,
I have these cursors,
DECLARE
CURSOR plaintif_cur IS
SELECT personel_id, sp_sfs_id
FROM siv_plaintif
WHERE SP_SFS_ID IN(70, 74, 182)
ORDER BY personel_id;
-- defendan cursor all defendan for a dept number
CURSOR defendan_cur (v_sp_sfs_id siv_plaintif.SP_SFS_ID%TYPE) IS
SELECT personel_id, sd_sfs_id
FROM siv_defendan
WHERE sd_sfs_id = v_sp_sfs_id
AND SD_SFS_ID IN(70, 74, 182);
BEGIN
FOR plaintif_rec IN plaintif_cur LOOP
dbms_output.put_line('Plaintif in Sivil '||TO_CHAR(plaintif_rec.sp_sfs_id));
FOR defendan_rec in defendan_cur(plaintif_rec.sp_sfs_id) LOOP
dbms_output.put_line('...plaintif is '||plaintif_rec.personel_id);
END LOOP;
END LOOP;
END;
The output generated was
Output:
Plaintif in Sivil 182
...plaintif is 38
Plaintif in Sivil 70
...plaintif is 1257
Plaintif in Sivil 74
...plaintif is 1277
Plaintif in Sivil 74
...plaintif is 1278
However, I want the output to be like this, especially for records where there are many plaintifs in one Sivil file:
Desired Output:
Plaintif in Sivil 182
...plaintif is 38
Plaintif in Sivil 70
...plaintif is 1257
Plaintif in Sivil 74
...plaintif is 1277, 1278
I would like to thank everyone helping. Thank you.
Instead of declaring two cursors and doing it in the slowest possible manner, you can possibly combine it into one SQL statement. Search for "string aggregation" to get some queries in this regard.
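To illustrate the string-aggregation idea (Oracle 11.2 has LISTAGG for this; the sketch below uses Python with SQLite, whose group_concat plays the same role, with the IDs from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE siv_plaintif (personel_id INT, sp_sfs_id INT);
    INSERT INTO siv_plaintif VALUES
        (38, 182), (1257, 70), (1277, 74), (1278, 74);
""")

# One aggregate query replaces the nested cursor loops: each sp_sfs_id
# comes back once, with all its plaintif IDs joined into a single string.
rows = conn.execute("""
    SELECT sp_sfs_id, group_concat(personel_id, ', ') AS plaintifs
    FROM siv_plaintif
    GROUP BY sp_sfs_id
    ORDER BY sp_sfs_id
""").fetchall()
for sfs_id, plaintifs in rows:
    print(f"Plaintif in Sivil {sfs_id}")
    print(f"...plaintif is {plaintifs}")
```

In Oracle the equivalent aggregate is LISTAGG(personel_id, ', ') WITHIN GROUP (ORDER BY personel_id), which also pins down the ordering inside each group (SQLite's group_concat does not guarantee it).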
For a more specific answer, please post your table structure (CREATE TABLE) and sample data (INSERT statements) with the sample output desired. Format your code with code tags. -
UPDATE TABLE - subquery returns multiple rows
Hi,
i need to update a table, but I don't know how or what I'm doing wrong.
My syntax:
UPDATE TBL2
SET TBL2.QUANTITY =
(select TBL1.QUANTITY
from TBL2,
TBL1
where TBL1.ID_STOCK=TBL2.ID_STOCK
and TBL1.DATE=TBL2.DATE
and TBL1.ID_FK_STOCKAREA=TBL2.ID_FK_STOCKAREA
and TBL1.ID_FK_STOCKPLACE=TBL2.ID_FK_STOCKPLACE
and TBL1.ID_FK_CONTAINER=TBL2.ID_FK_CONTAINER
AND TBL1.ID_STOCK = :P302_ID_STOCK)
Actually, it should only be possible for it to return one value... but it always says: ORA-01427: subquery returns more than one row.
Has anybody an idea??
Thanks sooooo much,
yours
Elisabeth
This might help:
UPDATE TBL2
SET TBL2.QUANTITY =
(select TBL1.QUANTITY
from TBL1
where TBL1.ID_STOCK=TBL2.ID_STOCK
and TBL1.DATE=TBL2.DATE
and TBL1.ID_FK_STOCKAREA=TBL2.ID_FK_STOCKAREA
and TBL1.ID_FK_STOCKPLACE=TBL2.ID_FK_STOCKPLACE
and TBL1.ID_FK_CONTAINER=TBL2.ID_FK_CONTAINER
AND TBL1.ID_STOCK = :P302_ID_STOCK)
pratz -
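The fix above works because dropping TBL2 from the subquery's FROM list turns it into a correlated scalar subquery: it is re-evaluated per updated row and returns one value each time, instead of re-joining every TBL2 row (which is what triggered ORA-01427). A small Python/SQLite sketch of the same shape, using invented two-column tables rather than the real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl1 (id_stock INT, quantity INT);
    CREATE TABLE tbl2 (id_stock INT, quantity INT);
    INSERT INTO tbl1 VALUES (1, 50), (2, 70);
    INSERT INTO tbl2 VALUES (1, 0), (2, 0);
""")

# The subquery references the outer tbl2 row (correlation) and does NOT
# list tbl2 in its own FROM clause, so it yields one value per row.
conn.execute("""
    UPDATE tbl2
    SET quantity = (SELECT t1.quantity
                    FROM tbl1 t1
                    WHERE t1.id_stock = tbl2.id_stock)
""")
rows = conn.execute(
    "SELECT id_stock, quantity FROM tbl2 ORDER BY id_stock"
).fetchall()
print(rows)
```

Each tbl2 row picks up exactly the quantity of its matching tbl1 row.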
Updating ROW and inserting multiple row
Hello,
I needed some help.
Firstly, I have a VO which is based on 2 EOs; these 2 EOs are linked with an association.
In my page I have a table, which by default has one row, and a few fields can be edited. On click of "Apply" I want to commit this data, but when I use getOADBTransaction().commit(); it gives me a primary key constraint error.
Secondly, when I click on the add row button, it adds a new row with a unique primary key, also copying a few attributes from the existing first row.
So now, when I want to update this table, since this VO is based on 2 linked EOs, I can't insert completely, because the parent EO's primary key is not inserted into the second EO.
Please help.
I created a new VO.
It is EO based. Again, commit is not working. The code is:
public void create_row()
{
SplitAtsVOImpl svo = getSplitAtsVO1();
Row row = svo.first();
Row r = svo.createRow();
for (int i = 1; i < row.getAttributeCount(); i++)
{
System.out.println(i + " " + row.getAttribute(i));
if (row.getAttribute(i) != null && i < 27)
r.setAttribute(i, row.getAttribute(i).toString());
}
r.setAttribute("DispAssNum", "" + r.getAttribute("AssetNum") + "-" + count);
r.setAttribute("AtsAssetId", getOADBTransaction().getSequenceValue("ATS_ASSET_TBL_S").toString());
System.out.println(r.getAttribute("AtsAssetId"));
r.setNewRowState(Row.STATUS_INITIALIZED);
//r.setAttribute("AtsAssetId1", ("" + row.getAttribute("AtsAssetId")));
System.out.println(svo.getRowCount());
svo.insertRowAtRangeIndex(0, r);
try
{
getOADBTransaction().commit();
}
catch (OAException e)
{
System.out.println(e.toString());
}
}
Mapping and combining multiple rows
Hi,
Imagine the following case: from R/3, every week, 10 different products are sold to different customers. If no new item is sold for a customer, no new record is generated for that customer. For example, the source file:
CustID product #sold number items
1 1 5
1 2 2
1 3 3
3 2 7
4 1 30
In this case, for CustID 2 no sales were generated this week, so no record exists in the text file. This file must be submitted to a partner for analysis in the following structure:
Cust ID product 1 product 2 product 3 .
1 5 2 3
3 7
4 30
It is going to be a text file structure after XI mapping. Both files are fixed-length, so each record and field has fixed, predetermined positions.
How would the mapping best be performed in XI in this case? Is using BPM necessary, or could it be avoided, and if yes, how? It looks like a mapping program must be created. I appreciate any input; the easy way is the better way. Of course, I would like to handle some errors as well.
Do I need any conversions in the inbound/outbound File Adapter, and if yes, what kind? Thanks
Jon
Hey,
>>case From R/3 every week to different customers 10 different products# are sold
This means you are sending some values from R/3 to your partner. Now, for this sender file: will it be an IDoc? If yes, which IDoc? If it's going to be a flat file, can you provide us with the structure of the sender data type?
It would also be helpful if you could send the receiver data structure as well.
Until now I don't see any need for BPM; it can be handled in message mapping, but I can be more sure once I get the source and target data structures.
Thanx
Aamir -
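Aamir is right that plain message mapping can handle this; the pivot itself is just "group by customer, emit one fixed-width line with a slot per product". A Python sketch of that logic, with the data from the question and invented field widths:

```python
# Pivot "CustID, product, qty" rows into one fixed-width line per customer,
# with one column slot per product (products 1..3 here). Missing products
# leave their slot blank, as in the partner's target layout.
sales = [(1, 1, 5), (1, 2, 2), (1, 3, 3), (3, 2, 7), (4, 1, 30)]
products = [1, 2, 3]

# Group quantities under each customer, keyed by product number.
pivot = {}
for cust, prod, qty in sales:
    pivot.setdefault(cust, {})[prod] = qty

# Emit one fixed-width line per customer (widths 8 and 10 are invented).
lines = []
for cust in sorted(pivot):
    cells = [str(pivot[cust].get(p, '')).ljust(10) for p in products]
    lines.append(str(cust).ljust(8) + ''.join(cells))
for line in lines:
    print(line)
```

Customers with no sales that week (CustID 2) simply never appear, matching the source-file behaviour described above.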
Error while joining Table and Collection.
Hi Friends,
I need to know whether the below is possible.
DECLARE
TYPE t_dept IS TABLE OF dept%ROWTYPE;
c t_dept;
BEGIN
SELECT *
BULK COLLECT INTO c
FROM dept;
FOR r IN (SELECT *
FROM emp e, TABLE (CAST (c AS t_dept)) d
WHERE e.deptno = d.deptno)
LOOP
DBMS_OUTPUT.put_line (r.ename);
END LOOP;
END;No.
What exactly are you trying to accomplish?
SQL> DECLARE
2 TYPE t_dept IS TABLE OF dept%ROWTYPE;
3
4 c t_dept;
5 BEGIN
6 SELECT *
7 BULK COLLECT INTO c
8 FROM dept;
9
10 FOR r IN (SELECT *
11 FROM emp e, TABLE (CAST (c AS t_dept)) d
12 WHERE e.deptno = d.deptno)
13 LOOP
14 DBMS_OUTPUT.put_line (r.ename);
15 END LOOP;
16 END;
17 /
FROM emp e, TABLE (CAST (c AS t_dept)) d
ERROR at line 11:
ORA-06550: line 11, column 31:
PL/SQL: ORA-00902: invalid datatype
ORA-06550: line 10, column 10:
PL/SQL: SQL Statement ignored
ORA-06550: line 14, column 23:
PLS-00364: loop index variable 'R' use is invalid
ORA-06550: line 14, column 1:
PL/SQL: Statement ignored
SQL>