Question on use of multi-mappings in interface mappings
We have the following scenario:
1. XI receives an ORDERS05 IDoc XML and runs a first message mapping, splitting this XML into two messages - a lookup-key message type and a copy of the ORDERS05 message type (1:n).
2. These two messages from the first mapping are used in a second message mapping (also a multi-mapping, n:1) to create the final ORDERS05 message.
3. These two message mappings are placed sequentially in the interface mapping:
ORDERS05 -> MessageMapping1 -> MessageMapping2 -> ORDERS05. Please note that our objective is to send one single ORDERS05 IDoc to the receiving R/3 system using the IDoc adapter. The multi-mapping is used only in the interim, not in the IDoc adapter.
Issues:
a. the "ns0:Messages" and "ns0:Message&lt;n&gt;" tags are not being created automatically.
b. In the interface determination we do not see the interface mapping when we select the 'enhanced' option.
Any ideas or pointers as to what I am missing here? I am on a critical timeline to implement this and can't understand what's going wrong.
Michal,
In your suggestion to use two interface mappings - how can I configure the two interface mappings as part of a single interface determination so that they execute one after the other? Or is it two interface determinations, one for each interface mapping - and if so, how do I relate the two?
Also, I assume that since the message mappings in the interface mappings are multi-mappings, I need to use enhanced interface determination.
Could you share one of the scenarios in which you had two interface mappings execute in series? Thanks for your time.
Similar Messages
-
Use of multi applications and interface comp controller in Leave(any WDP)
Hi friends,
I am new to ESS/MSS and am now modifying the standard ESS Leave project. While doing this I want to clarify a few things for myself:
1) What is the use (purpose) of creating multiple applications (LeaveRequest, LeaveRequestAdmin, LeaveRequestApprover, TeamView) in a WDP (for example, ESS Leave) project?
2) I came to know that the interface controller plays a big role in this. I need to understand the exact use of the interface controller, if possible with a basic example; I went through the SDN blogs but did not get a clear picture of it.
Regards
Rajesh
Refer to help.sap.com for the exact definition.
In brief
LeaveRequest > for requesting leave by the employee, i.e. the end user
LeaveRequestAdmin > to resolve errors in the Leave Request application; not available, you need to use PTARQ
LeaveRequestApprover > to approve leave requests, like a manager
TeamView > to see other employees' absences in one's org unit -
Question about using Sun Multi-Schema XML Validator
Hi all,
I tried to use Sun MSV to validate XML against my schema file, and the result is strange. To simply test the validator, I used the following XML file and XSD file:
------- test.xml -------------------------------------------------
<?xml version="1.0" encoding="ISO-8859-1"?>
<persons xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="test.xsd">
<person>1</person>
<person>2</person>
<person>3</person>
</persons>
--------test.xsd ---------------------------------------------------
<?xml version="1.0" encoding="ISO-8859-1"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
elementFormDefault="qualified">
<xs:element name="persons">
<xs:complexType>
<xs:sequence>
<xs:element name="person" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
To my knowledge of XML Schema, the above XML file should validate OK against the XSD file.
But in sun msv validator, it reports the following error:
------------ error message ---------------------------------------------------
Error of test.xml
element "person" was found where no element may occur
Why does this happen? I have defined the occurrence of the "person" element under "persons" as unbounded. What could possibly be wrong?
Thanks in advance!
Problem solved by updating the MSV lib files. Thanks for noticing!
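For later readers: the same schema/instance pair can be cross-checked with the JDK's built-in JAXP validator (javax.xml.validation) instead of MSV, which helps decide whether the documents or the validator library are at fault. A minimal, self-contained sketch (the file contents from the post are inlined; the class name is made up):

```java
import java.io.File;
import java.io.FileWriter;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class ValidateExample {
    // Write content to a temp file so the example is self-contained.
    static File write(String suffix, String content) throws Exception {
        File f = File.createTempFile("test", suffix);
        f.deleteOnExit();
        try (FileWriter w = new FileWriter(f)) { w.write(content); }
        return f;
    }

    public static void main(String[] args) throws Exception {
        // The schema and instance from the post, inlined.
        File xsd = write(".xsd",
            "<?xml version=\"1.0\"?>"
          + "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\" elementFormDefault=\"qualified\">"
          + "<xs:element name=\"persons\"><xs:complexType><xs:sequence>"
          + "<xs:element name=\"person\" maxOccurs=\"unbounded\"/>"
          + "</xs:sequence></xs:complexType></xs:element></xs:schema>");
        File xml = write(".xml",
            "<?xml version=\"1.0\"?>"
          + "<persons><person>1</person><person>2</person><person>3</person></persons>");

        SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = sf.newSchema(new StreamSource(xsd));
        Validator v = schema.newValidator();
        v.validate(new StreamSource(xml)); // throws SAXException if invalid
        System.out.println("valid");
    }
}
```

If the document were invalid, validate() would throw a SAXException naming the offending element, much like the MSV message above.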
-
A question about using INTERFACE?
I was writing an XML DOM application.
I tried some sample code,
and it works fine.
I import some classes for XML parsing:
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
and I opened Node.java and NodeList.java in the dom directory, and saw that Node is a public interface:
public interface Node {
My question is:
if Node is an interface, why can I directly use the methods of a Node object in my DOM application code,
like:
Node node;
String val = node.getNodeValue();
I don't understand that.
Why can we declare an object of type Node (which is an interface) and use its methods?
Just like you can use the Connection interface in JDBC to
create a Statement object.
You can have objects whose reference is of an interface type. That way the people at Sun have
restricted you from running any methods other
than those listed in the interface. This
creates a very strict inheritance hierarchy which is
impossible to break.
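For instance, with the DOM API itself: you never write new Node(). The parser's factory creates an implementation object, and your code holds it through the interface-typed reference. A minimal sketch (the class name NodeDemo is made up):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class NodeDemo {
    public static void main(String[] args) throws Exception {
        // The factory builds an implementation object for you; you never
        // instantiate the Node interface directly.
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new ByteArrayInputStream(
                "<greeting>hello</greeting>".getBytes("UTF-8")));

        Node root = doc.getDocumentElement(); // interface-typed reference
        System.out.println(root.getNodeName());    // prints: greeting
        System.out.println(root.getTextContent()); // prints: hello

        // The runtime class is some internal parser implementation, but the
        // reference only exposes the methods declared on the interface.
        System.out.println(root.getClass().getName());
    }
}
```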
For example, you cannot possibly run or create any
methods other than those listed in the Connection
interface in the JDBC API. This is because although the object
at runtime might be an implementation of the
interface, the reference through which you have
access to it is only a predefined interface. -
RAID Interface Mappings with RocketRaid 2310 (rr2310_00) fail in .37
EDIT: This post was originally titled:
RAID Interface Mappings (Hard {Challenge?} Question) [with edit]
It has since been changed because further investigation found it to be a completely different problem... Go figure.
Also, this post is tightly coupled with another post (link further down in the comments).
Greetings...
Alright. So I'm not new to Linux to say the least, but I swear every time I have to deal with udev and related tasks something goes funky.
So, here's the issue:
I have a RocketRAID 2310 raid adaptor (made by HighPoint) that has a RAID5 configured using four 750 GiB drives. (of 8 in the machine)
The machine dual-boots both Windows and Linux, and that is the main source of the issue. It appears as though the Linux drivers and the Windows drivers, while both functional, are not designed to be intermixed... which I suppose from a server perspective is a perfectly reasonable limitation, even if frustrating. Anyway, I digress...
I've been working with this card for years now, accepting the limitation that I could only access it from Windows. Today, though, I made an interesting discovery. I was reconfiguring some devices and noticed something strange about the fdisk output:
Disk /dev/sde: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sde1 1 4294967295 2147483647+ ee GPT
Disk /dev/sdf: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0046b51b
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xac7a1898
Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/sdh: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xac3cad83
Disk /dev/sdh doesn't contain a valid partition table
The four drives showing up as individual drives is completely normal per my experience with this card... What's new since installing Arch is the GPT partition table detected on the first drive, and more importantly, the sectors! For those that didn't catch that, let me highlight:
Disk /dev/sde: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
/dev/sde1 1 4294967295 2147483647+ ee GPT
Yeah... it goes to a sector about three times the max. Also known as the RAID partition! Huh, would you look at that. Sadly, udev doesn't catch it, and there is no /dev/sde1 partition.
Now there are two possibilities here as far as I can tell:
First, that somehow the entire GPT partition table is smaller than the striping block size of the RAID5, and that fdisk manages to read it entirely off the first disk in the RAID. Striping is set at 64k I think, so it's possible, but it seems a stretch...
Second, that the RAID is being detected, but udev is missing it and something funny is happening with the device/partition mappings.
Now I would love to experiment to find out what is going on here, but the RAID contains critical data (which, although backed up to tape, is still something I don't want to have to restore! Rebuilding the array takes 5 hours minimum, then restoring from tape is likely another 10... 15 hours of downtime is not an option). What this means is that all tests on it have to be NON-DESTRUCTIVE, read-only operations. To complicate matters (just because things can't be that easy), the contents of the RAID5 is a partition-level TrueCrypt volume, so there is no easy way to determine the contents based on partition identifiers.
I tried using parted, but sadly it coughs up a lung and aborts all operations when it sees a partition boundary outside the scope of the disk (as is the case here). Seriously, it won't even try. Wimp...
So that's where I am. A mystery partition that only fdisk can see. If I could get a file handle to it, I could try to get TrueCrypt to decrypt it to find out whether it's a striping artifact or not.
Aside:
I suppose this question also links to another, more generic question: given a 'file' (which could be any *nix file handle) that contains a valid partition table and partitions (say a dd if=/dev/sda of=/home/user/device.bak), how does one go about accessing its partitions? Are there udev tricks, or some other method? Edit 1a: I know how loopback mounting works, but that doesn't expose the partitions within, because it doesn't flag a uevent for the udev daemon to respond to... I am not sure how to manually tell it to process /dev/loop0 and look for partitions (supposedly splitting it up into /dev/loop0p1, /dev/loop0p2, etc.)
Additional notes:
- I am not sure what device driver is being used for those four drives to be visible... It's not rr2310 (the HighPoint RAID driver), but I know the card uses a resold chipset (maybe Marvell?), so it could be detected by a number of other kernel drivers. lsmod doesn't have anything useful there. Is there maybe a way to determine what device file is associated with what driver module?... So many good questions. Too many for one post!
- The RAID BIOS config is a useless piece of junk. It loads into memory too early in the BIOS operations on this board, and thus runs into a race condition when detecting its own hardware. 90%+ of the time it will list no drives detected, but it continues booting and everything is there just fine. It conflicts with the AMI (Megatrends) BIOS, but not with similar Phoenix-based ones on similar boards (which detect the drives and RAID properly every boot)... For those thinking of RAID cards for their servers, don't grab HighPoint! I've since stayed clear, but this server has a long life expectancy, so I'm having to live with it.
So there we have it! Hit me, Arch Forums, with your best theories or hacks!
Edit 1b:
I was recently reconfiguring two other drives in another computer to use a RAID0 array and didn't bother wiping them before initialising the array... Normally I wouldn't even have bothered checking, but today I felt curious. The RAID0 was striped with 64k blocks, same as the above-mentioned RAID5... And lo and behold, the new RAID0 array had the old partition table of one of the original drives. What this proves is that the striping of a RAID array (and I know that "redundant array of independent disks array" is redundant...) can leave a partition table intact and readable. That makes the first option seem more probable...
Last edited by SeanM (2011-02-22 20:05:48)
Hi, I downloaded your last package and adapted it for the 232x model, using rr232x-linux-src-v1.10-090716-0928.tar.gz from HighPoint.
I'm not an expert in package building, but after some reading I considered myself sufficiently armed... or so I thought (sic).
So I replaced all strings related to the model and the package version in the install file, PKGBUILD and patches. Everything patches fine except Makefile.def, which is rejected. I hope you can shed some light on that issue.
Currently the kernel is 3.1.2-1-ARCH, so might that be the problem?
Here is rr232x.install:
# This is a default template for a post-install scriptlet. You can
# remove any functions you don't need (and this header).
KERNEL_VERSION=`uname -r`

# arg 1: the new package version
pre_install() {
  /bin/true
}

# arg 1: the new package version
post_install() {
  depmod -v $KERNEL_VERSION > /dev/null 2>&1
  /bin/cat <<EOF
==> To use this module, sata_mv has to be unloaded or the kernel will panic
==> To unload sata_mv, please do
==>
==>   # rmmod sata_mv
==>
==> as root before loading rr232x
==>
==> If you want to use this module permanently, you should blacklist
==> sata_mv by adding 'blacklist sata_mv' to /etc/modprobe.d/modprobe.conf
EOF
}

# arg 1: the new package version
# arg 2: the old package version
post_upgrade() {
  post_install
}

# arg 1: the old package version
post_remove() {
  depmod -v $KERNEL_VERSION > /dev/null 2>&1
}
The modded PKGBUILD:
# Maintainer: count-corrupt <corrupt at giggedy dot de>
pkgname=rr232x
pkgver=1.10
pkgrel=1
pkgdesc="Kernel modules for Highpoint RocketRAID 230x and 231x SATA cards. Patched for use with kernel26 =2.6.37, >=2.6.38 and kernel >= 3 (a.k.a. linux)"
arch=('i686' 'x86_64')
url="http://www.highpoint-tech.com/USA/bios_rr2320.htm"
license=('custom')
groups=()
if [[ `uname -r` == 2.6.* ]]; then
depends=('kernel26')
else
depends=('linux')
fi
makedepends=()
provides=()
conflicts=()
replaces=()
backup=()
options=()
install=$pkgname.install
source=( http://www.highpoint-tech.cn/BIOS_Driver/rr232x/Linux/new%20format/rr232x-linux-src-v1.10-090716-0928.tar.gz scsi_lck.patch kernel3.patch)
noextract=()
md5sums=()
_kernver=`uname -r`

build() {
  mkdir -p $startdir/pkg/lib/modules/${_kernver}/kernel/drivers/scsi/

  # Apply the scsi lock patch to make the driver work with kernel26 > 2.6.37
  cd $startdir
  patch -p0 -i $startdir/scsi_lck.patch
  patch -p0 -i $startdir/kernel3.patch

  cd $startdir/src/rr232x-linux-src-v$pkgver/product/rr232x/linux/
  make KERNELDIR=/usr/src/linux-$_kernver || return 1

  # Install the kernel module
  install -m 644 -D rr2320_00.ko $startdir/pkg/lib/modules/${_kernver}/kernel/drivers/scsi/
  mkdir -p $startdir/pkg/usr/share/licenses/$pkgname
  cp $startdir/src/rr232x-linux-src-v$pkgver/README $startdir/pkg/usr/share/licenses/$pkgname/
}
The patches kernel3.patch and scsi_lck.patch are modified accordingly.
To avoid dependency problems and to skip the integrity checks (md5sums is empty), I run
makepkg -i --skipinteg
and I get these results ...
┌─17:40 joan@WS01 ~/src/rr232x_mod
└─>>> makepkg -i --skipinteg
==> Making package: rr232x 1.10-1 (Fri Nov 25 17:44:16 CET 2011)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
-> Found rr232x-linux-src-v1.10-090716-0928.tar.gz
-> Found scsi_lck.patch
-> Found kernel3.patch
==> WARNING: Skipping integrity checks.
==> Extracting sources...
-> Extracting rr232x-linux-src-v1.10-090716-0928.tar.gz with bsdtar
==> Removing existing pkg/ directory...
==> Entering fakeroot environment...
==> Starting build()...
patching file src/rr232x-linux-src-v1.10/osm/linux/os_linux.c
patching file src/rr232x-linux-src-v1.10/osm/linux/osm_linux.c
patching file src/rr232x-linux-src-v1.10/osm/linux/osm_linux.h
patching file src/rr232x-linux-src-v1.10/inc/linux/Makefile.def
Hunk #1 FAILED at 74.
Hunk #2 FAILED at 119.
2 out of 2 hunks FAILED -- saving rejects to file src/rr232x-linux-src-v1.10/inc/linux/Makefile.def.rej
==> ERROR: A failure occurred in build().
Aborting...
So I guess the scsi_lck.patch is fine, and the issue must be in the addressing of the kernel version.
The rejected Makefile patch:
--- src/rr232x-linux-src-v1.10/inc/linux/Makefile.def.orig 2011-08-13 10:13:37.000000000 +0200
+++ src/rr232x-linux-src-v1.10/inc/linux/Makefile.def 2011-08-13 11:15:53.000000000 +0200
@@ -74,19 +74,39 @@
KERNELDIR := /lib/modules/$(shell uname -r)/build
endif
-KERNEL_VER := 2.$(shell expr `grep LINUX_VERSION_CODE $(KERNELDIR)/include/linux/version.h | cut -d\ -f3` / 256 % 256)
+KERNEL_MAJ_VER := $(shell expr `grep LINUX_VERSION_CODE /usr/src/$(uname -r)/include/linux/version.h | cut -d\ -f3` / 65536 % 65536)
+
+KERNEL_VER := $(KERNEL_MAJ_VER).$(shell expr `grep LINUX_VERSION_CODE $(KERNELDIR)/include/linux/version.h | cut -d\ -f3` / 256 % 256)
ifeq ($(KERNEL_VER),)
$(error Cannot find kernel version. Check $(KERNELDIR)/include/linux/version.h.)
endif
+
ifneq ($(KERNEL_VER), 2.6)
ifneq ($(KERNEL_VER), 2.4)
-$(error Only kernel 2.4/2.6 is supported but you use $(KERNEL_VER))
+ifneq ($(KERNEL_MAJ_VER), 3)
+$(error Only kernel 2.4/2.6/3 is supported but you use $(KERNEL_VER))
endif
endif
+endif
+
+ifeq ($(KERNEL_VER), 2.4)
+
+HPT_LIB := $(HPT_LIB)-regparm0
+_TARGETMODS := $(addprefix $(HPT_LIB)/,$(TARGETMODS))
+
+VPATH := .. $(HPT_ROOT)/osm/linux
+TARGET := $(TARGETNAME).o
+
+C_INCLUDES += -I$(HPT_ROOT)/osm/linux -I$(KERNELDIR)/include -I$(KERNELDIR)/drivers/scsi
+
+$(TARGET): $(TARGETOBJS) $(_TARGETMODS)
+ @echo $(if $V,,[LD] $@)
+ $(if $V,,@)$(CROSS_COMPILE)$(LD) -r -o $@ $^
+
-ifeq ($(KERNEL_VER), 2.6)
+else # for kernel >= 2.6
TARGET := $(TARGETNAME).ko
@@ -119,20 +139,6 @@
@echo '$$(addprefix $$(obj)/,$$(TARGETMODS)): $$(obj)/%.o: $$(HPT_LIB)/%.o' >>$@
@echo ' @cp -f $$< $$@' >>$@
-else # for kernel 2.4 modules
-HPT_LIB := $(HPT_LIB)-regparm0
-_TARGETMODS := $(addprefix $(HPT_LIB)/,$(TARGETMODS))
-VPATH := .. $(HPT_ROOT)/osm/linux
-TARGET := $(TARGETNAME).o
-C_INCLUDES += -I$(HPT_ROOT)/osm/linux -I$(KERNELDIR)/include -I$(KERNELDIR)/drivers/scsi
-$(TARGET): $(TARGETOBJS) $(_TARGETMODS)
- @echo $(if $V,,[LD] $@)
- $(if $V,,@)$(CROSS_COMPILE)$(LD) -r -o $@ $^
endif # KERNEL_VER
endif # KMOD
I have already tried for several days without success, and I'm running out of ideas, so I'm asking for some guidance to find my way back to the path.
Thanks in advance
Last edited by ga01f4733 (2011-11-25 17:27:13) -
View Mapping Result between two Interface Mappings in ccBPM
Hello,
I've got a ccBPM which does two interface mappings. The second one fails. When I redo the steps manually in Interface Mapping test mode, everything works fine. Anyway, I want to get the message from the failed BPM that came out of the first interface mapping (which worked fine in the BPM as well), before it entered the second.
Where can I get that message? In Monitoring I can only find messages that got sent.
Thanks for your help!
Regards,
Dirk
Hi,
Please check in Runtime Workbench.
Go to Adapter Engine --> Component Monitoring
Now select your adapter.
Use the filter, and below you will find message IDs.
Select one and you can see the audit log, where your application fails.
You can also use SXMB_MONI.
Select the message giving the error; in it, go to the outbound tab, click on the link, select the View Details image button, select the component with the error and go to its container tab. There you will find a trace entry where the log of your error is stored.
Hope it helps.
Best Of Luck
Akhil
Edited by: Akhil Rastogi on Mar 18, 2008 11:08 AM -
How to add Two Interface Mappings to One Receiver(BPM) Help needed urgently
I have a requirement where I get a flat file, split it into multiple files and send them to a BPM.
For each split file I created an interface mapping using a Java mapping program.
In the configuration, how do I add more interface mappings?
Thanks for your help in advance.
Regards
Sudha
You can use Enhanced Interface Determination to split one message into multiple messages, and hence to multiple interfaces.
You have to change the occurrence of Messages in the message mapping and of their corresponding interfaces in the interface mapping. That would create multiple files with multiple interfaces to the receiver (BPM).
1) You do not need to use multiple interface mappings.
2) You will use Extended Interface Determination for this.
regards.
Jeet. -
Interface Mappings are not displayed in Receiver Determination
Hi friends,
I'm doing Enhanced Receiver Determination, but if I select the <b>Extended</b> radio button in the receiver determination, I'm not getting any search help for selecting my interface mappings.
What mistake did I make? Do we need to do anything special to get those interface mappings?
I followed the blog below, but it doesn't work like that for me. Also, how many interface mappings do we need to create for this?
/people/venkataramanan.parameswaran/blog/2006/03/17/illustration-of-enhanced-receiver-determination--sp16
Please suggest.
thanks
BABU
Hi Prabhu,
Thank you for your quick response. I mentioned the receiver business services in the user-defined function which I created in the message mapping. Is there any other place where we need to mention them?
Actually, I have one source structure and two receiver structures:
<b>1) Sendor_DT</b>
PERSON
NAME
AGE
ADRESS
<b>2) Receiver_DT_1 ( this is for Male person details )</b>
PERSON
NAME
AGE
ADRESS
<b>3) Receiver_DT_2 (this is for female person details)</b>
PERSON
NAME
AGE
ADRESS
For this I created three message mappings: one for source to first receiver,
one for source to second receiver,
and a third for source to RECEIVERS (the message type from the SAP-BASIS component).
In the mapping
I created one user-defined function and mapped it to split the message.
In that user-defined function I wrote the code below:
int mr = 0;
int ms = 0;
for (int i = 0; i < a.length; i++) {
    if (a[i].substring(0, 2).equals("Mr") && mr == 0) {
        result.addValue("AATRNG_TEST_4_BS_IB1");
        mr = 1;
    }
    if (a[i].substring(0, 2).equals("Ms") && ms == 0) {
        result.addValue("AATRNG_TEST_4_BS_IB2");
        ms = 1;
    }
}
And I created two interface mappings, three business services, three communication channels, two receiver agreements, two interface determinations, one sender agreement and a <b>receiver determination</b> with <b>EXTENDED</b>, and I used those interface mappings in the receiver determination.
But why was the file not loaded on the receiver side?
thanks
Babu -
Make sure that your bean implements the Serializable interface and that
you are accessing the bean from the session with the same name.
Bryan
"Sandeep Suri" <[email protected]> wrote in message
news:[email protected]..
Hi, I have a quick question about the use of the USEBEAN tag in SP2. When I
specify a scope of SESSION for the Java bean, it does not keep the
values that I set for variables in the bean persistent. Thanks, Sonny
-
Using journalized data in an interface with an aggregate function
Hi
I am trying to use the journalized data of a source table in one of my interfaces in ODI. The trouble is that one of the mappings on the target columns involves an aggregate function (SUM). When I run the interface I get an error saying "not a GROUP BY expression". I checked the code and found that the jrn_subscriber, jrn_flag and jrn_date columns are included in the SELECT statement but not in the GROUP BY statement (the GROUP BY statement only contains the remaining two columns of the target table).
Is there a way around this? Do I have to manually modify the KM? If so, how would I go about doing it?
Also I am using Oracle GoldenGate JKM (oracle to oracle OGG).
Thanks, I really appreciate the help.
Ajay
'ORA-00979' When Using The ODI CDC (Journalization) Feature With Knowledge Modules Including SQL Aggregate Functions [ID 424344.1]
Modified 11-MAR-2009 Type PROBLEM Status MODERATED
In this Document
Symptoms
Cause
Solution
Alternatives :
This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process, and therefore has not been subject to an independent technical review.
Applies to:
Oracle Data Integrator - Version: 3.2.03.01
This problem can occur on any platform.
Symptoms
After having successfully tested an ODI Integration Interface using an aggregate function such as MIN, MAX, SUM, it is necessary to set up Changed Data Capture operations by using Journalized tables.
However, during execution of the Integration Interface to retrieve only the Journalized records, problems arise at the Load Data step of the Loading Knowledge Module and the following message is displayed in ODI Log:
ORA-00979: not a GROUP BY expression
Cause
Using both CDC - Journalization and aggregate functions gives rise to complex issues.
Solution
Technically there is a work around for this problem (see below).
WARNING : Oracle engineers issue a severe warning that such a type of set up may give results that are not what may be expected. This is related to the way in which ODI Journalization is implemented as specific Journalization tables. In this case, the aggregate function will only operate on the subset which is stored (referenced) in the Journalization table and NOT over the entire Source table.
We recommend to avoid such types of Integration Interface set ups.
Alternatives :
1.The problem is due to the missing JRN_* columns in the generated SQL "Group By" clause.
The work around is to duplicate the Loading Knowledge Module (LKM), and in the clone, alter the "Load Data" step by editing the "Command on Source" tab and by replacing the following instruction:
<%=odiRef.getGrpBy()%>
with
<%=odiRef.getGrpBy()%>
<%if ((odiRef.getGrpBy().length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
,JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE
<%}%>
2. It is possible to develop two alternative solutions:
(a) Develop two separate and distinct Integration Interfaces:
* The first Integration Interface loads data into a temporary Table and specify the aggregate functions to be used in this initial Integration Interface.
* The second Integration Interfaces uses the temporary Table as a Source. Note that if you create the Table in the Interface, it is necessary to drag and drop the Integration Interface into the Source panel.
(b) Define two connections to the Database so that the Integration Interface references two distinct and separate Data Server Sources (one for the Journal, one for the other Tables). In this case, the aggregate function will be executed on the Source Schema.
Please find above the content from OTN.
It should show you this if you search this ID in the Search Knowledge Base
Cheers
Sachin -
How to use the multi-touch trackpad to adjust icon size?
After I upgraded from OS X 10.6 to 10.9.2, I am lost.
Every day I search the forum looking for the "feeling" of using Mac OS (I migrated from Windows to OS X 10.6). OS X 10.6 is just great.
Now the question: how do I use the multi-touch trackpad to adjust icon size with a simple pinch open or close gesture?
Thank you.
Welcome to Apple Support Communities. We're users here and don't speak for "Apple Inc."
Using the multi-touch trackpad to 'zoom' in or out is temporary.
The best way to permanently change icon size on the Desktop is to Command+click on an empty place on the Desktop screen and select Icon Size from the Desktop preference pane: -
What is the Use of Inner classes in Interface.
Hi All,
Most of us know that we can define inner classes in an interface, like:
public interface MyItf {
    Demo d = new Demo();
    class Demo {
        Demo() {
            // some additional code here
        }
    }
}
Now I have the following questions in my mind:
1. An interface is purely abstract. Then why inner classes inside the interface?
2. In what scenario can we utilize these inner classes of an interface?
Please share your views on this...
Thanks for your replies in advance.
"This we can do by defining the Demo class outside." That's no argument. You could write the programs in other languages, so why use Java? Just because you can use a top-level class instead, it's no argument against using an inner class. You also could make all attributes public... you don't do that either (I hope).
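A small sketch of the pattern under discussion (all names here - TempScale, Reading - are hypothetical; the JDK itself uses the same idiom in java.util.Map.Entry, a nested type that travels with the interface that produces it):

```java
// Hypothetical names for illustration only.
interface TempScale {
    // A class nested inside an interface is implicitly public and static.
    class Reading {
        final double degrees;
        Reading(double degrees) { this.degrees = degrees; }
    }

    // Interface fields are implicitly public static final, so the nested
    // class can be used to build shared constants for all implementations.
    Reading FREEZING = new Reading(0.0);

    // The nested type can appear in the interface's own method signatures.
    Reading convert(Reading r);
}

public class TempDemo {
    public static void main(String[] args) {
        // An anonymous implementation: Celsius -> Fahrenheit.
        TempScale toFahrenheit = new TempScale() {
            public TempScale.Reading convert(TempScale.Reading r) {
                return new TempScale.Reading(r.degrees * 9 / 5 + 32);
            }
        };
        System.out.println(toFahrenheit.convert(TempScale.FREEZING).degrees); // prints 32.0
    }
}
```

The nested class keeps a small helper type scoped to the interface that defines it, instead of polluting the package namespace with a separate top-level class.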
OK, also
tell me how to pass an object of the inner class Demo to
a method of the interface.
public abstract TheInterface.Demo doSomething(TheInterface.Demo d);
Can you give a real-world situation where this
concept can be used?
There are only very, very few. Just because it's possible, it doesn't mean it needs to be done or is done often. -
Interface Mappings required?
Hey,
I'm trying the blog /people/arpit.seth/blog/2005/06/27/rfc-scenario-using-bpm--starter-kit but with a web service instead of an RFC. I'm using the same WSDL data type for the file input and the SOAP request, and for the SOAP response and the file output.
Because I'm using the same data type, do I have to create any interface mappings or message mappings?
My graphical BPM monitoring in SXMB_MONI shows that it gets as far as the synchronous sending of the SOAP message, but there it stops with the following error:
<SAP:Code area="UNKNOWN">ModuleUnknownException</SAP:Code>
<SAP:P1 />
<SAP:P2 />
<SAP:P3 />
<SAP:P4 />
<SAP:AdditionalText>com.sap.aii.af.mp.module.ModuleException: com.sap.aii.af.ra.ms.api.DeliveryException: Application:EXCEPTION_DURING_EXECUTE: caused by: com.sap.aii.af.ra.ms.api.DeliveryException: Application:EXCEPTION_DURING_EXECUTE: at com.sap.aii.af.mp.soap.ejb.XISOAPAdapterBean.process(XISOAPAdapterBean.java:1111) at
thx
Chris
Christian,
You can skip the message and interface mappings for the mappings between the file and the BPM and from the BPM to the file,
but in the case of the synchronous RFC, or SOAP in your case, you would be required to have the mapping.
regards,
Vishal -
I might also add, you can run 10.6.8 Server in virtualization if you need access to Rosetta applications:
https://discussions.apple.com/docs/DOC-2295#ROSETTALION -
Question about using Runtime.getRuntime();
hi all
I have a question about using Runtime.getRuntime(). If I use this to get a Runtime reference to run an external program, is it considered starting a new thread inside the thread that starts it?
Is it safe to do this in a Session EJB? If not, what can you recommend? Thanks
"If I use this to get a runtime reference to run an external program, is it considered as starting a new thread inside the thread that starts it?"
No. Starting a process starts a process. Threads have nothing to do with it.
"Is it safe to do it in the Session EJB? If not, what can you recommend? Thanks"
So what? Run another process? If you want to run another process in Java then your choices are to use Runtime.exec() or use JNI. And using JNI will probably end up doing exactly the same thing as Runtime.exec().
"Safe" is harder. Typically, to correctly use Runtime.exec() you must use threads. And as noted, threads ideally should not be used in an EJB. You can use them, but if you do, you had better understand why they didn't want you using them in the first place. You had also better be sure that you really want to wait for the process to complete.
Other than that Runtime.exec() is safe because it can't crash the VM like other interfaces can (like JNI.)
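As a sketch of the "use threads with Runtime.exec()" point: each output pipe of the child process gets its own draining thread, so waitFor() cannot deadlock on a full stdout/stderr pipe buffer. The command ("java -version") and class name are assumed examples only:

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;

public class ExecDemo {
    // Drain a stream on its own thread so the child process cannot block
    // writing to a full pipe buffer -- the "threads" the answer mentions.
    static Thread drain(final InputStream in, final StringBuilder sink) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    BufferedReader r = new BufferedReader(new InputStreamReader(in));
                    String line;
                    while ((line = r.readLine()) != null) sink.append(line).append('\n');
                } catch (Exception ignored) {}
            }
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        // Assumed command for illustration; substitute your own program.
        Process p = Runtime.getRuntime().exec(new String[] {"java", "-version"});
        StringBuilder out = new StringBuilder(), err = new StringBuilder();
        Thread to = drain(p.getInputStream(), out);
        Thread te = drain(p.getErrorStream(), err);
        int exit = p.waitFor(); // safe to wait: both pipes are being drained
        to.join(); te.join();
        System.out.println("exit=" + exit);
    }
}
```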