Downstream Capturing - Two Sources
Hi
We are planning to have two source databases and one consolidated/merge database, and we plan to use Streams. I have just configured downstream real-time capture from one source database.
Since I can have only one real-time downstream capture process, the second source will need an archived-log downstream capture process.
How do I configure this archived-log downstream capture process? Where in the code is the difference between real-time and archived-log capture?
Thanks
Sree
You will find the steps for configuring downstream capture in the 11.2 Streams Replication Administrator's Guide towards the end of chapter 1. Here is the URL to the online doc section that gives examples of the source init.ora parameter LOG_ARCHIVE_DEST_2 for real-time mining and archived-log mining:
http://docs.oracle.com/cd/E11882_01/server.112/e10705/prep_rep.htm#i1014093
In addition to the different LOG_ARCHIVE_DEST_* parameter settings between real-time and archived-log mining, real-time mining requires standby logfiles at the downstream mining database. The instructions for that are also in the Streams Replication Administrator's Guide.
Finally, the capture parameter DOWNSTREAM_REAL_TIME_MINE must be set to Y for real-time mining. The default is N for archive log mining.
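As a minimal sketch of where the two modes diverge (the capture name here is hypothetical): the source-side LOG_ARCHIVE_DEST_2 settings differ as shown in the linked doc section (the archived-log variant adds a TEMPLATE clause and does not need standby redo logs), and at the downstream database this capture parameter selects the mode:
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'real_time_capture',         -- hypothetical capture name
    parameter    => 'downstream_real_time_mine',
    value        => 'Y');                        -- 'N' (the default) mines archived logs
END;
/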
Chapter 2 in that same manual includes how to configure downstream capture using the MAINTAIN_SCHEMAS procedure.
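For illustration, a minimal MAINTAIN_SCHEMAS call might look like the following sketch; every name here is hypothetical, and the procedure generates the capture, queue, propagation, and apply components for you:
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',
    source_directory_object      => 'SOURCE_DIR',       -- directory objects used for
    destination_directory_object => 'DEST_DIR',         -- the Data Pump instantiation
    source_database              => 'src.example.com',
    destination_database         => 'dest.example.com',
    capture_name                 => 'capture_hr',
    capture_queue_name           => 'capture_q',
    apply_name                   => 'apply_hr');
END;
/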
Similar Messages
-
Source DB on RAC, Archived Log Downstream Capture: Logs could not be shipped
I don't have much experience in Oracle RAC.
We are implementing Oracle Streams using Archived-Log Downstream capture. Source and Target DBs are 11gR2.
The source DB is in RAC (uses scan listeners).
To prevent users from accessing the source DB, the DBA of the source DB shut down the listener on port 1521 (changed the port number to 0000 in some file). There was one more listener, on port 1523, that was up and running. We used port 1523 to create the DB link between the two databases.
But because the listener on port 1521 was down, the archived logs from the source DB could not be shipped to the shared drive. As per the source DB's DBA, the two instances in the RAC use this listener/port to communicate with each other.
As such, when we ran the DBMS_CAPTURE_ADM.CREATE_CAPTURE procedure from the target DB, the LogMiner data dictionary that was extracted from the source DB to the redo logs was not available to the target DB, and the Streams implementation failed.
It seems that for the archived logs to ship from the source DB to the shared drive, we need the listener on port 1521 up and running. (Correct me if I am wrong.)
My question is:
Is there a way to shut down a listener to prevent users from accessing the DB while keeping another listener up so that the archived logs can be shipped to the shared drive? If so, can you please give details/an example?
We asked the same question to the DBA of the source DB and we were told that it could not be done.
Thanks in advance.
Make sure that the dblink's "using" clause references a service name that uses a listener that is up and operational. There is no requirement that the listener be on port 1521 for Streams or for shipping logs.
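A minimal sketch, assuming a service STRMSRC is registered with the port-1523 listener (host name, service name, and credentials are all hypothetical):
CREATE DATABASE LINK source_db_link
  CONNECT TO strmadmin IDENTIFIED BY password
  USING '(DESCRIPTION=
           (ADDRESS=(PROTOCOL=TCP)(HOST=src-host)(PORT=1523))
           (CONNECT_DATA=(SERVICE_NAME=STRMSRC)))';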
Chapter 4 of the 2Day+ Data Replication and Integration manual has instructions for configuring downstream capture in Tutorial: Configuring Two-Database Replication with a Downstream Capture Process
http://docs.oracle.com/cd/E11882_01/server.112/e17516/tdpii_repcont.htm#BABIJCDG
-
Multiple sources with single downstream capture
Is it possible to have multiple source machines all send their redo logs to a single downstream capture DB that will collect LCRs and queue the LCRs for all the source machines?
Yes, that's what downstream replication is all about. Read the 11g manual if you want to know how to do that.
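Concretely, each source database gets its own capture process at the downstream database. A sketch with hypothetical names (note that only one of them could have downstream_real_time_mine set to Y; the rest must mine archived logs):
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'strmadmin.streams_queue',
    capture_name      => 'capture_src1',
    source_database   => 'src1.example.com',
    use_database_link => TRUE);
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'strmadmin.streams_queue',
    capture_name      => 'capture_src2',
    source_database   => 'src2.example.com',
    use_database_link => TRUE);
END;
/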
-
Configure log file transfer to downstream capture database!
Dear all,
I am setting up bidirectional replication between two Linux-based database servers running Oracle 11gR1.
I am following the Oracle Streams Administrator's Guide and have completed all the pre-configuration tasks, but I am confused by the step where we have to configure log file transfer to the downstream capture database.
I am unable to understand this from the documentation.
I mean, how do I configure Oracle Net so that the source databases can communicate with each other in bi-directional replication?
"Configure authentication at both databases to support the transfer of redo data" — how can I do this?
The third thing is the parameter setting, which obviously I can do.
Kindly help me through this step.
Regards, Imran
And what about this:
Configure authentication at both databases to support the transfer of redo data?
Thanks, Imran
For communication between the two databases, you create a Streams administrator at both databases. The strmadmin users talk to each other.
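Redo transport authentication itself is normally handled with password files rather than the Streams administrator accounts. A minimal sketch under that assumption (file names depend on your ORACLE_SID):
-- 1. Create a password file on each server (run as the oracle OS user):
--      orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=<sys_password>
-- 2. Use the same SYS password on both databases (or copy the password file across),
--    and make sure REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE in each spfile.
-- 3. Verify which users are in the password file:
SELECT username, sysdba FROM v$pwfile_users;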
Regards,
S.K.
-
Downstream Capture using archivelog files
DB Version 11.2
I have not been able to find a demo/tutorial of using Streams with downstream capture where only archived logs from the source DB are available (they were moved to a remote system using a DVD/CD/tape). All the examples that I have been able to find use a network connection.
Does anyone know of such an example?
Thank you.
Hi!
Could you please elaborate on your question more clearly?
Explanation of downstream capture:
I am assuming that we have two databases: one production database and one DR (disaster recovery) database.
We want changes from the production database to be replicated to the DR database.
For performance reasons we want no capture process to be running on production.
To achieve that, we use downstream capture, in which both the capture and apply processes run on the DR database. The question then arises of how that capture process will capture changes from the production database.
So we configure a Data Guard-style redo transport between these two databases. The production database must be in archivelog mode, whereas the DR database can be in noarchivelog mode. Archived log files from the production database are then copied to the DR database automatically over the network connection, and the capture process captures changes from these archives.
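(If the logs genuinely cannot travel over the network, downstream capture also supports explicit log file assignment: copy the archived logs to the mining host yourself and register each one for the capture process. A sketch, with a hypothetical path and capture name:)
ALTER DATABASE REGISTER LOGICAL LOGFILE
  '/u01/transferred_logs/arch_1_107.arc' FOR 'downstream_capture';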
That said, why do you want the archives to be copied onto DVDs?
regards
Usman
-
RAC for downstream capture process
I have created a real-time downstream capture process in a RAC to protect the process from failure, but I have some doubts about this:
1. Do I need to create standby redo log groups for each instance in the cluster, or are they shared by all?
2. If one instance goes down and we send redo from the source via the following service defined in the source TNSNAMES:
RAC_STR=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=TCP)(HOST=VIP-instance1)(PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=VIP-instance1)(PORT=1521)))
    (CONNECT_DATA=
      (SERVICE_NAME=RAC-global_name)))
will the configured process be able to continue capturing changes without redo data loss?
Appreciate any explanation.
You will not experience data loss if one of the RAC instances goes down: the next one will take over your downstream capture process and continue to mine redo from the source database. But you definitely need to correct your tnsnames entry, because it is pointing twice to the same RAC instance "VIP-instance1".
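A corrected sketch of the entry (VIP-instance2 is an assumed name for the second node's VIP; substitute your own):
RAC_STR=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (FAILOVER=on)
      (ADDRESS=(PROTOCOL=TCP)(HOST=VIP-instance1)(PORT=1521))
      (ADDRESS=(PROTOCOL=TCP)(HOST=VIP-instance2)(PORT=1521)))
    (CONNECT_DATA=
      (SERVICE_NAME=RAC-global_name)))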
Downstream capture on RAC unfortunately has other problems, which I have already experienced, but maybe they will not concern your configuration. The undocumented problems (or bugs which are open and not solved yet) are:
1. If your RAC DB has a physical standby, it can happen that it stops registering redo from the upstream Streams database.
2. If your RAC DB has both downstream and local capture and more than two RAC instances are running, the local capture can't continue with the current redo log (only after a log switch).
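On question 1 above: standby redo logs live on shared storage, but they are defined per redo thread, so you add groups for each instance's thread. A sketch, with hypothetical paths and sizes (match the size of your online redo logs):
-- Thread 1 (instance 1); hypothetical ASM path:
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
  GROUP 5 ('+DATA/dbms1/srl_t1_g5.log') SIZE 512M;
-- Thread 2 (instance 2):
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
  GROUP 6 ('+DATA/dbms1/srl_t2_g6.log') SIZE 512M;
-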
Error running Archived-Log Downstream Capture Process
I have created an archived-log downstream capture process with reference to the following link:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_ccap.htm#i1011654
After starting the capture process, I get the following error in the trace:
============================================================================
Trace file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_13572.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /home/oracle/app/oracle/product/11.2.0/dbhome_1
System name: Linux
Node name: localhost.localdomain
Release: 2.6.18-194.el5
Version: #1 SMP Fri Apr 2 14:58:14 EDT 2010
Machine: x86_64
Instance name: orcl
Redo thread mounted by this instance: 1
Oracle process number: 37
Unix process pid: 13572, image: [email protected] (CP01)
*** 2011-08-20 14:21:38.899
*** SESSION ID:(146.2274) 2011-08-20 14:21:38.899
*** CLIENT ID:() 2011-08-20 14:21:38.899
*** SERVICE NAME:(SYS$USERS) 2011-08-20 14:21:38.899
*** MODULE NAME:(STREAMS) 2011-08-20 14:21:38.899
*** ACTION NAME:(STREAMS Capture) 2011-08-20 14:21:38.899
knlcCopyPartialCapCtx(), setting default poll freq to 0
knlcUpdateMetaData(), before copy IgnoreUnsuperrTable:
source:
Ignore Unsupported Error Table: 0 entries
target:
Ignore Unsupported Error Table: 0 entries
knlcUpdateMetaData(), after copy IgnoreUnsuperrTable:
source:
Ignore Unsupported Error Table: 0 entries
target:
Ignore Unsupported Error Table: 0 entries
knlcfrectx_Init: rs=STRMADMIN.RULESET$_66, nrs=., cuid=0, cuid_prv=0, flags=0x0
knlcObtainRuleSetNullLock: rule set name "STRMADMIN"."RULESET$_66"
knlcObtainRuleSetNullLock: rule set name
knlcmaInitCapPrc+
knlcmaGetSubsInfo+
knlqgetsubinfo
subscriber name EMP_DEQ
subscriber dblinke name
subscriber name APPLY_EMP
subscriber dblinke name
knlcmaTerm+
knlcmaTermSrvs+
knlcmaTermSrvs-
knlcmaTerm-
knlcCCAInit()+, err = 26802
knlcnShouldAbort: examining error stack
ORA-26802: Queue "STRMADMIN"."STREAMS_QUEUE" has messages.
knlcnShouldAbort: examing error 26802
knlcnShouldAbort: returning FALSE
knlcCCAInit: no combined capture and apply optimization err = 26802
knlzglr_GetLogonRoles: usr = 91,
knlqqicbk - AQ access privilege checks:
userid=91, username=STRMADMIN
agent=STRM05_CAPTURE
knlqeqi()
knlcRecInit:
Combined Capture and Apply Optimization is OFF
Apply-state checkpoint mode is OFF
last_enqueued, last_acked
0x0000.00000000 [0] 0x0000.00000000 [0]
captured_scn, applied_scn, logminer_start, enqueue_filter
0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908] 0x0000.0004688c [288908]
flags=0
Starting persistent Logminer Session : 13
krvxats retval : 0
CKPT_FREE event=FALSE CCA=FALSE Checkptfreq=1000 AV/CDC flags=0
krvxssp retval : 0
krvxsda retval : 0
krvxcfi retval : 0
#1: krvxcfi retval : 0
#2: krvxcfi retval : 0
About to call krvxpsr : startscn: 0x0000.0004688c
state before krvxpsr: 0
dbms_logrep_util.get_checkpoint_scns(): logminer sid = 13 applied_scn = 288908
dbms_logrep_util.get_checkpoint_scns(): prev_ckpt_scn = 0 curr_ckpt_scn = 0
*** 2011-08-20 14:21:41.810
Begin knlcDumpCapCtx:*******************************************
Error 1304 : ORA-01304: subordinate process error. Check alert and trace logs
Capture Name: STRM05_CAPTURE : Instantiation#: 65
*** 2011-08-20 14:21:41.810
++++ Begin KNST dump for Sid: 146 Serial#: 2274
Init Time: 08/20/2011 14:21:38
++++Begin KNSTCAP dump for : STRM05_CAPTURE
Capture#: 1 Logminer_Id: 13 State: DICTIONARY INITIALIZATION [ 08/20/2011 14:21:38]
Capture_Message_Number: 0x0000.00000000 [0]
Capture_Message_Create_Time: 01/01/1988 00:00:00
Enqueue_Message_Number: 0x0000.00000000 [0]
Enqueue_Message_Create_Time: 01/01/1988 00:00:00
Total_Messages_Captured: 0
Total_Messages_Created: 0 [ 01/01/1988 00:00:00]
Total_Messages_Enqueued: 0 [ 01/01/1988 00:00:00]
Total_Full_Evaluations: 0
Elapsed_Capture_Time: 0 Elapsed_Rule_Time: 0
Elapsed_Enqueue_Time: 0 Elapsed_Lcr_Time: 0
Elapsed_Redo_Wait_Time: 0 Elapsed_Pause_Time: 0
Apply_Name :
Apply_DBLink :
Apply_Messages_Sent: 0
++++End KNSTCAP dump
++++ End KNST DUMP
+++ Begin DBA_CAPTURE dump for: STRM05_CAPTURE
Capture_Type: DOWNSTREAM
Version:
Source_Database: ORCL2.LOCALDOMAIN
Use_Database_Link: NO
Logminer_Id: 13 Logfile_Assignment: EXPLICIT
Status: ENABLED
First_Scn: 0x0000.0004688c [288908]
Start_Scn: 0x0000.0004688c [288908]
Captured_Scn: 0x0000.0004688c [288908]
Applied_Scn: 0x0000.0004688c [288908]
Last_Enqueued_Scn: 0x0000.00000000 [0]
Capture_User: STRMADMIN
Queue: STRMADMIN.STREAMS_QUEUE
Rule_Set_Name[+]: "STRMADMIN"."RULESET$_66"
Checkpoint_Retention_Time: 60
+++ End DBA_CAPTURE dump
+++ Begin DBA_CAPTURE_PARAMETERS dump for: STRM05_CAPTURE
PARALLELISM = 1 Set_by_User: NO
STARTUP_SECONDS = 0 Set_by_User: NO
TRACE_LEVEL = 7 Set_by_User: YES
TIME_LIMIT = -1 Set_by_User: NO
MESSAGE_LIMIT = -1 Set_by_User: NO
MAXIMUM_SCN = 0xffff.ffffffff [281474976710655] Set_by_User: NO
WRITE_ALERT_LOG = TRUE Set_by_User: NO
DISABLE_ON_LIMIT = FALSE Set_by_User: NO
DOWNSTREAM_REAL_TIME_MINE = FALSE Set_by_User: NO
MESSAGE_TRACKING_FREQUENCY = 2000000 Set_by_User: NO
SKIP_AUTOFILTERED_TABLE_DDL = TRUE Set_by_User: NO
SPLIT_THRESHOLD = 1800 Set_by_User: NO
MERGE_THRESHOLD = 60 Set_by_User: NO
+++ End DBA_CAPTURE_PARAMETERS dump
+++ Begin DBA_CAPTURE_EXTRA_ATTRIBUTES dump for: STRM05_CAPTURE
USERNAME Include:YES Row_Attribute: YES DDL_Attribute: YES
+++ End DBA_CAPTURE_EXTRA_ATTRIBUTES dump
++ LogMiner Session Dump Begin::
SessionId: 13 SessionName: STRM05_CAPTURE
Start SCN: 0x0000.00000000 [0]
End SCN: 0x0000.00046c2d [289837]
Processed SCN: 0x0000.0004689e [288926]
Prepared SCN: 0x0000.000468d4 [288980]
Read SCN: 0x0000.000468e2 [288994]
Spill SCN: 0x0000.00000000 [0]
Resume SCN: 0x0000.00000000 [0]
Branch SCN: 0x0000.00000000 [0]
Branch Time: 01/01/1988 00:00:00
ResetLog SCN: 0x0000.00000001 [1]
ResetLog Time: 08/18/2011 16:46:59
DB ID: 740348291 Global DB Name: ORCL2.LOCALDOMAIN
krvxvtm: Enabled threads: 1
Current Thread Id: 1, Thread State 0x01
Current Log Seqn: 107, Current Thrd Scn: 0x0000.000468e2 [288994]
Current Session State: 0x20005, Current LM Compat: 0xb200000
Flags: 0x3f2802d8, Real Time Apply is Off
+++ Additional Capture Information:
Capture Flags: 4425
Logminer Start SCN: 0x0000.0004688c [288908]
Enqueue Filter SCN: 0x0000.0004688c [288908]
Low SCN: 0x0000.00000000 [0]
Capture From Date: 01/01/1988 00:00:00
Capture To Date: 01/01/1988 00:00:00
Restart Capture Flag: NO
Ping Pending: NO
Buffered Txn Count: 0
-- Xid Hash entry --
-- LOB Hash entry --
-- No TRIM LCR --
Unsupported Reason: Unknown
--- LCR Dump not possible ---
End knlcDumpCapCtx:*********************************************
*** 2011-08-20 14:21:41.810
knluSetStatus()+{
*** 2011-08-20 14:21:44.917
knlcapUpdate()+{
Updated streams$_capture_process
finished knlcapUpdate()+ }
finished knluSetStatus()+ }
knluGetObjNum()+
knlsmRaiseAlert: keltpost retval is 0
kadso = 0 0
KSV 1304 error in slave process
*** 2011-08-20 14:21:44.923
ORA-01304: subordinate process error. Check alert and trace logs
knlz_UsrrolDes()
knstdso: state object 0xb644b568, action 2
knstdso: releasing so 0xb644b568 for session 146, type 0
knldso: state object 0xa6d0dea0, action 2 memory 0x0
kadso = 0 0
knldso: releasing so 0xa6d0dea0
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-01304: subordinate process error. Check alert and trace logs
Any suggestions???
Output of above query:
==============================
CAPTURE_NAME STATUS ERROR_MESSAGE
STRM05_CAPTURE ABORTED ORA-01304: subordinate process error. Check alert and trace logs
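(For reference, output with these columns would typically come from a DBA_CAPTURE status query along the following lines; the original post omitted the query itself, so this is an assumption:)
SELECT capture_name, status, error_message
  FROM dba_capture
 WHERE capture_name = 'STRM05_CAPTURE';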
Alert log.xml
=======================
<msg time='2011-08-25T16:58:01.865+05:30' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='localhost.localdomain' host_addr='127.0.0.1' module='STREAMS'
pid='30921'>
<txt>Errors in file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_cp01_30921.trc:
ORA-01304: subordinate process error. Check alert and trace logs
</txt>
</msg>
The orcl_cp01_30921.trc has the same content posted in the first message.
-
Downstream Capture and Extended Datatype Support
Wondering if anyone has tried downstream capture on tables with datatypes that are not natively supported (e.g. MDSYS.SDO_GEOMETRY)?
I've had a look at this for upstream capture, but at my site we want to:
a) reduce the possible impact of the capture processes on the source database,
and
b) avoid adding triggers to the source tables being replicated, as the application vendor isn't keen on changes to "their" schema. The shadow tables might not be quite such an issue in the source schema, but if they can be avoided then that would be a bonus too.
My first thoughts are that with downstream capture, EDS would still need to be applied to these tables in the source database to handle these data types.
If anyone has been down this path and has some insights I'd love to hear them.
Thanks
Hi Bernard,
I have read the generated SQL files (see the README file in Metalink: Extended Datatype Support (EDS) for Streams [ID 556742.1]):
run1_source_data_1.sql
run1_source_streams_2.sql
run1_dest_data_1.sql
run1_dest_streams_2.sql
and understand now how it works.
But I still have some problems finding the description of the undocumented functions like:
SYS_OP_NII(...),
SYS_ET_IMAGE_TO_BLOB and
the hint /*+ relational("A") restrict_all_ref_cons*/ in INSERT and UPDATE.
I have marked it as answered because I waited too long and nobody gave me a comment or answer.
I thought nobody was looking at my case.
But now you are the first one, I'm happy. :-)
regards
hqt200475
-
Join two source tables and replicat into a target table with BLOB
Hi,
I am working on an integration to source transaction data from a legacy application to an ESB using GG.
What I need to do is join two source tables (to de-normalize the area_id) to form the transaction detail, then transform by concatenating the transaction detail fields into a values-only CSV, and replicate it to the target ESB IN_DATA table's BLOB content field.
Based on what I have researched, a lookup joining two source tables requires SQLEXEC, which doesn't support BLOB.
What alternatives are there, and what does GG recommend for such a use case?
Any helpful advice is much appreciated.
thanks,
Xiaocun
Xiaocun,
Not sure what your data looks like, but it's possible the comma-separated value (CSV) requirement may be solved by something like this in your MAP statement:
colmap (usedefaults,
my_blob = @STRCAT (col02, ",", col03, ",", col04));
Since this is not 1:1 you'll be using a sourcedefs file, which is nice because it will do the datatype conversion for you under the covers (also a nice trick when migrating long raws to blobs). So col02 can be varchar2, col03 a number, and col04 a clob and they'll convert in real-time.
Mapping two tables to one is simple enough with two MAP statements; the harder challenge is joining operations from separate transactions, because OGG is operation-based and doesn't work on aggregates. It's possible you could end up using a combination of built-in parameters and functions with SQLEXEC and SQL/PL/SQL for more complicated scenarios, all depending on the design of the target table. But you have several scenarios to address.
For example, is the target table really a history table or are you actually going to delete from it? If just the child is deleted but you don't want to delete the whole row yet, you may want to use NOCOMPRESSDELETES & UPDATEDELETES and COLMAP a new flag column to denote it was deleted. It's likely that the insert on the child may really mean an update to the target (see UPDATEINSERTS).
If you need to update the LOB by appending or prepending new data then that's going to require some custom work, staging tables and a looping script, or a user exit.
Some parameters you may want to become familiar with if not already:
COLS | COLSEXCEPT
COLMAP
OVERRIDEDUPS
INSERTDELETES
INSERTMISSINGUPDATES
INSERTUPDATES
GETDELETES | IGNOREDELETES
GETINSERTS | IGNOREINSERTS
GETUPDATES | IGNOREUPDATES
Good luck,
-joe
-
Hello All,
I have the following scenario.
I have two source systems, ECDCLNT200 and ECDCLNT230, and one BI system, BIDCLNT200. Client 200 is used for configuration and client 230 is used for data loading and validation. First I created source system ECDCLNT230 in RSA1 and it appeared under the SAP menu, but when I try to create ECDCLNT200 under the SAP menu it appears automatically under the BI menu. I deleted it and tried again, but the problem persists.
If I keep the same configuration, meaning ECDCLNT200 under the BI menu, will it create any problem in the future? Or, if I want to create ECDCLNT200 under the SAP menu, what is the other option?
I tried both ways: first I right-clicked on the main node and selected Create Source System, and the second time I right-clicked on SAP and selected Create Source System, but I couldn't create it.
Please let me know the solution.
Regards,
Komik Shah
Thanks Vincent,
I got the solution through an SAP note.
I have assigned the points.
Thank you again
Regards,
Komik Shah
-
Process chain - Two source system issue
Hi Experts,
I face a problem when I transport a process chain from BW DEV to Quality.
I have two source systems in Quality, ECI and EXS. EXS is a sandbox server (source system) which we no longer use.
When I transport the process chain from DEV to QA, two DTPs are automatically added to the process chain in Quality, one for each source system (ECI and EXS), whereas I do not want the data to be loaded from the EXS client.
I checked the BW dev system, it has only one DTP in the process chain.
Now I want to eliminate the other DTP in the process chain in Quality.
Please suggest.
Regards,
Srini
I don't think a DTP for the EXS system got created automatically when you transported the PC to QA (when it doesn't actually exist in DEV).
Please check your DEV system once again; there might be a DTP from the EXS system as well. Otherwise, go to the release TR and check its contents: if you find two DTPs there, then both exist in the dev system.
In any case, if you want to delete the other DTP from the PC in QA, you can delete it directly from the PC itself: go to change mode of the PC, select the DTP you want to delete, right-click, click on Remove Process, and then activate the PC.
Note: You must have authorization to edit the PC in QA; only then will you be able to make the changes.
-
Combining two source fields in Import Manager
Hi,
I am trying to combine two source fields, say fld1 with 5 digits and fld2 with 3 digits, so that the combined fld3 of 8 digits can be mapped to an 8-digit field on the destination MDM side.
Any suggestions on how to do this are highly appreciated.
Thanks,
-reo
Hi Reo,
I am just adding details on how to combine fields in the Import Manager.
To combine two or more existing partitions for a destination node, the steps are as follows:
1. In the appropriate source hierarchy tree, select the node whose partitions you want to combine.
2. Click on the Partition Field/Value tab to make it the active tab.
3. In the appropriate Partition list, select the two or more partition items you want to combine into a single partition.
4. Click on the Combine button, or right-click on one of the items and choose Combine Partitions from the context menu.
5. MDM combines the selected partition items.
6. Now you can map this partition directly to the destination field.
But from how your requirement looks, you do not need to set a delimiter.
Hope this helps; please let me know the results.
Thanks,
Shiv Prashant Dixit
-
Hi all,
Can I create two source systems in BW (RSA1) for the same R/3 system? I would like to have one for FI and CO, and the other for HR and PY.
Thanks a lot,
Regards
Vic
Hi Roberto,
I need them for the same client.
Regards
Vic
-
Hi
I am doing a POC on joining two MSSQL 2000 source tables and populating an Oracle table.
I could not find the joiner operator and the other transformation operators.
Can anyone help me in using them?
Thanks,
Ganesh
Hi,
First, just drag and drop the two source tables into the interface.
Then connect the two tables with each other using your mouse while selecting the two columns that you want to use in your join.
Transformations can be found by selecting a field in your target datastore; a screen will appear in the properties panel of your interface editor where you can edit transformations.
Good luck ...
Steffen
-
Can we capture two types of Serial Numbers for a material?
Hi,
I have a scenario where I need to capture two different types of serial numbers for a material. The scenario is very similar to the one below:
Say a car is a serialized material; I want to capture the engine no. and chassis no. for each car.
I know that in standard serial number management we can capture one kind of serial number for an item, but can we capture two types of serial numbers for an item? If yes, how can we do that?
Please help.
Thanks,
Parimal.
Hi,
No it is not possible to have two serial numbers for a material.
The main attribute of a serial number is that it is UNIQUE.
You could instead use the vehicle number as the serial number and keep the other numbers under the Manufacturer data of the General data tab of the IQ01 transaction (equipment master creation).
You can also rename the manufacturer data fields, for example Model number to Engine number and Manufacturer serial no. to Chassis no., using the CMOD transaction.
Regards