Source Rows to Target Column mapping
Hi Gurus,
Please guide me on the below issue.
Source Side tables
CUSTOMER_MASTER (master table) - Customer ID is the primary key
CUSTOMER_ADDRESS (child table) - multiple rows per customer, one for each address type (OFFICE, RESIDENCE, etc.)
Target Table
TRG_CUSTOMER - Customer ID is the primary key. This target table has a column for each address type, such as OFFICE_ADDRESS and RES_ADDRESS.
I have to hand off all the rows of CUSTOMER_MASTER to TRG_CUSTOMER; if multiple addresses exist for a customer in the CUSTOMER_ADDRESS table, I have to transfer those values to the respective target columns.
Could you please share an idea of how to implement this in the interface?
Thanks,
Kumbar
Hi ,
I am using some of the columns of both tables to map the target table. In the target table we have columns OFFICE_ADDRESS_LINE1 to OFFICE_ADDRESS_LINE3 and home_ADDRESS_LINE1 to home_ADDRESS_LINE3, but in the source table CUSTOMER_ADDRESS we have a row for each type of address. How do I do this mapping and transformation?
Please help me.
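For reference, the pivot the interface has to perform can be pictured with a small sketch (plain Java for illustration only, not ODI code; the class and field names are invented): each customer's CUSTOMER_ADDRESS rows are keyed by address type, which then feed the OFFICE/RESIDENCE target columns.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AddressPivot {
    // One CUSTOMER_ADDRESS row: customer id, address type, address text.
    // (Names are illustrative, not the real table's column names.)
    public record AddressRow(String customerId, String type, String address) {}

    // Pivot child rows into one map per customer (type -> address),
    // mirroring the OFFICE_ADDRESS / RES_ADDRESS target columns.
    public static Map<String, Map<String, String>> pivot(List<AddressRow> rows) {
        Map<String, Map<String, String>> byCustomer = new HashMap<>();
        for (AddressRow r : rows) {
            byCustomer.computeIfAbsent(r.customerId(), k -> new HashMap<>())
                      .put(r.type(), r.address());
        }
        return byCustomer;
    }

    public static void main(String[] args) {
        var rows = List.of(
            new AddressRow("C1", "OFFICE", "1 Main St"),
            new AddressRow("C1", "RESIDENCE", "2 Oak Ave"));
        Map<String, String> c1 = pivot(rows).get("C1");
        System.out.println(c1.get("OFFICE"));    // prints "1 Main St"
        System.out.println(c1.get("RESIDENCE")); // prints "2 Oak Ave"
    }
}
```

In ODI itself the same effect is usually achieved by joining CUSTOMER_ADDRESS to CUSTOMER_MASTER once per address type (or with conditional expressions per target column).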
Similar Messages
-
Hi All,
I am reading text (Notepad) files and inserting the data into SQL tables.
While performing the SQL bulk copy, the line bulkcopy.WriteToServer(dt); throws the data-type-related exception mentioned in the subject.
Please go through my logic and tell me what to change to avoid this error:
public void Main()
{
    string[] filePaths = Directory.GetFiles(@"C:\Users\jainruc\Desktop\Sudhanshu\master_db\Archive\test\content_insert\");
    for (int k = 0; k < filePaths.Length; k++)
    {
        string[] lines = System.IO.File.ReadAllLines(filePaths[k]);

        // Table name comes from the file name (the part before the extension)
        string[] pathArr = filePaths[k].Split('\\');                  // was filePaths[0]: always used the first file's name
        string tablename = pathArr[pathArr.Length - 1].Split('.')[0]; // was pathArr[9]: breaks if the path depth changes
        DataTable dt = new DataTable(tablename);

        // The second line of the file holds the pipe-delimited column names
        string[] arrColumns = lines[1].Split(new char[] { '|' });
        foreach (string col in arrColumns)
            dt.Columns.Add(col);

        // Remaining lines hold the data rows; empty fields become DBNull
        for (int i = 2; i < lines.Length; i++)
        {
            string[] columnsvals = lines[i].Split(new char[] { '|' });
            DataRow dr = dt.NewRow();
            for (int j = 0; j < columnsvals.Length; j++)
            {
                if (string.IsNullOrEmpty(columnsvals[j]))
                    dr[j] = DBNull.Value;
                else
                    dr[j] = columnsvals[j];
            }
            dt.Rows.Add(dr);
        }

        using (SqlConnection conn = new SqlConnection())
        {
            conn.ConnectionString = "Data Source=UI3DATS009X;" + "Initial Catalog=BHI_CSP_DB;" + "User Id=sa;" + "Password=3pp$erv1ce$4";
            conn.Open();
            SqlBulkCopy bulkcopy = new SqlBulkCopy(conn);
            bulkcopy.DestinationTableName = dt.TableName;
            bulkcopy.WriteToServer(dt);
            conn.Close();
        }
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}
Issue 1: I am reading the file and getting all columns and values into my DataTable. While inserting, date/time or integer fields need an explicit conversion. How do I write this for specific columns before bulkcopy.WriteToServer(dt);?
Issue 2: The file does not contain all columns, nor in a specific sequence. I can add a few columns, which I am doing now, but then the DataTable holds my columns plus the file's columns, and while inserting, how do I assign values to the particular columns?
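For Issue 1, the per-column conversion idea can be sketched like this (illustrative Java rather than the SSIS C#; the type tags and date pattern are assumptions — in the script task you would do the equivalent on the DataTable columns before WriteToServer):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class ColumnConvert {
    // Convert the raw string read from the file into a typed value for one
    // column: dates and integers need explicit conversion before the bulk
    // copy. The "sqlType" tags and date format here are assumptions.
    public static Object convert(String raw, String sqlType) {
        if (raw == null || raw.isEmpty()) {
            return null; // the C# code maps this case to DBNull.Value
        }
        switch (sqlType) {
            case "int":
                return Integer.parseInt(raw.trim());
            case "date":
                return LocalDate.parse(raw.trim(), DateTimeFormatter.ofPattern("yyyy-MM-dd"));
            default:
                return raw;
        }
    }

    public static void main(String[] args) {
        System.out.println(convert("42", "int"));          // prints 42
        System.out.println(convert("2015-06-01", "date")); // prints 2015-06-01
    }
}
```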
sudhanshu sharma Do good and cast it into river :)
Hi,
I think you'll have to do an explicit column mapping if they are not in exact sequence in both source and destination.
Have a look at this link:
https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopycolumnmapping(v=vs.110).aspx
Good Luck!
Kaur.
Please mark as answer if this resolves your issue. -
Row to column mapping and combining columns
Hi
I have one issue: I want to concatenate some columns and map rows to columns as below. I am giving one example; I have more than 1,000 employees like this.
Example
Srno  Eno  Ename  Job       Datestart
1     01   jack   clerk     01-Jan-2008
2     01   jack   snrclerk  01-Jun-2009
3     01   jack   officer   01-Jan-2010
I want to display the row-to-column mapping, with the first row concatenated as col1, the second row as col2, and so on:
Eno  col1                       col2                          col3
01   jack clerk 01-Jan-2008     jack snrclerk 01-Jun-2009     jack officer 01-Jan-2010
rgds
ramya
Edited by: user11243021 on Sep 7, 2010 5:21 AM
with t as (
select 1 srno,'01' eno,'jack' ename,'clerk' job,to_date('01-Jan-2008','dd-mon-yyyy') datestart from dual union all
select 2 srno,'01' eno,'jack' ename,'snrclerk' job,to_date('01-Jun-2009','dd-mon-yyyy') datestart from dual union all
select 3 srno,'01' eno,'jack' ename,'officer' job,to_date('01-Jan-2010','dd-mon-yyyy') datestart from dual
) -- end of data sample
select eno,
max(
case srno
when 1 then ename || ' ' || job || to_char(datestart,' dd-Mon-yyyy')
end
) col1,
max(
case srno
when 2 then ename || ' ' || job || to_char(datestart,' dd-Mon-yyyy')
end
) col2,
max(
case srno
when 3 then ename || ' ' || job || to_char(datestart,' dd-Mon-yyyy')
end
) col3
from t
group by eno
EN COL1 COL2 COL3
01 jack clerk 01-Jan-2008 jack snrclerk 01-Jun-2009 jack officer 01-Jan-2010
SQL> SY. -
SAP XI 3.0 Same source for different target in std Value mapping function
Hi,
We have replicated 4 value-mapping entries from R3 to XI that have the same Context, Agency, Schema, and Value on the source side, but on the target side each has the same Context and Agency and a different Schema and Value.
To illustrate:
Source (Context Agency Schema Value) | Target (Context Agency Schema Value)
CS1 AS1 SS1 1 | CT1 AT1 ST1 A
CS1 AS1 SS1 1 | CT1 AT1 ST2 A
CS1 AS1 SS1 1 | CT1 AT1 ST3 B
This value mapping is not working and we always get the source value as the result.
We are wondering if the reason for this is that we use the same source for different targets. But we are not 100 % sure of it.
When I read the documentation on value mapping, or when we use the value-mapping standard function in graphical mapping, we pass the context, agency, and schema of the source and target respectively, plus the source value, to get the target value. This combination is always unique in our case, as seen in the above example.
Has anyone faced this kind of an issue, if yes I would appreciate if anyone could help us resolve this problem.
Thanks in advance.
regards,
Advait
Hi Advait,
From the below, what I understand is that you are not able to do value mapping for the following:
1 A
2 A
3 B
As value mapping allows only one-to-one mapping, please do it as mentioned below:
1 1*A
2 2*A
3 3*B
Then in the graphical mapping of the Integration Repository, do the mapping as shown below:
source field --> VALUEMAPPING --> UDF --> target field
In the UDF, suppress the value of 1*, 2*, 3*, which can be done as follows:
Create one UDF with one input field:
// write the code below to suppress the prefix
return input.substring(2);
The advantage of using 1*, 2*, 3*, etc. is that you have the option to use value mapping for up to 100 values, which I think is more than is normally needed for any interface.
If you have the same source you can do the same thing there.
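A sketch of the suppressing UDF described above (plain Java for illustration; in XI the method body would sit inside a UDF with a single String input, and the method name here is an assumption):

```java
public class PrefixSuppressUdf {
    // The value-mapping step returns prefixed targets such as "1*A", "2*A",
    // "3*B"; this UDF body suppresses the two-character prefix so that only
    // the real target value ("A" or "B") reaches the target field.
    public static String suppressPrefix(String input) {
        // Drop the leading "<digit>*" marker added in the value-mapping table
        return input.substring(2);
    }

    public static void main(String[] args) {
        System.out.println(suppressPrefix("1*A")); // prints "A"
        System.out.println(suppressPrefix("3*B")); // prints "B"
    }
}
```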
Hope this helps you to resolve your query.
Thanks & Regards
Prabhat -
Hi
During my message mapping, I have to convert one String field in the source structure to a Double (primitive data type) field in the target structure.
String Weight - Source
Double Wght - Target
**** Java Code ****
double Wght = Double.valueOf(Weight.trim()).doubleValue();
But if I use this in a UDF, the UDF has a default return type of String.
Can I change the return type in UDFs, or is there any other way to do this conversion?
Best Regards
- lalit chaudhary -
Hi Lalita,
Please try the following code:
public static double parseDouble(String s) {
    return Double.parseDouble(s);
}
But XI always treats input and output as String in message mapping.
Is that your requirement?
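A sketch along those lines (illustrative only; the method name is an assumption): since message mapping passes Strings, parse the value as a double inside the UDF and return it formatted back as a String:

```java
public class WeightUdf {
    // Sketch of a UDF body: parse the source Weight string as a double
    // (trimming whitespace first), then return it as a String again,
    // since XI message mapping passes Strings in and out.
    public static String convertWeight(String weight) {
        double wght = Double.parseDouble(weight.trim());
        return String.valueOf(wght);
    }

    public static void main(String[] args) {
        System.out.println(convertWeight(" 12.50 ")); // prints "12.5"
    }
}
```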
Regards
Chilla.. -
Hi everyone,
Recently, I set up one publishing site template that has a lot of sub-sites containing a large amount of content.
On the root publishing site, I created a custom list with many custom fields and saved it as a template so that any sub-site can reuse it later on. My scenario is as follows.
I need to use the Site Content and Structure tool to copy a lot of items from one list to another. Both lists were created from the same template.
I used the Site Content and Structure tool to copy data from the source list to the target list, as the figure below showed.
Once copied, all items are complete, but many columns in the target list have been duplicated from the source list, such as PublishDate, NumOrder, and Detail, as the figure below showed.
What is the impact of this duplication? Users can input data into this list successfully, but several values of some columns, like the "Link" column, won't display on the "AllItems.aspx" page, despite showing on the edit-item and view-item form pages.
In addition, any newly added item won't appear on the "AllItems.aspx" page at all, despite the fact that these items exist in the database (I checked by querying with PowerShell).
Please recommend how to resolve this column duplication problem.
Hi,
According to your description, my understanding is that it displayed many repeated columns after you copy items from one list to another list in Site Content and Structure Tool.
I have tested this in my environment and it worked fine. I created a listA and created several columns in it. Then I saved it as a template and created a listB from this template. Then I copied items from listA to listB in the Site Content and Structure tool, and it worked fine.
Please create a new list and save it as template. Then create a list from this template and test whether this issue occurs.
Please operate in other site collections and test whether this issue occurs.
As a workaround, you could copy items from one list to another list by coding using SharePoint Object Model.
More information about SharePoint Object Model:
http://msdn.microsoft.com/en-us/library/ms473633.ASPX
A demo about copying list items using SharePoint Object Model:
http://www.c-sharpcorner.com/UploadFile/40e97e/sharepoint-copy-list-items-in-a-generic-way/
Best Regards,
Dean Wang
TechNet Community Support
Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
[email protected] -
Restore single datafile from source database to target database.
Here's my issue:
Database Release : 11.2.0.3 across both the source and targets. (ARCHIVELOG mode)
O/S: RHEL 5 (Tikanga)
Database Storage: Using ASM on a stand-alone server (NOT RAC)
Using Oracle GG to replicate changes on the Source to the Targets.
My scenario:
We utilize sequences to keep the primary key intact, and these are replicated using GG. All of my schema tables are located in one tablespace and datafile, and all of my indexes are in a separate tablespace (nothing is partitioned).
In the event of media failure on the Target or my target schema being completely out of whack, is there a method where I can copy the datafile/tablespace from my source (which is intact) to my target?
I know there are possibilities:
1) Restore/recover the tablespace to an SCN or timestamp in the past and then use GoldenGate to run the transactions in (but this could take time depending on how far back I need to recover the tablespace and how many transactions have been processed with GG). This is not fool-proof.
2) Use DataPump to move the data from the source schema to the target schema (but the sequences are usually out of order if they haven't fired on the source; you get that 'sequence is defined for this session' message). I've tried this scenario.
3) Alter the sequences to get them to the proper number using the start-with and increment-by feature (again, this could take time depending on how many sequences are out of order).
I would think you could
1) back up the datafile/tablespace on the source,
2)then copy the datafile to the target.
3) startup mount;
4) Newname the new file copied from the source (this is ASM)
5) Restore the datafile/tablespace
6) Recover the datafile/tablespace
7) alter database open;
Question 1: Do I need to also copy the backup piece from the source when I execute the backup tablespace on the source as indicated in my step 1?
Question 2: Do I need to include "plus archivelog" when I execute the backup tablespace on the source as indicated in my step 1?
Question 3: Do I need to execute an 'alter system switch logfile' on the Target when the recover in step 6 is completed?
My scenario sounds like a Cold Backup but running with Archivelog mode, so the source could be online while the database is running.
Just looking for alternate methods of recovery.
Thanks,
Jason
Let me take another stab at sticking a fork into this myth about separating tables and indexes.
Let's assume you have a production Oracle database environment with multiple users making multiple requests and the exact same time. This assumption mirrors reality everywhere except in a classroom where a student is running a simple demo.
Let's further assume that the system looks anything like a real Oracle database system where the operating system has caching, the SAN has caching, and the blocks you are trying to read are split between memory and disk.
Now you want to do some simple piece of work and assume there is an index on the ename column...
SELECT * FROM emp WHERE ename = 'KING';
The myth is that Oracle is going to, in parallel, read the index and the table segments better, faster, whatever, if they are in separate physical files mapped by separate logical tablespaces somehow to separate physical spindles.
Apply some synapses to this myth and it falls apart.
You issue your SQL statement and Oracle does what? It looks for those index blocks where? In memory. If it finds them it never goes to disk. If it does not it goes to disk.
While all this is happening the hundreds or thousands of other users on the database are also making requests. Oracle is not going to stop doing work while it tries to find your index blocks.
Now it finds the index block and decides to use the ROWID value to read the block containing the row with KING's data. Did it freeze the system? Did it lock out everyone else while it did this? Of course not. It puts your read request into the queue and, again, first checks memory to see if it needs to go to disk.
Where in here is there anything that indicates an advantage to having separate physical files?
And even if there was some theoretical reason why separate files might be better ... are they separate in the SAN's cache? No. Are they definitely located on separate stripes or separate physical disks? Of course not.
Oracle uses logical mappings (tables and tablespaces) and SANs use logical mappings, so you, the DBA or developer, have no clue as to where anything is physically located.
PS: Ouija Boards don't work either. -
Unable to delete rows from Target.
Hello everyone,
I am unable to delete rows from target data store. Here is what I have done.
Source Oracle 10g - staging 11g - Target Oracle 11g
I have implemented consistent-set CDC on the data model in staging, added 2 tables to CDC, and turned on the journals. Tables A and B are joined via column E (the primary key of table A). Table A is the master table (has the foreign key); table B is the child table. The target columns consist of all the columns of both table A and table B.
Following is what I am able to do and not to do
ABLE TO DO: If data is inserted into both or either of the journalized tables, I can successfully load it into the target by performing the following steps: 1. Extend the consistency window at the model level. 2. Lock the subscriber. 3. Run the interface with either source table marked as journalized data only. 4. Unlock the subscriber and purge the journal.
ABLE TO DO: If data is updated in either of the journalized tables, along with the steps mentioned above I can execute two interfaces: in one interface, table A marked as journalized data only joined with table B; in the second interface, table B marked as journalized data only joined to table A.
NOT ABLE TO DO: If data is deleted from one or both tables, it shows up as journalized data in JV$D<tablename>, marked as D with a date and subscriber name. But when I run the interfaces by extending the window, locking the subscriber, executing both interfaces, unlocking the subscriber, and purging the journals, no change takes place in the target. After the unlock-subscriber step, the journalized data is removed from the JV$D view. Please let me know what I am doing wrong. How can rows deleted from the source also be deleted from the TARGET?
NOTE : In the flow table SYNC_JRNL_DELETES is YES
In the model, under the journalized tables tab, the tables are in the following order: Table A followed by Table B.
Thanks in advance
Greenwich
Sorry, I still do not get it. When you say "It's a legacy app", are you talking about the VB.NET app?
If so, then I repeat myself :-) Why not connect to the SQL Server directly?
* Even if you need information from several databases (for example Access + SQL Server), in most cases it is much better to connect directly and get the information into the app. Then in your app you can combine the information and analyse it.
Access app is the legacy app. -
How to control unique key target table mapping!
Hi all,
Is there any way to map a source table to a target where the source table contains some duplicate rows and only one of them is needed for the target table? Can OWB insert only one of these duplicate rows, ignore the others according to a unique key, and continue to execute the map?
Thanks in advance!
If you have duplicates and want to filter them out, try creating a view over the source table that adds a ROW_NUMBER() analytic, partitioned by your unique key.
Then in your mapping use this view as your source and put a filter on it where the row number = 1. This will select only one record per key.
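The keep-one-row-per-key idea behind that filter can be sketched outside the database as well (illustrative Java only; the OWB solution itself is the view-plus-filter described above, and the names here are invented):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DedupByKey {
    // Keep only the first row seen for each unique key, mirroring the
    // "row number = 1" filter: later duplicates of a key are ignored.
    public static Map<String, String> firstPerKey(List<String[]> rows) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String[] row : rows) {
            result.putIfAbsent(row[0], row[1]); // row[0] = key, row[1] = payload
        }
        return result;
    }

    public static void main(String[] args) {
        List<String[]> rows = List.of(
            new String[] {"C1", "addr-a"},
            new String[] {"C1", "addr-b"}, // duplicate key, ignored
            new String[] {"C2", "addr-c"});
        System.out.println(firstPerKey(rows)); // prints {C1=addr-a, C2=addr-c}
    }
}
```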
How to load the value in target column?
Hi
Source: Oracle
Target: Oracle
ODI: 11g
I have an interface which loads data from a source table to a target. Some of the columns in the target table are automatically mapped to the source table; some columns remain un-mapped. For those that remain un-mapped, I want to load values by executing a query. Can anybody tell me where I should put that query so its result becomes the value of the specific column?
-Thanks,
Shrinivas
You can put the query (or subquery) into the mapping field.
You can also call a function in the mapping field which may be easier in your case. -
ODI error Target Column Accounts: Target column has no data server associat
Hi John,
When I tried to map my source file to the Planning target, I got the below error. Please let me know the resolution.
Target Column Accounts: Target column has no data server associated
Thanks,
Sravan
Hi,
You have not set the correct context that the datastore is associated with.
Change the optimization context in the definition tab.
Cheers
John
http://john-goodwin.blogspot.com/ -
Some '???' appeared in the LOV Column Mapping window ...only
Hi,
I installed Forms 6i with patch 12 on the Windows XP platform. The installation language is Greek. The only problem I have faced with the installation is the appearance of some question marks in the LOV Column Mapping window, instead of the Greek translation of the phrases:
Column Names ,
Return Item ,
Display Width ,
Column Title
Is there a specific Windows font that should be installed, or something else?
Thanks,,,,
Sim
You have to do it from the source code. Click on source; you will find "<xsl:if test="">".
Define your condition in test="/tns:input1=/tns:input2", or check if your source string contains "abcd", i.e. xsl:if test="contains(/tns:source_column1,'abcd')".
hope this will help -
Hi, I am really new to OBIEE 10g.
I already set up a SQL Server 2005 database in the Physical layer and imported a view, vw_Dim_retail_branch.
The view has 3 columns: branch_id, branch_code, branch_desc.
Now I want to set up the Business model to map this physical table (view).
I created a new Business model
Added new logical table Dim_retail_branch
In the sources, added the vw_Dim_retail_branch as source table.
But in the Logical Table Source window, the Column Mapping tab is blank. I thought it would identify all the columns from vw_Dim_retail_branch, but it does not. "Show mapped columns" is ticked.
What should I do here? Manually type each column?
Hi,
You can just drag and drop the columns from the physical layer to the BMM layer.
Select the 3 columns and drag and drop them onto the created logical table in the BMM layer.
For more reference: http://mkashu.blogspot.com
Regards,
VG -
Error calling pl/sql function in target column
Hi guys,
I get this error when calling my function in ODI:
Caused by: java.sql.SQLSyntaxErrorException: ORA-00904: "mySchema"."GET_ODI_DEFAULT_VALUE(1)": invalid identifier --> 1 is an IN parameter
while in sql developer I get a good result with following query:
select mySchema.get_odi_default_value(1) from dual;
In my target table for the primary key I call a sequence from mySchema and this works fine.
I've tried the following syntax in the mapping of my target column:
- <%=odiRef.getObjectName( "L","GET_ODI_DEFAULT_VALUE(1)", "D" )%>
- <%=odiRef.getObjectName( "L","GET_ODI_DEFAULT_VALUE", "D" )(1)%>
Thanks for your advice.
iadgroe wrote:
how to bring an Oracle function into ODI
I thought for objects like sequences and functions you had to use 'odiRef.getObjectName' to get access to them.
Am I wrong? Because now I'm a little confused...
Hi,
Best practice would be to use the getObjectName method, as this way you're not hardcoding anything into the interface and you are referencing the logical object only, which means you can change the logical object later and not have to worry about changing the interfaces.
Warning in WD ABAP Table - Master Column Mapping deprecated
Hi ,
I am getting a warning 'Master Column Mapping deprecated' in a Web Dynpro ABAP component while activating it.
Can anybody suggest what I have to do to solve this issue?
Best Regards
SidHi,
Could you please tell me how to take Row Arrangement in Master Column?
Cheers
Harkesh Dang