Process CSV files in OBIEE
Can OBIEE accept data directly from a CSV file for inclusion in the business layer?
Create a DSN and point it to the location of the CSV file, then import it as a regular database.
Each sheet shows up as a separate table.
Mark if this helps.
Thanks,
Similar Messages
-
Hi all,
I need to process CSV files dynamically, i.e. not using a static order for splitting the values of the lines into the corresponding variables, but determining the mapping dynamically using a header line in the CSV file.
For instance:
#dcu;item;subitem
827160;5111001000;3150
So if the order in the CSV file changes to e.g.
#item;dcu;subitem
5111001000;827160;3150
the program should fill the corresponding variables (ls_upload-dcu, ls_upload-item, ls_upload-subitem) dynamically, without the need to change the code.
If anyone has a solution, please let me know.
Thanks,
Thomas

Try to use the variants concept.
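Outside ABAP, the same header-driven idea can be sketched in a few lines of Python; this is a minimal illustration of the technique, not the ABAP solution itself:

```python
def parse_dynamic(text):
    """Map CSV columns to named fields using the header line,
    so the column order in the file no longer matters."""
    lines = text.strip().splitlines()
    # Header like "#dcu;item;subitem" -> ['dcu', 'item', 'subitem']
    header = lines[0].lstrip('#').split(';')
    rows = []
    for line in lines[1:]:
        values = line.split(';')
        rows.append(dict(zip(header, values)))
    return rows

# Same data in two different column orders yields the same result
a = parse_dynamic("#dcu;item;subitem\n827160;5111001000;3150")
b = parse_dynamic("#item;dcu;subitem\n5111001000;827160;3150")
```

In ABAP the equivalent would be reading the header once and ASSIGNing each column to the matching structure component dynamically.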
-
Error while processing csv file
Hi,
I get these messages while processing a CSV file to load the content into the database:
[SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL;
data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.
[SSIS.Pipeline] Warning: The output column "Copy of Column 13" (842) on output "Data Conversion Output" (798) and component
"Data Conversion" (796) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Can someone please help me with this?
With Regards
Litu here

Hello Litu,
Are you using an OLE DB Source and Destination?
Can you change the source and destination provider to ADO.NET and check if the issue still persists?
Mark this post as "Answered" if this addresses your question.
Regards,
Don Rohan [MSFT]
-
DRM and configuring CSV files for OBIEE apps
How does DRM impact configuring the CSV files for Informatica on the OBIEE apps?
Do we still need to configure the buckets in the CSV for chart of accounts, custom calendar, etc.?

Hi,
I am also trying to integrate DRM with OBIEE. Can you please provide me more details on this? Also, I am trying to build a report in tree format to represent the DRM hierarchy. Can you throw some light on this? -
Issue with processing .csv file
Hi.
I have a simple CSV file (multiple rows) which needs to be picked up by the PI File adapter and then processed into a BAPI.
I created a data type 'Record' which has the column names. Then there is a message type, MT_SourceOrder, using this particular data type. This message type is mapped to BAPI_SALESORDER_CREATEFROMDAT2. I have also done the configuration in the receiver file adapter to accept the .csv file:
Document Name: MT_SourceOrder
Doc Namespace:....
Recordset Name: Recordset
Recordset Structure: Row,*
Recordset Sequence: Ascending
Recordsets per message: 1000
Key Field Type: Ascending
In the parameter section, the following information has been provided:
Row.fieldNames MerchantID,OrderNumber,OrderDate,Description,VendorSKU,MerchantSKU,Size,UnitPrice,UnitCost,ItemCount,ItemLineNumber,Quantity,Name,Address1,Address2,Address3,City,State,Country,Zip,Phone,ShipMethod,ServiceType,GiftMessage,Tax,Accountno,CustomerPO
Row.fieldSeparator ,
Row.processConfiguration FromConfiguration
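For reference, Row.fieldNames plus Row.fieldSeparator describe an ordinary delimited-text-to-XML conversion. A small Python sketch (using only a hypothetical subset of the fields) shows what the content conversion is expected to produce per line:

```python
import xml.etree.ElementTree as ET

# Illustrative subset of the fieldNames list above
FIELD_NAMES = ["MerchantID", "OrderNumber", "OrderDate"]

def line_to_row_xml(line, sep=","):
    """Emulate file content conversion: one CSV line -> one <Row> element."""
    row = ET.Element("Row")
    for name, value in zip(FIELD_NAMES, line.split(sep)):
        ET.SubElement(row, name).text = value
    return ET.tostring(row, encoding="unicode")

xml = line_to_row_xml("PII,ORD-0504703,8/11/2011")
```

Comparing such expected output against the payload in SXMB_MONI can help isolate whether the conversion or the mapping is at fault.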
However, the mapping is still not working correctly.
Can anyone please help.
This is very urgent and all help is very much appreciated.
Thanks.
Anuradha SenGupta.

Hi Santosh,
I have verified the content in the source payload in SXMB_MONI.
<?xml version="1.0" encoding="utf-8"?>
<ns:MT_SourceOrder xmlns:ns="http://xxx.com/LE/PIISalesOrder/">
<Recordset>
<Row>
<MerchantID>PII</MerchantID>
<OrderNumber>ORD-0504703</OrderNumber>
<OrderDate>8/11/2011</OrderDate>
<Description>"Callaway 60"" Women's Umbrella"</Description>
<VendorSKU>4824S</VendorSKU>
<MerchantSKU>CAL5909002</MerchantSKU>
<Size></Size>
<UnitPrice>25</UnitPrice>
<UnitCost>25</UnitCost>
<ItemCount>2</ItemCount>
<ItemLineNumber>1</ItemLineNumber>
<Quantity></Quantity>
<Name>MARGARET A HAIGHT MARGARET A HAIGHT</Name>
<Address1>15519 LOYALIST PKY</Address1>
<Address2></Address2>
<Address3></Address3>
<City>BLOOMFIELD</City>
<State>ON</State>
<Country></Country>
<Zip>K0K1G0</Zip>
<Phone>(613)399-5615 x5615</Phone>
<ShipMethod>Purolator</ShipMethod>
<ServiceType>Standard</ServiceType>
<GiftMessage>PI Holding this cost-</GiftMessage>
<Tax></Tax>
<Accountno>1217254</Accountno>
<CustomerPO>CIBC00047297</CustomerPO>
</Row>
</Recordset>
</ns:MT_SourceOrder>
It looked as above. However, in the message mapping section it doesn't work: when I do Display Queue on the source field, it keeps showing the value as NULL.
Please advise.
Thanks.
Anuradha. -
Use .CSV files to UPDATE
Hi,
I have a requirement to process .CSV files and use the info to UPDATE an existing table. One line from the file matches a row in the existing table by a unique key. There is no need to keep the data from the .CSV files afterwards.
I was planning to use a temporary table with an INSERT trigger that will perform the UPDATE, with ON COMMIT DELETE ROWS, and have sqlldr load the data in this temporary table.
But I found out that sqlldr cannot load into temporary tables (SQL*Loader-280).
What would be other options? The .CSV files are retrieved periodically (every 15 min), have all the same structure, their number can vary, and their filename is unique.
Thank you in advance.

SQL*Loader-280: "table %s is a temporary table"
*Cause: The sqlldr utility does not load temporary tables. Note that if sqlldr did allow loading of temporary tables, the data would disappear after the load completed.
*Action: Load the data into a non-temporary table.

Can't you load the data into a non-temporary table and drop it after the update? -
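That suggestion, a permanent staging table, an UPDATE by unique key, then clearing the staging rows for the next file, can be sketched with Python's sqlite3 module standing in for the database (table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE staging (id INTEGER, val TEXT)")
conn.execute("INSERT INTO target VALUES (1, 'old'), (2, 'old')")

# 1. Load the CSV batch into the permanent staging table (sqlldr's job in Oracle)
csv_rows = [(1, 'new'), (2, 'newer')]
conn.executemany("INSERT INTO staging VALUES (?, ?)", csv_rows)

# 2. UPDATE the target by unique key, then clear staging for the next file
conn.execute("""UPDATE target
                SET val = (SELECT s.val FROM staging s WHERE s.id = target.id)
                WHERE id IN (SELECT id FROM staging)""")
conn.execute("DELETE FROM staging")
```

In Oracle the same flow works with sqlldr loading the permanent staging table and a MERGE or correlated UPDATE doing step 2.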
OBIEE 11g - issue when export to a csv file
Hello,
I have an issue when I try to export a report having a numeric field to a CSV file: when I export and open the file in Notepad, the trailing zeros to the right of the decimal point do not show up. When exporting to other formats (such as Excel, PDF) the trailing zeros appear correctly; e.g., 100.00 appears as 100, and 100.50 appears as 100.5.
Do you know what can be done in order to have trailing zeros to the right of decimal points to appear in a csv?
Thanks

This is a bug; see below.
Bug 13083479 : EXPORT DATA TO CSV FORMAT DOES NOT DOWNLOAD EXPANDED HIERARCHICAL COLUMNS
Proposed workaround : OBIEE 11.1.1.5: Pivot Table Report Export To Csv Does Not Include Hierarchical Data [ID 1382184.1]
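As a general point about the CSV format itself: trailing zeros survive only when the value is written as an already-formatted string, since CSV carries no number formatting. A quick Python illustration:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
for amount in (100, 100.5):
    # Format to a fixed number of decimals before writing,
    # so "100.00" and "100.50" are stored as text, zeros intact
    writer.writerow([f"{amount:.2f}"])

output = buf.getvalue()
```

Opening such a file in Notepad shows the zeros; a spreadsheet may still re-interpret them as numbers on import.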
Mark if answered!
Thanks,
SVS -
Save a csv file on local host, executing a DTP Open hub in a Process chain?
Hi All,
My client has asked me to check whether there is a way to save to the local host C:\ drive a .CSV file generated from an Open Hub executed by a DTP in a process chain.
When I execute my DTP it works correctly and saves the .csv file to my C:\ directory.
My client doesn't want to give the users authorization on RSA1 and wants to know whether there is a way to put the DTP in a process chain
so that, when executing the process chain, every user can have the file in his own local C:\ directory.
I tried this solution but it doesn't work; I get an error when executing the process chain:
Runtime Errors OBJECTS_OBJREF_NOT_ASSIGNED
Except. CX_SY_REF_IS_INITIAL
Error analysis
An exception occurred that is explained in detail below.
The exception, which is assigned to class 'CX_SY_REF_IS_INITIAL', was not
caught in
procedure "FILE_DELETE" "(METHOD)", nor was it propagated by a RAISING clause.
Since the caller of the procedure could not have anticipated that the
exception would occur, the current program is terminated.
The reason for the exception is:
You attempted to use a 'NULL' object reference (points to 'nothing')
access a component (variable: "CL_GUI_FRONTEND_SERVICES=>HANDLE").
An object reference must point to an object (an instance of a class)
before it can be used to access components.
Either the reference was never set or it was set to 'NULL' using the
CLEAR statement.
Can you give me some advice please?
Thanks to all
Bilal

Hi Bilal,
Unfortunately, DTPs belonging to Open Hubs which are targeted to a local workstation file can't be executed through a process chain.
The only way of including such a DTP in a process chain is changing the Open Hub so that it writes the output file on the application server. Then you can retrieve the file (through FTP or any other means) from the application server to the local workstation.
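The retrieval step can be scripted on each workstation; a minimal sketch using Python's ftplib, where the host, credentials, and file names are placeholders:

```python
from ftplib import FTP

def fetch_openhub_file(host, user, password, remote_name, local_path):
    """Download the Open Hub output file from the application server via FTP."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(local_path, "wb") as f:
            ftp.retrbinary(f"RETR {remote_name}", f.write)

# Example call (placeholder values):
# fetch_openhub_file("appserver.example.com", "bwuser", "secret",
#                    "openhub_output.csv", r"C:\openhub_output.csv")
```

An NFS share or SFTP would work just as well; the point is only that the process chain writes server-side and the copy to C:\ happens outside BW.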
Hope this helps.
Kind regards,
Maximiliano -
Uploading & Processing of CSV file fails in clustered env.
We have a csv file which is uploaded to the weblogic application server, written
to a temporary directory and then manipulated before being written to the database.
This process works correctly in a single server environment but fails in a clustered
environment.
The file gets uploaded to the server into the temporary directory without problem.
The processing starts. When running in a cluster the csv file is replicated
to a temporary directory on the secondary server as well as the primary.
The manipulation process is running but never finishes and the browser times out
with the following message:
Message from the NSAPI plugin:
No backend server available for connection: timed out after 30 seconds.
Build date/time: Jun 3 2002 12:27:28
The server which is loading the file and processing it writes this to the log:
<17-Jul-03 15:13:12 BST> <Warning> <HTTP> <WebAppServletContext(1883426,fulfilment,/fulfilment) One of the getParameter family of methods called after reading from the ServletInputStream, not merging post parameters>.
Anna Bancroft wrote:
> We have a csv file which is uploaded to the weblogic application server, written
> to a temporary directory and then manipulated before being written to the database.
> [...]
It doesn't make sense. Who is replicating the file? How long does it
take to process the file? Which plugin are you using as proxy to the
cluster?
-- Prasad
-
Csv file processing using external table
Dear All,
Our database is Oracle 10g R2 and our OS is Solaris.
We receive CSV files in a particular directory on the server each day.
File Format look like:
H00,SOURCE_NAME,FILE_CREATED_DATE
RECORD_TYPE,EMP_ID,EMP_NAME,EMP_DOB(DDMMYYYY),EMP_HIRE_DATE(DDMMYYYY),EMP_LOCATION
T00,RECORD_COUNT
EMPLOYEE TABLE STRUCTURE
EMP_ID NOT NULL NUMBER ,
EMP_NAME NOT NULL VARCHAR2(10) ,
EMP_DOB DATE,
EMP_HIRE_DATE NOT NULL DATE,
EMP_LOCATION VARCHAR2(80)
Sample File:
H00,ABC,21092013
"R01",1,"EMP1","14021986","06072010","LOC1"
"R01",20000000000,"EMP2","14021-987","06072011",""
,***,"EMPPPPPPPPPPP3","14021988","060**012","LOC2"
"R01",4,4,"14021989","06072013",
T00,4
We need to validate each record, excluding header and trailer, for:
datatype, length, optionality, and other date validations such as EMP_HIRE_DATE cannot be earlier than EMP_DOB.
In case of any data errors we need to send a response file for the corresponding source file.
We have predefined error codes to be sent in the response file.
ERR001 EMP_ID can not be null
ERR002 EMP_ID exceeds 10 digits
ERR003 EMP_ID is not a number
ERR004 EMP_NAME has to be text
ERR005 EMP_NAME length can not exceed 10
ERR006 EMP_NAME can not be null
ERR007 EMP_DOB is not a date
ERR008 EMP_DOB is not in ddmmyyyy format
ERR009 EMP_HIRE_DATE is not a date
ERR010 EMP_HIRE_DATE is not in ddmmyyyy format
ERR011 EMP_HIRE_DATE can not be null
ERR012 EMP_LOCATION has to be text
ERR013 EMP_LOCATION length can not exceed 80
ERR014 EMP_HIRE_DATE can not be less than EMP_DOB
ERR015 Field missing in the record
ERR016 More number of fields than allowed
1. Do I need to create an external table before processing each file (EMP1.txt, EMP2.txt)?
2. How do I generate these error codes for the respective failure scenarios and log them to an exception table?
3. The response file needs to contain the entire record with a concatenation of all the error codes on the next line.
4. Which would be the better approach:
creating an external table with all CHAR(2000) fields and writing a SELECT statement
such as SELECT * FROM ext_table WHERE (emp_id IS NOT NULL AND LENGTH(emp_id) <= 10 AND ...) to select only proper data;
or creating the external table with the same structure as the employee table and using a bad file? If the latter is preferred, how can I generate the custom error codes?
Could you please help me in achieving this!
Warm Regards,
Shankar.

You can do a one-time creation of an external table. After that, you can either overwrite the existing text file or alter the location. In the example below I have split your original sample file into two files, in order to demonstrate altering the location.

You can make the external table all VARCHAR2 fields, as large as they need to be to accommodate all possible data. If you then create a staging table and a rejects table of the same structure, you can create a trigger on the staging table such that, when you insert from the external table into the staging table, it inserts into either the employee table or the rejects table with the concatenated error list. If you want this in a file, you can just spool a SELECT from the rejects table or use some other method such as UTL_FILE.

The following is a partial example. Another alternative would be to use SQL*Loader to load directly into the staging table without an external table.
SCOTT@orcl12c> HOST TYPE emp1.txt
H00,ABC,21092013
"R01",1,"EMP1","14021986","06072010","LOC1"
"R01",20000000000,"EMP2","14021-987","06072011",""
T00,4
SCOTT@orcl12c> HOST TYPE emp2.txt
H00,ABC,21092013
,***,"EMPPPPPPPPPPP3","14021988","060**012","LOC2"
"R01",4,4,"14021989","06072013",
T00,4
SCOTT@orcl12c> CREATE OR REPLACE DIRECTORY my_dir AS 'c:\my_oracle_files'
2 /
Directory created.
SCOTT@orcl12c> CREATE TABLE external_table
2 (record_type VARCHAR2(10),
3 emp_id VARCHAR2(11),
4 emp_name VARCHAR2(14),
5 emp_dob VARCHAR2(10),
6 emp_hire_date VARCHAR2(10),
7 emp_location VARCHAR2(80))
8 ORGANIZATION external
9 (TYPE oracle_loader
10 DEFAULT DIRECTORY my_dir
11 ACCESS PARAMETERS
12 (RECORDS DELIMITED BY NEWLINE
13 LOAD WHEN ((1:3) != 'H00' AND (1:3) != 'T00')
14 LOGFILE 'test.log'
15 FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
16 MISSING FIELD VALUES ARE NULL
17 REJECT ROWS WITH ALL NULL FIELDS
18 (record_type, emp_id, emp_name, emp_dob, emp_hire_date, emp_location))
19 LOCATION ('emp1.txt'))
20 /
Table created.
SCOTT@orcl12c> CREATE TABLE staging
2 (record_type VARCHAR2(10),
3 emp_id VARCHAR2(11),
4 emp_name VARCHAR2(14),
5 emp_dob VARCHAR2(10),
6 emp_hire_date VARCHAR2(10),
7 emp_location VARCHAR2(80))
8 /
Table created.
SCOTT@orcl12c> CREATE TABLE employee
2 (emp_id NUMBER NOT NULL,
3 emp_name VARCHAR2(10) NOT NULL,
4 emp_dob DATE,
5 emp_hire_date DATE,
6 emp_location VARCHAR2(80))
7 /
Table created.
SCOTT@orcl12c> CREATE TABLE rejects
2 (record_type VARCHAR2(10),
3 emp_id VARCHAR2(11),
4 emp_name VARCHAR2(14),
5 emp_dob VARCHAR2(10),
6 emp_hire_date VARCHAR2(10),
7 emp_location VARCHAR2(80),
8 error_codes VARCHAR2(4000))
9 /
Table created.
SCOTT@orcl12c> CREATE OR REPLACE TRIGGER staging_air
2 AFTER INSERT ON staging
3 FOR EACH ROW
4 DECLARE
5 v_rejects NUMBER := 0;
6 v_error_codes VARCHAR2(4000);
7 v_num NUMBER;
8 v_dob DATE;
9 v_hire DATE;
10 BEGIN
11 IF :NEW.emp_id IS NULL THEN
12 v_rejects := v_rejects + 1;
13 v_error_codes := v_error_codes || ',' || 'ERR001';
14 ELSIF LENGTH (:NEW.emp_id) > 10 THEN
15 v_rejects := v_rejects + 1;
16 v_error_codes := v_error_codes || ',' || 'ERR002';
17 END IF;
18 BEGIN
19 v_num := TO_NUMBER (:NEW.emp_id);
20 EXCEPTION
21 WHEN value_error THEN
22 v_rejects := v_rejects + 1;
23 v_error_codes := v_error_codes || ',' || 'ERR003';
24 END;
25 IF :NEW.emp_name IS NULL THEN
26 v_rejects := v_rejects + 1;
27 v_error_codes := v_error_codes || ',' || 'ERR006';
28 ELSIF LENGTH (:NEW.emp_name) > 10 THEN
29 v_rejects := v_rejects + 1;
30 v_error_codes := v_error_codes || ',' || 'ERR005';
31 END IF;
32 BEGIN
33 v_dob := TO_DATE (:NEW.emp_dob, 'ddmmyyyy');
34 EXCEPTION
35 WHEN OTHERS THEN
36 v_rejects := v_rejects + 1;
37 v_error_codes := v_error_codes || ',' || 'ERR008';
38 END;
39 BEGIN
40 IF :NEW.emp_hire_date IS NULL THEN
41 v_rejects := v_rejects + 1;
42 v_error_codes := v_error_codes || ',' || 'ERR011';
43 ELSE
44 v_hire := TO_DATE (:NEW.emp_hire_date, 'ddmmyyyy');
45 END IF;
46 EXCEPTION
47 WHEN OTHERS THEN
48 v_rejects := v_rejects + 1;
49 v_error_codes := v_error_codes || ',' || 'ERR010';
50 END;
51 IF LENGTH (:NEW.emp_location) > 80 THEN
52 v_rejects := v_rejects + 1;
53 v_error_codes := v_error_codes || ',' || 'ERR013';
54 END IF;
55 IF v_hire IS NOT NULL AND v_dob IS NOT NULL AND v_hire < v_dob THEN
56 v_rejects := v_rejects + 1;
57 v_error_codes := v_error_codes || ',' || 'ERR014';
58 END IF;
59 IF :NEW.emp_id IS NULL OR :NEW.emp_name IS NULL OR :NEW.emp_dob IS NULL
60 OR :NEW.emp_hire_date IS NULL OR :NEW.emp_location IS NULL THEN
61 v_rejects := v_rejects + 1;
62 v_error_codes := v_error_codes || ',' || 'ERR015';
63 END IF;
64 IF v_rejects = 0 THEN
65 INSERT INTO employee (emp_id, emp_name, emp_dob, emp_hire_date, emp_location)
66 VALUES (:NEW.emp_id, :NEW.emp_name,
67 TO_DATE (:NEW.emp_dob, 'ddmmyyyy'), TO_DATE (:NEW.emp_hire_date, 'ddmmyyyy'),
68 :NEW.emp_location);
69 ELSE
70 v_error_codes := LTRIM (v_error_codes, ',');
71 INSERT INTO rejects
72 (record_type,
73 emp_id, emp_name, emp_dob, emp_hire_date, emp_location,
74 error_codes)
75 VALUES
76 (:NEW.record_type,
77 :NEW.emp_id, :NEW.emp_name, :NEW.emp_dob, :NEW.emp_hire_date, :NEW.emp_location,
78 v_error_codes);
79 END IF;
80 END staging_air;
81 /
Trigger created.
SCOTT@orcl12c> SHOW ERRORS
No errors.
SCOTT@orcl12c> INSERT INTO staging SELECT * FROM external_table
2 /
2 rows created.
SCOTT@orcl12c> ALTER TABLE external_table LOCATION ('emp2.txt')
2 /
Table altered.
SCOTT@orcl12c> INSERT INTO staging SELECT * FROM external_table
2 /
2 rows created.
SCOTT@orcl12c> SELECT * FROM employee
2 /
EMP_ID EMP_NAME EMP_DOB EMP_HIRE_DATE EMP_LOCATION
1 EMP1 Fri 14-Feb-1986 Tue 06-Jul-2010 LOC1
1 row selected.
SCOTT@orcl12c> COLUMN error_codes NEWLINE
SCOTT@orcl12c> SELECT * FROM rejects
2 /
RECORD_TYP EMP_ID EMP_NAME EMP_DOB EMP_HIRE_D EMP_LOCATION
ERROR_CODES
R01 20000000000 EMP2 14021-987 06072011
ERR002,ERR008,ERR015
*** EMPPPPPPPPPPP3 14021988 060**012 LOC2
ERR003,ERR005,ERR010
R01 4 4 14021989 06072013
ERR015
3 rows selected. -
How to process large input CSV file with File adapter
Hi,
could someone recommend the right BPEL way to process a large input CSV file (4 MB or more, with at least 5000 rows) with the File Adapter?
My idea is to receive data from file (poll the UX directory for new input file), transform it and then export to one output CSV file (input for other system).
I developed my process that consists of:
- File adapter partnerlink for read data
- Receive activity with checked box to create instance
- Transform activity
- Invoke activity for writing to output CSV.
I tried this with a small input file and everything was OK, but now when I try to use the complete input file, the process doesn't start and automatically goes to the OFF state in the BPEL console.
Could I use the MaxTransactionSize parameter as in the DB adapter, should I batch the input file, or is there another way that could help?
Any hints? I have to solve this problem by this Thursday.
Thanks,
Milan K.

This is a known issue. Martin Kleinman has posted several issues on the forum here, with a similar scenario using ESB. This can only be solved by tuning the BPEL application itself and throwing in big hardware.
Also switching to the latest 10.1.3.3 version of the SOA Suite (assuming you didn't already) will show some improvements.
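Independent of the engine, the batching idea raised in the question is the usual remedy: read the large CSV in fixed-size chunks and process each chunk as its own unit of work. A minimal Python sketch of the chunking itself:

```python
import csv
import io

def iter_batches(lines, batch_size):
    """Yield lists of parsed CSV rows, at most batch_size rows per list."""
    reader = csv.reader(lines)
    batch = []
    for row in reader:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:          # final partial batch
        yield batch

# 5000 synthetic rows, processed in batches of 1000
data = io.StringIO("\n".join(f"r{i},v{i}" for i in range(5000)))
sizes = [len(b) for b in iter_batches(data, 1000)]
```

Each yielded batch would map to one transformation/invoke cycle, which keeps memory per transaction bounded.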
HTH,
Bas -
How to process a file without CSV, delimiter, fixed length, or copybook
Hi Team,
I need to process the file below in OSB and send mails to the concerned IDs.
This file will have either one mail or multiple mails.
sample.txt file with 1 mail content
======================================
START
[email protected]
[email protected], [email protected],
END
Subject : CAL IND Renege #00424523 Hse580 CTH580
BODY:
User_ID: LARRY014
XXX Hse/Customer # : 580/1196310
X12 Order Number: 580094624
Customer E-Mail: [email protected]
Customer E-Mail 2: [email protected]
Customer Phone : 909312345
Dear Salesperson,
mysupply.com Order # : 00424523
mysupply.com User ID : LARRY014
Customer CALIFORNIA STEEL IND has entered order 00424523
through mysupply.com.
THIS ORDER HAS RENEGED for the following reason(S):
I. ORDER LEVEL
NOTE SEGMENTS FOUND IN INPUT - SENTRY
CDF REQUIRED CUSTOMER - ORDER RENEGED
II. ITEM/LINE LEVEL
LINE # ECOM LINE NAED QTY STATUS ALLOW SUBS
Please resolve the renege and release the order in Sentry
01 as soon as possible. Thank you.
EMAIL-END
====================================
Please help me with how to process this file and send mail to the concerned people, as it is neither CSV nor fixed length, nor a COBOL copybook, and there is no DTD to convert it.
Thanks
Reddy
Edited by: KiranReddy on Feb 3, 2012 9:52 PM

You shouldn't need a CSV; if you want a fixed-length file you need something like:
read:
<xsd:element name="C1" type="xsd:string" nxsd:style="fixedLength" nxsd:length="1" nxsd:paddedBy=" " nxsd:padStyle="tail" />
<xsd:element name="C2" type="xsd:string" nxsd:style="fixedLength" nxsd:length="xx" nxsd:paddedBy=" " nxsd:padStyle="tail" />
write:
<xsd:element name="C1" type="xsd:string" nxsd:style="fixedLength" nxsd:length="1" nxsd:paddedBy=" " nxsd:padStyle="tail" />
<xsd:element name="C2" type="xsd:string" nxsd:style="fixedLength" nxsd:length="xx" nxsd:paddedBy=" " nxsd:padStyle="tail" />
<xsd:element name="C3" type="xsd:string" nxsd:style="fixedLength" nxsd:length="5" nxsd:paddedBy=" " nxsd:padStyle="tail" />
xx stands for the length of your line.
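Outside OSB, the START/END framing of the feed can be verified with a short Python sketch (the sample addresses below are placeholders, not from the real feed):

```python
def parse_mail_blocks(text):
    """Split the free-format feed into mail dicts using the
    START / END / EMAIL-END markers seen in the sample."""
    mails = []
    lines = iter(text.splitlines())
    for line in lines:
        if line.strip() != "START":
            continue
        to_addr = next(lines).strip()
        cc = [a.strip() for a in next(lines).split(",") if a.strip()]
        next(lines)                       # the END marker closing the address header
        subject = next(lines).split(":", 1)[1].strip()
        body = []                         # everything up to EMAIL-END (incl. the BODY: tag)
        for bline in lines:
            if bline.strip() == "EMAIL-END":
                break
            body.append(bline)
        mails.append({"to": to_addr, "cc": cc,
                      "subject": subject, "body": "\n".join(body)})
    return mails

sample = """START
to.user@example.com
cc.one@example.com, cc.two@example.com,
END
Subject : Test Renege #00000000
BODY:
Dear Salesperson,
EMAIL-END"""
parsed = parse_mail_blocks(sample)
```

The same marker-scanning logic is what a custom Java callout or MFL-free OSB pipeline would implement.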
hope this makes sense
cheers
James -
Settings required to process incoming email with CSV file attached
Dear all,
We have a scenario wherein the supplier receives a PO confirmation CSV file from the download center as an email attachment. The supplier updates the file by confirming POs and replies back. The CSV file needs to update the confirmed POs in SNC.
The SO50 settings are in place but we are still not able to receive the email back in SNC. Any suggestions on what settings we need in SNC to receive and process the CSV file?
thanks,
mahehs

Hi,
You can use the upload center to process the changed data. Isn't that helpful?
Best Regards,
Harsha Gatt -
Processing Several Records in a CSV File
Hello Experts!
I'm currently using XI to process an incoming CSV file containing Accounts Payable information. The data from the CSV file is used to call a BAPI in the ECC (BAPI_ACC_DOCUMENT_POST). Return messages are written to a text file. I'm also using BPM. So far, I've been able to get everything to work: financial documents are successfully created in the ECC, and I also receive the success message in my return text file.
I am, however, having one small problem... No matter how many records are in the CSV file, XI only processes the very first record. So my question is this: why isn't XI processing all the records? Do I need a loop in my BPM? Are there occurrence settings that I'm missing? I kind of figured XI would simply process each record in the file.
Also, are there some good examples out there that show me how this is done?
Thanks a lot, I more than appreciate any help!

Matthew,
First let me explain the BPM Steps,
Recv--->Transformation1->For-Each Block->Transformation2->Synch Call->Container(To append the response from BAPI)->Transformation3--->Send
Transformation3 and Send must be outside Block.
Transformation1
Here, the source and target must be the same. I think you must know how to split the messages; if not, see the example below.
Source
<MT_Input>
<Records>
<Field1>Value1</Field1>
<Field2>Value1</Field2>
</Records>
<Records>
<Field1>Value2</Field1>
<Field2>Value2</Field2>
</Records>
<Records>
<Field1>Value3</Field1>
<Field3>Value3</Field3>
</Records>
</MT_Input>
Now, I need to split the message for each Records node, so what can I do?
In Message Mapping, choose the source and target as the same, and in the Messages tab choose the target occurrence as 0..Unbounded.
Now, if you go to the Mapping tab, you can see a Messages tag added to your structure, and you can also see that the <MT_Input> occurrence has changed to 0..unbounded.
Here is the logic now:
Map Records to MT_Input.
Map Constant (empty) to Records.
Map the rest of the fields directly. Now your output looks like:
<Messages>
<Message1>
<MT_Input>
<Records>
<Field1>Value1</Field1>
<Field2>Value1</Field2>
</Records>
</MT_Input>
<MT_Input>
<Records>
<Field1>Value2</Field1>
<Field2>Value2</Field2>
</Records>
</MT_Input>
<MT_Input>
<Records>
<Field1>Value3</Field1>
<Field3>Value3</Field3>
</Records>
</MT_Input>
</Message1>
</Messages>
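The 1:n split described above (target occurrence 0..Unbounded, one message per Records node) can be mimicked in plain code; a hypothetical Python sketch using dicts in place of the XML messages:

```python
def split_records(records):
    """1:n mapping: wrap each record in its own MT_Input message."""
    return [{"MT_Input": {"Records": r}} for r in records]

# Three Records nodes in -> three MT_Input messages out
msgs = split_records([
    {"Field1": "Value1", "Field2": "Value1"},
    {"Field1": "Value2", "Field2": "Value2"},
    {"Field1": "Value3", "Field3": "Value3"},
])
```

The For-Each block in the BPM then processes each of these messages through the synchronous BAPI call individually.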
raj. -
Processing a .CSV file to Table
Hi Friends,
Please let me know how to process a .CSV file into a table by using the SQL*Loader concept.
No other technique required! I know it can be done through various processes, but I need it done in the above way.
Thanks in advance!
tempdbs
www.dwforum.net

Here:
http://forums.oracle.com/forums/search.jspa?threadID=&q=loading+csv+sql+loader&objID=c84&dateRange=lastyear&userID=&numResults=15