Populating the data into an IDoc
Hi
Could you please tell me the ways in which we can populate data into an IDoc? I can think of two ways.
1. By writing the report program and executing the same
2. By change pointer concept.
Are there any other ways in which we can populate data into IDocs?
thanks
Kumar
Hi,
Others are
1. Creation through NAST message control
2. Creation through workflow
aRs
Similar Messages
-
I have a process that creates a collection with data from multiple tables. The query returns multiple rows in the 'SQL Commands' tab. The same query is used to create the collection in a 'Before Header' process, and I create a region with the Region Source as
Select * From apex_collections Where collection_name = 'load_dashboard';
At the time of rendering, the page shows me 'no data found'.
What could be the problem? Are there any prerequisites for creating the collection?
Thanks in Advance,
Sriram
Hi Denes,
Below is my code for creating and saving data into the collection.
if apex_collection.collection_exists(p_collection_name =>'load_dashboard') then
apex_collection.delete_collection(
p_collection_name =>'load_dashboard');
end if;
apex_collection.create_collection_from_query(
p_collection_name => 'load_dashboard',
p_query => 'select a.rowid
,b.first_name
,b.last_name
,c.job_title
,d.parent
,d.child_level_1
,d.child_level_2
,a.resource_allocation
,a.person_id
,a.month_id
,a.oracom_project_id ,wwv_flow_item.md5(a.rowid,b.first_name,b.last_name,c.job_title,d.parent,d.child_level_1,d.child_level_2,a.resource_allocation,a.person_id,a.month_id,a.oracom_project_id)
from oracom_resource_management a, oracom_people b,oracom_job c ,oracom_project d where a.supervisor_id=886302415 and a.month_id=201312 and a.oracom_job_id=c.job_id and a.person_id=b.person_id and a.oracom_project_id=d.oracom_project_id',
p_generate_md5 => 'YES');
Sriram. -
Parsing an EDI file and populating the data into database table
Hi,
Please help me parse an EDI file and extract the required columns.
We get an EDI file from a bank. I need to parse that file and populate the DB table with the required columns.
The file is '*'-delimited and every line ends with '\'.
Each record starts with 'ST*' and ends with 'SE*'.
sample edi file is
ISA*00* *00* *ZZ*043000096820 *ZZ*2156833510 *131202*0710*U*00401*000001204*0*P*>\ ignore first 2 lines
GS*RA*043000096820*2156833510*131202*0710*1204*X*003020\
ST*820*000041031\
BPR*X*270*C*ACH*PPD*01*101036669***9101036669**01*031000053*DA*00000008606086714*131202\
TRN*1*101036661273032\
DTM*007*131202\
N1*1U*BPS\
N1*BE*MICHAEL DRAYTON*34*159783633\
N1*PE*BPS*ZZ*183383689C2 ABC\
N1*PR*ABC TREAS 310\
SE*9*000041031\
ST*820*000041032\
BPR*X*686*C*ACH*PPD*01*101036669***9101036669**01*031000053*DA*00000008606086714*131202\
TRN*1*101036661273034\
DTM*007*131202\
N1*1U*BPS\
N1*BE*SAMIA GRAVES*34*892909238\
N1*PE*BPS*ZZ*184545710C5 ABC\
N1*PR*ABC TREAS 310\
SE*9*000041032\
Below is the procedure I am trying to use to parse that file, but the logic is not working. Can you please help me with this? It is a very urgent requirement.
CREATE OR REPLACE package body p1 is
-- strip the trailing '\' record terminator from a line
function parse_spec(p_str varchar2) return varchar2 is
begin
return regexp_replace(p_str,'\\$',null);
end;
procedure edi is
-- file_path / file_name are assumed to be constants in the package spec
l_in_file utl_file.file_type;
l_lin varchar2(200);
field1 number(9);
field2 varchar2(10 byte);
field3 varchar2(15 byte);
field4 varchar2(15 byte);
field5 varchar2(20 byte);
field6 varchar2(20 byte);
field7 varchar2(20 byte);
field8 varchar2(9 byte);
field9 varchar2(15 byte);
begin
l_in_file := utl_file.fopen(file_path, file_name, 'r');
loop
begin
utl_file.get_line(l_in_file, l_lin);
exception
when no_data_found then exit; -- end of file
end;
l_lin := parse_spec(l_lin);
case
when l_lin like 'ST*%' then
field1 := ltrim(regexp_substr(l_lin,'[^*]+',1,3),'0');
when l_lin like 'BPR*X*%' then
field2 := regexp_substr(l_lin,'[^*]+',1,3);
when l_lin like 'TRN*1*%' then
field3 := regexp_substr(l_lin,'[^*]+',1,3);
when l_lin like 'DTM*007*%' then
field4 := regexp_substr(l_lin,'[^*]+',1,3);
when l_lin like '%*BE*%' then
field5 := regexp_substr(regexp_substr(l_lin,'[^*]+',1,3),'[^ ]+',1,1);
field6 := regexp_substr(regexp_substr(l_lin,'[^*]+',1,3),'[^ ]+',1,2);
field7 := regexp_substr(l_lin,'[^*]+',1,5);
when l_lin like '%*PE*%*ZZ*%' then
field8 := regexp_substr(regexp_substr(l_lin,'[^*]+',1,5),'[^ ]+',1,1);
field9 := regexp_substr(regexp_substr(l_lin,'[^*]+',1,5),'[^ ]+',1,2);
when l_lin like 'SE*%' then
insert into t1(field1,field2,field3,field5,field6,field7,field8,field9)
values(field1,field2,field3,field5,field6,field7,field8,field9);
else
null; -- ignore other segments (ISA, GS, N1*1U, N1*PR, ...)
end case;
end loop;
utl_file.fclose(l_in_file);
end;
end;
I would not use regular expressions for parsing, as that is CPU intensive; standard string processing suffices.
I would break the EDI up into lines and tokenise each line. I then have a 2D array that can be referenced to find a specific field, e.g. line x, token y is field abc.
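The same two-level split can be sketched outside the database (Python used purely for illustration; the sample values are taken from the EDI snippet in this thread):

```python
# Illustrative sketch: split an EDI document into lines on '\',
# then tokenise each line on '*', giving a 2D structure where
# row x is a line and column y is a field.
def tokenise(text, sep):
    # str.split keeps empty fields, matching the EDI '**' empty-field case
    return text.split(sep)

def parse_edi(edi_doc):
    lines = [ln for ln in tokenise(edi_doc, '\\') if ln]
    return [tokenise(ln, '*') for ln in lines]

rows = parse_edi('ST*820*000041031\\TRN*1*101036661273032\\SE*9*000041031\\')
# rows[0] is the ST segment; rows[0][2] is its control number
```

Each row can then be addressed by line and token position to pick out fields.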
Basic approach:
SQL> create or replace type TStrings as table of varchar2(4000);
2 /
Type created.
SQL> -- create a parser that tokenises a string
SQL> create or replace function Tokenise(
2 csvLine varchar2,
3 separator varchar2 default ',',
4 enclosedBy varchar2 default null
5 ) return TStrings is
6 strList TStrings;
7 str varchar2(32767);
8 i integer;
9 l integer;
10 enclose1 integer;
11 enclose2 integer;
12 encloseStr varchar2(4000);
13 replaceStr varchar2(4000);
14
15 procedure AddString( line varchar2 ) is
16 begin
17 strList.Extend(1);
18 strList( strList.Count ) := Replace( line, CHR(0), separator );
19 end;
20
21 begin
22 strList := new TStrings();
23
24 str := csvLine;
25 loop
26 if enclosedBy is not null then
27       -- find the enclosed text, if any
28 enclose1 := InStr( str, enclosedBy, 1 );
29 enclose2 := InStr( str, enclosedBy, 2 );
30
31 if (enclose1 > 0) and (enclose2 > 0) and (enclose2 > enclose1) then
32 -- extract the enclosed string
33 encloseStr := SubStr( str, enclose1, enclose2-enclose1+1 );
34 -- replace the separator char's with zero char's
35 replaceStr := Replace( encloseStr, separator, CHR(0) );
36 -- and remove the enclosed quotes
37 replaceStr := Replace( replaceStr, enclosedBy );
38 -- change the enclosed string in the big string to the replacement string
39 str := Replace( str, encloseStr, replaceStr );
40 end if;
41 end if;
42
43 l := Length( str );
44 i := InStr( str, separator );
45
46 if i = 0 then
47 AddString( str );
48 else
49 AddString( SubStr( str, 1, i-1 ) );
50 str := SubStr( str, i+1 );
51 end if;
52
53 -- if the separator was on the last char of the line, there is
54 -- a trailing null column which we need to add manually
55 if i = l then
56 AddString( null );
57 end if;
58
59 exit when str is NULL;
60 exit when i = 0;
61 end loop;
62
63 return( strList );
64 end;
65 /
Function created.
SQL>
SQL>
SQL> declare
2 ediDoc varchar2(32767) :=
3 'ISA*00* *00* *ZZ*043000096820 *ZZ*2156833510 *131202*0710*U*00401*000001204*0*P*>\GS*RA*043000096820*2156833510*131202*0710*1204*X*003020\ST*820*000041031\BPR*X*270*C*ACH*PPD*01*101036669***9101036669**01*031000053*DA*00000008606086714*131202\TRN*1*101036661273032\DTM*007*131202\N1*1U*BPS\N1*BE*MICHAEL DRAYTON*34*159783633\N1*PE*BPS*ZZ*183383689C2 ABC\N1*PR*ABC TREAS 310\SE*9*000041031\ST*820*000041032\BPR*X*686*C*ACH*PPD*01*101036669***9101036669**01*031000053*DA*00000008606086714*131202\TRN*1*101036661273034\DTM*007*131202\N1*1U*BPS\N1*BE*SAMIA GRAVES*34*892909238\N1*PE*BPS*ZZ*184545710C5 ABC\N1*PR*ABC TREAS 310\SE*9*000041032\';
4
5 lines TStrings;
6 tokens TStrings;
7 begin
8 -- split EDI string into lines
9 lines := Tokenise( ediDoc, '\' );
10
11 -- process line and extract fields
12 for i in 3..lines.Count loop
13 dbms_output.put_line( '***********************' ) ;
14 dbms_output.put_line( 'line=['||lines(i)||']' );
15 tokens := Tokenise( lines(i), '*' );
16
17 for j in 1..tokens.Count loop
18 dbms_output.put_line( to_char(j,'00')||'='||tokens(j) );
19 end loop;
20 end loop;
21 end;
22 /
line=[ST*820*000041031]
01=ST
02=820
03=000041031
line=[BPR*X*270*C*ACH*PPD*01*101036669***9101036669**01*031000053*DA*00000008606086714*131202]
01=BPR
02=X
03=270
04=C
05=ACH
06=PPD
07=01
08=101036669
09=
10=
11=9101036669
12=
13=01
14=031000053
15=DA
16=00000008606086714
17=131202
line=[TRN*1*101036661273032]
01=TRN
02=1
03=101036661273032
<snipped> -
Are there any standard Idocs or Bapis for posting the data into transaction
Hi,
Are there any standard Idocs or Bapis for posting the data into transactions ME42N and IK11?
Thank You.
Thank you.
Any idea of the other one? -
Populating the data into a database table which I created, using only reports
Can anybody tell me the approach? If possible, please help me out with the coding as well, but using only general reports.
Are you talking about loading data into an internal table from a database table, or populating a database table?
If the former, the previous response is correct.
If you are attempting to load data into a database table, there are a number of approaches, including CATT and BDC.
I have frequently used BDC.
Both approaches have the data originating in a flat file, such as a tab-delimited file output from Excel, with the data to be loaded into the table.
The program then loads the data from the PC and creates a BDC session, which simulates data input into the appropriate transaction for updating the table in question.
You can also use a more straightforward technique if you are loading a Z table that does not require the standard edit checking and control you get with CATT or BDC.
Please clarify your request.
Good luck
Brian -
What is the procedure to transfer the data to an IDoc?
Hi Abapers,
What is the procedure to transfer the data to an IDoc?
I have added some new fields in my program, so now I need to transfer those same fields into the IDoc.
Can anybody tell me the procedure?
Points will be given.
Hi,
First of all, you have to find an EXIT in the driver program. In the exit, you have to populate the values into the new segments which you created.
Creation of custom idoc includes the following steps:
Create Segment ( WE31)
Create Idoc Type ( WE30)
Create Message Type ( WE81)
Assign Idoc Type to Message Type ( WE82)
Creating a Segment
Go to transaction code WE31
Enter the name for your segment type and click on the Create icon Type the short text Enter the variable names and data elements Save it and go back Go to Edit -> Set Release Follow steps to create more number of segments
Create IDOC Type
Go to transaction code WE30
Enter the Object Name, select Basic type and click the Create icon. Select the create new option, enter a description for your basic IDoc type and press Enter. Select the IDoc name and click the Create icon. The system prompts you to enter a segment type and its attributes. Choose the appropriate values and press Enter. The system transfers the name of the segment type to the IDoc editor.
Follow the same steps to add more segments, either under the parent or in a parent-child relation.
Save it and go back
Go to Edit -> Set release
Create Message Type
Go to transaction code WE81
Switch from Display mode to Change mode. The system will display the message "The table is cross-client (see Help for further info)"; press Enter. Click New Entries to create a new Message Type. Fill in the details, save it and go back.
Assign Message Type to IDoc Type
Go to transaction code WE82
Switch from Display mode to Change mode. The system will display the message "The table is cross-client (see Help for further info)"; press Enter.
Click New Entries to create the new assignment.
Fill in the details.
Save it and go back -
Can anyone confirm the date used for pushing the data into the AR interface table? Is it based on the actual ship date or the scheduled ship date? We are facing a scenario where the trx date is earlier than the actual ship date, which logically sounds incorrect.
Appreciate any quick response on this.
Hi,
The transaction date will be the autoinvoice master program submission-level date (if you haven't set up any logic).
Please check the program-level default date; if the user enters an old date, the system will pick that one.
Customer is trying to set the value of the profile 'OM: Set Receivables Transaction Date as Current Date for Non-Shippable Lines' at the responsibility level. The system does not set the transaction date to the current date in ra_interface_lines_all.
CAUSE
Customer has used the functionality in R11i. But after the upgrade to R12, the system functions differently than R11i.
SOLUTION
1. Ensure that there are no scheduled 'Workflow Background Process' runs.
2.Set the profile "OM: Set Receivables Transaction Date as Current Date for Non-Shippable Lines" at Responsibility level only as Yes.
3.Now switch the responsibility to which the profile is set.
4.Create order for Non-Shippable Lines and progress it to invoicing.
5.Ensure that the 'workflow background process' concurrent program is run in the same responsibility and this line is considered in it.
6. Now check whether 'SHIP_DATE_ACTUAL' is populated in ra_interface_lines_all -
Unable to load the data into PSA
Hi Experts,
I am unable to load the data into the PSA.
My source system is not R/3; it is BI.
(Actually, we are extracting the data from a cube into a table with the help of a program; then I am creating a generic DataSource on that table and loading the data into my cube.)
i am getting this error message.
Error message from the source system
Diagnosis
An error occurred in the source system.
System Response
Caller 09 contains an error message.
Further analysis:
The error occurred in Extractor .
Refer to the error message.
Procedure
How you remove the error depends on the error message.
Note
If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the
Then I replicated the DataSource and activated it, but I am still getting this error message.
When I check my DataSource in RSA3, I get this error message:
Two internal tables are neither compatible nor convertible.
What happened?
Error in the ABAP Application Program
The current ABAP program "SAPLAQBWEXR" had to be terminated because it has
come across a statement that unfortunately cannot be executed.
Error analysis
You attempted to move one data object to another.
This is not possible here because the internal tables concerned
are neither compatible nor convertible.
Trigger Location of Runtime Error
Program SAPLAQBWEXR
Include LAQBWEXRU01
Row 419
Module type (FUNCTION)
Module Name AQBW_CALL_EXTRACTOR_QUERY
Regards,
sat534
Hi,
The problem looks to be with the generic DataSource.
Please share the details of the DataSource and how you created it.
Regards
Sudeep -
Hi,
I am new to XML.
Can anyone please help me write a procedure to load the data into a table, using XML as the input parameter to the procedure? The XML file I receive as input is shown below.
<?xml version="1.0"?>
<DiseaseCodes>
<Entity><dcode>0</dcode><ddesc>(I87)Other disorders of veins - postphlebitic syndrome</ddesc><claimid>34543></claimid><reauthflag>0</reauthflag></Entity>
<Entity><dcode>0</dcode><ddesc>(J04)Acute laryngitis and tracheitis</ddesc><claimid>34543></claimid><reauthflag>0</reauthflag></Entity>
<Entity><dcode>0</dcode><ddesc>(J17*)Pneumonia in other diseases - whooping cough</ddesc><claimid>34543></claimid><reauthflag>0</reauthflag></Entity>
</DiseaseCodes>.
Regards,
Vikram.
Here is your XML parsed in 11g:
select *
from xmltable('//Entity' passing xmltype(
'<?xml version="1.0"?>
<DiseaseCodes>
<Entity><dcode>0</dcode><ddesc>(I87)Other disorders of veins - postphlebitic syndrome</ddesc><claimid>34543></claimid><reauthflag>0</reauthflag></Entity>
<Entity><dcode>0</dcode><ddesc>(J04)Acute laryngitis and tracheitis</ddesc><claimid>34543></claimid><reauthflag>0</reauthflag></Entity>
<Entity><dcode>0</dcode><ddesc>(J17*)Pneumonia in other diseases - whooping cough</ddesc><claimid>34543></claimid><reauthflag>0</reauthflag></Entity>
</DiseaseCodes>
') columns
"dcode" varchar2(4000) path '/Entity/dcode',
"ddesc" varchar2(4000) path '/Entity/ddesc',
"reauthflag" varchar2(4000) path '/Entity/reauthflag');
dcode ddesc reauthflag
0 (I87)Other disorders of veins - postphlebitic syndrome 0
0 (J04)Acute laryngitis and tracheitis 0
0 (J17*)Pneumonia in other diseases - whooping cough 0
SQL>
Using this parser you can create procedure as
SQL> create or replace procedure myXMLParse(x clob) as
2 begin
3 insert into MyXmlTable
4 select *
5 from xmltable('//Entity' passing xmltype(x) columns "dcode"
6 varchar2(4000) path '/Entity/dcode',
7 "ddesc" varchar2(4000) path '/Entity/ddesc',
8 "reauthflag" varchar2(4000) path '/Entity/reauthflag');
9 commit;
10 end;
11
12 /
Procedure created
SQL>
SQL>
SQL> exec myXMLParse('<?xml version="1.0"?><DiseaseCodes><Entity><dcode>0</dcode><ddesc>(I87)Other disorders of veins - postphlebitic syndrome</ddesc><claimid>34543></claimid><reauthflag>0</reauthflag></Entity><Entity><dcode>0</dcode><ddesc>(J04)Acute laryngitis and tracheitis</ddesc><claimid>34543></claimid><reauthflag>0</reauthflag></Entity><Entity><dcode>0</dcode><ddesc>(J17*)Pneumonia in other diseases - whooping cough</ddesc><claimid>34543></claimid><reauthflag>0</reauthflag></Entity></DiseaseCodes>');
PL/SQL procedure successfully completed
SQL> select * from MYXMLTABLE;
dcode ddesc reauthflag
0 (I87)Other disorders of veins - postphlebitic syndrome 0
0 (J04)Acute laryngitis and tracheitis 0
0 (J17*)Pneumonia in other diseases - whooping cough 0
SQL>
SQL>
Ramin Hashimzade -
How to create a procedure in oracle to write the data into file
Hi All,
I am just wondered on how to create a procedure which will do following tasks:
1. Concat the field names
2. Union all the particular fields
3. Convert the date field into IST
4. Prepare the statement
5. write the data into a file
Basically what I am trying to achieve is to convert one mysql proc to oracle. MySQL Proc is as follows:
DELIMITER $$
USE `jioworld`$$
DROP PROCEDURE IF EXISTS `usersReport`$$
CREATE DEFINER=`root`@`%` PROCEDURE `usersReport`(IN pathFile VARCHAR(255),IN startDate TIMESTAMP,IN endDate TIMESTAMP )
BEGIN
SET @a= CONCAT("(SELECT 'User ID','Account ID','Gender','Birthdate','Account Registered On') UNION ALL (SELECT IFNULL(a.riluid,''),IFNULL(a.rilaccountid,''),IFNULL(a.gender,''),IFNULL(a.birthdate,''),IFNULL(CONVERT_TZ(a.creationDate,'+0:00','+5:30'),'') INTO OUTFILE '",pathFile,"' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '' LINES TERMINATED BY '\n' FROM account_ a where a.creationDate>='",startDate,"' and a.creationdate <='",endDate,"')");
PREPARE stmt FROM @a;
EXECUTE stmt;
DEALLOCATE PREPARE stmt ;
END$$
DELIMITER ;
Regards,
Vishal G
1. Concat the field names
Double Pipe (||) is the concatenation operator in Oracle. There is also a function CONCAT for this purpose
2. Union all the particular fields
Not sure what you mean by 'UNION ALL the particular fields'? UNION ALL is a set operation applied to two different result sets that have the same projection.
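For example, a minimal sketch of the set operation (SQLite driven from Python; the tables and names are invented purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE managers (name TEXT)")
conn.execute("CREATE TABLE staff (name TEXT)")
conn.executemany("INSERT INTO managers VALUES (?)", [('Ana',)])
conn.executemany("INSERT INTO staff VALUES (?)", [('Ben',), ('Ana',)])

# UNION ALL stacks two result sets with the same projection,
# keeping duplicates; plain UNION would remove them.
rows = conn.execute(
    "SELECT name FROM managers UNION ALL SELECT name FROM staff"
).fetchall()
# 'Ana' appears twice because duplicates are kept
```

This is the same trick the MySQL procedure uses: a one-row header SELECT is UNION ALL'd on top of the data rows so the header ends up as the first line of the output file.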
3. Convert the date field into IST
SQL> select systimestamp "Default Time"
2 , systimestamp at time zone 'Asia/Calcutta' "IST Time"
3 from dual;
Default Time IST Time
05-05-15 03:14:52.346099 AM -04:00 05-05-15 12:44:52.346099 PM ASIA/CALCUTTA
4. Prepare the statement
What do you mean by prepare the statement?
5. write the data into a file
You can use the API UTL_FILE to write to a file. -
Insert the data into two tables at a time.
Hi ,
i have these two tables
create table [dbo].[test1](
[test1_id] [int] identity(1,1) primary key,
[test2_id] [int] not null
)
create table [dbo].[test2](
[test2_id] [int] identity(1,1) primary key,
[test1_id] [int] not null
)
alter table [dbo].[test1]
add constraint [fk_test1_test2_id] foreign key([test2_id])
references [dbo].[test2] ([test2_id])
alter table [dbo].[test2] add constraint [fk_test2_test1_id] foreign key([test1_id])
references [dbo].[test1] ([test1_id])
I want to insert the data into two tables in one insert statement. How can I do this using T-SQL?
Thanks in advance.
You can INSERT into both tables within one transaction, but not in one statement. By the way, you would need to alter your dbo.test1 table to allow NULL in the test2_id column for the first INSERT.
See sample code below:
CREATE TABLE #test1(test1_ID INT IDENTITY(1,1),test2_id INT NULL)
CREATE TABLE #test2(test2_ID INT IDENTITY(1,1),test1_ID INT)
DECLARE @Test1dentity INT
DECLARE @Test2dentity INT
BEGIN TRAN
-- Insert NULL as test2_ID value is unknown
INSERT INTO #test1(test2_ID)
SELECT NULL;
-- get inserted identity value
SET @Test1dentity = SCOPE_IDENTITY();
INSERT INTO #test2(test1_ID)
SELECT @Test1dentity;
-- get inserted identity value
SET @Test2dentity = SCOPE_IDENTITY();
-- Update test1 table
UPDATE #test1
SET test2_ID = @Test2dentity
WHERE test1_ID = @Test1dentity;
COMMIT
SELECT * FROM #test1;
SELECT * FROM #test2;
-- Drop temp tables
IF OBJECT_ID('tempdb..#test1') IS NOT NULL
BEGIN
DROP TABLE #test1
END
IF OBJECT_ID('tempdb..#test2') IS NOT NULL
BEGIN
DROP TABLE #test2
END
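The same insert-then-update pattern for a circular foreign-key pair, sketched in Python with SQLite (illustrative only; SQLite's `lastrowid` plays the role of SCOPE_IDENTITY, and the FK constraints are omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE test1 (test1_id INTEGER PRIMARY KEY AUTOINCREMENT, test2_id INT NULL);
CREATE TABLE test2 (test2_id INTEGER PRIMARY KEY AUTOINCREMENT, test1_id INT);
""")
cur = conn.cursor()
# Step 1: insert into test1 with NULL, since test2_id is not known yet
cur.execute("INSERT INTO test1 (test2_id) VALUES (NULL)")
t1_id = cur.lastrowid
# Step 2: insert into test2 pointing at the new test1 row
cur.execute("INSERT INTO test2 (test1_id) VALUES (?)", (t1_id,))
t2_id = cur.lastrowid
# Step 3: patch test1 to close the circular reference
cur.execute("UPDATE test1 SET test2_id = ? WHERE test1_id = ?", (t2_id, t1_id))
conn.commit()
```

The key design point is the same as in the T-SQL above: one row must be created incomplete, and the cycle is closed with an UPDATE inside the same transaction.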
web: www.ronnierahman.com -
Error while writing the data into a file. Can you please help with this?
I am getting the following error while writing the data into the file:
<bindingFault xmlns="http://schemas.oracle.com/bpel/extension">
<part name="code">
<code>null</code>
</part>
<part name="summary">
<summary>file:/C:/oracle/OraBPELPM_1/integration/orabpel/domains/default/tmp/
.bpel_MainDispatchProcess_1.0.jar/IntermediateOutputFile.wsdl
[ Write_ptt::Write(Root-Element) ] - WSIF JCA Execute of operation
'Write' failed due to: Error in opening
file for writing. Cannot open file:
C:\oracle\OraBPELPM_1\integration\jdev\jdev\mywork\
BPEL_Import_with_Dynamic_Transformation\WORKDIRS\SampleImportProcess1\input for writing. ;
nested exception is: ORABPEL-11058 Error in opening file for writing.
Cannot open file: C:\oracle\OraBPELPM_1\integration\jdev\jdev\mywork\
BPEL_Import_with_Dynamic_Transformation
\WORKDIRS\SampleImportProcess1\input for writing. Please ensure 1.
Specified output Dir has write permission 2.
Output filename has not exceeded the max chararters allowed by the
OS and 3. Local File System has enough space
.</summary>
</part>
<part name="detail">
<detail>null</detail>
</part>
</bindingFault>
Hi there,
Have you verified the suggestions in the error message?
Cannot open file: C:\oracle\OraBPELPM_1\integration\jdev\jdev\mywork\BPEL_Import_with_Dynamic_Transformation\WORKDIRS\SampleImportProcess1\input for writing.
Please ensure
1. Specified output Dir has write permission
2. Output filename has not exceeded the max characters allowed by the OS and
3. Local File System has enough space
I am also curious why you are writing to a directory with the name "..\SampleImportProcess1\input" ? -
Error while activating the data into DSO
Hi
My base DSO is used to load 4 other data targets.
In process chain, after the base DSO gets activated there are 4 DTPu2019s running to load the data from base DSO to other DSO and 3 cubes.
When loading to other DSO, We have encountered an error
Object is currently locked by BI Remote
Lock not set for : Activating data in DSO
Activation of M records terminated.
1. My question is when loading the data from base DSO to other objects , how does the lock mechanism works.
I know that we cannot load the data into base DSO, when base DSO is sending data into target.
2. What difference does it make when loading DSO to DSO and cube parallel?
Thanks
Annie
Hi Annie,
1. My question is when loading the data from base DSO to other objects , how does the lock mechanism works.
I know that we cannot load the data into base DSO, when base DSO is sending data into target.
Do you mean to say that the loading in the 2nd level DSO was successful .....but the activation failed ?
Have you checked in SM12 that whether that 2nd level DSO is somehow locked or not ?
Is any further targets getting loaded from this 2nd level DSO ?
Suppose you are loading DSO A, and in the meantime a load starts from DSO A to some other target (a DSO or a cube). Then the activation in DSO A will fail: since the last request in DSO A is not yet activated, that request will not be considered in the subsequent load, and since the load is already in progress, the system will not allow any new request to be activated.
Another possibility is that DSO A is also being loaded from some other source; since that load is still in progress in this target, it will not allow the activation.
So check this and start the activation again.
2. What difference does it make when loading DSO to DSO and cube parallel?
The main difference is that there is no activation concept in a cube, so a cube can be loaded from several sources in parallel.
A DSO can also be loaded in parallel, but activation should start only once all the loads have completed successfully.
Regards,
Debjani.... -
Error While loading the data into PSA
Hi Experts,
I have already loaded the data into my cube, but it doesn't have values for some fields. So I modified the data in the flat file again and tried to load it into the PSA. But while starting the InfoPackage, I got an error saying:
"Check Load from InfoSource
Created YOKYY on 20.02.2008 12:44:33
Check Load from InfoSource , Packet IP_DS_C11
Please execute the mail for additional information.
Error message from the source system
Diagnosis
An error occurred in the source system.
System Response
Caller 09 contains an error message.
Further analysis:
The error occurred in Extractor .
Refer to the error message.
Procedure
How you remove the error depends on the error message.
Note
If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request.
Please help me with this.
With Regards,
Yokesh.
Hi,
After editing the file, did you save the file and close it?
This error may occur if your file was open at the time of the request.
Also, did you check the file path settings?
If everything is correct, try saving the InfoPackage once and loading again.
Thanks,
JituK -
Custom PL/SQL API that inserts the data into a custom interface table.
We are developing a custom Web ADI integrator for importing suppliers into Oracle.
The Web ADI interface is a custom PL/SQL API that inserts the data into a custom interface table. We have defined the content, uploader and an importer. The importer is again a custom PL/SQL API that will process the records inserted into the custom table and updates the STATUS column of the custom interface table. We want to show the status column back on the spreadsheet.
Defined the 'Document Row' import rule and added the rows that would identify the unique record.
For the errored-row import rule, we are using SELECT * FROM custom_table WHERE status <> 'Success' AND vendor_name = $param$.vendor_name
The source of this parameter is import.vendor_name
We have also defined an Error lookup.
After the above setup is completed, we invoke the create document and click on Oracle->Upload.
The records are getting imported, but the importer program is failing with 'An error has occurred while running an API import. The ERRORED_ROWS step 20003:ER_500141, parameter number 1 must contain the value BIND in attribute 1.'
The same issue persists. Need help.
Also checked bne.log; no additional information.
<bne:document xmlns:bne="http://www.oracle.com/bne">
<bne:message bne:type="DATA" bne:text="BNE_VALID_ROW_COUNT" bne:value="11" />
<bne:message bne:type="DATA" bne:text="BNE_INVALID_ROW_COUNT" bne:value="0" />
<bne:message bne:type="ERROR" bne:text="An error has occurred while running an API import"
bne:cause="The ERRORED_ROWS step 20003:ER_500165, parameter number 1 must contain the value BIND in attribute 1."
bne:action="" bne:source="BneAPIImporter" >
<bne:context bne:collection="collection_1" />
</bne:message><bne:message bne:type="STATUS"
bne:text="No rows uploaded" bne:value="" >
<bne:context bne:collection="collection_1" /></bne:message>
<bne:message bne:type="STATUS" bne:text="0 rows were invalid" bne:value="" >
<bne:context bne:collection="collection_1" /></bne:message></bne:document>