CSV file generation issue
Hello All,
We are facing the below issue during CSV file generation:
The generated file shows the field value as 8.73E+11 in the output, and when we click inside this column the result shown is an approximation of the correct value, like 873684000000. We wish to view the correct value, 872684000013.
The values passed from the report program during file generation are correct.
Please advise how to resolve this issue.
Thanks in Advance.
There is nothing wrong with your program; it is a property of Excel that when the value in a cell is wider than the default column width, it is displayed in that (scientific) format.
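If the file must be opened in Excel with all digits visible, one common workaround is to emit the value as an Excel text formula instead of a bare number. A minimal sketch in Python (the column names here are made up for illustration):

```python
import csv
import io

# Long numeric IDs that Excel would otherwise render as 8.73E+11.
rows = [("872684000013", "sample row")]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "description"])
for id_value, description in rows:
    # Writing the value as an Excel formula ="..." forces Excel to
    # treat it as text, so no digits are lost to scientific notation.
    writer.writerow(['="{}"'.format(id_value), description])

print(buf.getvalue())
```

Note that downstream systems which parse the CSV directly will see the formula wrapper, so this only suits files destined for Excel.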
Similar Messages
-
Hi All,
We have IDoc to File(CSV) Scenario.
Target Field Values have comma(,) character in it. Field Separator : comma(,)
Fields containing a comma disturb the file generation sequence.
eg.,
Header
field 1, field 2, field 3, field 4
field 1=test
field 2=sample
field 3=firstname,lastname
field4 = address
Output CSV:
field1, field2 , field 3, field 4
test,sample,firstname,lastname,address
The field 3 value has been split into two. How can this case be handled? Kindly help.
Best Regards,
Suresh S
Hi,
Including double quotes at mapping level, together with the following FCC parameters, helped to resolve that issue.
However, we need to strip the double quotes from the field again before posting it to the end application, which can be handled through FTP module-level configuration.
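For illustration only (this is not the PI/FCC mechanism itself), the round trip being described — quote on write, strip on read — can be sketched with Python's csv module:

```python
import csv
import io

# Field 3 contains the separator character itself.
record = ["test", "sample", "firstname,lastname", "address"]

buf = io.StringIO()
# QUOTE_MINIMAL quotes only the fields that contain the delimiter,
# so field 3 stays in a single column when the file is parsed.
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(record)

# A standard CSV parse strips the quotes again, which is what the
# receiving application expects to see.
restored = next(csv.reader(io.StringIO(buf.getvalue())))
```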
Does anyone have an idea of a standard adapter module which handles my requirement?
Best Regards,
Suresh S -
Excel 2007 csv file formatting issue
Our users create .csv files for upload to SAP. Their habit is to include a number of blank lines in excel to make it more readable.
In Excel 2003, blank lines were handled as, literally, blank lines, and opening in a text editor shows exactly that, a blank line (with a CR-LF character to terminate the row).
In Excel 2007, however, the blank line consists of a number of commas equal to the number of columns, followed by the CR-LF terminator. Hope that makes sense.
While the 2003-generated CSVs are fine, the 2007 versions cause SAP to throw an exception ("Session never created from RFBIBL00") and the upload fails. The question, therefore, is: has anyone come across anything similar, or is anyone aware of any possible remediation? I haven't been able to find any documentation on this Excel 2003-to-2007 change, so I am not able to address the issue through Excel configuration.
Thanks!
Duncan
Hello,
Please refer to consulting note 76016, which provides information on the performance of the standard program RFBIBL00.
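If pre-processing the files before upload is an option, the comma-only "blank" rows that Excel 2007 writes can be stripped out; a sketch in Python (the function and file names are hypothetical):

```python
import csv

def strip_blank_rows(in_path, out_path):
    """Copy a CSV, dropping rows whose cells are all empty.

    Excel 2007 writes a 'blank' line as one empty cell per column
    (e.g. ",,,"), which parses as a list of empty strings.
    """
    with open(in_path, newline="") as src, \
         open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            if any(cell.strip() for cell in row):
                writer.writerow(row)
```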
Regards. -
Ssrs 2008 export to csv file display issue
In a new SSRS 2008 report, I would like to know if there is a way to automatically expand the width of some of the columns when the data is exported to a CSV file, so the data is displayed correctly. Here are examples of what I am referring to:
1. In one column where there is supposed to be a date that looks like 12/11/2014, the value ########## is displayed. The value 12/11/2014 is what is set up in the SSRS formatting option.
2. In a number field that is supposed to look like 6039267049, the value displayed is 6E+09.
Basically, if I manually expand the width of the columns I am referring to above, the data is displayed correctly. Can you tell me what I can do so that the data is displayed correctly in the CSV file and the user does not need to manually expand the column to see the data?
Hi Wendy,
After testing the issue in my local environment, I can reproduce it when using Excel to open the CSV file. As per my understanding, no column width is exported to CSV; Excel just uses its default cell sizes when opening the file. So when a date value is wider than the default cell size, it is displayed as ##########, and when a number value is wider than the default cell size, it is shown in scientific format.
For the date value, we can use the expression =cstr(Fields!Date.Value) to replace the former =Fields!Date.Value. This way the value is narrower than the default cell size, so the date is displayed correctly. For the number value, Excel has already narrowed the width by using scientific format; alternatively, we can select all the cells in the CSV file and click Format | AutoFit Column Width to fit every column to its values at the Excel level.
Besides, we can try exporting to Excel instead of CSV. The Excel format inherits the column widths from the report, so we can set the widths to fit the values at the Reporting Services level.
Hope this helps.
Thanks,
Katherine Xiong
TechNet Community Support -
Hi, can you reply to the post below?
Actually, I am using a flat file connection manager to read the CSV file, with the row delimiter set to {CR}{LF}.
Suddenly, while looping through the files, the package failed because it could not read the CSV file.
I then changed the row delimiter to {LF}, and it worked for the file that had failed with the {CR}{LF} delimiter.
Now I want to know why the package is failing on the row delimiter.
Can anyone help me with this?
Please share what the difference between those actually is.
CR = Carriage Return = CHAR(13) in SQL
This character is used in classic Mac OS as the newline.
When this character is used, the cursor returns to the first position of the line.
LF = Line Feed = CHAR(10) in SQL
This character is used in Unix as the newline.
When this character is used, the cursor moves to the next line (as on old typewriters, when the paper moved up).
CR LF
The newline in Windows systems: a combination of CR and LF.
The best thing is to open the test flat file in Notepad++ and enable Show Symbols > Show All Characters to see exactly what you have as the row delimiter.
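The same check can be done programmatically; a small Python sketch that counts each delimiter style in the raw bytes of a file:

```python
def count_line_endings(data: bytes) -> dict:
    """Count CRLF, bare LF, and bare CR occurrences in raw bytes."""
    crlf = data.count(b"\r\n")
    lf = data.count(b"\n") - crlf   # LF not preceded by CR
    cr = data.count(b"\r") - crlf   # CR not followed by LF
    return {"CRLF": crlf, "LF": lf, "CR": cr}

# A Unix-style file: only bare LF terminators, so a {CR}{LF}
# row delimiter in the connection manager would fail to match.
sample = b"col1,col2\nval1,val2\n"
print(count_line_endings(sample))
```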
Cheers,
Vaibhav Chaudhari
[MCTS],
[MCP] -
Spool output to .csv file - having issues with data display
Hi,
I need to deliver the output of a select query with around 80,000 records to a .csv file. A procedure is written for the select query, and the procedure is called in the spool script. But a few of the columns have a comma (,) in their values. For example, there is a personal_name column in the select query with names like "James, Ed". The output is then split across different columns, and the data for the remaining columns is shifted to the right.
Could someone help fix this issue? I used a procedure because the select query is about three pages long, and I want the script to look clean.
Script is,
set AUTOPRINT ON ;
set heading ON;
set TRIMSPOOL ON ;
set colsep ',' ;
set linesize 1000 ;
set PAGESIZE 80000 ;
variable main_cursor refcursor;
set escape /
spool C:\documents\querys\personal_info.csv
EXEC proc_personal_info(:main_cursor);
spool off;
Hi,
set PAGESIZE 80000 is not valid, and the header will be printed every 14 rows by default.
You can avoid printing the header in this way:
set AUTOPRINT ON ;
set heading ON;
set TRIMSPOOL ON ;
set colsep ',' ;
set linesize 1000 ;
set PAGESIZE 0 ;
set escape /
set feedback off
spool c:\temp\empspool.csv
SELECT '"'||ename||'"', '"'||job||'"'
FROM emp;
spool off
The output will look like this in this case:
"SMITH" ,"CLERK"
"ALLEN" ,"SALESMAN"
"WARD" ,"SALESMAN"
"JONES" ,"MANAGER"
"MARTIN" ,"SALESMAN"
"BLAKE" ,"MANAGER"
"CLARK" ,"MANAGER"
"SCOTT" ,"ANALYST"
"KING" ,"PRESIDENT"
"TURNER" ,"SALESMAN"
"ADAMS" ,"CLERK"
"JAMES" ,"CLERK"
"FORD" ,"ANALYST"
"MILLER" ,"CLERK"You can also consider creating a unique column by concatenating the columns in this way:
spool c:\temp\empspool.csv
SELECT '"'||ename||'","'||job||'"'
In this case the output will have no spaces between columns:
"SMITH","CLERK"
"ALLEN","SALESMAN"
"WARD","SALESMAN"
"JONES","MANAGER"
"MARTIN","SALESMAN"
"BLAKE","MANAGER"
"CLARK","MANAGER"
"SCOTT","ANALYST"
"KING","PRESIDENT"
"TURNER","SALESMAN"
"ADAMS","CLERK"
"JAMES","CLERK"
"FORD","ANALYST"
"MILLER","CLERK"Regards.
Al
Edited by: Alberto Faenza on May 2, 2013 5:48 PM -
CSV file generation with timestamp time details
CREATE OR REPLACE PROCEDURE extract_proc AS
l_rows number;
begin
l_rows := extract_function('select * from table1 where date1 between trunc(sysdate) and sysdate',
'MYDIR1',
'test.csv');
end;
I have a procedure extract_proc, shown above, which creates a CSV file in the directory MYDIR1 on Unix.
I would like to have a timestamp (e.g. ddmmyyyyHH24MI) in the generated CSV file name, i.e. a file name like test22072011.csv whenever it is generated, with the timestamp prefixed or suffixed to the file name. Is there any way to achieve this?
Pls suggest
Many thanks
Why don't you set your file name dynamically? Something like:
AS
file_name VARCHAR2(50);
BEGIN
file_name := 'test' || TO_CHAR(SYSDATE,'YYYYMMDDHH24MISS') || '.csv';
extract_function(...,file_name);
END;
/ -
Problem in csv file generation
Hi ,
I am trying to spool a query in CSV format using "colsep ,",
but it causes a problem for text values: for example, I have a sql_text column which gives the SQL text, and in the generated CSV file the select statement spills into the next columns wherever a comma occurs.
Is there any way to format the CSV so that the entire sql_text value stays in one column?
Thanks
Rakesh
set echo OFF pages 50000 lin 32767 feed off heading ON verify off newpage none trimspool on
define datef=&1
define datet=&2
set colsep ','
spool querries.csv
SELECT s.parsing_schema_name,
p.instance_number instance_number,
s.sql_id sql_id,
x.sql_text sql_text,
p.snap_id snap_id,
TO_CHAR (p.begin_interval_time,'mm/dd/yyyy hh24:mi') begin_interval_time,
TO_CHAR (p.end_interval_time,'mm/dd/yyyy hh24:mi') end_interval_time,
s.elapsed_time_delta / DECODE (s.executions_delta, 0, 1, s.executions_delta) / 1000000 elapsed_time_per_exec,
s.elapsed_time_delta / 1000000 elapsed_time,
s.executions_delta executions, s.buffer_gets_delta buffer_gets,
s.buffer_gets_delta / DECODE (s.executions_delta, 0, 1, s.executions_delta) buffer_gets_per_exec,
module module
FROM dba_hist_sqlstat s, dba_hist_snapshot p, dba_hist_sqltext x
WHERE p.snap_id = s.snap_id
AND p.dbid = s.dbid
AND p.instance_number = s.instance_number
AND p.begin_interval_time >
TO_TIMESTAMP ('&datef','yyyymmddhh24miss')
AND p.begin_interval_time <
TO_TIMESTAMP ('&datet','yyyymmddhh24miss')
AND s.dbid = x.dbid
AND s.sql_id = x.sql_id
ORDER BY instance_number, elapsed_time_per_exec DESC ;
SPOOL OFF;
exit; -
BO 4.0 save as csv file format issue
Hi All,
We are using BO 4.0 Webi for reporting on SAP BW 7.3 system. Some of our reports have to be scheduled to bring the output in CSV file format. When I schedule the report in CSV format, the final output has data in two sets. The first set has the list of columns which I have selected in my report. Starting on the next row I get to see the second set of data with all the objects selected in the query panel including the detail objects.
We only need the data for the columns selected in the report, but it brings a dump of all the objects in the data provider.
Can anyone tell me how to get rid of the second set of data in the same csv file ?
Thanks,
Prasad
Hi,
The CSV format is reserved for 'data only' data provider dumps: it exports the entire WebI microcube (the query results).
You don't get that option when using 'Save As', which preserves the report formatting; in that case you should consider .xls instead.
regards,
H -
Hi All,
Can we have a comma included in the csv generated file?
Suppose query contains comma
fieid1 field2
Pickle This pickle, must be sour.
sweet More sweets, leads to diabetes.
My CSV should have two fields, not three, but I am getting three fields in the CSV file because of the comma in field2. How can we resolve this?
field1 field2 field3
Pickle This pickle must be sour.
sweet More sweets leads to diabetes.
Hi, can anybody help with this?
Thanks
The same way as you put any other character in the file...
As sys user:
CREATE OR REPLACE DIRECTORY TEST_DIR AS '/tmp/myfiles'
/
GRANT READ, WRITE ON DIRECTORY TEST_DIR TO myuser
/
As myuser:
CREATE OR REPLACE PROCEDURE run_query(p_sql IN VARCHAR2
,p_dir IN VARCHAR2
,p_header_file IN VARCHAR2
,p_data_file IN VARCHAR2 := NULL) IS
v_finaltxt VARCHAR2(4000);
v_v_val VARCHAR2(4000);
v_n_val NUMBER;
v_d_val DATE;
v_ret NUMBER;
c NUMBER;
d NUMBER;
col_cnt INTEGER;
f BOOLEAN;
rec_tab DBMS_SQL.DESC_TAB;
col_num NUMBER;
v_fh UTL_FILE.FILE_TYPE;
v_samefile BOOLEAN := (NVL(p_data_file,p_header_file) = p_header_file);
BEGIN
c := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(c, p_sql, DBMS_SQL.NATIVE);
d := DBMS_SQL.EXECUTE(c);
DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
FOR j in 1..col_cnt
LOOP
CASE rec_tab(j).col_type
WHEN 1 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,2000);
WHEN 2 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_n_val);
WHEN 12 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_d_val);
ELSE
DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,2000);
END CASE;
END LOOP;
-- This part outputs the HEADER
v_fh := UTL_FILE.FOPEN(upper(p_dir),p_header_file,'w',32767);
FOR j in 1..col_cnt
LOOP
v_finaltxt := ltrim(v_finaltxt||','||lower(rec_tab(j).col_name),',');
END LOOP;
-- DBMS_OUTPUT.PUT_LINE(v_finaltxt);
UTL_FILE.PUT_LINE(v_fh, v_finaltxt);
IF NOT v_samefile THEN
UTL_FILE.FCLOSE(v_fh);
END IF;
-- This part outputs the DATA
IF NOT v_samefile THEN
v_fh := UTL_FILE.FOPEN(upper(p_dir),p_data_file,'w',32767);
END IF;
LOOP
v_ret := DBMS_SQL.FETCH_ROWS(c);
EXIT WHEN v_ret = 0;
v_finaltxt := NULL;
FOR j in 1..col_cnt
LOOP
CASE rec_tab(j).col_type
WHEN 1 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_v_val);
v_finaltxt := ltrim(v_finaltxt||',"'||v_v_val||'"',',');
WHEN 2 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_n_val);
v_finaltxt := ltrim(v_finaltxt||','||v_n_val,',');
WHEN 12 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_d_val);
v_finaltxt := ltrim(v_finaltxt||','||to_char(v_d_val,'DD/MM/YYYY HH24:MI:SS'),',');
ELSE
v_finaltxt := ltrim(v_finaltxt||',"'||v_v_val||'"',',');
END CASE;
END LOOP;
-- DBMS_OUTPUT.PUT_LINE(v_finaltxt);
UTL_FILE.PUT_LINE(v_fh, v_finaltxt);
END LOOP;
UTL_FILE.FCLOSE(v_fh);
DBMS_SQL.CLOSE_CURSOR(c);
END;
This allows the header row and the data to be written to separate files if required.
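For comparison, the same describe-columns-then-write pattern looks like this in Python's DB-API, sketched here with an in-memory sqlite3 database standing in for the real one:

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT, sal REAL)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(7369, "SMITH", 800), (7499, "ALLEN", 1600)])

def query_to_csv(cur, sql, out):
    cur.execute(sql)
    writer = csv.writer(out)
    # cursor.description plays the role of DBMS_SQL.DESCRIBE_COLUMNS:
    # one entry per select-list column, with the name in position 0.
    writer.writerow(col[0] for col in cur.description)
    writer.writerows(cur)

buf = io.StringIO()
query_to_csv(conn.cursor(), "SELECT * FROM emp", buf)
print(buf.getvalue())
```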
e.g.
SQL> exec run_query('select * from emp','TEST_DIR','output.txt');
PL/SQL procedure successfully completed.
Output.txt file contains:
empno,ename,job,mgr,hiredate,sal,comm,deptno
7369,"SMITH","CLERK",7902,17/12/1980 00:00:00,800,,20
7499,"ALLEN","SALESMAN",7698,20/02/1981 00:00:00,1600,300,30
7521,"WARD","SALESMAN",7698,22/02/1981 00:00:00,1250,500,30
7566,"JONES","MANAGER",7839,02/04/1981 00:00:00,2975,,20
7654,"MARTIN","SALESMAN",7698,28/09/1981 00:00:00,1250,1400,30
7698,"BLAKE","MANAGER",7839,01/05/1981 00:00:00,2850,,30
7782,"CLARK","MANAGER",7839,09/06/1981 00:00:00,2450,,10
7788,"SCOTT","ANALYST",7566,19/04/1987 00:00:00,3000,,20
7839,"KING","PRESIDENT",,17/11/1981 00:00:00,5000,,10
7844,"TURNER","SALESMAN",7698,08/09/1981 00:00:00,1500,0,30
7876,"ADAMS","CLERK",7788,23/05/1987 00:00:00,1100,,20
7900,"JAMES","CLERK",7698,03/12/1981 00:00:00,950,,30
7902,"FORD","ANALYST",7566,03/12/1981 00:00:00,3000,,20
7934,"MILLER","CLERK",7782,23/01/1982 00:00:00,1300,,10The procedure allows for the header and data to go to seperate files if required. Just specifying the "header" filename will put the header and data in the one file.
Adapt to output different datatypes and styles are required. -
I am trying to load data from a CSV file into an Oracle table.
The interface executes successfully, but the problem is:
there are 501 rows in the original CSV file,
yet when I load it into the file model it shows only 260 rows.
What is the problem? Why are not all the rows loaded?
Just forget about the interface.
I am creating a new datastore of file type.
In the resource name, I am giving my file's path.
When I reverse-engineer it and check the data, it shows only 260 rows.
But there are 501 records in my CSV file.
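One quick diagnostic, assuming the file can be inspected outside ODI: compare the number of physical lines with the number of records a CSV parser actually sees. Embedded newlines or unbalanced quotes inside a field make several physical lines collapse into one record, which is a common cause of this kind of shortfall:

```python
import csv
import io

def lines_vs_records(text: str):
    """Return (physical line count, parsed CSV record count)."""
    physical = text.count("\n")
    records = sum(1 for _ in csv.reader(io.StringIO(text)))
    return physical, records

# The quoted field spans two physical lines but is one record.
sample = 'id,comment\n1,"line one\nline two"\n2,plain\n'
print(lines_vs_records(sample))  # (4, 3)
```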
EJB 3.0: deployment/jar file generation issue
We have a web application project (ADF Faces) which depends on a model project (EJB 3.0 session/entity beans) that implements both business and persistence layers. This EJB project points to a utility jar file containing some classes we have built. When we run the web project from inside JDev, everything works perfectly, with no error. However, when generating EAR file (deploying web project to WAR) we experience an issue.
As expected, JDev generated the WAR file for the web app, the JAR file for the EJB, and finally the EAR file containing both. After that, we deployed the EAR file to our OAS 10.1.3 test instance (not standalone OC4J) and an error occurred. The error is due to the fact that JDev did not create the /lib subdirectory, with our utility JAR inside, under the /META-INF directory of the EJB project's JAR file.
We tried to generate a separate JAR file for the EJB project alone, with the same results.
Is this an IDE bug that does not package the application as it should, or could we be doing something wrong? If it is a bug, is there a workaround? Like I said, the application runs fine on the embedded server. Also, when the EJB application does not depend on external libraries, deployment to OAS 10.1.3 works perfectly. We have tested it!
Could anyone help us?
Thanks in advance.
Best Regards,
Gustavo
PS: Sorry for my English. :)
Hi Shay! I followed what you said and it worked. However, we had to manually edit the MANIFEST file of the EJB JAR in order to add a Class-Path entry mapping our utility JAR. IMHO, JDev should do this for us. Anyway, we were able to deploy the app successfully.
Best Regards,
Gustavo -
Temporary file Generation issue - file Receiver Adapter(NFS)
Hi Experts,
We are using Dynamic Configuration to generate the dynamic file name, as required, in the file receiver communication channel with transport protocol NFS. As per the requirement, we need to enable "Use Temporary File" in write mode under the processing parameters of the file receiver channel, since temporary file creation is one of the requirements of the target system.
Since we use dynamic configuration to generate the dynamic file name, I have set the following adapter-specific message attributes for the channel:
Use Adapter Specific Message Attribute
Fail if Adapter-Specific Message Attributes
File Name
Temporary Name Scheme for Target File Name
Now, at runtime, the channel gives the following error because of the last parameter:
" Could not process due to error: com.sap.aii.adapter.file.configuration.DynamicConfigurationException: The Adapter Message Property 'TargetTempFileName' was configured as mandatory element, but was not supplied in the XI Message header"
Please suggest a workaround so that we can use the temporary file name scheme together with dynamic file name generation.
Regards,
Gaurav Jindal
Hi,
Thanks for the reply.
If we are using the ASMA with the specific parameters:
Use Adapter Specific Message Attribute
Fail if Adapter-Specific Message Attributes
File Name
And I have chosen "Use Temporary File" in the write mode of the processing parameters.
As per your suggestion, how do we verify that the temporary file name is created? I am not able to see it in the audit log.
Regards,
Gaurav Jindal -
4010_850 EDI file Generation issue.
Hi All,
We are facing issue in converting 850 EDI XML to EDI file.
We could generate a simple EDI file based on the mandatory elements, but when we try to provide DTM and AMT values, we hit the issues below.
I must be missing something basic. Please find the EDI XML I am trying to convert to an EDI file:
<?xml version = '1.0' encoding = 'UTF-8'?><Transaction-850 xmlns:ns1="urn:oracle:b2b:X12/V4010/850" Standard="X12" xmlns="urn:oracle:b2b:X12/V4010/850">
<ns1:Segment-ST>
<ns1:Element-143>850</ns1:Element-143>
<ns1:Element-329>000000010</ns1:Element-329>
</ns1:Segment-ST>
<ns1:Segment-BEG>
<ns1:Element-353>00</ns1:Element-353>
<ns1:Element-92>NE</ns1:Element-92>
<ns1:Element-324>12345678</ns1:Element-324>
<ns1:Element-373>20140703</ns1:Element-373>
</ns1:Segment-BEG>
<ns1:Loop-PO1>
<ns1:Segment-PO1>
<ns1:Element-350>001</ns1:Element-350>
<ns1:Element-330>1</ns1:Element-330>
<ns1:Element-212>96</ns1:Element-212>
<ns1:Element-639>AA</ns1:Element-639>
<ns1:Element-235_1>VC</ns1:Element-235_1>
<ns1:Element-234_1>571157</ns1:Element-234_1>
<ns1:Element-235_2>CB</ns1:Element-235_2>
<ns1:Element-234_2>00100</ns1:Element-234_2>
</ns1:Segment-PO1>
<ns1:Loop-PID>
<ns1:Segment-PID>
<ns1:Element-349>F</ns1:Element-349>
<ns1:Element-352>Rockford product</ns1:Element-352>
</ns1:Segment-PID>
</ns1:Loop-PID>
<ns1:Segment-DTM>
<ns1:Element-374>038</ns1:Element-374>
<ns1:Element-373>20140626</ns1:Element-373>
</ns1:Segment-DTM>
<ns1:Loop-AMT>
<ns1:Segment-AMT>
<ns1:Element-522>1</ns1:Element-522>
<ns1:Element-782>1</ns1:Element-782>
</ns1:Segment-AMT></ns1:Loop-AMT>
</ns1:Loop-PO1>
<ns1:Segment-SE>
<ns1:Element-96>#SegmentCount#</ns1:Element-96>
<ns1:Element-329>000000010</ns1:Element-329>
</ns1:Segment-SE>
</Transaction-850>
Only DTM passed :
=============
Error :
Extra Element was found in the data file as part of Segment DTM. Segment DTM is defined in the guideline at position 210.{br}{br}This error was detected at:{br}{tab}Segment Count: 5{br}{tab}Element Count: 1{br}{tab}Characters: 1014 through 1017
Extra Element was found in the data file as part of Segment DTM. Segment DTM is defined in the guideline at position 210.{br}{br}This error was detected at:{br}{tab}Segment Count: 5{br}{tab}Element Count: 2{br}{tab}Characters: 1052 through 1060
Element DTM01 (Date/Time Qualifier) is missing. This Element's standard option is 'Mandatory'. Segment DTM is defined in the guideline at position 210.{br}{br}This Element was expected in:{br}{tab}Segment Count: 5{br}{tab}Element Count: 1{br}{tab}Character: 1078
DTM + AMT Passed :
===============
Error : Extra Element was found in the data file as part of Segment DTM. Segment DTM is defined in the guideline at position 210.{br}{br}This error was detected at:{br}{tab}Segment Count: 5{br}{tab}Element Count: 1{br}{tab}Characters: 1014 through 1022 Extra Element was found in the data file as part of Segment DTM. Segment DTM is defined in the guideline at position 210.{br}{br}This error was detected at:{br}{tab}Segment Count: 5{br}{tab}Element Count: 2{br}{tab}Characters: 1057 through 1060 Element DTM01 (Date/Time Qualifier) is missing. This Element's standard option is 'Mandatory'. Segment DTM is defined in the guideline at position 210.{br}{br}This Element was expected in:{br}{tab}Segment Count: 5{br}{tab}Element Count: 1{br}{tab}Character: 1078 Unrecognized data was found in the data file as part of Loop PO1. The last known Segment was DTM at guideline position 210.{br}{br}This error was detected at:{br}{tab}Segment Count: 6{br}{tab}Characters: 1278 through 1286 Unrecognized data was found in the data file as part of Loop PO1. The last known Segment was DTM at guideline position 210.{br}{br}This error was detected at:{br}{tab}Segment Count: 7{br}{tab}Characters: 1295 through 1367
Thanks in advance,
Siddhardha
Manager, Deloitte Consulting.
The issue is solved with the files provided by Oracle Support. They used Document Editor version:
Oracle Document Editor -> 7.0.5.4018 & X12 - 8.0.0.186
Not sure what is wrong with our files.
We used Document Editor 7.0.5.4043 & X12 - 8.0.0.186
Below is the update i put in SR.
==================================
My xsd contains below text.
<xsd:appinfo>
<UNMKey>Full|CodeList|-<Parent Node ID>.<Index>|Composite|-<Node ID>|Element|-<Node ID>|Loop|-<Node ID>|Segment|-<Node ID>|Transaction|-<Node ID>|ReplacementCharacter|_|InternalSeparator|-</UNMKey>
</xsd:appinfo>
===================
The file you provided have the below.
<xsd:appinfo>
<UNMKey>Full|Default|-<Node ID>|CodeList|-<Parent Node ID>.<Index>|ReplacementCharacter|_|InternalSeparator|-
</UNMKey>
</xsd:appinfo>
Not sure if that is really causing the issue.
==========================================
If anyone is interested here is the SR number (#3-9312618901)
Thanks,
Sid -
XML File Generation Issues using DBMS_XMLGEN
Hi,
I am using the DBMS_XMLGEN package in a stored procedure to generate an XML file on Oracle 10g R2, but the problem is the format in which the file is generated.
CREATE OR REPLACE TYPE "state_info" as object (
"@product_type" Number
);
CREATE OR REPLACE TYPE prod_tab as TABLE OF "state_info";
CREATE OR REPLACE TYPE state_t as object (
"state_code" CHAR(2),
"state_info" prod_tab
);
CREATE OR REPLACE PROCEDURE get_xml_serviced_state (p_clob OUT clob)
IS
v_xmlctx DBMS_XMLGEN.ctxhandle;
v_str VARCHAR2 (1000);
BEGIN
v_str := 'SELECT state_t(a.st_cd,CAST(MULTISET(select veh_type_id from svcd_st where st_cd =a.st_cd)as prod_tab)) as "state_info"
from svcd_st a where a.st_cd in (select distinct st_cd from svcd_st) group by a.st_cd having count(*)>=1';
v_xmlctx := DBMS_XMLGEN.newcontext (v_str);
DBMS_XMLGEN.setrowsettag (v_xmlctx,'serviced_state');
DBMS_XMLGEN.setrowtag (v_xmlctx,NULL);
p_clob := DBMS_XMLGEN.getxml (v_xmlctx);
printclobout(p_clob);
EXCEPTION
WHEN NO_DATA_FOUND
THEN
NULL;
WHEN OTHERS
THEN
RAISE;
END;
SHOW ERRORS
Output :
<serviced_state>
<state_info state_code="VA">
<state_info>
<state_info product_type="2"/>
<state_info product_type="5"/>
<state_info product_type="9"/>
</state_info>
</state_info>
<state_info state_code="AB">
<state_info>
<state_info product_type="2"/>
<state_info product_type="5"/>
<state_info product_type="9"/>
</state_info>
</state_info>
</serviced_state>
Required it in below format:
<serviced_state>
<state_info state_code="VA">
<state_info product_type="2"/>
<state_info product_type="5"/>
<state_info product_type="9"/>
</state_info>
<state_info state_code="AB">
<state_info product_type="2"/>
<state_info product_type="5"/>
<state_info product_type="9"/>
</state_info>
</serviced_state>
I just need to put all the rows under one single tag. Appreciate your early responses.
Thanks.
Your wanted XML output is NOT VALID... (try it yourself at http://tools.decisionsoft.com/schemaValidate/)
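If the requirement is simply to drop the intermediate wrapper element from the generated XML, that can also be done as a post-processing step (whether post-processing is acceptable here is an assumption). A sketch with Python's ElementTree, using element names from the sample above:

```python
import xml.etree.ElementTree as ET

raw = """<serviced_state>
<state_info state_code="VA">
<state_info>
<state_info product_type="2"/>
<state_info product_type="5"/>
</state_info>
</state_info>
</serviced_state>"""

root = ET.fromstring(raw)
for state in root:
    wrapper = state.find("state_info")   # the extra nesting level
    if wrapper is not None:
        # Hoist the product elements up one level, then drop the wrapper.
        for child in list(wrapper):
            state.append(child)
        state.remove(wrapper)

print(ET.tostring(root, encoding="unicode"))
```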