How to give a path in PL/SQL while exporting table data into a .csv file
Hi,
I have code like this:
PROCEDURE dump_table_to_csv (p_tname    IN VARCHAR2,
                             p_dir      IN VARCHAR2,
                             p_filename IN VARCHAR2)
IS
   l_output      UTL_FILE.file_type;
   l_thecursor   INTEGER DEFAULT DBMS_SQL.open_cursor;
   l_columnvalue VARCHAR2 (4000);
   l_status      INTEGER;
   l_query       VARCHAR2 (1000) DEFAULT 'select * from ' || p_tname;
   l_colcnt      NUMBER := 0;
   l_separator   VARCHAR2 (1);
   l_desctbl     DBMS_SQL.desc_tab;
BEGIN
   l_output := UTL_FILE.fopen (p_dir, p_filename, 'w');
   EXECUTE IMMEDIATE 'alter session set nls_date_format=''dd-mon-yyyy hh24:mi:ss''';
   DBMS_SQL.parse (l_thecursor, l_query, DBMS_SQL.native);
   DBMS_SQL.describe_columns (l_thecursor, l_colcnt, l_desctbl);
   -- Write the quoted column headers
   FOR i IN 1 .. l_colcnt
   LOOP
      UTL_FILE.put (l_output,
                    l_separator || '"' || l_desctbl (i).col_name || '"');
      DBMS_SQL.define_column (l_thecursor, i, l_columnvalue, 4000);
      l_separator := ',';
   END LOOP;
   UTL_FILE.new_line (l_output);
   l_status := DBMS_SQL.EXECUTE (l_thecursor);
   -- Fetch and write each row as comma-separated values
   WHILE (DBMS_SQL.fetch_rows (l_thecursor) > 0)
   LOOP
      l_separator := '';
      FOR i IN 1 .. l_colcnt
      LOOP
         DBMS_SQL.column_value (l_thecursor, i, l_columnvalue);
         UTL_FILE.put (l_output, l_separator || l_columnvalue);
         l_separator := ',';
      END LOOP;
      UTL_FILE.new_line (l_output);
   END LOOP;
   DBMS_SQL.close_cursor (l_thecursor);
   UTL_FILE.fclose (l_output);
   EXECUTE IMMEDIATE 'alter session set nls_date_format=''dd-MON-yy''';
EXCEPTION
   WHEN OTHERS
   THEN
      EXECUTE IMMEDIATE 'alter session set nls_date_format=''dd-MON-yy''';
      RAISE;
END;
I am getting an error like this:
SQL> exec dump_table_to_csv('deptair','c:/csv','aa.deptair');
BEGIN dump_table_to_csv('deptair','c:/csv','aa.deptair'); END;
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 8
ORA-29280: invalid directory path
ORA-06512: at "SCOTT.DUMP_TABLE_TO_CSV", line 58
ORA-06512: at line 1
Please help me out.
Thanks,
Vicky
Look at your other thread; the answer is already there.
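For reference, the ORA-29280 ("invalid directory path") is raised because UTL_FILE.FOPEN expects the name of an Oracle DIRECTORY object as its first argument, not an OS path such as 'c:/csv'. A minimal sketch, assuming a directory name CSV_DIR and that the path exists on the database server:

```sql
-- As a privileged user (e.g. SYS), map a directory object to the OS path:
CREATE OR REPLACE DIRECTORY csv_dir AS 'c:\csv';
GRANT READ, WRITE ON DIRECTORY csv_dir TO scott;

-- Then pass the directory object name (not the path) to the procedure:
-- EXEC dump_table_to_csv('deptair', 'CSV_DIR', 'deptair.csv');
```

Note that the directory must exist on the database server's file system; UTL_FILE cannot write to the client machine.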
Similar Messages
-
Export batch data into CSV file using SQL SP
Hi,
I have created a WCF-Custom receive adapter to poll a SQL stored procedure (using WITH XMLNAMESPACES(DEFAULT 'Namespace') and FOR XML PATH(''), TYPE). I get the result properly in batches while polling, but I get an error while converting it into CSV using a map.
Please, can anyone give me some idea of how to export SQL data into a CSV file using an SP?
How are you doing this?
You would have got an XML representation for the XML batch received from SQL.
You should have a flat-file schema representing the CSV file which you want to send.
Map the received XML representation of the data from SQL to the flat-file schema.
Have a custom pipeline with a flat-file assembler on the assemble stage of the send pipeline.
In the send port, use the map which converts the received XML from SQL to the flat-file schema, and use the above custom flat-file send pipeline.
-
Exporting table data into text files
I got a request to export the data from about 85 tables into 85 text files. I assume that they will want a header row and some delimiter character. This is a process that must run every night via a SQL job.
Obviously I want it to be flexible. Tables could be added and removed, and columns could be added and removed. There will probably be a control table that has the list of tables to export.
I looked at bcp, but it seems that it is command line only?
SSIS package?
What other options are there?

Wanted to post my solution. I ended up using bcp in a PowerShell script. I was unable to use bcp from within SQL because that requires xp_cmdshell, which is not allowed at many of our client sites. So I wrote a PowerShell script that executes bcp. The PS script loops through a table that contains a row for each table/view to export, along with values and switches for that export: export path, include column headers, enclose each field in double quotes, double up embedded double quotes, how to format dates, what to return for NULL integers; basically the stuff that is not flexible in bcp. To get this flexibility I created a SQL proc that takes these values as input and creates a view. Then the PowerShell bcp does a SELECT on the view. Many of my tables are very wide, and bcp has a limit of 4000/8000 characters. Some of my SELECT statements ended up being more than 35k characters long, so using a view got around this size limitation.
Anyway, the view creation and bcp download are really fast. It can download about 4 GB of data in 20 minutes; it can download the data faster than 7z can zip it.
Below is the SQL proc that formats the SELECT statement and creates the view for bcp (or some other utility like SQLCMD or Invoke-Sqlcmd), or for a plain SQL query via "SELECT * FROM v_ExportTable". The proc can be called from SQL or PowerShell, or anything that can call a SQL proc and then read a SQL view.
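As a rough sketch of the flow described above (the server, database, and file names here are assumptions, not from the original post):

```sql
-- 1. Build v_ExportTable for the table to export
--    (defaults: headers on, quoting on, quote-doubling on, '0' for NULL numbers, style 121 dates)
EXEC dbo.ExportTablesCreateView 'custname', 1, 1, 1, '0', 121;

-- 2. From PowerShell (or any shell), bcp then reads the view in character mode:
--    bcp "SELECT * FROM MyDb.dbo.v_ExportTable" queryout C:\export\custname.csv -c -t, -S MyServer -T
```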
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ExportTablesCreateView]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[ExportTablesCreateView]
GO
CREATE PROCEDURE [dbo].[ExportTablesCreateView]
/*
ExportTablesCreateView
Description:
    Read in a table name or view name with parameters and create a view v_ExportTable of the
    data. The view will typically be read by bcp to download SQL table data to flat files.
    bcp does not have options to include column headers, enclose fields in double quotes,
    format dates, or use '0' for integer NULLs. Also, bcp has a limit of varchar 4000, so
    wider tables could not be directly handled by bcp. So create v_ExportTable and have
    bcp SELECT from v_ExportTable instead of the original table or view.
Parameters:
    @pTableName VARCHAR(128) - table or view to create v_ExportTable from
    @pColumnHeader INT = 1 - include column headers in the first row
    @pDoubleQuoteFields INT = 1 - put double quotes " around all column values including column headers
    @pDouble_EmbeddedDoubleQuotes INT = 1 - usually used with @pDoubleQuoteFields = 1. 'ab"c"d' becomes 'ab""c""d'.
    @pNumNULLValue VARCHAR(1) = '0' - NULL number data types will export this value instead of the bcp default of ''
    @pDateTimeFormat INT = 121 - DateTime data types will use this CONVERT style value
Example:
    EXEC ExportTablesCreateView 'custname', 1, 1, 1, '0', 121
*/
@pTableName VARCHAR(128),
@pColumnHeader INT = 1,
@pDoubleQuoteFields INT = 1,
@pDouble_EmbeddedDoubleQuotes INT = 1,
@pNumNULLValue VARCHAR(1) = '0',
@pDateTimeFormat INT = 121
AS
BEGIN
DECLARE @columnname varchar(128)
DECLARE @columnsize int
DECLARE @data_type varchar(128)
DECLARE @HeaderRow nvarchar(max)
DECLARE @ColumnSelect nvarchar(max)
DECLARE @SQLSelect nvarchar(max)
DECLARE @SQLCommand nvarchar(max)
DECLARE @ReturnCode INT
DECLARE @Note VARCHAR(500)
DECLARE db_cursor CURSOR FOR
SELECT COLUMN_NAME, ISNULL(Character_maximum_length,0), Data_type
FROM [INFORMATION_SCHEMA].[COLUMNS]
WHERE TABLE_NAME = @pTableName AND TABLE_SCHEMA='dbo'
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @ColumnName, @ColumnSize, @Data_type
SET @HeaderRow = ''
SET @ColumnSelect = ''
-- Loop through each of the @pTableColumns to build the SELECT Statement
WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN TRY
-- Put double quotes around each field - example "MARIA","SHARAPOVA"
IF @pDoubleQuoteFields = 1
BEGIN
-- Include column headers in the first row - example "FirstName","LastName"
IF @pColumnHeader = 1
SET @HeaderRow = @HeaderRow + '''"' + @ColumnName + '"'' as ''' + @columnname + ''','
-- Unsupported Export data type returns "" - example "",
IF @Data_Type in ('image', 'varbinary', 'binary', 'timestamp', 'cursor', 'hierarchyid', 'sql_variant', 'xml', 'table', 'spatial Types')
SET @ColumnSelect = @ColumnSelect + '''""'' as [' + @ColumnName + '],'
-- Format DateTime data types according to input parameter
ELSE IF @Data_Type in ('datetime', 'smalldatetime', 'datetime2', 'date', 'datetimeoffset')
-- example - CASE when [aaa] IS NULL THEN '""' ELSE QUOTENAME(CONVERT(VARCHAR,[aaa], 121), CHAR(34)) END AS [aaa],
SET @ColumnSelect = @ColumnSelect + 'CASE WHEN [' + @columnname + '] IS NULL THEN ''""'' ELSE QUOTENAME(CONVERT(VARCHAR,[' + @columnname + '],' + CONVERT(VARCHAR,@pDateTimeFormat) + '), CHAR(34)) END AS [' + @ColumnName + '],'
-- SET Numeric data types with NULL value according to input parameter
ELSE IF @Data_Type in ('bigint', 'numeric', 'bit', 'smallint', 'decimal', 'smallmoney', 'int', 'tinyint', 'money', 'float', 'real')
-- example - CASE when [aaa] IS NULL THEN '"0"' ELSE QUOTENAME([aaa], CHAR(34)) END AS [aaa],
SET @ColumnSelect = @ColumnSelect + 'CASE WHEN [' + @columnname + '] IS NULL THEN ''"' + @pNumNULLValue + '"'' ELSE QUOTENAME([' + @columnname + '], CHAR(34)) END AS [' + @ColumnName + '],'
ELSE
-- Double embedded double quotes - example "abc"d"ed" to "abc""d""ed". Only applicable for character data types.
IF @pDouble_EmbeddedDoubleQuotes = 1
BEGIN
-- example - CASE when [aaa] IS NULL THEN '""' ELSE '"' + REPLACE([aaa],'"','""') + '"' END AS [aaa],
SET @ColumnSelect = @ColumnSelect + 'CASE WHEN [' + @ColumnName + '] IS NULL THEN ''""'' ELSE ''"'' + REPLACE([' + @ColumnName + '],''"'',''""'') + ''"'' END AS [' + @ColumnName + '],'
END
-- DO NOT PUT Double embedded double quotes - example "abc"d"ed" unchanged to "abc"d"ed"
ELSE
BEGIN
-- example - CASE when [aaa] IS NULL THEN '""' ELSE '"' + [aaa] + '"' END AS [aaa],
SET @ColumnSelect = @ColumnSelect + 'CASE WHEN [' + @ColumnName + '] IS NULL THEN ''""'' ELSE ''"'' + [' + @ColumnName + '] + ''"'' END AS [' + @ColumnName + '],'
END
END
-- DO NOT PUT double quotes around each field - example MARIA,SHARAPOVA
ELSE
BEGIN
-- Include column headers in the first row - example "FirstName","LastName"
IF @pColumnHeader = 1
SET @HeaderRow = @HeaderRow + '''' + @ColumnName + ''' as ''' + @columnname + ''','
-- Unsupported Export data type returns '' - example '',
IF @Data_Type in ('image', 'varbinary', 'binary', 'timestamp', 'cursor', 'hierarchyid', 'sql_variant', 'xml', 'table', 'spatial Types')
SET @ColumnSelect = @ColumnSelect + ''''' as [' + @ColumnName + '],'
-- Format DateTime data types according to input parameter
ELSE IF @Data_Type in ('datetime', 'smalldatetime', 'datetime2','date', 'datetimeoffset')
-- example - CASE when [aaa] IS NULL THEN '''' ELSE CONVERT(VARCHAR,[aaa], 121) END AS [aaa],
SET @ColumnSelect = @ColumnSelect + 'CASE WHEN [' + @columnname + '] IS NULL THEN '''' ELSE CONVERT(VARCHAR,[' + @columnname + '],' + CONVERT(VARCHAR,@pDateTimeFormat) + ') END AS [' + @ColumnName + '],'
-- SET Numeric data types with NULL value according to input parameter
ELSE IF @Data_Type in ('bigint', 'numeric', 'bit', 'smallint', 'decimal', 'smallmoney', 'int', 'tinyint', 'money', 'float', 'real')
-- example - CASE when [aaa] IS NULL THEN '0' ELSE CONVERT(VARCHAR, [aaa]) END AS [aaa],
SET @ColumnSelect = @ColumnSelect + 'CASE WHEN [' + @columnname + '] IS NULL THEN ''' + @pNumNULLValue + ''' ELSE CONVERT(VARCHAR,[' + @columnname + ']) END AS [' + @ColumnName + '],'
ELSE
BEGIN
-- Double embedded double quotes - example "abc"d"ed" to "abc""d""ed". Only applicable for character data types.
IF @pDouble_EmbeddedDoubleQuotes = 1
-- example - CASE when [aaa] IS NULL THEN '' ELSE CONVERT(VARCHAR,REPLACE([aaa],'"','""')) END AS [aaa],
SET @ColumnSelect = @ColumnSelect + 'CASE WHEN [' + @columnname + '] IS NULL THEN '''' ELSE CONVERT(VARCHAR,REPLACE([' + @columnname + '],''"'',''""'')) END AS [' + @ColumnName + '],'
ELSE
-- example - CASE when [aaa] IS NULL THEN '' ELSE CONVERT(VARCHAR,[aaa]) END AS [aaa],
SET @ColumnSelect = @ColumnSelect + 'CASE WHEN [' + @columnname + '] IS NULL THEN '''' ELSE CONVERT(VARCHAR,[' + @columnname + ']) END AS [' + @ColumnName + '],'
END
END
FETCH NEXT FROM db_cursor INTO @ColumnName, @ColumnSize, @Data_Type
END TRY
BEGIN CATCH
RETURN (1)
END CATCH
END
CLOSE db_cursor
DEALLOCATE db_cursor
BEGIN TRY
-- remove last comma
IF @pColumnHeader = 1
SET @HeaderRow = SUBSTRING(@HeaderRow , 1, LEN(@HeaderRow ) - 1)
SET @ColumnSelect = SUBSTRING(@ColumnSelect, 1, LEN(@ColumnSelect) - 1)
-- Put on the finishing touches on the SELECT
IF @pColumnHeader = 1
SET @SQLSelect = 'SELECT ' + @HeaderRow + ' UNION ALL ' +
'SELECT ' + @ColumnSelect + ' FROM [' + @pTableName + ']'
ELSE
SET @SQLSelect = 'SELECT ' + @ColumnSelect + ' FROM [' + @pTableName + ']'
---- diagnostics
---- PRINT truncates at 4k or 8k (not sure which); my tables have many columns
--PRINT @SQLSelect
--DECLARE @END varchar(max) = RIGHT(@SQLSelect, 3000)
--PRINT @end
--EXECUTE sp_executesql @SQLSelect
-- drop view if exists -- using view because some tables are very wide. one of my tables had a 33k select statement
SET @SQLCommand = '
IF EXISTS (SELECT * FROM SYS.views WHERE name = ''v_ExportTable'')
BEGIN
DROP VIEW v_ExportTable
END'
EXECUTE @ReturnCode = sp_executesql @SQLCommand
IF @returncode = 1
BEGIN
RETURN (1)
END
-- create the view
SET @SQLCommand = '
CREATE VIEW v_ExportTable AS ' + @SQLSelect
-- diagnostics
--print @sqlcommand
EXECUTE @ReturnCode = sp_executesql @SQLCommand
IF @returncode = 1
BEGIN
RETURN (1)
END
END TRY
BEGIN CATCH
RETURN (1)
END CATCH
RETURN (0)
END -- CREATE PROCEDURE [dbo].[ExportTablesCreateView]
GO
-
Exporting table data to .csv file
How do I export the table data (this table has 18 million records) to a .csv file? Using SQL Developer and PL/SQL Developer it is taking too long to complete. Please let me know if there is any faster way to complete this task.
Thanks in advance.

Also bear in mind that SQL Developer and PL/SQL Developer are running on your local client, so all 18 million rows are transferred to the client before being formatted and written out to a file. If you had some PL/SQL code to export the data on the database server, it would run a lot faster, as you wouldn't have the network traffic delay to deal with, although your file would then reside on the server rather than the client (which isn't necessarily a bad thing, as a lot of companies would be concerned about the security of their data, especially if it ends up being stored on local machines rather than on a secure server).
As already asked, why on earth do you need to store 18 million rows in a CSV file? The database is the best place to keep the data. -
Reg: Export table data into CSV
Hi Experts,
I'm stuck with a requirement where I have to export the data of all tables from a schema into separate, table-specific CSV files.
I tried using SQL*Plus 'SPOOL', but it also captures the additional statements executed on top of the generated CSV.
Also, I'm not getting the headers (because I'm using pagesize 0).
Any help is highly appreciated.
Ranit B.

Starting point...
As sys user:
CREATE OR REPLACE DIRECTORY TEST_DIR AS '\tmp\myfiles'
GRANT READ, WRITE ON DIRECTORY TEST_DIR TO myuser
/
As myuser:
CREATE OR REPLACE PROCEDURE run_query(p_sql IN VARCHAR2
,p_dir IN VARCHAR2
,p_header_file IN VARCHAR2
,p_data_file IN VARCHAR2 := NULL) IS
v_finaltxt VARCHAR2(4000);
v_v_val VARCHAR2(4000);
v_n_val NUMBER;
v_d_val DATE;
v_ret NUMBER;
c NUMBER;
d NUMBER;
col_cnt INTEGER;
f BOOLEAN;
rec_tab DBMS_SQL.DESC_TAB;
col_num NUMBER;
v_fh UTL_FILE.FILE_TYPE;
v_samefile BOOLEAN := (NVL(p_data_file,p_header_file) = p_header_file);
BEGIN
c := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(c, p_sql, DBMS_SQL.NATIVE);
d := DBMS_SQL.EXECUTE(c);
DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
FOR j in 1..col_cnt
LOOP
CASE rec_tab(j).col_type
WHEN 1 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,2000);
WHEN 2 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_n_val);
WHEN 12 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_d_val);
ELSE
DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,2000);
END CASE;
END LOOP;
-- This part outputs the HEADER
v_fh := UTL_FILE.FOPEN(upper(p_dir),p_header_file,'w',32767);
FOR j in 1..col_cnt
LOOP
v_finaltxt := ltrim(v_finaltxt||','||lower(rec_tab(j).col_name),',');
END LOOP;
-- DBMS_OUTPUT.PUT_LINE(v_finaltxt);
UTL_FILE.PUT_LINE(v_fh, v_finaltxt);
IF NOT v_samefile THEN
UTL_FILE.FCLOSE(v_fh);
END IF;
-- This part outputs the DATA
IF NOT v_samefile THEN
v_fh := UTL_FILE.FOPEN(upper(p_dir),p_data_file,'w',32767);
END IF;
LOOP
v_ret := DBMS_SQL.FETCH_ROWS(c);
EXIT WHEN v_ret = 0;
v_finaltxt := NULL;
FOR j in 1..col_cnt
LOOP
CASE rec_tab(j).col_type
WHEN 1 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_v_val);
v_finaltxt := ltrim(v_finaltxt||',"'||v_v_val||'"',',');
WHEN 2 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_n_val);
v_finaltxt := ltrim(v_finaltxt||','||v_n_val,',');
WHEN 12 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_d_val);
v_finaltxt := ltrim(v_finaltxt||','||to_char(v_d_val,'DD/MM/YYYY HH24:MI:SS'),',');
ELSE
DBMS_SQL.COLUMN_VALUE(c,j,v_v_val);
v_finaltxt := ltrim(v_finaltxt||',"'||v_v_val||'"',',');
END CASE;
END LOOP;
-- DBMS_OUTPUT.PUT_LINE(v_finaltxt);
UTL_FILE.PUT_LINE(v_fh, v_finaltxt);
END LOOP;
UTL_FILE.FCLOSE(v_fh);
DBMS_SQL.CLOSE_CURSOR(c);
END;

This allows the header row and the data to be written to separate files if required.
e.g.
SQL> exec run_query('select * from emp','TEST_DIR','output.txt');
PL/SQL procedure successfully completed.

Output.txt file contains:
empno,ename,job,mgr,hiredate,sal,comm,deptno
7369,"SMITH","CLERK",7902,17/12/1980 00:00:00,800,,20
7499,"ALLEN","SALESMAN",7698,20/02/1981 00:00:00,1600,300,30
7521,"WARD","SALESMAN",7698,22/02/1981 00:00:00,1250,500,30
7566,"JONES","MANAGER",7839,02/04/1981 00:00:00,2975,,20
7654,"MARTIN","SALESMAN",7698,28/09/1981 00:00:00,1250,1400,30
7698,"BLAKE","MANAGER",7839,01/05/1981 00:00:00,2850,,30
7782,"CLARK","MANAGER",7839,09/06/1981 00:00:00,2450,,10
7788,"SCOTT","ANALYST",7566,19/04/1987 00:00:00,3000,,20
7839,"KING","PRESIDENT",,17/11/1981 00:00:00,5000,,10
7844,"TURNER","SALESMAN",7698,08/09/1981 00:00:00,1500,0,30
7876,"ADAMS","CLERK",7788,23/05/1987 00:00:00,1100,,20
7900,"JAMES","CLERK",7698,03/12/1981 00:00:00,950,,30
7902,"FORD","ANALYST",7566,03/12/1981 00:00:00,3000,,20
7934,"MILLER","CLERK",7782,23/01/1982 00:00:00,1300,,10

The procedure allows the header and the data to go to separate files if required. Just specifying the "header" filename will put the header and data in the one file.
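For the separate-files case, the optional fourth parameter takes the data filename (the filenames below are just placeholders):

```sql
SQL> exec run_query('select * from emp','TEST_DIR','emp_header.txt','emp_data.txt');
```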
Adapt it to output different datatypes and styles as required.
With this you can call the procedure for each of your tables, querying whatever columns you want, and specifying whatever filename(s) you want. -
How to create a datasource for 0COSTCENTER to load data in a csv file in BI
How do I create a datasource for 0COSTCENTER to load data from a CSV file in a BI 7.0 system?
Can you email me pictures of the steps for loading individual values of the hierarchy using a CSV file?
Thank you very much.
My email is <Removed>
Allen

Step 1: Load the required master data for 0COSTCENTER into the BI system.
Step 2: Enable the characteristic to support hierarchies for 0COSTCENTER and specify the external characteristic (in the lowest or last node) while creating this characteristic InfoObject.
Step 3: On the last node of the hierarchy structure in the InfoObject, right-click and then create the hierarchy MANUALLY by inserting the master data values, as BI doesn't support loading this hierarchy directly; you need to do it manually.
Step 4: Mapping
Create Text Node thats the first node (Root Node)
Insert Characteristic Nodes
Insert the Last Node of the Hierarchy
Then you need to create an Open Hub Destination for extracting the data into the .csv file:
Step 1: Create the Open Hub Destination, give the master data table name, and enter all the fields required. Create the transformations for this Open Hub connecting to the external file or Excel file source. Then give the location on your local disk or the path on the server in the first tab and request the data. It should work; let me know if you need anything else.
Thanks,
Sandhya -
Error while writing the data into the file . can u please help in this.
The following error i am getting while writing the data into the file.
<bindingFault xmlns="http://schemas.oracle.com/bpel/extension">
<part name="code">
<code>null</code>
</part>
<part name="summary">
<summary>file:/C:/oracle/OraBPELPM_1/integration/orabpel/domains/default/tmp/
.bpel_MainDispatchProcess_1.0.jar/IntermediateOutputFile.wsdl
[ Write_ptt::Write(Root-Element) ] - WSIF JCA Execute of operation
'Write' failed due to: Error in opening
file for writing. Cannot open file:
C:\oracle\OraBPELPM_1\integration\jdev\jdev\mywork\
BPEL_Import_with_Dynamic_Transformation\WORKDIRS\SampleImportProcess1\input for writing. ;
nested exception is: ORABPEL-11058 Error in opening file for writing.
Cannot open file: C:\oracle\OraBPELPM_1\integration\jdev\jdev\mywork\
BPEL_Import_with_Dynamic_Transformation
\WORKDIRS\SampleImportProcess1\input for writing. Please ensure 1.
Specified output Dir has write permission 2.
Output filename has not exceeded the max chararters allowed by the
OS and 3. Local File System has enough space
.</summary>
</part>
<part name="detail">
<detail>null</detail>
</part>
</bindingFault>

Hi there,
Have you verified the suggestions in the error message?
Cannot open file: C:\oracle\OraBPELPM_1\integration\jdev\jdev\mywork\BPEL_Import_with_Dynamic_Transformation\WORKDIRS\SampleImportProcess1\input for writing.
Please ensure
1. Specified output Dir has write permission
2. Output filename has not exceeded the max chararters allowed by the OS and
3. Local File System has enough space
I am also curious why you are writing to a directory with the name "..\SampleImportProcess1\input" ? -
RE: How to Export the Table data Into PDF File in ADF
Hi Experts,
I am using Jdeveloper 11.1.2.3.0
I created an employee VO and dragged and dropped it as a table on a page. Now I need to export the table data into a PDF file.
So please give me some suggestions regarding this scenario.
With Regards,
Satish

Hi guys,
Any more answers for this question?
Please find my jsff below:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:af="http://xmlns.oracle.com/adf/faces/rich"
xmlns:f="http://java.sun.com/jsf/core" xmlns:report="http://www.adfwithejb.blogspot.com">
<af:panelGroupLayout layout="vertical" id="pgl2">
<af:query id="qryId1" headerText="Service Tariff Mapping Details" disclosed="true"
value="#{bindings.findByTarifValidFromQuery.queryDescriptor}"
model="#{bindings.findByTarifValidFromQuery.queryModel}"
queryListener="#{reportWiseInvoiceBean.genericQueryListener}"
queryOperationListener="#{bindings.findByTarifValidFromQuery.processQueryOperation}"
resultComponentId="pc1::t2">
<f:attribute name="queryExpression" value="bindings.findByTarifValidFromQuery.processQuery"/>
</af:query>
<af:panelCollection id="pc1" styleClass="AFStretchWidth">
<f:facet name="menus"/>
<f:facet name="toolbar">
<af:toolbar id="t1">
<af:menuBar id="pt_m1">
<report:reportDeclarative ButtonName="ExportToExcel" ReportName="ServiceTariffMappingDetails"
ReportType="PDF" TableId=":::pc1:t2" id="rd1" Pagination="true"/>
<af:commandButton text="excel" id="cb1" binding="#{exportToExcelBean.exportID}">
<af:setActionListener from="pt1:pgl1:pgl2:pc1:t2" to="#{viewScope['exporter.exportedId']}"/>
<af:setActionListener from="border:1px solid #cccccc" to="#{viewScope['exporter.thStyle']}"/>
<af:setActionListener from="border:1px solid #cccccc" to="#{viewScope['exporter.tdStyle']}"/>
<af:fileDownloadActionListener method="#{exportToExcelBean.exportToExcel}" filename="Service TariffMapping.xls"
contentType="text/excel;charset=UTF-8;"/>
</af:commandButton>
<af:commandMenuItem id="pt_cmi133" icon="/images/common/Excel-icon.png"
shortDesc="ExportToExcel"
>
<af:exportCollectionActionListener exportedId="t2" type="excelHTML"
title="Service Tariff Mapping"
filename="Service Tariff Mapping.xls"/>
</af:commandMenuItem></af:menuBar>
</af:toolbar>
</f:facet>
<f:facet name="statusbar"/>
<af:table value="#{bindings.ServiceTariffMappingDtlsRVO1.collectionModel}" var="row"
rows="#{bindings.ServiceTariffMappingDtlsRVO1.rangeSize}"
emptyText="#{bindings.ServiceTariffMappingDtlsRVO1.viewable ? 'No data to display.' : 'Access Denied.'}"
fetchSize="#{bindings.ServiceTariffMappingDtlsRVO1.rangeSize}" rowBandingInterval="0"
filterModel="#{bindings.findByTarifValidFromQuery.queryDescriptor}"
queryListener="#{bindings.findByTarifValidFromQuery.processQuery}" filterVisible="true" varStatus="vs"
id="t2" columnStretching="last" binding="#{ServiceTariffMappBean.testTable}">
<af:column sortProperty="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NormalTariffCode.name}" filterable="true"
sortable="true" headerText="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NormalTariffCode.label}"
id="c1">
<af:inputText value="#{row.bindings.NormalTariffCode.inputValue}"
label="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NormalTariffCode.label}"
required="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NormalTariffCode.mandatory}"
columns="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NormalTariffCode.displayWidth}"
maximumLength="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NormalTariffCode.precision}"
shortDesc="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NormalTariffCode.tooltip}" id="it1">
<f:validator binding="#{row.bindings.NormalTariffCode.validator}"/>
</af:inputText>
</af:column>
<af:column sortProperty="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceCode.name}" filterable="true"
sortable="true" headerText="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceCode.label}"
id="c2">
<af:inputText value="#{row.bindings.ServiceCode.inputValue}"
label="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceCode.label}"
required="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceCode.mandatory}"
columns="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceCode.displayWidth}"
maximumLength="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceCode.precision}"
shortDesc="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceCode.tooltip}" id="it2">
<f:validator binding="#{row.bindings.ServiceCode.validator}"/>
</af:inputText>
</af:column>
<af:column sortProperty="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfVldFrm.name}" filterable="true"
sortable="true" headerText="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfVldFrm.label}" id="c3">
<f:facet name="filter">
<af:inputDate value="#{vs.filterCriteria.TrfVldFrm}" id="id1">
<af:convertDateTime pattern="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfVldFrm.format}"/>
</af:inputDate>
</f:facet>
<af:inputDate value="#{row.bindings.TrfVldFrm.inputValue}"
label="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfVldFrm.label}"
required="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfVldFrm.mandatory}"
columns="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfVldFrm.displayWidth}"
shortDesc="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfVldFrm.tooltip}" id="id2">
<f:validator binding="#{row.bindings.TrfVldFrm.validator}"/>
<af:convertDateTime pattern="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfVldFrm.format}"/>
</af:inputDate>
</af:column>
<af:column sortProperty="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceDesc.name}" filterable="true"
sortable="true" headerText="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceDesc.label}"
id="c4">
<af:inputText value="#{row.bindings.ServiceDesc.inputValue}"
label="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceDesc.label}"
required="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceDesc.mandatory}"
columns="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceDesc.displayWidth}"
maximumLength="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceDesc.precision}"
shortDesc="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ServiceDesc.tooltip}" id="it3">
<f:validator binding="#{row.bindings.ServiceDesc.validator}"/>
</af:inputText>
</af:column>
<af:column sortProperty="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtTrfCode.name}" filterable="true"
sortable="true" headerText="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtTrfCode.label}" id="c5">
<af:inputText value="#{row.bindings.OtTrfCode.inputValue}"
label="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtTrfCode.label}"
required="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtTrfCode.mandatory}"
columns="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtTrfCode.displayWidth}"
maximumLength="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtTrfCode.precision}"
shortDesc="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtTrfCode.tooltip}" id="it4">
<f:validator binding="#{row.bindings.OtTrfCode.validator}"/>
</af:inputText>
</af:column>
<af:column sortProperty="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtUnitRate.name}" filterable="true"
sortable="true" headerText="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtUnitRate.label}" id="c6">
<af:inputText value="#{row.bindings.OtUnitRate.inputValue}"
label="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtUnitRate.label}"
required="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtUnitRate.mandatory}"
columns="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtUnitRate.displayWidth}"
maximumLength="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtUnitRate.precision}"
shortDesc="#{bindings.ServiceTariffMappingDtlsRVO1.hints.OtUnitRate.tooltip}" id="it5">
<f:validator binding="#{row.bindings.OtUnitRate.validator}"/>
</af:inputText>
</af:column>
<af:column sortProperty="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NtUnitRate.name}" filterable="true"
sortable="true" headerText="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NtUnitRate.label}" id="c7">
<af:inputText value="#{row.bindings.NtUnitRate.inputValue}"
label="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NtUnitRate.label}"
required="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NtUnitRate.mandatory}"
columns="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NtUnitRate.displayWidth}"
maximumLength="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NtUnitRate.precision}"
shortDesc="#{bindings.ServiceTariffMappingDtlsRVO1.hints.NtUnitRate.tooltip}" id="it6">
<f:validator binding="#{row.bindings.NtUnitRate.validator}"/>
</af:inputText>
</af:column>
<af:column sortProperty="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfGrt.name}" filterable="true"
sortable="true" headerText="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfGrt.label}" id="c8">
<af:inputText value="#{row.bindings.TrfGrt.inputValue}"
label="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfGrt.label}"
required="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfGrt.mandatory}"
columns="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfGrt.displayWidth}"
maximumLength="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfGrt.precision}"
shortDesc="#{bindings.ServiceTariffMappingDtlsRVO1.hints.TrfGrt.tooltip}" id="it7">
<f:validator binding="#{row.bindings.TrfGrt.validator}"/>
</af:inputText>
</af:column>
<af:column sortProperty="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ChargePartyCode.name}" filterable="true"
sortable="true" headerText="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ChargePartyCode.label}"
id="c9">
<af:inputText value="#{row.bindings.ChargePartyCode.inputValue}"
label="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ChargePartyCode.label}"
required="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ChargePartyCode.mandatory}"
columns="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ChargePartyCode.displayWidth}"
maximumLength="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ChargePartyCode.precision}"
shortDesc="#{bindings.ServiceTariffMappingDtlsRVO1.hints.ChargePartyCode.tooltip}" id="it8">
<f:validator binding="#{row.bindings.ChargePartyCode.validator}"/>
</af:inputText>
</af:column>
</af:table>
</af:panelCollection>
</af:panelGroupLayout>
</jsp:root> -
How to download internal table data into xml file?
Hi Experts,
I have downloaded internal table data in XLS format using the GUI_DOWNLOAD function module, but I don't know how to download internal table data in XML format. Please post some ideas/inputs on this issue.
Thank you,
Shabeer Ahmed.

Check this:
data : gd_repid type sy-repid.
GD_REPID = SY-REPID.
DATA : L_DOM TYPE REF TO IF_IXML_ELEMENT,
M_DOCUMENT TYPE REF TO IF_IXML_DOCUMENT,
G_IXML TYPE REF TO IF_IXML,
W_STRING TYPE XSTRING,
W_SIZE TYPE I,
W_RESULT TYPE I,
W_LINE TYPE STRING,
IT_XML TYPE DCXMLLINES,
S_XML LIKE LINE OF IT_XML,
W_RC LIKE SY-SUBRC.
DATA: XML TYPE DCXMLLINES.
DATA: RC TYPE SY-SUBRC,
BEGIN OF XML_TAB OCCURS 0,
D LIKE LINE OF XML,
END OF XML_TAB.
data : l_element type ref to if_ixml_element,
xml_ns_prefix_sf type string,
xml_ns_uri_sf type string.
CLASS CL_IXML DEFINITION LOAD.
G_IXML = CL_IXML=>CREATE( ).
CHECK NOT G_IXML IS INITIAL.
M_DOCUMENT = G_IXML->CREATE_DOCUMENT( ).
CHECK NOT M_DOCUMENT IS INITIAL.
CALL FUNCTION 'SDIXML_DATA_TO_DOM'
EXPORTING
NAME = 'REPAIRDATA'
DATAOBJECT = IT_FINAL_LAST1[]
IMPORTING
DATA_AS_DOM = L_DOM
CHANGING
DOCUMENT = M_DOCUMENT
EXCEPTIONS
ILLEGAL_NAME = 1
OTHERS = 2.
CHECK NOT L_DOM IS INITIAL.
W_RC = M_DOCUMENT->APPEND_CHILD( NEW_CHILD = L_DOM ).
*Start of code for Header
* namespace
t_mnr = sy-datum+4(2).
CALL FUNCTION 'IDWT_READ_MONTH_TEXT'
EXPORTING
LANGU = 'E'
MONTH = t_mnr
IMPORTING
T247 = wa_t247.
concatenate sy-datum+6(2)
wa_t247-ktx
sy-datum(4) into t_var1.
concatenate sy-uzeit(2)
sy-uzeit+2(2)
sy-uzeit+4(2) into t_var2.
clear : xml_ns_prefix_sf,
xml_ns_uri_sf.
l_element = m_document->get_root_element( ).
xml_ns_prefix_sf = 'TIMESTAMP'.
concatenate t_var1 t_var2 into xml_ns_uri_sf separated by space.
clear : t_var1,
t_var2,
t_mnr,
wa_t247.
l_element->set_attribute( name = xml_ns_prefix_sf
namespace = ' '
value = xml_ns_uri_sf ).
clear : xml_ns_prefix_sf,
xml_ns_uri_sf.
xml_ns_prefix_sf = 'FILECREATOR'.
xml_ns_uri_sf = 'SAP'.
l_element->set_attribute( name = xml_ns_prefix_sf
namespace = ' '
value = xml_ns_uri_sf ).
clear : xml_ns_prefix_sf,
xml_ns_uri_sf.
xml_ns_prefix_sf = 'CLAIMGROUP'.
xml_ns_uri_sf = '1'.
l_element->set_attribute( name = xml_ns_prefix_sf
namespace = ' '
value = xml_ns_uri_sf ).
clear : xml_ns_prefix_sf,
xml_ns_uri_sf.
xml_ns_prefix_sf = 'CLAIMTYPES'.
xml_ns_uri_sf = 'W'.
l_element->set_attribute( name = xml_ns_prefix_sf
namespace = ' '
value = xml_ns_uri_sf ).
*End of Code for Header
CALL FUNCTION 'SDIXML_DOM_TO_XML'
EXPORTING
DOCUMENT = M_DOCUMENT
IMPORTING
XML_AS_STRING = W_STRING
SIZE = W_SIZE
TABLES
XML_AS_TABLE = IT_XML
EXCEPTIONS
NO_DOCUMENT = 1
OTHERS = 2.
LOOP AT IT_XML INTO XML_TAB-D.
APPEND XML_TAB.
ENDLOOP.
*Start of Code for File name
concatenate p_file
'\R'
'000_119481'
sy-datum+6(2) sy-datum+4(2) sy-datum+2(2)
sy-uzeit(2) sy-uzeit+2(2) sy-uzeit+4(2) '.xml' into p_file.
*End of Code for File name
CALL FUNCTION 'WS_DOWNLOAD'
EXPORTING
BIN_FILESIZE = W_SIZE
FILENAME = p_file
FILETYPE = 'BIN'
TABLES
DATA_TAB = XML_TAB
EXCEPTIONS
OTHERS = 10.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF. -
Exporting table data in csv format
hi all...
my schema has 672 tables in total and I need to export the data of all tables into separate CSV files (i.e. one CSV file per table).
I am using PL/SQL Developer, but I found no tool to do that.
Doing that manually for all the tables is next to impossible.
Is there any way out???
please help...hi...is it possible to do this using a simple cursor?
I have tried this, but it's not working:
SQL> spool d:\abcd.csv
Started spooling to d:\abcd.csv
SQL>
SQL> declare
2 cursor c1 is select table_name from user_tables;
3 v_c1 c1%rowtype;
4
5 begin
6 open c1;
7 loop
8 fetch c1 into v_c1;
9 exit when c1%notfound;
10 select * from v_c1.table_name;
11 end loop;
12 close c1;
13
14 end;
15 /
declare
cursor c1 is select table_name from user_tables;
v_c1 c1%rowtype;
begin
open c1;
loop
fetch c1 into v_c1;
exit when c1%notfound;
select * from v_c1.table_name;
end loop;
close c1;
end;
ORA-06550: line 11, column 20:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 11, column 1:
PL/SQL: SQL Statement ignored
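The static SELECT fails because the table name is only known at run time, so dynamic SQL is needed. A minimal sketch (untested, assuming a generic dump procedure like the dump_table_to_csv shown in the other thread, and a DIRECTORY object named CSV_DIR that you have WRITE privilege on - note that UTL_FILE wants a directory object name, not an OS path like 'c:/csv'):

```sql
-- Assumes: CREATE OR REPLACE DIRECTORY csv_dir AS 'c:\csv';
-- and GRANT READ, WRITE ON DIRECTORY csv_dir TO <your_user>;
BEGIN
  FOR r IN (SELECT table_name FROM user_tables)
  LOOP
    -- one CSV file per table, named after the table
    dump_table_to_csv(p_tname    => r.table_name,
                      p_dir      => 'CSV_DIR',
                      p_filename => r.table_name || '.csv');
  END LOOP;
END;
/
```

The loop variable is used only to build the dynamic query inside the dump procedure, which is what the failing `select * from v_c1.table_name` was trying to do directly.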
SQL> -
How to export waveform data into text file
Hi..
I am trying to export my waveform graph data with the Write To Spreadsheet File function. When running the program, I am getting graph values (values in a matrix) when checking with the probe. But once I save it to a text file, the values recorded are just zeros. I have attached the text file along with this. Could anyone please tell me why the graph data is not recorded in the spreadsheet? I have extracted the Y component of the graph and then wired it to the Write To Spreadsheet function. It seems like LabVIEW is getting the data, but now the problem is how to get the data out of LabVIEW?Please find the attached image of the whole program. I tried running the program after removing the Array To Matrix node, but I am still not able to export the graph values to the spreadsheet. The program as a whole is working, as I am able to display the oscilloscope trace in the waveform graph, but I am not able to export the values. When I tried using probes, I got different matrix values all the way to the end of the wire going into 'Write To Spreadsheet', but the values are still not exported. The second image shows the Write To Spreadsheet function inside the while loop, as I thought it would reduce the memory, but it is still not working...
Attachments:
Scanner complete program.jpg 713 KB
full scanner program 2.jpg 664 KB -
Problem while Exporting Table Data to Excel Sheet
Hi All,
I'm getting a problem while downloading the table data to Excel.
Steps I followed:
1) Created a context node with four attributes.
2) Created one attribute (resource) of type com.sap.ide.webdynpro.uielementdefinitions.Resource.
3) Took one FileDownload UI element and bound its resource property to (resource).
4) Added the required JAR file in the library and copied the same JAR file into the lib folder in the Navigator.
I have written the necessary code, but the browser shows the problem below:
You must Flush before accessing the resource content
Please give me any suggestions on this.
Regards
PolakaHi
Check this, it might help, along with Brian Fernandes' comments:
Flush?
Regards
Arun Jaiswal -
How do I export phone contacts into CSV file?
I can't seem to export the contacts in my iPhone into a CSV file so I can import them into Yahoo Mail or Gmail to edit.
Contacts are designed to be synced with a supported application on your computer. It is so common with Windows users - here anyway - to not have contacts available on their computer. How does one get by without having contact info available on their computer - having email addresses only for contacts on their computer, telephone numbers only for contacts on their cell phone, and mailing addresses who knows where? It's not a good idea to depend on an iPhone or any cell phone alone for contact info, especially after having to manually enter all contact info on a cell phone, which can be lost or stolen.
With Windoze, you can sync contacts with Outlook 2003 or 2007 along with syncing calendar events, or with the address book used by Outlook Express or by Windows Mail (depending on the Windoze version) called Windows Contacts for syncing contacts only. You may not have Outlook 2003 or 2007, but you have Windows Contacts available.
If Windows Contacts is empty - no contact info, before the first sync enter one contact in Windows Contacts. Make the contact up if needed, which can be deleted later. This will provide a merge prompt with the first sync for this data, which you want to select. Syncing contacts with Windows Contacts is selected under the Info tab for your iPhone sync preferences with iTunes.
After your contacts are available in Windows Contacts, you can export the contacts from there in a CSV file for import by Yahoo or Gmail. -
How to add the records of 2 internal table records into one file
hello experts,
My scenario is...
I am retrieving the data for the credit, debit and trailer records of the customer into 3 different internal tables, and finally I have to append all those records into one file: first the debit records, then the credit records, and finally the trailer record. How to do that? Can anyone give some idea plzzzzzzzzz..
Plz its bit urgent..
Thanks a lot for your anticipation
SRIHello,
Do like this.
" Assume u have three itab.
"Itab1 - debit
"Itab2 - credit
"Itab3 - trailer.
REPORT ZV_TEST_SERVER .
*PARAMETERS: P_FILE TYPE STRING."RLGRAP-FILENAME.
CALL FUNCTION 'GUI_DOWNLOAD'
EXPORTING
* BIN_FILESIZE =
FILENAME = P_FILE
* FILETYPE = 'ASC'
APPEND = 'X' " Check here
* WRITE_FIELD_SEPARATOR = ' '
* HEADER = '00'
* TRUNC_TRAILING_BLANKS = ' '
* WRITE_LF = 'X'
* COL_SELECT = ' '
* COL_SELECT_MASK = ' '
* DAT_MODE = ' '
* IMPORTING
* FILELENGTH =
TABLES
DATA_TAB = ITAB1
* EXCEPTIONS
* FILE_WRITE_ERROR = 1
* NO_BATCH = 2
* GUI_REFUSE_FILETRANSFER = 3
* INVALID_TYPE = 4
* NO_AUTHORITY = 5
* UNKNOWN_ERROR = 6
* HEADER_NOT_ALLOWED = 7
* SEPARATOR_NOT_ALLOWED = 8
* FILESIZE_NOT_ALLOWED = 9
* HEADER_TOO_LONG = 10
* DP_ERROR_CREATE = 11
* DP_ERROR_SEND = 12
* DP_ERROR_WRITE = 13
* UNKNOWN_DP_ERROR = 14
* ACCESS_DENIED = 15
* DP_OUT_OF_MEMORY = 16
* DISK_FULL = 17
* DP_TIMEOUT = 18
* FILE_NOT_FOUND = 19
* DATAPROVIDER_EXCEPTION = 20
* CONTROL_FLUSH_ERROR = 21
* OTHERS = 22
IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
CALL FUNCTION 'GUI_DOWNLOAD'
EXPORTING
* BIN_FILESIZE =
FILENAME = P_FILE
* FILETYPE = 'ASC'
APPEND = 'X' " Check here
* WRITE_FIELD_SEPARATOR = ' '
* HEADER = '00'
* TRUNC_TRAILING_BLANKS = ' '
* WRITE_LF = 'X'
* COL_SELECT = ' '
* COL_SELECT_MASK = ' '
* DAT_MODE = ' '
* IMPORTING
* FILELENGTH =
TABLES
DATA_TAB = ITAB2 " Check here
* EXCEPTIONS
* FILE_WRITE_ERROR = 1
* NO_BATCH = 2
* GUI_REFUSE_FILETRANSFER = 3
* INVALID_TYPE = 4
* NO_AUTHORITY = 5
* UNKNOWN_ERROR = 6
* HEADER_NOT_ALLOWED = 7
* SEPARATOR_NOT_ALLOWED = 8
* FILESIZE_NOT_ALLOWED = 9
* HEADER_TOO_LONG = 10
* DP_ERROR_CREATE = 11
* DP_ERROR_SEND = 12
* DP_ERROR_WRITE = 13
* UNKNOWN_DP_ERROR = 14
* ACCESS_DENIED = 15
* DP_OUT_OF_MEMORY = 16
* DISK_FULL = 17
* DP_TIMEOUT = 18
* FILE_NOT_FOUND = 19
* DATAPROVIDER_EXCEPTION = 20
* CONTROL_FLUSH_ERROR = 21
* OTHERS = 22
IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
CALL FUNCTION 'GUI_DOWNLOAD'
EXPORTING
* BIN_FILESIZE =
FILENAME = P_FILE
* FILETYPE = 'ASC'
APPEND = 'X' " Check here
* WRITE_FIELD_SEPARATOR = ' '
* HEADER = '00'
* TRUNC_TRAILING_BLANKS = ' '
* WRITE_LF = 'X'
* COL_SELECT = ' '
* COL_SELECT_MASK = ' '
* DAT_MODE = ' '
* IMPORTING
* FILELENGTH =
TABLES
DATA_TAB = ITAB3 " Check here
* EXCEPTIONS
* FILE_WRITE_ERROR = 1
* NO_BATCH = 2
* GUI_REFUSE_FILETRANSFER = 3
* INVALID_TYPE = 4
* NO_AUTHORITY = 5
* UNKNOWN_ERROR = 6
* HEADER_NOT_ALLOWED = 7
* SEPARATOR_NOT_ALLOWED = 8
* FILESIZE_NOT_ALLOWED = 9
* HEADER_TOO_LONG = 10
* DP_ERROR_CREATE = 11
* DP_ERROR_SEND = 12
* DP_ERROR_WRITE = 13
* UNKNOWN_DP_ERROR = 14
* ACCESS_DENIED = 15
* DP_OUT_OF_MEMORY = 16
* DISK_FULL = 17
* DP_TIMEOUT = 18
* FILE_NOT_FOUND = 19
* DATAPROVIDER_EXCEPTION = 20
* CONTROL_FLUSH_ERROR = 21
* OTHERS = 22
IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
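A possible alternative (just a sketch, assuming all three internal tables share the same flat line type): collect the rows into one table in the required order and call GUI_DOWNLOAD only once, so the file is written in a single pass instead of three appends:

```abap
" Sketch, assuming ITAB1/ITAB2/ITAB3 have identical flat line types.
DATA: IT_ALL LIKE ITAB1.

APPEND LINES OF ITAB1 TO IT_ALL.  " debit records first
APPEND LINES OF ITAB2 TO IT_ALL.  " then credit records
APPEND LINES OF ITAB3 TO IT_ALL.  " trailer record last

CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    FILENAME = P_FILE
  TABLES
    DATA_TAB = IT_ALL
  EXCEPTIONS
    OTHERS   = 22.
```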
If useful reward.
Vasanth -
How to download dynamic table data into CSV format
Hi,
I have data in a dynamic table <it_data>, and I want to download it in CSV format.
So I tried to use FM SAP_CONVERT_TO_CSV_FORMAT, but how do I use this FM with a dynamic table?
Here <it_data> is TYPE STANDARD TABLE; we can pass this to I_TAB_SAP_DATA, but the problem is with the CHANGING parameter (TRUXS_T_TEXT_DATA) - what type of table should it be?
And one more thing: the structure of <it_data> is also dynamic; depending on the input, the structure of <it_data> may vary.
Regards,
MrunalHi,
check this one, it may be helpful to you...........
*Use FM SAP_CONVERT_TO_CSV_FORMAT and pass a Delimiter and ITAB. This
returns an Ouput_String
Convert Internal Table to ; delimited format
There is a filename parameter on this FM but I don't think it's used
call function 'SAP_CONVERT_TO_CSV_FORMAT'
exporting
i_field_seperator = Delimiter "default ';'
i_filename = Filename
tables
i_tab_sap_data = ITAB
changing
i_tab_converted_data = Ouput_String
exceptions
conversion_failed = 1
others = 2.
Regard's
SHAIK.
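To answer the CHANGING-parameter question directly: TRUXS_T_TEXT_DATA is itself a standard table type of character lines, so you just declare a variable of exactly that type and pass it; a generic field symbol works fine as the source table. A hedged sketch, assuming <it_data> is already assigned:

```abap
" Sketch: pass a dynamic internal table to SAP_CONVERT_TO_CSV_FORMAT.
" Assumes <it_data> is an assigned field symbol of TYPE STANDARD TABLE.
DATA: LT_CSV TYPE TRUXS_T_TEXT_DATA.   " table type expected by the FM

CALL FUNCTION 'SAP_CONVERT_TO_CSV_FORMAT'
  EXPORTING
    I_FIELD_SEPERATOR    = ';'        " parameter name is spelled this way in SAP
  TABLES
    I_TAB_SAP_DATA       = <IT_DATA>
  CHANGING
    I_TAB_CONVERTED_DATA = LT_CSV
  EXCEPTIONS
    CONVERSION_FAILED    = 1
    OTHERS               = 2.
```

Since the FM works line-wise on whatever structure the source table has, the fact that <it_data>'s structure varies at run time is not a problem for the conversion itself.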