CSV File problem
Hi,
Our web-based application generates some CSV reports. They work fine from other machines, but when I access those reports from my machine, all the data is shown in one column. I found that every row is wrapped in double quotes. Why does it generate the file wrongly on my machine? Can you suggest whether there is a wrong setting on my machine?
Thanks!
With regards,
Rahul.
It's really hard to say from what you have posted what the problem is.
Downloading files like CSV in a browser sometimes does funny things (especially in IE, and especially on a Mac), but this doesn't sound to me like one of those problems.
I suggest using a tool that is not a web browser to confirm exactly what the output files from your web application contain.
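For instance, here is a minimal Python sketch (the sample data below is made up) showing how a report whose rows are wrapped in double quotes parses as a single column, and how stripping the outer quotes restores the columns:

```python
import csv
import io

# Sample of what a "whole row wrapped in quotes" report looks like (made-up data).
raw = '"id,name,amount"\n"1,Alice,10.50"\n"2,Bob,7.25"\n'

# A CSV parser sees each fully-quoted line as a single field,
# which is why Excel shows everything in one column.
rows = list(csv.reader(io.StringIO(raw)))
print(rows[0])   # ['id,name,amount'] : one field per row, not three

# Stripping the outer quotes restores the expected three columns.
fixed = "".join(line.strip().strip('"') + "\n" for line in raw.splitlines(True))
rows = list(csv.reader(io.StringIO(fixed)))
print(rows[0])   # ['id', 'name', 'amount']
```

If the quotes show up when you inspect the raw bytes this way, the server is generating them; if not, the problem is in whatever opens the file on your machine.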
Similar Messages
-
Hello everyone.
I have a minor problem uploading a CSV file to HTML DB.
I don't know the exact reason, but HTML DB threw
"ORA-20001: Unable to create collection: ORA-06502: PL/SQL: numeric or value error" whenever I tried to upload my CSV file. After a few rounds of deleting potentially problem-causing columns and trying again, I found out the following:
When numeric values and character values are stored together in a single column, the upload fails. For example, we have a column which stores the employee number. The employee number is just a sequential numeric value; however, temporary employees have 'T' in front of their employee number, so theirs begin with something like T0032 and so on.
So I then tried to enclose all the employee numbers which start with a numeric value in " characters, but that would simply take too long to do manually, and Excel does not seem to support enclosing the values in " when it saves a spreadsheet in CSV format.
So, I'm kind of stuck right now.
Can anyone give me a good way to deal with it?
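(A side note on the quoting idea: forcing quotes around every field is straightforward outside Excel. A minimal Python sketch, with made-up employee data:)

```python
import csv
import io

rows = [["T0032", "Kim", "Temporary"],   # hypothetical employee records
        ["0017", "Lee", "Regular"]]

out = io.StringIO()
# QUOTE_ALL wraps every field in double quotes, so "0017" cannot be
# reinterpreted as the number 17 by whatever loads the file.
csv.writer(out, quoting=csv.QUOTE_ALL).writerows(rows)
print(out.getvalue())
```

A small script like this can rewrite an Excel-saved CSV in bulk instead of quoting cells by hand.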
THANK YOU!
Thanks for updating my forum setting, my name is now clearly visible :-)
Anyway, I went back and tested a couple of things...
It now appears that the problem is not caused by values inside the column; instead,
I believe the size of a CSV file containing a certain character set is the issue here.
This is a rough estimate, but a file size larger than about 31.7~31.9 KB caused errors IF THE FILE CONTAINED A CHARACTER SET OTHER THAN ENGLISH.
Here is some information about my setup:
1. Oracle database: initially 9.2.0.1 -> patched upgrade to 9.2.0.4
2. HTMLDB: 1.4.0.00.21c (downloaded from otn)
3. db character set : UTF-8
4. OS: Windows 2000 (with up-to-date service packs, security patches, etc.)
5. system: Toshiba Tecra 2100 with 1 GB RAM and 40 GB HDD
6. operating system locale: Korean, South Korea
I tried uploading many other files in both English and Korean, which is my national language. The English CSV files worked beautifully, without any file size limitation. However, when I tried to upload a file with KOREAN characters in it, it failed.
Intrigued by this behavior, I started to test the file upload with various Excel files, and found out that:
1. English CSV files caused absolutely no errors.
2. An English file with a single Korean character immediately threw the error if the size exceeded 31.8 KB (or I think the limit is 32 KB).
3. A Korean file mixed with English caused the same error if the size exceeded 32 KB. The distribution of Korean characters inside the CSV file did not matter; just don't go beyond 32 KB!
Please try to reproduce this behavior (though I presume some effort will be required to reproduce this error perfectly, since it is not easy to obtain a foreign-locale OS in the US... is it?)
Anyway, thanks for your quick reply, and I hope this problem gets fixed, because otherwise I have to split my file into 32 KB chunks!
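(The 32 KB threshold is consistent with a byte limit rather than a character limit: Korean characters expand to three bytes each in UTF-8, and PL/SQL VARCHAR2 variables max out at 32,767 bytes, which may be the limit being hit here, though that is a guess. A quick illustration of the byte expansion:)

```python
s_en = "employee"   # ASCII text: 1 byte per character in UTF-8
s_ko = "직원"       # Korean for "employee": 3 bytes per character in UTF-8

print(len(s_en), len(s_en.encode("utf-8")))   # 8 characters, 8 bytes
print(len(s_ko), len(s_ko.encode("utf-8")))   # 2 characters, 6 bytes
```

So a mostly-Korean file hits a 32,767-byte cap at roughly a third of the character count an English file would.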
- Howard -
SSRS 2012 export to comma-delimited (CSV) file problem
In an ssrs 2012 report, I want to be able to export all the data to a csv (comma delimited) file that only contains the detailed row information. I do not want to export any rows that contain header data information.
Right now the export contains header row information and detail row information. The header row information exists for the PDF exports.
To have the header row not exist on CSV exports, I have tried the following options on the tablix with the header row information with no success:
1. = iif(Mid(Globals!RenderFormat.Name,1,5)<>"EXCEL",false,true),
2. = iif(Mid(Globals!RenderFormat.Name,1,3)="PDF",false,true),
3. For the textboxes that contain the column headers, I set DataElementOutput=Auto, and
for the textboxes that contain the actual data, I set DataElementOutput=Output.
Basically I need the tablix that contains the header information not to appear when the export is for a csv file.
However when the export is for the detail lines for the csv export, I need that tablix to show up.
Thus can you tell me and/or show me code on how to solve this problem?
Hi Wendy,
Based on my research, the expression Ione posted to hide items only works well when the render format is RPL or MHTML, because only those two render formats have RenderFormat.IsInteractive set to true.
In your scenario, you want to hide the tablix header only when exporting the report to CSV format. A CSV export uses the textbox names in the detail section as the column names, not the table header you see when viewing the table. So if we want to hide the header row of the
tablix, please refer to the steps below to enable the "NoHeader" setting in the RSReportserver.config file:
Navigate to the RSReportserver.config file: <drive:>\Program Files\Microsoft SQL Server\MSRS1111.MSSQLSERVER\Reporting Services\ReportServer\RSReportserver.config.
Back up the RSReportserver.config file before modifying it, then open it with Notepad.
In the <Render> section, add the new code for the CSV extension like this:
<Extension Name="CSV" Type="Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering">
<Configuration>
<DeviceInfo>
<NoHeader>True</NoHeader>
</DeviceInfo>
</Configuration>
</Extension>
Save the RSReportserver.config file.
For more information about CSV Device Information Settings, please see:
http://msdn.microsoft.com/en-us/library/ms155365(v=sql.110)
If you have any other questions, please feel free to ask.
Thanks,
Katherine xiong
TechNet Community Support -
Hi gurus,
I encountered a problem where my output CSV file content is incorrect. The source is a SAP proxy.
For example, my CSV file should have material code, material description, and date. However, when I open the CSV file, it only has the material code in one line, and the rest of the fields are not displayed in the CSV file.
<?xml version="1.0" encoding="UTF-8"?>
<ns0:MT_PRD_MASTER_SND xmlns:ns0="http://com.starhub/sapprdcat">
<RECORDSET> 1:1
<RECORD> 1:unbounded
<MATNR/>
<MAKTX/>
<DATE_ADD/>
</RECORD>
</RECORDSET>
</ns0:MT_PRD_MASTER_SND>
My source and target are same.
CC Receiver setting:
Transport Protocol : File System (NFS)
Message Protocol : File Content Conversion
Adapter Engine : Integration Server
Recordset Structure : RECORD
RECORD.fieldNames : MATNR,MAKTX,DATE_ADD
RECORD.endSeparator : 'nl'
RECORD.fieldSeparator : ,
RECORD.processFieldNames: fromConfiguration
RECORD.addHeaderLine: 0
The INCORRECT result displays only MATNR, all in one line:
HNOK0000102 HNOK0000108 C032S31384A C032S3XDATA CFGSC300300 CRAWC300001
Expected Result should be :
HNOK0000102,Material test 1,01/01/2007
HNOK0000108,Material test 2,01/02/2007
C032S31384A,Material test 3,01/01/2007
C032S3XDATA,Material test 4,01/01/2007
CFGSC300300,Material test 5,01/01/2007
CRAWC300001,Material test 6,01/01/2007
Thank you very much for help.
Regards,
Hi,
check this parameter:
Recordset Structure : RECORD [wrong]
Recordset Structure : RECORDSET,RECORD [right]
and the other parameters will be like this:
RECORD.fieldNames : MATNR,MAKTX,DATE_ADD
RECORD.endSeparator : 'nl'
RECORD.fieldSeparator : ,
RECORD.processFieldNames: fromConfiguration
RECORD.addHeaderLine: 0
<b>RECORDSET.fieldSeparator: 'nl'</b>
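(Roughly, the corrected structure makes each RECORD one comma-separated line terminated by a newline. A small Python sketch of the intended flattening, using data taken from the expected result above:)

```python
# Material records as they would be parsed from the proxy XML (sample data).
records = [
    ("HNOK0000102", "Material test 1", "01/01/2007"),
    ("HNOK0000108", "Material test 2", "01/02/2007"),
]

# RECORD.fieldSeparator = ','  -> join the fields with commas
# RECORD.endSeparator   = 'nl' -> terminate each record with a newline
csv_text = "".join(",".join(fields) + "\n" for fields in records)
print(csv_text)
```

With only RECORD declared in the Recordset Structure, the adapter cannot map the nested RECORDSET/RECORD hierarchy, which is why the fields collapsed.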
Sachin
Message was edited by:
Sachin Dhingra -
File transfer Open dataset CSV file Problem
Hi Experts,
I have an issue transferring Korean characters to a .CSV file using OPEN DATASET.
data : c_file(200) TYPE c value '
INTERFACES\In\test8.CSV'.
I have tried:
OPEN DATASET c_file FOR OUTPUT IN LEGACY TEXT MODE CODE PAGE '4103'.
OPEN DATASET c_file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE.
OPEN DATASET c_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
Nothing is working.
But the code below works for downloading to the presentation server. How can the same be achieved when writing the file to the application server?
CALL METHOD cl_gui_frontend_services=>gui_download
EXPORTING
filename = 'D:/test123.xls'
filetype = 'ASC'
write_field_separator = 'X'
dat_mode = 'X'
codepage = '4103'
write_bom = 'X'
CHANGING
data_tab = t_tab
EXCEPTIONS
file_write_error = 1
no_batch = 2
gui_refuse_filetransfer = 3
invalid_type = 4
no_authority = 5
unknown_error = 6
header_not_allowed = 7
separator_not_allowed = 8
filesize_not_allowed = 9
header_too_long = 10
dp_error_create = 11
dp_error_send = 12
dp_error_write = 13
unknown_dp_error = 14
access_denied = 15
dp_out_of_memory = 16
disk_full = 17
dp_timeout = 18
file_not_found = 19
dataprovider_exception = 20
control_flush_error = 21
not_supported_by_gui = 22
error_no_gui = 23
OTHERS = 24.
Hi,
I would recommend using OPEN DATASET ... ENCODING UTF-8 ...
If your excel version is unable to open this format, you can convert from 4110 to 4103 with report RSCP_CONVERT_FILE.
Please also have a look at
File upload: Special character
Best regards,
Nils Buerckel -
Spool to CSV file problem: comma included in the data
Friends,
I spooled a table's data to a CSV file.
When I open it in Excel, some of the records end up in different cells.
Below is the record:
Colonial Street, Ruwanda
When I view it in Excel,
"Colonial Street" comes in one cell
and
"Ruwanda" comes in the next cell,
but "Colonial Street, Ruwanda" is a single entry in the column.
So how can I spool a record which has a comma (,) between the words so that it ends up in a single Excel cell?
Thanks
Usually putting double quotes around your column values should take care of that:
select '"'||column1
||'","'||column2
||'","'||column3||'"'
from your_table -
SSRS 2008 R2 export to CSV file problem
In an SSRS 2008 R2 report, the users are going to export the data to: csv (comma delimited) and excel.
I am using the following to display if a column is visible or not:
=IIF(Mid(Globals!RenderFormat.Name,1,5)="EXCEL" AND First(Len(Fields!CustomerNumber.Value)) > 0,False,true).
I have set the DataElementOutput=Output for the textbox that displays the value. I have left DataElementOutput=Auto for the textbox that contains the column header.
When exporting to csv (comma delimited) or excel, I basically want the column to be visible when there is data in the field and the column not to be visible when there is no data.
The code works for excel but the code does not work for comma delimited.
Thus can you tell me what I can do so the column is not displayed when the data is exported to csv (comma delimited)?
I don't think what you are trying to do is supported in .CSV files, as they only save text and values. See the following article:
http://office.microsoft.com/en-us/help/excel-formatting-and-features-that-are-not-transferred-to-other-file-formats-HP010014105.aspx
If you open a .CSV file using Excel you can use formulas, but if you try to save it, it will not allow you to. I assume this is what you are trying to do. -
Memory problem with loading a csv file and displaying 2 xy graphs
Hi there, I'm having some memory issues with this little program.
What I'm trying to do is read a .csv file of 215 MB (6 million lines, more or less), extract the x-y values as 1D arrays and display them in 2 XY graphs (VI attached).
I've noticed that this process eats from 1.6 to 2 GB of RAM, and the 2 X-Y graphs, as soon as they are loaded (2 minutes, more or less), are really, really slow to move with the scrollbar.
My question is: is there a way to use fewer memory resources and make the graphs move more smoothly?
Thanks in advance,
Ierman Gert
Attachments:
read from file test.vi 106 KB
Hi Ierman,
how many datapoints do you need to handle? How many do you display on the graphs?
Some notes:
- Each graph has its own data buffer, so all data wired to a graph will be buffered again in memory. When wiring a (big) 1D array to a graph, a copy is made in memory. And you mentioned 2 graphs...
- Load the array in parts: read a number of lines, parse them into arrays as before (maybe using "Spreadsheet String to Array"?), and finally append the parts to build the big array (this may lead to memory problems too).
- Avoid data copies when handling big arrays. You can show buffer creation using menu->tools->advanced->show buffer allocation.
- Use SGL instead of DBL when possible...
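(To put numbers on the SGL-vs-DBL advice: single precision halves the per-buffer footprint. A back-of-the-envelope check, sketched in Python for illustration:)

```python
from array import array

n = 6_000_000                          # roughly the number of samples in the file
dbl_bytes = n * array("d").itemsize    # DBL: 8 bytes per sample
sgl_bytes = n * array("f").itemsize    # SGL: 4 bytes per sample

print(dbl_bytes // 2**20, "MB")        # 45 MB per graph buffer
print(sgl_bytes // 2**20, "MB")        # 22 MB per graph buffer
```

Multiply by two graphs plus the intermediate parsing copies, and the gigabytes of RAM usage reported above become plausible.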
Message Edited by GerdW on 05-12-2009 10:02 PM
Best regards,
GerdW
CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
Kudos are welcome -
Problems With Data Alignment when spooling to a CSV file
Dear members,
I am spooling data to a CSV file. My data contains 3 columns.
For example:
col1 col2 col3
USD,10000033020000000000000,-1144206.34
The 2nd column is alphanumeric: it contains some rows which have only numbers and some which have numbers and letters.
The 3rd column contains only numbers, with positive or negative values.
I am facing a problem with alignment. When I open the spooled CSV file, I find that the 3rd column is aligned to the right.
In the 2nd column, rows which have only numbers are right-justified and rows which have alphanumeric data are left-justified.
I tried using the JUSTIFY clause in SQL*Plus but it is still not working for me.
Can anybody give an opinion on how to control the alignment in spooled CSV files?
Your response is highly appreciated.
Here is my code :
WHENEVER SQLERROR CONTINUE
SET TIMING off
set feedback off
set heading off
set termout OFF
set pagesize 0
set linesize 200
set verify off
set trimspool ON
SET NEWPAGE NONE
col to_char(glcd.segment1||glcd.segment2||glcd.segment3||glcd.segment4||glcd.segment5||glcd.segment6) ALIAS CONCATENATED_SEGMENTS
col CONCATENATED_SEGMENTS justify left
col to_char(decode(glbal.currency_code,glsob.currency_code,
(begin_balance_dr - begin_balance_cr) + (period_net_dr -period_net_cr),
(begin_balance_dr_beq - begin_balance_cr_beq) + (period_net_dr_beq -period_net_cr_beq))) alias Total_Functional_Currency
col Total_Functional_Currency justify left
COlUMN V_INSTANCE NEW_VALUE V_inst noprint
select trim(lower(instance_name)) V_INSTANCE
from v$instance;
column clogname new_value logname
select '/d01/oracle/'|| '&&V_inst' ||'out/outbound/KEMET_BALANCE_FILE_EXTRACT' clogname from dual;
spool &&logname..csv
SELECT glsob.currency_code ||','||
to_char(glcd.segment1||glcd.segment2||glcd.segment3||glcd.segment4||glcd.segment5||glcd.segment6) ||','||
to_char(decode(glbal.currency_code,glsob.currency_code,
(begin_balance_dr - begin_balance_cr) + (period_net_dr -period_net_cr),
(begin_balance_dr_beq - begin_balance_cr_beq) + (period_net_dr_beq -period_net_cr_beq)))
from gl_balances glbal , gl_code_combinations glcd , gl_sets_of_books glsob
where period_name = '&1' /* Period Name */
and glbal.translated_flag IS NULL
and glbal.code_combination_id = glcd.code_combination_id
and glbal.set_of_books_id = glsob.set_of_books_id
and glbal.actual_flag = 'A'
and glsob.short_name in ('KEC-BOOKS' , 'KUE' , 'KEU','KEMS', 'KEAL' , 'KEAL-TW' , 'KEAL-SZ' , 'KEAM')
and glcd.segment1 != '05'
and decode(glbal.currency_code , glsob.currency_code , (begin_balance_dr - begin_balance_cr) + (period_net_dr -period_net_cr) ,
(begin_balance_dr_beq - begin_balance_cr_beq) + (period_net_dr_beq -period_net_cr_beq)) != 0
and glbal.template_id IS NULL
ORDER BY glcd.segment1 || glcd.segment2 || glcd.segment3 || glcd.segment4 || glcd.segment5 || glcd.segment6
spool off
SET TIMING on
set termout on
set feedback on
set heading on
set pagesize 35
set linesize 100
set echo on
set verify on
Thanks
Sandeep
I think you do not have to worry about your code if the plain-text files created are OK when opened in Notepad. It is in Excel that you will need some adjustments. I'm not sure about this, but you might want to read about applying styles in Excel by going through its help menu.
-
How to avoid the split problem when uploading the data from csv file
Dear Friends,
I have to upload data from a .csv file to my custom table, and I have found a problem when uploading the data.
I am using the code below; please suggest what I have to do in this regard:
SPLIT wa_raw_csv AT ',' INTO
wa_empdata_csv-status
wa_empdata_csv-userid
wa_empdata_csv-username
wa_empdata_csv-Title
wa_empdata_csv-department.
APPEND wa_empdata_csv TO itab.
In the flat file, I can see that one of the records has the field Title as
"Director, Finance - NAR". Through my code, wa_empdata_csv-Title gets the split value "Director, and the Department field gets Finance - NAR". Even though "Director, Finance - NAR" is one value, it gets split,
which is the problem I am facing. Please could anybody let me know how I should handle this in my code so that
"Director, Finance - NAR" will not be split into two words.
Thanks & Regards
Madhuri
Hi Madhuri,
The best way to avoid such problems is to use a TAB-delimited file instead of comma-separated data. Generally, TAB does not appear in data.
If you are generating the file, use tab instead of comma.
If you cannot modify the format of the file and the data length in the file is fixed, you will need to define the structure and then move the data into a fixed-length structure.
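(For comparison, this is exactly the case a CSV-aware parser handles and a naive split does not. A small Python sketch with made-up data:)

```python
import csv
import io

line = 'A001,user1,"Director, Finance - NAR",Finance\n'   # hypothetical row

naive = line.strip().split(",")                  # splits inside the quoted field
proper = next(csv.reader(io.StringIO(line)))     # respects the quotes

print(len(naive))    # 5 fields: the title is broken in two
print(len(proper))   # 4 fields
print(proper[2])     # Director, Finance - NAR
```

ABAP's SPLIT behaves like the naive version, which is why a tab delimiter (or fixed-length parsing) is the safer route there.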
Regards,
Mohaiyuddin -
Hello :)
I'm in trouble: I have a big CSV file (over 5 GB of web-analytics data) and my 64-bit Excel (and 6 GB of RAM).
I can't load the file into the data model because of its size; there is an "out of memory" error in Power Query.
This is the first time I have encountered such a problem.
What options do I have to work with such a file? Increase the memory in my computer? Would that solve the problem? How much would I need to work with a CSV this size?
Or maybe I can upload my data somewhere to Azure and work with it there?
So the problem: is there any way to deal with big files using Power Query? Or do I need to become a developer and learn SQL or other languages?
Thanks in advance.
Max
Hi Miguel!
Thanks for your answer.
I've tried to load this file on a virtual PC from the Azure cloud with this configuration (screenshot omitted).
I have increased the memory limit in the Power Query settings (screenshot omitted).
And still, the problem is the same (screenshot omitted).
What am I doing wrong?
I have a problem creating a CSV file.
If a column has a single-line value, it comes out in a single cell. But if the column value was entered with carriage returns, spanning several lines,
I am not able to create the CSV file properly: that one column value takes more than one cell in the CSV.
For example the column "Issue" has following value:
"Dear
I hereby updated the Human Resources: New User Registration Form Request.
And sending the request for your action.
Regards
Karthik".
If i try to create the csv file that particular record is coming as follows:
0608001,AEGIS USERID,SINGAPORE,Dear
I hereby updated the Human Resources: New User Registration Form Request.
And sending the request for your action.
Regards
Karthik,closed.
If we try to load the data into a table, it gives an error, since that one record spans more than one line. How can I store that value on a single line in the CSV file?
Pls help.
I have tried using chr(10) and chr(13) like this... still it is not solved:
select REQNO ,
'"'||replace(SUBJECT,chr(13),' ')||'"' subject,
AREA ,
REQUESTOR ,
DEPT ,
COUNTRY ,
ASSIGN_TO ,
to_Date(START_DT) ,
START_TIME ,
PRIORITY ,
'"'||replace(issues, chr(13), ' ')||'"' issues,
'"'||replace(SOLUTIONS,chr(13),' ')||'"' SOLUTIONS ,
'"'||replace(REMARKS,chr(13),' ')||'"' REMARKS ,
to_date(CLOSED_DT) ,
CLOSED_TIME ,
MAN_DAYS ,
MAN_HRS ,
CLOSED_BY ,
STATUS from asg_log_mstr
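(Note that the SELECT above replaces only chr(13); any embedded chr(10) line feeds would still break the line. A quick illustration of flattening both, sketched in Python with a made-up field value:)

```python
issue = "Dear\r\nI hereby updated the form.\r\nRegards\r\nKarthik"  # multi-line field

# Replace both CR (chr(13)) and LF (chr(10)) so the value stays on one CSV line,
# then collapse the doubled spaces left behind.
flat = issue.replace("\r", " ").replace("\n", " ").replace("  ", " ")
print(flat)
```

The SQL equivalent is nesting the two replacements: replace(replace(issues, chr(13), ' '), chr(10), ' ').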
Pls help. -
Comma problem in receiver file FCC csv file
Hi,
We have a problem with CSV conversions in the file adapter. Our scenario is: Proxy -> XI -> FCC (CSV).
<?xml version="1.0" encoding="utf-8" ?>
- <n0:RDM_OrganizationalStructure_MT_OB xmlns:n0="ness.com:RDM_ORGANIZATIONALSTRUCTURE" xmlns:prx="urn:sap.com:proxy:DV3:/1SAI/TAS9B35FF35521A19F6CBEE:701:2009/02/10">
- <row>
<KTEXT>Information,Technology</KTEXT>
<KTEXT>"Information, for internal use only" </KTEXT>
</row>
</n0:RDM_OrganizationalStructure_MT_OB>
In the output CSV file the field KTEXT is split: "Information" goes into one column and "Technology" into the next column.
To avoid this I enclosed the KTEXT field in " (double quotes) on both sides, so "Information,Technology" stays in the same column. That is fine,
but the second record already arrives from the source as "Information, for internal use only",
so it gets split: "Information" goes into the 1st column and "for internal use only" into the 2nd. I want it to stay in the 1st column as "Information, for internal use only".
How can I resolve this issue so the record stays in the same column?
I would also like to know why " is not being written to the CSV file.
My Fcc conversion parameters are
record Structure: row
row.addHeaderLine: 1
row.headerLine: Code,Name,Description
row.fieldSeparator: ,
row.endSeparator: 'nl'
pls advise how to overcome this problem
Thanks
Vankadoath
Hi Vanka,
In O/P csv file the filed KTEXT is spiting as Information onecolumn and Technology in the next column. for avoiding this i conceited with " (double-quote) on both the sides to the KTEXT filed. so the Information,Technolg is appending in the same column. Its fine,
To make it in the same column, you put it into double quotes.
"Information,Technology" -->> Information,Technology
but for the second record from the source side itself the its getting as "Information, for internal use only" . so this it is spliting and appending in the next column. in 1st column Information and in the 2nd for internal use only. i want to append this in 1st column as "Information, for internal use only"
As this already has double quotes, you need to use 2 more double quotes to escape that.
"""Information, for internal use only""" -->> "Information, for internal use only"
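(Python's csv module applies the same escaping rule, which can be handy for checking what the output should look like; the second field here is made up:)

```python
import csv
import io

out = io.StringIO()
# A field that itself contains quotes and a comma: the writer wraps it in
# quotes and doubles each embedded quote, matching the escaping above.
csv.writer(out).writerow(['"Information, for internal use only"', "HR"])
print(out.getvalue())   # """Information, for internal use only""",HR

# Reading it back restores the original field, quotes included.
back = next(csv.reader(io.StringIO(out.getvalue())))
print(back[0])
```

So a consumer that parses quotes correctly will see the original value again; the tripled quotes only exist on the wire.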
Regards,
Sunil Chandra -
prodorder;batchno ;text1 ; text2; text3;text4;text5;
"100001012;0008339492;487-07-G1;22;6,86;000081022G; "
"100001013;0008339493;487-07-G1;22;6,86;000081022E;1 "
This is my sample CSV file.
I want to upload it using BDC.
I split the internal table and upload like this:
call function 'GUI_UPLOAD'
exporting
FILENAME = FILE
FILETYPE = 'ASC'
has_field_separator = ','
HEADER_LENGTH = 0
READ_BY_LINE = 'X'
DAT_MODE = ' '
CODEPAGE = ' '
IGNORE_CERR = ABAP_TRUE
REPLACEMENT = '#'
CHECK_BOM = ' '
IMPORTING
FILELENGTH =
HEADER =
tables
DATA_TAB = ITAB
loop at ITAB.
split ITAB at ';' into ITAB1-AUFNR
ITAB1-CHARG
ITAB1-TEXT1
ITAB1-TEXT2
ITAB1-TEXT3
ITAB1-TEXT4
ITAB1-TEXT5.
append ITAB1.
endloop.
But finally I'm getting the output as
"100001012 in the order number field of the CO01 transaction.
It should display 100001012 in the order number field.
Please suggest how we can rectify this problem.
Hi Vijay,
I can suggest one thing: before splitting ITAB, use the REPLACE command to remove the double-quote character ( " ).
E.g. Loop at Itab.
REPLACE ALL OCCURRENCES OF '"' IN itab WITH space. (check syntax)
Hope this will solve your problem.
Regards
Sourabh -
I have a csv file that I download once a week and load into Excel. Yesterday FF25 decided that the csv file is a text file (it is not) and converted everything to text. This is a new and unwelcome problem. To my horror, I was forced to try opening it in IE, where it worked just fine.
The FF open screen has
filename.csv
which is: text document
from https://companyName.com
This means that the server sends the file as text/plain (this may have changed, if it didn't happen before).
Firefox always opens a file sent as text/plain in a browser tab.
You can right-click the link and choose "Save Link As" to save the file, or use "Save Page As" if the file opens in a tab.
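(The longer-term fix is on the server side: send a CSV media type and an attachment disposition. A sketch of the response headers involved; the filename is hypothetical:)

```python
# HTTP response headers that make a browser download the file as CSV
# instead of rendering it inline as plain text (filename is made up).
headers = {
    "Content-Type": "text/csv; charset=utf-8",
    "Content-Disposition": 'attachment; filename="weekly-report.csv"',
}

for name, value in headers.items():
    print(f"{name}: {value}")
```

With these headers Firefox offers a save/open dialog rather than opening the file in a tab.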