Long number in CSV file appearing in scientific notation by default
Hi,
How can I stop a long number in a CSV file that is opened in excel from appearing in scientific notation by default?
eg.
"hello","778002405501 ", "yes"
becomes:
hello | 7.78002E+11 | yes
I have tried wrapping the data in quotes in the csv but to no avail.
Thanks in advance,
Alistair
You can change the extension from ".csv" to ".xls", build the data as an HTML table, and apply the style
style="mso-number-format:\@;"
to the cells holding long numbers.
Please see the sample code below in Classic ASP.
You can also read more on my blog: http://sarbashish.wordpress.com/2012/11/30/export-to-excel-how-to-prevent-long-numbers-from-scientific-notation/
<%
Response.Clear
Response.CacheControl = "no-cache"
Response.AddHeader "Pragma", "no-cache"
Response.Expires = -1
Response.ContentType = "application/vnd.ms-excel"
Dim FileName
FileName = "TestDB Lookup-" & month(now) & "-" & day(now) & "-" & year(now) & ".xls"
Response.AddHeader "Content-Disposition", "inline;filename=" & FileName
%>
<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:x="urn:schemas-microsoft-com:office:excel" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=Content-Type content="text/html; charset=UTF-8">
<!--[if gte mso 9]>
<xml>
<x:ExcelWorkbook>
<x:ExcelWorksheets>
<x:ExcelWorksheet>
<x:WorksheetOptions>
<x:DisplayGridlines/>
</x:WorksheetOptions>
</x:ExcelWorksheet>
</x:ExcelWorksheets>
</x:ExcelWorkbook>
</xml>
<![endif]-->
</head>
<body>
<table border="0">
<tr>
<td>ID</td>
<td>Name</td>
</tr>
<tr>
<td style="mso-number-format:\@;">01234567890123456567678788989909000030</td>
<td>Sarbashish B</td>
</tr>
</table>
</body>
</html>
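The same trick can be reproduced in other environments; for example, a minimal Python sketch (file name and row data are made up for illustration) that builds the Excel-friendly HTML with the mso-number-format:\@ style:

```python
# Sketch: build an HTML table that Excel opens as a spreadsheet when the
# file is saved with an .xls extension. The mso-number-format:\@ style
# forces the long-number column to text. Row data here is hypothetical.

def excel_html(rows):
    parts = ['<html xmlns:x="urn:schemas-microsoft-com:office:excel">',
             '<body><table border="0">']
    for long_id, name in rows:
        # \@ is Excel's "Text" number format: the cell keeps all digits
        # instead of being collapsed to scientific notation.
        parts.append('<tr><td style="mso-number-format:\\@;">%s</td>'
                     '<td>%s</td></tr>' % (long_id, name))
    parts.append("</table></body></html>")
    return "\n".join(parts)

html = excel_html([("778002405501", "hello")])
# save the string as e.g. report.xls and open it in Excel
```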
Sarbashish Bhattacharjee http://sarbashish.wordpress.com
Similar Messages
-
Trouble loading a large number of csv files
Hi All,
I am having an issue loading a large number of csv files into my LabVIEW program. I have attached a png of the simplified code for the load sequence alone.
What I want to do is load data from 5000 laser beam profiles, so 5000 csv files (68x68 elements), and then carry out some data analysis. However, the program will only ever load 2117 files, and I get no error messages. I have also tried, initially loading a single file, selecting a crop area - say 30x30 elements - and then loading the rest of the files cropped to these dimensions, but I still only get 2117 files.
Any thoughts would be much appreciated,
Kevin
Kevin Conlisk
Ph.D Student
National Centre for Laser Applications
National University of Ireland, Galway
IRELAND
Solved!
Go to Solution.
Attachments:
Load csv files.PNG 14 KB
How many elements are in the array of paths (your size(s) indicator)?
I suspect that the number of files you can have open is limited to a certain maximum.
You could also select a certain folder and use 'List Folder' to get a list of files and load those.
Your data set is 170 MB, which is not really astonishing, but you should watch your programming to prevent duplicating the data in memory.
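A sketch of that "list the folder, then load each file" approach in Python (folder name and file layout are assumptions), where each file handle is closed as soon as the file is read, so an open-file limit is never hit:

```python
import csv
import glob

def load_profiles(folder):
    """Load every CSV beam profile in `folder` (hypothetical layout:
    a grid of numeric elements per file). The `with` block closes each
    file handle immediately after reading, so a per-process limit on
    open files is never reached even with thousands of files."""
    profiles = []
    for path in sorted(glob.glob(folder + "/*.csv")):
        with open(path, newline="") as f:
            rows = [[float(x) for x in row] for row in csv.reader(f) if row]
        profiles.append(rows)
    return profiles
```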
Ton
Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
Nederlandse LabVIEW user groep www.lvug.nl
My LabVIEW Ideas
LabVIEW, programming like it should be! -
Error while running bulk load utility for account data with CSV file
Hi All,
I'm trying to run the bulk load utility for account data using CSV, but I'm getting the following error:
ERROR ==> The number of CSV files provided as input does not match with the number of account tables.
Thanks in advance.
Please check your child table:
http://docs.oracle.com/cd/E28389_01/doc.1111/e14309/bulkload.htm#CHDCGGDA
-kuldeep -
External Table which can handle appending multiple csv files dynamic
I need an external table which can handle appending multiple csv files' values.
But the problem I am having is : the number of csv files are not fixed.
I can have between 2 and 6-7 files with the current date as a suffix. Let's say they will be named like my_file1_aug_08_1.csv, my_file1_aug_08_2.csv, my_file1_aug_08_3.csv and so on.
I could hardcode it as follows if I knew the number of files, but unfortunately the number is not fixed, so I need something dynamic, such as injecting a wildcard search on the file pattern.
CREATE TABLE my_et_tbl
( my_field1 varchar2(4000),
  my_field2 varchar2(4000)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY my_et_dir
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL )
LOCATION (UTL_DIR:'my_file2_5_aug_08.csv','my_file2_5_aug_08.csv')
REJECT LIMIT UNLIMITED
NOPARALLEL
NOMONITORING;
Please advise me with your ideas. Thanks.
Joshua.
Well, you could do it dynamically by constructing the location value:
SQL> CREATE TABLE emp_load
2 (
3 employee_number CHAR(5),
4 employee_dob CHAR(20),
5 employee_last_name CHAR(20),
6 employee_first_name CHAR(15),
7 employee_middle_name CHAR(15),
8 employee_hire_date DATE
9 )
10 ORGANIZATION EXTERNAL
11 (
12 TYPE ORACLE_LOADER
13 DEFAULT DIRECTORY tmp
14 ACCESS PARAMETERS
15 (
16 RECORDS DELIMITED BY NEWLINE
17 FIELDS (
18 employee_number CHAR(2),
19 employee_dob CHAR(20),
20 employee_last_name CHAR(18),
21 employee_first_name CHAR(11),
22 employee_middle_name CHAR(11),
23 employee_hire_date CHAR(10) date_format DATE mask "mm/dd/yyyy"
24 )
25 )
26 LOCATION ('info*.dat')
27 )
28 /
Table created.
SQL> select * from emp_load;
select * from emp_load
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
SQL> set serveroutput on
SQL> declare
2 v_exists boolean;
3 v_file_length number;
4 v_blocksize number;
5 v_stmt varchar2(1000) := 'alter table emp_load location(';
6 i number := 1;
7 begin
8 loop
9 utl_file.fgetattr(
10 'TMP',
11 'info' || i || '.dat',
12 v_exists,
13 v_file_length,
14 v_blocksize
15 );
16 exit when not v_exists;
17 v_stmt := v_stmt || '''info' || i || '.dat'',';
18 i := i + 1;
19 end loop;
20 v_stmt := rtrim(v_stmt,',') || ')';
21 dbms_output.put_line(v_stmt);
22 execute immediate v_stmt;
23 end;
24 /
alter table emp_load location('info1.dat','info2.dat')
PL/SQL procedure successfully completed.
SQL> select * from emp_load;
EMPLO EMPLOYEE_DOB EMPLOYEE_LAST_NAME EMPLOYEE_FIRST_ EMPLOYEE_MIDDLE
EMPLOYEE_
56 november, 15, 1980 baker mary alice 0
01-SEP-04
87 december, 20, 1970 roper lisa marie 0
01-JAN-99
SQL>
SY.
P.S. Keep in mind that changing the location will affect all sessions referencing the external table. -
Can you open CSV-files with commas?
It seems to me that we have something of a regression in Numbers 2.0 (part of iWork '09) - it (like Excel before it) no longer opens regular CSV-files.
When I use the new Numbers to try to make a CSV-file, it'll actually use semicolons instead of commas, and any of my old CSV-files (like this one: http://d.ooh.dk/misc/postnumre.csv ) (with commas) will load all the values (and the commas) in a single column.
I wonder if it's just me, or perhaps only a problem when using European notation (with comma as the decimal separator)?
At last, Apple adopted the same behavior as Bento 1.
When the decimal separator is the period, it works with standard CSV files.
When the decimal separator is the comma, it works with CSV files that use the semicolon as the item delimiter.
To open your old CSVs, temporarily set your system to a region whose decimal separator is the period. Given that, Numbers will open them flawlessly.
The ability to choose the separator (as is now possible in Bento 2) would have been nice.
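If switching regions is inconvenient, another option is to rewrite the file itself; a minimal Python sketch (file names are placeholders) converting a comma-delimited CSV into the semicolon-delimited form that a comma-decimal locale expects:

```python
import csv

def commas_to_semicolons(src_path, dst_path):
    """Rewrite a comma-delimited CSV as a semicolon-delimited one, so
    that Numbers on a system using comma as the decimal separator
    parses it into columns instead of one column per row."""
    with open(src_path, newline="") as src, \
         open(dst_path, "w", newline="") as dst:
        reader = csv.reader(src, delimiter=",")
        writer = csv.writer(dst, delimiter=";")
        for row in reader:
            writer.writerow(row)
```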
Yvan KOENIG (from FRANCE dimanche 11 janvier 2009 20:30:03) -
Dropping csv file into a cell?
After updating Numbers, I can no longer drop CSV files into a cell and have the row complete. What am I missing here?
Hello fellow sufferers of CSV headaches from commas within fields and ; instead of , etc.. They've been around as long as I can remember.
If you don't want to use Excel (which is good at this sort of thing), this script should help. (Dropbox download, AppleScript file). The effect is similar to Excel's Text to Columns but the usage is slightly different.
If you end up with text all in one column as Cuky describes, you simply select the body cells in that column containing the text, run the script, click on a cell (the first cell in the body of the next column to the right often works best) and paste.
The results here look like this:
I deliberately made the unexpanded column untidy and threw some difficult things at it, and it did a pretty good job in testing here. Return or shift-return within a field, even if quoted, will cause Numbers to move to a new line, but that's a Numbers thing that can't be helped. Otherwise, it handles the usual csv conventions better than expected.
If your separator is ; instead of , there is a place in the script where you can change that.
If you frequently import data, this is easily placed in the menu, either as an Automator Service or under Scripts.
And it could be modified to read a csv file rather than grabbing values already in a Numbers column, making it almost as convenient as the old drag and drop. It could ask for the csv file and put the results on the clipboard for pasting into a table.
If there is interest in that let me know. And suggestions/feedback on this one welcome.
SG -
Scientific notation doesn't display correctly
Hi, this is just a general question but it's about the Calculator app. The problem is with displaying answers in scientific notation. When I'm in scientific calculator mode, answers displayed on the screen only appear in scientific notation when they are larger than the number of digits that can be displayed. However, whenever I use it to calculate something in which the answer is smaller than the set precision for the number of decimal places, it always gives 0 instead of scientific notation. For example, if I type a number such as 1E19 and press =, then the answer displays correctly as 1E19. However, if I type 1/1E19, and press =, then the answer simply shows up as 0 instead of 1E-19, which is the correct answer in scientific notation. So, it seems that the scientific notation in Calculator simply does not display negative exponents in the scientific notation (I know that I can type such numbers but for some reason, it cannot display them as answers). This is very annoying because I often work with very small numbers, not just very large ones. Is there any way that I can get the scientific notation to display negative exponents when I'm working with very small numbers and when the answer is smaller than the set number of decimal places?
There are tonnes of possibilities and many websites with suggestions. I tried all sorts of these option (clear cache, clear cookies, anti-virus changes, page style options etc) with no success until eventually I accidentally changed my character encoding autodetect (firefox 6) from "off" to "universal" and everything is fine.
-
Numberformatter and scientific notation
Hello all. I'm having an issue with Numberformatter that I could use some assistance with. It looks like this:
<mx:NumberFormatter id="numberFormatter"
precision="6"
rounding="up"
useThousandsSeparator="false"
useNegativeSign="true"/>
I'm applying it to a Number object that was created from a deserialized java BigDecimal. In most cases, this is successful. So, 0.000024603877571 becomes 0.000025 and so on.
However, in some cases, my Number is in scientific notation like "0E-15". In this case, the NumberFormatter formats the number as -15.000000. Not helpful. Any ideas on what is going on and how I might work around it?
Thanks for the quick response, Michael. I was about to simply change the sending of the numbers to strings when I decided to try one more thing. If I use toFixed() on my Number first (before the NumberFormatter.format() call), it seems to fix the issue. So, for example, in my label function for a datagrid column that has Numbers that sometimes appear in scientific notation, I do something like this:
public function myNumberLabelFunction(item:Object, column:DataGridColumn):String
{
    var myNumber:Number = item.number;
    var myNumberString:String = myNumber.toFixed(20);
    return numberFormatter.format(myNumberString);
}
That (toFixed) appears not to mess up the regular numbers or the ones that have E notation. I don't know if anyone has any thoughts on the workaround. It seems to work fine since my formatter is ultimately rounding to a precision of 6. -
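The same idea can be illustrated with Python's decimal module (a hypothetical helper, not the Flex API): render the value in plain positional notation first, then round, so an exponent form like "0E-15" never reaches the formatter:

```python
from decimal import Decimal

def to_plain_string(value, places=6):
    """Render a number that may carry an exponent (e.g. '0E-15') in
    plain positional notation, then round to `places` decimals --
    analogous to calling toFixed() before NumberFormatter.format()."""
    plain = format(Decimal(str(value)), "f")      # 'f' kills E notation
    return format(round(Decimal(plain), places), "f")
```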
Decimal Format and Scientific Notation
I am trying to print numbers in scientific notation using the Decimal Format class. What follows is a simple test program I wrote to find the bug. So far, I have not found a solution.
import java.text.*;

public class formatted {
    public static void main(String[] args) {
        DecimalFormat form = new DecimalFormat("0.###E0");
        double numb = 123456.789;
        System.out.println("Number is: " + form.format(numb));
    }
}
The output of this program is... Number is: 123456E
The output is the same if numb is an int, float, or double. If I format the number as "#####.0" or "#####.00" the output is correct. I think that I am following the rules for formatting a number in scientific notation as the process is outlined in the documentation (provided below).
***** From Decimal Format under Scientific Notation ***
Numbers in scientific notation are expressed as the product of a mantissa and a power of ten, for
example, 1234 can be expressed as 1.234 x 10^3. The mantissa is often in the range 1.0 <= x < 10.0,
but it need not be. DecimalFormat can be instructed to format and parse scientific notation only via a
pattern; there is currently no factory method that creates a scientific notation format. In a pattern,
the exponent character immediately followed by one or more digit characters indicates scientific
notation. Example: "0.###E0" formats the number 1234 as "1.234E3".
Anyone understand how the short program is incorrectly written?
Marc
The problem is:
format = "0.###E0"
input number = 123456.789
output = 123456E (not scientific notation!)
This is not scientific notation at all. There is no decimal point given and no value in the exponent.
I understand entirely that by adding more #'es will provide more precision. The bug I have is the output is not printed in the scientific format; other formats work.
Marc
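For comparison, a working mantissa/exponent formatter should turn 123456.789 into 1.235E5; here is a small Python sketch illustrating the output the "0.###E0" pattern is supposed to produce (this is not the Java API itself):

```python
def sci(x, digits=3):
    """Format x as mantissa and exponent, mimicking the intent of
    DecimalFormat("0.###E0"): one digit before the point, up to
    `digits` after it, and an unsigned decimal exponent."""
    mantissa, _, exp = format(x, ".%de" % digits).partition("e")
    mantissa = mantissa.rstrip("0").rstrip(".")   # "0.###" drops zeros
    return "%sE%d" % (mantissa, int(exp))

print(sci(123456.789))   # 1.235E5
print(sci(1234))         # 1.234E3
```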
ORA-01722: Invalid number when importing .csv file
Hi,
I did not find any information regarding my specific problem until now.
I try to import a *.csv file containing id, double, double, double, double, double (e.g. as a sample line "id_1, 674,6703459157907, 4212,205538937771, 674,6703459158016, 5561,546230769363, 2714,6367797576822") into a table with the following definition:
CREATE TABLE "foo"."BUILDING_SURFACES"
( "ID" VARCHAR2(40 BYTE),
"AREA1" BINARY_DOUBLE DEFAULT 0,
"AREA2" BINARY_DOUBLE DEFAULT 0,
"AREA3" BINARY_DOUBLE DEFAULT 0,
"AREA4" BINARY_DOUBLE DEFAULT 0,
"AREA5" BINARY_DOUBLE DEFAULT 0
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ;
I am doing it with the help of the importer tool in the SQLDeveloper, by doing a right-click onto the table and selecting import data. In the assistant everything seem to be fine, even the data preview.
But when I try to import, I get an error: "ORA-01722: Invalid number" and a couple of failure windows appear. These windows display a NullPointerException:
java.lang.NullPointerException
at oracle.dbtools.raptor.data.writers.DataTypeFormatterRegistry.getFormattor(DataTypeFormatterRegistry.java:42)
at oracle.dbtools.raptor.data.writers.ImportGenerator.getBatchForInsert(ImportGenerator.java:1837)
at oracle.dbtools.raptor.data.writers.ImportGenerator.access$1800(ImportGenerator.java:84)
at oracle.dbtools.raptor.data.writers.ImportGenerator$1.afterLoopProcessing(ImportGenerator.java:1125)
at oracle.dbtools.raptor.newscriptrunner.ScriptExecutorTask.loopThroughAllStatements(ScriptExecutorTask.java:220)
at oracle.dbtools.raptor.newscriptrunner.ScriptExecutorTask.doWork(ScriptExecutorTask.java:165)
at oracle.dbtools.raptor.data.writers.ImportGenerator$1.doWork(ImportGenerator.java:782)
If I cancel the task, the insert statements are displayed:
SET DEFINE OFF
--Einfügen für Zeilen 1 bis 2 nicht erfolgreich
--ORA-01722: Ungültige Zahl
--Zeile 1
INSERT INTO BUILDING_SURFACES (ID, AREA1, AREA2, AREA3, AREA4, AREA5) VALUES ('BLDG_0003000b002ea10f','674.6703459157907','4212.205538937771','674.6703459158016','5561.546230769363','2714.6367797576822');
--Zeile 2
As one can see, the numbers are quoted ('). If I delete the quotes by hand, the insert statement works correctly.
What can I do, so that the import tool does not quote the numbers?
Deleting the quotes with the help of a regular expression is not workaround, since a lot of error windows appear, which needs to be closed by hand...
Thanks in advance,
Richard
Some infos about my machine:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.10
Release: 12.10
Codename: quantal
$ java -version
java version "1.6.0_38"
Java(TM) SE Runtime Environment (build 1.6.0_38-b05)
Java HotSpot(TM) 64-Bit Server VM (build 20.13-b02, mixed mode)
Oracle SQL Developer 3.2.20.09
Version 3.2.20.09
Build MAIN-09.87
Copyright © 2005, 2012 Oracle. All Rights Reserved. Alle Rechte vorbehalten.
IDE Version: 11.1.1.4.37.59.48
Product ID: oracle.sqldeveloper
Product Version: 11.2.0.09.87
I solved my problem. :)
I changed the datatype of the area fields to "NUMBER" and edited my input file so that it is tab-separated with "," as the decimal sign.
Richard -
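That repair step can be scripted; a Python sketch, assuming (as in the sample line above) that each record is an id followed by five doubles in which the comma serves as both the field separator and the decimal sign:

```python
def fix_decimal_commas(line, n_numbers=5):
    """Rebuild a record like
        'id_1, 674,6703459157907, 4212,205538937771, ...'
    where ',' is both field separator and decimal sign. Assumes the
    first token is the id and each number then occupies exactly two
    comma-separated tokens (integer part, fractional part)."""
    tokens = [t.strip() for t in line.split(",")]
    row = [tokens[0]]
    for i in range(1, 2 * n_numbers, 2):
        row.append(tokens[i] + "." + tokens[i + 1])  # dot-decimal value
    return row

row = fix_decimal_commas(
    "id_1, 674,6703459157907, 4212,205538937771, 674,6703459158016,"
    " 5561,546230769363, 2714,6367797576822"
)
```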
No scientific notation in csv file
Hi, All
I have a package is extracting data from DB to CSV file.
And I made up a column like '48484848484848484', when I load this column into CSV file,
it shows me scientific notation along with the data.
How can I get rid of the scientific notation ?
I've tried data conversion, derived column. Nothing works.
This really makes me frustrated.
Thank you in advance.
Hi,
In that case, the SQL export process is fine.
It's just Excel suggesting a scientific notation where it's not needed.
You can get rid of the scientific notation by forcing your "long" numeric value into a string.
For example
"A001","Item code", "123376265892759026"
instead of
"A001", "Item code", 123376265892759026
Sebastian Sajaroff Senior DBA Pharmacies Jean Coutu
My data source is like:
SELECT '1234567891123' AS column1,
       Id AS column2,
       Name AS column3
FROM Table1
I think I already give column1 as a string, and I also tried casting column1 to nvarchar, varchar, decimal, and numeric; nothing works. -
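For reference, forcing every field out as a quoted string can be sketched with Python's csv module (hypothetical data; note that Excel may still coerce quoted digits when a .csv is double-clicked, so the import wizard or a text-format trick may still be needed):

```python
import csv
import io

def rows_to_quoted_csv(rows):
    """Serialize rows with every field quoted (QUOTE_ALL), keeping a
    long numeric ID such as "123376265892759026" as text rather than
    a bare number that Excel would show as 1.23376E+17."""
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
    for row in rows:
        writer.writerow([str(field) for field in row])
    return buf.getvalue()

csv_text = rows_to_quoted_csv([["A001", "Item code", "123376265892759026"]])
```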
Loop through a csv file and return the number of rows in it?
What would be simplest way to loop through a csv file and
return the number of rows in it?
<cffile action="read" file="#filename#" variable="csvstr">
<LOOP THROUGH AND COUNT ROWS>
ListLen(). Use chr(13) as your delimiter, e.g.:
<cfset rowCount = ListLen(csvstr, chr(13))>
-
Drag and Drop CSV file onto a table in Number
Hi everyone !
I really love the new version of iWork. However, I can't find a feature I was heavily using: dragging and dropping a CSV file into Numbers, which creates the table associated with that file.
Has anyone found a way to re-activate this feature? Or will I have to open a new Numbers spreadsheet each time I want to import a CSV?
Thanks for your help !
Vincent.
Jerrold Green1 wrote:
I have very few photos in my Contacts, so I saw the problem on my first try.
Exactly what happens here too when dragging from Contacts; I typically don't have photos there either. I submitted a "bug" report via Provide Numbers Feedback.
Do you still get misalignment of column headers when dragging or pasting csv or tsv (i.e. not from Contacts)? I've had pretty good results with csv and tsv here in v. 3.2, a welcome change from the earlier releases.
SG -
Reading a csv file with a large number of columns
Hello
I have been attempting to read data from large csv files with 38 columns by reading a line using readline and scanning the linebuffer using scan.
The file size can be up to 100 MB.
Scan does not seem to support the large number of fields.
Any suggestions on reading the 38 comma separated fields. There is one header line in the file.
Thanks
Solved!
Go to Solution.
See if strtok() is useful: http://www.elook.org/programming/c/strtok.html
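For comparison, in a language with a CSV tokenizer the column count is irrelevant; a Python sketch (the 38-column layout and single header line come from the question, everything else is assumed):

```python
import csv

def read_wide_csv(path):
    """Stream a large CSV record by record (never the whole 100 MB at
    once), skip the single header line, and split each record into its
    comma-separated fields no matter how many columns there are."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                   # one header line, per the question
        for row in reader:
            yield [float(x) for x in row]
```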
-
Decimal points appearing for number in flat file upload
Hello
I am trying to upload a .csv file to a DataSource. The field is a number, but when I look at it in the preview I get a decimal number with 3 decimal places.
Can you tell me what I am doing wrong?
Thank you
Ravi
Hello Durgesh,
Thank you for the reply.
I am new to BI. I am just trying to upload a flat file to a DataSource.
I have created an InfoObject with NUMC 4. The thing I am not able to understand is this: when uploading a file from the source system to the DataSource, my understanding is that there is no link between the field type used for the InfoObject and the .csv file data in the DataSource. Please correct me if I am wrong.
So even before creating the transformation, just by loading and looking at the preview of the data in the DataSource itself, I am seeing decimals for the number. I even tried to create the data file in Notepad and save it as .csv in order to avoid any complications with Excel.
Please advice.
Thanks
Ravi
Maybe you are looking for
-
How can we place a push button in ALV report
Hi, could anybody tell me how we can place a push button in an ALV report? Thanks. Regards, kals.
-
Hard Drive bottle neck. Running SRT thinking of switch to SSD
I did the "windows experience index" on windows 7 on my new custom build. I got 7.9 (which is the best score) on everything but the primary hard disk. I got a 5.9 score for the primary hard disk. I set up the drives with Intel's SRT with one 120 Pa
-
How to download a SOAP attachment using pl/sql
Gurus, I have a custom pl sql application that is web services based. The custom pl sql application invokes the external web services and updates Oracle Field Service application. One of the new requirements is to get the signatures from SOAP Attachm
-
URGENT question please help.
My question is i have an ipod video 30 gig and i was wondering if somehow i could use or update the software that is on a new 80 gig ipod to mine, so i could play and download other games. is that possible? thanks. Custom Built Windows XP Pro AMD
-
Crash After 10.4.4 update
Seems that after the last software update I keep getting crashes in from my ATI Card. Here is the log from the ATI Monitory.crash.log. Can anyone help. My system does not hard crash but the monitor hangs. I think it is just the graphics card driver c