Reading a CSV file with PL/SQL

Hello:
I have a CSV file provided to me by a client. We want to insert the data from this file into a table and perform other processing.
Is it possible to read a CSV file from within PL/SQL? If so, how do I begin?
Thanks.

Nawneet wrote:
A CSV file can be loaded using SQL*Loader;
go through the link below:
http://www.orafaq.com/wiki/SQL*Loader_FAQ
SQL*Loader is a client-side utility, though, and can't be invoked from within PL/SQL.
Better are external tables:
http://www.morganslibrary.org/reference/externaltab.html
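For example, here is a minimal external table sketch; the directory path, table names and three-column layout are illustrative assumptions - the real column list must match the client's file:

CREATE OR REPLACE DIRECTORY csv_dir AS '/data/incoming';

CREATE TABLE client_csv_ext
( col1 VARCHAR2(100),
  col2 VARCHAR2(100),
  col3 VARCHAR2(100)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('client_file.csv')
)
REJECT LIMIT UNLIMITED;

-- The file can then be queried and loaded from plain PL/SQL:
BEGIN
  INSERT INTO target_table (c1, c2, c3)
  SELECT col1, col2, col3 FROM client_csv_ext;
  COMMIT;
END;
/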

Similar Messages

  • Reading a csv file with a large number of columns

    Hello
    I have been attempting to read data from large CSV files with 38 columns by reading a line using readline and scanning the line buffer using scan.
    The file size can be up to 100 MB.
    Scan does not seem to support the large number of fields.
    Any suggestions on reading the 38 comma-separated fields? There is one header line in the file.
    Thanks

    See if strtok() is useful: http://www.elook.org/programming/c/strtok.html

  • Read a csv file and update sql db

    I'm a newbie to C#. I've put together the code below - see the two commented lines where the for loop starts. I need advice as to the correct approach; you may insert pseudo-code in appropriate places. Thanks in advance.
    Below is my code:
    public class MailingConfirmationService
    {
        private SqlConnection _con = new SqlConnection(ConfigurationManager.AppSettings.Get("ConnectionString"));
        public void ProcessFilesForMailingConfirmation()
        {
            // Process file(s)
            IResource resourceFile = ResourceFactory.GetResourceFile(ResourceList.FileLocation);
            DirectoryInfo dataFiles = new DirectoryInfo(resourceFile.GetString("filesourcelocation"));
            if (dataFiles.GetFiles().Length > 0)
            {
                log.Info("File Processing started");
                foreach (FileInfo dataFile in dataFiles.GetFiles("*.csv"))
                {
                    log.Info("Processing Data File: " + dataFile.Name);
                    if (dataFile != null && dataFile.Name.Contains("IN"))
                    {
                        foreach (string sLine in File.ReadLines(dataFile))  // see the reply below
                        {
                            // TODO: build the SqlCommand 'cmd' from sLine here
                            _con.Open();
                            cmd.ExecuteNonQuery();
                            _con.Close();
                        }
                    }
                }
            }
        }
    }

    The line
    foreach (string sLine in File.ReadLines(dataFile))
    should be
    foreach (string sLine in File.ReadLines(dataFile.FullName))
    It's expecting a path as the parameter.
    See also this example of enumerating files and their lines:
    https://msdn.microsoft.com/en-us/library/dd383503%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
    Fouad Roumieh

  • Read a csv file in a PL/SQL routine with the path as a parameter

    Hello Oracle community,
    System: Windows 2008, Oracle 11.1 database.
    I have created a directory in the database and there is no problem reading a CSV file from there. Since the files can be in different places, my Java developer is asking whether he can pass me the path of the file as a parameter. As far as I know I need a directory object to open the file with the UTL_FILE package, but I cannot create directories for folders that will only be created in the future. Is there any way to solve this problem, so that I can use the path as a parameter in my routine?
    Ikrischer

    BluShadow wrote:
    Ikrischer wrote:
    As I said before, they are dynamic. As usual for most companies, you get new partners and you lose partners, so new folders will be added and old ones will be deleted. If you use different directories for different folders, you have more work handling them.
    I wouldn't call that dynamic, I would call that business maintenance. Dynamic would mean that the same company is putting their files into different folders each time or on a regular basis. If it's one folder per company then that's not dynamic.
    The folder structure is changing; I call that dynamic, it does not look static to me. But that's just a point of view, and it is not important what we call it. Let's just agree that the folder structure will change in the future, no matter how many folders one partner will have.
    BluShadow wrote:
    So you're receiving files and you don't even know the format of them? Some of them are not "correct"? Specify that the company must provide the files in the correct format or they will be rejected. Computers work on known logic in most cases, not some random guessing game.
    We know the correct format, but we also know that the partner won't always send the correct format, period. Because of that we want to deliver an error log: records 11, 15 and 20 are wrong because of... That is a very useful service.
    BluShadow wrote:
    The folders should be known.
    Impossible, we do not know the future.
    BluShadow wrote:
    The file format should be known.
    See above: it has a fixed format, but that's no guarantee you get the correct format. For me that's a normal situation, it happens every day: you have fixed rules and they will be broken.
    BluShadow wrote:
    The moment you allow external forces to start specifying things how they want it then you're shooting yourself in the foot. That's not good design and not good business practice.
    Maybe we are talking about different things. We do not allow data in a wrong format to be loaded into our system, but we do allow partners to send us a wrong format, and even wrong content - and still we don't load it into our system. Format and content must be correct to be loaded into the real system, and we check both. And with external tables you cannot check a wrong format with a detailed error log.
    Ikrischer
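    One hedged approach, assuming the schema has been granted CREATE ANY DIRECTORY directly: (re)point a fixed directory object at the caller-supplied path with dynamic SQL. A minimal sketch with illustrative names:

    CREATE OR REPLACE PROCEDURE read_csv_from_path (
        p_path     IN VARCHAR2,   -- OS path supplied by the Java side
        p_filename IN VARCHAR2
    ) AS
        v_handle UTL_FILE.FILE_TYPE;
        v_line   VARCHAR2(4000);
    BEGIN
        -- requires a directly granted CREATE ANY DIRECTORY privilege;
        -- validate p_path before trusting it
        EXECUTE IMMEDIATE 'CREATE OR REPLACE DIRECTORY csv_dyn_dir AS '''
                          || REPLACE(p_path, '''', '''''') || '''';
        v_handle := UTL_FILE.FOPEN('CSV_DYN_DIR', p_filename, 'r');
        LOOP
            BEGIN
                UTL_FILE.GET_LINE(v_handle, v_line);
            EXCEPTION
                WHEN NO_DATA_FOUND THEN EXIT;
            END;
            NULL;  -- parse and check v_line here
        END LOOP;
        UTL_FILE.FCLOSE(v_handle);
    END read_csv_from_path;
    /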

  • Unable to load a CSV file with a comma inside a column (SQL Server 2008)

    Hi,
    I am not able to load a CSV file that has a comma inside a column.
    Here is a sample file (the first row contains the column names):
    _id,qp,c
    "1","[ ""0"", ""0"", ""0"" ]","helloworld"
    "1","[ ""0"", ""0"", ""0"" ]","helloworld"
    When I specify the text qualifier as " (double quote) it works in SQL Server 2012, whereas it fails in SQL Server 2008 with the error:
    TITLE: Microsoft Visual Studio
    The preview sample contains embedded text qualifiers ("). The flat file parser does not support embedding text qualifiers in data. Parsing columns that contain data with text qualifiers will fail at run time.
    BUTTONS:
    OK
    I need to do this in SQL Server 2008 R2 with Service Pack 2; my build version is 10.50.1600.1.
    Can someone let me know whether it is possible to handle this the same way as in SQL Server 2012?
    Or was it resolved in any cumulative update after 10.50.1600.1?
    Regards, Harsh

    Hello,
    If you have a CSV with double quotes inside double quotes and SSIS 2008, I suggest:
    In your data flow, use a Script Component as Source, then define the output columns, e.g. for "Output 0":
    Id - four-byte signed integer
    gp - Unicode string
    cc - Unicode string
    Do not use a flat file connection; instead use a user variable in which you store the name of the flat file (this could be inside a Foreach File loop).
    The user variable is supplied as a read-only variable argument to the Script Component in your data flow.
    In the Script Component's CreateNewOutputRows, use a System.IO.StreamReader with the user variable name to read the file, and use the output buffer's AddRow method to add each line to the output of the Script Component.
    Between the ReadLine call and the AddRow call you have to add your code to extract the 3 column values.
    public override void CreateNewOutputRows()
    {
        // Add rows by calling AddRow on the member variable named "<Output Name>Buffer",
        // e.g. MyOutputBuffer.AddRow() if your output was named "MyOutput".
        string line;
        System.IO.StreamReader file = new System.IO.StreamReader(Variables.CsvFilename);
        while ((line = file.ReadLine()) != null)
        {
            // System.Windows.Forms.MessageBox.Show(line);  // debugging only
            if (!line.StartsWith("_id,qp,c"))  // skip the header line
            {
                Output0Buffer.AddRow();
                string[] mydata = Yourlineconversionhere(line);  // your own parsing routine
                Output0Buffer.Id = Convert.ToInt32(mydata[0]);
                Output0Buffer.gp = mydata[1];
                Output0Buffer.cc = mydata[2];
            }
        }
        file.Close();
    }
    Jan D'Hondt - SQL server BI development

  • Problem reading csv file with special character

    Hi all,
    I have the following problem reading a CSV file.
    442050-Operations Tilburg algemeen     Huis in  t Veld, EAM (Lisette)     Gebruikersaccount     461041     Peildatum: 4-5-2010     AA461041                    1     85,92
    When reading this line with FM GUI_UPLOAD, the line is split into two lines in DATA_TAB of the FM;
    it is split at this character:
    Line 1
    442050-Operations Tilburg algemeen     Huis in
    Line 2
    t Veld, EAM (Lisette)     Gebruikersaccount     461041     Peildatum: 4-5-2010     AA461041                    1     85,92
    Does anyone have an idea how to get this into one line in my internal table?
    Greetz Richard

    Hi Richard,
    Probably the character contains the same binary code as line feed + carriage return.
    You can use the statement below as a workaround:
    OPEN DATASET file FOR INPUT IN TEXT MODE ENCODING UNICODE.
    In this case your system must support Unicode encoding.
    Kind regards

  • How to read columns of a Microsoft Excel .csv file with LabVIEW?

    This should be simple. Any ideas on how to read an Excel file with a .csv extension using LabVIEW?
    Thanks!

    Use the Read From Spreadsheet File VI, which you can find on the File I/O palette.
    "Read From Spreadsheet File
    Reads a specified number of lines or rows from a numeric text file, beginning at a specified character offset, and converts the data to a 2D, single-precision array of numbers. You optionally can transpose the array. The VI opens the file before reading from it and closes it afterwards. You can use this VI to read a spreadsheet file saved in text format. This VI calls the Spreadsheet String to Array function to convert the data."
    A CSV file is essentially a text file which uses commas as delimiters in the file's structure.

  • Data formatting and reading a CSV file without using Sqlloader

    I am reading a CSV file into an Oracle table called sps_dataload. The table is structured based on the record type given at the beginning of each record in the CSV file. The first two lines of the file are not to be loaded into the table because of their format.
    Question #1:
    How can I skip reading the first two lines of my CSV file?
    Question #2:
    There are more fields in the CSV file than there are columns in my table. I know I can add FILLER as an option, but there are about 150-odd comma-separated fields in the file and my table has only 8 columns to load from the file. So do I really have to use FILLER 140-odd times in my script, or is there a better way to do this?
    Question #3:
    This is more of an extension of my question above. The CSV file has fields with embedded quotes - I know this can be achieved in SQL*Loader with OPTIONALLY ENCLOSED BY '"'. But can this be done in the insert as written in the code below?
    I am trying to find the "wrap code" button in my post, but do not see it.
    Here's my file layout -
    Heres my file layout -
    PROSPACE SCHEMATIC FILE
    ; Version 2007.7.1
    Project,abc xyz Project,,1,,7,1.5,1.5,1,1,0,,0,1,0,0,0,0,3,1,1,0,1,0,0,0,0,2,3,1,0,1,0,0,0,0,3,3,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
    Subproject,25580-1005303.pst,,102,192,42,12632256,1,1,102,192,42,1,12632256,0,6,1,0,32896,1,0,0,0,0,,,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,0,1,1,,,,,,1
    Segment, , , 0, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, , , , , , , , , , , 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, , , 1
    Product,00093097000459,26007,2X4 MF SF SD SOLR,,28.25,9.5,52.3, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.60,0,0,0,0,00,-1,0
    Product,00093097000329,75556,"22""X22"" BZ CM DD 1548",,27,7,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,345.32
    Product,00093097000336,75557,"22""X46"" BZ CM XD 48133",,27,7.5,51, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,0
    Product,00093097134833,75621,"22""X22"" BZ CM/YT DD 12828",,27,9,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,1
    This is my table structure -
    desc sps_dataload;
    File_Name    Varchar2(50) Not Null,
    Record_Layer Varchar2(20) Not Null,
    Level_Id     Varchar2(20),
    Desc1        Varchar2(50),
    Desc2        Varchar2(50),
    Desc3        Varchar2(50),
    Desc4        Varchar2(50)
    Here's my code to do this -
    create or replace procedure insert_spsdataloader(p_filepath IN varchar2,
    p_filename IN varchar2,
    p_Totalinserted IN OUT number) as
    v_filename varchar2(30) := p_filename;
    v_filehandle UTL_FILE.FILE_TYPE;
    v_startPos number; --starting position of a field
    v_Pos number; --position of string
    v_lenstring number; --length of string
    v_record_layer varchar2(20);
    v_level_id varchar2(20) := 0;
    v_desc1 varchar2(50);
    v_desc2 varchar2(50);
    v_desc3 varchar2(50);
    v_desc4 varchar2(50);
    v_input_buffer varchar2(1200);
    v_delChar varchar2(1) := ',';
    v_str varchar2(255);
    BEGIN
    v_Filehandle :=utl_file.fopen(p_filepath, p_filename, 'r');
    p_Totalinserted := 0;
    LOOP
    BEGIN
    UTL_FILE.GET_LINE(v_filehandle,v_input_buffer);
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    EXIT;
    END;
    -- this will read the 1st field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,1);
    v_lenString := v_Pos - 1;
    v_record_layer := substr(v_input_buffer,1,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 2nd field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,2);
    v_lenString := v_Pos - v_startPos;
    v_desc1 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 3rd field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,3);
    v_lenString := v_Pos - v_startPos;
    v_desc2 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 4th field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,4);
    v_lenString := v_Pos - v_startPos;
    v_desc3 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 5th field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,5);
    v_lenString := v_Pos - v_startPos;
    v_desc4 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    -- insert via bind variables rather than concatenated literals
    v_str := 'insert into sps_dataload values (:1, :2, :3, :4, :5, :6, :7)';
    execute immediate v_str using v_filename, v_record_layer, v_level_id, v_desc1, v_desc2, v_desc3, v_desc4;
    p_Totalinserted := p_Totalinserted + 1;
    commit;
    END LOOP;
    UTL_FILE.FCLOSE(v_filehandle);
    EXCEPTION
    WHEN UTL_FILE.INVALID_OPERATION THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20051, 'sps_dataload: Invalid Operation');
    WHEN UTL_FILE.INVALID_FILEHANDLE THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20052, 'sps_dataload: Invalid File Handle');
    WHEN UTL_FILE.READ_ERROR THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20053, 'sps_dataload: Read Error');
    WHEN UTL_FILE.INVALID_PATH THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20054, 'sps_dataload: Invalid Path');
    WHEN UTL_FILE.INVALID_MODE THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20055, 'sps_dataload: Invalid Mode');
    WHEN UTL_FILE.INTERNAL_ERROR THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20056, 'sps_dataload: Internal Error');
    WHEN VALUE_ERROR THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20057, 'sps_dataload: Value Error');
    WHEN OTHERS THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE;
    END insert_spsdataloader;
    /
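    For Question #1 above, a minimal sketch of skipping the first two lines inside the UTL_FILE loop (a simple line counter; directory and file names are illustrative):

    DECLARE
        v_handle UTL_FILE.FILE_TYPE := UTL_FILE.FOPEN('DATALOAD', 'sps_dataload.csv', 'r');
        v_line   VARCHAR2(1200);
        v_lineno PLS_INTEGER := 0;
    BEGIN
        LOOP
            BEGIN
                UTL_FILE.GET_LINE(v_handle, v_line);
            EXCEPTION
                WHEN NO_DATA_FOUND THEN EXIT;
            END;
            v_lineno := v_lineno + 1;
            IF v_lineno > 2 THEN
                NULL;  -- parse and insert v_line here
            END IF;
        END LOOP;
        UTL_FILE.FCLOSE(v_handle);
    END;
    /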

    Justin, thanks. I did change my PL/SQL procedure to use UTL_FILE.GET_LINE and to adjust the INSTR calls based on the position of ',' in the file, but my procedure is getting really big and too complex to debug. So I got motivated to use external tables or SQL*Loader as plan B.
    As I was reading more about external tables as an efficient way to do this, I came to believe I can perhaps build an external table with my varying selection from the file. But I am still unclear whether I can construct my external table by choosing different fields in a record based on a record identifier string value (which is the first field of any record). I guess I can, but I am looking for the construct: how am I going to use the INSTR function for selecting the fields from the file while creating the table?
    PROSPACE SCHEMATIC FILE
    ; Version 2007.7.1
    Project,abc xyz Project,,1,,7,1.5,1.5,1,1,0,,0,1,0,0,0,0,3,1,1,0,1,0,0,0,0,2,3,1,0,1,0,0,0,0,3,3,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
    Subproject,25580-1005303.pst,,102,192,42,12632256,1,1,102,192,42,1,12632256,0,6,1,0,32896,1,0,0,0,0,,,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,0,1,1,,,,,,1
    Segment, , , 0, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, , , , , , , , , , , 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, , , 1
    Product,00093097000459,26007,2X4 MF SF SD SOLR,,28.25,9.5,52.3, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.60,0,0,0,0,00,-1,0
    Product,00093097000329,75556,"22""X22"" BZ CM DD 1548",,27,7,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,345.32
    Product,00093097000336,75557,"22""X46"" BZ CM XD 48133",,27,7.5,51, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,0
    Product,00093097134833,75621,"22""X22"" BZ CM/YT DD 12828",,27,9,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,1
    For example, if I want to create an external table like this -
    CREATE TABLE extern_sps_dataload
    ( record_layer            VARCHAR2(20),
      attr1                   VARCHAR2(20),
      attr2                   VARCHAR2(20),
      attr3                   VARCHAR2(20),
      attr4                   VARCHAR2(20)
    )
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY dataload
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE
        BADFILE     dataload:'sps_dataload.bad'
        LOGFILE     dataload:'sps_dataload.log'
        DISCARDFILE dataload:'sps_dataload.dis'
        SKIP 2
        VARIABLE 2 FIELDS TERMINATED BY ','
        OPTIONALLY ENCLOSED BY '"' LRTRIM
        MISSING FIELD VALUES ARE NULL
        LOAD WHEN RECORD_LAYER = 'PROJECT' (FIELD2, FIELD3, FIELD7, FIELD9)
        LOAD WHEN RECORD_LAYER = 'PRODUCT' (FIELD3, FIELD4, FIELD8, FIELD9)
        LOAD WHEN RECORD_LAYER = 'SEGMENT' (FIELD1, FIELD2, FIELD4, FIELD5)
      )
      LOCATION ('sps_dataload.csv')
    )
    REJECT LIMIT UNLIMITED;
    While I was reading the external table documentation, I thought I could achieve something similar using the position_spec option, but I cannot get behind its parameters. The LOAD WHEN ... FIELDS ... lines above are the part I think I am going to use, but I am not sure of the construct.
    Thank you for your help!! Appreciate your thoughts on this..
    Sanders.
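    One hedged alternative to LOAD WHEN, sketched with illustrative names: expose the leading fields generically in the external table (trailing fields in each record that are not mapped to a column are ignored) and derive the per-layer attributes in a view:

    CREATE TABLE extern_sps_all
    ( record_layer VARCHAR2(20),
      field1 VARCHAR2(50), field2 VARCHAR2(50), field3 VARCHAR2(50),
      field4 VARCHAR2(50), field5 VARCHAR2(50), field6 VARCHAR2(50),
      field7 VARCHAR2(50), field8 VARCHAR2(50), field9 VARCHAR2(50)
    )
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY dataload
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE
        SKIP 2
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LRTRIM
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('sps_dataload.csv')
    )
    REJECT LIMIT UNLIMITED;

    -- pick the fields per record type in SQL (extend the CASEs as needed):
    CREATE OR REPLACE VIEW sps_dataload_v AS
    SELECT record_layer,
           CASE record_layer WHEN 'Project' THEN field2
                             WHEN 'Product' THEN field3
                             WHEN 'Segment' THEN field1 END AS attr1,
           CASE record_layer WHEN 'Project' THEN field3
                             WHEN 'Product' THEN field4
                             WHEN 'Segment' THEN field2 END AS attr2
    FROM extern_sps_all;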

  • How to import data from CSV file with columns separated by semicolon?

    I am migrating a database from MS SQL 2008 to Oracle 11g.
    I exported the data to CSV files from MS SQL,
    then I try to import them into Oracle.
    Several tables went fine using the Import Data option in SQL Developer:
    standard CSV files with data separated by commas were imported, and
    chars, dates (with a format string) and integer data are imported via the import wizard without problems.
    The problems came when I tried to import a table with non-integer numbers whose decimal part is separated by a comma, not a dot.
    The comma is the standard column separator in a CSV file,
    so I must change the standard separator to a semicolon;
    but then the import wizard has trouble recognizing the columns correctly, because it uses only the standard CSV comma separator :-/
    In SQL Developer 1.5.3, under Tools -> Preferences -> Migration -> Data Move Options,
    I changed "End of Column Delimiter" to ";" but it doesn't work.
    Is it possible to change the standard column separator for the Import Data wizard in SQL Developer 1.5.3?
    Or maybe someone knows how to import data in SQL Developer 1.5.3 from a CSV whose column separator is a semicolon?

    A new preference has been added in the main code line to customize the import delimiter. This should be available as part of a future release.
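    In the meantime, one workaround outside the wizard (a hedged sketch; directory, table and column names are illustrative) is an external table that declares the semicolon delimiter, reading the decimal-comma numbers as text and converting them on SELECT:

    CREATE TABLE csv_semicolon_ext
    ( id    NUMBER,
      name  VARCHAR2(100),
      price VARCHAR2(30)   -- decimal comma: read as text, convert below
    )
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ';'
        OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('export.csv')
    )
    REJECT LIMIT UNLIMITED;

    SELECT id, name,
           TO_NUMBER(price, '999999990D99', 'NLS_NUMERIC_CHARACTERS='',.''') AS price
    FROM   csv_semicolon_ext;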

  • How to read a Zip file in PL/SQL program

    Hi,
    I was reading a ".csv" file from a web URL, e.g. "www.aba.com/zxc.csv", in my PL/SQL procedure using the UTL_HTTP package, and loading the data into my DB.
    Now that file is in .zip format. Is there any way I can still read that .csv.zip file through my PL/SQL procedure? Because now I have to download it first, then unzip it, and then load it into my DB using the UTL_FILE package; instead I want to do it automatically, as I was doing before the file was zipped.
    Thanks & Regards
    Sanjay

    Peter's solution is a great, nice alternative - you should consider it.
    But your present solution reads the data from the web using UTL_HTTP into a CLOB, I presume? So you are not hitting the file system at all?
    If going to the file system is not an option for you, then it is possible that you can use the [url http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/u_compr.htm#BGBBCDDI]UTL_COMPRESS package.
    That is (maybe) a possibility if the zip file only contains one file - it cannot handle a zip archive with several files in it.
    You could try doing your UTL_HTTP load into a BLOB and then call UTL_COMPRESS.LZ_UNCOMPRESS to unzip that BLOB into another BLOB.
    I'm not guaranteeing it will work, but it may be worth a try if you cannot do it Peter's way ;-)
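    A minimal sketch of that idea, assuming the content really is a single LZ-compressed stream and reusing the URL from the question (untested, per the caveat above):

    DECLARE
        v_req  UTL_HTTP.REQ;
        v_resp UTL_HTTP.RESP;
        v_raw  RAW(32767);
        v_zip  BLOB;
        v_csv  BLOB;
    BEGIN
        DBMS_LOB.CREATETEMPORARY(v_zip, TRUE);
        v_req  := UTL_HTTP.BEGIN_REQUEST('http://www.aba.com/zxc.csv.zip');
        v_resp := UTL_HTTP.GET_RESPONSE(v_req);
        BEGIN
            LOOP
                UTL_HTTP.READ_RAW(v_resp, v_raw, 32767);
                DBMS_LOB.WRITEAPPEND(v_zip, UTL_RAW.LENGTH(v_raw), v_raw);
            END LOOP;
        EXCEPTION
            WHEN UTL_HTTP.END_OF_BODY THEN
                UTL_HTTP.END_RESPONSE(v_resp);
        END;
        -- only works for a single compressed stream, not a multi-file archive
        v_csv := UTL_COMPRESS.LZ_UNCOMPRESS(v_zip);
        -- v_csv now holds the raw CSV bytes; convert chunks with
        -- UTL_RAW.CAST_TO_VARCHAR2 and parse/load as before
    END;
    /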

  • Working With Two External Files With T-SQL Commands

    Hi friends,
    In a VSTO Excel Add-In of mine I have come across the need to use T-SQL commands, before bringing the dataset onto the active sheet, on two tables in the shape of external CSV files. However, using SQL Express in this regard requires a connection to a server. With Visual Studio as well as SSMS installed on my laptop I am able to use the said facility, but I am unable to distribute such an add-in to other team members who have neither of the studios installed nor access to any server.
    Is there any way to use similar features on the users' side?
    In other words, how can I develop a macro in which I could treat two CSV files as two different tables, as in a database, and carry out T-SQL functions/commands on them, such as:
    UNION of the first three columns from both the tables;
    creating a temporary table with the DISTINCT sets of the said three columns; and
    then carrying out LEFT JOINs between the new table and the earlier two tables;
    to derive a simple comparative dataset?
    Thanx in advance, Best Regards, Faraz A Qureshi

    Hi Faraz,
    As with Excel files, you can use the ODBC text driver to connect to CSV files. As long as the user's machine has the ODBC driver installed, you can use an OdbcConnection to connect to the CSV files.
    Sample code to manipulate CSV files with the ODBC driver (C#):
    string connectionString = @"Driver={Microsoft Text Driver (*.txt; *.csv)}; Dbq=C:\; Extensions=asc,csv,tab,txt;";
    using (OdbcConnection conn = new OdbcConnection(connectionString))
    {
        conn.Open();
        string sql = "SELECT u.UserName, u.Age, c.Phone, c.Address FROM user.csv u LEFT JOIN contact.csv c ON u.Id = c.UserId";
        using (OdbcCommand cmd = new OdbcCommand(sql, conn))
        {
            // e.g. fill a DataTable with an OdbcDataAdapter and put it on the sheet
        }
    }
    But if you want to publish the Add-In to the users' machines, you still need to install the .NET Framework and the VSTO runtime to support the Add-In.

  • Read a CSV file and dynamically generate the insert

    I have a requirement where multiple CSVs need to be exported to a SQL table. So far, I am able to read the CSV file and generate the insert statement dynamically for selected columns; however, the insert statement, when passed as a parameter to $cmd.CommandText, does not evaluate the values.
    How do I evaluate the string in PowerShell?
    Import-Csv -Path $FileName.FullName | % {
        # Insert statement.
        $insert = "INSERT INTO $Tablename ($ReqColumns) Values ('"
        $valCols = ''
        $DataCols = ''
        $lists = $ReqColumns.split(",")
        foreach ($l in $lists)
        {
            $valCols = $valCols + '$($_.' + $l + ')'','''
        }
        # Generate the values statement
        $DataCols = ($DataCols + $valCols + ')').replace(",')", "")
        $insertStr = @("INSERT INTO $Tablename ($ReqColumns) Values ('$($DataCols))")
        # The above statement generates the following insert statement:
        # INSERT INTO TMP_APPLE_EXPORT (PRODUCT_ID,QTY_SOLD,QTY_AVAILABLE) Values (' $($_.PRODUCT_ID)','$($_.QTY_SOLD)','$($_.QTY_AVAILABLE)' )
        $cmd.CommandText = $insertStr  # does not evaluate the values
        # If the same statement is passed as below then it executes successfully:
        # $cmd.CommandText = "INSERT INTO TMP_APL_EXPORT (PRODUCT_ID,QTY_SOLD,QTY_AVAILABLE) Values (' $($_.PRODUCT_ID)','$($_.QTY_SOLD)','$($_.QTY_AVAILABLE)' )"
        # Execute query
        $cmd.ExecuteNonQuery() | Out-Null
    }
    jyeragi

    Hi Jyeragi,
    To convert the data to the SQL table format, please try this out-sql function:
    out-sql Powershell function - export pipeline contents to a new SQL Server table
    If I have any misunderstanding, please let me know.
    Best Regards,
    Anna
    TechNet Community Support

  • Is there a way to open CSV files with more than 255 columns?

    I have a CSV file with more than 255 columns of data. It's a fairly standard export of social media data that shows volume of posts by day for the past year, from which I can analyze the data and publish customized charts. This is very easy in Excel, but I'm hitting the Numbers limit of 255 columns per table. Is there a way to work around the limitation? Perhaps splitting the CSV in two? The data shows up in the CSV file when I open it via TextEdit, so it's there; I just can't access it in Numbers. And it's not very usable/useful for me in TextEdit.
    Regards,
    Tim

    You might be better off with Excel. Even if you could find a way to easily split the CSV file into two tables, it would be two tables when you want only one.  You said you want to make charts from this data.  While a series on a chart can be constructed from data in two different tables, to do so takes a few extra steps for each series on the chart.
    For a test to see if you want to proceed, make two small tables with data spanning the tables and make a chart from that data. Make the chart the normal way using the data in the first table, then repeat the following steps for each series:
    Select the series in the chart
    Go to Format sidebar
    Click in the "Value" box
    Add a comma, then select the data for this series from the second table
    Press Return
    If there is an easier way to do this, maybe someone else will chime in with that info.

  • Automatically inserting jpegs and corresponding text descriptions from a .csv file, one pair per page, into an InDesign document

    I have a .csv file with the name of a jpeg in one column and a text description of each jpeg in a second column. Is there a way to automatically insert one jpeg (photo) and its corresponding text, each pair on one page, into an InDesign document?

    I would recommend writing the description into the metadata. This would allow you to place a text frame above the image, and it is possible to add the meta information and the file name automatically together with the image when you place it, or even in a prepared template.
    Metadata can be written easily in Bridge in the Metadata workspace.

  • Reading a CSV file from server

    Hi All,
    I am reading a CSV file from the server, and my internal table has only one field, with length 200. In the input CSV file there is more than one column, and after splitting the file my internal table should have the same number of rows as the input record has columns.
    But when I do that, the last field in the internal table is appended with #.
    Can somebody tell me the solution for this?
    You can see my code below.
    data: begin of itab_infile occurs 0,
             input(3000),
          end of itab_infile.
    data: begin of itab_rec occurs 0,
             record(200),
          end of itab_rec.
    data: c_comma(1) value ','.
    open dataset f_name1 for input in text mode encoding default.
    if sy-subrc <> 0.
      write: /, 'FILE NOT FOUND'.
      exit.
    endif.
    do.
      read dataset f_name1 into itab_infile-input.
      if sy-subrc <> 0.
        exit.
      endif.
      split itab_infile-input at c_comma into table itab_rec.
    enddo.
    close dataset f_name1.
    Thanks in advance.
    Sunil

    Sunil,
    You do not mention the platform on which the CSV file was created nor the platform on which it is read.
    A common problem with CSV files created on MS/Windows platforms and read on Unix is the end-of-record (EOR) characters.
    MS/Windows uses <CR><LF> as the EOR.
    Unix uses just <LF>.
    If on Unix, open the file using vi in a telnet session to confirm the EOR type.
    The fix options:
    1) Before opening the file in your ABAP program, run the Unix command dos2unix.
    2) Transfer the file from the MS/Windows platform to Unix via FTP in ascii mode, not bin. This does the dos2unix conversion on the fly.
    3) Install SAMBA and share the load directory to the Windows platforms. SAMBA also handles the dos2unix and unix2dos conversions on the fly.
    Hope this helps
    David Cooper
