CSV file encoded as UTF-8 loses characters when displayed in Excel 2010

Hello everybody,
I have adapted a customer report to be able to send certain data via mail as a CSV attachment.
For that purpose I am using class cl_bcs.
Everything works fine, but since the mail attachment contains certain German characters such as Ü, those characters appear corrupted when the file is displayed in Excel.
It seems the problem is with Excel, because when I open the same file with Notepad, the Ü is there. If I import the file into Excel with the text import wizard, it is correct too.
Anyway, is there any solution to this problem?
I have tried concatenating byte_order_mark_utf8 at the beginning of the file, but Excel still does not recognize it.
Thanks in advance,
Pablo.
Edited by: katathema on Jan 31, 2012 2:05 PM

- Does MS Excel actually support UTF-8?
Yes. I believe we installed some international add-on which is not in the default installation. Anyway, other UTF-8 or UTF-16 files can be opened and viewed in Excel without any problem.
- Have you verified that the file is viewable as a UTF-8-encoded file?
I think so. If I open it in Notepad and choose "Save As", the file type shown is UTF-8.
- Try opening the file in a program you are confident supports UTF-8 - e.g. Mozilla...
I will try that.
- Check that your UTF-8-encoded file has a UTF-8 identifier (0xFEFF?) as the first character.
In the UTF-16 (LE or BE) files I got from the internet, I found there are always two bytes at the front (0xFEFF or 0xFFFE). My UTF-8 file generated by Java doesn't have that. But should a UTF-8 file also have this kind of special bytes at the front? If I manually add these bytes at the front of my file using UltraEdit and open it in Excel 2000, it doesn't help.
- Try using another spreadsheet program that supports UTF-8.
Do you know any other spreadsheet program that supports CSV files and UTF-8?
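For what it's worth, the UTF-8 signature Excel looks for is the three bytes 0xEF 0xBB 0xBF (the UTF-8 encoding of U+FEFF), not the two-byte UTF-16 mark. A minimal Java sketch of writing it ahead of the content; the file name and sample rows are placeholders:

    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class BomCsvWriter {
        public static void main(String[] args) throws Exception {
            OutputStream out = new FileOutputStream("report.csv");
            // UTF-8 BOM: Excel uses these three bytes to detect the encoding
            out.write(new byte[] {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF});
            Writer w = new OutputStreamWriter(out, "UTF-8");
            w.write("Name;City\n");
            w.write("Müller;Düsseldorf\n"); // umlauts should survive in Excel
            w.close(); // also closes the underlying stream
        }
    }

Opened by double-click, Excel 2007/2010 should then decode the umlauts correctly; without those three bytes it falls back to the ANSI code page.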

Similar Messages

  • Unable to set JVM file encoding to UTF-8 on Windows

    Hi,
    I am running Tomcat on the 1.5.0_05 JRE. I have tried several things to set the JVM file encoding to UTF-8 instead of the default Cp1252, but no luck yet.
    The most intuitive approach seems to be to use a JVM option like
    "-Dfile.encoding=UTF-8"
    but this does not seem to have any effect. I have a WinXP Pro machine. I saw some bug reports which seemed to indicate that changing the JVM file encoding is not an available feature... is that correct? I would really appreciate any help/pointers on this. I will post the solution if I find something in the meantime.
    Thanks,
    Sriram

    I failed to set it too. I think it would be better to separate file.encoding into two settings: one for the local OS, and one for compiling .java, .jsp, and similar files. Then we could change one without the other and there would be fewer bugs!
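    For what it's worth, file.encoding is read once at JVM startup and cached by the core libraries, so changing it afterwards has no reliable effect. The usual workaround is to stop depending on the default and name the charset explicitly at every byte/character boundary. A small sketch, sticking to the Java 1.5-era API from this thread (the file name is hypothetical):

        import java.io.FileOutputStream;
        import java.io.OutputStreamWriter;
        import java.io.Writer;

        public class ExplicitEncoding {
            public static void main(String[] args) throws Exception {
                // Whatever the default is, the writer below is unaffected by it
                System.out.println("file.encoding = " + System.getProperty("file.encoding"));
                Writer w = new OutputStreamWriter(new FileOutputStream("out.txt"), "UTF-8");
                try {
                    w.write("UTF-8 output regardless of the platform default\n");
                } finally {
                    w.close();
                }
            }
        }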

  • Shape file encoded in UTF-8

    Hi all,
    I want to import a shape file which is encoded in UTF-8. When I import the shape file with the help of Map Builder, I lose my Unicode characters.
    The NLS_NCHAR_CHARACTERSET value of my database is AL16UTF16.
    Any suggestion on how I can import these Unicode characters?

    If your CSV API offers the option of saving as UTF-8 (and it should), that would be the best way to go. Otherwise, you can use InputStreamReader and OutputStreamWriter to convert the file.
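    A minimal sketch of the InputStreamReader/OutputStreamWriter conversion suggested above, assuming the source happens to be Latin-1 (swap in the real charset names and file paths):

        import java.io.*;

        public class Recode {
            public static void main(String[] args) throws IOException {
                Reader in = new InputStreamReader(new FileInputStream("input.txt"), "ISO-8859-1");
                Writer out = new OutputStreamWriter(new FileOutputStream("output.txt"), "UTF-8");
                try {
                    // The reader decodes source bytes to characters,
                    // the writer re-encodes those characters as UTF-8
                    char[] buf = new char[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                } finally {
                    in.close();
                    out.close();
                }
            }
        }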

  • SQL bulk copy from csv file - Encoding

    Hi Experts
    This is the first time I am creating a PowerShell script, and it is almost working. I just have a problem with the encoding in the actual bulk import to SQL from the text file, since it replaces
    special characters with a question mark. I have set the encoding when creating the CSV file, but that does not seem to carry over to the actual bulk import. I have tried different scenarios for the encoding part, but I cannot find the proper solution.
    To briefly outline what the script does:
    Connect to Active Directory and fetch all users, excluding users in specific OUs
    Export all users to a CSV file in Unicode encoding
    Strip double-quote text qualifiers (if there is another way of handling that, it will be much appreciated)
    Clear all records in a temporary SQL table
    Import records from the CSV file into the temporary SQL table (this is where the encoding goes wrong)
    Update existing records in another table based on the records in the temporary table, and insert new records if not found.
    The script looks like the following (any suggestions for optimizing it are very welcome):
    # CSV file variables
    $path = Split-Path -parent "C:\Temp\ExportADUsers\*.*"
    $filename = "AD_Users.csv"
    $csvfile = $path + "\" + $filename
    $csvdelimiter = ";"
    $firstRowColumns = $true
    # Active Directory variables
    $searchbase = "OU=Users,DC=fabrikam,DC=com"
    $ADServer = 'DC01'
    # Database variables
    $sqlserver = "DB02"
    $database = "My Database"
    $table = "tblADimport"
    $tableEmployee = "tblEmployees"
    # Initialize
    Write-Host "Script started..."
    $elapsed = [System.Diagnostics.Stopwatch]::StartNew()
    # GET DATA FROM ACTIVE DIRECTORY
    # Import the ActiveDirectory Module
    Import-Module ActiveDirectory
    # Get all AD users not in specified OU's
    Write-Host "Retrieving users from Active Directory..."
    $AllADUsers = Get-ADUser -server $ADServer `
    -searchbase $searchbase -Filter * -Properties * |
    ?{$_.DistinguishedName -notmatch 'OU=MeetingRooms,OU=Users,DC=fabrikam,DC=com' `
    -and $_.DistinguishedName -notmatch 'OU=FunctionalMailbox,OU=Users,DC=fabrikam,DC=com'}
    Write-Host "Users retrieved in $($elapsed.Elapsed.ToString())."
    # Define labels and get specific user fields
    Write-Host "Generating CSV file..."
    $AllADUsers |
    Select-Object @{Label = "UNID";Expression = {$_.objectGuid}},
    @{Label = "FirstName";Expression = {$_.GivenName}},
    @{Label = "LastName";Expression = {$_.sn}},
    @{Label = "EmployeeNo";Expression = {$_.EmployeeID}} |
    # Export CSV file and remove text qualifiers
    Export-Csv -NoTypeInformation $csvfile -Encoding Unicode -Delimiter $csvdelimiter
    Write-Host "Removing text qualifiers..."
    (Get-Content $csvfile) | foreach {$_ -replace '"'} | Set-Content $csvfile
    Write-Host "CSV file created in $($elapsed.Elapsed.ToString())."
    # DATABASE IMPORT
    [void][Reflection.Assembly]::LoadWithPartialName("System.Data")
    [void][Reflection.Assembly]::LoadWithPartialName("System.Data.SqlClient")
    $batchsize = 50000
    # Delete all records in AD import table
    Write-Host "Clearing records in AD import table..."
    Invoke-Sqlcmd -Query "DELETE FROM $table" -Database $database -ServerInstance $sqlserver
    # Build the sqlbulkcopy connection, and set the timeout to infinite
    $connectionstring = "Data Source=$sqlserver;Integrated Security=true;Initial Catalog=$database;"
    $bulkcopy = New-Object Data.SqlClient.SqlBulkCopy($connectionstring, [System.Data.SqlClient.SqlBulkCopyOptions]::TableLock)
    $bulkcopy.DestinationTableName = $table
    $bulkcopy.bulkcopyTimeout = 0
    $bulkcopy.batchsize = $batchsize
    # Create the datatable and autogenerate the columns
    $datatable = New-Object System.Data.DataTable
    # Open the text file from disk
    $reader = New-Object System.IO.StreamReader($csvfile)
    $columns = (Get-Content $csvfile -First 1).Split($csvdelimiter)
    if ($firstRowColumns -eq $true) { $null = $reader.readLine()}
    Write-Host "Importing to database..."
    foreach ($column in $columns) {
        $null = $datatable.Columns.Add()
    }
    # Read in the data, line by line
    while (($line = $reader.ReadLine()) -ne $null) {
        $null = $datatable.Rows.Add($line.Split($csvdelimiter))
        $i++
        if (($i % $batchsize) -eq 0) {
            $bulkcopy.WriteToServer($datatable)
            Write-Host "$i rows have been inserted in $($elapsed.Elapsed.ToString())."
            $datatable.Clear()
        }
    }
    # Add in all the remaining rows since the last clear
    if ($datatable.Rows.Count -gt 0) {
        $bulkcopy.WriteToServer($datatable)
        $datatable.Clear()
    }
    # Clean Up
    Write-Host "CSV file imported in $($elapsed.Elapsed.ToString())."
    $reader.Close(); $reader.Dispose()
    $bulkcopy.Close(); $bulkcopy.Dispose()
    $datatable.Dispose()
    # Sometimes the Garbage Collector takes too long to clear the huge datatable.
    [System.GC]::Collect()
    # Update tblEmployee with imported data
    Write-Host "Updating employee data..."
    $queryUpdateUsers = "UPDATE $($tableEmployee)
    SET $($tableEmployee).EmployeeNumber = $($table).EmployeeNo,
    $($tableEmployee).FirstName = $($table).FirstName,
    $($tableEmployee).LastName = $($table).LastName,
    FROM $($tableEmployee) INNER JOIN $($table) ON $($tableEmployee).UniqueNumber = $($table).UNID
    IF @@ROWCOUNT=0
    INSERT INTO $($tableEmployee) (EmployeeNumber, FirstName, LastName, UniqueNumber)
    SELECT EmployeeNo, FirstName, LastName, UNID
    FROM $($table)"
    try
    Invoke-Sqlcmd -ServerInstance $sqlserver -Database $database -Query $queryUpdateUsers
    Write-Host "Table $($tableEmployee) updated in $($elapsed.Elapsed.ToString())."
    catch
    Write-Host "An error occured when updating $($tableEmployee) $($elapsed.Elapsed.ToString())."
    Write-Host "Script completed in $($elapsed.Elapsed.ToString())."

    I can see that Export-Csv exports ANSI even though the encoding has been set to Unicode. Thanks for leading me in the right direction.
    No, it exports as Unicode if told to.
    Your export was wrong and is exporting nothing. Look closely at your code:
    This line exports nothing in Unicode:
    Export-Csv -NoTypeInformation $csvfile -Encoding Unicode -Delimiter $csvdelimiter
    There is no input object.
    This line converts any file to ANSI:
    (Get-Content $csvfile) | foreach {$_ -replace '"'} | Set-Content $csvfile
    Set-Content defaults to ANSI, so the output file is converted (pass -Encoding Unicode to Set-Content if you keep this step).
    Since you are just dumping into a table by manually building a recordset, why not go direct? You do not need a CSV. Just dump the results of the query to a datatable.
    https://gallery.technet.microsoft.com/scriptcenter/4208a159-a52e-4b99-83d4-8048468d29dd
    This script dumps to a datatable object which can now be used directly in a bulkcopy.
    Here is an example of how easy this is using your script:
    $AllADUsers = Get-ADUser -server $ADServer -searchbase $searchbase -Filter * -Properties GivenName,SN,EmployeeID,objectGUID |
        Where-Object {
            $_.DistinguishedName -notmatch 'OU=MeetingRooms,OU=Users,DC=fabrikam,DC=com' -and
            $_.DistinguishedName -notmatch 'OU=FunctionalMailbox,OU=Users,DC=fabrikam,DC=com'
        } |
        Select-Object @{N='UNID';E={$_.objectGuid}},
            @{N='FirstName';E={$_.GivenName}},
            @{N='LastName';E={$_.sn}},
            @{N='EmployeeNo';E={$_.EmployeeID}} |
        Out-DataTable
    $AllADUsers is now a DataTable. You can just upload it.
    ¯\_(ツ)_/¯

  • How to set File Encoding to UTF-8 On Save action in JDeveloper 11G R2?

    Hello,
    I am facing an issue when modifying a file using JDeveloper 11g R2: JDeveloper changes the encoding of the file to the system default encoding (ANSI) instead of UTF-8. I have set the encoding to UTF-8 in the "Tools | Preferences | Environment | Encoding" option and restarted JDeveloper. I have also set the "Project Properties | Compiler | Character Encoding" option to UTF-8. Neither works.
    I am using the following version of JDeveloper:
    Oracle JDeveloper 11g Release 2 11.1.2.3.0
    Studio Edition Version 11.1.2.3.0
    Product Version: 11.1.2.3.39.62.76.1
    I created a file in UTF-8 encoding. I opened it, made some changes, and saved it.
    When I open the "Properties" tab via the "Help | About" menu, I can see that the properties of JDeveloper show the encoding as Cp1252. Is that related?
    Properties
    sun.jnu.encoding
    Cp1252
    file.encoding
    Cp1252
    Any idea how to make sure JDeveloper always saves the file in UTF-8?
    - Sujay

    I have already done that. It is the first thing I did, as mentioned in my thread. I have also added the two options below to jdev.conf and restarted JDeveloper, but that did not work either.
    AddVMOption -Dfile.encoding=UTF-8
    AddVMOption -Dsun.jnu.encoding=UTF-8
    - Sujay

  • Dynamic CSV file name in target (multiple workflows calling the same dataflow with a new global variable value)

    Hi there,
    I have multiple data flows where 90% of the processing is the same. The differences are in the source query WHERE clause and the target flat file.
    I used global variables to dynamically change the query WHERE clause easily, but I need help dynamically changing the target flat file (CSV file).
    What I want to do is have multiple workflows, each of which first sets the global variable to a new value in the script box and then calls the same data flow.
    Please let me know if you have any solution or idea that might point me toward one.
    thank you,

    Hi Raj - in your content conversion for the line item, read the additional attribute as well.
    Change your source structure and add the field company_code.
    As you are already sending the filename for each line item using the UDF, you just need to modify your UDF to take another input, i.e. the company code.
    or
    If your PI version is > 7.1, use a graphical variable to hold the filename,
    and while sending the company code information for every item, just use the concat function to append the filename and company code:
    http://scn.sap.com/people/william.li/blog/2008/02/13/sap-pi-71-mapping-enhancements-series-using-graphical-variable

  • Single PDFs lose text when merged into one

    I have 24 single PDF files made from Word documents. They are
    all fine until I merge them into one PDF file. Then several of them lose text.
    I have remade the single files after deleting the old ones and recreated the merge; still the same problem.
    Upgraded my Adobe, cleared old files, rebooted; nothing works.

    Use the PDF Optimizer to remove the embedded fonts, then use the Preflight capability to re-embed them. The issue is caused by the different font subsets that exist in different parts of the PDF.

  • File encoding with UTF-8

    Hello all,
    My scenario is IDoc -> XI -> File (txt).
    Everything was working fine until I had to handle an Eastern European language with unusual symbols.
    So in my receiver file adapter I am using the file encoding UTF-8, and when I look at my fields in the output, everything is fine.
    BUT when I look at the binary, the length of these fields is no longer fixed, because a special character takes 2 bytes instead of one.
    I would like to know if it is possible to handle those characters with the UTF-8 file encoding in a fixed-length field of 40 characters, for example; I don't want a variable length for my fields...
    Thanks by advance,
    JP

    I agree with you. In XI I don't have this problem; I have it in my output file when I edit the text file in binary mode!
    My fields should be 40 characters, but the special symbols which take 2 bytes instead of 1 make the length of my output fields variable!!!
    My question was whether there is a way to have a fixed length in my output file.
    Sorry if I wasn't clear in my first post.
    JP
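    That behaviour is inherent to UTF-8: field width in characters and field width in bytes diverge as soon as a non-ASCII character appears, so a 40-character field only stays a fixed 40 bytes in a single-byte encoding such as ISO-8859-1. A tiny Java illustration:

        public class ByteLength {
            public static void main(String[] args) throws Exception {
                String field = "Ü";
                // One character, but two bytes in UTF-8 and one byte in Latin-1
                System.out.println(field.length());                      // 1
                System.out.println(field.getBytes("UTF-8").length);      // 2
                System.out.println(field.getBytes("ISO-8859-1").length); // 1
            }
        }

    So to keep fixed-width records under UTF-8, padding or truncation has to happen at the byte level after encoding, or the file has to use a single-byte encoding that covers the character set.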

  • How to set the system default file character encoding to UTF-8?

    Hi all. This is driving me nuts, on both my Windows box and Snow Leopard; I figure there is a much better chance of finding the answer for OS X.
    My language and locale are set to Australian English. $LANG=en_AU.UTF-8
    However, as I believe is expected, OS X (and Windows for that matter) will create files by default with character encoding of Cp1252 (Latin-1). That is, the FILE encoding in the file metadata - the Byte Order Mark I believe. The file itself, not the characters written to it.
    This, in a word, bites. I don't want to be restricted to only ASCII by default, and it is causing me problems with certain software (a Firefox plugin) that creates text files, passing in UTF-8 encoded content, which is then mangled because the file encoding itself is still Cp1252. (I know, I've tested this by changing the file encoding manually and having it overwritten again by the plugin: works correctly.)
    As a simple example, just `touch somefile` from terminal creates a file in Cp1252 -- I'm obtaining that info by opening in jEdit by the way (anyone know of something better?).
    In other locales that are not English-based, I believe the default file encoding is UTF-8. But surely this can be controlled independently? There must be a system configuration value somewhere that specifies the default file encoding. Can someone please tell me what it is?
    Thanks!

    However, as I believe is expected, OS X (and Windows for that matter) will create files by default with character encoding of Cp1252 (Latin-1). That is, the FILE encoding in the file metadata - the Byte Order Mark I believe. The file itself, not the characters written to it.
    Apps like TextEdit and Mail have settings that let you determine the encoding of text produced. The default would normally depend on the character content of the file, ranging from ASCII for basic English to Windows Latin-1 (Win 1252) or ISO Latin-1 (ISO 8859-1) to UTF-8 for other content.
    Win 1252 is not ASCII, but has twice the number of characters of the latter.
    A Byte Order Mark is something totally different - it's a particular character used to signal certain encodings.
    http://en.wikipedia.org/wiki/Byte_order_mark
    As a simple example, just `touch somefile` from terminal creates a file in Cp1252 -- I'm obtaining that info by opening in jEdit by the way (anyone know of something better?).
    For what Terminal does and how to change it, it might best to post in the Unix forum:
    http://discussions.apple.com/forum.jspa?forumID=735
    For problems with a FireFox plugin, it might be good to ask on their own forums as well.

  • How to display data from local csv files (in a folder on my desktop) in my flex air application using a datagrid?

    Hello, I am very new to Flex and don't have a programming background. I am trying to create an AIR app with Flex that looks at a folder on the user's desktop where CSV files will be dropped by the user. In the AIR app the user will be able to browse and look for a specific CSV file in a List container; once selected, the information from that file should be displayed in a DataGrid below. Finally, I will be using AlivePDF to create a PDF from the information in this DataGrid, laid out in an invoice format. Below is the source code for my app as a visual reference; it only has the containers, with no working code. I have also attached a sample CSV file so you can see what I am working with. Can this be done? How do I do this? Please help.
    <?xml version="1.0" encoding="utf-8"?>
    <mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" width="794" height="666">
        <mx:Label x="280" y="19" text="1. Select Purchase Order"/>
        <mx:List y="45" width="232" horizontalCenter="0"></mx:List>
        <mx:Label x="158" y="242" text="2. Verify Information"/>
        <mx:DataGrid y="268" height="297" horizontalCenter="0" width="476">
            <mx:columns>
                <mx:DataGridColumn headerText="Column 1" dataField="col1"/>
                <mx:DataGridColumn headerText="Column 2" dataField="col2"/>
                <mx:DataGridColumn headerText="Column 3" dataField="col3"/>
            </mx:columns>
        </mx:DataGrid>
        <mx:Label x="355" y="606" text="3. Generated PDF"/>
        <mx:Button label="Click Here" horizontalCenter="0" verticalCenter="311"/>
    </mx:WindowedApplication>

    Open the file, parse it, populate an ArrayCollection or XMLListCollection, and make the collection the DataGrid dataProvider:
    http://livedocs.adobe.com/flex/3/html/help.html?content=Filesystem_08.html
    http://livedocs.adobe.com/flex/3/html/help.html?content=12_Using_Regular_Expressions_01.html
    http://livedocs.adobe.com/flex/3/html/help.html?content=dpcontrols_6.html
    http://livedocs.adobe.com/flex/3/langref/mx/collections/ArrayCollection.html
    http://livedocs.adobe.com/flex/3/langref/mx/collections/XMLListCollection.html
    If this post answered your question or helped, please mark it as such.

  • Issue in CSV file attachment.

    Hi All,
    I am using the FM SO_DOCUMENT_SEND_API1 for sending mails in CSV format. This CSV file contains some Chinese script, but when I open the attachment I can see some junk characters instead. My SAP version is 4.6C.
    I searched all the forums and found these 2 solutions:
    1. OSS note 633265.
    2. Changing the character set to Simplified and Traditional Chinese.
    Neither helped.
    Suggestions are welcome.
    Regards,
      Dinesh.

    Hi Dinesh,
    again: never use FM SO_DOCUMENT_SEND_API1. Go for CL_BCS. Check the example programs
    BCS_EXAMPLE_5
    BCS_EXAMPLE_6
    BCS_EXAMPLE_7
    BCS_EXAMPLE_8
    Regards,
    Clemens

  • .CSV file support in Microsoft Office

    Hi,
    I am trying to download a CSV file in Japanese from a Web Dynpro application, where the CSV file is generated at runtime. But the Japanese characters come out distorted, so can anyone suggest the reason for this kind of error?
    Is it due to the version of Microsoft Office?
    Thanks,
    Vijayalaxmi

    Hi
    I do not quite understand what your problem is, but when working with Excel inplace you will need to look at the security settings in Excel. You need to make sure that "Trust access to Visual Basic Project" is checked. Also take a look at OSS Note 696069.
    Regards
    Morten Nielsen

  • How can I read millions of records and write them as a *.csv file

    I have to return some set of column values (based on the current date) from the database (could be millions of records). DBMS_OUTPUT can accommodate only 20,000 records. (I am retrieving through a procedure using a cursor.)
    I should write these values to a file with the extension .csv (comma-separated file). I thought of using UTL_FILE, but I heard there is some restriction on the number of records even in UTL_FILE.
    If so, what is the restriction? Is there any other way I can achieve it (BLOB or CLOB)?
    Please help me in solving this problem.
    I have to write to the .csv file the values from the cursor, which I have concatenated with ","; for now it returns the values to the screen (using DBMS_OUTPUT, temporarily). I have to redirect the output to the .csv file,
    and the .csv should be in some physical directory from which I have to upload (FTP) the file to the website.
    Please help me out.

    Jimmy,
    Make sure that UTL_FILE is properly installed; make sure that the utl_file_dir parameter is set in the init.ora file and that the database has been restarted so that it takes effect; and make sure that you have sufficient privileges granted directly, not through roles, including privileges on the file and directory that you are trying to write to. Then add the exception block below to your procedure to narrow down the source of the exception, and test again. If you still get an error, please post a cut and paste of the exact code that you ran and any messages that you received.
    exception
        when utl_file.invalid_path then
            raise_application_error(-20001,
                'INVALID_PATH: File location or filename was invalid.');
        when utl_file.invalid_mode then
            raise_application_error(-20002,
                'INVALID_MODE: The open_mode parameter in FOPEN was invalid.');
        when utl_file.invalid_filehandle then
            raise_application_error(-20002,
                'INVALID_FILEHANDLE: The file handle was invalid.');
        when utl_file.invalid_operation then
            raise_application_error(-20003,
                'INVALID_OPERATION: The file could not be opened or operated on as requested.');
        when utl_file.read_error then
            raise_application_error(-20004,
                'READ_ERROR: An operating system error occurred during the read operation.');
        when utl_file.write_error then
            raise_application_error(-20005,
                'WRITE_ERROR: An operating system error occurred during the write operation.');
        when utl_file.internal_error then
            raise_application_error(-20006,
                'INTERNAL_ERROR: An unspecified error in PL/SQL.');

  • How can I email using UTL_SMTP with a csv file as an attachment?

    Dear All,
    It would be great if someone could help me. I am trying to use UTL_SMTP to send email with a CSV file as an attachment. I do get an email with a message, but no attachment arrives with it.
    In fact, the code used for attaching the CSV file gets appended to the message body of the email.
    CREATE OR REPLACE PROCEDURE test_mail
    AS
    SENDER constant VARCHAR2(80) := '[email protected]';
    MAILHOST constant VARCHAR2(80) := 'mailhost.xxxx.ac.uk';
    mail_conn utl_smtp.connection;
    lv_rcpt VARCHAR2(80);
    lv_mesg VARCHAR2(9900);
    lv_subject VARCHAR2(80) := 'First Test Mail';
    lv_brk VARCHAR2(2) := CHR(13)||CHR(10);
    BEGIN
    mail_conn := utl_smtp.open_connection(mailhost, 25) ;
    utl_smtp.helo(mail_conn, MAILHOST) ;
    dbms_output.put_line('Sending Email to : ' ||lv_brk||'Suhas Mitra' ) ;
    lv_mesg := 'Date: '||TO_CHAR(sysdate,'dd Mon yy hh24:mi:ss')||lv_brk||
    'From: <'||SENDER||'>'||lv_brk||
    'Subject: '||lv_subject||lv_brk||
    'To: '||'[email protected]'||lv_brk||
    'MIME-Version: 1.0'||lv_brk||
    'Content-type:text/html;charset=iso-8859-1'||lv_brk||
    ' boundary="-----SECBOUND"'||
    ''||lv_brk||
    '-------SECBOUND'||
    'Some Message'
              || lv_brk ||
    '-------SECBOUND'||
              'Content-Type: text/plain;'|| lv_brk ||
              ' name="xxxx.csv"'|| lv_brk ||
              'Content-Transfer_Encoding: 8bit'|| lv_brk ||
              'Content-Disposition: attachment;'|| lv_brk ||
              ' filename="xxxx.csv"'|| lv_brk ||
              lv_brk ||
    'CSV,file,attachement'|| lv_brk ||     -- Content of attachment
    lv_brk||
    '-------SECBOUND' ;
    dbms_output.put_line('lv_mesg : ' || lv_mesg) ;
    utl_smtp.mail(mail_conn, SENDER) ;
    lv_rcpt := '[email protected]';
    utl_smtp.rcpt(mail_conn, lv_rcpt) ;
    utl_smtp.data(mail_conn, lv_mesg) ;
    utl_smtp.quit(mail_conn);
    EXCEPTION
    WHEN utl_smtp.transient_error OR utl_smtp.permanent_error THEN
    NULL ;
    WHEN OTHERS THEN
    dbms_output.put_line('Error Code : ' || SQLCODE) ;
    dbms_output.put_line('Error Message : ' || SQLERRM) ;
    utl_smtp.quit(mail_conn) ;
    END;

    LKBrwn_DBA wrote:
    Use UTL_MAIL instead.
    That package is an utter disappointment - and an excellent example, IMO, of how not to design an application programming interface. Even the source code is shoddy... I mean, having to resort to a GOTO statement...!?? The person(s) who wrote that package are sorely lacking in even the most basic programming skills if structured programming is ignored and a spaghetti command used instead.
    No wonder the public interface of that code is equally shabby and thoughtless... The mail demo code posted by Oracle was better written than this package they now have bundled as the official Mail API.
    I dunno... if I was in product management there would have been hell to pay over pushing cr@p like that to customers.
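    As for the original problem: the header declares Content-type: text/html instead of multipart/mixed, the boundary parameter is glued on without a proper header line, and each boundary needs its own line with a blank line before the part body. A sketch of the MIME layout the message would need (boundary kept from the original, content abbreviated; this is illustrative, not tested against that server):

        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="-----SECBOUND"

        -------SECBOUND
        Content-Type: text/plain; charset=iso-8859-1

        Some Message

        -------SECBOUND
        Content-Type: text/csv; name="xxxx.csv"
        Content-Transfer-Encoding: 8bit
        Content-Disposition: attachment; filename="xxxx.csv"

        CSV,file,attachement

        -------SECBOUND--

    Note that the closing boundary carries two trailing hyphens, and Content-Transfer-Encoding is spelled with hyphens, not an underscore as in the posted code.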

  • How to import csv file with multiple tables into sql server

    I have multiple CSV files, each of which has one sheet but 130 headers, with each header section containing different data.
    I'd like to import each of these header sections with its data into its own file in SQL Server.
    I know very basic SSIS but am not familiar with the scripting in it, which is what I assume I'd have to use.
    Each header in the CSV file is structured as such (also see example pic):
    first header:
        ITEM = ORG_V
        DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
        column names
        data
    second header:
        ITEM = TER_V
        DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
        column names
        data
    The headers can be at any random row number, and the amount of data in each file differs, but each section starts with "ITEM ="
    and then in the next row "DATE =".
    I could also convert these to Excel files if it makes this process easier.

    Why don't you put a filter on D3, filter out the blanks, copy/paste to a new CSV file, save it, and import it?
    There's no way you're going to get SQL to do that kind of thing for you. The language is for set-based operations, not for complex data manipulation tasks.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
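    If a programmatic route is acceptable, splitting the file outside SQL first is straightforward: start a new output file every time a line beginning with "ITEM =" appears, then bulk-load each piece. A hedged Java sketch (file names are hypothetical):

        import java.io.*;

        public class SplitSections {
            public static void main(String[] args) throws IOException {
                BufferedReader in = new BufferedReader(new FileReader("multi.csv"));
                PrintWriter out = null;
                int section = 0;
                String line;
                while ((line = in.readLine()) != null) {
                    // Each "ITEM =" line marks the start of a new section
                    if (line.trim().startsWith("ITEM =")) {
                        if (out != null) out.close();
                        section++;
                        out = new PrintWriter(new FileWriter("section_" + section + ".csv"));
                    }
                    if (out != null) out.println(line);
                }
                if (out != null) out.close();
                in.close();
            }
        }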
