Formatting issue in csv file

set echo off
--set line 999
set space 0
set feedback on
set term off
set linesize 32000
set pagesize 49999
set colsep ,
select ci.CIRC_HUM_ID as Circuit_id,
        cp.CIRC_PATH_HUM_ID as Associated_Path,
        ci.TYPE as Category,
        ci.bandwidth,
        ci.VENDOR,
        ci.STATUS,
        si.CLLI as A_Site,
        si.ADDRESS as A_Site_Address,
        si.CITY as A_Site_City,
        si.STATE_PROV as A_Site_State,
        si1.CLLI as Z_Site,
        si1.ADDRESS as Z_Site_Address,
        si1.CITY as Z_Site_City,
        si1.STATE_PROV as Z_Site_State
from telepac1.circ_path_inst cp
     ,telepac1.site_inst si
     ,telepac1.site_inst si1
     ,telepac1.circ_inst ci
where si.site_inst_id = cp.A_SIDE_SITE_ID
and si1.site_inst_id = cp.Z_SIDE_SITE_ID
and cp.CIRC_PATH_INST_ID = ci.CIRC_PATH_INST_ID (+);

I am getting correct data in CSV format, but for Z_Site_Address some of the cell values are spilling onto the next row:
Z_Site_Address
SBC
345 N San Joaquin

The data comes out like this for a few rows, but I want it like below:
Z_Site_Address
SBC 345 N San Joaquin

>
I am getting correct data in CSV format, but for Z_Site_Address some of the cell values are spilling onto the next row:
Z_Site_Address
SBC
345 N San Joaquin

That is because, as Jeneesh said, your data has embedded record delimiters. That data above may LOOK wrong, but it is EXACTLY what your column contained.
That often happens when an application uses a MEMO field that lets a user type in multiple lines for a long text description or, in your case, for an address.
If you don't want embedded record delimiters in your DB data, you should remove them when your app saves the data in the first place. If you convert those characters to spaces in the file and load the file again, you will NOT have the same content you had to begin with. That means the app field will display one long line of characters to the user.
That is the price you pay for using delimited files; by default any embedded delimiters will cause the created file to be unloadable.
Be aware that if you are creating a COMMA-delimited file, then any embedded commas can cause a problem as well unless the field is enclosed in double quotes. That is because when you read or try to load the file, those 'extra' commas will be interpreted as a FIELD separator.
For example an address like '123 Maple St, Apt 111' will appear to be TWO values: '123 Maple St' and 'Apt 111'.
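If you do not want to change the stored data, one option is to clean the values in the SELECT that builds the CSV. Here is a minimal sketch against the Z_Site columns of the query above, assuming the embedded record delimiters are carriage return (CHR(13)) and line feed (CHR(10)); wrapping the value in double quotes also protects any embedded commas:

-- Strip embedded CR/LF from the address and wrap the value in double
-- quotes so embedded commas do not split the field when the CSV is read.
select ci.CIRC_HUM_ID as Circuit_id,
       '"' || replace(replace(si1.ADDRESS, chr(13), ' '), chr(10), ' ') || '"'
           as Z_Site_Address
  from telepac1.circ_path_inst cp,
       telepac1.site_inst      si1,
       telepac1.circ_inst      ci
 where si1.site_inst_id     = cp.Z_SIDE_SITE_ID
   and cp.CIRC_PATH_INST_ID = ci.CIRC_PATH_INST_ID (+);

The same REPLACE (or a TRANSLATE) can be applied to A_Site_Address or any other free-text column in the full query.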

Similar Messages

  • Issue in CSV file attachment.

    Hi All,
    I am using the FM SO_DOCUMENT_SEND_API1 for sending mails in CSV format. This CSV file contains some Chinese script, but when I open the attachment I see junk characters instead. My SAP version is 4.6C.
    I searched all the forums and found these 2 solutions:
    1. OSS note 633265.
    2. Changing the character set to simplified and traditional Chinese.
    Neither helped.
    Suggestions are welcome.
    Regards,
      Dinesh.

    Hi Dinesh,
    again: never use FM SO_DOCUMENT_SEND_API1. Go for CL_BCS. Check the example programs
    BCS_EXAMPLE_5
    BCS_EXAMPLE_6
    BCS_EXAMPLE_7
    BCS_EXAMPLE_8
    Regards,
    Clemens

  • BEA 9.2 Portal issue: downloaded CSV file contains embedded html code

    We have a J2EE application using the BEA 9.2 Portal framework, and one of the pages has a feature to generate a report (in a pop-up window) in CSV file format. According to the previous developer, BEA 8.1 did not have this issue, but after migration to 9.2 they started getting a file download error (incomplete contents). To overcome this they commented out setting the content length on the HttpServletResponse, as attached below, but this now causes the HTML source of the parent page (where the submit button is clicked to generate the CSV report) to be rendered along with the actual report in the downloaded CSV file. Has anyone had this sort of issue? If so, can you please share your thoughts, or any thoughts in general?
    BEA 9.2 with Portal framework, JDK 1.5, JSP, Beehive NetUI, Sun Microsystems Solaris server
    Here is the source code that avoids setting the content length, and the reasoning behind it:
    private static void setResponseHeadersForCSVFile(HttpServletResponse response, String filename, int contentLength) {
    String mimeType = mimeTypes.getContentType(filename);
    response.reset();
    response.setContentType(mimeType);
    // DON'T explicitly set the content length, since the length of the String or StringBuffer that contains
    // the contents of the CSV file will be character encoded when it is actually written to the output stream, based
    // upon the character encoding of this JVM App Server's settings. So let the JVM App Server framework apply the character
    // encoding AND set the final and truly correct content length header at the time the contents of the String or StringBuffer are truly
    // streamed back to the user.
    //response.setContentLength(contentLength);
    response.addHeader("Content-Disposition", "attachment; filename=\"" + filename + "\"; size=" + contentLength);
    }

    1) Yes, the old content.tld is available as part of a web library module as a taglib.tld file. The module is: wlp-services-web-lib.war, which can be found in your bea/weblogic92/portal/lib/modules directory.
    2) The new API is accessible from the ContentManagerFactory class. This provides access to INodeManager, ITypeManager, ISearchManager, etc. The new API is contained within the com.bea.content.federated package. The 8.1.x API in the com.bea.content.manager package including RepositoryManager, NodeOps, SearchOps, etc. has been deprecated with 9.2.
    3) Yes, via the new I*Manager implementations. The entitlement support is for application-scoped visitor roles. Make sure you're using the ISearchManager when performing search operations. This will ensure secure results are returned.

  • Tidal Oracle DB Adapter: Can we out the CSV format as a CSV file and send it out in a mail?

    I am trying to run a SELECT query from an Oracle DB job in Tidal. The results of this query can be output in CSV format. But the requirement is for me to send out a CSV file, not just data in CSV format.
    I have this same requirement with XML output format as well. Can we send out a "Completed Normally" mail with an XML output of the result of the SELECT query?
    Thanks in advance.

    Hi,
    I have a jpeg movie file 60 minutes long and a text file that tells me five time-lines for breaking the movie. For example, break the movie at 10:21, 16:05…
    The final output should be five small jpeg movie files. Each file contains 90 sec (30 sec before the break time and 60 sec after the break time).
    Do you know any Java library (jar) that can help me program this? Any existing source code? Any SDK for a movie editor that can do this?
    Please help.
    Thanks
    Kenny

  • Wrong date format when import CSV files

    When you import a CSV file that contains fields with German date formats, these fields are displayed incorrectly.
    Example: the CSV file contains "01.01.14". After importing, the corresponding cell in Numbers has the content "40178".
    Reformatting the cell to a date format is not possible.
    How do I get the date in the new version of Numbers displayed correctly?

    It seems that there are more than a few problems related to import/export with non-US localizations.
    I, personally, don't have a solution to your problem. I started to adjust my Language & Region settings to test your problem, but there were several settings, I didn't get them right, and I didn't want to mess up my computer, so I set everything back to US/English.
    The only workarounds I can suggest are
    Insert a new column into your table and in it put a formula that adds the number to the date 01.01.1904.  Or,
    Edit the CSV in TextEdit to Replace All "." with "/".  This will work if "." is used for nothing else but these dates.
    I recommend the second one if it will work for you. Hopefully Apple is addressing problems such as the one you are seeing.

  • Format time in csv file

    Hi everyone.
    I have a CSV file whose first column is a timestamp, but I can't read it in LabVIEW because LabVIEW doesn't support the format.
    The information is in the attachment. Please help me read the values in the first column as a time, as in the red area on Fx.
    Thanks and Best Regards

    Try scanning your string like this.  There's a bug where Scan From String doesn't like to pick up the fractional seconds in a specifically formatted timestamp string, so I had to scan it separately and add it.
    Attachments:
    Get Timestamp.png (17 KB)

  • Issue loading CSV file into HANA

    Hi,
    For the last couple of weeks I have been trying to load my CSV file into a HANA table, but I have not succeeded.
    I am getting the error "Cannot open Control file, /dropbox/P1005343/CRM_OBJ_ID.CTL". I have followed each and every step on SDN, and still I could not load data into my HANA table.
    FTP: /dropbox/P1005343
    SQL Command:
    IMPORT FROM '/dropbox/P1005343/crm_obj_id.ctl'
    Error:
    Could not execute 'IMPORT FROM '/dropbox/P1005343/crm_obj_id.ctl''
    SAP DBTech JDBC: [2]: general error: Cannot open Control file, /dropbox/P1005343/crm_obj_id.ctl
    Please help me on this
    Regards,
    Praneeth

    Hi All,
    I have successfully uploaded the file to folder P443348 on the HANA server, but while importing the file I am getting the following error message:
    SAP DBTECH JDBC: [2]  (at 13) : general error: Cannot open Control file, "/P443348/shop_facts.ctl"
    This is happening while I am executing the following import statement
    IMPORT FROM '/P443348/shop_facts.ctl';
    I have tried several options, including changing the permissions of the folders and files, with no success. As of now my folder has full access (777).
    Any help would be greatly appreciated so that I can proceed further
    Thanks
    Vamsidhar
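
    A hedged sketch of an alternative, for reference: HANA can also import the CSV directly with IMPORT FROM CSV FILE, bypassing the control file. The schema and table names below are assumptions based on the post, and the path must be on the HANA server's own file system where the database process can read it; "Cannot open Control file" usually means the server cannot see or read the path given.
    -- Assumed schema/table "P443348"."SHOP_FACTS"; adjust to your model.
    IMPORT FROM CSV FILE '/P443348/shop_facts.csv'
    INTO "P443348"."SHOP_FACTS"
    WITH RECORD DELIMITED BY '\n'
         FIELD DELIMITED BY ',';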

  • Issue Loading CSV file using OPENROWSET

    Hi All,
    I have a CSV file that has a value like -200. When I try to open the file in SQL using OPENROWSET, that value is read as NULL.
    select * from
    OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Text;Database=C:\TEST\;',
    'SELECT * FROM ABC.csv')
    When I save the file as .xlsx and open it using the OPENROWSET syntax for an xlsx file, it shows the correct value.
    I am not sure if this is the behaviour of the file or of SQL. Can someone tell me the possible reasons for this?
    Thanks in advance.
    Raghav

    Check this link and sample:
    http://www.databasejournal.com/features/mssql/article.php/10894_3331881_2
    select * from OpenRowset('MSDASQL',
        'Driver={Microsoft Text Driver (*.txt; *.csv)};DefaultDir=C:\External;',
        'select top 6 * from MyCsv.csv')
    Best Wishes, Arbi; Please vote if you find this posting was helpful or Mark it as answered.
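
    A related point worth checking (an assumption, not something stated in the thread): the Text driver guesses each column's data type from a sample of rows, and values that do not match the guessed type can come back as NULL. Declaring the columns in a schema.ini file placed next to the CSV avoids the guessing; the column names and types below are only illustrative:
    -- Contents of C:\TEST\schema.ini (hypothetical column layout):
    --   [ABC.csv]
    --   Format=CSVDelimited
    --   ColNameHeader=True
    --   Col1=Id Text
    --   Col2=Amount Double
    -- With the schema in place, the original query can be run unchanged:
    select * from
    OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Text;Database=C:\TEST\;',
    'SELECT * FROM ABC.csv')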

  • Crystal Reports 9.2.0.448 Formatting and exporting issues on CSV file

    Hello,
    When exporting to CSV, Excel, or text from the Crystal Reports preview, content gets lost, i.e. Crystal cuts off parts of the CSV text.
    If I export to Word or PDF, the whole content is exported.
    Can anybody tell me how to fix this problem?
    Is this a known issue? Does a hotfix already exist? I couldn't find any.
    It would be great if someone could help me!
    Monika

    Hi Monika,
    No more fixes done to CR 9 since it's so old. Go to this link and download any patches you can find for CR 9:
    http://service.sap.com/sap/bc/bsp/spn/bobj_download/main.htm
    If that doesn't work, try increasing the field width or using the Can Grow option. If that doesn't work, then try downloading a trial version of CR XI R2 or CR 2008 and test again. I suggest you try this on a PC other than your main PC, just in case. We support side-by-side installs, but you never know what it may do depending on what else you have installed.
    Thank you
    Don

  • SQLLDR issue with csv file

    Hi All,
    I have a table with the following columns
    SYSTEM_SOURCE VARCHAR
    NATIONAL_IDENTIFIER VARCHAR
    HOURS NUMBER
    EARNING_CODE VARCHAR
    PAYROLL_NAME VARCHAR
    COST_CENTER VARCHAR
    BRAND VARCHAR
    WEEK_NUMBER NUMBER
    AMOUNT NUMBER
    STATUS VARCHAR
    DETAIL_FLAG VARCHAR
    My sqlldr script is as follows:
    LOAD DATA
    INFILE *
    APPEND INTO TABLE XXARG.XXARG_HR_TIME_SUMMARY_STG
    fields terminated by ","
    TRAILING NULLCOLS
    (SYSTEM_SOURCE CONSTANT 'RTI'
    ,NATIONAL_IDENTIFIER CHAR terminated by "," "substr(REPLACE(:NATIONAL_IDENTIFIER,'TC#'),1,3) || '-' || substr(REPLACE(:NATIONAL_IDENTIFIER,'TC#'),4,2) || '-' || substr(REPLACE(:NATIONAL_IDENTIFIER,'TC#'),6,4)"
    ,HOURS DECIMAL EXTERNAL terminated by "," NULLIF (HOURS=BLANKS)
    ,EARNING_CODE CHAR terminated by "," "trim(:EARNING_CODE)"
    ,PAYROLL_NAME CHAR terminated by ","
    ,COST_CENTER CHAR terminated by "," "'01'|| LPAD(TRIM(REPLACE(:COST_CENTER,'a')),5,'0')"
    ,BRAND CONSTANT 'ARG'
    ,WEEK_NUMBER DECIMAL EXTERNAL
    ,AMOUNT DECIMAL EXTERNAL NULLIF (AMOUNT=BLANKS)
    ,STATUS CONSTANT 'U'
    ,DETAIL_FLAG CONSTANT 'N')
    The data file is as follows:
    999999999 , 35.416, REG, RNO , 07157, 1, ,
    999999999 , 34.549, REG, RNO , 07157, 2, ,
    999999999 , , BON, RNO , 07157, 2, 25.00
    When I run sqlldr from my desktop (sqlldr 9i), it works fine. However, when I run sqlldr from the server (a Unix box), it fails for row 3:
    Record 3: Rejected - Error on table XXARG.XXARG_HR_TIME_SUMMARY_STG, column AMOUNT.
    ORA-01722: invalid number
    If I put a ',' at the end of line 3, it works fine on the server as well. Appreciate your help in resolving this issue.
    Thanks
    Venky

    What is the full database version (I suspect 10g+) ? In your control file, you have specified
    >
    fields terminated by ","
    >
    yet your last field in the third data row does not have this delimiter. 9i may have been forgiving and used the CR/LF as a separator, but the newer database version needs the delimiter.
    HTH
    Srini
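
    To make Srini's point concrete, here is the data section again with the missing delimiter added to the end of the third record; this is exactly the trailing ',' the poster found fixes the load on the server:
    999999999 , 35.416, REG, RNO , 07157, 1, ,
    999999999 , 34.549, REG, RNO , 07157, 2, ,
    999999999 , , BON, RNO , 07157, 2, 25.00,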

  • File upload - issue with European .csv file format

    All,
    When uploading the .csv file for "Due List for Planned Receipts" in the File Transfer Upload center, I receive an error. It appears to be due to the European .csv file format, which is delimited by a semicolon rather than a comma. The only way I could solve this issue was to change the Regional and Language options to English. However, I don't think this is a great solution, as I can't ask all our suppliers to change their settings. Has anyone come across this issue and found another way of solving it?
    Thank you!
    Have a good day,
    Johanna

    Igor thank you for your suggestion. 
    I found this SAP note:
    "If you download a file, and the formatting of the CSV file is faulty, it is possible that your column separator does not match the standard settings of the SAP system. In the standard SAP system, the separator is ','.
    To ensure that the formatting is correct, set your global default for column separation in your system so that it matches that of the SAP system you are using."
    To do that, Microsoft suggests changing the "List separator" in the Regional and Language Options Customize view. Though, as you suggest, that does not seem to do the trick. Am I missing something?
    However, if I change the whole setting from, say, German to English (UK), the .csv files are comma-delimited and can be easily uploaded. I was hoping there would be another way of solving this without the need for custom development.

  • CSV file generation issue

    Hello All,
    We are facing the issue below during CSV file generation:
    The generated file shows the field value as 8.73E+11 in the output, and when we click inside this column the result shown is an approximation of the correct value, like 873684000000. We wish to view the correct value, 872684000013.
    The values passed from the report program during this file generation are correct.
    Please advise how to resolve this issue.
    Thanks in Advance.

    There is nothing wrong in your program; it is a property of Excel that if the value in a cell is larger than the default size, it is displayed in that format.

  • CSV formatting issues

    I have tried different things but I still have formatting issues with CSV. I ran my query in Toad and it displays the comma-delimited values accurately. But when I run the report, the data is displayed in random cells. I am using vertical elasticity expand and horizontal elasticity fixed. Another report with the same input runs just fine. Any suggestions are welcome. Is there something I have to do in the layout editor or in the wizard?
    Thanks

    Hi,
    What version of Reports are you using? Are you running the report to extract the values into a file? Are you using the srw.run_report feature? If so, try setting delimiter=','.
    Martin
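
    For illustration, a hedged sketch of calling srw.run_report with a delimited output format; the report name, connect string, and output path below are placeholders, not from the thread, and DESFORMAT/DELIMITER are the parameters Martin is referring to:
    -- Hypothetical call from a report trigger or program unit; adjust the
    -- report name, credentials, and destination to your environment.
    srw.run_report('report=my_report.rdf userid=scott/tiger@orcl '
                || 'destype=file desname=/tmp/my_report.csv '
                || 'desformat=DELIMITED delimiter=","');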

  • Email CSV Files from ABAP in 4.6c

    Hi all,
    I've been trying to work out how to send an email from an ABAP program with a body of email text and a CSV file attachment. We are running 4.6C and have SAPconnect configured. I can send plain text emails but can't seem to attach a CSV file successfully. In my program I have set up an internal table which represents the CSV contents.
    The threads I have followed on SDN seem to point towards binary attachments, which I have tried but which do not seem to work as I want them to: there are formatting issues, or the file ends up having an extension which is not suitable.
    I want to be able to name the attachment appropriately ... e.g. filename.csv so that the receiver can open it directly in Excel or in other text file applications.
    Can anyone point me in the right direction for this ?
    Cheers,
    Gordon

    Hi Thomas, thanks for that. I'm starting to get the hang of the packing table, but I'm still struggling with the format of the file when it arrives in the email.
    If I set the DOC_TYPE field to 'TXT' for the attachment, the email now arrives with my data in a .txt file attachment; however, the CSV data which I had in my internal table is not presented in the file as it was in the table. In my table I_EXTRACT, each row is a CSV record I want to go into the file, but when it arrives the txt file has incorrect line breaks, so there is more than one record per line etc. It's as if all the data has been spewed out as one long string into the text file.
    Is there an extra step or escape character I need to add to my data to get SAP to add the line breaks correctly?
    My code for adding the attachment to the email data is as follows:
      LOOP AT i_extract.
        APPEND i_extract TO objtxt.
      ENDLOOP.
    * Create Message Attachment
    * Write Packing List (Attachment)
      att_type = 'TXT'.
      DESCRIBE TABLE objtxt LINES tab_lines.
      READ TABLE objtxt INDEX tab_lines.
      objpack-doc_size = ( tab_lines - 2 ) * 255 + STRLEN( objtxt ).
      clear objpack-transf_bin.
      objpack-head_start = 2.
      objpack-head_num = 0.
      objpack-body_start = 2.
      objpack-body_num = tab_lines.
      objpack-doc_type = att_type.
      objpack-obj_name = 'ATTACHMENT'.
      objpack-obj_descr = 'myfile'.
      APPEND objpack.

  • Different formats of the flat file for the same target

    In our deployment, we use plugin code to extract the CSV files in the required format. The customers are on the same version of the datamart, but they are on different versions of the source database, from 3.x to 4.5, depending on which version of the application they are using. In 4.0, we introduced a new column, email, in the user table in the source database. Accordingly, the plugin will add the field to the CSV file. But not all the customers will get the upgraded version of the plugin at the same time, so the ETL code needs to decide which data flow to process, depending on the format of the CSV file, to load data into the same target table. I made the email field in the target table nullable, but it still expects the same format of the CSV file, with a delimiter for the null value.
    I need help to achieve this. Can I read the structure of the flat file in DS, or get the count of delimiters, so that I can use a conditional to choose a different data flow based on the format of the flat file?
    Can I make the email column in the flat file optional?
    Thanks much in advance.

    You can add an email column that maps to NULL in a query transform for the source that does not contain this column.
    Or else you can define two different file formats that map to the same file: one with the column and one without.
