Extra characters when writing to file

Hello,
I'm currently working on a datalogging system using Windows XP and LabVIEW 8.2. Our program monitors CAN messages, saves values to local variables, and records to a text file based on several inputs from the front panel (file location, ms delay, and write ratio of fast-update to slow-update parameters). In the interest of saving disk space, some variables are only written once every "Write Ratio" times the fast variables are recorded.
The file creation works fine, and the write to file works fairly well, but I get garbage characters at the beginning of most lines. You can see in the attached text file there is an extra "J" before my header (not a big deal) but there is also an extra character at the beginning of each line. Usually it's a square, but on lines which will eventually write all the variables it fluctuates between a space, ", or !. I've also gotten files with a * or ). Any idea why I'm getting these? My gut instinct says it's caused by the Write Counter >= Write Ratio Case structure or the EOL constant in the main concatenate strings. I've tried changing both the EOL constant and the false case write string with the same problems.
While I've got the sample posted, is there a better way to implement my traveled distance function? The sequence structure seems...inelegant in this case. I could post my entire prototype VI if that would help, but I'm a bit secretive of its contents and simultaneously ashamed at its ham-handedness.
Thanks
Mark
Attachments:
Write Problems1.vi ‏68 KB
phev_0225_1336Drive1.txt ‏4 KB

Tbob,
Did you try saving the .txt file to your hard drive and then opening it from there in Notepad? Doing an open-in-new-window from the browser gave me a blank screen, but saving the file and opening it in Notepad worked and looked like the attached.
Mark,
I don't know about the J.  But at the beginning of most lines in the .txt file I see a square. Doing a copy and paste into a string constant in LabVIEW, then converting it from normal display to hex display, it is the hex character 13, which is decimal 19, a Ctrl-S, or DC3. Maybe that will help you find the problem.
One thing that is kind of odd about your write VI is that you are converting all the numbers to strings, but you are feeding that into a Write to Binary file VI rather than to a Write to Text file VI.  I bet that may be the source of the problem.
Edit:  This may also explain why your "text" files aren't quite showing up in the browser.
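For reference, a small sketch of how to repeat that check outside LabVIEW (this is only an illustration; the attached file name is used as a placeholder): it prints the first few characters of each line in hex, so a stray 0x13 (DC3) stands out.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class LinePrefixDump {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("phev_0225_1336Drive1.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                StringBuilder hex = new StringBuilder();
                // show the first few characters of the line in hex
                for (int i = 0; i < Math.min(4, line.length()); i++) {
                    hex.append(String.format("%02X ", (int) line.charAt(i)));
                }
                System.out.println(hex + "  " + line);
            }
        }
    }
}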
Attachments:
phev Drive1.PNG ‏30 KB

Similar Messages

  • Extra characters at end of file

    I am getting extra characters (carriage returns, I think) at the end of these CLOBs using the source below.
    This only happens when PL/SQL sends small CLOBs. We are building XML files from PL/SQL for file system use.
    Any input is appreciated.
    Thanks
    create or replace and compile java source named sjs.write_CLOB as
    import java.io.*;
    import java.sql.*;
    import java.math.*;
    import oracle.sql.*;
    import oracle.jdbc.driver.*;
    public class write_CLOB extends Object
    {
        public static void pass_str_array(oracle.sql.CLOB p_in, java.lang.String f_in)
        throws java.sql.SQLException, IOException
        {
            File target = new File(f_in);
            FileWriter fw = new FileWriter(target);
            BufferedWriter out = new BufferedWriter(fw);
            Reader is = p_in.getCharacterStream();
            char buffer[] = new char[8192];
            int length;
            while( (length = is.read(buffer, 0, 8192)) != -1) {
                // writes the full 8192-char buffer rather than only the `length`
                // characters actually read, so leftover buffer contents can end
                // up appended after the real data on the final read
                out.write(buffer);
                // out.newLine();
            }
            is.close();
            out.close();
            fw.close();
        }
    }
    /
    CREATE OR REPLACE PROCEDURE sjs.write_CLOB(CLOB_in CLOB,file_in VARCHAR2) AS
    LANGUAGE JAVA NAME 'write_CLOB.pass_str_array(oracle.sql.CLOB,java.lang.String)';
    /

    Jeff,
    The SQL in the script file is below.  To be honest, I have reduced it down to a simple select from dual and it still puts extra spaces at the end of the single line.
    col ord noprint
    spool test.csv
    SELECT  ' ' ord,
      'ZONE_ORDER_NUMBER'||','||
      'ZONE_NAME'||','||
      'ZONE_TYPE'||','||
      'DESCRIPTION'||','||
      'START_DATE'||','||
      'END_DATE'
    FROM dual
    UNION ALL
    SELECT zone_name ord,
      '"'||zone_order_number||'"'||','||
      '"'||zone_name||'"'||','||
      '"'||zone_type||'"'||','||
      '"'||description||'"'||','||
      '"'||TO_CHAR(start_date,'DD-MON-YY')||'"'||','||
      '"'||TO_CHAR(end_date,'DD-MON-YY')||'"'
    FROM zones
    ORDER BY ord;
    spool off

  • Out of memory error when writing large file

    I have the piece of code below which works fine for writing small files, but when it encounters much larger files (>80M), the jvm throws an out of memory error.
    I believe it has something to do with the Stream classes. If I were to replace my PrintStream reference with the System.out object (which is commented out below), then it runs fine.
    Anyone else encountered this before?
    try {
         print = new PrintStream(new FileOutputStream(new File(a_persistDir, getCacheFilename()),
                                                       false));
         // print = System.out;
         for (Iterator strings = m_lookupTable.keySet().iterator(); strings.hasNext(); ) {
              StringBuffer sb = new StringBuffer();
              String string = (String) strings.next();
              String id = string;
              sb.append(string).append(KEY_VALUE_SEPARATOR);
              Collection ids = (Collection) m_lookupTable.get(id);
              for (Iterator idsIter = ids.iterator(); idsIter.hasNext(); ) {
                   IBlockingResult blockingResult = (IBlockingResult) idsIter.next();
                   sb.append(blockingResult.getId()).append(VALUE_SEPARATOR);
              }
              print.println(sb.toString());
              print.flush();
         }
    } catch (IOException e) {
    } finally {
         if (print != null)
              print.close();
    }

    Yes, my first code would just print the strings as I got them. But it was running out of memory then as well. I thought of constructing a StringBuffer first because I was afraid that the PrintStream wasn't allocating the memory correctly.
    I've also tried flushing the PrintStream after every line is written but I still run into trouble.
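    For illustration, a minimal sketch of the streaming idea (hypothetical class and field names, not the original code): write each entry as it is built, so memory use stays bounded by a single line rather than the whole output.
    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.Collection;
    import java.util.Map;

    public class CacheWriter {
        static void dump(Map<String, Collection<String>> table, String path) throws IOException {
            try (BufferedWriter out = new BufferedWriter(new FileWriter(path))) {
                for (Map.Entry<String, Collection<String>> e : table.entrySet()) {
                    StringBuilder sb = new StringBuilder(e.getKey()).append('=');
                    for (String id : e.getValue()) {
                        sb.append(id).append(';');
                    }
                    out.write(sb.toString());  // one line per key, then the buffer is discarded
                    out.newLine();
                }
            }                                  // close() flushes everything once at the end
        }
    }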

  • Short Dump when writing the file to Unix.

    ST22 Dump Log.
    Short text: Error when writing to the file "/appl/data/backlog/outbound/bop_bwfcstngall_erp
    What happened? Resource bottleneck
    The current program "ZPS_BACKLOG_NEW_READ" had to be terminated because
    a capacity limit has been reached.
    Error analysis
    An error occurred when writing to the file
    "/appl/data/outbound/bop_bwfcstngall_erp_20110919_151220.dat".
    Error text: "A file, file system or message queue is no longer available."
    Error code: 52

    Ask your Basis colleagues; the dump says that the file system was not available, so it is clear that your write to Unix could not happen.
    Have a look at OSS Note 1100926 - FAQ: Network performance.

  • Strange characters when writing files

    I've made a program using a FileWriter. The program is coded and compiled on a Linux PC. I'm using the FileWriter to dynamically write HTML lines, and I use "\r" to write returns. On my Linux PC and on my Windows XP SP2 laptop it works fine. On the PC of a friend it does not work correctly: it sees the "\r" as an unknown character, and the browser also complains about it (illegal character). She also has Windows XP SP2 installed, running Internet Explorer 6. What could be the solution for this problem? Should I recompile on my friend's PC? It is strange, though. We use the same Java version (1.5).

    I don't know why you aren't using a PrintWriter (with its println method).
    Regardless, with text files you will always find that they render correctly on one system and not another, because different systems have different line separators and some software is pretty crappy when it comes to display and gets its knickers in a knot when it sees line separators it doesn't like, e.g. Notepad.
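    A small sketch of that suggestion (the file name is just a placeholder): let PrintWriter supply the platform line separator instead of hard-coding "\r".
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    public class LineEndingDemo {
        public static void main(String[] args) throws IOException {
            try (PrintWriter out = new PrintWriter(new FileWriter("page.html"))) {
                out.println("<html>");                  // println appends the platform separator
                out.printf("<body>Hello</body>%n");     // %n does the same in format strings
                out.println("</html>");
            }
        }
    }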

  • Replacing Special Characters when writing to a file

    I am using the following to replace the ñ in Muñoz (the ñ with null). It should output as Muoz.
    This should be simple but it is not working. 209 and 241 represent the upper- and lower-case ñ.
    This statement is in a CFFILE.  I am generating the file but the ñ is still there.
    replace(LGL_NM,chr(209)&chr(241),'3','all')
    I also tried the following without success:
    <cfset x = ReplaceNoCase(#Object_query.LGL_NM#,"#chr(209)#", " ", "all")>
    <cfset LGL_NM = ReplaceNoCase(x,"#chr(241)#", " ", "all")>
    Please Help.
    Thanks!
    Bill

    I mean on the page itself, in the meta tag.  Not sure if it's what's causing the difference though.
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
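    For comparison only, a plain Java sketch of the same character clean-up (this is not the poster's CFML): each character is replaced in its own call, or a regex character class removes both in one pass.
    public class StripEnye {
        public static void main(String[] args) {
            String name = "Mu\u00F1oz";                                   // "Muñoz"
            String a = name.replace("\u00D1", "").replace("\u00F1", "");  // Ñ (209), ñ (241)
            String b = name.replaceAll("[\u00D1\u00F1]", "");             // same result via regex
            System.out.println(a + " " + b);                              // prints "Muoz Muoz"
        }
    }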

  • Page fault in nipalk.sys when writing large file , system crashes

    When reading from VXI 2211 and writing to a file the system crashes with a page fault error in nipalk.sys. I tried loading new drivers for VISA, VXI, even DAQ which I don't use. Where does nipalk.sys come from??

    jcnoble wrote:
    > When reading from VXI 2211 and writing to a file the system crashes
    > with a page fault error in nipalk.sys. I tried loading new drivers for
    > VISA, VXI, even DAQ which I don't use. Where does nipalk.sys come
    > from??
    This is the low-level device driver used by all NI hardware device drivers. PAL most probably stands for Platform Abstraction Layer, and what that means is that it provides a uniform device driver API for managing memory, IO addresses, DMA channels, and interrupts. Most probably this NIPAL device driver API is almost the same on all platforms NI develops modern hardware drivers for. This allows easier porting of device drivers to new hardware, as the big work in porting a hardware driver is porting this NIPAL device driver.
    I would not think that disk writing is the real culprit here, but it may trigger a problem occurring earlier during VXI access.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Square in front of data when writing to file

    When I gather up a bunch of strings in a while loop and then write the data to a file, the first line always has 3 spaces followed by a square. What is happening, and is there a workaround for the problem?
    I included sample code of what I'm trying to accomplish. Essentially, the while loop has a string concatenation for each line in the file I am trying to write to. Everything works great, but the darn square problem is putting a damper on my progress.
    Any help is greatly appreciated!!
    Thanks
    David
    Attachments:
    autotune_wingy_fix.vi ‏70 KB

    Hi David,
    I think your problem could come from the way you write to the file. It's better to write a single string than an array of strings to a file.
    Try converting your array into a single string and writing that to the file.
    I've attached an LV 6.1 version of your VI, modified.
    Hope this helps !
    Julien
    Attachments:
    autotune_wingy_fix[1].vi ‏65 KB

  • "PLS-00172: string literal too long" When Writing XML file into a Table

    Hi.
    I'm using DBMS_XMLStore to get an XML file into a db table. See the example below; I'm using that for my PL/SQL format. The problem is that because there are too many XML elements that I use in "xmldoc CLOB := ...", I get the "PLS-00172: string literal too long" error.
    Can someone suggest a workaround?
    THANKS!!!
    DECLARE
    insCtx DBMS_XMLStore.ctxType;
    rows NUMBER;
    xmldoc CLOB :=
    '<ROWSET>
    <ROW num="1">
    <EMPNO>7369</EMPNO>
    <SAL>1800</SAL>
    <HIREDATE>27-AUG-1996</HIREDATE>
    </ROW>
    <ROW>
    <EMPNO>2290</EMPNO>
    <SAL>2000</SAL>
    <HIREDATE>31-DEC-1992</HIREDATE>
    </ROW>
    </ROWSET>';
    BEGIN
    insCtx := DBMS_XMLStore.newContext('scott.emp'); -- get saved context
    DBMS_XMLStore.clearUpdateColumnList(insCtx); -- clear the update settings
    -- set the columns to be updated as a list of values
    DBMS_XMLStore.setUpdateColumn(insCtx,'EMPNO');
    DBMS_XMLStore.setUpdateColumn(insCtx,'SAL');
    DBMS_XMLStore.setUpdatecolumn(insCtx,'HIREDATE');
    -- Now insert the doc.
    -- This will only insert into EMPNO, SAL and HIREDATE columns
    rows := DBMS_XMLStore.insertXML(insCtx, xmlDoc);
    -- Close the context
    DBMS_XMLStore.closeContext(insCtx);
    END;
    /

    You ask where am getting the XML doc. Well, am not getting the doc itself.
    I either don't understand or I disagree. In your sample code, you're certainly creating an XML document-- your local variable "xmldoc" is an XML document.
    DBMS_XMLSTORE package needs to know the canonical format and that's what I hardcoded.
    Again, I either don't understand or I disagree... DBMS_XMLStore more or less assumes the format of the XML document itself-- there's a ROWSET tag, a ROW tag, and then whatever column tags you'd like. You can override what tag identifies a row, but the rest is pretty much assumed. Your calls to setUpdateColumn identify what subset of column tags in the XML document you're interested in.
    Later in code I use DBMS_XMLStore.setUpdateColumn to specify which columns are to be inserted.
    Agreed.
    xmldoc CLOB :=
    '<ROWSET>
    <ROW num="1">
    <KEY_OLD> Smoker </KEY_OLD>
    <KEY_NEW> 3 </KEY_NEW>
    <TRANSFORM> Specified </TRANSFORM>
    <KEY_OLD> Smoker </KEY_OLD>
    <VALUEOLD> -1 </VALUEOLD>
    <VALUENEW> -1 </VALUENEW>
         <DESCRIPTION> NA </DESCRIPTION>
    </ROW>
    </ROWSET>';
    This is your XML document. You almost certainly want to be reading this from the file system and/or have it passed in from another procedure. If you hard-code the XML document, you're limited to a 32k string literal, which is almost certainly causing the error you were reporting initially.
    As am writing this I'm realizing that I'm doing this wrong, because I do need to read the XML file from the filesystem (but insert the columns selectively)... What I need to come up with is a proc that would grab the XML file and do inserts into a relational table. The XML file will change in the future and that means that all my 'canonical format' code will be broken. How do I deal with anticipated change? Do I need to define/create an XML schema in 10g if am just inserting into one relat. table from one XML file?
    What does "The XML file will change in the future" mean? Are you saying that the structure of the XML document will change? Or that the data in the XML document would change? Your code should only need to change if the structure of the document changes, which should be exceptionally uncommon and would only be an issue if you're adding another column that you want to work with, which would necessitate code changes.
    I found an article where the issue of changing XML file is dealt by using a XSL file (that's where I'd define the 'canonical format'), but am having a problem with creating one, because the source XML is screwed up in terms of the format:
    it's not <x> blah </x>
    <x2> blah </x2>
    x2="blah" x3="blah> ...etc
    Can you point me in the right direction, please?
    You can certainly use something like the DBMS_XSLProcessor package to transform whatever XML document you have into an XML document in an appropriate format for the XMLStore package and pass that transformed XML document into something like your sample procedure rather than defining the xmldoc local variable with a hard-coded document. Of course, you'd need to write appropriate XSL code to do the actual transform.
    Justin
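    If it helps, here is a rough sketch of the "read it from the file system" idea in Java/JDBC. The LOAD_XML wrapper procedure, connection URL, and credentials are assumptions, not anything from this thread; the wrapper would simply call DBMS_XMLStore.insertXML on the CLOB it receives.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class LoadXmlFromFile {
        public static void main(String[] args) throws IOException, SQLException {
            // Read the whole XML document from disk instead of hard-coding it,
            // which avoids the 32k string-literal limit in PL/SQL source.
            String xmlDoc = new String(Files.readAllBytes(Paths.get(args[0])), "UTF-8");

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 CallableStatement cs = conn.prepareCall("{ call LOAD_XML(?) }")) {
                // setString binds the document for a CLOB parameter; for very large
                // documents, setCharacterStream (or a temporary CLOB) is the safer route.
                cs.setString(1, xmlDoc);
                cs.execute();
            }
        }
    }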

  • How to avoid duplicates when writing a file

    Hi all,
    I wrote a JSP program which generates HTML files dynamically for a particular keyword and category. In the same program I also write the links (the page locations, i.e. hrefs) to an XML file. The page is called once every day. The first time, the links are added once, but when the page is called again the next day the same links are added again, so they end up twice in the XML file even though they are already present. What I need is: if a link is already present in the XML file, it should not be written again. I added a condition for that, but it is giving an error.
    Here is my XML file:
    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
    <loc>http://localhost:8080/Links/debt_get out of debt plan.html</loc>
    <priority>0.6</priority>
    <changefreq>weekly</changefreq>
    </url>
    <url>
    <loc>http://localhost:8080/Links/debt_no to debt.html</loc>
    <priority>0.6</priority>
    <changefreq>weekly</changefreq>
    </url>
    <url>
    <loc>http://localhost:8080/Links/sports_football.html</loc>
    <priority>0.6</priority>
    <changefreq>weekly</changefreq>
    </url>
    </urlset>
    Below is my Java code:
    import java.io.*;

    class FileReadTest {

        public static void main(String[] args) {
            FileReadTest f = new FileReadTest();
            f.readMyFile();
        }

        void readMyFile() {
            DataInputStream dis = null;
            String record = null, r = null;
            int recCount = 0;
            try {
                File f = new File("MyXMLFile.xml");
                FileInputStream fis = new FileInputStream(f);
                BufferedInputStream bis = new BufferedInputStream(fis);
                dis = new DataInputStream(bis);
                while ((record = dis.readLine()) != null) {
                    recCount++;
                    r = record;
                    //System.out.println(recCount + ": " + record); //compareTo
                }
                // note: r is only examined here, after the loop has finished,
                // so only the last line of the file is ever checked
                String sb = r.toString();
                int l = 0;
                boolean b = sb.contains("http://localhost:8080/Links/debt_no to debt.html");
                // l = sb.indexOf("http://www.abacapital.com/privacy/?action=print");
                l = sb.indexOf("http://localhost:8080/Links/debt_no to debt.html");
                System.out.println(b);
                if (l != -1)
                    System.out.println("This is equal " + l);
                else {
                    System.out.println("Not equal");
                }
            } catch (IOException e) {
                // catch io errors from FileInputStream or readLine()
                System.out.println("Uh oh, got an IOException error!" + e.getMessage());
            } finally {
                // if the file opened okay, make sure we close it
                if (dis != null) {
                    try {
                        dis.close();
                    } catch (IOException ioe) {
                    }
                }
            }
        }
    }
    The output I am getting is:
    false
    Not equal
    Please help me solve this problem. I do not want to write the link a second time if it is already present in the file. If the link is not in the file, then it should be written.

    while ( (record=dis.readLine()) != null )
       recCount++;
       r=record;
       //System.out.println(recCount + ": " + record); //compareTo
    String sb=r.toString();
    int l =0;
    boolean b=sb.contains("http://localhost:8080/Links/debt_no to debt.html");
    // l =sb.indexOf("http://www.abacapital.com/privacy/?action=print");
    l =sb.indexOf("http://localhost:8080/Links/debt_no to debt.html");
    System.out.println(b);
    I think the contains and indexOf checks should be done within the loop for each line; the way it is written above, I think the loop finishes after reading all the lines, which means r always contains the last line. Which means the check is always made for the last line.
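    To make that concrete, a small sketch of the per-line check the reply describes (the file name and link are taken from the post above): do the contains() check on every line inside the loop, not after it.
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class LinkExists {
        static boolean linkExists(String file, String link) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader(file))) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.contains(link)) {
                        return true;   // found on this line, stop reading
                    }
                }
            }
            return false;              // reached end of file without a match
        }

        public static void main(String[] args) throws IOException {
            String link = "http://localhost:8080/Links/debt_no to debt.html";
            if (!linkExists("MyXMLFile.xml", link)) {
                // only append the new <url> entry when the link is not present yet
                System.out.println("Link not found, safe to append: " + link);
            }
        }
    }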

  • Wrong data type when writing XML file

    We are trying to write an XML file from ODI, but we get the following error:
    ODI-1217: Session CMT_PAQ_CREA_XML (15001) failed with return code 8000.
    ODI-1226: Step WRITE_XML_SCHEMA failed after 1 attempt(s).
    ODI-1232: Execution of procedure WRITE_XML_SCHEMA failed.
    ODI-1228: Task WRITE_XML_SCHEMA (Procedure) failed on the XML target connection CMTInCaMo.
    Caused By: java.sql.SQLException: Could not save the file D:\ODI\oracledi\XMLFiles\Cuentas11g.xml because a class java.sql.SQLException occurred and said: java.sql.SQLException: java.sql.SQLException: Wrong data type: type: NUMERIC (2) expected: INTEGER value: 301000000232
         at com.sunopsis.jdbc.driver.xml.SnpsXmlFile.writeToFile(SnpsXmlFile.java:740)
         at com.sunopsis.jdbc.driver.xml.SnpsXmlConnection.internalExecute(SnpsXmlConnection.java:713)
         at com.sunopsis.jdbc.driver.xml.SnpsXmlPreparedStatement.executeUpdate(SnpsXmlPreparedStatement.java:111)
         at com.sunopsis.sql.SnpsQuery.executeUpdate(SnpsQuery.java:665)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.executeUpdate(SnpSessTaskSql.java:3218)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execStdOrders(SnpSessTaskSql.java:1785)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java:2805)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2515)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:537)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:449)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1954)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:322)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:224)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:246)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:237)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:794)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:114)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:83)
         at java.lang.Thread.run(Thread.java:619)
    The datastores have valid records, but when we try to write them to the XML file it doesn't work.
    Can somebody help us?

    Thanks, let me give you an update:
    The datatype of the element is defined as follows:
              <xs:element name="AcctNo" minOccurs="1" maxOccurs="1">
                   <xs:simpleType>
                        <xs:restriction base="xs:string">
                             <xs:maxLength value="60"></xs:maxLength>
                        </xs:restriction>
                   </xs:simpleType>
              </xs:element>
    The ODI reverse engineering has created the column as VARCHAR(255). I think this is OK, because the XML driver doesn't take into account the element length defined in the XSD.
    We have checked the XSD and it is valid.
    On the other hand, I must say that we have created an interface that loads data into the corresponding datastore without any problem, but we couldn't generate an XML file from it.
    Any other advice?

  • Not enough memory to complete operation when writing TDMS file.

    Hello,
       I am new to LabVIEW and having a bit of trouble. I attach some code here. What I want to do is sample from an NI 9220 DAQ on 16 channels at 20 kHz, while a second NI 6009 samples from 4 channels at 1000 kHz. I want to append these together and then write to a TDMS file.
       I have tried to write this code using NI-DAQmx VIs, but when I did, the two DAQs did not have the right timing with each other: the 6009 samples for a longer time.
       I have now tried instead to use the DAQ Assistant to read from the two devices, and it works in that the TDMS files produced have the correct timing between the two DAQs. However, if I record for more than 2 minutes (and in the end I want to record for a much longer time), I get the "Not enough memory to complete operation" message. This still happens even if I get rid of my charts that display the data, and also if I get rid of the NI 6009 completely and just keep the 9220 sampling at 20 kHz. It happens even if I replace my TDMS write with a write-measurement-file assistant in which I tell it to write a series of files that are each less than 2 minutes long.
        I think it is something to do with the amount of data I am reading that is being held in memory. What can I do about this? Also, my charts display very slowly, basically every second when the 20k samples are read in. However, if I lower the amount of data read, the charts don't display all the data points.
        Alfredo 
    Attachments:
    03_02_15.vi ‏688 KB

    As far as your charts updating very slowly - the way your code is designed, your charts only get data when both 20K samples & 1M samples are done collecting.  Have you tried setting up DAQ assistant for continuous sampling instead of 20K samples or 1M samples?
    -BTC

  • Duplicate namespace declarations when writing a file with JCA file adapter

    I am using the JCA file adapter to write an XML file. The composite contains a mediator which receives and transforms an XML to the desired format and then calls a JCA file adapter to write the file.
    The problem I am having is that the written file has namespace declarations repeated on the repeating elements instead of a single declaration at the root. For example,
    instead of
    <ns0:Root xmlns:ns0="namespace0"  xmlns:ns1="namespace1" xmlns:ns2="namespace2">
    <ns0:RepeatingChild>
    <ns1:Element1>value1</ns1:Element1>
    <ns2:Element2>value2</ns2:Element2>
    </ns0:RepeatingChild>
    <ns0:RepeatingChild>
    <ns1:Element1>value3</ns1:Element1>
    <ns2:Element2>value4</ns2:Element2>
    </ns0:RepeatingChild>
    </ns0:Root>
    What I see in the file is:
    <ns0:Root xmlns:ns0="namespace0"  xmlns:ns1="namespace1" xmlns:ns2="namespace2">
    <ns0:RepeatingChild>
    <ns1:Element1 xmlns:ns1="namespace1" xmlns:"namespace1">value1</ns1:Element1>
    <ns2:Element2 xmlns:ns2="namespace2" xmlns:"namespace2">value2</ns2:Element2>
    </ns0:RepeatingChild>
    <ns0:RepeatingChild>
    <ns1:Element1 xmlns:ns1="namespace1" xmlns:"namespace1">value3</ns1:Element1>
    <ns2:Element2 xmlns:ns2="namespace2" xmlns:"namespace2">value4</ns2:Element2>
    </ns0:RepeatingChild>
    </ns0:Root>
    So basically all the elements which are in a different namespace than the root element have the namespace declaration repeated, even though the namespace identifier is declared at the root element level.
    The XML is still valid, but this unnecessarily increases the file size 3-4 times. Is there a way I can write the XML file without duplicate namespace declarations?
    I am using SOA Suite 11.1.1.4
    The file adapter has the schema set as above XML.
    I tried the transformation in the mediator using XSL and also tried using assign [the source (input to mediator) and target (output of mediator, input of file adapter) XMLs are exactly the same], but no success.

    I used the automapper of JDeveloper to generate the schema. The source and target schemas are exactly the same.
    While trying to figure it out, I observed that if the namespaces in question are listed in the exclude-result-prefixes list in the XSL, then when testing the XSL in JDeveloper the duplicate namespaces occur in the target; if I remove the namespace identifiers from the exclude-result-prefixes list, then in JDeveloper testing the target correctly has only a single namespace declaration at the root node.
    But when I deployed the same to the server and tested there, the same problem occurred again.

  • E71 also steals characters when writing characters...

    Whenever I write a text message and use one of the characters á, ã, í, ó, õ, ú or č, ć, š, đ, ž (Serbian Latin) for the first time in a message, it takes 91 characters. When I use it for the second, third... time it takes 1 character. WHY?
    This is also a problem on all other Nokia Symbian phones (my experience is with the 3650, N70 and N73). I don't have this problem on non-Symbian Nokia phones or on non-Nokia phones.

    Is that phenomenon similar to this: "normal" English characters take up 1 character each, but if you mix in Chinese characters the count jumps by around 50% more.
    I've learnt not to mix English and Chinese in messages whenever possible.
    Never got around to finding out why though
    E71! Lovin' It!
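    A plausible explanation (an assumption, not something confirmed in this thread): plain SMS text uses the GSM 7-bit alphabet, which fits 160 characters per message, while any character outside that alphabet (such as č or đ) forces the whole message into UCS-2 encoding, which fits only 70. Counting the typed character itself, the first special character therefore appears to cost
    160 - 70 + 1 = 91 characters,
    and every later one costs just 1, because the message is already being counted in UCS-2.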

  • How do I create a new line when writing to file

    I am trying to write a few strings to a file but I can't get them to print on separate lines. They all end up together. I tried to include \n in the string but that didn't work. Any suggestions? I thought this part would be simple.

    http://forum.java.sun.com/thread.jsp?forum=54&thread=450725&tstart=0&trange=15
