Duplicate namespace declarations when writing a file with JCA file adapter

I am using the JCA file adapter to write an XML file. The composite contains a mediator which receives an XML document, transforms it to the desired format, and then calls the JCA file adapter to write the file.
The problem I am having is that the written file repeats the namespace declarations on the repeating elements instead of declaring them once at the root. For example, instead of
<ns0:Root xmlns:ns0="namespace0"  xmlns:ns1="namespace1" xmlns:ns2="namespace2">
<ns0:RepeatingChild>
<ns1:Element1>value1</ns1:Element1>
<ns2:Element2>value2</ns2:Element2>
</ns0:RepeatingChild>
<ns0:RepeatingChild>
<ns1:Element1>value3</ns1:Element1>
<ns2:Element2>value4</ns2:Element2>
</ns0:RepeatingChild>
</ns0:Root>

What I see in the file is:
<ns0:Root xmlns:ns0="namespace0" xmlns:ns1="namespace1" xmlns:ns2="namespace2">
<ns0:RepeatingChild>
<ns1:Element1 xmlns:ns1="namespace1">value1</ns1:Element1>
<ns2:Element2 xmlns:ns2="namespace2">value2</ns2:Element2>
</ns0:RepeatingChild>
<ns0:RepeatingChild>
<ns1:Element1 xmlns:ns1="namespace1">value3</ns1:Element1>
<ns2:Element2 xmlns:ns2="namespace2">value4</ns2:Element2>
</ns0:RepeatingChild>
</ns0:Root>

So basically, every element that is in a different namespace than the root element gets its own namespace declaration, even though the prefix is already declared at the root element level.
The XML is still valid, but this unnecessarily increases the file size three- to four-fold. Is there a way to write the XML file without the duplicate namespace declarations?
I am using SOA Suite 11.1.1.4. The file adapter's schema is based on the XML above.
I tried the transformation in the mediator using XSL and also using assign (the source XML, the input to the mediator, and the target XML, the mediator's output and the file adapter's input, are exactly the same), but with no success.

I used JDeveloper's auto-mapper to generate the schema; the source and target schemas are exactly the same.
While trying to figure this out, I observed that if the namespaces in question are listed in the XSL's exclude-result-prefixes list, then testing the XSL in JDeveloper produces the duplicate namespaces in the target; if I remove those prefixes from the exclude-result-prefixes list, the JDeveloper test output correctly has a single namespace declaration at the root node.
But when I deployed the same composite to the server and tested there, the problem reappeared.
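A workaround I am considering in the meantime is to strip the redundant declarations after the adapter has written the file, by re-parsing and re-serializing it with a small DOM pass. This is only a sketch under my own assumptions (the class name and the in.xml/out.xml file names are illustrative; it is not part of the SOA composite itself):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.*;

public class NamespaceDedup {
    private static final String XMLNS = "http://www.w3.org/2000/xmlns/";

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new File("in.xml"));
        strip(doc.getDocumentElement());
        // An identity transform re-serializes the cleaned DOM.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.transform(new DOMSource(doc), new StreamResult(new File("out.xml")));
    }

    // Drop any xmlns:* attribute whose prefix is already bound to the same
    // URI on the nearest ancestor that declares that prefix at all.
    private static void strip(Element e) {
        NamedNodeMap atts = e.getAttributes();
        for (int i = atts.getLength() - 1; i >= 0; i--) {
            Node a = atts.item(i);
            if (!XMLNS.equals(a.getNamespaceURI())) continue;
            for (Node anc = e.getParentNode(); anc instanceof Element; anc = anc.getParentNode()) {
                Element ae = (Element) anc;
                if (ae.hasAttributeNS(XMLNS, a.getLocalName())) {
                    if (a.getNodeValue().equals(ae.getAttributeNS(XMLNS, a.getLocalName())))
                        e.removeAttributeNode((Attr) a);
                    break; // the nearest declaration decides either way
                }
            }
        }
        for (Node c = e.getFirstChild(); c != null; c = c.getNextSibling())
            if (c instanceof Element) strip((Element) c);
    }
}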

Similar Messages

  • Problem with JCA compliant adapter in XI

    Hi
I have a problem with a JCA-compliant adapter in XI (SP15). The adapter works in the PCK but fails intermittently in XI. Sometimes messages are delivered with a successful status, but very often after sending a message I see the following in the Message Display Tool (Detail Display):
Engine: All
Status: System Error
Repeatable: Yes
Cancelable: Yes
Start: 29.03.2006 17:26:57
End: 29.03.2006 17:26:57
Quality of Service: Exactly Once
Error Category: XICACHE
Error Code: COMMUNICATION

Increase the trace level of the AFW components to "info" with the Log Configurator J2EE service using Visual Admin.
When the error happens again, dig into the most recent DefaultTrace, either by opening the file or by using the more comfortable NetWeaver Administrator log viewer (defaulttrace), reachable via browser at http://xiboxhostname:5<sysnr>00/nwa
    Good luck.
    Alex

  • Out of memory error when writing large file

I have the piece of code below, which works fine for writing small files, but when it encounters much larger files (>80M) the JVM throws an out-of-memory error.
I believe it has something to do with the stream classes: if I replace my PrintStream reference with the System.out object (commented out below), it runs fine.
Has anyone else encountered this before?
try {
    print = new PrintStream(new FileOutputStream(
            new File(a_persistDir, getCacheFilename()), false));
    // print = System.out;
    for (Iterator strings = m_lookupTable.keySet().iterator(); strings.hasNext(); ) {
        StringBuffer sb = new StringBuffer();
        String string = (String) strings.next();
        String id = string;
        sb.append(string).append(KEY_VALUE_SEPARATOR);
        Collection ids = (Collection) m_lookupTable.get(id);
        for (Iterator idsIter = ids.iterator(); idsIter.hasNext(); ) {
            IBlockingResult blockingResult = (IBlockingResult) idsIter.next();
            sb.append(blockingResult.getId()).append(VALUE_SEPARATOR);
        }
        print.println(sb.toString());
        print.flush();
    }
} catch (IOException e) {
} finally {
    if (print != null)
        print.close();
}

Yes, my first version just printed the strings as I got them, but it ran out of memory then as well. I constructed a StringBuffer first because I was afraid the PrintStream wasn't allocating memory correctly.
I've also tried flushing the PrintStream after every line is written, but I still run into trouble.
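One thing I have not tried yet: streaming each entry straight to the PrintStream so that no per-entry StringBuffer is built at all. A minimal sketch, reusing the names from the code above (m_lookupTable, KEY_VALUE_SEPARATOR, VALUE_SEPARATOR, IBlockingResult), meant as a drop-in replacement for the loop:

// Write each key and its ids directly; nothing is buffered beyond the stream itself.
for (Iterator strings = m_lookupTable.keySet().iterator(); strings.hasNext(); ) {
    String id = (String) strings.next();
    print.print(id);
    print.print(KEY_VALUE_SEPARATOR);
    Collection ids = (Collection) m_lookupTable.get(id);
    for (Iterator idsIter = ids.iterator(); idsIter.hasNext(); ) {
        IBlockingResult blockingResult = (IBlockingResult) idsIter.next();
        print.print(blockingResult.getId());
        print.print(VALUE_SEPARATOR);
    }
    print.println();
}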

  • Short Dump when writing the file to Unix.

ST22 dump log:
Short text: Error when writing to the file "/appl/data/backlog/outbound/bop_bwfcstngall_erp
What happened? Resource bottleneck: the current program "ZPS_BACKLOG_NEW_READ" had to be terminated because a capacity limit has been reached.
Error analysis: an error occurred when writing to the file "/appl/data/outbound/bop_bwfcstngall_erp_20110919_151220.dat".
Error text: "A file, file system or message queue is no longer available."
Error code: 52

Ask your Basis colleagues: the dump says the file system was not available, so it is clear why your write to Unix could not happen.
Have a look at OSS Note 1100926 - FAQ: Network performance.

  • Namespace tag change in receiver file adapter

    Hi Group,
My requirement: I need to change the namespace tag on the receiver side.
My XML is something like this:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:ASNList xmlns:ns0="http://www.apriso.com/SXI.GLS">
       <EventData>
          <SenderName>Test</SenderName>
          <ReceiverName>Test</ReceiverName>
          <TimeUTC>10:00:00 IST</TimeUTC>
       </EventData>
       <AdvancedShipmentNotification>
          <ID>ASN</ID>
       </AdvancedShipmentNotification>
    </ns0:ASNList>
I need to transform the message like below:
    <?xml version="1.0" encoding="UTF-8"?>
    <n1:ASNList xsi:schemaLocation="http://www.apriso.com/PMI.GLS SXI.GLS.ASN.xsd" xmlns:n1="http://www.apriso.com/SXI.GLS" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <EventData>
    <SenderName>LES</SenderName>
    <ReceiverName>OSS</ReceiverName>
    <EventTimeUTC>2001-12-17T09:30:47.0Z</EventTimeUTC>
</EventData>
    <AdvancedShipmentNotification>
    <ID>ASN</ID>
    </AdvancedShipmentNotification>
    </n1:ASNList>
I have tried to achieve this by writing a UDF; everything seems fine, but the result is "XML is not well-formed". Below is my UDF code:
public String setNSDeclarations(String a, String b, String c, Container container) {
    StructureNode node = (StructureNode) container.getParameter("STRUCTURE_NODE");
    node.setNSDeclarations(" xmlns:" + c + "=" + b);
    node.setQName(a + "ASNLIST");
    return "";
}
Please suggest if there is any other way to achieve this requirement.
    Regards,

    Hi Hanumantha,
The XMLAnonymizerBean should work for your requirement; please refer to the blogs below:
    Remove namespace prefix or change XML encoding with the XMLAnonymizerBean
    http://scn.sap.com/community/pi-and-soa-middleware/blog/2012/07/10/handling-namespaces-in-pi-using-xmlanonymizerbean
    http://scn.sap.com/people/gabrielsagayaselvam.panneerselvam/blog/2009/12/07/standard-adapter-framework-modules-afmodules-in-pi-71-part-1
    regards,
    Harish
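Incidentally, the "XML is not well-formed" error from the UDF above is consistent with the namespace URI being written without surrounding quotes. A hedged correction, keeping the StructureNode calls exactly as in the post (whether setQName needs further adjustment is left open):

public String setNSDeclarations(String a, String b, String c, Container container) {
    StructureNode node = (StructureNode) container.getParameter("STRUCTURE_NODE");
    // Quote the URI so the output is xmlns:n1="http://..." rather than xmlns:n1=http://...
    node.setNSDeclarations(" xmlns:" + c + "=\"" + b + "\"");
    node.setQName(a + "ASNLIST");
    return "";
}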

How to change the value of ${eol} from \n to \r\n when writing with the File Adapter

    Hi All,
How can I change the value of ${eol} from \n to \r\n while writing a file through the File Adapter?
My file is created in a Linux environment, so \n is used as the new-line separator. I mounted the file directory on a Windows system, and Windows expects \r\n as the line separator, so I need to change the value of ${eol} in the schema created for file writing.
Can anyone provide help with this?
    Thanks in advance

    Hi,
When an XML payload is written to a file, \n is used as the default line separator regardless of whether it's Windows or Linux, as per the XML specification (http://www.w3.org/TR/REC-xml/#sec-line-ends).
As such, one need not worry about the line separators, because the XML file will eventually be processed by a parser, which usually understands both line endings (they are just whitespace characters as far as the parser is concerned).
If you want to convert the line terminators (for readability etc.), 10.1.3.4 has a new feature: in the XML schema definition for the file adapter payload, add these two top-level directives to the schema element:
    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
    nxsd:normalizeLineTerminators="false"
    Regards,
    Shanmu.
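If the Windows-side consumer really does insist on CRLF, one option is a small post-processing step outside the adapter itself. A minimal sketch (the file path is illustrative, and it assumes the file fits in memory and the platform charset is adequate):

import java.io.*;

public class CrlfRewrite {
    public static void main(String[] args) throws IOException {
        File f = new File("/mnt/out/order.xml"); // illustrative path
        StringBuilder sb = new StringBuilder();
        BufferedReader in = new BufferedReader(new FileReader(f));
        for (String line; (line = in.readLine()) != null; )
            sb.append(line).append("\r\n"); // readLine strips \n or \r\n
        in.close();
        FileWriter out = new FileWriter(f);
        out.write(sb.toString());
        out.close();
    }
}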

  • Error when configuring Sender File Adapter (XI 7.0)

    Hi all,
After I configured the sender file adapter of communication channel 'CC_SEND_MATERIAL_FILE', the following error appeared for this channel (in Runtime Workbench / Component Monitoring / Adapter Engine / Communication Channel Monitoring):
Error: com.sap.aii.af.ra.ms.api.DeliveryException: java.net.MalformedURLException: unknown protocol: dest
I think all the configuration for this sender file adapter is correct, but I don't know where the protocol "dest" comes from or how to correct it.
    Could anyone please help me to resolve this problem?
    Thanks for all in advance,
    Vinh Vo

    Hi Sarvesh,
    Yes, i have configured the source file with the path in the AL11. I think all an other configurations are also OK. But i don't know what is protocol 'dest'.
    Anybody can explain and help me, please!
    Vinh Vo

  • Duplicate message handling in the sender file adapter

    Hi,
I enabled the duplicate-file handling check in the sender file adapter, so that whenever there is a duplicate file it sends me an alert and disables the channel, so that I don't keep getting the duplicate-file alert again and again.
My question: will the channel be activated again automatically as soon as a new file arrives, or do I need to do that manually?
    Michal's PI tips: Duplicate handling in file adapter - 7.31

    Hi Hema,
    You will have to activate the channel manually. The idea behind the 'disable' functionality is to avoid further file processing through that channel which can only start once the channel is activated again manually.
    Regards,
    Abhishek

  • Security issues when using the file adapter

I want to use the file adapter, and the documentation states that you can use "directories" and "ftp".
I want to transfer a file securely, and neither of these options is really secure.
Reading from a directory is not really secure when the files come from a Windows share (SMB protocol). Is it possible to make a share more secure in a Windows environment?
And in an FTP session anybody can steal the password. Is there a possibility to use "sftp", or will this become available?

We currently don't have it, but we will soon.

  • How to get the filename in mapping when using sender File adapter?

Hi experts,
I have a scenario where XI reads the input file using a sender file adapter, and the file name is configured in the communication channel.
Is it possible to read this file name in my message mapping?
    Thanks
    gopal

Hi Gopal,
    Use Dynamic Configuration - /people/michal.krawczyk2/blog/2005/11/10/xi-the-same-filename-from-a-sender-to-a-receiver-file-adapter--sp14
    Regards,
    Geetha
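For reference, the usual UDF sketch behind that approach (it assumes Adapter-Specific Message Attributes with "File Name" are enabled on the sender channel, and that com.sap.aii.mapping.api.* is in the UDF's import list; details vary by PI release):

// Read the source file name from the message's dynamic configuration.
DynamicConfiguration conf = (DynamicConfiguration) container
        .getTransformationParameters()
        .get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
DynamicConfigurationKey key = DynamicConfigurationKey
        .create("http://sap.com/xi/XI/System/File", "FileName");
return conf.get(key);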

  • How to avoid duplicates when writing a file

    Hai all,
I wrote a JSP program which generates HTML files dynamically for a given keyword and category. In the same program I also write the links (the page locations, i.e. hrefs) into an XML file. The page is called once every day. The first time, each link is added once; but when the page is called again the next day, the same links are added a second time even though they are already present in the XML file. What I need is: if a link is already present in the XML file, it should not be written again. I put in a condition for that, but it is giving the wrong result.
Here is my XML file:
    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
    <loc>http://localhost:8080/Links/debt_get out of debt plan.html</loc>
    <priority>0.6</priority>
    <changefreq>weekly</changefreq>
    </url>
    <url>
    <loc>http://localhost:8080/Links/debt_no to debt.html</loc>
    <priority>0.6</priority>
    <changefreq>weekly</changefreq>
    </url>
    <url>
    <loc>http://localhost:8080/Links/sports_football.html</loc>
    <priority>0.6</priority>
    <changefreq>weekly</changefreq>
    </url>
</urlset>
Below is my Java code:
import java.io.*;

class FileReadTest {
    public static void main(String[] args) {
        FileReadTest f = new FileReadTest();
        f.readMyFile();
    }

    void readMyFile() {
        DataInputStream dis = null;
        String record = null, r = null;
        int recCount = 0;
        try {
            File f = new File("MyXMLFile.xml");
            FileInputStream fis = new FileInputStream(f);
            BufferedInputStream bis = new BufferedInputStream(fis);
            dis = new DataInputStream(bis);
            while ((record = dis.readLine()) != null) {
                recCount++;
                r = record;
                //System.out.println(recCount + ": " + record); //compareTo
            }
            String sb = r.toString();
            int l = 0;
            boolean b = sb.contains("http://localhost:8080/Links/debt_no to debt.html");
            // l = sb.indexOf("http://www.abacapital.com/privacy/?action=print");
            l = sb.indexOf("http://localhost:8080/Links/debt_no to debt.html");
            System.out.println(b);
            if (l != -1)
                System.out.println("This is equal " + l);
            else
                System.out.println("Not equal");
        } catch (IOException e) {
            // catch io errors from FileInputStream or readLine()
            System.out.println("Uh oh, got an IOException error! " + e.getMessage());
        } finally {
            // if the file opened okay, make sure we close it
            if (dis != null) {
                try {
                    dis.close();
                } catch (IOException ioe) {
                }
            }
        }
    }
}

The output I get is:
false
Not equal
Please help me solve this problem. I do not want the link written a second time if it is already present in the file; if it is not present, it should be written.

while ((record = dis.readLine()) != null) {
    recCount++;
    r = record;
    //System.out.println(recCount + ": " + record); //compareTo
}
String sb = r.toString();
int l = 0;
boolean b = sb.contains("http://localhost:8080/Links/debt_no to debt.html");
// l = sb.indexOf("http://www.abacapital.com/privacy/?action=print");
l = sb.indexOf("http://localhost:8080/Links/debt_no to debt.html");
System.out.println(b);

I think the contains and indexOf checks should be done within the loop, for each line. The way it is written above, the loop finishes after reading all the lines, so r always contains the last line, which means the check is only ever made against the last line.
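In other words, something like this hedged sketch, with BufferedReader substituted for the deprecated DataInputStream.readLine():

import java.io.*;

class LinkCheck {
    // Returns true if any line of the file already contains the link.
    static boolean containsLink(File f, String link) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(f));
        try {
            for (String line; (line = in.readLine()) != null; )
                if (line.contains(link)) return true; // check each line inside the loop
        } finally {
            in.close();
        }
        return false;
    }
}

The JSP would then append the link to the XML only when containsLink(...) returns false.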

  • "PLS-00172: string literal too long" When Writing XML file into a Table

    Hi.
I'm using DBMS_XMLStore to load an XML file into a database table; see the example below for the PL/SQL format I'm using. The problem is that the document assigned in "xmldoc CLOB := ..." contains so many XML elements that I get the error "PLS-00172: string literal too long".
Can someone suggest a workaround?
Thanks!
    DECLARE
    insCtx DBMS_XMLStore.ctxType;
    rows NUMBER;
    xmldoc CLOB :=
    '<ROWSET>
    <ROW num="1">
    <EMPNO>7369</EMPNO>
    <SAL>1800</SAL>
    <HIREDATE>27-AUG-1996</HIREDATE>
    </ROW>
    <ROW>
    <EMPNO>2290</EMPNO>
    <SAL>2000</SAL>
    <HIREDATE>31-DEC-1992</HIREDATE>
    </ROW>
    </ROWSET>';
    BEGIN
    insCtx := DBMS_XMLStore.newContext('scott.emp'); -- get saved context
    DBMS_XMLStore.clearUpdateColumnList(insCtx); -- clear the update settings
    -- set the columns to be updated as a list of values
    DBMS_XMLStore.setUpdateColumn(insCtx,'EMPNO');
    DBMS_XMLStore.setUpdateColumn(insCtx,'SAL');
    DBMS_XMLStore.setUpdatecolumn(insCtx,'HIREDATE');
    -- Now insert the doc.
    -- This will only insert into EMPNO, SAL and HIREDATE columns
    rows := DBMS_XMLStore.insertXML(insCtx, xmlDoc);
    -- Close the context
    DBMS_XMLStore.closeContext(insCtx);
    END;
    /

"You ask where I am getting the XML doc. Well, I am not getting the doc itself."
I either don't understand or I disagree. In your sample code you are certainly creating an XML document: your local variable xmldoc is an XML document.
"The DBMS_XMLSTORE package needs to know the canonical format, and that's what I hardcoded."
Again, I either don't understand or I disagree. DBMS_XMLStore more or less assumes the format of the XML document itself: there's a ROWSET tag, a ROW tag, and then whatever column tags you'd like. You can override what tag identifies a row, but the rest is pretty much assumed. Your calls to setUpdateColumn identify the subset of column tags in the XML document you're interested in.
"Later in the code I use DBMS_XMLStore.setUpdateColumn to specify which columns are to be inserted."
Agreed.
"xmldoc CLOB :=
'<ROWSET>
<ROW num="1">
<KEY_OLD> Smoker </KEY_OLD>
<KEY_NEW> 3 </KEY_NEW>
<TRANSFORM> Specified </TRANSFORM>
<KEY_OLD> Smoker </KEY_OLD>
<VALUEOLD> -1 </VALUEOLD>
<VALUENEW> -1 </VALUENEW>
<DESCRIPTION> NA </DESCRIPTION>
</ROW>
</ROWSET>';"
This is your XML document. You almost certainly want to read it from the file system and/or have it passed in from another procedure. If you hard-code the XML document, you're limited to a 32k string literal, which is almost certainly what causes the error you reported initially.
"As I am writing this I'm realizing that I'm doing this wrong, because I do need to read the XML file from the filesystem (but insert the columns selectively). What I need to come up with is a proc that would grab the XML file and do inserts into a relational table. The XML file will change in the future, and that means all my 'canonical format' code will be broken. How do I deal with anticipated change? Do I need to define/create an XML schema in 10g if I am just inserting into one relational table from one XML file?"
What does "the XML file will change in the future" mean? Are you saying that the structure of the XML document will change, or that the data in it will change? Your code should only need to change if the structure of the document changes, which should be exceptionally uncommon, and would only be an issue if you're adding another column that you want to work with, which would necessitate code changes anyway.
"I found an article where the issue of a changing XML file is handled with an XSL file (that's where I'd define the 'canonical format'), but I'm having a problem creating one, because the source XML is screwed up in terms of format: it's not <x> blah </x> <x2> blah </x2>, it's attributes like x2="blah" x3="blah" ...etc. Can you point me in the right direction, please?"
You can certainly use something like the DBMS_XSLProcessor package to transform whatever XML document you have into a document in the appropriate format for the XMLStore package, and pass that transformed document into something like your sample procedure rather than defining the xmldoc local variable with a hard-coded document. Of course, you'd need to write the appropriate XSL code to do the actual transform.
    Justin
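A hedged illustration of that advice from the client side: read the document from disk and bind it as a parameter, so no 32k PL/SQL string literal is involved (connection details and the file name are illustrative):

import java.io.*;
import java.sql.*;

public class XmlStoreLoad {
    public static void main(String[] args) throws Exception {
        String xml = readFile(new File("emp.xml")); // read the XML from the file system
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
        CallableStatement cs = con.prepareCall(
                "DECLARE ctx DBMS_XMLStore.ctxType; rows NUMBER; " +
                "BEGIN ctx := DBMS_XMLStore.newContext('scott.emp'); " +
                "rows := DBMS_XMLStore.insertXML(ctx, ?); " +
                "DBMS_XMLStore.closeContext(ctx); END;");
        // A bound parameter, unlike a literal, is not limited to 32k.
        cs.setCharacterStream(1, new StringReader(xml), xml.length());
        cs.execute();
        cs.close();
        con.close();
    }

    static String readFile(File f) throws IOException {
        StringBuilder sb = new StringBuilder();
        Reader r = new BufferedReader(new FileReader(f));
        for (int c; (c = r.read()) != -1; ) sb.append((char) c);
        r.close();
        return sb.toString();
    }
}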

  • Page fault in nipalk.sys when writing large file , system crashes

When reading from a VXI 2211 and writing to a file, the system crashes with a page fault in nipalk.sys. I tried loading new drivers for VISA, VXI, and even DAQ, which I don't use. Where does nipalk.sys come from?

    jcnoble wrote:
    > When reading from VXI 2211 and writing to a file the system crashes
    > with a page fault error in nipalk.sys. I tried loading new drivers for
    > VISA, VXI, even DAQ which I don't use. Where does nipalk.sys come
    > from??
This is the low-level device driver used by all NI hardware device drivers. PAL most probably stands for Platform Abstraction Layer, meaning it provides a uniform device-driver API for managing memory, IO addresses, DMA channels, and interrupts. Most probably this NIPAL device-driver API is almost the same on all platforms NI develops modern hardware drivers for; this allows easier porting of device drivers to new hardware, since the big work in porting a hardware driver is porting this NIPAL device driver.
I would not think that disk writing is the real culprit here, but it may trigger a problem occurring earlier during VXI access.
Rolf Kalbermatter
CIT Engineering Netherlands
a division of Test & Measurement Solutions

  • Duplicate Records error when processing transaction file....BPC 7.0

    Hi All,
I have a situation. I am using BPC NW 7.0 and have updated my dimension files. When I validate my transaction file, every single record validates successfully; but when I import the flat file into my application, I get a lot of duplicate-record errors. These are my questions:
1. Can transaction files contain duplicate records?
2. Even if there are duplicates, since it is a cube shouldn't it summarize them rather than flag them as errors and reject the records?
3. Is there something I can do to accept duplicates? (I have checked the Replace option in the data package to overwrite similar records, but it applies only to account, category, and entity.)
4. In my case all the dimension values are identical and the $value is the only difference. Why is it not summing up?
Your quickest reply is much appreciated.
Thanks,
Alex.

    Hi,
I have the same problem.
In my case the file I want to upload has different rows that differ only in the nature column. In the conversion file I map the different natures to one internal nature:
cost1 --> cost
cost2 --> cost
cost3 --> cost
My expectation was that in BPC the nature cost would hold the result cost = cost1 + cost2 + cost3.
Instead, only the first record is uploaded and all the other records are rejected as duplicates.
Any suggestion?

  • Not enough memory to complete operation when writing TDMS file.

    Hello,
I am new to LabVIEW and having a bit of trouble; I attach some code here. What I want to do is sample 16 channels at 20 kHz from an NI 9220 DAQ while sampling 4 channels at 1000 kHz from a second NI 6009. I want to append these together and then write them to a TDMS file.
I tried to write this code using the NI-DAQmx VIs, but when I did, the two DAQs did not have the right timing with each other; the 6009 samples for a longer time.
I have now tried using the DAQ Assistant to read from the two devices instead, and it works in that the TDMS files produced have the correct timing between the two DAQs. However, if I record for more than 2 minutes (in the end I want to record for much longer), I get the "Not enough memory to complete operation" message. This still happens even if I get rid of the charts that display the data, and also if I get rid of the NI 6009 completely and just keep the 9220 sampling at 20 kHz. It happens even if I replace my TDMS write with a Write Measurement File assistant told to write a series of files that are each less than 2 minutes long.
I think it is something to do with the amount of data I am reading being held in memory. What can I do about this? Also, my charts display very slowly, basically every second when the 20k samples are read in; but if I lower the amount of data read, the charts don't display all the data points.
I attach my code, thanks for your help!
Alfredo
    Attachments:
    03_02_15.vi ‏688 KB

alfredog wrote:
[...] my charts display very slowly, basically every second when the 20k samples are read in. However, if I lower the amount of data read, the charts don't display all the data points. [...]
As far as your charts updating very slowly: the way your code is designed, your charts only get data when both the 20k-sample and the 1M-sample reads have finished collecting. Have you tried setting up the DAQ Assistant for continuous sampling instead of fixed reads of 20k or 1M samples?
-BTC
