Encoding Issue: JMS and Mapping: UTF-8 / ISO8859-1

Hi All,
I am facing a problem with an encoding issue.
Scenario: JMS --> SAP PI --> JMS
Requirement: The input plain-text file may contain special characters such as "©®". Based on this condition, in the Java mapping we check the payload and change the 'encoding' attribute to UTF-8 or ISO8859-1,
e.g. <?xml version="1.0" encoding="UTF-8"?> in the target XML output.
While testing in the operation mapping our Java mapping works fine: the encoding attribute changes from UTF-8 to ISO8859-1 if a special character exists. But if I test the same in the Integration Directory (Test Configuration) or do an end-to-end test, the encoding attribute does not change.
For testing we used a set of plain-text files in UTF-8 and ISO8859-1.
I tried the option of using beans in the adapter modules of the sender JMS channel: MessageTransformBean, TextCodepageConversionBean, XmlAnonymizerBean.
This document and related threads were also referred to: [How to Handle Encoding in PI|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42]
Regards,
Ashutosh R

Hi
public static boolean fixSpecialCharforWeb(String text) {
    if ((text == null) || (text.trim().length() == 0)) {
        return false;
    }
    // Code points that mark the payload as containing special characters.
    // (For a Character, hashCode() equals the char value, which is what the
    // original comparisons relied on.)
    final int[] SPECIAL_CHARS = {
        39, 8217, 146, 145,    // single quotes / apostrophes
        8220, 8221, 147, 148,  // double quotes
        8226, 149,             // bullet point
        732, 152,              // small tilde
        173,                   // soft hyphen
        8211, 150,             // en dash
        8212, 151,             // em dash
        8364, 128,             // euro sign
        165,                   // yen sign
        163,                   // pound sign
        189, 188, 190,         // 1/2, 1/4 and 3/4 signs
        8224, 134,             // sword/dagger
        8482, 153,             // trademark
        38,                    // ampersand
        174,                   // registered mark
        169,                   // copyright mark
        63,                    // question mark (the original also matched literal '?')
        233, 232,              // e-acute, e-grave
        144                    // C1 control character
    };
    try {
        String trimmed = text.trim();
        for (int i = 0; i < trimmed.length(); i++) {
            int code = trimmed.charAt(i);
            for (int special : SPECIAL_CHARS) {
                if (code == special) {
                    // The first special character found decides the encoding.
                    return true;
                }
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return false;
}
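
To connect this back to the question: here is a minimal sketch (my illustration, not the poster's actual mapping) of how the flag above could drive the encoding declaration the Java mapping writes. It assumes the method sits in the same class as fixSpecialCharforWeb(), with java.io.* imported, and that the payload arrives as UTF-8 bytes:

// Sketch only: re-emit the payload with an encoding declaration chosen
// from its content. Assumes fixSpecialCharforWeb(String) is in scope.
public static void transform(InputStream in, OutputStream out) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buf = new byte[8192];
    int len;
    while ((len = in.read(buf)) > 0) {
        baos.write(buf, 0, len);
    }
    String payload = new String(baos.toByteArray(), "UTF-8");
    String encoding = fixSpecialCharforWeb(payload) ? "ISO-8859-1" : "UTF-8";
    // Strip any existing declaration, then write one that matches the byte
    // encoding actually used for the body.
    String body = payload.replaceFirst("^<\\?xml[^>]*\\?>\\s*", "");
    out.write(("<?xml version=\"1.0\" encoding=\"" + encoding + "\"?>").getBytes(encoding));
    out.write(body.getBytes(encoding));
}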

Similar Messages

  • Embedding HTML in XML CDATA and encoding issues

    Hi all,
I'm embedding HTML code in a CDATA section. My problem is that, depending on the document, the HTML can be encoded in many formats. I borrowed a piece of code that sniffs the format so I can create a String in the "right" encoding (or at least the one that was guessed).
- If I directly injected those into the CDATA section, I guess they'd be encoded in UTF-8 and some characters would be misinterpreted?
- What if I transcoded the HTML from the sniffed format to UTF-8?
- Are there any issues with doing this?
Sorry if this is a dumb question, but I'm quite new to this kind of encoding issue.
BTW, I'm using DOM.
Thanks
lexo

    I don't know if it's a dumb question. I just don't understand it at all. Encoding issues only arise when you write data from a Java program to an external location, or when you read data from an external location into a Java program. And none of the activities you mentioned there have anything to do with that.
    When you write your XML to an external file, or wherever you write it to, it gets encoded at that moment. The whole thing. Elements, attributes, CDATA sections, the whole thing. Doesn't matter what's in it, the whole thing gets encoded in whatever charset was chosen.
    Does that help?
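
To illustrate that last point: here is a minimal sketch (my addition, plain JAXP only) of serializing a DOM document, CDATA section included, where the encoding of the whole document is chosen once, at write time:

import java.io.FileOutputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class CdataEncodingDemo {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("page");
        // In memory this is just characters; no encoding is involved yet.
        root.appendChild(doc.createCDATASection("<b>café & crème</b>"));
        doc.appendChild(root);

        // Encoding happens here, once, for the whole document.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
        t.transform(new DOMSource(doc), new StreamResult(new FileOutputStream("page.xml")));
    }
}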

  • Credit memo and MAP issue

Hello Gurus,
I have 2 issues, and I am unable to understand the system logic for them.
Issue 1:
We have an intercompany scenario. The sequence of transactions is as follows:
1. Create a return purchase order (document type NB) - intercompany purchase
2. Post the return goods receipt - movement type 161
3. Post a credit memo with respect to the return purchase order
4. Cancel the movement type 161 document - using movement type 161
The system allows me to cancel the document created using movement type 161.
Step 4 should not be allowed, but the system is allowing me to do so. Please advise how to stop this.
Issue 2:
I have a strange MAP issue which is fluctuating very drastically. The scenario is as follows:
Store stock is 10 ea, stock value is 1000, MAP is 100
DC MAP is 25
When we make a return STO from store to DC, the stocks are issued at DC MAP (condition type P101)
Assume we issue 9 quantity from the store; then
Store stock is 9 ea, stock value is (1000-(25*9)) = 775, MAP is 775/1 = 775
This transaction is causing a major fluctuation in store MAP (100 changed to 775).
Please advise if there is any way we can control this behaviour.
I understand that there is a setting which, in case of major MAP changes, will post the amount to the PRD account.
I would appreciate any guidance.
Regards
Amit

Hi,
For your first scenario:
For a return PO of intercompany NB type, after creating the PO you need to deliver it via VL10G. For that delivery the system will allow you to post the PGR. Prior to these steps you need to return the goods using movement type 161 in MIGO. Please check that this 161 stock is posted to stock in transit; after PGR the system will clear the stock in transit.
Now, after clearing the stock from transit, try cancelling the MIGO 161 document.
For the second query:
Please review your question: "When we make a return STO from store to DC, the stocks are issued at DC MAP (condition type P101)
Assume we issue 9 quantity from the store; then
Store stock is 9 ea, stock value is (1000-(25*9)) = 775, MAP is 775/1 = 775
This transaction is causing a major fluctuation in store MAP (100 changed to 775)".
Please review the third line.
Regards,

  • Issue in OWB mapping - when changing source and target database

    Hi,
I need help with resolving the issue I am facing when moving mappings from the development environment to QA.
Here is the situation:
We develop ETL using one source, one staging and one target database.
In development we use one control center for source-to-staging and another control center for staging-to-target.
All works fine in development.
Now I have created a new runtime repository and imported all OWB projects (with full dependencies, an exact replica of development). Now I need to switch the source, staging and target to different databases.
I have created new database location connections and defined/attached DB connectors for the stage and target locations.
Now the issues are:
1. Two staging mappings are not able to bind to the source table (giving different errors):
a. One mapping shows the error "source synonym translation no longer valid" when deploying, but validation completes without any issue.
b. The other mapping shows the error "source table/object not bound to repository".
2. All the target mappings validate successfully, but deployment says "table or view does not exist". But the tables do exist on source, stage and target (and permissions are set correctly for the target user to read from the staging tables).
Not sure how to proceed from here.
I have recreated the repository and re-imported all projects/mappings and defined all connections, but the issue remains.
Thanks in advance,
Vipin

1. Two staging mappings are not able to bind to the source table (giving different errors):
a. One mapping shows the error "source synonym translation no longer valid" when deploying, but validation completes without any issue.
b. The other mapping shows the error "source table/object not bound to repository".
The above errors were resolved when I re-synchronized the tables (for a few I had to reimport the table) and the mappings.
2. All the target mappings validate successfully, but deployment says "table or view does not exist". But the tables do exist on source, stage and target (and permissions are set correctly for the target user to read from the staging tables).
The above error is still pending. My target mappings cannot be deployed/compiled.
For the above I have defined one staging location against one target location, and the target location has a connector to staging (not sure if I have to give the connector the same name as the staging location, as I created the DB connector with a different name, but the referenced database is the same as the staging location).
The mappings are associated with the desired data location and metadata.
The control center also has that data location.
The mappings are configured for the desired location.

  • Premiere and Media Encoder CC encoding issue

    Hi all,
I am having an encoding issue with PP and ME CC. My video assets are fine, and on the timeline they appear as they should, but when I look at the rendered H.264 video there are encoding errors in it. I have attached two images: the black one is how it should look and the white one is the error. The video plays fine and then it flickers between the images shown.
It has done this on a few different videos I have rendered over the last few days and I don't know why. It also happens on a different machine running CC. Does anyone have any suggestions?

    Hi James,
    I've never seen this before. Can you give us more info? Answer all the questions on this FAQ: What information should I provide when asking a question on this forum?
    Thanks,
    Kevin

  • External List Management - Issue in File Upload and Map

I am facing a problem while maintaining external lists through ELM and executing the "upload and map file" step for a tab-separated text file.
The file data is shown properly in the file preview through the mapping format, but when I execute it through the external list the data is not uploaded at all. The log shows no error or reason for this.
I want to know: apart from the basic ELM configuration, is there some other configuration required to enable ELM to upload and map the file?
    Message was edited by:
            Pratyasha Shishodia

I found the resolution to this.
The issue was that the task under the ELM workflow was not marked for background processing, and hence it was always in "ready" state and never proceeded through the ELM transaction. All the steps, for this reason, were shown in "planned" or "ready" state.
There was no error in ELM, as nothing in the system actually failed, and hence nothing appeared in the error log.

  • PI 7.1 : Taking a input PDF file and mapping it to a hexBinary attribute

    Hello All,
We have a requirement which involves taking an input PDF file, mapping it to a message type with a binary attribute, and sending it to an R3 system.
Can anyone please detail the steps or point us to the correct documents for setting up the scenario?
The scenario is file to proxy adapter. The part we need assistance with is picking up the input PDF and mapping it to the binary field.
    Thanks.
    Kiran

Thanks Praveen, Mayank, Sarvesh and Andreas for your valuable help with the issue.
I was able to successfully pick up the binary PDF file from a file server, encode it using Base64 and post it to R3.
I used the following code snippet and added the mentioned JAR files to create a new JAR file, which was used as the Java mapping in the operation mapping.
import com.sap.aii.mapping.api.StreamTransformation;
import com.sap.aii.mapping.api.*;
import com.sap.aii.utilxi.base64.api.*;
import java.io.*;
import java.util.*;

public class Base64EncodingXIStandard implements StreamTransformation {
    String fileNameFromFileAdapterASMA;
    private Map param;

    public void setParameter(Map map) {
        param = map;
        if (param == null) {
            param = new HashMap();
        }
    }

    // Allows testing the mapping standalone, outside PI.
    public static void main(String[] args) {
        Base64EncodingXIStandard con = new Base64EncodingXIStandard();
        con.setParameter(null);
        try {
            InputStream is = new FileInputStream(args[0]);
            OutputStream os = new FileOutputStream(args[1]);
            con.execute(is, os);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void execute(InputStream inputstream, OutputStream outputstream) {
        // Read the file name set by the sender adapter (ASMA); fall back
        // when running standalone without a DynamicConfiguration.
        DynamicConfiguration conf = (DynamicConfiguration) param.get("DynamicConfiguration");
        if (conf != null) {
            DynamicConfigurationKey KEY_FILENAME = DynamicConfigurationKey.create(
                    "http://sap.com/xi/XI/System/File", "FileName");
            fileNameFromFileAdapterASMA = conf.get(KEY_FILENAME);
        }
        if (fileNameFromFileAdapterASMA == null) {
            fileNameFromFileAdapterASMA = "ToBase64.txt";
        }
        try {
            byte[] buffer = new byte[1024 * 5000];
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            int len;
            while ((len = inputstream.read(buffer)) > 0) {
                baos.write(buffer, 0, len);
            }
            String str = Base64.encode(baos.toByteArray());
            outputstream.write("<?xml version=\"1.0\" encoding=\"utf-8\"?><ROOT>".getBytes());
            outputstream.write(("<FILENAME>" + fileNameFromFileAdapterASMA + "</FILENAME>").getBytes());
            outputstream.write(("<BASE64DATA>" + str + "</BASE64DATA></ROOT>").getBytes());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I had to make the following configuration settings:
1) Create a sender communication channel with Adapter-Specific Message Attributes enabled and the File Name checkbox checked.
2) Use the Java mapping in the operation mapping.
The scenario is working smoothly without any issues.
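As a side note, the main() method in the class above means the JAR can be smoke-tested outside PI from the command line before uploading it; the file names here are placeholders of my choosing:

java Base64EncodingXIStandard input.pdf output.xml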
    Thanks.
    Kiran

  • [SOLVED] File name encoding issue

    Hi all,
I have a large series of files with accented characters. They were all displayed nicely, but at some point, when I copied them to another computer, the characters were replaced by codes, for instance: "ó" --> "ó".
+Renaming, i.e. "Pasó" (bad encoding of "Pasó") --> Pasó: while typing it shows the correct character, but when pressing Enter the name remains "Pasó".
+If I rename the file to something else and then to the correct name, it will accept it: Pasó --> Pas --> Pasó will display correctly.
I don't know if it's a system-wide encoding issue, because new files are displayed correctly, but I would like to know if I have to change the file names manually to make them right.
PS. When copying badly encoded files to another FS (like a USB drive), nautilus and bash refuse to copy them.
    Last edited by Wasser (2012-09-17 21:10:52)
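
This "ó" --> "ó" pattern is what UTF-8 bytes look like when mis-decoded as ISO-8859-1. As a minimal sketch (my illustration, not from the thread; test on copies of the files first), the repair in Java is to reverse that mis-decoding:

import java.nio.charset.StandardCharsets;

public class MojibakeFix {
    public static void main(String[] args) {
        String garbled = "Pasó"; // the UTF-8 bytes of "Pasó", mis-read as Latin-1
        // Recover the raw bytes, then decode them as UTF-8 as intended.
        String repaired = new String(
                garbled.getBytes(StandardCharsets.ISO_8859_1),
                StandardCharsets.UTF_8);
        System.out.println(repaired); // prints "Pasó"
    }
}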

    My fstab:
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    tmpfs /tmp tmpfs nodev,nosuid 0 0
    # /dev/sda2 LABEL=ROOT
    UUID=d2243d9c-b8e7-442a-8446-5a43a4d9221b / ext4 rw,relatime,data=ordered 0 1
    # /dev/sda5 LABEL=HOME
    UUID=e67f5cfa-3ec3-4c06-9c2c-62c4cc188ffe /home ext4 rw,relatime,data=ordered 0 2
    # /dev/sda3 LABEL=VAR
    UUID=caac4924-2a13-4c97-9926-668ac0595ba3 /var reiserfs rw,relatime 0 2
    # /dev/sda1 LABEL=UEFI
    UUID=1E70-6485 /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2
    # /dev/sda4
    UUID=14993c2e-4bc4-42e4-b2e5-9dbc286abb4c none swap defaults 0 0
    Files in question are in /dev/sda5 (HOME)
    Last edited by Wasser (2012-09-16 08:37:52)

  • Pros and Cons of using REST over JMS (and other technologies)

    Hey all,
    I am working on a project where we were using JMS initially to send messages between servers. Our front end servers have a RESTful API and use JEE6, with EJB 3.1 entity beans connected to a mysql database and so forth. The back end servers are more like "agents" so to speak.. we send some work for them to do, they do it. They are deployed in GlassFish 3.1 as well, but initially I was using JMS to listen to messages. I learned that JMS onMessage() is not threaded, so in order to facilitate handling of potentially hundreds of messages at once, I had to implement my own threading framework. Basically I used the Executor class. I could have used MDBs, but they are a lot more heavyweight than I needed, as the code within the onMessage was not using any of the container services.
    We ran into other issues, such as deploying our app in a distributed architecture in the cloud like EC2 was painful at best. Currently the cloud services we found don't support multi-cast so the nice "discover" feature for clustering JMS and other applications wasn't going to work. For some odd reason there seems to be little info on building out a scalable JEE application in the cloud. Even the EC2 techs, and RackSpace and two others had nobody that understood how to do it.
    So in light of this, plus the data we were sending via JMS was a number of different types that had to all be together in a group to be processed.. I started looking at using REST. Java/Jersey (JAX-RS) is so easy to implement and has thus far had wide industry adoption. The fact that our API is already using it on the front end meant I could re-use some of the representations on the back end servers, while a few had to be modified as our public API was not quite needed in full on the back end. Replacing JMS took about a day or so to put the "onmessage" handler into a REST form on the back end servers. Being able to submit an object (via JAXB) from the front servers to the back servers was much nicer to work with than building up a MapMessage object full of Map objects to contain the variety of data elements we needed to send as a group to our back end servers. Since it goes as XML, I am looking at using gzip as well, which should compress it by about 90% or so, making it use much less bandwidth and thus be faster. I don't know how JMS handles large messages. We were using HornetQ server and client.
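On the gzip point above, a minimal sketch (my illustration, with a placeholder endpoint URL) of compressing an XML payload for an HTTP POST using only java.net.HttpURLConnection and java.util.zip:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipPost {
    public static void main(String[] args) throws Exception {
        String xml = "<work><item id=\"1\">payload</item></work>";
        // Placeholder endpoint; a real worker URL would come from configuration.
        URL url = new URL("http://backend.example.com/api/work");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/xml");
        // Declare the body as gzip so the receiver knows to decompress it.
        conn.setRequestProperty("Content-Encoding", "gzip");
        try (OutputStream out = new GZIPOutputStream(conn.getOutputStream())) {
            out.write(xml.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}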
    So I am curious what anyone thinks.. especially anyone that is knowledgeable with JMS and may understand REST as well. What benefits do we lose out on via JMS. Mind you, we were using a single queue and not broadcasting messages.. we wanted to make sure that one and only one end server got the message and handled it.
Thanks.. I look forward to anyone's thoughts on this.

    851827 wrote:
Thank you for the reply. One of the main reasons to switch to REST was that JMS is strongly tied to Java. While I believe it can work with other message brokers that other platforms/languages can also use, we didn't want to spend more time researching all those paths. REST is very simple, works very well and is easy to implement in almost any language and platform. Our architecture is basically a front-end REST API consumed by clients, and the back-end servers are more like worker threads. We apply a set of rules, validations, and such on the front end, then send the work to be done to the back end. We could do it all in one server tier, but we also want to allow other 3rd parties to implement the "worker" server pieces in their own domains with their own language/platform of choice. Now, with this model, they simply provide a URL to send some REST calls to, and send some REST calls back to our servers.

Well, this sounds like one of those requirements which might make JMS not a good fit. As ejp mentioned, message brokers usually have bindings in multiple languages, so JMS does not necessarily restrict you from using other languages/platforms for the worker nodes. Using a REST-based API certainly makes that simpler, though.

As for load balancing, I am not entirely sure how GlassFish or JBoss does it. Last time I did anything with scaling, it involved load balancers in front of servers that were session/cookie aware for stateful needs and could, round-robin or based on some load factor on each server, send requests to the appropriate servers in a cluster. If you're saying that JBoss and/or GlassFish no longer need that, then how is it done? I read up on HornetQ, where a request sent to one IP/HornetQ server could "discover" other servers in a cluster and balance the load by sending requests to the other HornetQ servers. I assume this is how the JEE containers are now doing it? The problem with that, to me, is that you have one server that is loaded with all incoming traffic and then has to resend it on to other servers in the cluster. With enough load, it seems that the GlassFish or JBoss server becomes a load balancer and stops doing what it was designed to do: be a JEE container. I don't recall now if load balancing is in the spec or not. I would think it would not be required to be part of a container, including session replication and such? Is that part of the spec now?

You are confusing many different types of scaling. Different layers of the JEE stack scale in different ways. You usually scale/load balance at the web layer by putting a load balancer in front of your servers. At the EJB layer, however, you don't necessarily need that: in JBoss, the client-side stub for invoking remote EJBs in a cluster will actually include the addresses of all the boxes and do some sort of work distribution itself, so no given EJB server would be receiving all the incoming load. For JMS, again, there are various points of work to consider. You have the message broker itself, which is scaled/load balanced in whatever fashion it supports (I don't know many details on actual message broker implementations). But for the MDBs themselves, each JEE server is pretty independent: each JEE server in the cluster will start a pool of MDBs and set up a connection to the relevant queue. The incoming messages will then be distributed to the various servers and MDBs accordingly. Again, no single box will be more loaded than any other.

Load balancing/clustering is not part of the JEE "spec", but it is one of the many features that a decent JEE server will handle for you. The point of JEE was to specify patterns for doing work which, if followed, allow the app server to do all the "hard" parts. Some of those features are required (transactions, authentication, etc.), and some are not (clustering, load balancing, other robustness features).

I still would think dedicated load balancers, whether physical hardware or virtual software running in a cloud/VM setup, would be a better solution for handling load to different tiers?

Like I said, that depends on the tier. It makes sense in some situations, not others. (For one thing, load balancers tend to be HTTP-based, so they don't work so well for non-HTTP protocols.)

  • Combining 2 files and mapping it to a single destination file

Hi all;
if I am combining 2 files and mapping them to a single destination file, do we need to define 2 sender communication channels and 1 receiver communication channel?

I have done it with a BPM.
Steps:
1. Block with correlation name
2. Fork with end condition "counter not equal 2"
3. Fork branch 1 -- receive with correlation and a container step incrementing the count by 1
   Fork branch 2 -- receive with correlation and a container step incrementing the count by 1
4. Transformation
5. Send
I have this source structure during mapping:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
       <ns0:Message1>
          <ns1:SenderData1 xmlns:ns1="http://multimapping.com">
             <Name>AAAA</Name>
          </ns1:SenderData1>
       </ns0:Message1>
       <ns0:Message2>
          <ns1:SenderData2 xmlns:ns1="http://multimapping.com">
             <Name>BBBB</Name>
          </ns1:SenderData2>
       </ns0:Message2>
    </ns0:Messages>
I broke the structure into 2 and placed the parts in different files:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:SenderData1 xmlns:ns0="http://multimapping.com">
       <Name>AAAA</Name>
    </ns0:SenderData1>
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:SenderData2 xmlns:ns0="http://multimapping.com">
       <Name>BBBB</Name>
    </ns0:SenderData2>
Is that all that needs to be done?

  • Java to Excel encoding issue.

I'm trying to export my data to an Excel file. When I open the Excel file, the Japanese characters look garbled.
iResponse.setHeader("attachment ; filename = \"" + reportFileName() + "\" ", "content-disposition");
iResponse.setHeader("application/vnd.ms-excel;","content-type");
iResponse.setHeader("UTF-8","Content-Encoding");
By Googling I learnt that Excel doesn't like the UTF-8 format. Is there any other way I can export the data?
Thanks.

    Specifying the content encoding for a binary format like Excel is pointless. You are producing native Excel format, aren't you? If you're producing a CSV file and claiming it's Excel, you should have mentioned that.
    Anyway: you're producing native Excel using something like Apache POI, I suppose? I would have thought it handled encoding issues, but check its documentation or its FAQ to see if that's really the case.
    Another option, which may or may not be feasible, is to use the new Office 2007 format. It's based on XML, so all of the encoding issues are automatically handled.
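
If native Excel output is feasible, here is a minimal sketch along those lines using Apache POI (my illustration; it assumes the poi and poi-ooxml libraries are on the classpath). The XLSX format stores text as Unicode, so Japanese strings survive without any charset juggling:

import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExcelExportDemo {
    public static void main(String[] args) throws Exception {
        try (Workbook wb = new XSSFWorkbook();
             FileOutputStream out = new FileOutputStream("report.xlsx")) {
            Sheet sheet = wb.createSheet("Report");
            Row row = sheet.createRow(0);
            // Japanese text goes in as a plain Java String; POI stores it as Unicode.
            row.createCell(0).setCellValue("日本語テスト");
            wb.write(out);
        }
    }
}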

  • Encoding issue for file manager

I am using the ditto command to duplicate a file. This file has a Unicode filename, and as per http://developer.apple.com/qa/qa2001/qa1173.html I first normalize the name to kCFStringNormalizationFormD and then convert it to UTF-8 before calling ditto on it. This all works smoothly, but when I try to get the FSRef using the original Unicode name I get fnfErr. Doesn't the API CFURLGetFSRef convert the string to kCFStringNormalizationFormD? Or is there any alternative to ditto on Tiger?

There are no encoding issues if I use an XML (xlf or xliff) bundle, as XML supports UTF-8 encoding.

  • Looking for clarification on network latency issue vs drive mapping

    Hi,
I am seeing this as a mystery and not getting a crystal-clear idea of the reason for the issue. The issue is related to the performance of the application in terms of the time it takes to process the input file.
I wrote a Swing client application. It takes some parameters like server name and IP/host address and connects to the Process Server, which is responsible for processing client application requests. The client application communicates with the process server through a TCP/IP connection, has the input file processed, and returns the decisions back to the user through the output file.
Below are the scenarios I am using for launching the application:
1. If both client application and server are running locally on my desktop, the time it takes to process the input file is 2 minutes.
2. If my desktop runs the client and the server runs remotely on a Windows server, it takes 13 minutes to process the same input file.
3. To reduce the time in scenario 2, I installed the client application on the remote server as well (so that both client and server application are running on the Windows server), mapped the server's share drive to my desktop, and launched the application from my desktop (from the U drive, where the application is mapped). Now it takes 10 minutes to process the same input file.
I am struggling to understand why it takes that long in scenario 3, because the application is installed locally on the server and the input and output files are also copied onto the U drive. Sometimes I wonder whether I am launching the application the right way.
Can somebody explain: if we launch a remote Java application through a drive mapping, will there be network latency even though everything is on the server (U drive)? I need to mention one more scenario: 4. If I log into the remote Windows server and launch the client application there, the time it takes to process the same input file is about a minute.
Some more details on the issue: I am not encoding the file; I am using a third-party application which provides an API to communicate with the process server, and I just use the API methods and classes to pass the input file data to the server. I have used the 'tracert' command for the remote server and I see 8 hops between my desktop and the remote server. I even installed a network sniffer tool on my laptop and captured the traffic while the application was running.
The input file has 140000 records (text lines, comma-delimited), 6.271 MB in size. I have posted to understand the time taken in scenario 3, where everything is on the mapped drive (i.e., the client application and input file technically reside on the server, right?) but the client application is launched from the desktop. The reason I do it this way is that, instead of logging into the remote server, the user can easily launch the application from the desktop. So when I launch the application this way, doesn't it count as the client application running local to the server, or does it become remote? (I captured the network traffic in this scenario too, and I saw communication between my desktop IP address and the server IP address; the server takes about 3.84 milliseconds to respond to the client for each item, which I think is just travel time, not process time.) I am assuming that even when the application is launched from the mapped drive, it should take about 1 minute (the time taken when I launch the application after logging into the server, not through the drive mapping) to process the input file, as everything is on the server.
    Thanks in advance,
    Jyothi

Reading and writing the data shouldn't be the problem; it's what you are doing with the data that will be taking all the time.
Try this:
import java.io.*;

public class WriteFile {
    public static void main(String... args) throws IOException {
        String filename = "record.csv";
        int records = 140 * 1000;
        int values = 6;

        // Write 140,000 comma-delimited records and time it.
        long start = System.nanoTime();
        PrintWriter pw = new PrintWriter(new BufferedWriter(new FileWriter(filename)));
        for (int r = 0; r < records; r++) {
            for (int i = 0; i < values; i++) {
                if (i > 0)
                    pw.print(',');
                pw.print(r * 10 + i);
            }
            pw.println();
        }
        pw.close();
        long time = System.nanoTime() - start;
        System.out.printf("Time to write %,d records was %.3f sec. file size=%.3fMB%n",
                records, time / 1e9, new File(filename).length() / 1e6);

        // Read the records back, parsing every field, and time it.
        start = System.nanoTime();
        BufferedReader br = new BufferedReader(new FileReader(filename));
        String line;
        while ((line = br.readLine()) != null) {
            // do some work.
            String[] parts = line.split(",");
            int[] nums = new int[parts.length];
            for (int i = 0; i < parts.length; i++)
                nums[i] = Integer.parseInt(parts[i]);
        }
        br.close();
        time = System.nanoTime() - start;
        System.out.printf("Time to read %,d records was %.3f sec%n",
                records, time / 1e9);
    }
}
Prints:
Time to write 140,000 records was 0.462 sec. file size=6.193MB
Time to read 140,000 records was 0.792 sec

  • Seeing � etc despite having View--Character encoding as unicode and auto-detect universal

On viewing some web pages I see characters such as � (for example). But View - Character Encoding is set to Unicode (UTF-8) or Western (ISO8859-1), and Tools - Options - Content - Fonts - Advanced encoding is set to either of those.

Example of a page:
http://scienceofdoom.com/2010/09/17/on-missing-the-point-by-chilingar-et-al-2008/
- a little over halfway down, in the section headed "Anthropogenic Imact on the Earth’s Climate – Tiny", from the paragraph "And continue:" these non-characters appear in equation (12) and subsequently.
Another page: http://www.zimbabwesituation.com/sep26_2010.html in the topic "Red warning lights".
Most web pages I read are without problems.
I contacted the writer of the first page and s/he had no idea why it happens.

  • Issue with heat maps refresh process in EID 3.1?

Is there an issue with heat maps in EID 3.1? Heat maps don't refresh unless you go back to the home page and then go back to the Endeca app again.
In the Oracle sample app, if we open the Map tab we see that Milwaukee is really hot in the heat map. Now if we filter the data to show only data from within 100 miles of Orlando, FL, the map refreshes to show that area, but the colors on the heat map do not change.
Now if we keep the refinements the same, go back to the home page, and return to the sample app's Maps tab, it still shows the area within 100 miles of Orlando, FL, which is good; but now the heat map has updated and shows the correct colors.
Now if we remove the refinement, it shows the complete US map as hot, which is again wrong.
I have observed this issue in the Chrome browser as well as Firefox.
Is there any way to overcome this issue?

This issue was resolved after applying the latest patch from Oracle.
