Streaming large amounts of data over a socket causes corruption?

I'm writing an app to transfer large amounts of data via a simple client/server architecture between two machines.
Problem: If I send the data too 'fast', the data arrives corrupted:
- Calls to read() return wrong data (wrong 'crc')
- Subsequent calls to read() do not return -1 but allow me to read e.g. another 60 or 80 KBytes.
- available() always returns '0'; but I'll get rid of that method anyway (as recommended in other forum entries).
The behaviour is somewhat difficult to reproduce, but it fails for me reliably when transferring the data between two separate machines and when setting the number of packets (SenderBugStreams.TM) to 1000 or larger.
Workaround: reduce the number of packets sent to e.g. 1, or introduce the 'sleep' on the sender side. Another workaround: switching to java.nio.* alone did not help, but when I got rid of the Streams and used ByteBuffers exclusively, the problem disappeared. Unfortunately, the Streams are required by other parts of my application.
I'm running the code on two dual-CPU machines connected via
Below is the code for the Sender and the Listener. Please excuse the style; this is only to demonstrate the problem.
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.channels.Channels;
import java.nio.channels.SocketChannel;
import java.util.Arrays;

public class SenderBugStreams {

    public static void main(String[] args) throws IOException {
        InetSocketAddress targetAdr = new InetSocketAddress(args[0], ListenerBugStreams.DEFAULT_PORT);
        System.out.println("connecting to: " + targetAdr);
        SocketChannel socket = SocketChannel.open(targetAdr);
        sendData(socket);
        socket.close();
        System.out.println("Finished.");
    }

    static final int TM = 10000;      // number of packets to send
    static final int TM_SIZE = 1000;  // bytes per packet
    static final int CRC = 2;         // trailing 'checksum' byte
    static int k = 5;

    private static void sendData(SocketChannel socket) throws IOException {
        OutputStream out = Channels.newOutputStream(socket);
        byte[] ba = new byte[TM_SIZE];
        Arrays.fill(ba, (byte) (k++ % 127));
        System.out.println("Sending..." + k);
        for (int i = 0; i < TM; i++) {
            out.write(ba);
//            try {
//                Thread.sleep(10);   // the 'sleep' workaround mentioned above
//            } catch (InterruptedException e) {
//                e.printStackTrace();
//                throw new RuntimeException(e);
//            }
        }
        out.write(CRC);
        out.flush();
        out.close();
    }
}
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.channels.Channels;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ListenerBugStreams {
    static int DEFAULT_PORT = 44521;

    /**
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.socket().bind(new InetSocketAddress(DEFAULT_PORT));
        System.out.print("Waiting...");
        SocketChannel clientSocket = serverChannel.accept();
        System.out.println(" starting, IP=" + clientSocket.socket().getInetAddress() +
            ", Port=" + clientSocket.socket().getLocalPort());
        //read data from socket
        readData(clientSocket);
        clientSocket.close();
        serverChannel.close();
        System.out.println("Closed.");
    }

    private static void readData(SocketChannel clientSocket) throws IOException {
        InputStream in = Channels.newInputStream(clientSocket);
        //read and ingest objects
        byte[] ba = null;
        for (int i = 0; i < SenderBugStreams.TM; i++) {
            ba = new byte[SenderBugStreams.TM_SIZE];
            in.read(ba);
            System.out.print("*");
        }
        //verify checksum
        int crcIn = in.read();
        if (SenderBugStreams.CRC != crcIn) {
            System.out.println("ERROR: Invalid checksum: " + SenderBugStreams.CRC + "/" + crcIn);
        }
        System.out.println(ba[0]);
        int x = in.read();
        int remaining = 0;
        while (x != -1) {
            remaining++;
            x = in.read();
        }
        System.out.println("Remaining:" + in.available() + "/" + remaining);
        System.out.println(" " + SenderBugStreams.TM + " objects ingested.");
        in.close();
    }
}

Here is your trouble:
    in.read(ba);
read(byte[]) does not read N bytes, it reads up to N bytes. If one byte has arrived then it reads and returns that one byte. You always need to check the return value of read(byte[]) to see how much you got (also check for EOF). TCP chops up the written data to whatever packets it feels like and that makes read(byte[]) pretty random.
You can use DataInputStream which has a readFully() method; it loops calling read() until it gets the full buffer's worth. Or you can write a little static utility readFully() like so:
    // Returns false if it hits EOF immediately. Otherwise reads the full buffer's
    // worth. If it encounters EOF in mid-packet, throws an EOFException.
    public static boolean readFully(InputStream in, byte buf[])
        throws IOException
    {
        return readFully(in, buf, 0, buf.length);
    }

    public static boolean readFully(InputStream in, byte buf[], int pos, int len)
        throws IOException
    {
        int got_total = 0;
        while (got_total < len) {
            int got = in.read(buf, pos + got_total, len - got_total);
            if (got == -1) {
                if (got_total == 0)
                    return false;
                throw new EOFException("readFully: end of file; expected " +
                                       len + " bytes, got only " + got_total);
            }
            got_total += got;
        }
        return true;
    }
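For reference, here is a minimal sketch (not part of the original reply) of how the listener's read loop could use DataInputStream.readFully() instead; the method name readDataFully is made up, the constants come from SenderBugStreams above, and java.io.DataInputStream would additionally need to be imported:

    // Drop-in variant of ListenerBugStreams.readData() using DataInputStream.
    // readFully() blocks until the buffer is completely filled, or throws
    // EOFException if the stream ends first.
    private static void readDataFully(SocketChannel clientSocket) throws IOException {
        DataInputStream in = new DataInputStream(Channels.newInputStream(clientSocket));
        byte[] ba = new byte[SenderBugStreams.TM_SIZE];
        for (int i = 0; i < SenderBugStreams.TM; i++) {
            in.readFully(ba);      // always fills the whole buffer or throws
            System.out.print("*");
        }
        int crcIn = in.read();
        if (SenderBugStreams.CRC != crcIn) {
            System.out.println("ERROR: Invalid checksum: " + SenderBugStreams.CRC + "/" + crcIn);
        }
        in.close();
    }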

Similar Messages

  • Couldn't copy large amount of data from enterprise DB to Oracle 10g

    Hi,
    I am using iBATIS to copy data from an enterprise DB (EDB) to Oracle and vice versa.
    The datatype of the field on EDB is 'text' and the datatype on Oracle is 'SYS.XMLTYPE';
    I am binding these to a Java String property in a POJO to bind values.
    I could successfully copy a limited amount of data from EDB to Oracle, but if there is more data I get the following exceptions with different Oracle drivers (though I could read large amounts of data from EDB):
    --- Cause: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
    at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeUpdate(MappedStatement.java:107)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.update(SqlMapExecutorDelegate.java:457)
    at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.update(SqlMapSessionImpl.java:90)
    at com.ibatis.sqlmap.engine.impl.SqlMapClientImpl.update(SqlMapClientImpl.java:66)
    at com.aqa.pojos.OstBtlData.updateOracleFromEdbBtlWebservice(OstBtlData.java:282)
    at com.aqa.pojos.OstBtlData.searchEdbAndUpdateOracleBtlWebservice(OstBtlData.java:258)
    com.ibatis.common.jdbc.exception.NestedSQLException:
    --- The error occurred in com/aqa/sqlmaps/SQLMaps_OSTBTL_Oracle.xml.
    --- The error occurred while applying a parameter map.
    --- Check the updateOracleFromEDB-InlineParameterMap.
    --- Check the parameter mapping for the 'btlxml' property.
    --- Cause: java.sql.SQLException: setString can only process strings of less than 32766 chararacters
    at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeUpdate(MappedStatement.java:107)
    at com.iba
    I have the latest Oracle 10g JDBC drivers.
    Remember, I could copy any amount of data from Oracle to EDB, but not the other way around.
    Please let me know if you have come across this issue; any recommendation is very much appreciated.
    Thanks,
    CK.

    Hi,
    I finally remembered how I solved this issue previously.
    The JDBC driver isn't able to directly call the insert with an XMLTYPE column. The solution I used was to build a wrapper procedure in PL/SQL.
    Here it is (for insert, but I suppose that update will be the same):
    create or replace procedure insertXML(file_no_in in number, program_no_in in varchar2, ost_XML_in in clob, btl_XML_in in clob) is
    begin
    insert into AQAOST_FILES (file_no,program_no,ost_xml,btl_xml) values(file_no_in, program_no_in, xmltype(ost_XML_in), xmltype(btl_XML_in));
    end insertXML;
    here is the sqlmap file I used
    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE sqlMap
    PUBLIC "-//ibatis.apache.org//DTD SQL Map 2.0//EN"
    "http://ibatis.apache.org/dtd/sql-map-2.dtd">
    <sqlMap>
         <typeAlias alias="AqAost" type="com.sg2net.jdbc.AqAost" />
         <insert id="insert" parameterClass="AqAost">
              begin
                   insertxml(#fileNo#,#programNo#,#ostXML:CLOB#,#bltXML:CLOB#);
              end;
         </insert>
    </sqlMap>
    and here is a simple program:
    package com.sg2net.jdbc;
    import java.io.IOException;
    import java.io.Reader;
    import java.io.StringWriter;
    import java.sql.Connection;
    import oracle.jdbc.pool.OracleDataSource;
    import com.ibatis.common.resources.Resources;
    import com.ibatis.sqlmap.client.SqlMapClient;
    import com.ibatis.sqlmap.client.SqlMapClientBuilder;
    public class TestInsertXMLType {
          /**
           * @param args
           */
          public static void main(String[] args) throws Exception {
               String resource = "sql-map-config-xmlt.xml";
               Reader reader = Resources.getResourceAsReader(resource);
               SqlMapClient sqlMap = SqlMapClientBuilder.buildSqlMapClient(reader);
               OracleDataSource dataSource = new OracleDataSource();
               dataSource.setUser("test");
               dataSource.setPassword("test");
               dataSource.setURL("jdbc:oracle:thin:@localhost:1521:orcl");
               Connection connection = dataSource.getConnection();
               sqlMap.setUserConnection(connection);
               AqAost aqAost = new AqAost();
               aqAost.setFileNo(3);
               aqAost.setProgramNo("prg");
               Reader ostXMLReader = Resources.getResourceAsReader("ostXML.xml");
               Reader bltXMLReader = Resources.getResourceAsReader("bstXML.xml");
               aqAost.setOstXML(readerToString(ostXMLReader));
               aqAost.setBltXML(readerToString(bltXMLReader));
               sqlMap.insert("insert", aqAost);
               connection.commit();
          }

          public static String readerToString(Reader reader) {
               StringWriter writer = new StringWriter();
               char[] buffer = new char[2048];
               int charsRead = 0;
               try {
                    while ((charsRead = reader.read(buffer)) > 0) {
                         writer.write(buffer, 0, charsRead);
                    }
               } catch (IOException ioe) {
                    throw new RuntimeException("error while converting reader to String", ioe);
               }
               return writer.toString();
          }
     }
    package com.sg2net.jdbc;
     public class AqAost {
          private long fileNo;
          private String programNo;
          private String ostXML;
          private String bltXML;

          public long getFileNo() {
               return fileNo;
          }
          public void setFileNo(long fileNo) {
               this.fileNo = fileNo;
          }
          public String getProgramNo() {
               return programNo;
          }
          public void setProgramNo(String programNo) {
               this.programNo = programNo;
          }
          public String getOstXML() {
               return ostXML;
          }
          public void setOstXML(String ostXML) {
               this.ostXML = ostXML;
          }
          public String getBltXML() {
               return bltXML;
          }
          public void setBltXML(String bltXML) {
               this.bltXML = bltXML;
          }
     }
    I tested the insert and it works correctly
    ciao,
    Giovanni

  • My phone is using large amounts of data; when I go to System Services, it's my mapping services that's causing it. What are mapping services and how do I switch them off? I really need help.

    My phone is using large amounts of data. When I go to System Services, it's my mapping services that's causing it. What are mapping services and how do I switch them off? I really need help.

    I have the same problem. I switched off Location Services, maps in data, whatever else maps could be involved in, and then just last night it chewed 100 MB... I'm also on Vodacom, so I'm seeing a pattern here somehow. Siri was switched on, however, so I switched it off now and will see what happens. But I'm going to go into both Apple and Vodacom this afternoon, because this must be sorted out; it's a serious issue we have on our hands and some uproar needs to be made against those responsible!

  • Error in Generating reports with large amount of data using OBIR

    Hi all,
    we have integrated OBIR (Oracle BI Reporting) with OIM (Oracle Identity Management) to generate custom reports. Some of the custom reports contain a large amount of data (approx. 80-90K rows with 7-8 columns), and the queries for these reports primarily use the audit tables and resource form tables. Now when we try to generate a report, it works fine with HTML, where the report is generated directly on the console, but when we try to generate the same report and save it as PDF or Excel, it fails with the following error:
    [120509_133712190][][STATEMENT] Generating page [1314]
    [120509_133712193][][STATEMENT] Phase2 time used: 3ms
    [120509_133712193][][STATEMENT] Total time used: 41269ms for processing XSL-FO
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Helvetica closed.
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Times-Roman closed.
    [120509_133712848][][PROCEDURE] FO+Gen time used: 41924 msecs
    [120509_133712848][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) is called.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) done. All inputs are cleared.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] End Memory: max=496MB, total=496MB, free=121MB
    [120509_133818606][][EXCEPTION] java.net.SocketException: Socket closed
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
    at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
    at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
    at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
    at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:304)
    at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
    at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at oracle.apps.xdo.servlet.util.IOUtil.readWrite(IOUtil.java:47)
    at oracle.apps.xdo.servlet.CoreProcessor.process(CoreProcessor.java:280)
    at oracle.apps.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:82)
    at oracle.apps.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:562)
    at oracle.apps.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:265)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:270)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:250)
    at oracle.apps.xdo.servlet.XDOServlet.doGet(XDOServlet.java:178)
    at oracle.apps.xdo.servlet.XDOServlet.doPost(XDOServlet.java:201)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at oracle.apps.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:97)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    It seems that when the query processing takes some time we face this issue. Do I need to perform any additional configuration to generate such reports?

    java.net.SocketException: Socket closed
         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
         at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
         at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
         at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
         at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
         at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
         at weblogic.servlet.internal.CharsetChunkOutput.implWrite(CharsetChunkOutput.java:396)
         at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:198)
         at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
         at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
         at com.tej.systemi.util.AroundData.copyStream(AroundData.java:311)
         at com.tej.systemi.client.servlet.servant.Newdownloadsingle.producePageData(Newdownloadsingle.java:108)
         at com.tej.systemi.client.servlet.servant.BaseViewController.serve(BaseViewController.java:542)
         at com.tej.systemi.client.servlet.FrontController.doRequest(FrontController.java:226)
         at com.tej.systemi.client.servlet.FrontController.doPost(FrontController.java:128)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3498)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:17
    (Please help find a solution to this issue; it's in production and we need it ASAP.)
    Thanks in Advance
    Edited by: 909601 on Jan 23, 2012 2:05 AM

  • Looking for ideas for transferring large amounts of data between systems

    Hello,
    I am looking for ideas based on best practices for transferring Large Amounts of Data in and out of a Netweaver based application.
    We have a new system we are developing in Netweaver that will utilize both the Java and ABAP stack, and will require integration with other SAP and 3rd Party Systems. It is a standalone product that doesn't share any form of data store with other systems.
    We need to be able to support 10s of millions of records of tabular data coming in and out of our system.
    Since we need to integrate with so many different systems, we are planning to use RFC for our primary interface in and out of the system. As it turns out RFC is not good at dealing with this large amount of data being pushed through a single call.
    We have considered a number of possible ideas, however we are not very happy with any of them. I would like to see what the community has done in the past to solve problems like this as well as how SAP currently solves this problem in other applications like XI, BI, ERP, etc.

    Primoz wrote:Do you use KDE (Dolphin) 4.6 RC or 4.5?
    Also I've noticed that if i move / copy things with Dolphin they're substantially slower than if I use cp/mv. But cp/mv works fine for me...
    Also run Dolphin from terminal to try and see what's the problem.
    Hope that help at least a bit.
    Could you explain why Dolphin should be slower? I'm not attacking you, I'm just asking.
    Because I thought that Dolphin is just a "little" wrapper around the cp/mv/cd/ls applications/commands.

  • Azure Cloud service fails when sent large amount of data

    This is the error;
    Exception in AZURE Call: An error occurred while receiving the HTTP response to http://xxxx.cloudapp.net/Service1.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being
    aborted by the server (possibly due to the service shutting down). See server logs for more details.
    Calls with smaller amounts of data work fine. Large amounts of data cause this error.
    How can I fix this??

    Go to the web.config file, look for the <binding> that is being used for your service, and adjust the various parameters that limit the maximum length of the messages, such as
    maxReceivedMessageSize.
    http://msdn.microsoft.com/en-us/library/system.servicemodel.basichttpbinding.maxreceivedmessagesize(v=vs.100).aspx
    Make sure that you specify a size that is large enough to accommodate the amount of data that you are sending (the default is 64 KB).
    Note that even if you set a very large value here, you won't be able to go beyond the maximum request length that is configured in IIS. If I recall correctly, the default limit in IIS is 8 megabytes.

  • DSS problems when publishing large amount of data fast

    Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
    There are several loops publishing data. One publishes approximately 50 items in a rate of 50ms, another about 40 items with 100ms publishing rate.
    I send a command to a subprogram (125 ms) that reads and publishes the answer on a DSS URL (approx. 125 ms). So that is one item on the DSS for about 250 ms. But this data is not seen on my main GUI window that reads the DSS URL.
    My questions are
    1. Is there any limit in speed (frequency) for data publishing in DSS?
    2. Can DSS be unstable if loaded to much?
    3. Can I lose/miss data in any situation?
    4. In the DSS Manager I have doubled the MaxItems and MaxConnections. How will this affect my system?
    5. When I run my full application I have experienced the following error: Fatal Internal Error : ”memory.ccp” , line 638. Can this be a result of my large application and the heavy load on the DSS? (see attached picture)
    Regards
    Idriz Zogaj
    Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
    Memory Profesional
    direct: +46 (0) - 734 32 00 10
    http://www.zogaj.se

    LuI wrote:
    >
    > Hi all,
    >
    > I am frustrated on VISA serial comm. It looks so neat and its
    > fantastic what it supposes to do for a develloper, but sometimes one
    > runs into trouble very deep.
    > I have an app where I have to read large amounts of data streamed by
    > 13 µCs at 230kBaud. (They do not necessarily need to stream all at the
    > same time.)
    > I use either a Moxa multiport adapter C320 with 16 serial ports or -
    > for test purposes - a Keyspan serial-2-USB adapter with 4 serial
    > ports.
    Does it work better if you use the serial port(s) on your motherboard?
    If so, then get a better serial adapter. If not, look more closely at
    VISA.
    Some programs have some issues on serial adapters but run fine on a
    regular serial port. We've had that problem recently.
    Best, Mark

  • Uploading of large amount of data

    Hi all,
    I really hope you can help me. I have to upload quite a large amount of data from flat files to ODS (via PSA, of course), but the process takes a very long time. I use the method of loading to PSA and then packet by packet into ODS. Loading circa 1,300,000 lines from a flat file takes about 6 or more hours. That seems strange to me. Is it normal or not? Or should I use another uploading method, or set up the ODS some way? Thanks

    hi jj,
    welcome to the SDN!
    in my limited experience, 6 hours for 1.3M records is a bit too long. Here are some things you could try and look into:
    - load from the application server, not from the client computer (meaning, move your file to the server where BW is running, to minimize network traffic).
    - check your transfer rules and any customer exits related to loading, as the smallest performance-inefficient bits of code can cause a lot of problems.
    - check the size of the data packets you're transmitting, as it could also cause problems, via tcode RSCUSTA2 (I think, but I'm not 100% sure).
    Hope this helps you out - please remember to give out points as a way of saying thanks to those that help you out, okay? =)
    ryan.

  • Query about clustering unrelated large amounts of data together vs. keeping it separate.

    I would like to ask the talented enthusiasts who frequent the developer network to tell me if I have understood how Labview deals with clusters. A generic description of a situation involving clusters and what I believe Labview does is given below. An example of this type of situation, generating the Fibonacci sequence, is attached to illustrate what I am saying.
    A description of the general situation:
    A cluster containing several different variables (mostly unrelated) has one or two of these variables unbundled for immediate use and then the modified values bundled back into the cluster for later use.
    What I think Labview does:
    As the original cluster is going into the unbundle (to get original variable values) and the bundle (to update stored variable values) a duplicate of the entire cluster is made before picking out the individual values chosen to be unbundled. This means that if the cluster also contains a large amount of unrelated data then processor time is wasted duplicating this data.
    If on the other hand this large amount of data is kept separate then this would not happen and no processor time is wasted.
    In the attached file the good method does have the array (large amount of unrelated data) within the cluster and does not use the array in more than one place, so it is not duplicated. If tunnels were used instead, I believe at least one duplicate is made.
    Am I correct in thinking that this is the behaviour Labview uses with clusters? (I expected Labview only to duplicate the variable values chosen in the unbundle code object only. As this choice is fixed at compile time it would seem to me that the compiler should be able to recognise that the other cluster variables are never used.)
    Is there a way of keeping the efficiency of using many separate variables (potentially ~50) whilst keeping the ease of using a single cluster variable over using separate variables?
    The attachment:
    A vi that generates the Fibonacci sequence (the I32 used wraps at the ~44th value, so values at that point and later are wrong) is attached. The calculation is iterative using a for loop. 2 variables are needed to perform the iteration, which are stored in a cluster (and passed from iteration to iteration within the cluster). To provide the large amount of unrelated data, a large array of reasonably sized strings is provided.
    The bad way is to have the array stored within the cluster (causing massive overhead). The good way is to have the array separate from the other pieces of data, even if it passes through the for loop (no massive overhead).
    Try replacing the array shift registers with tunnels in the good case and see if you can repeat my observation that using tunnels causes overhead in comparison to shift registers whenever there is no other reason to duplicate the array.
    I am running Labview 7 on windows 2000 with sufficient memory so that the page file is not used in this example.
    Thank you all very much for your time and for sharing your Labview experience,
    Richard Dwan
    Attachments:
    Fibonacci_test.vi 71 KB

    > That is an interesting observation you have made and seems to me to be
    > quite inexplicable. The trick is interesting but not practical for me
    > to use in developing a large piece of software. Thanks for your input
    > - I think I'll be contacting technical support for an explaination
    > along with some other anomolies involving large arrays that I have
    > spottted.
    >
    The deal here is that the bundle and unbundle nodes must be very careful
    when they are swapping elements around. This used to make copies in the
    normal cases, but that has been improved. The reason that the sequence
    affects it is that it affects the algorithm so that it orders the
    element movement so that the algorithm succeeds in avoiding a copy.
    Another, more obvious way
    is to use a regular bundle and unbundle, not
    the named variety. These tend to have an easier time in the algorithm also.
    Technically, I'd report the diagram to tech support to see if the named
    bundle/unbundle case can be handled as well. In the meantime, you can
    leave the data unbundled, as in the faster version.
    Greg McKaskle

  • Sending large amounts of data spontaneously

    In my normal experience with the internet connection, the amount of data sent is about 50 to 80% of that received, but occasionally Firefox starts transmitting large amounts of data spontaneously; what it is, I don't know, and where it's going to, I don't know. For example, today the AT&T status screen showed about 19 MB received and about 10 MB sent after about an hour of on-line time. A few minutes later, I looked down at the status screen and it showed 19.5 MB received and 133.9 MB sent, and the number was steadily increasing. Just before I went on line today, I ran a complete scan of the computer with McAfee and it reported nothing needing attention. I ran the scan because a similar effusion of sending data spontaneously had happened yesterday. When I noticed the data pouring out today, I closed Firefox and it stopped. When I opened Firefox right afterward, the transmission of data did not recommence. My first thought was that my computer had been captured by the bad guys and now I was a robot, but McAfee says not to worry. But should I worry anyway? What's going on, or, not having a good answer to that, how can I find out what's going on? And how can I make it stop, unless I'm seeing some kind of maintenance operation that Mozilla or Microsoft is subjecting me to?

    Instead of using URLConnection, open a Socket to the server port (probably 80) and send an HTTP POST request followed by the data. You may then (optionally) receive data from the server to check that the servlet is OK. This is the same protocol as URLConnection, but you have control over when the data is actually sent...
    Socket sock = new Socket(getHost(), 80);
    DataOutputStream dos = new DataOutputStream(sock.getOutputStream());
    dos.writeBytes("POST /servletname HTTP/1.0\r\n");
    dos.writeBytes("Content-type: text/plain\r\n");  // optional, but good if you know it
    dos.writeBytes("Content-length: " + lengthOfData + "\r\n");  // again optional, but good if you can know it without caching the data first
    dos.writeBytes("\r\n");   // gotta have a blank line before the data
      // send data now
    DataInputStream dis = new DataInputStream(sock.getInputStream());  // optional, if you want to receive
      // receive any feedback from the servlet if you want
    dis.close();
    dos.close();
    sock.close();
    I'm guessing that URLConnection caches the data so it can fill in "Content-length".
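
    If that buffering is the concern, here is a minimal sketch (my addition, not from the reply) using HttpURLConnection's streaming modes, which exist precisely so the body is not cached just to compute Content-Length; the URL is hypothetical:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class StreamingPost {
        public static void main(String[] args) throws Exception {
            byte[] data = new byte[1000];                         // whatever you need to send
            URL url = new URL("http://example.com/servletname");  // hypothetical servlet URL
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setRequestMethod("POST");
            con.setDoOutput(true);
            // Tell the connection the body length up front so it streams
            // instead of buffering everything to compute Content-Length.
            con.setFixedLengthStreamingMode(data.length);
            // (Use con.setChunkedStreamingMode(8192) if the length isn't known.)
            con.setRequestProperty("Content-Type", "text/plain");
            try (OutputStream out = con.getOutputStream()) {
                out.write(data);
            }
            System.out.println("HTTP response: " + con.getResponseCode());
        }
    }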

  • Real time application and large amount of data

    Hi,
    I have a real-time application that needs to allocate a large amount of data in memory (more than 5 GB). I'm using version 1.6 of the JRE and I'm planning to migrate to 1.8 because the application experiences a lot of "stop the world" pauses every day. I have spent a lot of time analysing and testing all the parameters and policies of the GC.
    The question is: with version 1.8, do the pauses in the application caused by the GC cleaning process disappear?
    Thanks.

    Just noting that the GC only really needs to do something when objects are created and then are no longer in use. So if your application needs a large amount of memory, then keeping it, rather than discarding it, might be a better solution.
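
    As a minimal illustrative sketch (my addition, not from the reply) of that keep-and-reuse idea: preallocate a buffer once and reset it per request, instead of allocating a fresh object each time and handing the old one to the GC:

    import java.util.Arrays;

    public class BufferReuse {
        private final byte[] scratch = new byte[1 << 20]; // allocated once, ~1 MB

        byte[] handleRequestReusing() {
            Arrays.fill(scratch, (byte) 0); // reset instead of reallocating
            // ... fill scratch with this request's data ...
            return scratch;                 // nothing new for the GC to collect
        }

        byte[] handleRequestAllocating() {
            return new byte[1 << 20];       // a new 1 MB object per call -> GC pressure
        }
    }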

  • Java NIO - reading large amount of data

    Hi,
    I have difficulties reading large amounts of data with SocketChannel (using both a direct-allocated buffer and an ordinary allocated one). Files greater than 300 KB are cut even though I tried writing the data into a FileChannel.
    My Code:
    ByteBuffer directBlockBuffer = ByteBuffer.allocateDirect(150000);
    buffer = ByteBuffer.allocate(6000000);
    out = new FOS("d:\\msgData.tmp");
    fc = out.getFOS().getChannel(); // FileChannel
    int fileLength = (int) fc.size();
    while (clientChannel.read(directBlockBuffer) > 0)
    {
         directBlockBuffer.flip();
         buffer.put(directBlockBuffer);
         directBlockBuffer.compact();
    }
    //close data file
    buffer.flip();
    fc.write(buffer);
    fc.close();
    FOS.close();
    // end of code
    Any ideas?
    Thanks
    AST

    I don't understand how the "write" result will help read the whole data.
    Anyway, I changed the code so the SocketChannel reads in smaller chunks (~8 KB) and the FileChannel writes on every read, but the data stream is cut again (to ~5 KB no matter what size of file I send).
    In the updated code, when I try to compare socketChannel.read to -1 I get an endless loop.
    I'm basically trying to write a POP3/SMTP server program; this part of the code handles an attachment that is received by the SocketChannel in one unit (i.e. 1+ MB of data; the other SMTP commands/lines are no more than 27 chars and simple to handle).
    Therefore I need to be ready to accept a large amount of data into the buffer and write it to the FileChannel. (In the POP3 thread I'm using MappedByteBuffer successfully.)
    Updated code:
    ByteBuffer directBlockBuffer = ByteBuffer.allocateDirect(8192);
    while (clientChannel.read(directBlockBuffer) > 0 && directBlockBuffer.hasRemaining())
    {
         directBlockBuffer.flip();
         fc.write(directBlockBuffer);
         directBlockBuffer.clear();
    }
    I think, based on the API, my code is logical (and good for small files), but what about handling bigger files (up to 5 MB)?
    Thanks,
    AST 
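
    For reference, here is a minimal sketch (my addition, not from the thread) of a loop that copies everything from a SocketChannel to a FileChannel until end-of-stream; it assumes a blocking channel and that the sender closes the connection when the transfer is complete, and the file path is made up:

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;

    public class ChannelCopy {
        // Copies all bytes from the socket to the file until the peer closes the connection.
        static void copyToFile(SocketChannel clientChannel, String path) throws IOException {
            ByteBuffer buf = ByteBuffer.allocateDirect(8192);
            try (FileOutputStream fos = new FileOutputStream(path)) {
                FileChannel fc = fos.getChannel();
                // read() returns -1 only at end-of-stream; 0 just means "nothing right now"
                while (clientChannel.read(buf) != -1) {
                    buf.flip();               // switch the buffer to draining mode
                    while (buf.hasRemaining()) {
                        fc.write(buf);        // a write may be partial, so loop
                    }
                    buf.clear();              // back to filling mode
                }
            }
        }
    }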

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use CachedRowSet on the bean for the page for getting the data. This works very well at the moment in my development environment. At the moment I am testing on small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering, apart from that instance, when viewing in paged mode, does the component get all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has got to do with the logic of paging. How do you specify which set of 20 records to extract in SQL?
    Thanks for your help!!

  • Open Large amount of data

    Hi
    I have a file on the application server in .dat format. It contains a large amount of data, maybe 2 million records or more, and I need to open the file to check the record count. Is there any software or any option to open the file? I have tried opening it with Notepad, Excel... it gives an error.
    please let me know
    Thanks

    Hi,
    Try this:
    Go to AL11.
    Go to the file directory. For the file there will be a field called length, which is the total length of the file in characters.
    If you know the length of a single line,
    divide the length of the file by the length of a single line; I believe you will get the number of records (for example, a 260,000,000-character file with 130-character lines holds 2,000,000 records).
    Thanks,
    Naren

  • Bex Report Designer - Large amount of data issue

    Hi Experts,
    I am trying to execute (on the Portal) a report made in BEx Report Designer with about 30,000 pages, and the only thing I am getting is a blank page. Everything works fine at about 3,000 pages. Do I need to set something to allow processing such a large amount of data?
    Regards
    Vladimir

    Hi Sauro,
    I have not seen this behavior, but it has been a while since I tried to send an input schedule that large. I think the last time was on a BPC NW 7.0 SP06 system and it worked OK. If you are on a recent support package, then you should search for relevant notes (none come to mind for me, but searching yourself is always a good idea) and if you don't find one then you should open a support message with SAP, with very specific instructions for recreating the problem from a clean input-schedule.
    Good luck,
    Ethan
