Problems with a large file

Dear all,
I am working on converting an EBCDIC file to ASCII format. I can convert a 500 MB file in 713,634 ms, but a 1.5 GB file takes 8,299,389 ms, which is more than 10 times as long as the other file. The pattern is that the conversion is very slow until about 70 MB, then it slogs along until around 1.1 GB, and it slows down again until the end.
I am using RandomAccessFile to read and write, but the reads and writes are sequential, using two 3 MB buffers (one to read, one to write). I am running the application on HP-UX 11.11 with 8 GB of memory and 4 CPUs.
Can somebody explain to me why this happens?
Thank you,
dhana

I have solved the problem. It was an error on my part. I was using
raf.write(buffer, 0, length);
The length was wrong. I corrected it to use the right length, and the write started to work faster.
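Roughly, a corrected copy loop looks like the minimal sketch below (the file names, the 3 MB buffer, and the byte-for-byte EBCDIC-to-ASCII translation table are illustrative; the key point is that write() gets the number of bytes actually read, not the full buffer length):

import java.io.IOException;
import java.io.RandomAccessFile;

public class EbcdicToAscii {

    public static void main(String[] args) throws IOException {
        byte[] table = buildTranslationTable();     // 256-entry EBCDIC -> ASCII lookup (placeholder)
        byte[] buffer = new byte[3 * 1024 * 1024];  // 3 MB buffer, as in the original setup

        try (RandomAccessFile in = new RandomAccessFile("input.ebc", "r");
             RandomAccessFile out = new RandomAccessFile("output.txt", "rw")) {
            int bytesRead;
            while ((bytesRead = in.read(buffer)) != -1) {
                for (int i = 0; i < bytesRead; i++) {
                    buffer[i] = table[buffer[i] & 0xFF];  // translate in place
                }
                // Write only the bytes actually read; passing a stale or too-large
                // length here writes junk past the valid data and slows everything down.
                out.write(buffer, 0, bytesRead);
            }
        }
    }

    // Placeholder table: a real one maps every EBCDIC code point to its ASCII equivalent.
    private static byte[] buildTranslationTable() {
        byte[] table = new byte[256];
        for (int i = 0; i < 256; i++) {
            table[i] = (byte) i;
        }
        return table;
    }
}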
Thx,
Dhana

Similar Messages

  • Is anyone else having problems with large files, such as installation images, becoming corrupted when downloaded in Mavericks using Safari?

    I am finding that when I try to download a disk image file, such as Office 2011, it reports an invalid checksum, is corrupted, and will not install. I tried to download it several times and even unchecked the option in the Disk Utility preference pane to verify checksums. The way I solved the problem was to go to my Windows machine, download the image, put it on a USB flash drive, and install it on my Mac from the stick. Not a proper solution, but it did work. Has anyone out there had this problem? This applies to any large .dmg file, not just Office 2011.

    I'm having exactly the same problem with the .dmg file for Office 2011. I tried downloading from my Windows machine with no luck; I still get the invalid checksum message.

  • SFTP MGET of large files fails - connection closed - problem with spool file

    I have a new SFTP job to get files from an FTP server. The files are large (80 MB, 150 MB). I can get smaller files from the FTP site with no issue, but when attempting the larger files the job completes abnormally after 2 min 1 sec each time. I can see the file is created on our local file system with 0 bytes, then when the FTP job fails, the 0-byte file is deleted.
    Is there a limit to how large an FTP file can be in Tidal? Or to how long an FTP job can run?
    The error in the job audit is "Problem with spool file for job XXXX_SFTPGet" with an exit code of 127 (whatever that is).
    In the log, the error is that the connection was closed. I have checked with the FTP host, and their logs also show that we are disconnecting unexpectedly.
    Below is an excerpt from the log
    DEBUG [SFTPMessage] 6 Feb 2015 14:17:33.055 : Send : Name=SSH_FXP_STAT,Type=17,RequestID=12
    DEBUG [SSH2Channel] 6 Feb 2015 14:17:33.055 : Transmit 44 bytes
    DEBUG [ChannelDataWindow] 6 Feb 2015 14:17:33.055 : Remote window size decreased to 130808
    DEBUG [PlainSocket] 6 Feb 2015 14:17:33.071 : RepeatCallback received 84 bytes
    DEBUG [SSH2Connection] 6 Feb 2015 14:17:33.071 : ProcessPacket pt=SSH_MSG_CHANNEL_DATA
    DEBUG [SFTPMessageFactory] 6 Feb 2015 14:17:33.071 : Received message (type=105,len=37)
    DEBUG [SFTPMessageStore] 6 Feb 2015 14:17:33.071 : AddMessage(12) - added to store
    DEBUG [SFTPMessage] 6 Feb 2015 14:17:33.071 : Reply : Name=SSH_FXP_ATTRS,Type=105,RequestID=12
    DEBUG [SFTPMessage] 6 Feb 2015 14:17:33.071 : Send : Name=SSH_FXP_OPEN,Type=3,RequestID=13
    DEBUG [SSH2Channel] 6 Feb 2015 14:17:33.071 : Transmit 56 bytes
    DEBUG [ChannelDataWindow] 6 Feb 2015 14:17:33.071 : Remote window size decreased to 130752
    DEBUG [PlainSocket] 6 Feb 2015 14:17:33.087 : RepeatCallback received 52 bytes
    DEBUG [SSH2Connection] 6 Feb 2015 14:17:33.087 : ProcessPacket pt=SSH_MSG_CHANNEL_DATA
    DEBUG [SFTPMessageFactory] 6 Feb 2015 14:17:33.087 : Received message (type=102,len=10)
    DEBUG [SFTPMessageStore] 6 Feb 2015 14:17:33.087 : AddMessage(13) - added to store
    DEBUG [SFTPMessage] 6 Feb 2015 14:17:33.087 : Reply : Name=SSH_FXP_HANDLE,Type=102,RequestID=13
    DEBUG [SFTPMessage] 6 Feb 2015 14:17:33.087 : Send : Name=SSH_FXP_READ,Type=5,RequestID=14
    DEBUG [SSH2Channel] 6 Feb 2015 14:17:33.087 : Transmit 26 bytes
    DEBUG [ChannelDataWindow] 6 Feb 2015 14:17:33.087 : Remote window size decreased to 130726
    DEBUG [PlainSocket] 6 Feb 2015 14:17:33.118 : RepeatCallback received 0 bytes
    DEBUG [SFTPChannelReceiver] 6 Feb 2015 14:17:33.118 : Connection closed:  (code=0)
    ERROR [SFTPMessageStore] 6 Feb 2015 14:17:33.118 : Disconnected unexpectedly ( [errorcode=0])
    ERROR [SFTPMessageStore] 6 Feb 2015 14:17:33.118 : EnterpriseDT.Net.Ftp.Ssh.SFTPException:  [errorcode=0]
    ERROR [SFTPMessageStore] 6 Feb 2015 14:17:33.118 :    at EnterpriseDT.Net.Ftp.Ssh.SFTPMessageStore.CheckState()
    ERROR [SFTPMessageStore] 6 Feb 2015 14:17:33.118 :    at EnterpriseDT.Net.Ftp.Ssh.SFTPMessageStore.GetMessage(Int32 requestId)

    I believe there is a limitation on FTP, and what you are seeing is a timeout built into the third-party application that Tidal uses (I feel like it was hardcoded and would be a big deal to change, but this was before Cisco purchased Tidal). There may have been a tagent.ini setting that tweaks that, but I can't find any details.
    We wound up purchasing our own FTP software (Ipswitch MOVEit Central & DMZ) because we also needed to host as well as get/put to other FTP sites. It now handles all our FTP and internal file delivery activity (we use its API and call it from Tidal if we need to trigger a transfer inside a workflow).

  • HP Officejet100 Mobile L411a intermittent Bluetooth printing problems with PDF files

    Hello,
    Our company provides computer-hosted medical devices controlled by an application running on Windows7 computers we supply. We are currently using the HP Officejet 100 as a Bluetooth printer. In some of our customer offices, we experience printing problems using the printer in Bluetooth mode (there are never problems if the printer is connected by a USB cable.)
    The printing problems typically affect PDF files printed through Adobe Reader 10, and the symptom is one of the following:
    1.  Send a print job to the printer. It doesn't print, and eventually the print queue shows an "Error Printing" message. If any print jobs are removed and the printer is power-cycled, the problem goes away for the time being.
    2.  Same as above, but power-cycling does not cure the problem and nothing will print, even a test page.
    In either of these cases there are no problems if a USB cable is used.
    The PDF files we are trying to print are typically 5 pages with 3 of the pages having a lot of graphics and the remainder being text only.
    Has anyone seen this behavior and is there anything we can do about it?
    Thanks.

    I don't know anything about this printer, but I do know that Bluetooth is really flaky with large files since it has a low transfer rate; it could be that the transfer is dropping and freezing the printer up. Does it have encryption turned on?

  • Error: There is a problem with the file and it cannot be copied

    I've been trying to copy (and essentially move) the contents of an NTFS-formatted external HDD to my iMac's internal HDD so I can then format the external HDD to Mac OS Extended. However, when I simply try to drag and drop, I get an error during the transfer that states:
    There is a problem with the file and it cannot be copied.
    I tried a basic cp command in Terminal to copy all contents of the external HDD to a folder on my iMac's desktop, and found that while there were no errors, many individual files were missing large chunks of data (i.e., a file that is 4 GB on my external HDD is only 350 MB on my desktop).
    Any ideas on how I can successfully copy a large amount of data (approx. 170GB) from my external HDD to my internal HDD while avoiding this error, so I can ultimately format my external HDD to Mac OS Extended? ANY help is greatly appreciated.

    That's not a good error to see. It indicates something is very wrong. Pulled out of an old programming header file:
    ioErr = -36, /*I/O error (bummers)*/
    If Apple labelled it "bummers," they had a good reason! Unfortunately, that doesn't bode well for you.
    Try running Disk Utility again. Keep repairing over and over until one of two things happens: 1) Disk Utility says no repair was needed, or 2) Disk Utility reports the same error in two sequential repair sessions and is unable to repair it both times.
    If you hit the second case, or if you hit the first but still can't copy files, then you've got a few basic options:
    = Buy a third-party disk utility or two and try them. Try TechTool as a first choice.
    = Recover what files you can and write the rest off as gone.
    = Send your drive to a data recovery service and hope they can extract more than you can.
    Of course, none of this is necessary if you have a backup of the contents of that hard drive. (If you don't, this is your learning experience. Once bitten, twice shy, so they say.)
    Also, regardless of the outcome, once you've got your data or have decided it's gone, you're going to want to wipe that drive completely clean. Reformat the drive with Disk Utility, then when it's done, select the drive in Disk Utility and hit command-i. (Don't select the new volume you just created on that drive, select the drive itself. Mine looks like "232.9 GB Hitachi ..." with the volume name indented underneath.)
    Look for an item that says S.M.A.R.T. Status, and if it doesn't say Verified, you might as well throw out the drive. Don't trust any more data to it.
    If all appears safe, you can start moving data back onto it. But, as always, make sure you have a backup of everything!

  • Photoshop CS6 keeps freezing when I work with large files

    I've had problems with Photoshop CS6 freezing on me and giving me RAM and scratch disk alerts/warnings ever since I upgraded to Windows 8. This usually only happens when I work with large files; however, once I work with a large file, I can't seem to work with any file at all that day. Today, though, I received my first error in which Photoshop says that it has stopped working. I thought that if I posted the event info about the error, it might help someone figure out how to help me. The log info is as follows:
    General info
    Faulting application name: Photoshop.exe, version: 13.1.2.0, time stamp: 0x50e86403
    Faulting module name: KERNELBASE.dll, version: 6.2.9200.16451, time stamp: 0x50988950
    Exception code: 0xe06d7363
    Fault offset: 0x00014b32
    Faulting process id: 0x1834
    Faulting application start time: 0x01ce6664ee6acc59
    Faulting application path: C:\Program Files (x86)\Adobe\Adobe Photoshop CS6\Photoshop.exe
    Faulting module path: C:\Windows\SYSTEM32\KERNELBASE.dll
    Report Id: 2e5de768-d259-11e2-be86-742f68828cd0
    Faulting package full name:
    Faulting package-relative application ID:
    I really hope to hear from someone soon. My job requires me to work with Photoshop every day, and I run into errors and bugs almost constantly; all of the help I've received so far from people in my office doesn't seem to make much difference. I'll be checking in regularly, so if you need any further details or need me to elaborate on anything, I should be able to get back to you fairly quickly.
    Thank you.

    Here you go Conroy.  These are probably a mess after various attempts at getting help.

  • Problems with large scanned images

    I have been giving Aperture another try since 1.1 came out, and I am still having problems with large TIFF files derived from scanned 4x5 negatives. The files are 500 MB or more, 16-bit RGB, with ProPhoto RGB or Ektaspace PS5 profiles, directly out of the scanner.
    Aperture imports the files correctly and shows their thumbnails. When I select a thumbnail, "Loading" is displayed briefly, and then the dreaded "Unsupported Image Format" is displayed. Sometimes "Loading" goes on for a while, and a geometric pattern (looking like a rendering of random memory) is displayed. Restarting Aperture doesn't help.
    Lower-resolution (250 MB, 16-bit) files are handled properly. The scans are from an Epson 4870 scanner. I have tried pulling the scans into Photoshop and resaving with various TIFF options, and as PSD, with no improvement. I have the same problem with corrected/modified PSD files coming out of Photoshop CS2.
    I am running on a Power Mac G5 dual 2 GHz with 8 GB of RAM and an NVIDIA GeForce 6800 GT DDL (250 MB) video card, with all the latest OS and software updates.
    Has anyone else had similar problems? More importantly, is anyone else able to work with 500 MB files of any kind? Is it my system, or is it the software? I sent feedback to Apple as well.
    dual g5 2ghz   Mac OS X (10.4.6)  

    I have a few (well, actually about 100) scans on my system of >500 MB. I tried loading a few and am getting an inconsistent pattern of errors that correlates with what you are reporting.
    I imported 4 files and three were troubled; the fourth was OK. I imported another four files and the first one was OK while the three others had your reported error; also, the previously good file from the first import was now showing the same 'unsupported image' message.
    I would venture to say that if you shoot primarily 4x5 and work with scans of this size, Aperture is not the program for you--right now. I shoot 35mm and have a few images that I have scanned at 8000 dpi on my Imacon 848, but most of my files are in the more reasonable 250 MB range (35mm @ 5000 dpi).
    I will probably downsample my 8000 dpi scans to 5000 dpi and not worry too much about it. In a world where people believe that 16 megapixels is hi-res, you are obviously on the extreme side. (Good for you!) You should definitely file a bug report, but I wouldn't expect much help anytime soon for your super-sized scans.

  • Wpg_docload fails with "large" files

    Hi people,
    I have an application that allows the user to query and download files stored in an external application server that exposes its functionality via webservices. There's a lot of overhead involved:
    1. The user queries the file from the application and gets a link that allows her to download the file. She clicks on it.
    2. Oracle submits a request to the webservice and gets a XML response back. One of the elements of the XML response is an embedded XML document itself, and one of its elements is the file, encoded in base64.
    3. The embedded XML document is extracted from the response, and the contents of the file are stored into a CLOB.
    4. The CLOB is converted into a BLOB.
    5. The BLOB is pushed to the client.
    Problem is, it only works with "small" files, less than 50 KB. With "large" files (more than 50 KB), the user clicks on the download link and about one second later gets:
    The requested URL /apex/SCHEMA.GET_FILE was not found on this server
    When I run the webservice outside Oracle, it works fine. I suppose it has to do with PGA/SGA tuning.
    It looks a lot like the problem described at this Ask Tom question.
    Here's my slightly modified code (XMLRPC_API is based on Jason Straub's excellent [Flexible Web Service API|http://jastraub.blogspot.com/2008/06/flexible-web-service-api.html]):
    CREATE OR REPLACE PROCEDURE get_file ( p_file_id IN NUMBER )
    IS
        l_url                  VARCHAR2( 255 );
        l_envelope             CLOB;
        l_xml                  XMLTYPE;
        l_xml_cooked           XMLTYPE;
        l_val                  CLOB;
        l_length               NUMBER;
        l_filename             VARCHAR2( 2000 );
        l_filename_with_path   VARCHAR2( 2000 );
        l_file_blob            BLOB;
    BEGIN
        SELECT FILENAME, FILENAME_WITH_PATH
          INTO l_filename, l_filename_with_path
          FROM MY_FILES
         WHERE FILE_ID = p_file_id;
        l_envelope := q'!<?xml version="1.0"?>!';
        l_envelope := l_envelope || '<methodCall>';
        l_envelope := l_envelope || '<methodName>getfile</methodName>';
        l_envelope := l_envelope || '<params>';
        l_envelope := l_envelope || '<param>';
        l_envelope := l_envelope || '<value><string>' || l_filename_with_path || '</string></value>';
        l_envelope := l_envelope || '</param>';
        l_envelope := l_envelope || '</params>';
        l_envelope := l_envelope || '</methodCall>';
        l_url := 'http://127.0.0.1/ws/xmlrpc_server.php';
        -- Download XML response from webservice. The file content is in an embedded XML document encoded in base64
        l_xml := XMLRPC_API.make_request( p_url      => l_url,
                                          p_envelope => l_envelope );
        -- Extract the embedded XML document from the XML response into a CLOB
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/methodResponse/params/param/value/string/text()').getclobval(), 1 );
        -- Make a XML document out of the extracted CLOB
        l_xml := xmltype.createxml( l_val );
        -- Get the actual content of the file from the XML
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/downloadResult/contents/text()').getclobval(), 1 );
        -- Convert from CLOB to BLOB
        l_file_blob := XMLRPC_API.clobbase642blob( l_val );
        -- Figure out how big the file is
        l_length    := DBMS_LOB.getlength( l_file_blob );
        -- Push the file to the client
        owa_util.mime_header( 'application/octet', FALSE );
        htp.p( 'Content-length: ' || l_length );
        htp.p( 'Content-Disposition: attachment;filename="' || l_filename || '"' );
        owa_util.http_header_close;
        wpg_docload.download_file( l_file_blob );
    END get_file;
    /
    I'm running XE; PGA is 200 MB, SGA is 800 MB. Any ideas?
    Regards,
    Georger

    Script: http://www.indesignsecrets.com/downloads/MultiPageImporter2.5JJB.jsx.zip
    It works great for files up to ~400 pages; when I have more pages than that, I get a crash at around page 332.
    Thanks

  • Fop 0.19 problems with small files?

    Hello!
    I have a problem with the fop 0.19 serializer: when I try to generate PDF files smaller than about 10 kB, IE does not open Acrobat Reader, and sometimes it even freezes. With larger files, it works OK.
    My configuration:
    Tomcat 3.2
    XDK 9.0.2.0b
    fop 0.19
    Thanks in advance!
    Vedran

    Hi Eric/Steve,
    I have downloaded the XML Development Kit for Java (9.0.2.0.0C beta) and created a new XSQL RTF serializer class according to the XSQLSampleSerializer included in the xsqlserializer.jar file. I also included Eric's CustFOP serializer.
    I can compile my new class with JDeveloper, but when I recreate xsqlserializer.jar and try to use this jar file with the XSQL servlet, I get an error.
    The error is "XSQL-022: cannot load serializer class oracle.xml.xsql.serializers.XSQLxxxSerializer", with xxx being my own serializer, but it also occurs when I use the original FOP serializer. So I think the jar file is not correct.
    Can anyone tell me how to recreate xsqlserializer.jar, or tell me what else I have done wrong?
    (I also posted this question in the JDeveloper forum.)

  • RESOLVED - K8N Neo2 Platinum problem with large Seagate disk

    Hi All,
    I've been running this system for 4 years and I'm very happy with it. But I've run into a problem:
    I'm trying to exchange my 120 GB Seagate Barracuda system disk for a 500 GB Barracuda, and I can't install XP with SP2.
    First I tried to clone the old HD with Norton Ghost and Acronis True Image. The cloning looked fine but after taking the old HD out and moving the new one to P-ATA 1 as Primary Master I get an "error - no operating system found". I also tried a fresh install from CD. After the CD reboots the system I get the same error.
    The HD is present in BIOS with the right parameters and all is looking well - but still no go.
    I've fiddled around in the BIOS (vers. 1.3) with the different settings (LBA, Large, Auto, CHS). The last two options give the "no operating system" error and the first two give a "disk error", although Seagate says that I should change the setting to "LBA".
    I only have the new HD in the machine. I also tried fixboot and fixmbr. I can see all the installed files when I put it in as a slave.
    So my question is:
    Does the K8N Neo2 Platinum have a problem with large hd's as system disk?
    Or is this a Seagate or Windows problem ?
    Any help is much appreciated!
    Mads

    Actually, that was the first thing I did, just to see if the drive worked :-) - except that I had it as master on the second IDE cable. It formats like it should and all partitions report "healthy". The only difference is that the new drive's C partition is set as active (I can't find a way to change it back), while the old one is set as system. Could this make a difference?
    Since neither cloning nor a fresh install works, I think the problem is something else. Maybe I should try SeaTools and see if that makes any difference.
    Thanks, all, for the help so far! Other suggestions?

  • Problems with .ARW files and auto toning

    Let me try to explain this, because it has happened in the past and I never found a way to resolve it, so I lived with it. Now that I have a Sony A7R, the problem is more serious.
    Firstly, I take pride in making the picture happen all in camera. I use DRO level 5 to get enough light, for example when I'm shooting at dusk. DRO is like doing HDR but in a single file; it lightens the darks. In camera, I'm happy with the results, but when I upload the images to Lightroom, they come out nearly black.
    Allow me to explain. Let's say I import 100 images. I double-check my preferences and everything is UNCHECKED when it comes to import options; there is no auto toning, nothing. As the images import, I see a preview in the thumbnail, which looks fine. I double-click on one to enlarge it, hence leaving grid view. For a brief 1 or 2 seconds I see the full image in all its glory, but then Lightroom does something funny: it darkens the image. One by one, as it inspects each image, if it was a DRO image it makes it too dark.
    To make this clear: the image is perfect as it was in the beginning, but after a few seconds Lightroom for some reason thinks it needs to correct it. How do I prevent Lightroom from doing this? I want the image exactly as it is; why must Lightroom apply a correction? I think it has something to do with interpreting the raw file, and Lightroom applies its own algorithm. But here is what I don't get: before Lightroom makes the change, I am able to see the picture exactly as it was taken, and I want it unchanged. Now I have to tweak each file or find a profile for it, which is added work.
    Any ideas how to prevent Lightroom from ruining my images and just leaving them as they were when first detected? There are two phases: one is when it originally imports them and they look fine; the second is when it scans each image and applies some kind of toning which darkens the image too much.
    Thanks for the help.

    Sorry, that's the auto-reply message from my Yahoo email; I've disabled it now.
    The thing is, there is no DRO JPEG to download from the camera; it's only ARW. So my understanding is that when I use the DRO setting, the camera makes changes to the ARW, and Lightroom somehow reads this from the ARW but then sadly reverts it to no DRO settings. I notice that if I take a normal picture in raw mode it's dark, but if I apply DRO to it, it comes out brighter. Yet when I download the image from the camera to Lightroom, it is an ARW (there are no JPEGs), and Lightroom decides to mess it up. So in reality there is no point in using DRO, because when I upload the file Lightroom removes it.
    Is there a way to tell Lightroom to preserve the JPEG preview as it first sees it? It's just lame: the picture appears perfect, then Lightroom does something, then bam, it's ruined. What do I need to do to prevent Lightroom from ruining the image, when it was good in the first place?

  • Problems with compressing files with right-click. It does not work.

    Problems with compressing files with right-click.
    I use the Compress function in Mac OS (File > Compress XX) from time to time. Today it does not work anymore. OS 10.5.6.
    I get a message: "The content list cannot be created for compressing."
    I tried it with files and folders and keep getting this message. Does anybody have any idea how to fix this?

    Thanks, I love my MacBook!!!!
    I also had further problems, such as copy-paste not working, etc., so I fixed it just this morning by running AppleJack 1.5 and am back up and running, having just noticed your post.
    Thanks for helping, though!

  • Problem with Image file

    Hi,
    I am facing one problem. I have a Swing interface through which I can upload files (the back end is a servlet program). I can upload all types of files, but there is a problem with image files: the upload seems to work, meaning the size of the uploaded file is OK, but its format is damaged and the file cannot be opened. My back-end servlet program is OK, because I tested it with an HTML form and it works perfectly. The problem is with the Swing interface. Please guide me to where I have made a mistake. Below is my code:
    ImageIcon Upload=new ImageIcon("images/Upload.gif");
         Button=new JButton(Upload);
         Button.setToolTipText("Upload");
    Button.addActionListener(new ActionListener()
    public void actionPerformed(ActionEvent e)
              int returnVal = fc.showOpenDialog(ActionDemo4.this);
              if (returnVal == JFileChooser.APPROVE_OPTION) {
              File file = fc.getSelectedFile();
    String aa=file.getAbsolutePath();
              textArea3.append(aa);
                   textArea2.append("Local URL:");
    long l=file.length();
              try
              byte buff[]=new byte[(int)file.length()];
              InputStream fileIn=new FileInputStream(aa);
              int i=fileIn.read(buff);
              String conffile=new String(buff);
              String str1=textArea10.getText();
    url = new URL ("http://127.0.0.1:7001/servletUpload?x="+str1);
         urlConn = url.openConnection();
         urlConn.setDoInput (true);
         urlConn.setDoOutput (true);
         urlConn.setUseCaches (false);
         urlConn.setRequestProperty("Content-Type","multipart/form-data;boundry=-----------------------------7d11e410e500f2");
         printout = new DataOutputStream (urlConn.getOutputStream ());
    String content ="-----------------------------7d11e410e500f2\r\n"+"Content-Disposition: form-data;"+"name=\"upload\"; filename=\""+aa+"\"\r\n"+"Content-Type: application/octet-strem\r\n\r\n\r\n"+conffile+"-----------------------------7d11e410e500f2--\r\n";
    printout.writeBytes(content);
    printout.flush ();
    printout.close ();
    Best Regards
    Bikash

    The errors are here:
              byte buff[]=new byte[(int)file.length()];
              InputStream fileIn=new FileInputStream(aa);
              int i=fileIn.read(buff);
              String conffile=new String(buff); (conffile is a String object containing the image)
    and here:
    String content ="-----------------------------7d11e410e500f2\r\n"+"Content-Disposition: form-data;"+"name=\"upload\"; filename=\""+aa+"\"\r\n"+"Content-Type: application/octet-strem\r\n\r\n\r\n"+conffile+"-----------------------------7d11e410e500f2--\r\n";
    printout.writeBytes(content);
    conffile is sent to the server, but it is not possible to treat binary data as a String!
    Image files must be sent as byte[], NOT as String.
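    For what it's worth, here is a minimal sketch of sending the file body as raw bytes instead of building a String (the boundary, URL handling, and form field name are illustrative assumptions, not the actual servlet's contract):
    import java.io.DataOutputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.net.URL;
    import java.net.URLConnection;
    public class BinaryUpload {
        public static void upload(File file, String urlString) throws Exception {
            // Illustrative boundary; any unique token works as long as header and body agree.
            String boundary = "---------------------------7d11e410e500f2";
            URL url = new URL(urlString);
            URLConnection conn = url.openConnection();
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
            try (DataOutputStream out = new DataOutputStream(conn.getOutputStream());
                 FileInputStream in = new FileInputStream(file)) {
                // The multipart headers are plain text...
                out.writeBytes("--" + boundary + "\r\n");
                out.writeBytes("Content-Disposition: form-data; name=\"upload\"; filename=\"" + file.getName() + "\"\r\n");
                out.writeBytes("Content-Type: application/octet-stream\r\n\r\n");
                // ...but the file body is copied as raw bytes, never through a String,
                // so the image data is not corrupted by character encoding.
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
                out.writeBytes("\r\n--" + boundary + "--\r\n");
                out.flush();
            }
            conn.getInputStream().close(); // forces the request to be sent; the response is ignored here
        }
    }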

  • out.println() problems with a large amount of data in a JSP page

    I have this kind of code in my jsp page:
    out.clearBuffer();
    out.println(myText); // size of myText is about 300 kB
    The problem is that I only sometimes manage to print the whole text. Very often the receiving page gets only the first 40 kB and then the printing stops.
    As a test, I split myText into smaller parts and out.print() them one by one:
    Vector texts = splitTextToSmallerParts(myText);
    for(int i = 0; i < texts.size(); i++) {
      out.print(texts.get(i));
      out.flush();
    }
    This produces the same kind of result: sometimes all parts are printed, but mostly only the first ones.
    I have tried increasing the buffer size, but that does not make the printing reliable either. I have also tried autoFlush="false" so that I flush before the buffer overflows; again the same result, sometimes it works and sometimes it doesn't.
    Originally I use a system where Visual Basic in Excel calls the JSP page. However, I don't think this matters, since the same problems occur if I use a browser.
    If anyone knows something about problems with large jsp pages, I would appreciate that.

    Well, there are many ways you could do this, but it depends on what you are looking for.
    For instance, generating an Excel Spreadsheet could be quite easy:
    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.io.*;
    public class TableTest extends HttpServlet{
         public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
              response.setContentType("application/xls");
              PrintWriter out = new PrintWriter(response.getOutputStream());
                    out.println("Col1\tCol2\tCol3\tCol4");
                    out.println("1\t2\t3\t4");
                    out.println("3\t1\t5\t7");
                    out.println("2\t9\t3\t3");
              out.flush();
              out.close();
          }
    }
    Just try this simple code, it works just fine... I used the same approach to generate a report of 30,000 rows and 40 cols (more or less 5 MB), so it should do the job for you.
    Regards

  • What is the exact problem with this file?

    Hi all,
    There is an old form which has not been used for many days.
    Now when we try to run the form, I get the error "FRM-40734: Internal Error: PL/SQL error occurred." in the login form.
    When I tried to open the .fmb file in Oracle Forms Builder 6i, I got the following error:
    FRM-10102: Cannot attach PL/SQL library d2kwutil. This library attachment will be lost if the module is saved.
    However, the .fmb file did open.
    The login button has the following code:
    DECLARE
      UNAME VARCHAR2(30);
      --USER_ID PARAMLIST;
      V_USER APUSERMA.USER_NAME%TYPE;
      V_PASSWED APUSERMA.USER_PASSWD%TYPE;
    BEGIN
    select user_CD INTO :GLOBAL.USER_ID from apuserma 
    where user_CD = :TI_USER_NAME AND user_PASSWD = :IT_USER_PASSWD
    AND SYSDATE BETWEEN USER_VALID_FRM AND USER_VALID_TO;
    :global.user_id  := substr(win_api_environment.read_registry('HKEY_LOCAL_MACHINE\system\currentcontrolset\control\computername\computername','computername'),1,10);
    :global.compname := :compname;
    compnm(:compname);
    --USER_ID := CREATE_PARAMETER_LIST('USER_id_NAME');
    call_form('Forms\MAIN_SCREEN',hide,DO_REPLACE);
    exception
      when no_data_found then
      MESSAGE('Incorrect Username or Password.  Please Re-Enter');
      message(' ');
      RAISE FORM_TRIGGER_FAILURE;
    END;
    EXIT_FORM;
    When I tried to compile, I got an error saying:
    Error 201 at line 10, column 28
    identifier 'WIN_API_ENVIRONMENT.READ_REGISTRY' must be declared.
    I cannot figure out what the exact problem with this file is.
    Please help me with this.
    Thank You.
    Oracle forms builder 6i.
    Oracle 9i.

    Vijetha wrote:
    I also want to know what the use of win_api_environment.read_registry('HKEY_LOCAL_MACHINE\system\currentcontrolset\control\computername\computername','computername') is.
    What does it do?
    If I comment out the following line, will it be a problem?
    :global.user_id  := substr(win_api_environment.read_registry('HKEY_LOCAL_MACHINE\system\currentcontrolset\control\computername\computername','computername'),1,10);
    Because I commented out the above line and compiled, it is not giving any error now.
    So please tell me what win_api_environment.read_registry does.
    It reads a Windows registry value, so there is no problem if you comment it out.
    Thanks
