Slow output

Hi forums!
I have a big problem with some code. I've really hacked this down to the simplest example possible for forum use. I have a file of stuff for the printer, averaging about 22 Kbytes. It's currently taking 16-17 seconds to print, and it hangs on the line
bos.write(buff);
Now, working from the port speed, which I've set in Windows at 128000 bit/s, this seems way too long. I calculate that it should take about 1.4 seconds?
I can write the same file to the parallel port and it's almost instant. I'm guessing I'm doing something wrong? Any help appreciated, thanks!
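For reference, my rough arithmetic (assuming 8-N-1 framing, i.e. about 10 bits on the wire per data byte): 22 * 1024 bytes * 10 bits / 128000 bit/s ≈ 1.76 seconds. So even allowing for start/stop-bit overhead it should finish in under 2 seconds, not 16-17.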
package paulie;
import gnu.io.CommPortIdentifier;
import gnu.io.CommPortOwnershipListener;
import gnu.io.NoSuchPortException;
import gnu.io.PortInUseException;
import gnu.io.SerialPort;
import gnu.io.UnsupportedCommOperationException;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;
public class WriteDemoTextToPrinter implements CommPortOwnershipListener {
     private OutputStream os;
     private BufferedOutputStream bos;
     private CommPortIdentifier portId;
     private SerialPort serialPort;
     private String portname = "COM8";
     public static void main(String[] args) {
          new WriteDemoTextToPrinter().run();
     }

     public void run() {
          try {
               portId = CommPortIdentifier.getPortIdentifier(portname);
               portId.addPortOwnershipListener(this);
               serialPort = (SerialPort) portId.open("MyPortApp2", 3024);
               serialPort.setSerialPortParams(128000, SerialPort.DATABITS_8,
                         SerialPort.STOPBITS_1,
                         SerialPort.PARITY_NONE);
               os = serialPort.getOutputStream();
               bos = new BufferedOutputStream(os);
               File f = new File("C:\\Program Files\\testProg\\printOutput");
               FileInputStream fi = new FileInputStream(f);
               byte[] buff = new byte[25000]; //file is average 22 Kbytes
               fi.read(buff);
               fi.close();
               bos.write(buff);
               bos.flush(); os.flush(); bos.close(); os.close(); serialPort.close();
          } catch (IOException e) {
               System.out.print("WriteDemoTextToPrinter:IO Exception Caught.");
          } catch (NoSuchPortException e) {
               e.printStackTrace();
          } catch (PortInUseException e) {
               e.printStackTrace();
          } catch (UnsupportedCommOperationException e) {
               e.printStackTrace();
          }
     }
     @Override
     public void ownershipChange(int arg0) {
          System.out.println("OwnershipChanged : " + arg0 + ": Current owner " + portId.getCurrentOwner());
     }
}

               byte[] buff = new byte[25000]; //file is average 22 Kbytes
As you are using buffered output and you are writing to a slow device, the size of the buffer is pretty immaterial. It doesn't need to be that large; 8k would be more than enough.
               fi.read(buff);
               fi.close();
               bos.write(buff);
               bos.flush(); os.flush(); bos.close(); os.close(); serialPort.close();
As noted above, this is just plain wrong: a single read() is not guaranteed to fill the buffer, so you must loop and write only the bytes actually read.
int count;
while ((count = fi.read(buff)) > 0)
  bos.write(buff, 0, count);
bos.close();
fi.close();
All the flushes and the other closes except maybe SerialPort.close() are redundant.
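For the OP, here is how run() might look with those fixes folded in (an untested sketch, reusing your existing fields and the same gnu.io setup, port name and file path from your post):

     public void run() {
          try {
               portId = CommPortIdentifier.getPortIdentifier(portname);
               serialPort = (SerialPort) portId.open("MyPortApp2", 3024);
               serialPort.setSerialPortParams(128000, SerialPort.DATABITS_8,
                         SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
               BufferedOutputStream bos =
                         new BufferedOutputStream(serialPort.getOutputStream());
               FileInputStream fi =
                         new FileInputStream("C:\\Program Files\\testProg\\printOutput");
               byte[] buff = new byte[8192]; // 8k is plenty for a buffered stream
               int count;
               // read() may return fewer bytes than the buffer holds, so loop
               // and write only the count of bytes actually read
               while ((count = fi.read(buff)) > 0) {
                    bos.write(buff, 0, count);
               }
               fi.close();
               bos.close(); // close() flushes the buffered stream first
               serialPort.close();
          } catch (Exception e) { // narrow to the specific gnu.io/IO exceptions in real code
               e.printStackTrace();
          }
     }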

Similar Messages

  • Slow console output in new JDEVELOPER

    Hi,
    I have a really trivial problem with new versions of JDEVELOPER (e.g. 9.0.3.1035 or
    9.0.3.1 build 1107) - slow console output.
    This OutputTest.java runs for approx. 30 seconds!!
    There is no such problem with e.g. JDEVELOPER 9.0.3.988 Preview. It takes only 0.2 seconds.
    Is there any workaround? Can you help me to solve the problem?
    Thanks,
    Fero
    public class OutputTest {
        public static void main(String[] args) {
            for (int i = 0; i < 1000; i++) {
                System.out.println("Very slow output" + i);
            }
        }
    }

    This issue has been fixed for a later release. I don't know of any workaround for you, except not writing so much to System.out and System.err.
    -Liz

  • Publisher Margins OK - Output Totally Different on 1518 LaserJet

    I have a 1518ni Color LaserJet. Excellent print quality, but many frustrations with this unit, including VERY SLOW output.
    Have created an envelope template in Publisher. Print Preview is perfect. Output on 1518 seems to operate with phantom margins -- the printed output is pushed a good 1 1/2 - 2" to the right. Attempting to force the content to the left in Publisher only removes it from the printable area and thus doesn't print at all.
    Am using the CP1510 PCL6 Series driver. Windows 7 OS. Have wasted much envelope stock and frustration mounts. Any ideas or advice would be EXTREMELY appreciated!

    Have since hooked up an old HP 1012 and tried printing to that -- output is properly placed, but the toner does not bond well to the envelope. This tells me that Publisher is probably functioning fine and that my 1510 series driver is probably not optimized for my 1518 printer. Help???

  • Slow xfce4-terminal (and all vte-based terminals)

    Hi everyone!
    So this is the problem: I have slow output in all vte-based terminals.
    I tried this command:
    time seq -f 'teeeeeeeeeeeeeeeeeeeeeeeeeeeeeest %g' 100000
    to measure the performance, and in xfce4-terminal I get about 7.6 secs, but in urxvt, for example, I get about 0.9 secs.
    Any suggestions? Thanks!
    PS: Sorry for my bad English.

    crew4ok wrote:
    goran'agar wrote: My suggestion is to uninstall xfce4-terminal and move to urxvt. Oh, well, that's what I did.
    If I don't get an answer, I'll do it that way.
    My suggestion was serious. All vte terminals are slow and there's nothing you can do to speed them up. urxvt is way faster.

  • XDCAM Render time increased 10x in Premiere CS6

    My company works with XDCAM HD and Red footage in Premiere, and we recently upgraded two of our three workstations to CS6. We love the upgrade, but we've experienced 10x slower output from Adobe Media Encoder with the XDCAM HD footage compared to Premiere CS5.5. We don't experience any slowdown in output time for Red 5K footage, but the XDCAM slowdown is killing us. We rolled back one of the workstations to CS5.5 and immediately saw the output time drop to 1.1x or 1.2x sequence time. In CS6, using the same sequence settings in Premiere and the same output settings in Media Encoder, we get more like 11x sequence time speed for output. We see the same slowdown in CS6 for rendering the work area for an XDCAM HD sequence in Premiere.
    We aren't getting any error messages or crashes, and there are no artifacts in the video during real time playback or in the .mp4s we're outputting. We had the same problem on the two workstations, and rolling the one workstation back to CS5.5 fixed the problem immediately. And again, our Red workflow is just as fast as it was in CS5.5.
    We'd strongly prefer to work in CS6, but we will fall behind our current project's contract deadline if we get output this slow. Has anyone else experienced the same slowdown, or does anyone have a suggestion for settings I could change to improve performance?

    Jim Simon wrote:
    Adobe is aware of the issue and working on a fix, but until that comes out the only solution is to go back to an earlier version of PP, or to convert the multiple spanned files into one file outside of PP (preferably using a lossless format like the UT codec, so no additional degradation occurs). Take note that you may lose timecode and other metadata if you go the conversion route.
    You might also try not using the NanoFlash and recording directly in camera, as not all spanned media is affected by the problem.
    I use this option for now at Jim's recommendation in another thread, and admittedly it's an extra step in your workflow to transcode to UT first, but you don't have to monitor the computer while it does it, and you are safely editing after that.
    Hoping for a fix soon, though.
    /Ulf

  • Third-party motor drive

    I am using Baldor's controller and amplifier. Can I use the LabVIEW environment to communicate with the controller? The controller has a serial port (RS232/RS485) and a CAN port. Is the speed enough for control?
    If I use NI's controller and Baldor's amplifier, and use a UMI as an interface, does UMI accept 24V digital signal?
    Any suggestion is very welcome.
    Thanks,

    There is no detailed documentation on the UMI's internal circuitry. But since it runs on 5VDC I have severe doubts that you can connect a 24V digital signal directly to the box - whether the digital I/Os are processed internally or (which is more probable) they are fed directly to the I/O pins of the 7344.
    The circuitry of the I/O pins of the 7344 is not described in detail. But to avoid any damage to the 7344 you should limit your signals to +5VDC max. The best method to avoid any burn-outs of the 7344 in case any level converters fail is to use optical isolation via optocouplers (+ recovery circuitry for the usually rather slow output signals of optocouplers). Also, the input signals must be tied to GND or have a voltage level of less than 0.8V (with reference to the GND pins of the 7344) so that a logical low signal can be properly detected.

  • PPBM is Back--Benchmarking Premiere Pro CS4.1

    Well after a long absence PPBM is back and is appropriately called PPBM4.  If you are not familiar with my previous Adobe Premiere Pro BenchMarking I have finally developed a new hardware benchmark specific to the current version of Premiere Pro.  As a matter of fact I do not advise using it on anything other than CS4 version 4.1.
    With this benchmark you can evaluate how well your hardware performs when actually using Premiere and as I get more feedback of results from forum users like you I will publish the results.  With that data you should be able to see how well your system is tuned, what hardware will improve your performance and/or help to specify a new system.
    There are three basic tests:
    I measure how long it takes to render the timeline with my "standardized" project.
    I measure how long the project takes to encode MPEG2-DVD
    I measure how long the project takes to encode to Microsoft DV AVI.
    Unfortunately it will not run on a Mac with the native OS.
    Even if you do not want to benchmark you should go out to this site to see how far Harm Millaard's super overclocked i7-920 system outperforms my previous generation dual Xeon 5400 series system.  Also, I have data on my system with identical hardware running on XP Pro 32, Vista 64 and Win 7 64 (RC).

    If you are using a single fast RAID array, then my tests show that all the files (project, media, scratch) should be on the RAID array; putting any one of them on a slower separate drive slows the benchmark. I have all my project files on a 500 MB/s average read rate array; if I put the Preview Files or the Output on even a ~200 MB/s RAID array, I get slower performance.
    If you prefer not to use RAID, then use the forum-recommended three drives. Take your choice (actually both are essentially the same):
    "One drive for OS/programs, one for media and one for pagefile/scratch/renders".
    "One for OS & Programs  One for Projects/Scratch  One for Media."
    Then, if you have faster and slower drives, which to use where depends on what you want to speed up. My opinion is to use your fastest single drive for the OS/applications. If you render the timeline and intend to use the Preview Files for encoding, you may want to put them on another fast disk. Then, when you are trying to optimize encoding from AME, this is a CPU-intensive operation and a slightly slower Output Files disk will probably work perfectly adequately. I would guess that media files (my benchmark does not have media files by design) would also benefit from fast disk access.
    Skeeze, when you say "rendering", are you referring to encoding/transcoding as done in AME, or are you referring to rendering the timeline as in hitting "Enter"?
    I am absolutely amazed at Michael's W5580 disk-intensive AVI encoding results with only two SAS drives; since it is also a brand new system I suspect that they are Seagate 15K.7 drives with sustained read transfer rates from 204 to 122 MB/sec. This one drive is equivalent to a pair of Seagate 7200.12 rpm drives in RAID 0!
    I would appreciate comments on my opinions above.

  • Outer Join or Row-Level Function

    Hi ALL,
    I have a query which involves five tables: T1, R1, R2, R3 and R4. T1 is a transaction table, whereas R1, R2, R3 and R4 are reference tables (parent tables, with foreign keys defined in T1). T1 always contains R1 and R2 referenced data, BUT T1 may sometimes contain NULL for R3 and R4.
    Now my question is simple:
    Should I use an OUTER join for R3 and R4 in the query? Like
    <code>
    select T1.col1, R1.col2, R2.col2, R3.col2, R4.col2
    from T1, R1, R2, R3, R4
    where T1.col2 = R1.col1
    and T1.col3 = R2.col1
    and T1.col4 = R3.col1(+)
    and T1.col5 = R4.col1(+)
    </code>
    OR
    Should I use row-level functions for R3 and R4, like
    <code>
    select T1.col1, R1.col2, R2.col2,
    (Select R3.col2 from R3 where R3.col1 = T1.col4),
    (Select R4.col2 from R4 where R4.col1 = T1.col5)
    from T1, R1, R2
    where T1.col2 = R1.col1
    and T1.col3 = R2.col1
    </code>
    Which approach is better, and why?
    Records in T1 = 2,000,000
    Records in R1 = 1000
    Records in R2 = 300
    Records in R3 = 1800
    Records in R4 = 200
    Please note that all foreign keys are indexed, and there are primary keys on all R tables.
    Thanks,
    QQ.

    dwh_10g wrote:
    I have preferred to go for outer joins, as there is a possibility that if the data grows there will be more row-level scans, hence slower output in future.
    If I go with a row-level scan, then there will be more usage of hash tables (memory), so if more memory is required and our SGA limit is crossed, then there will be more disk I/O, I guess, which is a costly operation.
    QQ,
    as already explained, unfortunately it's hard to predict how the "row-level" approach is going to perform in case of increased data volume. Since it is dependent on some factors it could be faster or slower than the outer join approach.
    You shouldn't worry too much about the size of the in-memory table. You haven't mentioned your version, but since your alias is "dwh_10g" I assume you're using 10g.
    In 10g the size is defined by the hidden parameter "_query_execution_cache_max_size" and set to 65536 bytes by default. In pre-10g the size is always 256 entries which might grow bigger than 64k in case you have large input/output values (e.g. large VARCHAR2 strings). Compared to potentially multiple several megabytes large sort and/or hash SQL work areas used by each open cursor this table seems to be relatively small.
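    (A worked illustration of why that cache matters, using my numbers rather than the poster's: R3 has only 1800 distinct keys, so once the scalar subquery cache warms up, most of the 2,000,000 probes from T1 can be answered from memory without re-executing the subquery. With far more distinct keys than the cache can hold, the row-level approach would degrade accordingly.)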
    I don't think that this memory is taken from the SGA, but belongs to the PGA of the process executing the query and therefore it should not allocate SGA memory (I think even in Shared Server configuration this part should be allocated from the PGA of the shared server process).
    So in summary, if your data volume increases it shouldn't have a noticeable impact on the memory usage of the "row-level" approach, and therefore I think this is negligible.
    But as a recommendation I would say: since the "row-level" approach doesn't seem to be significantly faster than the "outer join" approach, and its behaviour in future is hard to predict (e.g. it could become worse if your data pattern changes, or if the execution plan and therefore the order of row processing changes), I would stick to the "outer join" approach. I assume it could be more stable in terms of performance.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Re: Premiere Pro CC taking FOREVER to export

    I'm having the same issues with super slow output on CC.
    I had a SpeedGrade round-trip file that I brought back into Premiere Pro CC with color grading applied. The output dialog box (Media Encoder CC) said it would take 60+ hours to output a 2-hour clip with ProRes (LT). So, thinking it must be the SpeedGrade round-trip, I did a test with a very short and simplified piece and no SpeedGrade involved, only the Fast Color Corrector as a filter. The 5 minutes took 1.5 hours to output as ProRes (LT).
    Then I opened the same project in Premiere Pro CS6, and the same 5 minutes - exactly the same in every way - took less than 10 minutes to output as ProRes (LT). There is definitely something wrong with CC...
    I'm on a MacBookPro 2.3 GHz Intel Core i7, 8GB RAM, OSX 10.7.5

    Hi mjgmacromedia,
    Please check the forum link mentioned below and see if the problem gets resolved using the steps provided. Also, please mention the make and model of the graphics card installed on your machine.
    http://forums.adobe.com/message/6265065#6265065
    Regards,
    Vinay

  • Output report XML very slow

    Hi,
    I designed a report whose output includes 40 columns and 100,000 rows. However, the request runs slowly and the output opens very slowly in XML format on the client.
    Can you suggest an optimal solution to the problem above?
    (Version R12)
    Thanks a lot!

    Please post the details of the application release, database version and OS.
    Hi,
    I designed a report whose output includes 40 columns and 100,000 rows. However, the request runs slowly and the output opens very slowly in XML format on the client. Can you suggest an optimal solution to the problem above?
    Are you running the report from the application?
    Is the issue with processing the request or just with opening the output file?
    What is the size of the concurrent request output file?
    R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues [ID 1410160.1]
    EBS - Technology Area - Webcast Recording 'BI Publisher / XML Publisher Overview & Best Practices - Webinar' [video] [ID 1513592.1]
    Poor Performance /High Buffer Gets In FND_CONC_TEMPLATES [ID 1085293.1]
    Performance Issue - PDF Generated With BI Publisher [ID 956902.1]
    Overview of Available Patches for Oracle XML Publisher embedded in the Oracle E-Business Suite [ID 1138602.1]
    Tuning Output Post Processor (OPP) to Improve Performance [ID 1399454.1]
    Thanks,
    Hussein

  • Left headphone audio output is slow as compared to right ear phone

    I have a MacBook Pro, and over the last few days the left headphone audio output has become very slow compared to the right headphone audio output. I have tried multiple headphones but no luck, so I think it's a problem with the audio jack.
    Did anyone experience the same problem? Kindly please guide.
    Thanks in advance.

    This is a new one for me; the left one is slower?
    Try a PRAM reset:
    Power off, then power back on holding the following keys before the startup chime:
    option, command, P, R (no commas); continue to hold these keys till you hear the start chime 2 times, then release.
    Reboot, see if this helps.

  • Adobe Captivate Outputting/Rendering Slow, Sluggish, and Choppy Video when ScreenCasting

    When I screencast my desktop and then output/render the video that Adobe Captivate recorded, the video is always so choppy, sluggish, and slow.
    I set the frames per second to 80 and interval to 1, 5, and 10, and still choppy video.
    My system specifications are as follows:
    Operating System: Microsoft Windows 7 Professional 64-bit SP1
    CPU: Intel Core i7 @ 3.40 GHz | Sandy Bridge 32nm Technology
    RAM: 16.0 GB Dual-Channel DDR3 @ 665MHz
    Graphics Card: AMD Radeon HD 6600 Series
    Is there something that I am doing wrong during/after I record?
    Any help would be awesome!
    Thank you!

    Hey Anjaneai,
    I mean slow as in my cursor movements are not as smooth as I expected with this software.
    I have chosen the Video Demo recording mode.
    I did check the option to show "Actions in Real Time" from Recording, and it only applies to audio; I am speaking of video. I tried comparing with Software Demo, but it's impossible to compare, seeing as Adobe Captivate generates the mouse for you.
    I thought that the higher the Frame Rates the smoother the experience.
    Thank you!
    Aaron

  • Repairing Permissions in Disk Utility (10.7 MBPro 2.4GHz), but still slow! Any advice upon reading my output?

    Current scenario:
    2-yr old up-to-date Lion 10.7 on my MBPro (Intel Core 2 Duo 2.4GHz, RAM upgraded to 6GB) is suddenly showing sluggish startup and opening applications very slowly (non-Pro Apps and Pro Apps alike). When multiple apps are open simultaneously (simple stuff like Safari, Photo Booth, etc.) and I have a handful of tabs open in Safari, I am getting slow response in every category, from the time it takes to open an application all the way down to simple keystroke delays. This is becoming really annoying and having a direct negative effect on my ability to be efficient and multitask! So I zap the PRAM a few times using the startup key combo, and try running Disk Utility to repair permissions. But alas, I seem to get the same thing each time after repairing/verifying permissions a couple of times… my output is pasted below.
    My QUESTION: I know from articles I've read on this discussion board that I can all but ignore output lines like "ACL found but not expected on…", but is there anything in the output I've pasted below that I should be worried about? Anything that might be pointing to a bigger problem?
    I also start up in single-user mode and run /sbin/fsck -fy once every 10 days or so… is this the right thing to do? My overall speed and efficiency is not improving. Please lend me your infinite knowledge, apple gurus. Thanks!
    Verifying and repairing partition map for “ST9250315ASG Media”
    Repairing permissions for “Macintosh HD”
    Warning: SUID file “System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/MacOS/ARDAgent” has been modified and will not be repaired.
    Group differs on “Library/Preferences/com.apple.alf.plist”; should be 80; group is 0.
    Repaired “Library/Preferences/com.apple.alf.plist”
    ACL found but not expected on “private/var/root/Library”
    Repaired “private/var/root/Library”
    ACL found but not expected on “private/var/root/Library/Preferences”
    Repaired “private/var/root/Library/Preferences”
    Permissions repair complete

    You can ignore the SUID messages (see "Mac OS X: Disk Utility's Repair Disk Permissions messages that you can safely ignore"); they have no effect on speed.
    Not enough free space on the startup disk can slow the system down.
    Click your Apple menu icon top left in your screen.
    From the drop down menu click About This Mac > More Info then select Storage from the menu.
    Make sure there's never less than 15% available disk space.

  • How to properly timewarp 59.94 footage for slow motion effect for 29.97 output

    I have footage filmed in 59.94 that I want to use for slow motion effects and output to 29.97. Using timewarp I seem to get duplicate frames at times. What would the best workflow to accomplish the cleanest result? I've tried interpreting the footage at 29.97 and keeping it at its native 59.94. At this point I feel like I'm just confusing myself further by just trying random things. Thanks!

    Always interpret footage as what it really is. If it is really 59.94 progressive then that's what it should be interpreted as. It will play at the right speed in any comp of any frame rate. IOW, if the footage is 10 seconds and you put it in a 99fps comp and play it back at full frame rate the clip will still be 10 seconds long. If you put it in a comp that's 15 frames per second, the clip will still be 10 seconds long.
    Make sure you understand that.
    There is a chance, and a good chance, that your footage is actually full frame 59.94 fps video. That's why I suggested putting it in a comp that's also 59.94 fps and checking for duplicate frames.
    There is also a chance that your footage is actually 29.97 fps interlaced. If you separate fields and look at the footage in a 59.94 fps comp you won't see any duplicate frames.
    To change the speed of the clip you just need to apply Time Remapping, Time Warp, or any of the other time tools in AE. It doesn't make any difference what the frame rate of the composition is; you'll be changing the length of the clip.
    To keep from creating duplicate frames you either need software that interpolates the new, extra frames, or you need to use an exact whole-number multiple of the frame rate for rendering.
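    (A worked example, not from the thread: 59.94 fps source slowed to exactly 50% and rendered at 29.97 fps maps one source frame to each output frame, since 59.94 x 0.5 = 29.97, so nothing needs to be duplicated or synthesized.)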
    Where did you get the footage? What Camera? What are you trying to do with it? What is the final destination (delivery) for your project.

  • Extremely slow variable window output

    Hello,
    exporting data from the array view in CVI 2013 may be extremely slow. For example, with data as illustrated below (an array of 6016 elements), I used the menu command Output / ASCII to save the array to a file on the local hard disk. It took more than five minutes...
    Solved!
    Go to Solution.

    Hi and thanks for confirmation.
    'Known issue' means 'known to NI', so let me suggest updating the list of known issues; the last two months have seen quite a few new CARs.
