File size deviation between Finder and QuickTime

Today I encountered a weird phenomenon:
Some QuickTime .movs show a different file size in the Finder (e.g. 69.3 MB) than the same clip reports in QuickTime (Command-I says 34.71 MB).
This happens if the clip was opened, truncated, and saved (under the same name).
It still occurs after the computer is restarted.
Any idea what this may be?

Hi
According to the QuickTime help, after you delete parts of a movie, the file size stays the same until you choose File > Save As and select "Save as a self-contained movie". I'm only guessing, but this may be to enable the deletion to be undone, even after the file has been saved.
I guess the Finder is reporting the raw file size, whereas QuickTime is reporting the size of the edited clip.
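If you want to confirm that the Finder's figure is the raw byte count of the file on disk (edited-out media included), a quick script makes the comparison explicit. A minimal sketch, assuming a hypothetical path:

    import os

    path = "/Users/me/Movies/clip.mov"  # hypothetical path to the truncated movie
    st = os.stat(path)
    print(f"raw file size: {st.st_size / 1e6:.2f} MB")              # what the Finder reports
    print(f"allocated on disk: {st.st_blocks * 512 / 1e6:.2f} MB")  # 512-byte blocks actually used

If the raw size stays at roughly twice what QuickTime reports (as in the 69.3 vs. 34.71 MB example above), the deleted media is still in the file, and "Save as a self-contained movie" should shrink it.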

Similar Messages

  • File size differences between cp3 and cp4

    Hello
    I created a .cp file in Captivate 4, imported an audio file (MP3), and published it.
    The SWF file size was 305 KB.
    But when I created the same file in Captivate 3 and published it, the SWF file size was 260 KB.
    Why is the SWF file size smaller in Captivate 3 than in Captivate 4?
    Both samples use default settings.
    Thanks for any help.

  • File size difference between DNG Converter and Lightroom Beta 4

    Hi,
    I want to go the all-DNG route and am trying several things at the moment. I want my files to be as small as possible, so I disable previews and RAW embedding and enable compression in DNG Converter. In Lightroom, there are no options at all. What I do get are pretty amazing file size differences:
    Original .NEF as it came from my D70s: ~5MB
    .DNG created by DNG Converter: ~1MB
    .DNG created by Lightroom: ~4MB
    The very small file size in DNG Converter is the one that bothers me most. I get these small files from time to time. I checked both the DNG and the NEF in Photoshop, and they seem to be identical. So my question is: What triggers these small file sizes? Do I lose anything? Or is the Lightroom DNG converter not as advanced as the standalone version?
    Maybe this helps: I get the ridiculously small files for very dull subjects, which tells the computer scientist in me that they should be easily compressible by common compression algorithms.
    Thanks for any pointers,
    Markus

    Thanks for the hint! It did make me revisit those files, and now I see the reason for the small file sizes: the Apple Finder does not update the file size view once a file has been added to a folder. Here's what I did:
    Opened a folder full of .NEFs in detail view in the Finder.
    Converted them using DNG Converter.
    Looked at the sizes of the files as they were shown in the Finder window that was already open.
    Unfortunately, those file sizes are not correct. If I open a new Finder window on the same folder, the file sizes are correctly reported as between 3.5 and 5 MB.
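    Until the Finder view refreshes, a small script is a reliable way to read the real sizes. A minimal sketch in Python, assuming a hypothetical folder path:

        import os

        folder = "/Users/me/Pictures/converted"  # hypothetical folder of converted files
        for entry in sorted(os.scandir(folder), key=lambda e: e.name):
            if entry.name.lower().endswith(".dng"):
                # stat() reads the size from the filesystem, bypassing any stale Finder view
                print(f"{entry.name}: {entry.stat().st_size / 1e6:.1f} MB")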

  • File size difference between version 3 and 4

    I'd like to know how to publish a file at the smallest possible size with Captivate 4.
    I have one file that is 9176 KB published with Captivate 3. The same file published with Captivate 4 becomes 10300 KB. I didn't add any functionality; I just published it once it was saved in 4.
    Why is the same content ~1 MB bigger with the new version? How do I make it smaller?
    Thanks,

  • File Size Discrepancy Between Photoshop & the Finder

    I'm trying to be as brief as I can, so here goes. The specific application (PS) is irrelevant, I think. This is about why an app shows one file size & the Finder shows a different file size. In this case, it's a huge difference, due to the file being an image.
    I imported into PS CS, from a CD, an original image, which the Finder shows as 269.4 MB. The file format is TIFF, and the bit depth is 16, not 8. The Finder shows it as a "TIFF Document." Now, I did a Save As and edited that as a master image file. So I have two files: the original and the master.
    I substantially cropped (deleted) pixels in the master file. So, at the same 16-bit depth, the master file should be smaller in size than the original. Right? However, the Finder shows the file to be 433.6 MB in size! Photoshop shows the file to be a more realistic 185.8 MB in size. Why is the Finder showing such a huge file size? Why is the Finder storing 247.8 MB more than I need? The Finder shows this file as an "Adobe Photoshop TIFF file," so there has been a change in format. The file is flattened; no layers, etc., are involved.
    One clue could be that the Finder is storing the larger file size to accommodate Photoshop. If one multiplies 185.8 MB by 3, the result is close to the 433.6 MB figure. The 3 stands for the three color channels (red, green, blue) of each pixel (data element) in the image.
    The original image, however, is stored correctly by the Finder. Photoshop and the Finder agree on the 269.4 MB file size. If the above scenario were true, the Finder would be storing the original file at three times the size as shown in Photoshop. In other words, there would be consistency in what the Finder is doing.
    I suppose I could just ignore the discrepancy, but I have hundreds of images to process, and I don't want to have to go into PS every time to get a true reading of file sizes. The Finder should be accurate in doing that.
    I may be in the wrong forum re: Photoshop, but here I think I can find some expertise re: the Finder, since the Finder's storing procedures are in question, to my mind. It's definitely an app/OS interface problem, as I see it. Simply, I edit a file downward in data, save it, yet the Finder saves it at a larger size.
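    The ×3 clue above is easy to sanity-check with pixel arithmetic; the dimensions below are hypothetical, but the formula for uncompressed image data is standard:

        # uncompressed data size = width * height * channels * bytes per channel
        width, height = 5600, 5800   # hypothetical pixel dimensions of the cropped master
        channels = 3                 # R, G, B
        bytes_per_channel = 2        # 16-bit depth
        size_mb = width * height * channels * bytes_per_channel / 2**20
        print(f"expected raw size: {size_mb:.1f} MB")  # ~185.9 MB for these dimensions

    A file much larger than that figure is carrying extra data beyond the flattened pixels, which is what makes the 433.6 MB reading suspicious.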

    ...do you think a lot of cloning & healing brush might have added to the file size, even though I cropped the image?
    Yes, depending on your History settings. The more you work on an image, the more history it accumulates. The more different states and snapshots you save in the History palette, the bigger the file gets as you work on it, because you're storing (within the file) complete information about the file's state before and after every individual change you make to it. What I don't recall is whether that all gets saved to the file in a Save As, or whether the history is flushed each time the file is saved.
    I should warn you that I am by NO stretch of the imagination a PS expert. I was still using PS 5.0.2 until last February, when I upgraded to CS2 (knowing it will be years before I have enough hardware horsepower to run CS3). I'm a rank beginner with CS2, and if someone else wants to jump in here and point out that I'm all wrong, it will be no surprise to me. And because I never used CS, I don't know whether what I'm describing in CS2 is even relevant here.

  • 4GB File Size Limit in Finder for Windows/Samba Shares?

    I am unable to copy a 4.75 GB video file from my Mac Pro to a network drive with an XFS file system using the Finder. Files under 4 GB can be dragged and dropped without problems. The drag-and-drop method produces an "unexpected error" (code 0) message.
    I went into Terminal and used the cp command to successfully copy the 4.75 GB file to the NAS drive, so obviously there's a 4 GB file size limit that the Finder is imposing?
    I was also able to use QuickTime and save a copy of the file to the network drive, so applications have no problem, either.
    The XFS file system supports terabyte-size files, so this shouldn't be a problem on the receiving end, and it's not, as the Terminal copy worked.
    Why would they do that? Is there a setting I can use to override this? Google searching found some flags to use with the mount command in Linux terminal to work around this, but I'd rather just be able to use the GUI in OS X (10.5.1) - I mean, that's why we like Macs, right?
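    Any programmatic copy appears to bypass the Finder's limit here. For completeness, a scripted copy works the same way the Terminal cp does; the paths below are hypothetical:

        import shutil

        # Like cp, this bypasses the Finder's copy engine entirely
        shutil.copyfile("/Users/me/Movies/big_capture.mov",
                        "/Volumes/nas/big_capture.mov")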

    I have frequently worked with 8 to 10 gigabyte capture files in both OS 9 and OS X, so any limit does not seem to be in QT or in the Player. 2 gig limits would perhaps be something left over from pre-OS 9 versions of your software, as there was a general 2 gig limit in those earlier versions of the operating system. I have also seen people refer to 2 gig limits in QT for Windows, but never in OS 9 or later Mac OS.

  • 45 min long session of log file sync waits between 5000 and 20000 ms

    45 min long log file sync waits between 5000 and 20000 ms
    Encountering a rather unusual performance issue. Once every 4 hours I am seeing a 45-minute-long log file sync wait event being reported using Spotlight on Oracle. For the first 30 minutes the event wait is approx 5000 ms, followed by an increase to around 20000 ms for the next 15 minutes, before rapidly dropping off; normal operation then continues for the next 3 hours and 15 minutes before the cycle repeats itself. The issue appears to maintain its schedule independently of restarting the database. Statspack reports do not show an increase in commits or executions, or any new SQL running, during the time the issue is occurring. We have two production environments, both running identical applications with similar usage, and we do not see the issue on the other system. I am leaning towards this being a hardware issue, but the 4-hour interval regardless of load on the database has me baffled. If it were a disk or controller cache issue, one would expect to see the interval change with database load.
    I cycle my redo logs and archive them just fine, with log file switches every 15-20 minutes. Even during this unusually long and high session of log file sync waits, I can see that the redo log files are still switching and being archived.
    The redo logs are on a RAID 10, we have 4 redo logs at 1 GB each.
    I've run statspack reports on hourly intervals around this event:
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~
    Event                         Waits   Time (cs)   % Total Wt Time
    log file sync               756,729   2,538,034             88.47
    db file sequential read     208,851     153,276              5.34
    log file parallel write     636,648     129,981              4.53
    enqueue                         810      21,423               .75
    log file sequential read     65,540      14,480               .50
    And here is a sample while not encountering the issue:
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~
    Event                         Waits   Time (cs)   % Total Wt Time
    log file sync               953,037     195,513             53.43
    log file parallel write     875,783      83,119             22.72
    db file sequential read     221,815      63,944             17.48
    log file sequential read     98,310      18,848              5.15
    db file scattered read       67,584       2,427               .66
    Yes, I know I am already tight on I/O for my redo even during normal operations, yet my redo and archiving work just fine for 3 hours and 15 minutes (11 to 15 log file switches). These normal switches result in a log file sync wait of about 5000 ms for about 45 seconds while the 1 GB redo log is being written and then archived.
    I welcome any and all feedback.
    Message was edited by:
    acyoung1
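    For anyone trying to reproduce this without Spotlight, the same waits can be sampled from SQL*Plus. A minimal sketch; the counters are cumulative since instance startup, so run it on an interval and diff the numbers:

        -- Cumulative waits since startup; time_waited is in centiseconds,
        -- matching the statspack "Time (cs)" column
        SELECT event, total_waits, time_waited
        FROM   v$system_event
        WHERE  event IN ('log file sync', 'log file parallel write');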

    Lee,
    log_buffer = 1048576. We use a standard of 1 MB for our log buffer and have not altered the setting. It is my understanding that Oracle typically recommends that you not exceed 1 MB for the log_buffer, stating that a larger buffer normally does not increase performance.
    I would agree that tuning the log_buffer parameter may be a place to consider; however, this issue lasts for ~45 minutes once every 4 hours regardless of database load. So for 3 hours and 15 minutes, during both peak usage and low usage, the log buffer, redo log, and archival processes run just fine.
    A bit more information from statspack reports:
    Here is a sample while the issue is occurring.
                Snap Id  Snap Time           Sessions
    Begin Snap:     661  24-Mar-06 12:45:08        87
    End Snap:       671  24-Mar-06 13:41:29        87
    Elapsed:             56.35 (mins)
    Cache Sizes
    ~~~~~~~~~~~
    db_block_buffers: 196608    log_buffer:       1048576
    db_block_size:      8192    shared_pool_size: 67108864
    Load Profile
    ~~~~~~~~~~~~                  Per Second   Per Transaction
    Redo size:                    615,141.44          2,780.83
    Logical reads:                 13,241.59             59.86
    Block changes:                  2,255.51             10.20
    Physical reads:                   144.56              0.65
    Physical writes:                   61.56              0.28
    User calls:                     1,318.50              5.96
    Parses:                           210.25              0.95
    Hard parses:                        8.31              0.04
    Sorts:                             16.97              0.08
    Logons:                             0.14              0.00
    Executes:                         574.32              2.60
    Transactions:                     221.21
    % Blocks changed per Read:   17.03    Recursive Call %:   26.09
    Rollback per transaction %:   0.03    Rows per Sort:      46.87
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %:              99.99    Redo NoWait %:     100.00
    Buffer Hit %:                 98.91    In-memory Sort %:  100.00
    Library Hit %:                98.89    Soft Parse %:       96.05
    Execute to Parse %:           63.39    Latch Hit %:        99.87
    Parse CPU to Parse Elapsd %:  90.05    % Non-Parse CPU:    85.05
    Shared Pool Statistics       Begin      End
    Memory Usage %:              89.96    92.20
    % SQL with executions>1:     76.39    67.76
    % Memory for SQL w/exec>1:   72.53    63.71
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~
    Event                         Waits   Time (cs)   % Total Wt Time
    log file sync               756,729   2,538,034             88.47
    db file sequential read     208,851     153,276              5.34
    log file parallel write     636,648     129,981              4.53
    enqueue                         810      21,423               .75
    log file sequential read     65,540      14,480               .50
    And this is a sample during "normal" operation.
                Snap Id  Snap Time           Sessions
    Begin Snap:     671  24-Mar-06 13:41:29        88
    End Snap:       681  24-Mar-06 14:42:57        88
    Elapsed:             61.47 (mins)
    Cache Sizes
    ~~~~~~~~~~~
    db_block_buffers: 196608    log_buffer:       1048576
    db_block_size:      8192    shared_pool_size: 67108864
    Load Profile
    ~~~~~~~~~~~~                  Per Second   Per Transaction
    Redo size:                    716,776.44          2,787.81
    Logical reads:                 13,154.06             51.16
    Block changes:                  2,627.16             10.22
    Physical reads:                   129.47              0.50
    Physical writes:                   67.97              0.26
    User calls:                     1,493.74              5.81
    Parses:                           243.45              0.95
    Hard parses:                        9.23              0.04
    Sorts:                             18.27              0.07
    Logons:                             0.16              0.00
    Executes:                         664.05              2.58
    Transactions:                     257.11
    % Blocks changed per Read:   19.97    Recursive Call %:   25.87
    Rollback per transaction %:   0.02    Rows per Sort:      46.85
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %:              99.99    Redo NoWait %:     100.00
    Buffer Hit %:                 99.02    In-memory Sort %:  100.00
    Library Hit %:                98.95    Soft Parse %:       96.21
    Execute to Parse %:           63.34    Latch Hit %:        99.90
    Parse CPU to Parse Elapsd %:  96.60    % Non-Parse CPU:    84.06
    Shared Pool Statistics       Begin      End
    Memory Usage %:              92.20    88.73
    % SQL with executions>1:     67.76    75.40
    % Memory for SQL w/exec>1:   63.71    68.28
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~
    Event                         Waits   Time (cs)   % Total Wt Time
    log file sync               953,037     195,513             53.43
    log file parallel write     875,783      83,119             22.72
    db file sequential read     221,815      63,944             17.48
    log file sequential read     98,310      18,848              5.15
    db file scattered read       67,584       2,427               .66
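    On the log_buffer question, one hedged suggestion: before resizing anything, it may be worth checking whether sessions ever actually wait for space in the log buffer. These two v$sysstat counters should stay near zero if 1 MB is adequate:

        -- Non-zero, steadily growing values here would implicate the log buffer;
        -- flat values suggest the bottleneck is downstream (LGWR I/O)
        SELECT name, value
        FROM   v$sysstat
        WHERE  name IN ('redo buffer allocation retries', 'redo log space requests');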

  • Secure the file/data transfer between XI and any third-party system

    Hi All,
    I would like to use SSH at the OS level to secure the file/data transfer between XI and any third-party system, using "Run OS Command before processing" and "Run OS Command after processing". Right now my XI server is installed on the iSeries OS.
    With iSeries we can't call Unix commands, so I expect we need to go for AS/400 (CL) programming. If we create the AS/400 program, how can I call it in XI?
    If anyone has an idea, please let me know whether it will work or not.
    Thanks in advance.
    Venkat

    Hi,
    Thanks for your reply.
    I have read some blogs, like /people/krishna.moorthyp/blog/2007/07/31/sftp-vs-ftps-in-sap-pi, about calling Unix shell scripts in XI.
    But as far as I know, in the iSeries OS we cannot write shell scripts; we need to go for an AS/400 program. If we go with AS/400, how do we call that program, and will it work? I am not sure, and I need some help there, please.
    Thanks,
    Venkat

  • Problem exporting '.txt' files over 23 KB and '.zip' files over 4 MB

    I am using an APEX 3.0 screen to upload a '.txt' file and a '.zip' file containing images.
    I can successfully export the '.txt' file and the '.zip' file containing images, as long as the '.txt' file size is < 23 KB and the '.zip' file size is < 4 MB, from the database table 'TBL_upload_file' to the OS directory on the server.
    Processing larger files (35 KB and 6 MB) produces the following error message:
    'ORA-21560: argument 2 is null, invalid or out of range'
    Here is my code:
    I am using the following code to export documents from the database table 'TBL_upload_file' to the OS directory on the server.
    create or replace procedure "PROC_LOAD_FILES_TO_FLDR_BYTES"
      (pchr_text_file IN VARCHAR2,
       pchr_zip_file  IN VARCHAR2)
    is
      lzipfile    varchar(100);
      lzipname    varchar(100);
      sseq        varchar(1000);
      ldocname    varchar(100);
      lfile       varchar(100);
      -- loaddoc (p_file in number) as
      l_file      UTL_FILE.FILE_TYPE;
      l_buffer    RAW(32000);
      l_amount    NUMBER := 32000;
      l_pos       NUMBER := 1;
      l_blob      BLOB;
      l_blob_len  NUMBER;
      l_file_name varchar(200);
      l_doc_name  varchar(200);
      a_file_name varchar(200);
      end_pos     NUMBER;
    begin
      -- Get LOB locator
      SELECT blob_content, doc_name
        INTO l_blob, l_file_name
        FROM tbl_upload_file
       WHERE doc_name = pchr_text_file;

      -- Get length of BLOB
      l_blob_len := DBMS_LOB.getlength(l_blob);
      -- Save BLOB length to determine the end position
      end_pos := l_blob_len;

      -- Open the destination file.
      -- l_file := UTL_FILE.fopen('BLOBS','MyImage.gif','w', 32767);
      l_file := UTL_FILE.fopen('BLOBS', l_file_name, 'WB', 32760);  -- write-byte option, supported in 10g

      IF l_blob_len < 32760 THEN
        -- Small enough for a single write
        utl_file.put_raw(l_file, l_blob);
        utl_file.fflush(l_file);
      ELSE
        -- Write in pieces: read chunks of the BLOB and write them
        -- to the file until complete.
        WHILE l_pos < l_blob_len LOOP
          DBMS_LOB.read(l_blob, l_amount, l_pos, l_buffer);
          UTL_FILE.put_raw(l_file, l_buffer);
          utl_file.fflush(l_file);        -- flush pending data and write to the file
          l_pos := l_pos + l_amount;      -- set the start position for the next cut
          end_pos := end_pos - l_amount;  -- end_pos tracks the remaining length of the document
          IF end_pos < 32000 THEN
            l_amount := end_pos;
          END IF;
        END LOOP;
      END IF;

      -- Zip file: get LOB locator to locate the zip file
      SELECT blob_content, doc_name
        INTO l_blob, l_doc_name
        FROM tbl_upload_file
       WHERE doc_name = pchr_zip_file;

      l_blob_len := DBMS_LOB.getlength(l_blob);
      -- Save BLOB length to determine the end position
      end_pos := l_blob_len;

      -- Open the destination file.
      l_file := UTL_FILE.fopen('BLOBS', l_doc_name, 'WB', 32760);  -- write-byte option, supported in 10g

      IF l_blob_len < 32760 THEN
        -- Small enough for a single write
        utl_file.put_raw(l_file, l_blob);
        utl_file.fflush(l_file);  -- flush out pending data to the file
      ELSE
        -- Write in pieces: read chunks of the BLOB and write them
        -- to the file until complete.
        l_pos := 1;
        WHILE l_pos < l_blob_len LOOP
          DBMS_LOB.read(l_blob, l_amount, l_pos, l_buffer);
          UTL_FILE.put_raw(l_file, l_buffer);
          UTL_FILE.fflush(l_file);        -- flush pending data and write to the file
          l_pos := l_pos + l_amount;
          end_pos := end_pos - l_amount;  -- end_pos tracks the remaining length of the document
          IF end_pos < 32000 THEN
            l_amount := end_pos;
          END IF;
        END LOOP;
      END IF;

      -- Close the file.
      IF UTL_FILE.is_open(l_file) THEN
        UTL_FILE.fclose(l_file);
      END IF;
    exception
      WHEN NO_DATA_FOUND THEN
        RAISE_APPLICATION_ERROR(-20214, 'Screen fields cannot be blank, Proc_Load_Files_To_Fldr_BYTES.');
      WHEN TOO_MANY_ROWS THEN
        RAISE_APPLICATION_ERROR(-20215, 'More than one record exist in the tbl_load_file table, Proc_Load_Files_To_Fldr_BYTES.');
      WHEN OTHERS THEN
        -- Close the file if something goes wrong.
        IF UTL_FILE.is_open(l_file) THEN
          UTL_FILE.fclose(l_file);
        END IF;
        RAISE_APPLICATION_ERROR(-20216, 'Some other errors occurred, Proc_Load_Files_To_Fldr_BYTES.');
    end;
    I am new to Oracle.
    Any help modifying this script and resolving this problem would be greatly appreciated.
    Thank you.

    Ask this question in the APEX forums. See Oracle Application Express (APEX).
    Regards, Nigel
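    One detail in the posted procedure that seems worth checking (a guess from reading the code, not a tested fix): l_amount is shared between the two copy loops. While the text file is written, the end-of-file logic shrinks l_amount to the final remainder, possibly all the way to 0, and only l_pos is reset before the zip loop. DBMS_LOB.READ called with an amount of 0 raises exactly ORA-21560 ("argument 2 is null, invalid or out of range"), which would explain why only the larger, multi-chunk files fail. A minimal sketch of the change:

        -- Before the zip-file loop, reset BOTH loop variables, not just l_pos;
        -- otherwise DBMS_LOB.read may be called with the l_amount = 0 left over
        -- from the text-file loop and fail with ORA-21560.
        l_pos    := 1;
        l_amount := 32000;  -- restore the full chunk size for the second file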

  • How to keep the size relation between p and h1?

    Hi!
    In my reset it says "font-size:100%" for both my p and h1. In Explorer, the h1 is bigger than p (as my style sheet says it should be), but in Safari (used on the iPhone), the h1 is smaller than p. What can I do in the reset in order to keep the size relation between p and h1 in different browsers?
    Grateful for an answer!
    Milda

    h1 { font-size:140%; }
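    That one rule may be enough. For completeness, a sketch of a reset that states the p/h1 relationship explicitly; the 1.4em figure is illustrative, and the text-size-adjust line addresses iPhone Safari's automatic text inflation, a common reason paragraphs and headings end up scaled differently there:

        /* illustrative values; adjust the scale to taste */
        html { -webkit-text-size-adjust: 100%; }  /* stop iPhone Safari auto-inflating text */
        body { font-size: 100%; }
        p    { font-size: 1em; }
        h1   { font-size: 1.4em; }  /* h1 stays 40% larger than p in every browser */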

  • How to increase font size in the Finder and menu bar

    How do I increase the font size in the Finder and the menu bar?

    According to this thread, in Leopard you can't, but it offers some suggestions: https://discussions.apple.com/thread/2075719?start=0&tstart=0

  • Incorrect QuickTime file size shown in Finder

    I have a few QuickTime movies that I moved out of Aperture in order to move back into the newest iPhoto. When I attempted to sync my iPhoto library with my iPhone, the videos would not sync, so I investigated further and found that while they play perfectly fine under both iPhoto and in QuickTime 7 and QuickTime X, the file size in Finder is showing each movie as only a few KB in size, when in fact they are dozens of MBs in size.
    What the heck is going on? Permissions repair did nothing, and I can't think of what else to try.

    I would suspect you moved the .mov file, but at some point in processing and saving things you failed to note that the .mov file should be saved as a "self-contained" movie, so the actual movie content was left somewhere on your hard drive. The .mov file will play on the computer, because it points to the location of the content, which is accessed and used; but when you move (or try to move) the .mov file, the content is left behind on the hard drive, and only the container is moved. Thus there is nothing TO play in the new location; there is just the container with no content.
    Francine Schwieder

  • How do I find out the file size of my photos, and change them to the size

    This will sound inane, but I cannot figure out where to find the file size of my photos in iPhoto. I need a jpeg of a certain file size and image size for my blog background, but can't figure out how to even start.

    If the limited quality options do not get you the file size you need, export the full-size image files to a folder on your Desktop and use Resize! to batch-resize the image dimensions and file size (JPEG compression level) to what you need.
    With Resize! you can fine tune the pixel dimensions and file size.
    OT
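    If you'd rather script the batch step than use Resize!, a rough equivalent in Python with the Pillow library; the folder names and the 1024-pixel cap are hypothetical, and the JPEG quality setting is the knob that trades file size against fidelity:

        import os
        from PIL import Image  # pip install Pillow

        src, dst = "exported", "resized"  # hypothetical folder names
        os.makedirs(dst, exist_ok=True)
        for name in os.listdir(src):
            if name.lower().endswith((".jpg", ".jpeg")):
                img = Image.open(os.path.join(src, name))
                img.thumbnail((1024, 1024))  # cap the pixel dimensions, keeping aspect ratio
                img.save(os.path.join(dst, name), quality=70)  # lower quality = smaller file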

  • File size problem between Acrobat 8 and 9

    When I print to a PDF from Acrobat 8, I get a file size that is slightly larger than the original PDF file size. For example, a 1.6 MB file becomes a 1.7 MB file. When I print the same file in Acrobat 9 with the same settings (to my knowledge; setting = High Quality Print), I get a file that is 36.1 MB.
    The reason we would print the file in the first place is that we need to create a PDF that is either slightly larger or slightly smaller (say 98%) than the original size. Can anyone help us figure out why our upgraded version 9 would do this? We also use 7, and between 7 and 8 there are no differences, but 9 makes the file ~21x the original file size. Please see the attached JPEG for a screenshot of the file sizes.
    Thank you, -Dan

    Thank you for the quick response. We are able to get some results through the Optimizer; however, they are not the same. Also, we would like to keep from adding an extra step into the process, especially a step that adds a lot of time, as the Optimizer does. In versions 7 and 8 we did not have to run the Optimizer (we also did not have to do this in version 5, back in the day). Why would 9 have to add this step? I am really looking for a way to keep the same workflow steps. -Dan

  • Incorrect file sizes shown in Finder over NFS and permissions issues

    Hi there
    This is a problem that existed for me in Leopard and has not been resolved in Snow Leopard.
    I have an Xsan with a Leopard server sharing over NFS and AFP. When I connect from a Leopard or Snow Leopard client over NFS, the file sizes in the Finder are incorrectly displayed. My Tiger clients work perfectly.
    Also, although it says I have read/write access to the files over NFS, I cannot save over an existing file when I make changes to it; I instead have to create a new version of it and remove the old one.
    Check the link for a grab of one of the folders in question; the upper window is what NFS shows me, the lower AFP. If you Get Info on the files over either connection, the byte count is identical.
    http://www.the-9000.com/images/finder_anomaly.tiff
    Any info would be greatly appreciated.

    This is a problem that existed for me in Leopard and has not been resolved in Snow Leopard.
    Have you filed a bug report with Apple?
    http://developer.apple.com/bugreporter/
    If not, there's less of a chance they'll know about it and help fix it for you.
    Do things look OK from the command line in Terminal?
    It would probably be useful to use a tool like Wireshark to check out what each protocol is sending over the wire. That could at least narrow it down to being a client or server issue.
    Thanks
    --macko
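    Following macko's Terminal suggestion, a quick way to see whether the sizes actually diverge below the Finder is to stat the same file over both mounts; the mount-point names here are hypothetical:

        import os

        FILE = "some_clip.mov"  # hypothetical file present on both mounts
        for mount in ("/Volumes/xsan_nfs", "/Volumes/xsan_afp"):
            size = os.stat(os.path.join(mount, FILE)).st_size
            print(f"{mount}: {size:,} bytes")

    If both protocols report the same byte count, as the Get Info check already suggests, the problem is in how the Finder renders sizes over NFS rather than in the data itself.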
