Faster alternative to cfcontent for large file downloads?

I am using cfheader/cfcontent to serve file downloads securely, so the user
cannot see the path where the file is stored. With small files this works
fine, but with large (100 MB+) files it is much, much slower than
a plain HTML anchor tag. Does anyone have a fast alternative to
using cfheader/cfcontent for large files?
Thanks

You should be able to use Java to handle this, either through
a custom tag or by calling the Java classes directly
in ColdFusion. I don't know much about Java, but I found this
example on the Web and got it to work for uploading files. Here's
an example of the code. Hope it gives you some direction:
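Since the original code didn't survive in this post, here is a minimal CFML sketch of that approach: it reaches past ColdFusion to the underlying servlet response and streams the file through java.io in fixed-size chunks, so the file is never buffered in memory the way cfcontent buffers it. The path and filename are placeholders; point them at your secure storage location.

<cfscript>
    // Placeholder path outside the web root; change to your storage location.
    filePath = "/secure/files/bigfile.zip";
    fileObj  = createObject("java", "java.io.File").init(filePath);

    // Get the raw servlet response so bytes go straight to the client.
    response = getPageContext().getResponse().getResponse();
    response.setContentType("application/octet-stream");
    response.setHeader("Content-Disposition", "attachment; filename=""bigfile.zip""");
    response.setContentLength(javaCast("int", fileObj.length()));

    outStream = response.getOutputStream();
    inStream  = createObject("java", "java.io.FileInputStream").init(fileObj);

    // Allocate a 64 KB byte[] buffer via java.lang.reflect.Array.
    byteType = createObject("java", "java.lang.Byte").TYPE;
    buffer   = createObject("java", "java.lang.reflect.Array").newInstance(byteType, 65536);

    // Copy the file to the response in chunks instead of loading it whole.
    bytesRead = inStream.read(buffer);
    while (bytesRead GT -1) {
        outStream.write(buffer, 0, bytesRead);
        bytesRead = inStream.read(buffer);
    }
    inStream.close();
    outStream.flush();
    outStream.close();
</cfscript>

Because memory use stays flat regardless of file size, this tends to approach plain-anchor download speed while still keeping the real path hidden.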

Similar Messages

  • Need PDF Information for large files

    Hi experts,
    I need some information from you regarding PDF.
    Is there a newer Adobe version with support for large files (over 15 MB, over 30 pages)?
    Is batch processing possible with PDF?
    If not, please suggest possible Adobe replacements.
    Thanks
    Kishore

    Thanks so far.
    acroread renders the pages very fast. That's what I want, but it's proprietary. :/
    For the moment it's OK, but I'd like a free alternative to acroread that shows the pages as quickly as acroread or preloads more than one page.
    lucke wrote: Perhaps, just perhaps it'd work better if you copied it to tmpfs and read from there?
    I've tried it: no improvement.
    Edit: tried Sumatra: an improvement compared to Okular etc. Thanks.
    Last edited by oneway (2009-05-12 21:50:10)

  • Thumbnails for large files

    OK, so now I've learned that for my large panorama files I won't get a thumbnail in the Organizer, just a blue triangle warning with an exclamation point. I do a lot of golf course work where I stitch photos and print 38 x 11 inches @ 300 dpi.
    So what's the workaround? Is this a problem in the big Photoshop program too? Sure, I can copy the file, reduce the resolution, save it with a different name, etc.
    What the heck! You can't tell me that the program writers at Adobe can't make this work for large files! Why the limit? By the way, what is the limit?
    Thanks

    By the way...what is the limit?
    http://kb2.adobe.com/cps/402/kb402760.html
    Juergen

  • Are the brushes in Photoshop CC faster than CS6 - still need to use CS5 for large files

    Hey,
    Are the brushes in Photoshop CC any faster than in Photoshop CS6?
    Here's my standard large file, which makes the CS6 brushes crawl:
    iPad 3 size - 2048 x 1536
    About 20-100 layers
    A combination of vector and bitmap layers
    Many of the layers use layer styles
    On a file like this there is a hesitation to every brush stroke in CS6. Even a basic round brush has the same hesitation; it doesn't have to be a brush as elaborate as a mixer brush.
    This hesitation happens on both the Mac and PC, on systems with 16 GB of RAM. Many of my coworkers have the same issue.
    So, for a complicated file, such as a map with many parts, I ask my coworkers to please work in CS5. If they work in CS6, I ask them not to use any CS6-only features, such as group layer styles. The only reason one of them might want to use CS6 is that they're working on only a small portion of the map, such as a building; the rest of the layers are flattened in their file.
    Just wondering if there has ever been a resolution to this problem, or if this is just the way it is.
    Thanks for your help!

    BOILERPLATE TEXT:
    Note that this is boilerplate text.
    If you give complete and detailed information about your setup and the issue at hand,
    such as your platform (Mac or Win),
    exact versions of your OS, of Photoshop (not just "CS6", but something like CS6 v13.0.6) and of Bridge,
    your settings in Photoshop > Preferences > Performance,
    the type of file you were working on,
    machine specs, such as total installed RAM, scratch file HDs, total available HD space, video card specs, including total VRAM installed,
    what troubleshooting steps you have taken so far,
    what error message(s) you receive,
    if you're having issues opening raw files, the exact camera make and model that generated them,
    if you're having printing issues, the exact make and model of your printer, paper size, image dimensions in pixels (so many pixels wide by so many pixels high), and if going through a RIP, specify that too,
    etc.,
    someone may be able to help you (not necessarily this poster, who is not a Windows user).
    A screen shot of your settings or of the image could be very helpful too.
    Please read this FAQ for advice on how to ask your questions correctly for quicker and better answers:
    http://forums.adobe.com/thread/419981?tstart=0
    Thanks!

  • SharePoint Foundation 2013 Optimization For Large File Transfer?

    We are considering upgrading from WSS 3.0 to SharePoint Foundation 2013.
    One of the improvements we want to see after the upgrade is a better user experience when downloading large files. It can be done now, but it is not reliable.
    Our document library consists mostly of average-sized Office documents, but it also includes some audio and video files and software installer package zip files ranging from 100 MB to 2 GB in size.
    I know we can change the settings to "allow" larger-than-default file downloads, but how do we optimize the server setup to make these large file transfers work as seamlessly as possible? More RAM on the SharePoint Foundation server? Other Windows,
    SharePoint, or IIS optimizations? The files will often be downloaded over the Internet, so we will not have control over the download speed.

    SharePoint is capable of sending large files; it is a stateless HTTP system like any other website in that regard. Given that your server is sized appropriately for the amount of concurrent traffic you expect, I don't see any special optimizations required.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    I see information like this posted, warning against it as if large files are going to cause your SharePoint server and SQL to crash:
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    "Though SharePoint is meant to handle files that are up to 2 gigs in size, it is not practically feasible and not recommended as well."
    "Not practically feasible" sounds like a pretty dire warning to stay away from large files.
    I had seen some other links warning that large files in the SharePoint database cause problems with fragmentation and large amounts of wasted space that doesn't go away when files are removed, or that the server may run out of memory because downloaded files are held in RAM.
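    For what it's worth, the "allow larger than default" setting mentioned above is a per-web-application cap that can be raised from PowerShell. A minimal sketch, assuming a SharePoint 2013 Management Shell and a placeholder web application URL (2047 MB is the platform's hard per-file limit):
    # The web application URL below is hypothetical; substitute your own.
    $wa = Get-SPWebApplication "http://sharepoint.example.com"
    $wa.MaximumFileSize = 2047   # per-file cap, in MB
    $wa.Update()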

  • Large file download issue

    When we are downloading a large file (100 MB+) with Internet Explorer, the
    workstation will at some point report that the connection was reset by the
    server, and we lose the download. This is kind of random and I've only seen
    it in IE. I have and use Firefox at my desk and I have no trouble like
    this, but our campus standard is IE and I really would like to pin this down.
    My fear is the packet filters. I went and looked at the packet filters we
    have set up and was shocked to see we had 77 filter exceptions. My last
    look only showed 45, so I need to sit down, review every filter exception,
    and thin the herd. Anyway, below are the exceptions for HTTP, which is what
    I believe IE uses in downloading files. Right? I wanted a second opinion
    from you guys.
    We have NW 6.5 SP5 and BM 3.8 SP4 IR3, and I use Craig's tuneup and proxy.cfg.
    The packet type HTTP is defined as:
    Protocol TCP
    SRC Port ALL
    DEST Port 80
    ACK bit Filtering disabled
    Stateful Filtering enabled
    The filter exceptions are:
    #15
    Packet Type: HTTP
    Source: B57_1_EII
    All Circuits
    Any Address
    Destination: B57_2_EII
    All Circuits
    66.89.73.96/255.255.255.224
    Comment:
    #22
    Packet Type: HTTP
    Source: <All Interfaces>
    All Circuits
    Any Address
    Destination: CE1000_1
    All Circuits
    Any Address
    Comment:
    #24
    Packet Type: HTTP
    Source: B57_2_EII
    All Circuits
    Any Address
    Destination: <All Interfaces>
    All Circuits
    Any Address
    Comment: Added by BRDCFG to allow HTTP proxy.
    #67
    Packet Type: HTTP
    Source: B57_2_EII
    All Circuits
    12.22.25.0/255.255.255.240
    Destination: <All Interfaces>
    All Circuits
    192.168.3.7
    Comment: Inbound http for powerschool
    #75
    Packet Type: HTTP
    Source: B57_1_EII
    All Circuits
    192.168.3.6
    Destination: <All Interfaces>
    All Circuits
    Any Address
    Comment:
    Thanks
    David

    When I looked at the version level of the proxy.cfg I use (the one you
    wrote), it was version 21; I have downloaded the current version. Would the
    differences between the versions cause any of the behavior I've seen? I
    didn't see much in there that looked too different.
    Other comments inside the threaded message:
    > On your stateful HTTP exceptions, you are:
    > #15 Allowing all HTTP to network 66.89.73.96/255.255.255.224
    That is to allow private LAN access to the public IP subnet range, for
    management of routers etc. that sit outside this firewall.
    > #22 Allowing all HTTP to cross interface CE1000_1 - if this is the public
    > interface, you are allowing all outbound HTTP without using a proxy. If
    > this is the private interface, you are allowing all inbound HTTP through
    > any static NAT connection.
    CE1000_1 is the interface that is connected to our DMZ. We host 3 web servers
    in the DMZ for public access; one is our GroupWise WebAccess.
    > #24 Allowing all HTTP as long as it originates from the B57_2 interface.
    > If the public IP address were also specified, it would be one of the
    > default exceptions intended to allow proxy to send HTTP requests out from
    > the server to the Internet. Because source/destination IP address is
    > missing, it effectively allows traffic into the B57_2 interface as well,
    > assuming the B57_2 interface is public.
    #24 here is the default added by BM. I don't think I ever touched this one.
    This is a set of filters that were imported from our earlier BM versions
    when we built this box. Sounds like it should have the Public IP address
    listed as the source IP. Would you suggest adding that in?
    > #67 Assuming B57_2 is public, you are allowing HTTP traffic to come from
    > a specific network, into a host at 192.168.3.7, via static NAT.
    Correct. We have a turnkey box with a Windows application that is managed
    by the vendor via HTTP; that is their IP range, and the target, 192.168.3.7,
    holds a static NAT in the 66.89.73 range on the BM server.
    > #75 Hard for me to tell what this is doing, as it appears the B57
    > interface is your public interface. It will allow any HTTP from IP
    > address 192.168.3.6 to anywhere, but only if that address is on the B57
    > side of the server, which sounds incorrect. I'm guessing that the
    > 192.168.3.x network is on the CE1000 side of the server, in which case
    > this is a useless filter exception.
    The interface is B57_1_EII, which is the private LAN interface. The
    intent is to allow the SPAM filter box HTTP access to the Internet without
    requiring a proxy. The box is unable to accept a proxy setting, and the way
    it gets its updates from the vendor is via HTTP.
    Thanks
    David

  • Small NCP packets for large file read

    I am running the latest 4.91 SP4 client and am getting really slow
    performance through an application called bimap, which reads and processes
    large map files. When these files are on a NetWare server (OES or 6.5 NSS
    volume), Ethereal shows that NCP read requests are for only 512 bytes, and
    then an NCP read OK is returned with 512 bytes' worth of data. This means
    the program takes several minutes to read and process the file. When the same
    file sits on a Microsoft server, Ethereal shows bursts of TCP packets all
    with 1500 bytes of data, resulting in a much faster load.
    LIP is turned on and seems to work OK when copying files around the NetWare
    server; I see the bursts of 1500 bytes of data, but it doesn't seem to work
    when using this bimap application. Any ideas?

    Hi,
    Sasha wrote:
    >
    > They're not on the same segments; the routers are all correct according to
    > our Networks team. Ethereal definitely shows the NCP read request as
    > 512 bytes. Other applications, such as Word, opening large files are OK from the
    > same PC; Ethereal shows NCP requests of 4096. Once again, using the bimap
    > app and reading the same files stored on a Microsoft server works OK as
    > well. My understanding with LIP is that the large packet size will try to
    > be negotiated; if it fails, it reverts to its default of 512. I assume it
    > is failing and I don't know why?
    It sounds like the application is specifically reading 512-byte
    fragments on purpose. The fact that you see bigger reads on Windows is likely
    because oplocking (caching) is enabled on Windows but not on NetWare.
    CU,
    Massimo Rosen
    Novell Product Support Forum Sysop
    No emails please!
    http://www.cfc-it.de

  • FTP Sender Adapter with EOIO for Large files

    Hi Experts,
    I am using XI 3.0.
    My scenario is File->XI->File. I need to pick the files up from an FTP server; there are around 50 files, each about 10 MB in size.
    I have to pick the files up from the FTP folder in the same order as they were put into the folder, i.e., FIFO.
    So in the sender FTP communication channel I am specifying:
    QoS = EOIO
    Queue name = ACCOUNT
    Will the files be picked up in FIFO order into the queue XBTO_ACCOUNT?
    So my question is:
    what is the procedure to specify the parameters so that it will process in FIFO?
    What would be the best practice to achieve it from a performance point of view?
    Thanks
    Sai.

    Hi spantaleoni,
    I want to process the files using the FTP protocol in FIFO order,
    i.e., files placed first in a folder should be picked up first, the next one after that, and so on.
    So if I use FTP,
    then to process the files in sequence, i.e., FIFO,
    the processing parameters
    QoS = EOIO
    Queue name = ACCOUNT
    would process the files in FIFO.
    And to process large files (10 MB size), what would be the best polling interval in seconds?
    Thanks,
    Sai

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700 MB) using the sender file adapter "Recordset Structure" property (e.g., Row, 5000). As the files are split and mapped, they are appended to a destination file. In an example scenario, a 700 MB file comes in (say with 20000 records); the destination file should have 20000 records.
    To ensure no records are missed during the process through XI, EOIO QoS is used. A trigger record is appended to the incoming file (the trigger record structure is the same as the main payload recordset) using a UNIX shell script before it is read by the sender file adapter.
    XPath conditions are evaluated in the receiver determination to either append the record to the main destination file or create a trigger file with only the trigger record in it.
    The problem we are faced with is that the "Recordset Structure" (e.g., Row, 5000) splits in chunks of 5000, and when the remaining records of the main payload are fewer than 5000 (say 1300), those remaining 1300 lines get grouped with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed a sample xml file below, representing the inbound file, with the last record (Duns = "9999") as the trigger record that will be used to mark the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter I have, for test purposes, changed the "Recordset Structure" to "Row,5" for the sample xml inbound file above.
    I have two XPath expressions in the receiver determination to take the last recordset with Duns = "9999" and send it to the receiver (communication channel) that creates the trigger file.
    In my test case the first 5 records get appended to the correct destination file, but the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record, with Duns = "9999").
    Destination file: (this is where all the records with Duns NE "9999" are supposed to get appended)
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
               <Extract_Code>"A"</Extract_Code>
          </R3Row>
          <R3Row>
               <Duns>"001001927"</Duns>
               <Duns_Plus_4>""</Duns_Plus_4>
               <Extract_Code>"A"</Extract_Code>
          </R3Row>
          <R3Row>
               <Duns>"001001928"</Duns>
               <Duns_Plus_4>""</Duns_Plus_4>
               <Extract_Code>"A"</Extract_Code>
          </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
          <R3Row>
               <Duns>"001001929"</Duns>
               <Duns_Plus_4>""</Duns_Plus_4>
               <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
          </R3Row>
          <R3Row>
               <Duns>"9999"</Duns>
               <Duns_Plus_4>""</Duns_Plus_4>
               <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
          </R3Row>
    </R3File>
    I've tested the XPath condition in XML Spy and it works fine. My doubts are about the "Recordset Structure" property set as "Row,5".
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba

    Hi Debnilay,
    We do have a 64-bit architecture and still have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process the file as a whole.
    Thanks
    Steve

  • Seeburger As2 Receiver channel exception for large files

    Hello Folks,
    We have a JMS to Seeburger AS2 interface, and we are facing the following issue in the AS2 receiver channel for files larger than 20 MB. In Production it works fine even for 40 MB files.
    Delivering the message to the application using connection AS2_http://seeburger.com/xi failed, due to:
    com.sap.engine.interfaces.messaging.api.exception.MessagingException: javax.resource.ResourceException: Fatal exception: javax.resource.ResourceException: SEEBURGER AS2: org.apache.commons.httpclient.ProtocolException: Unbuffered entity enclosing request can not be repeated. # , SEEBURGER AS2: org.apache.commons.httpclient.ProtocolException: Unbuffered entity enclosing request can not be repeated. # .
    Please throw some light on the issue.
    Regards
    Praveen Reddy

    Hi Praveen,
    The problem is likely related to server sizing. Generally, test systems do not have the same sizing as the production server, which is why you cannot process large files in the test system.
    Check the sizing of the system.
    Regards,
    Harish

  • Does the AirPort Time Capsule work well as storage for large files, such as RAW photos and HD video?

    If the AirPort Time Capsule can work as a storage device, how reliable is it? Are downloads into the AirPort dependent on Internet speed, or on the Time Capsule Wi-Fi?

    If the AirPort Time Capsule can work as a storage device, how reliable is it?
    The Time Capsule was designed for Time Machine backups, but it can equally be used for storage. Like any network storage device, it is relatively reliable; however, you should always have a good backup strategy for important files.
    Are downloads into the AirPort dependent on Internet speed, or on the Time Capsule Wi-Fi?
    Data transfer rates "into" the Time Capsule depend on a number of factors, and which factors depend on the original source of the files. Regardless, the common factors will be the TC's internal hard drive interface and the data write speeds of the drive itself.

  • WRT54GS v1 Corrupting large file downloads - how to fix?

    I have a two-week-old WRT54GS v1 (firmware 1.0.02) that appears to be corrupting file downloads of more than 2 or 3 GB, more specifically program installation downloads.
    When downloading, the files appear to arrive properly according to their file sizes. However, in both instances I've experienced this week, with installation packages from different sources, the installation would fail with either Error 2350 or Error 11FD (file corruption errors).
    If I take the router out of the loop by connecting the desktop directly to the cable modem, the packages come through without error.
    It has been suggested that this has something to do with a broken transmission or a filter issue caused by the NAT hardware firewall (I don't have a clue what that means).
    Does anyone have any suggestions?
    Rick Grant

    You should try upgrading your router's firmware, then resetting and re-configuring it.
    Download the firmware (1.7 MB), then follow these steps to upgrade it on the device:
    Open an Internet Explorer browser page on a computer hard-wired to the router.
    In the address bar type 192.168.1.1. Leave the username blank and use admin (lower case) as the password.
    Click on the 'Administration' tab, then click on the 'Firmware Upgrade' sub-tab. Click 'Browse', browse to the .bin firmware file, and click 'Upgrade'.
    Wait a few seconds until it shows that the upgrade was successful. After the firmware upgrade, click 'Reboot'; you will be returned to the same page, or it will say "Page cannot be displayed".
    Now reset your router:
    Press and hold the reset button for 30 seconds, then release it. Unplug the power cable from your router, wait 30 seconds, and re-connect the power cable. Now re-configure your router.
    After re-configuring the router, observe the connection and see if this corrects your problem.

  • Recommended Structure for Large Files

    I am working at re-familiarizing myself with Oracle development and management, so please forgive my ignorance on some of these topics. I will be working with a client who is planning a large-scale database for what is called "Flow Cytometry" data, which will be linked to research publications. The actual data files (FCS) and various text, tab-delimited, and XML files will all be provided by researchers in a wrapper or zip container, which will be parsed by some yet-to-be-developed tools. For the most part, data will consist of a large FCS file containing the actual flow cytometry data, along with accompanying text/XML files containing the metadata (experiment details, equipment, reagents, etc.). What is most important is the metadata, which will be used to search for experiments. For the most part, the actual FCS data files (up to 100-300 MB) will only need to be linked to the metadata (stored as BLOBs?); their content will be used at a later time for actual analysis.
    1: Since the actual FCS files are large and may not initially be parsed and imported into the DB for later analysis, how can/should Oracle be configured/partitioned so that a larger, direct-attached storage drive/partition can be used for the large files, so as not to take up space where the actual running instance of Oracle is installed? We are expecting around 1 TB of data files initially.
    2: Are there any online resources which might be of value for such an implementation?

    Large files can be stored using the BFILE datatype. The data need not be transferred to Oracle tablespaces; the files remain in the OS file system.
    It is also possible to index BFILEs using Oracle Text indexing.
    http://www.oracle-base.com/articles/9i/FullTextIndexingUsingOracleText9i.php
    http://www.stanford.edu/dept/itss/docs/oracle/10g/text.101/b10730/cdatadic.htm
    http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle_Text/TEXT_3.shtml
    Mohan
    http://www.myoracleguide.com/
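    To make the BFILE suggestion concrete, here is a minimal SQL sketch; the directory path, table, and column names are hypothetical. A BFILE column stores only a locator pointing at a file in an OS directory, so the 100-300 MB FCS files never consume tablespace storage:
    -- Hypothetical names; point the directory at your direct-attached storage.
    CREATE DIRECTORY fcs_dir AS '/mnt/fcs_storage';
    CREATE TABLE experiment (
        exp_id    NUMBER PRIMARY KEY,
        metadata  XMLTYPE,   -- searchable experiment details, equipment, reagents
        fcs_file  BFILE      -- locator only; the bytes stay on the OS partition
    );
    -- Link a metadata row to an FCS file already present in /mnt/fcs_storage.
    INSERT INTO experiment (exp_id, metadata, fcs_file)
    VALUES (1, XMLTYPE('<experiment/>'), BFILENAME('FCS_DIR', 'run_001.fcs'));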

  • Which format on a flash drive for large files for use by Mac and PC

    I need to copy large files (a 9 GB movie file exported from iMovie of a school graduation) onto 16 GB flash drives so they can be used by school parents who may have a Mac, a PC, or even a TV.
    My first attempt says the files are too large.
    A lot of googling tells me this is a recognised problem, but I am confused about which advice to take.
    I am on a 2012 iMac running OS X version 10.7.4.
    Do I need to download some software to run NTFS? It all sounds so confusing!
    I ended up in this predicament because the quality of the photo slideshows I copied to DVD-R was so bad. I thought a flash drive would be easy. Ha, not at all.
    Please answer in layman's terms; I could barely follow some of the advice I found.

    Format the flash drives with Disk Utility in ExFAT format. (Flash drives usually ship formatted as FAT32, which cannot hold a single file larger than 4 GB; ExFAT has no such limit and is readable on both Mac and PC.)
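    If you are comfortable with Terminal, the same thing can be done with diskutil. A minimal sketch, where /dev/disk2 is a placeholder for the flash drive's identifier; check it with diskutil list first, because eraseDisk wipes the whole drive:
    diskutil list
    diskutil eraseDisk ExFAT GRADUATION MBR /dev/disk2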

  • Anybody have a resolution for PRIAMOS file download using a Mac OSX 10.6.8 and Adobe Reader XI?

    Does anyone have experience with the EU PRIAMOS file download system? I am experiencing difficulties downloading files in Adobe Reader format (I have set the default PDF reader to Adobe Reader).
    The PRIAMOS system can only communicate with Windows 32-bit: Mozilla Firefox 2 and higher, or
    IE 6 or 7,
    IE 7 or 8 (in compatibility mode).
    Supported PDF programmes are:
    Adobe Reader 8.1 or higher.
    My Adobe Reader software is the latest version (Adobe Reader XI for Mac, with the latest update as at 10 Sept 2013).
    The PRIAMOS system does not officially support Mac computers, but PRIAMOS has been used successfully with OS X version 10.5.8 (Leopard) and Mozilla Firefox (2.5) (with the built-in PDF reader disabled and Adobe Reader enabled) and Adobe Reader 8.1.
    Any help would be gratefully appreciated.

    Oops, I hit return before I was finished!
    In short, does anyone have any ideas apart from those I've tried? One solution seems to be to go back to 10.6.4; one of the other discussions seems to indicate that the problem comes in with 10.6.8.
    Thanks!
