Use oraDav with large files (1GB+)?

My customer has many terabytes of images, many of them over 1GB in size. Using interMedia with oraDav looks very interesting. For example, users could access the images from a client/server application via Web Folders on their Windows workstations. One concern is that users could not begin browsing an image without first downloading the entire file. Does oraDav honor HTTP range requests (the Range request header and the Content-Range response header)? That would allow the client application to randomly access portions of an image file.

I believe that the answer is yes. Have you tried it?
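One quick way to check is to request a byte range and see whether the server answers with 206 Partial Content rather than 200. A minimal sketch in Java; the URL is a placeholder for one of your DAV-served images:

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RangeProbe {
        public static void main(String[] args) throws IOException {
            // Placeholder URL: substitute the WebDAV path to one of your images
            URL url = new URL("http://server/dav/images/sample.tif");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Ask for only the first 64 KB of the resource
            conn.setRequestProperty("Range", "bytes=0-65535");
            // 206 means the server honored the Range header;
            // 200 means it ignored it and sent the whole file
            System.out.println("HTTP status: " + conn.getResponseCode());
            System.out.println("Content-Range: " + conn.getHeaderField("Content-Range"));
            conn.disconnect();
        }
    }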

Similar Messages

  • Question: Best Strategy for Dealing with Large Files > 1GB

    Hi Everyone,
    I have to build a UCM system for large files > 1GB.
    What will be the best way to upload them (applet, checkin form, webdav)?
    Also what will be the best way to download them (applet, web form, webdav)?
    Any tips will be greatly appreciated
    Tal.

    Not sure what the official best practice is, but I prefer to get the file onto the server's file system first (file copy) and check it in from that path. This would require a customization / calling a custom service.
    Boris
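    If WebDAV turns out to be the route, one thing that helps at this size is streaming the upload in chunks so the client never buffers the whole file in memory. A rough sketch over a plain HTTP PUT (the host, path, and file name are hypothetical; the actual WebDAV URL depends on your UCM install):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WebDavPut {
        public static void main(String[] args) throws IOException {
            // Hypothetical WebDAV URL and local file
            URL url = new URL("http://ucmhost/cs/idcplg/webdav/bigfile.bin");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("PUT");
            // Chunked streaming keeps memory use flat regardless of file size
            conn.setChunkedStreamingMode(1 << 20); // 1 MB chunks
            try (FileInputStream in = new FileInputStream("bigfile.bin");
                 OutputStream out = conn.getOutputStream()) {
                byte[] buf = new byte[1 << 16];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }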

  • How to use javap with jar files?

    How do I use javap with jar files?
    thanks

    As long as the jar is on the class path, you're golden. So,
    javap -classpath myjar.jar mypackage.MyClass
    Chuck
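    If you ever need the same information programmatically rather than from the command line, loading the class out of the jar with a URLClassLoader and reflecting over it gives a similar listing. A sketch, where the jar and class names are hypothetical, mirroring the javap example above:

    import java.lang.reflect.Method;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class JarInspect {
        public static void main(String[] args) throws Exception {
            // Hypothetical jar and class, same as the javap invocation above
            try (URLClassLoader loader = new URLClassLoader(
                    new URL[] { new URL("file:myjar.jar") })) {
                Class<?> cls = loader.loadClass("mypackage.MyClass");
                for (Method m : cls.getDeclaredMethods()) {
                    System.out.println(m); // one line per method signature
                }
            }
        }
    }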

  • Photoshop CS6 keeps freezing when I work with large files

    I've had problems with Photoshop CS6 freezing on me and giving me RAM and scratch disk alerts/warnings ever since I upgraded to Windows 8. This usually only happens when I work with large files; however, once I work with a large file, I can't seem to work with any file at all that day. Today I received my first error in which Photoshop says that it has stopped working. I thought posting the event info about the error might help someone help me. The log info is as follows:
    General info
    Faulting application name: Photoshop.exe, version: 13.1.2.0, time stamp: 0x50e86403
    Faulting module name: KERNELBASE.dll, version: 6.2.9200.16451, time stamp: 0x50988950
    Exception code: 0xe06d7363
    Fault offset: 0x00014b32
    Faulting process id: 0x1834
    Faulting application start time: 0x01ce6664ee6acc59
    Faulting application path: C:\Program Files (x86)\Adobe\Adobe Photoshop CS6\Photoshop.exe
    Faulting module path: C:\Windows\SYSTEM32\KERNELBASE.dll
    Report Id: 2e5de768-d259-11e2-be86-742f68828cd0
    Faulting package full name:
    Faulting package-relative application ID:
    I really hope to hear from someone soon; my job requires me to work with Photoshop every day, I run into errors and bugs almost constantly, and the help I've received so far from people in my office doesn't seem to make much difference. I'll be checking in regularly, so if you need any further details or need me to elaborate on anything, I should be able to get back to you fairly quickly.
    Thank you.

    Here you go Conroy.  These are probably a mess after various attempts at getting help.

  • Wpg_docload fails with "large" files

    Hi people,
    I have an application that allows the user to query and download files stored in an external application server that exposes its functionality via webservices. There's a lot of overhead involved:
    1. The user queries the file from the application and gets a link that allows her to download the file. She clicks on it.
    2. Oracle submits a request to the webservice and gets a XML response back. One of the elements of the XML response is an embedded XML document itself, and one of its elements is the file, encoded in base64.
    3. The embedded XML document is extracted from the response, and the contents of the file are stored into a CLOB.
    4. The CLOB is converted into a BLOB.
    5. The BLOB is pushed to the client.
    Problem is, it only works with "small" files, less than 50 KB. With "large" files (more than 50 KB), the user clicks on the download link and about one second later gets:
    The requested URL /apex/SCHEMA.GET_FILE was not found on this server
    When I run the webservice outside Oracle, it works fine. I suppose it has to do with PGA/SGA tuning.
    It looks a lot like the problem described in this Ask Tom question.
    Here's my slightly modified code (XMLRPC_API is based on Jason Straub's excellent Flexible Web Service API: http://jastraub.blogspot.com/2008/06/flexible-web-service-api.html):
    CREATE OR REPLACE PROCEDURE get_file ( p_file_id IN NUMBER )
    IS
        l_url                  VARCHAR2( 255 );
        l_envelope             CLOB;
        l_xml                  XMLTYPE;
        l_xml_cooked           XMLTYPE;
        l_val                  CLOB;
        l_length               NUMBER;
        l_filename             VARCHAR2( 2000 );
        l_filename_with_path   VARCHAR2( 2000 );
        l_file_blob            BLOB;
    BEGIN
        SELECT FILENAME, FILENAME_WITH_PATH
          INTO l_filename, l_filename_with_path
          FROM MY_FILES
         WHERE FILE_ID = p_file_id;
        l_envelope := q'!<?xml version="1.0"?>!';
        l_envelope := l_envelope || '<methodCall>';
        l_envelope := l_envelope || '<methodName>getfile</methodName>';
        l_envelope := l_envelope || '<params>';
        l_envelope := l_envelope || '<param>';
        l_envelope := l_envelope || '<value><string>' || l_filename_with_path || '</string></value>';
        l_envelope := l_envelope || '</param>';
        l_envelope := l_envelope || '</params>';
        l_envelope := l_envelope || '</methodCall>';
        l_url := 'http://127.0.0.1/ws/xmlrpc_server.php';
        -- Download XML response from webservice. The file content is in an embedded XML document encoded in base64
        l_xml := XMLRPC_API.make_request( p_url      => l_url,
                                          p_envelope => l_envelope );
        -- Extract the embedded XML document from the XML response into a CLOB
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/methodResponse/params/param/value/string/text()').getclobval(), 1 );
        -- Make a XML document out of the extracted CLOB
        l_xml := xmltype.createxml( l_val );
        -- Get the actual content of the file from the XML
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/downloadResult/contents/text()').getclobval(), 1 );
        -- Convert from CLOB to BLOB
        l_file_blob := XMLRPC_API.clobbase642blob( l_val );
        -- Figure out how big the file is
        l_length    := DBMS_LOB.getlength( l_file_blob );
        -- Push the file to the client
        owa_util.mime_header( 'application/octet-stream', FALSE );
        htp.p( 'Content-length: ' || l_length );
        htp.p( 'Content-Disposition: attachment;filename="' || l_filename || '"' );
        owa_util.http_header_close;
        wpg_docload.download_file( l_file_blob );
    END get_file;
    /
    I'm running XE, PGA is 200 MB, SGA is 800 MB. Any ideas?
    Regards,
    Georger

    Script: http://www.indesignsecrets.com/downloads/MultiPageImporter2.5JJB.jsx.zip
    It works great for files up to ~400 pages; with more pages than that, I get the crash at around page 332.
    Thanks

  • How to transfer large files (1GB) to PC

    How to transfer large files (1GB) to PC

    Or possibly alternatively, and if really desperate, upload it to a file distribution service like Fileserve or Rapidshare and then download it from the other machine (and then delete the upload). Many of these services have a free mode, although they may have size limitations and they certainly throttle the download speed (Rapidshare may be one of the worst, Fileserve one of the best). Personally I have never tried something the size of 3GB. What I do see is that stuff that large is generally broken up into multiple files to get around the size limitations, to be glued back together when all the parts are downloaded.
    Just throwing this "out there" as an alternative, but as I said, you probably would need to be really desperate to go this route.

  • Working with Large files in Photoshop 10

    I am taking pictures with a 4X5 large format film camera and scanning them at 3,000 DPI, which is creating extremely large files. My goal is to take them into Photoshop Elements 10 to cleanup, edit, merge photos together and so on. The cleanup tools don't seem to work that well on large files. My end result is to be able to send these pictures out to be printed at large sizes up to 40X60. How can I work in this environment and get the best print results?

    You will need to work with 8-bit files to get the benefit of all the editing tools in Elements.
    I would suggest resizing at a resolution of 300 ppi, although you can use much lower resolutions for really large prints that will be viewed from a distance, e.g. hung on a gallery wall.
    For a 40x60 print that works out to an image size of 12,000 x 18,000 pixels (40 in x 300 ppi by 60 in x 300 ppi), which matches an original aspect ratio of 2:3.
    Use the top menu:
    Image >> Resize >> Image Size

  • Are the brushes in Photoshop CC faster than CS6 - still need to use CS5 for large files

    Hey,
    Are the brushes in Photoshop CC any faster than in Photoshop CS6?
    Here's my standard large file, which makes the CS6 brushes crawl:
    iPad 3 size - 2048 x 1536
    About 20-100 layers
    A combination of vector and bitmap layers
    Many of the layers use layer styles
    On a file like this there is a hesitation to every brush stroke in CS6. Even a basic round brush has the same hesitation; it doesn't have to be a brush as elaborate as a mixer brush.
    This hesitation happens on both the Mac and PC, on systems with 16 GB of RAM. Many of my coworkers have the same issue.
    So, for a complicated file, such as a map with many parts, I ask my coworkers to please work in CS5. If they work in CS6, I ask them not to use any CS6-only features, such as group layer styles. The only reason one of them might want to use CS6 is that they're working on only a small portion of the map, such as a building; the rest of the layers are flattened in their file.
    Just wondering if there has ever been a resolution to this problem...or this is just the way it is.
    Thanks for your help!

    BOILERPLATE TEXT:
    Note that this is boilerplate text.
    If you give complete and detailed information about your setup and the issue at hand, such as:
    - your platform (Mac or Win),
    - exact versions of your OS, of Photoshop (not just "CS6", but something like CS6 v13.0.6) and of Bridge,
    - your settings in Photoshop > Preferences > Performance,
    - the type of file you were working on,
    - machine specs, such as total installed RAM, scratch file HDs, total available HD space, video card specs, including total VRAM installed,
    - what troubleshooting steps you have taken so far,
    - what error message(s) you receive,
    - if having issues opening raw files, also the exact camera make and model that generated them,
    - if you're having printing issues, the exact make and model of your printer, paper size, image dimensions in pixels (so many pixels wide by so many pixels high), and if going through a RIP, specify that too,
    then someone may be able to help you (not necessarily this poster, who is not a Windows user).
    A screenshot of your settings or of the image could be very helpful too.
    Please read this FAQ for advice on how to ask your questions correctly for quicker and better answers:
    http://forums.adobe.com/thread/419981?tstart=0
    Thanks!

  • Dealing with large files, again

    Ok, so I've looked into using BufferedReaders and can't get my head round them; or more specifically, I can't work out how to apply them to my code.
    I have inserted a section of my code below, and want to change it so that I can read in large files (of over 5 million lines of text). I am reading the data into different arrays and then processing them. Obviously, when reading in such large files, my arrays fill up and the program fails.
    Can anyone suggest how to read the file into a buffer, deal with a set amount of data, process it, empty the arrays, then read in the next lot?
    Any ideas?
    void readV2(){
        String line;
        int i = 0, lineNo = 0;
        try {
            // Create input stream
            FileReader fr = new FileReader(inputFile);
            BufferedReader buff = new BufferedReader(fr);
            while ((line = buff.readLine()) != null) {
                if (line.substring(0, 2).equals("V2")) {
                    lineNo = lineNo + 1;
                    IL[i] = Integer.parseInt(line.substring(8, 15).trim());
                    // Other processing here
                    NoOfPairs = NoOfPairs + 1;
                } // end if
                else {
                    break;
                } // end else
            } // end while
            buff.close();
            fr.close();
        } // end try
        catch (IOException e) {
            log.append("IOException error in readESSOV2XY" + e + newline);
            proceed = false;
        } // end catch IOException
        catch (ArrayIndexOutOfBoundsException e) {
            arrayIndexOutOfBoundsError(lineNo);
        } // end catch ArrayIndexOutOfBoundsException
        catch (StringIndexOutOfBoundsException e) {
            stringIndexOutOfBoundsError(e.getMessage(), lineNo);
        } // end catch StringIndexOutOfBoundsException
    } // end readV2
    Many thanks for any help!
    Tim

    Yeah, ok, so that seems simple enough. But once I have read part of the file into my program, I need to call another method to deal with the data I have read in and write it out to an output file. How do I get my file reader to "remember" where I am up to in the file I'm reading? An obvious way, but possibly not too good technically, would be to set a counter and, when I go back to the file reader, skip that number of lines in the input file. This just doesn't seem too efficient, which is critical when it comes to dealing with such large files (i.e. several million lines long).

    I think you might need to change the way you are thinking about streams. The objective of a stream is to read and process data at the same time.
    I would recommend that you re-think your algorithm : instead of reading the whole file, then doing your processing - think about how you could read a line and process a line, then read the next line, etc...
    By working on just the pieces of data that you have just read, you can process huge files with almost no memory requirements.
    As a rule of thumb, if you ever find yourself creating huge arrays to hold data from a file, chances are pretty good that there is a better way. Sometimes you need to buffer things, but very rarely do you need to buffer such huge pieces.
    - K
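    To make that concrete, here is a minimal sketch of the read-a-line, process-a-line pattern described above (processLine is a hypothetical stand-in for your per-record work, and the file names are made up):

    import java.io.*;

    public class V2Processor {
        // Hypothetical per-record work; replace with your own processing
        static String processLine(String line) {
            return line.substring(8, 15).trim();
        }

        public static void main(String[] args) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader("input.dat"));
                 PrintWriter out = new PrintWriter(new FileWriter("output.dat"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (!line.startsWith("V2")) break;
                    // Process each record as it is read instead of filling arrays
                    out.println(processLine(line));
                }
            }
        }
    }

    The reader keeps track of its own position, so there is no need to count lines and skip ahead on a second pass; each readLine() simply continues where the previous one left off.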

  • Mounting CIFS on MAC with large file support

    Dear All,
    We are having issues copying large files (> 3.5 GB) from a Mac to a CIFS share (SMB-mounted on the Mac): the copy fails when files are larger than 3.5 GB in size. I was wondering if there is any special way to mount CIFS shares (a special option in the mount_smbfs command, perhaps) to support large file transfers.
    Currently we mount the share using the command below
    mount_smbfs //user@server/<share> /destinationdir_onMAC

    If you haven't already, I would suggest trying an evaluation of DAVE from Thursby Software. The eval is free, fully functional, and supported.
    DAVE is able to handle large file transfers without interruption or data loss when connecting to Windows shared folders. If it turns out that it doesn't work as well as you like, you can easily remove it with the uninstaller.
    (And yes, I work for Thursby, and have supported DAVE since 1998)

  • DW MX 2004 Slow with large files?

    I'm using DW MX 2004 with a static website with a few thousand files. It's very slow when opening multiple files at the same time or opening a large HTML file. But DW4 is fine, nice and quick compared with DW MX 2004.
    Is there any way to help DW MX 2004 work better with these files?
    Many thanks, Craig.

    Many thanks. But I'm already running the 7.0.1 update.
    "Randy Edmunds" <[email protected]> wrote in message news:eggcl0$6ke$[email protected]...
    > Be sure that you've installed the 7.0.1 updater. There were a few
    > performance fixes, especially on the Mac, that may help your workflow.
    >
    > HTH,
    > Randy
    >
    >
    >> I'm using DW MX 2004 with a static website with a few thousand files.
    >> It's very slow when opening multiple files at the same time or
    >> opening a large html file.
    >> But DW4 is fine, nice and quick compared with DW MX 2004.
    >>
    >> Is there any way to help DW MX 2004 work better with these files?

  • IdcApache2Auth.so Compiled With Large File Support

    Hi, I'm installing UCM 10g on a Solaris 64-bit platform with Apache 2.0.63. Everything went fine until I updated the configuration in the httpd.conf file. When I query the server status it seems to be OK:
    ./idcserver_query
    Success checking Content Server  idc status. Status:  Running
    but in the Apache error_log I found the following error description:
    Content Server Apache filter detected a bad request_rec structure. This is possibly a problem with LFS (large file support). Bad request_rec: uri=NULL;
    Sizing information:
    sizeof(*r): 392
    [int]sizeof(r->chunked): 4
    [apr_off_t]sizeof(r->clength): 4
    [unsigned]sizeof(r->expecting_100): 4
    If the above size for r->clength is equal to 4, then this module
    was compiled without LFS, which is the default on Apache 1.3 and 2.0.
    Most likely, Apache was compiled with LFS, this has been seen with some
    stock builds of Apache. Please contact Support to obtain an alternate
    build of this module.
    When I searched My Oracle Support for suggestions on how to solve the problem, I found a thread which basically says that the Oracle ECM support team can give me a copy of IdcApache2Auth.so compiled with LFS.
    What do you suggest?
    Should I ask the ECM support team for help? (If yes, please tell me how.)
    Or should I update the Apache web server to version 2.2 and use IdcApache22Auth.so, which is compiled with LFS?
    Thanks in advance, I hope you can help me.

    Hi,
    The easiest approach would be to use Apache 2.2 and the corresponding IdcApache22Auth.so file.
    Thanks
    Srinath

  • Read parameter error -50 with larger file - please help

    I have this line of code: (read sfRef from SourcePosition as data)
    It works fine with these 811.2MB files but when I try to read from a 1.62GB file I get parameter error -50. Did I miss something about larger files?

    Found it!
    You can't read more than 1 GB at a time. I had it grab the data in multiple pieces and it works now.
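    The thread above is AppleScript, but the workaround is language-independent: read the file in fixed-size pieces rather than one giant read. As a hedged illustration in Java (the file name and chunk size are made up):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class ChunkedReader {
        public static void main(String[] args) throws IOException {
            final int CHUNK = 64 * 1024 * 1024; // 64 MB per read, well under 1 GB
            try (RandomAccessFile raf = new RandomAccessFile("big.aif", "r")) {
                byte[] buf = new byte[CHUNK];
                long remaining = raf.length();
                while (remaining > 0) {
                    int toRead = (int) Math.min(CHUNK, remaining);
                    raf.readFully(buf, 0, toRead);
                    // Handle buf[0..toRead) here before reading the next piece
                    remaining -= toRead;
                }
            }
        }
    }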

  • Sample editor display jumps to file begin on zoom with large files

    Hey - does anyone else have this problem:
    When I zoom in to the max possible zoom level in the sample editor in Logic (eg to edit single samples with the pencil), in any audio file longer than 12:41, the waveform display suddenly jumps to very beginning of the file.
    It happens using either the zoom-in key, zoom slider or zoom tool. It is somewhat infuriating because I have to guess when to stop pressing zoom-in to get as close as i can without triggering the jump. (If i go one zoom level too far, I have to go back, zoom out and re-find my place and try again.)
    I did some investigation and the bug in zoom behavior starts happening with audio files a little shy of 12 min 41 sec (12:40.871, to be more exact).
    Here are the results in "length in samples" of a test audio file (AIFF 24-bit Stereo, 44100Hz):
    33554453 samples and greater => sample editor jumps to beginning when zoomed in to max
    33554432 - 33554452 samples => sample editor jumps to END of file when zoomed to max (bizarre, eh? a 20-sample window in which the bug works in the OPPOSITE direction!)
    33554431 samples and less => sample editor zoom is normal and zooms in perfectly to the proper location at max zoom.
    I also tested other things like trashing my logic prefs and starting from an empty song with nothing in it - none of which make any difference. This bug is present in Logic 8.0.2 on both my Macbook Pro with OS X 10.5.7 and my Powerbook G4 with 10.4.11
    Maybe time to report this to apple - can anyone corroborate by just continually pressing your zoom-in key with the cursor in the middle of an audio file longer than 12:41 and see if the display jumps to the beginning?
    Thanks!

    bump?
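    For what it's worth, the 33,554,432-sample threshold is exactly 2^25, which points at a 32-bit integer overflow somewhere in the zoom arithmetic. Purely as a hypothetical illustration (the pixels-per-sample factor is made up):

    public class ZoomOverflowDemo {
        public static void main(String[] args) {
            int samples = 33554432;       // 2^25, the observed threshold
            int pixelsPerSample = 64;     // hypothetical max-zoom factor
            int pos32 = samples * pixelsPerSample;         // 2^31 wraps negative
            long pos64 = (long) samples * pixelsPerSample; // correct
            System.out.println("32-bit position: " + pos32); // -2147483648
            System.out.println("64-bit position: " + pos64); //  2147483648
        }
    }

    A negative position clamped to zero would look exactly like a jump to the start of the file.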

  • XSLT with large file size

    Hello all,
    I am very new to XSL. I have been reading a lot on it. I have a small transformation program which takes an XML file and an XSL file and performs the transformation. Everything works fine when the file size is in the KB range. When I tried the same program with a file size of around 118MB, it gives me an out of memory exception. I would appreciate any comments on making my program work for bigger file sizes. I am posting my Java code and XSL file.
    public static void xsl(String inFilename, String outFilename, String xslFilename) {
        try {
            // Create transformer factory
            TransformerFactory factory = TransformerFactory.newInstance();
            // Use the factory to create a template containing the xsl file
            Templates template = factory.newTemplates(new StreamSource(
                    new FileInputStream(xslFilename)));
            // Use the template to create a transformer
            Transformer xformer = template.newTransformer();
            // Prepare the input and output files
            Source source = new StreamSource(new FileInputStream(inFilename));
            Result result = new StreamResult(new FileOutputStream(outFilename));
            // Apply the xsl file to the source file and write the result to the output file
            xformer.transform(source, result);
            System.out.println("No Exception");
        } catch (FileNotFoundException e) {
            System.out.println("Exception " + e);
        } catch (TransformerConfigurationException e) {
            // An error occurred in the XSL file
            System.out.println("Exception " + e);
        } catch (TransformerException e) {
            // An error occurred while applying the XSL file
            // Get location of error in input file
            SourceLocator locator = e.getLocator();
            int col = locator.getColumnNumber();
            int line = locator.getLineNumber();
            String publicId = locator.getPublicId();
            String systemId = locator.getSystemId();
            System.out.println("Exception " + e);
            System.out.println("locator " + locator.toString());
            System.out.println("line : " + line);
            System.out.println("col : " + col);
        }
    }
    XSL file:
    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:output method="xml" indent="yes"/>
    <xsl:template match="/">
    <xsl:element name="Hosts">
    <xsl:apply-templates select="//maps/map/hosts"/>
    </xsl:element>
    </xsl:template>
    <xsl:template match="//maps/map/hosts">
    <xsl:for-each select="*">
    <xsl:element name="host">
    <xsl:element name="ip"><xsl:value-of select="./@ip"/></xsl:element>
    <xsl:element name="known"><xsl:value-of select="./known/@v"/></xsl:element>
    <xsl:element name="targeted"><xsl:value-of select="./targeted/@v"/></xsl:element>
    <xsl:element name="asn"><xsl:value-of select="./asn/@v"/></xsl:element>
    <xsl:element name="reverse_dns"><xsl:value-of select="./reverse_dns/@v"/></xsl:element>
    </xsl:element>
    </xsl:for-each>
    </xsl:template>
    </xsl:stylesheet>
    Thanks,
    Namrata

    One thing you could try is to avoid XPath expressions like ".//" and "*".
    I had many problems in terms of memory consumption and performance with XPaths like those.
    Although it is a little more work to code your XSLT with explicit paths, it performs better, and you will probably write it once and run it many times.
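    If the inputs keep growing, another option beyond tightening the XPath is to step through the document with a streaming parser such as StAX, which never builds the whole tree in memory. A rough sketch, assuming (hypothetically) that each host entry carries an ip attribute as in the stylesheet above:

    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class HostsExtractor {
        public static void main(String[] args) throws Exception {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            XMLStreamReader reader = factory.createXMLStreamReader(
                    new FileInputStream("hosts.xml")); // hypothetical input file
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    String ip = reader.getAttributeValue(null, "ip");
                    if (ip != null) {
                        // One host entry; handle it, then let it be garbage-collected
                        System.out.println("ip = " + ip);
                    }
                }
            }
            reader.close();
        }
    }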
