IdcApache2Auth.so Compiled With Large File Support

Hi, I'm installing UCM 10g on a Solaris 64-bit platform with Apache 2.0.63. Everything went fine until I updated the configuration in the httpd.conf file. When I query the server status it seems to be OK:
./idcserver_query
Success checking Content Server  idc status. Status:  Running
but in the Apache error_log I found the following error description:
Content Server Apache filter detected a bad request_rec structure. This is possibly a problem with LFS (large file support). Bad request_rec: uri=NULL;
Sizing information:
sizeof(*r): 392
[int]sizeof(r->chunked): 4
[apr_off_t]sizeof(r->clength): 4
[unsigned]sizeof(r->expecting_100): 4
If the above size for r->clength is equal to 4, then this module
was compiled without LFS, which is the default on Apache 1.3 and 2.0.
Most likely, Apache was compiled with LFS, this has been seen with some
stock builds of Apache. Please contact Support to obtain an alternate
build of this module.
When I searched My Oracle Support for suggestions on how to solve my problem, I found a thread which basically says that the Oracle ECM support team can give me a copy of IdcApache2Auth.so compiled with LFS.
What do you suggest?
Should I ask the ECM support team for help? (If yes, please tell me how.)
Or should I upgrade the Apache web server to version 2.2 and use IdcApache22Auth.so, which is compiled with LFS?
Thanks in advance, I hope you can help me.

Hi,
The easiest approach would be to use Apache 2.2 and the corresponding IdcApache22Auth.so file.
Thanks
Srinath

Similar Messages

  • Mounting CIFS on MAC with large file support

    Dear All,
We are having issues copying large files (> 3.5 GB) from a Mac to a CIFS share (SMB-mounted on the Mac): the copy fails if files are larger than 3.5 GB in size. I was wondering if there is any special way to mount CIFS shares (a special option to the mount_smbfs command, perhaps) to support large file transfers?
    Currently we mount the share using the command below
    mount_smbfs //user@server/<share> /destinationdir_onMAC

    If you haven't already, I would suggest trying an evaluation of DAVE from Thursby Software. The eval is free, fully functional, and supported.
DAVE is able to handle large file transfers without interruption or data loss when connecting to Windows shared folders. If it turns out that it doesn't work as well as you'd like, you can easily remove it with the uninstaller.
    (And yes, I work for Thursby, and have supported DAVE since 1998)

  • [REQ] gpac with Large File Support (solved)

    Hi All
I'm getting desperate.
I'm using mp4box from the gpac package to mux x264 and AAC into MP4 containers.
Unfortunately, no mp4box version supports files bigger than 2 GB.
    There is already a discussion going on at the bugtracker
    http://sourceforge.net/tracker/index.ph … tid=571738
I tried all those methods with gpac 4.4 and the CVS version.
But it still breaks after 2 GB when importing files with mp4box.
So... anybody have an idea how to get a build on Arch which supports big files?
    thanks
    Last edited by mic64 (2007-07-16 17:16:44)

OK, after looking at this stuff with patience I got it working.
You have to use the CVS version AND the patch AND the extra flags from the link above.
After that, files > 2 GB work.
    Last edited by mic64 (2007-07-16 17:27:33)

  • (urgent) SQL*Loader Large file support in O734

hi there,
I have the following SQL*Loader error when trying to upload data files,
each 10 GB - 20 GB in size, to an Oracle 7.3.4 DB on SunOS 5.6:
    >>
    SQL*Loader-500: Unable to open file (..... /tstt.dat)
    SVR4 Error: 79: Value too large for defined data type
    <<
I know there's a bug fix for large file support in Oracle 8 -
    >>
    Oracle supports files over 2GB for the oracle executable.
    Contact Worldwide Support for information about fixes for bug 508304,
    which will add large file support for imp, exp, and sqlldr
    <<
However, I really want to know: is there any fix for Oracle 7.3.4?
    thx.

    Example
    Control file
    C:\DOCUME~1\MAMOHI~1>type dept.ctl
    load data
    infile dept.dat
    into table dept
    append
    fields terminated by ',' optionally enclosed by '"'
    trailing nullcols
    (deptno integer external,
    dname char,
    loc char)
    Data file
    C:\DOCUME~1\MAMOHI~1>type dept.dat
    50,IT,VIKARABAD
    60,INVENTORY,NIZAMABAD
    C:\DOCUME~1\MAMOHI~1>
    C:\DOCUME~1\MAMOHI~1>dir dept.*
    Volume in drive C has no label.
    Volume Serial Number is 9CCC-A1AF
    Directory of C:\DOCUME~1\MAMOHI~1
    09/21/2006  08:33 AM               177 dept.ctl
    04/05/2007  12:17 PM                41 dept.dat
                   2 File(s)          8,043 bytes
                   0 Dir(s)   1,165 bytes free
    Intelligent sqlldr command
    C:\DOCUME~1\MAMOHI~1>sqlldr userid=hary/hary control=dept.ctl
    SQL*Loader: Release 10.2.0.1.0 - Production on Thu Apr 5 12:18:26 2007
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Commit point reached - logical record count 2
    C:\DOCUME~1\MAMOHI~1>sqlplus hary/hary
    SQL*Plus: Release 10.2.0.1.0 - Production on Thu Apr 5 12:18:37 2007
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    As I am appending I got two extra rows. One department in your district and another in my district :)
    SQL> select * from dept;
        DEPTNO DNAME          LOC
            10 ACCOUNTING     NEW YORK
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            40 OPERATIONS     BOSTON
            50 IT             VIKARABAD
            60 INVENTORY      NIZAMABAD
    6 rows selected.
    SQL>
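For what it's worth, a common stopgap on systems where the client tools themselves lack LFS is to split the data file into pieces below the 2 GB limit and load each piece separately with the same control file in append mode. A hypothetical sketch (the file, control file, and credentials here are assumptions, not from the original post; the demo uses a tiny stand-in file):

```shell
# In practice tstt.dat would be 10-20 GB and the chunk size under 2 GB
# (e.g. -b 1024m); here a tiny stand-in file keeps the demo runnable.
printf '1,ALPHA\n2,BETA\n' > tstt.dat

# Split into chunks a non-LFS sqlldr can open; pieces get suffixes aa, ab, ...
split -b 1024m tstt.dat tstt_part_

# Echo the (hypothetical) load command for each chunk; run it for real
# once the control file and credentials are verified.
for f in tstt_part_*; do
  echo "sqlldr userid=scott/tiger control=tstt.ctl data=$f"
done
```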

  • Large File Support OEL 5.8 x86 64 bit

    All,
I have downloaded some software from OTN and would like to combine the individual zip files into one large file. I assumed that I would unzip the individual files and then create a single zip file. However, my OEL zip utility fails with a "file too large" error.
Perhaps related, I can't ls the directories that contain huge files. For example, a huge VM image file.
Do I need to rebuild the utilities to support huge files, or do I need to install/upgrade RPMs? If I need to relink the programs, what are the commands to do so?
    Thanks.

Correction: this is occurring on a 5.7 release, not 5.8. The command uname -a yields 2.6.32-200.13.1.2...
The file size I'm trying to assemble is ~4 GB (the zip files for the 12c DB). As for using tar: as I understand it, EM 12c wants a single zip file for software deployments.
I'm assuming that large file support was not linked into the utilities. For example, when I try to list the contents of a directory which has huge/large files, I get the following message:
ls:  V29653-01.iso:  Value too large for defined data type
I'm assuming that I need to upgrade to a later release.
    Thanks.
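One quick, non-destructive probe (a sketch, not an official Oracle diagnostic) is to ask the system what file-offset width it supports before rebuilding anything:

```shell
# FILESIZEBITS reports the bits usable for file sizes on a given filesystem;
# 64 means the filesystem itself is not the limiting factor, so the problem
# would be in how the individual utilities (ls, zip, ...) were compiled.
getconf FILESIZEBITS /

# LFS_CFLAGS shows the compile flags a program would need for large file
# support; it may print nothing on platforms where LFS is already the default.
getconf LFS_CFLAGS
```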

  • Large File Support

    Hi all,
I'm new to the forums but not new to the products. Recently
I've been working on a project that requires very large XML files,
over 200,000 lines of code. I have noticed that when I work on
these large files the software is really buggy. First, I can't use
the scroll wheel on my mouse; it just doesn't work. Second, half
the time when I open a file it will truncate the code after a few
hundred thousand lines. I don't think it should do this; Visual
Studio sure doesn't. I would rather work with Dreamweaver, but
because of these problems with large files I am forced to use Visual
Studio. So here's the question: is there something I should do to
make Dreamweaver work with large files, or is this something that
Adobe needs to fix?
Thanks for the replies.

    man mount_ufs
    -o largefiles | nolargefiles

  • Photoshop CS6 keeps freezing when I work with large files

I've had problems with Photoshop CS6 freezing on me and giving me RAM and scratch disk alerts/warnings ever since I upgraded to Windows 8. This usually only happens when I work with large files; however, once I work with a large file, I can't seem to work with any file at all that day. Today I received my first error in which Photoshop says that it has stopped working. I thought that if I post the event info about the error, it might help someone to help me. The log info is as follows:
    General info
    Faulting application name: Photoshop.exe, version: 13.1.2.0, time stamp: 0x50e86403
    Faulting module name: KERNELBASE.dll, version: 6.2.9200.16451, time stamp: 0x50988950
    Exception code: 0xe06d7363
    Fault offset: 0x00014b32
    Faulting process id: 0x1834
    Faulting application start time: 0x01ce6664ee6acc59
    Faulting application path: C:\Program Files (x86)\Adobe\Adobe Photoshop CS6\Photoshop.exe
    Faulting module path: C:\Windows\SYSTEM32\KERNELBASE.dll
    Report Id: 2e5de768-d259-11e2-be86-742f68828cd0
    Faulting package full name:
    Faulting package-relative application ID:
    I really hope to hear from someone soon, my job requires me to work with Photoshop every day and I run into errors and bugs almost constantly and all of the help I've received so far from people in my office doesn't seem to make much difference at all.  I'll be checking in regularly, so if you need any further details or need me to elaborate on anything, I should be able to get back to you fairly quickly.
    Thank you.

    Here you go Conroy.  These are probably a mess after various attempts at getting help.

  • Emacs compiled with S/MIME support in Arch Linux

    I have been trying without success to get Emacs 23.2 and Gnus 5.13 in Arch Linux to generate messages signed with S/MIME certificates, which are frequently used by the U.S. government and by businesses.  My configuration works fine on my Ubuntu workstation (as opposed to my Arch Linux laptop).  The error I usually get is "smime-sign-buffer is void."  I am beginning to wonder whether Emacs/Gnus on Arch is compiled with S/MIME support.  Does anyone happen to know, or has anyone had any success getting signatures with S/MIME to work on Arch?
    Thanks,
    Bill Day

    I can't help you with emacs, but to find out what options the packages in the repos are compiled with you want abs. With abs you can then use the same PKGBUILD that was used to make the official packages, and change only what you need to change, as opposed to starting from scratch when compiling.
    A few useful links:
    http://wiki.archlinux.org/index.php/Creating_Packages
    http://wiki.archlinux.org/index.php/Makepkg

  • Wpg_docload fails with "large" files

    Hi people,
    I have an application that allows the user to query and download files stored in an external application server that exposes its functionality via webservices. There's a lot of overhead involved:
    1. The user queries the file from the application and gets a link that allows her to download the file. She clicks on it.
    2. Oracle submits a request to the webservice and gets a XML response back. One of the elements of the XML response is an embedded XML document itself, and one of its elements is the file, encoded in base64.
    3. The embedded XML document is extracted from the response, and the contents of the file are stored into a CLOB.
    4. The CLOB is converted into a BLOB.
    5. The BLOB is pushed to the client.
    Problem is, it only works with "small" files, less than 50 KB. With "large" files (more than 50 KB), the user clicks on the download link and about one second later, gets a
The requested URL /apex/SCHEMA.GET_FILE was not found on this server
When I run the webservice outside Oracle, it works fine. I suppose it has to do with PGA/SGA tuning.
    It looks a lot like the problem described at this Ask Tom question.
Here's my slightly modified code (XMLRPC_API is based on Jason Straub's excellent Flexible Web Service API, http://jastraub.blogspot.com/2008/06/flexible-web-service-api.html):
    CREATE OR REPLACE PROCEDURE get_file ( p_file_id IN NUMBER )
    IS
        l_url                  VARCHAR2( 255 );
        l_envelope             CLOB;
        l_xml                  XMLTYPE;
        l_xml_cooked           XMLTYPE;
        l_val                  CLOB;
        l_length               NUMBER;
        l_filename             VARCHAR2( 2000 );
        l_filename_with_path   VARCHAR2( 2000 );
        l_file_blob            BLOB;
    BEGIN
        SELECT FILENAME, FILENAME_WITH_PATH
          INTO l_filename, l_filename_with_path
          FROM MY_FILES
         WHERE FILE_ID = p_file_id;
        l_envelope := q'!<?xml version="1.0"?>!';
        l_envelope := l_envelope || '<methodCall>';
        l_envelope := l_envelope || '<methodName>getfile</methodName>';
        l_envelope := l_envelope || '<params>';
        l_envelope := l_envelope || '<param>';
        l_envelope := l_envelope || '<value><string>' || l_filename_with_path || '</string></value>';
        l_envelope := l_envelope || '</param>';
        l_envelope := l_envelope || '</params>';
        l_envelope := l_envelope || '</methodCall>';
        l_url := 'http://127.0.0.1/ws/xmlrpc_server.php';
        -- Download XML response from webservice. The file content is in an embedded XML document encoded in base64
        l_xml := XMLRPC_API.make_request( p_url      => l_url,
                                          p_envelope => l_envelope );
        -- Extract the embedded XML document from the XML response into a CLOB
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/methodResponse/params/param/value/string/text()').getclobval(), 1 );
        -- Make a XML document out of the extracted CLOB
        l_xml := xmltype.createxml( l_val );
        -- Get the actual content of the file from the XML
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/downloadResult/contents/text()').getclobval(), 1 );
        -- Convert from CLOB to BLOB
        l_file_blob := XMLRPC_API.clobbase642blob( l_val );
        -- Figure out how big the file is
        l_length    := DBMS_LOB.getlength( l_file_blob );
        -- Push the file to the client
        owa_util.mime_header( 'application/octet', FALSE );
        htp.p( 'Content-length: ' || l_length );
        htp.p( 'Content-Disposition: attachment;filename="' || l_filename || '"' );
        owa_util.http_header_close;
        wpg_docload.download_file( l_file_blob );
    END get_file;
/
I'm running XE; PGA is 200 MB, SGA is 800 MB. Any ideas?
    Regards,
    Georger

    Script: http://www.indesignsecrets.com/downloads/MultiPageImporter2.5JJB.jsx.zip
It works great for files up to ~400 pages; with more pages than that, I get the crash at around page 332.
    Thanks

  • My iPhone 4s will not forward emails with large files, pictures etc attached. How can I fix this?

    How do I get my iPhone to forward or send large files? Such as pictures.

Check Settings > Messages > MMS Messaging to see if it's set to "On". Check with your carrier to see that your account is properly provisioned. Check to see if your carrier is a supported iPhone carrier. Not all features work, or work properly, on unsupported carriers.
    Best of luck.

  • Two work-arounds for Siena when working with large files

MonaTech pointed out that Project Siena will crash when switching to the desktop or another app. I've also noticed this happening even when the monitor goes to sleep from inactivity. To be clear, this is while *developing* in Siena, not after the app you're working on has been compiled and installed.
The "non-technical" work-around that has been successful for me is to split the screen before I switch to another app. It's not perfect, but at least I can switch to another app (browser, desktop app, etc.) without losing where I'm at.
A more technical work-around is proposed in this thread: http://social.technet.microsoft.com/Forums/en-US/c768be8f-3c85-444e-bb44-6f29abdecee7/project-siena-app-memory-issues-with-large-project-?forum=projectsiena
    If you have a version of Visual Studio (that's not Express) it may do the trick for you.
    FYI - In the post you'll see that Olivier responded and indicates that this is a known issue.
    Thor

    Wow - that's what you get for multi-tasking! :)
    The link is now fixed in my original post and posted here for convenience:
    http://social.technet.microsoft.com/Forums/en-US/c768be8f-3c85-444e-bb44-6f29abdecee7/project-siena-app-memory-issues-with-large-project-?forum=projectsiena
    Thor

  • Working with Large files in Photoshop 10

I am taking pictures with a 4x5 large-format film camera and scanning them at 3,000 DPI, which creates extremely large files. My goal is to take them into Photoshop Elements 10 to clean up, edit, merge photos together and so on. The cleanup tools don't seem to work that well on large files. My end result is to be able to send these pictures out to be printed at large sizes, up to 40x60. How can I work in this environment and get the best print results?

You will need to work with 8-bit files to get the benefit of all the editing tools in Elements.
I would suggest resizing at a resolution of 300 ppi, although you can use much lower resolutions for really large prints that will be viewed from a distance, e.g. hung on a gallery wall.
That should give you an image size of 12,000 x 18,000 pixels if the original aspect ratio is 2:3.
    Use the top menu:
    Image >> Resize >> Image Size
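The pixel figure quoted above follows directly from the print dimensions and resolution; a quick arithmetic check:

```python
# A 40 x 60 inch print at the suggested 300 ppi:
width_in, height_in, ppi = 40, 60, 300
width_px = width_in * ppi    # 40 * 300 = 12000
height_px = height_in * ppi  # 60 * 300 = 18000
print(f"{width_px} x {height_px} pixels")  # → 12000 x 18000 pixels
```

Lower resolutions scale the same way: the identical print at 150 ppi would need only 6,000 x 9,000 pixels.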

  • Probs with large files in adapter engine

    Hi all,
in a scenario I'm currently working on, I have to process files (from the file adapter). These files range from 1 KB up to 15 MB. So I set up the tuning parameters so that files larger than 2 MB are queued in the special queue for large messages. This works fine: small messages are processed directly, and the larger messages are queued and processed in sequence by the integration engine.
My problem is now at the point where the integration engine sends these large files to the adapter engine. There are always 4 messages sent in parallel to the adapter engine, and this slows down the whole system due to an additional self-developed module.
So my question is: can I restrict the sending of messages from the integration engine to the adapter engine to only 1 at a time?
Processing time is not important. I think I can handle this with EOIO, but is there another way? Perhaps depending on the queue?
    Thanks
    Olli

    Hi William,
    thx for your reply.
Yes, I think it is the easiest way. But the problem is, if a message runs into a failure, the others will not be processed until the failure is fixed or processing is canceled.
So I hoped to find a solution where I can restrict sending to the adapter engine to one message at a time without blocking the processing of the other messages, as EOIO does.
    Regards
    Olli

  • Bug report - Finder crashes in Cover Flow mode with large files

I just came back from the Apple Store and was told that I have discovered another bug in Leopard. When in Cover Flow view in Finder, trying to browse directories with large (multiple-GB) files, Finder continually crashes and relaunches, oftentimes with 2 new Finder windows.
I created a new user on my MBP to remove the effect of any preferences, and the problem repeated itself.
Come on, Apple... get on top of these bugs and issue some patches...
Not the kind of new OS I expected from a top-notch company like Apple.

Ah... that'll be it then; they are 512 x 512. Well, I guess that bug's been hanging around for quite some time now. Anyone got any idea if anyone's trying to fix it? I guess making 128 x 128 icons wouldn't be the end of the world, but it does seem like a step backwards.
One thing that still confuses me: an icon file contains icons of all sizes, so why does Cover Flow not select the right size icon to display?
    thanks for that info V.K., much obliged.
    regards, Pablo.

  • Apache 2.0.x and large file support (bigger than 2GB)

I've seen people on the mailing list complaining that Apache 2.0.55 can't serve files bigger than 2 GB.
The solution is to compile with this option:
export CPPFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64"
(and this is the default in 2.2.x)
Now, an Apache compiled with those options is most probably not binary compatible with mod_php or mod_python or other out-of-tree Apache modules, so they'll need recompiling too.
I've also noticed that /home/httpd/build/config_vars.mk needs to be patched, so that the PHP and Python modules pick up the setting when compiled.
You need to find the EXTRA_CPPFLAGS variable and add the two defines from above to it. At the end it should look something like this:
    EXTRA_CPPFLAGS = -DLINUX=2 -D_REENTRANT -D_XOPEN_SOURCE=500 -D_BSD_SOURCE -D_SVID_SOURCE -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
    BTW in lighttpd (well at least in 1.4.11) the --enable-lfs is enabled by default.
