2gb file size

While reading the release notes I noticed the comment "Linux
does not support files > 2gb". Does this mean the maximum file
size for a database is 2gb? Does Oracle have a way to get
around that? Most databases won't be any bigger than this, but
what's the solution?
Jim

Kai (guest) wrote:
: In my experience, the file limit for Oracle is not 2GB but
: 2000M. Take care with that limit, especially when using
: autoextend on files!
: Kai
: Pieter Holthuijsen (guest) wrote:
I've had the following experience with Oracle 8.0.5 and Linux.
At some point I created a data file with the max size of 2000M
(Oracle 8.0.5). Later my database grew larger than this limit and
I then added yet another data file and all seemed to work
perfectly. Much later when I had to do a maintenance shutdown of
the Oracle database, I ran into big problems on starting the
database again.
It failed to start because of a check that a data file must be a
multiple of 2Kb. This occurred because the maximum file size in
Linux is not 2Gb but (2Gb-1Kb), in other words no longer a
multiple of 2Kb. It is therefore not advisable to create Oracle
datafiles at 2000M; instead one should use smaller files, e.g.
1500M or similar.
I solved the problem with a little hack. Instead of deleting the
inconsistent data file and restoring it from backup, I observed
that there was no data at the end of the file; it only
contained zeroes for the last several Kb. I guess that Oracle
extended the file but failed to place data in it.
All I did was truncate the last 1Kb of the file, so that it was
again a multiple of 2Kb, and all was fine (Oracle started). I
then did an alter tablespace to resize the file so that I was
sure it would not happen again.
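A minimal sketch of the kind of resize involved (the file name and
size here are made up, and exact syntax depends on the Oracle release):
-- shrink the datafile back to a clean multiple of the block size,
-- well below the problematic 2000M mark
ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf' RESIZE 1500M;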
To me it sounds like a bug in Oracle (or Linux); I guess
Oracle failed to test this condition.
-- Martin

Similar Messages

  • Is there a 2GB file size limit on saving a custom format plugin for a custom file type?

    I have resolved issues on reading files > 2GB. Now when I save files >2GB, I get a popup that states there is a 2GB limit for plugins.
    I am using Photoshop CS6 on Mac OS X 10.9 with 16 GB RAM.
    The custom file type is what should be selected when the Save As dialog is visible.
    I looked for a setting in the Photoshop preferences (File Handling) to save to PSB, but did not see one.
    I tried modifying values in the PiPL for formatmaxSize and PluginMaxSize, but the only values the code will compile with are
    32767 and 2147483647.
    I saw a posting from previous years saying that custom plugins were limited to 2 GB.

    I would like to rephrase my question, now that I have worked a little more with my code.
    Is there a 2GB limit per channel and 4GB limit for total file size for a custom photoshop format plugin?

  • Maximum file size of 2,0 Gb exceeded

    Searched all over the place and found one thread with no solutions, so I'll try again...
    I have a project with two mono recordings, each an AIFF of about 530 MB. That's 2 hours and 24 minutes. When I try to bounce this in Logic 8 (to mp3), I get the message "maximum file size of 2,0 Gb exceeded - please choose a shorter bounce time". ??? If anyone knows how to get around this, please let me know.
    I imported the AIFF files into GarageBand, and GarageBand had no trouble bouncing it to mp3, so if GarageBand is able to bounce it, surely Logic should be!? The reason I need to bounce it from Logic is because of the editing features there, so it doesn't really help that GarageBand is able to bounce it... I also tried importing the mp3 into Logic and then bouncing, but the same problem occurred. I've also tried turning the quality down to lowest, but it doesn't help. Please help!

    Just some thoughts on this...
    If the OP is saving to a Mac-formatted drive, it's not a problem with creating a file on the drive. Might try saving to a different location.
    It's possible this is an internal error from Logic. It probably has less to do with the actual file size than with the overall length of time of the bounce.
    In other words... if Logic sees a certain length of time for a bounce, no matter the file type, it thinks the bounced file will exceed the 2GB file limit, which is leftover programming from earlier versions of Mac & PC Logic, when both operating systems imposed a 2GB file size restriction.
    pancenter-

  • Increase Project File Size limit?

    When I'm recording a longer song with real instruments, occasionally I will be told I can't record because I'm close to the 2gb file size limit. How do you increase the file size limit, or just get rid of it altogether?
    Thanks

    I didn't know there was a size limit. I have Projects that are over 2 GB.

  • 2Gb Temp Size Limit / Reports Server

    Hi everybody,
    We are experiencing a problem with Reports 6i Server (NT) that fails when the size of the temp file generated to create the output (in our case: pdf) reaches 2Gb ...
    My question is: is it a Reports Server bug? And, if so, do you know if it is corrected in 10gAS Reports Server?
    Thank you everybody ...

    That is
    Bug1309259 - ELIMINATE THE 2GB FILE SIZE LIMITATION FOR TEMP FILE SIZE
    and is not fixed yet (in 10g).
    1) If this is critical to you, and you have a support contract, you can try escalating the bug with Oracle Support.
    2) When somebody else reported this for their particular report, it was solved by tuning and optimising that report. (So you can also try tuning your report and SQL queries.)

  • 2gb file limit

    Does anyone know if the Linux 2gb file size limit also limits
    the Oracle table size?
    I appreciate that the Oracle datafiles must be limited to 2gb,
    since they are bound by the limits of Linux, but can it spread
    tables over a number of datafiles? Either way I need to move
    some 6gb+ tables from a sizeable AS400 to the linux/oracle box
    and I want to know how.
    btw, does anyone know if the linux 2gb limit is due for
    resolution in a forthcoming kernel release?
    cheers
    adam

    Timothy Weaver (guest) wrote:
    : You can create multiple files for a tablespace. I've only done
    : it in Enterprise Manager's Storage Manager software.
    : The 2GB file size limit isn't a kernel issue, it's a limitation
    : of the ext2 filesystem. A journaling filesystem donated by SGI
    : is being ported to Linux and will probably be considered a
    : replacement of the ext2 system.
    Yes, tables can be > 2gb in size, spread over a number of
    datafiles.
    regards
    Simon
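    As an illustration only (the tablespace name, paths and sizes below
    are made up), spreading a tablespace over several sub-2Gb datafiles
    looks roughly like this:
    CREATE TABLESPACE big_data
      DATAFILE '/u02/oradata/PROD/big_data01.dbf' SIZE 1500M;
    -- add more files as the tables grow; a table's extents can then
    -- be allocated from any file belonging to its tablespace
    ALTER TABLESPACE big_data
      ADD DATAFILE '/u02/oradata/PROD/big_data02.dbf' SIZE 1500M;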

  • Maximum File Size - 2GB?

    Does java have a formal maximum file size for java.io.File?
    Somewhere along the way here where I work we started assuming it was 2GB, is that accurate?
    Our customer output files are starting to get in the neighborhood of 2GB, and management wants a plan.
    Do 64-bit JDKs support larger files?
    Does the max file size depend on JDK vendor?
    Is there a way to get 32-bit JDKs to open say 5-6GB files?

    In all cases, I believe, the file size limits are imposed by the underlying operating system, and the value varies (by system and os version).
    Refer to the documentation for your os.

  • I import mp4 1920X1080 - the file size is 320MB . I publish it as same mp4 and it takes 2GB . Why? Why is it that huge? HELP!

    I import an mp4 1920X1080 - the file size is 320MB. I publish it as the same mp4 and it takes 2GB. Why? Why is it that huge? HELP!

    hi,
    Thank you for your response.
    I found where to set the bit rate and it solved the problem (the default
    was very high...).
    Now it takes the same space and the quality is the same as the original.
    Thank you!
    Oren

  • How to enable file size 2GB for linux RHEL4.0

    Hi
    I am on Oracle 9.2.0.6 on Linux RHEL 4.0. How do I enable large file sizes for filesystems?
    When I query ulimit -a, ulimit -f is returning unlimited.
    But my database listener crashed when the listener log file reached 2gb in size.
    I couldn't find how to enable large file support for the filesystem.
    Thanks
    SV

    Are you sure that the filesystem is limiting your listener log file size?
    Please try appending some lines to the logfile and see if the filesystem prevents it. I expect you will find that the limit is only with the listener.
    In any case, run a weekly/monthly job that creates an empty log file.
    If the problem is with ext3 then you should check with someone more experienced because it does not sound to me like the proper (or default) behavior of ext3.

  • File size limit of 2gb on File Storage from Linux mount

    Hi!
    I setup a storage account and used the File Storage (preview). I have followed instructions at
    http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx
    I created a CentOS 7 virtual server and mounted the system as
    mount -t cifs //MYSTORAGE.file.core.windows.net/MYSHARE /mnt/MYMOUNTPOINT -o vers=2.1,username=MYUSER,password=MYPASS,dir_mode=0777,file_mode=0777
    The problem is that I have a file size limit of 2gb on that mount. Not on other disks. 
    What am I doing wrong?

    Hi,
    I would suggest you check your steps with this video:
    http://channel9.msdn.com/Blogs/Open/Shared-storage-on-Linux-via-Azure-Files-Preview-Part-1, hope this could give you some tips.
    Best Regards,
    Jambor

  • 2GB OR NOT 2GB - FILE LIMITS IN ORACLE

    Product: ORACLE SERVER
    Date written: 2002-04-11
    2GB OR NOT 2GB - FILE LIMITS IN ORACLE
    ======================================
    Introduction
    ~~~~~~~~~~~~
    This article describes "2Gb" issues. It gives information on why 2Gb
    is a magical number and outlines the issues you need to know about if
    you are considering using Oracle with files larger than 2Gb in size.
    It also
    looks at some other file related limits and issues.
    The article has a Unix bias as this is where most of the 2Gb issues
    arise but there is information relevant to other (non-unix)
    platforms.
    Articles giving port specific limits are listed in the last section.
    Topics covered include:
    Why is 2Gb a Special Number ?
    Why use 2Gb+ Datafiles ?
    Export and 2Gb
    SQL*Loader and 2Gb
    Oracle and other 2Gb issues
    Port Specific Information on "Large Files"
    Why is 2Gb a Special Number ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Many CPU's and system call interfaces (API's) in use today use a word
    size of 32 bits. This word size imposes limits on many operations.
    In many cases the standard API's for file operations use a 32-bit signed
    word to represent both file size and current position within a file (byte
    displacement). A 'signed' 32bit word uses the top most bit as a sign
    indicator leaving only 31 bits to represent the actual value (positive or
    negative). The largest positive number that can be represented in
    31 bits is 0x7FFFFFFF hexadecimal, which is +2147483647 decimal.
    This is ONE less than 2Gb.
    Files of 2Gb or more are generally known as 'large files'. As one might
    expect problems can start to surface once you try to use the number
    2147483648 or higher in a 32bit environment. To overcome this problem
    recent versions of operating systems have defined new system calls which
    typically use 64-bit addressing for file sizes and offsets. Recent Oracle
    releases make use of these new interfaces but there are a number of issues
    one should be aware of before deciding to use 'large files'.
    What does this mean when using Oracle ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The 32bit issue affects Oracle in a number of ways. In order to use large
    files you need to have:
    1. An operating system that supports 2Gb+ files or raw devices
    2. An operating system which has an API to support I/O on 2Gb+ files
    3. A version of Oracle which uses this API
    Today most platforms support large files and have 64bit APIs for such
    files.
    Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs
    but the situation is very dependent on platform, operating system version
    and the Oracle version. In some cases 'large file' support is present by
    default, while in other cases a special patch may be required.
    At the time of writing there are some tools within Oracle which have not
    been updated to use the new API's, most notably tools like EXPORT and
    SQL*LOADER, but again the exact situation is platform and version specific.
    Why use 2Gb+ Datafiles ?
    ~~~~~~~~~~~~~~~~~~~~~~~~
    In this section we will try to summarise the advantages and disadvantages
    of using "large" files / devices for Oracle datafiles:
    Advantages of files larger than 2Gb:
    On most platforms Oracle7 supports up to 1022 datafiles.
    With files < 2Gb this limits the database size to less than 2044Gb.
    This is not an issue with Oracle8 which supports many more files.
    In reality the maximum database size would be less than 2044Gb due
    to maintaining separate data in separate tablespaces. Some of these
    may be much less than 2Gb in size.
    Fewer files to manage for smaller databases.
    Fewer file handle resources required.
    Disadvantages of files larger than 2Gb:
    The unit of recovery is larger. A 2Gb file may take between 15 minutes
    and 1 hour to backup / restore depending on the backup media and
    disk speeds. An 8Gb file may take 4 times as long.
    Parallelism of backup / recovery operations may be impacted.
    There may be platform specific limitations - Eg: Asynchronous IO
    operations may be serialised above the 2Gb mark.
    Since handling of files above 2Gb may need patches, special configuration
    etc., there is an increased risk involved compared with smaller files.
    Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
    Important points if using files >= 2Gb
    Check with the OS Vendor to determine if large files are supported
    and how to configure for them.
    Check with the OS Vendor what the maximum file size actually is.
    Check with Oracle support if any patches or limitations apply
    on your platform , OS version and Oracle version.
    Remember to check again if you are considering upgrading either
    Oracle or the OS in case any patches are required in the release
    you are moving to.
    Make sure any operating system limits are set correctly to allow
    access to large files for all users.
    Make sure any backup scripts can also cope with large files.
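    As a practical aid to the checks above, a plain dictionary query
    shows which datafiles are at or close to the 2Gb mark (this assumes
    access to the DBA_DATA_FILES view; the columns are standard but
    worth confirming on your release):
    SELECT file_name, bytes/1024/1024 AS size_mb,
           autoextensible, maxbytes/1024/1024 AS max_mb
      FROM dba_data_files
     ORDER BY bytes DESC;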
    Note that there is still a limit to the maximum file size you
    can use for datafiles above 2Gb in size. The exact limit depends
    on the DB_BLOCK_SIZE of the database and the platform. On most
    platforms (Unix, NT, VMS) the limit on file size is around
    4194302*DB_BLOCK_SIZE. For example, with a DB_BLOCK_SIZE of 8192
    bytes this works out to roughly 32Gb per datafile.
    Important notes generally
    Be careful when allowing files to automatically resize. It is
    sensible to always limit the MAXSIZE for AUTOEXTEND files to less
    than 2Gb if not using 'large files', and to a sensible limit
    otherwise. Note that due to <Bug:568232> it is possible to specify
    a value of MAXSIZE larger than Oracle can cope with, which may
    result in internal errors after the resize occurs. (Errors
    typically include ORA-600 [3292])
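    A minimal sketch of such a cap (the file name here is hypothetical):
    -- keep automatic growth safely below the 2Gb mark when not
    -- using 'large files'
    ALTER DATABASE DATAFILE '/u03/oradata/PROD/tools01.dbf'
      AUTOEXTEND ON NEXT 50M MAXSIZE 1900M;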
    On many platforms Oracle datafiles have an additional header
    block at the start of the file so creating a file of 2Gb actually
    requires slightly more than 2Gb of disk space. On Unix platforms
    the additional header for datafiles is usually DB_BLOCK_SIZE bytes
    but may be larger when creating datafiles on raw devices.
    2Gb related Oracle Errors:
    These are a few of the errors which may occur when a 2Gb limit
    is present. They are not in any particular order.
    ORA-01119 Error in creating datafile xxxx
    ORA-27044 unable to write header block of file
    SVR4 Error: 22: Invalid argument
    ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
    ORA-27070 skgfdisp: async read/write failed
    ORA-02237 invalid file size
    KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
    file limit exceed.
    Unix error 27, EFBIG
    Export and 2Gb
    ~~~~~~~~~~~~~~
    2Gb Export File Size
    ~~~~~~~~~~~~~~~~~~~~
    At the time of writing most versions of export use the default file
    open API when creating an export file. This means that on many platforms
    it is impossible to export a file of 2Gb or larger to a file system file.
    There are several options available to overcome 2Gb file limits with
    export such as:
    - It is generally possible to write an export > 2Gb to a raw device.
    Obviously the raw device has to be large enough to fit the entire
    export into it.
    - By exporting to a named pipe (on Unix) one can compress, zip or
    split up the output.
    See: "Quick Reference to Exporting >2Gb on Unix" <Note:30528.1>
    - One can export to tape (on most platforms)
    See "Exporting to tape on Unix systems" <Note:30428.1>
    (This article also describes in detail how to export to
    a unix pipe, remote shell etc..)
    Other 2Gb Export Issues
    ~~~~~~~~~~~~~~~~~~~~~~~
    Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
    with EXPORT on many releases of Oracle such that if you export a large table
    and specify COMPRESS=Y then it is possible for the NEXT storage clause
    of the statement in the EXPORT file to contain a size above 2Gb. This
    will cause import to fail even if IGNORE=Y is specified at import time.
    This issue is reported in <Bug:708790> and is alerted in <Note:62436.1>
    An export will typically report errors like this when it hits a 2Gb
    limit:
    . . exporting table BIGEXPORT
    EXP-00015: error on row 10660 of table BIGEXPORT,
    column MYCOL, datatype 96
    EXP-00002: error in writing to export file
    EXP-00002: error in writing to export file
    EXP-00000: Export terminated unsuccessfully
    There is a secondary issue reported in <Bug:185855> which indicates that
    a full database export generates a CREATE TABLESPACE command with the
    file size specified in BYTES. If the filesize is above 2Gb this may
    cause an ORA-2237 error when attempting to create the file on IMPORT.
    This issue can be worked around by creating the tablespace prior to
    importing, specifying the file size in 'M' instead of in bytes.
    <Bug:490837> indicates a similar problem.
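    A hedged sketch of that workaround (tablespace name, path and size
    are made up for illustration):
    -- pre-create the tablespace before running IMPORT, quoting the
    -- datafile size in 'M' rather than in bytes
    CREATE TABLESPACE app_data
      DATAFILE '/u04/oradata/PROD/app_data01.dbf' SIZE 4000M;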
    Export to Tape
    ~~~~~~~~~~~~~~
    The VOLSIZE parameter for export is limited to values less than 4Gb.
    On some platforms it may be only 2Gb.
    This is corrected in Oracle 8i. <Bug:490190> describes this problem.
    SQL*Loader and 2Gb
    ~~~~~~~~~~~~~~~~~~
    Typically SQL*Loader will error when it attempts to open an input
    file larger than 2Gb with an error of the form:
    SQL*Loader-500: Unable to open file (bigfile.dat)
    SVR4 Error: 79: Value too large for defined data type
    The examples in <Note:30528.1> can be modified for use with SQL*Loader
    for large input data files.
    Oracle 8.0.6 provides large file support for discard and log files in
    SQL*Loader but the maximum input data file size still varies between
    platforms. See <Bug:948460> for details of the input file limit.
    <Bug:749600> covers the maximum discard file size.
    Oracle and other 2Gb issues
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    This sections lists miscellaneous 2Gb issues:
    - From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
    An extract from the 8.0.5 README file introduces these - see <Note:62252.1>
    - DBV (the database verification file program) may not be able to scan
    datafiles larger than 2Gb reporting "DBV-100".
    This is reported in <Bug:710888>
    - "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
    specified in 'M' or 'K' to create files larger than 2Gb otherwise the
    error "ORA-02237: invalid file size" is reported. This is documented
    in <Bug:185855>.
    - Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
    Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>
    reports
    ORA-2187: invalid quota specification.
    This is documented in <Bug:425831>.
    The workaround is to grant users UNLIMITED TABLESPACE privilege if they
    need a quota above 2Gb.
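    For example (the username here is made up), the workaround is simply:
    GRANT UNLIMITED TABLESPACE TO scott;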
    - Tools which spool output may error if the spool file reaches 2Gb in size.
    Eg: sqlplus spool output.
    - Certain 'core' functions in Oracle tools do not support large files -
    See <Bug:749600> which is fixed in Oracle 8.0.6 and 8.1.6.
    Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
    Even with this fix there may still be large file restrictions, as not
    all code uses these 'core' functions - some areas of code
    may still have problems.
    Eg: CORE is not used for SQL*Loader input file I/O
    - The UTL_FILE package uses the 'core' functions mentioned above and so is
    limited by 2Gb restrictions on Oracle releases which do not contain this fix.
    <Package:UTL_FILE> is a PL/SQL package which allows file IO from within
    PL/SQL.
    Port Specific Information on "Large Files"
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Below are references to information on large file support for specific
    platforms. Although every effort is made to keep the information in
    these articles up-to-date it is still advisable to carefully test any
    operation which reads or writes from / to large files:
    Platform See
    ~~~~~~~~ ~~~
    AIX (RS6000 / SP) <Note:60888.1>
    HP <Note:62407.1>
    Digital Unix <Note:62426.1>
    Sequent PTX <Note:62415.1>
    Sun Solaris <Note:62409.1>
    Windows NT Maximum 4Gb files on FAT
    Theoretical 16Tb on NTFS
    ** See <Note:67421.1> before using large files
    on NT with Oracle8
    *2 There is a problem with DBVERIFY on 8.1.6
    See <Bug:1372172>

    I'm not aware of a packaged PL/SQL solution for this in Oracle 8.1.7.3 - however it is very easy to create such a program...
    Step 1
    Write a simple Java program like the one listed:
    import java.io.File;
    public class fileCheckUtl {
      // returns 1 if the named file exists, 0 otherwise
      public static int fileExists(String FileName) {
        File x = new File(FileName);
        if (x.exists())
          return 1;
        else return 0;
      }
      public static void main (String args[]) {
        fileCheckUtl f = new fileCheckUtl();
        int i;
        i = f.fileExists(args[0]);
        System.out.println(i);
      }
    }
    Step 2 - Load this into the Oracle database using loadjava:
    loadjava -verbose -resolve -user user/pw@db fileCheckUtl.java
    The output should be something like this:
    creating : source fileCheckUtl
    loading : source fileCheckUtl
    creating : fileCheckUtl
    resolving: source fileCheckUtl
    Step 3 - Create a PL/SQL wrapper for the Java Class:
    CREATE OR REPLACE FUNCTION FILE_CHECK_UTL (file_name IN VARCHAR2) RETURN NUMBER AS
    LANGUAGE JAVA
    NAME 'fileCheckUtl.fileExists(java.lang.String) return int';
    Step 4 Test it:
    SQL> select file_check_utl('f:\myjava\fileCheckUtl.java') from dual
    2 /
    FILE_CHECK_UTL('F:\MYJAVA\FILECHECKUTL.JAVA')
    1

  • LabView RT FTP file size limit

    I have created a few very large AVI video clips on my PXIe-8135RT (LabView RT 2014).  When i try to download these from the controller's drive to a host laptop (Windows 7) with FileZilla, the transfer stops at 1GB (The file size is actually 10GB).
    What's going on?  The file appears to be created correctly and I can even use AVI2 Open and AVI2 Get Info to see that the video file contains the frames I stored.  Reading up about LVRT, there is nothing but older information which claim the file size limit is 4GB, yet the file was created at 10GB using the AVI2 VIs.
    Thanks,
    Robert

    As usual, the answer was staring me right in the face.  FileZilla was reporting the size in an odd manner and the file was actually 1GB.  The vi I used was failing.  After fixing it, it failed at 2GB with error -1074395965 (AVI max file size reached).

  • What is a Resonable Shared Review File Size?

    I have a user group that is having some problems with their shared reviews.  They are sending their documents out on a WebDAV server and the files are about 50 to 100 MB.  Due to the file size, some of the reviews are experiencing download times of up to 30 minutes, and when they try to open the review in Reader or Acrobat they get an error message that the file is too large.  Is there an internal issue here, or should I start exploring how to break these files up into multiple reviews?  If the latter, what is a reasonable file size?

    2GB is the maximum size limit.
    Anything over 20mb requires download via Wi-Fi or docked connection.
    See: http://support.apple.com/kb/PH2808

  • [Help!] Log file size Messaging Server 2005Q4

    Hi,
    I have a large environment where I need to keep detailed IMAP log.
    I tried with logfile.imap.maxlogfilesize = 4294967296 but I notice that the imap log file size is limited to 2MB:
    ls -l
    -rw-------   1 mail  mail      763225 Feb 20 09:16 imap
    -rw-------   1 mail mail     2097263 Feb 20 08:56 imap.7307.1203494098
    -rw-------   1 mail  mail     2097374 Feb 20 08:59 imap.7308.1203494213
    -rw-------   1 mail mail     2097212 Feb 20 09:01 imap.7309.1203494341
    -rw-------   1 mail  mail     2097248 Feb 20 09:03 imap.7310.1203494468
    -rw-------   1 mail  mail     2097235 Feb 20 09:05 imap.7311.1203494608
    -rw-------   1 mail  mail     2097273 Feb 20 09:08 imap.7312.1203494727
    Is this an implicit limit that is not configurable?
    My system is:
    logfile.imap.buffersize = 0
    logfile.imap.expirytime = 604800
    logfile.imap.flushinterval = 60
    logfile.imap.loglevel = Debug
    logfile.imap.logtype = NscpLog
    logfile.imap.maxlogfiles = 10
    logfile.imap.maxlogfilesize = 4294967296
    logfile.imap.maxlogsize = 42949672960
    logfile.imap.minfreediskspace = 524288000
    logfile.imap.objectclass = top
    logfile.imap.rollovertime = 86400
    logfiles.imap.alias = |logfile|imap
    Sun Java(tm) System Messaging Server 6.2-6.01 (built Apr  3 2006)
    libimta.so 6.2-6.01 (built 11:20:35, Apr  3 2006)
    SunOS srvmsg01 5.9 Generic_117171-07 sun4u sparc SUNW,Sun-Fire-V440
    I thank you very much for every hints you could let me know.
    Best Regards
    marco

    ziopino wrote:
    I have a large environment where I need to keep detailed IMAP log.
    I tried with logfile.imap.maxlogfilesize = 4294967296
    The maximum value of maxlogfilesize is 2GB (a common filesystem limit).
    There is a whole write-up on logging and how to configure your system to keep a large amount of logs here:
    http://blogs.sun.com/factotum/entry/messaging_server_more_on_managing
    Regards,
    Shane.

  • Suggested data file size for Oracle 11

    Hi all,
    Creating a new system (SolMan 7.1) on AIX 6.1 running Oracle 11. 
    I have 4 logical volumes for data sized at 100gb each.  During the installation I'm being asked to input the size for the data files. The default is "2000mb/2gb"; is this acceptable for a system sized like mine, or should I double them to 4gb each? I know the max is 32gb per data file, but that seems a bit large to me.  Just wanted to know if there is a standard best practice for this, or a formula to use based on system sizing.
    I was not able to find any quick suggestions in the Best Practices guide on this unfortunately...
    Any help would be greatly appreciated.
    Thanks!

    Hi Ben,
    Check the note 129439 - Maximum file sizes with Oracle
    Best regards,
    Orkun Gedik

Maybe you are looking for

  • Creation of a new PSCD Event in FQEVENTS

    Hello, does anybody know how to create a new custom-own event as in transaction FQEVENTS? I did not find anything in the online help. Many thanks in advance for any helpful answer. Frank Edited by: Zieger Frank on Sep 27, 2010 4:25 PM

  • Function results in SELECT and WHERE

    Is there a way to avoid calling a function twice in order to have its results appear in the SELECT clause and used in the WHERE clause? This SELECT statement will not work (as promised by documents): select name, getLocation(name) as location where l

  • Porting of SPARC application to x86

    Hi All - I have SPARC working application code, which needs to be ported to solaris 10 x86. I added -D__386 to the compile flags. Compiled fine, but have problems in running. If I compile with -D_BIG_ENDIAN, seems like the application processes go fu

  • Current itunes Track for non-ichat users?

    hello, well when i'm on ichat i put my status as current itunes track. However, my friends who dont have ichat or mac can not see anything. Is there away that they can see it in there aim buddy list. Thanks!

  • Can't install After Effects with Production Studio

    I have an older version of Production Studio that includes AE 7.0  I has to wipte my drive and reinstall Windows XP and all my software. However, when i try to install the full Production Studio, it says that AE 7 is already installed. It is not!! Th