Imsbackup iMS5.1 on Solaris 8 2GB file limit

I'm in the process of migrating 450 users and 70GB of E-mail from a single Solaris system to a Cluster (running iMS5.2). It appears imsbackup cannot write a file greater than 2GB. Is there a hotfix available for this? I'm moving from 5 store partitions to 4 store partitions and using imsbackup would make this process a no-brainer. Is this fixed in iMS5.2? Could I use the binary from 5.2?
Thanks,
RS

I don't know if it is fixed in 5.2, but I doubt you can simply take the 5.2 binary and use it with 5.1 (I've tried that with 5.2's sister command "imsimport", and it dumped several "relocation error in libxxx.so" messages).
Maybe you could do the following and bypass the 2GB limit by cat'ing imsbackup's output to a file:
imsbackup -f - <other options> | cat > bigfile.bck
then restore it (if imsrestore has the same 2GB limitation):
cat bigfile.bck | imsrestore -f - <other options>
I remember having done something like this some time ago.
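If the redirected file still stops at 2GB (the cat trick only helps when cat and the target filesystem are largefile-aware), a variant is to pipe the backup stream through split so no single output file ever crosses the limit. A minimal sketch under the same '-f -' assumption as above; the 1024m chunk size and file names are only illustrative, and split's byte-size syntax can vary slightly between platforms:
# Write the backup stream as a series of sub-2GB chunks.
imsbackup -f - <other options> | split -b 1024m - bigfile.bck.
# Reassemble the chunks, in order, into the restore stream.
cat bigfile.bck.* | imsrestore -f - <other options>
split names the chunks bigfile.bck.aa, bigfile.bck.ab, and so on, which is why the shell glob reassembles them in the right order.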

Similar Messages

  • 2gb file limit

Does anyone know if the linux 2gb file size limit also limits
the oracle table size?
I appreciate that the oracle datafiles must be limited to 2gb,
since they are bound by the limits of linux, but can it spread
tables over a number of datafiles? Either way I need to move
some 6gb+ tables from a sizeable AS400 to the linux/oracle box
and I want to know how.
btw. does anyone know if the linux 2gb limit is due for
resolution in a forthcoming kernel release?
cheers
adam

Timothy Weaver (guest) wrote:
: You can create multiple files for a tablespace. I've only done
: it in Enterprise Manager's Storage Manager software.
: The 2GB file size limit isn't a kernel issue, it's a limitation
: of the ext2 filesystem. A journaling filesystem donated by SGI
: is being ported to Linux and will probably be considered a
: replacement of the ext2 system.
: adam hawkins (guest) wrote:
: : Does anyone know if the linux 2gb file size limit also limits
: : the oracle table size?
: : I appreciate that the oracle datafiles must be limited to 2gb,
: : since they are bound by the limits of linux, but can it spread
: : tables over a number of datafiles? Either way I need to move
: : some 6gb+ tables from a sizeable AS400 to the linux/oracle box
: : and I want to know how.
: : btw. does anyone know if the linux 2gb limit is due for
: : resolution in a forthcoming kernel release?
: : cheers
: : adam
Yes, tables can be > 2gb in size, spread over a number of
datafiles.
regards
Simon
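To illustrate Simon's point, spreading a tablespace (and hence its tables) over several sub-2GB datafiles is one DDL statement per file. A hedged sketch; the connect string, tablespace name, path and size are all placeholders:
sqlplus -s system/manager <<'EOF'
ALTER TABLESPACE users ADD DATAFILE '/u02/oradata/users02.dbf' SIZE 1500M;
EOF
Oracle then allocates extents for the tablespace's tables across all of its datafiles as it sees fit.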

  • 2GB AVI file limit

    There seems to be a 2GB limit for AVI files in QuickTime (I have 7.0.3 Pro).
    I have a few larger files with recordings made with EvolutionTV. They all play in their entirety in VLC, for example, but any QuickTime application only sees the first 2GB.
    Is there any workaround? I have tried both DivX 5.2.1 and the DivX Fusion beta codecs, with the same result.
    Luis Sequeira


  • 2GB OR NOT 2GB - FILE LIMITS IN ORACLE

Product: ORACLE SERVER
Date written: 2002-04-11
    2GB OR NOT 2GB - FILE LIMITS IN ORACLE
    ======================================
    Introduction
    ~~~~~~~~~~~~
    This article describes "2Gb" issues. It gives information on why 2Gb
    is a magical number and outlines the issues you need to know about if
    you are considering using Oracle with files larger than 2Gb in size.
    It also
    looks at some other file related limits and issues.
    The article has a Unix bias as this is where most of the 2Gb issues
    arise but there is information relevant to other (non-unix)
    platforms.
    Articles giving port specific limits are listed in the last section.
    Topics covered include:
    Why is 2Gb a Special Number ?
    Why use 2Gb+ Datafiles ?
    Export and 2Gb
    SQL*Loader and 2Gb
    Oracle and other 2Gb issues
    Port Specific Information on "Large Files"
    Why is 2Gb a Special Number ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Many CPUs and system call interfaces (APIs) in use today use a word
size of 32 bits. This word size imposes limits on many operations.
In many cases the standard APIs for file operations use a 32-bit signed
word to represent both file size and current position within a file (byte
displacement). A 'signed' 32-bit word uses the topmost bit as a sign
indicator, leaving only 31 bits to represent the actual value (positive or
negative). The largest positive number that can be represented in 31 bits
is 0x7FFFFFFF hexadecimal, which is +2147483647 decimal.
This is ONE less than 2Gb.
    Files of 2Gb or more are generally known as 'large files'. As one might
    expect problems can start to surface once you try to use the number
    2147483648 or higher in a 32bit environment. To overcome this problem
    recent versions of operating systems have defined new system calls which
    typically use 64-bit addressing for file sizes and offsets. Recent Oracle
    releases make use of these new interfaces but there are a number of issues
    one should be aware of before deciding to use 'large files'.
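Before relying on 'large files' it is worth probing what the OS will actually allow. A small sketch using standard Unix tools; the datafile directory is a placeholder:
# Offset width on the filesystem holding the files: 64 means 2Gb+
# files are possible there, 32 means they are not.
getconf FILESIZEBITS /u01/oradata
# Per-process file size limit for the current shell.
ulimit -f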
    What does this mean when using Oracle ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The 32bit issue affects Oracle in a number of ways. In order to use large
    files you need to have:
    1. An operating system that supports 2Gb+ files or raw devices
    2. An operating system which has an API to support I/O on 2Gb+ files
    3. A version of Oracle which uses this API
    Today most platforms support large files and have 64bit APIs for such
    files.
    Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs
    but the situation is very dependent on platform, operating system version
    and the Oracle version. In some cases 'large file' support is present by
    default, while in other cases a special patch may be required.
At the time of writing there are some tools within Oracle which have not
been updated to use the new APIs, most notably tools like EXPORT and
SQL*LOADER, but again the exact situation is platform and version specific.
    Why use 2Gb+ Datafiles ?
    ~~~~~~~~~~~~~~~~~~~~~~~~
    In this section we will try to summarise the advantages and disadvantages
    of using "large" files / devices for Oracle datafiles:
Advantages of files larger than 2Gb:
On most platforms Oracle7 supports up to 1022 datafiles.
With files < 2Gb this limits the database size to less than 2044Gb.
This is not an issue with Oracle8, which supports many more files.
In reality the maximum database size would be less than 2044Gb due
to maintaining separate data in separate tablespaces. Some of these
may be much less than 2Gb in size.
Fewer files to manage for smaller databases.
Fewer file handle resources required.
    Disadvantages of files larger than 2Gb:
    The unit of recovery is larger. A 2Gb file may take between 15 minutes
    and 1 hour to backup / restore depending on the backup media and
    disk speeds. An 8Gb file may take 4 times as long.
    Parallelism of backup / recovery operations may be impacted.
    There may be platform specific limitations - Eg: Asynchronous IO
    operations may be serialised above the 2Gb mark.
As handling of files above 2Gb may need patches, special configuration
etc., there is an increased risk involved compared to smaller files.
Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
    Important points if using files >= 2Gb
    Check with the OS Vendor to determine if large files are supported
    and how to configure for them.
    Check with the OS Vendor what the maximum file size actually is.
    Check with Oracle support if any patches or limitations apply
    on your platform , OS version and Oracle version.
    Remember to check again if you are considering upgrading either
    Oracle or the OS in case any patches are required in the release
    you are moving to.
    Make sure any operating system limits are set correctly to allow
    access to large files for all users.
    Make sure any backup scripts can also cope with large files.
    Note that there is still a limit to the maximum file size you
    can use for datafiles above 2Gb in size. The exact limit depends
    on the DB_BLOCK_SIZE of the database and the platform. On most
    platforms (Unix, NT, VMS) the limit on file size is around
    4194302*DB_BLOCK_SIZE.
    Important notes generally
    Be careful when allowing files to automatically resize. It is
    sensible to always limit the MAXSIZE for AUTOEXTEND files to less
    than 2Gb if not using 'large files', and to a sensible limit
otherwise. Note that due to <Bug:568232> it is possible to specify
a value for MAXSIZE larger than Oracle can cope with, which may
result in internal errors after the resize occurs. (Errors
typically include ORA-600 [3292].) A sketch of a safe setting
follows after these notes.
    On many platforms Oracle datafiles have an additional header
    block at the start of the file so creating a file of 2Gb actually
    requires slightly more than 2Gb of disk space. On Unix platforms
    the additional header for datafiles is usually DB_BLOCK_SIZE bytes
    but may be larger when creating datafiles on raw devices.
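As a concrete example of the MAXSIZE advice above, a hedged sketch; the connect string, file name and sizes are placeholders:
sqlplus -s system/manager <<'EOF'
ALTER DATABASE DATAFILE '/u01/oradata/users01.dbf'
  AUTOEXTEND ON NEXT 50M MAXSIZE 1900M;
EOF
Capping MAXSIZE at 1900M rather than 2048M keeps the file safely under the 2Gb mark even with the extra header block described above.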
    2Gb related Oracle Errors:
    These are a few of the errors which may occur when a 2Gb limit
    is present. They are not in any particular order.
    ORA-01119 Error in creating datafile xxxx
    ORA-27044 unable to write header block of file
    SVR4 Error: 22: Invalid argument
    ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
    ORA-27070 skgfdisp: async read/write failed
    ORA-02237 invalid file size
    KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
    file limit exceed.
    Unix error 27, EFBIG
    Export and 2Gb
    ~~~~~~~~~~~~~~
    2Gb Export File Size
    ~~~~~~~~~~~~~~~~~~~~
    At the time of writing most versions of export use the default file
    open API when creating an export file. This means that on many platforms
    it is impossible to export a file of 2Gb or larger to a file system file.
    There are several options available to overcome 2Gb file limits with
    export such as:
    - It is generally possible to write an export > 2Gb to a raw device.
    Obviously the raw device has to be large enough to fit the entire
    export into it.
- By exporting to a named pipe (on Unix) one can compress, zip or
split up the output (see the sketch after this list).
See: "Quick Reference to Exporting >2Gb on Unix" <Note:30528.1>
    - One can export to tape (on most platforms)
    See "Exporting to tape on Unix systems" <Note:30428.1>
    (This article also describes in detail how to export to
    a unix pipe, remote shell etc..)
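A minimal sketch of the named-pipe option (see <Note:30528.1> for the full treatment); the user, file names and choice of compress are illustrative:
# Create the pipe and start a background reader that compresses
# the export stream as it arrives.
mknod /tmp/exp_pipe p
compress < /tmp/exp_pipe > expdat.dmp.Z &
# exp writes into the pipe, so no >2Gb file is ever created.
exp scott/tiger file=/tmp/exp_pipe full=y
To import, reverse the flow: uncompress into the pipe in the background and point imp at it with file=/tmp/exp_pipe.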
    Other 2Gb Export Issues
    ~~~~~~~~~~~~~~~~~~~~~~~
    Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
    with EXPORT on many releases of Oracle such that if you export a large table
    and specify COMPRESS=Y then it is possible for the NEXT storage clause
    of the statement in the EXPORT file to contain a size above 2Gb. This
    will cause import to fail even if IGNORE=Y is specified at import time.
    This issue is reported in <Bug:708790> and is alerted in <Note:62436.1>
    An export will typically report errors like this when it hits a 2Gb
    limit:
    . . exporting table BIGEXPORT
    EXP-00015: error on row 10660 of table BIGEXPORT,
    column MYCOL, datatype 96
    EXP-00002: error in writing to export file
    EXP-00002: error in writing to export file
    EXP-00000: Export terminated unsuccessfully
    There is a secondary issue reported in <Bug:185855> which indicates that
    a full database export generates a CREATE TABLESPACE command with the
    file size specified in BYTES. If the filesize is above 2Gb this may
    cause an ORA-2237 error when attempting to create the file on IMPORT.
This issue can be worked around by creating the tablespace prior to
importing, specifying the file size in 'M' instead of in bytes.
    <Bug:490837> indicates a similar problem.
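A hedged sketch of that workaround; the connect strings, tablespace name, path and size are placeholders:
# Precreate the tablespace with its size given in 'M' ...
sqlplus -s system/manager <<'EOF'
CREATE TABLESPACE big_ts
  DATAFILE '/u03/oradata/big_ts01.dbf' SIZE 2500M;
EOF
# ... then import with IGNORE=Y so the dump file's failing
# CREATE TABLESPACE statement is skipped.
imp system/manager file=full.dmp full=y ignore=y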
    Export to Tape
    ~~~~~~~~~~~~~~
The VOLSIZE parameter for export is limited to values less than 4Gb.
On some platforms it may be only 2Gb.
    This is corrected in Oracle 8i. <Bug:490190> describes this problem.
    SQL*Loader and 2Gb
    ~~~~~~~~~~~~~~~~~~
    Typically SQL*Loader will error when it attempts to open an input
    file larger than 2Gb with an error of the form:
    SQL*Loader-500: Unable to open file (bigfile.dat)
    SVR4 Error: 79: Value too large for defined data type
The examples in <Note:30528.1> can be modified for use with SQL*Loader
for large input data files.
    Oracle 8.0.6 provides large file support for discard and log files in
    SQL*Loader but the maximum input data file size still varies between
    platforms. See <Bug:948460> for details of the input file limit.
    <Bug:749600> covers the maximum discard file size.
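The named-pipe trick from the export section also works for SQL*Loader input, since opening a pipe involves no file-size check. A hedged sketch; the user, control file and data file names are placeholders:
# Feed the oversized data file through a pipe instead of opening
# it directly.
mkfifo /tmp/load_pipe
cat bigfile.dat > /tmp/load_pipe &
sqlldr userid=scott/tiger control=load.ctl data=/tmp/load_pipe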
    Oracle and other 2Gb issues
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section lists miscellaneous 2Gb issues:
    - From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
    An extract from the 8.0.5 README file introduces these - see <Note:62252.1>
- DBV (the database file verification program) may not be able to scan
    datafiles larger than 2Gb reporting "DBV-100".
    This is reported in <Bug:710888>
    - "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
    specified in 'M' or 'K' to create files larger than 2Gb otherwise the
    error "ORA-02237: invalid file size" is reported. This is documented
    in <Bug:185855>.
    - Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
    Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>
    reports
    ORA-2187: invalid quota specification.
    This is documented in <Bug:425831>.
    The workaround is to grant users UNLIMITED TABLESPACE privilege if they
    need a quota above 2Gb.
    - Tools which spool output may error if the spool file reaches 2Gb in size.
    Eg: sqlplus spool output.
    - Certain 'core' functions in Oracle tools do not support large files -
    See <Bug:749600> which is fixed in Oracle 8.0.6 and 8.1.6.
    Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
    Even with this fix there may still be large file restrictions as not
    all code uses these 'core' functions.
    Note though that <Bug:749600> covers CORE functions - some areas of code
    may still have problems.
    Eg: CORE is not used for SQL*Loader input file I/O
- The UTL_FILE package uses the 'core' functions mentioned above and so is
limited by 2Gb restrictions in Oracle releases which do not contain this fix.
    <Package:UTL_FILE> is a PL/SQL package which allows file IO from within
    PL/SQL.
    Port Specific Information on "Large Files"
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Below are references to information on large file support for specific
    platforms. Although every effort is made to keep the information in
    these articles up-to-date it is still advisable to carefully test any
    operation which reads or writes from / to large files:
Platform           See
~~~~~~~~           ~~~
AIX (RS6000 / SP)  <Note:60888.1>
HP                 <Note:62407.1>
Digital Unix       <Note:62426.1>
Sequent PTX        <Note:62415.1>
Sun Solaris        <Note:62409.1>
Windows NT         Maximum 4Gb files on FAT,
                   theoretical 16Tb on NTFS
                   ** See <Note:67421.1> before using large files
                      on NT with Oracle8
                   ** There is a problem with DBVERIFY on 8.1.6 -
                      see <Bug:1372172>

    I'm not aware of a packaged PL/SQL solution for this in Oracle 8.1.7.3 - however it is very easy to create such a program...
    Step 1
    Write a simple Java program like the one listed:
import java.io.File;
public class fileCheckUtl {
    // Return 1 if the named file exists, 0 otherwise.
    public static int fileExists(String fileName) {
        File x = new File(fileName);
        if (x.exists())
            return 1;
        else
            return 0;
    }
    // Command-line test: print the result for the file named in args[0].
    public static void main(String[] args) {
        fileCheckUtl f = new fileCheckUtl();
        int i = f.fileExists(args[0]);
        System.out.println(i);
    }
}
Step 2 - Load this into the Oracle database using loadjava:
    loadjava -verbose -resolve -user user/pw@db fileCheckUtl.java
    The output should be something like this:
    creating : source fileCheckUtl
    loading : source fileCheckUtl
    creating : fileCheckUtl
    resolving: source fileCheckUtl
    Step 3 - Create a PL/SQL wrapper for the Java Class:
    CREATE OR REPLACE FUNCTION FILE_CHECK_UTL (file_name IN VARCHAR2) RETURN NUMBER AS
    LANGUAGE JAVA
    NAME 'fileCheckUtl.fileExists(java.lang.String) return int';
Step 4 - Test it:
SQL> select file_check_utl('f:\myjava\fileCheckUtl.java') from dual
  2  /
FILE_CHECK_UTL('F:\MYJAVA\FILECHECKUTL.JAVA')
---------------------------------------------
                                            1

  • 2gb file size

While reading the release notes I noticed the comment "Linux
does not support files > 2gb". Does this mean the maximum file
size for a database is 2gb? Does Oracle have a way to get
over that? Most databases won't be any bigger than this, but
what's the solution?
Jim

    Kai (guest) wrote:
: In my experience, the file limit for Oracle is not
: 2GB, it is 2000M instead. Take care of that limit, especially
: when using autoextend on files!
    : Kai
    : Pieter Holthuijsen (guest) wrote:
    I've had the following experience with Oracle 8.0.5 and Linux.
    At some point I created a data file with the max size of 2000M
    (Oracle 8.0.5). Later my database grew larger than this limit and
    I then added yet another data file and all seemed to work
    perfectly. Much later when I had to do a maintenance shutdown of
    the Oracle database, I ran into big problems on starting the
    database again.
It failed to start because of a check that a data file must be a
multiple of 2Kb. This occurred because the maximum file size in
Linux is not 2Gb but (2Gb-1Kb), in other words no longer a
multiple of 2Kb. So using 2000M datafiles in Oracle cannot be
advised; instead one should use smaller files, e.g. 1500M or
similar.
I solved the problem with a little hack. Instead of deleting the
inconsistent data file and restoring it from backup, I observed
that there was no data at the end of the file; it only
contained zeroes for the last several Kb. I guess that Oracle
extended the file but failed to place data in it.
All I did was truncate the last 1Kb of the file, so that it was
again a multiple of 2Kb, and all was fine (Oracle started). I
then altered the tablespace to resize the file so that I was
sure it would not happen again.
To my ears it sounds like a bug in Oracle (or Linux); I guess
Oracle failed to test this condition.
    -- Martin
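A quick way to check for the condition Martin describes is to test whether a datafile's size is an exact multiple of the 2Kb block size. A minimal sketch; the path is a placeholder:
SZ=$(wc -c < /u01/oradata/users01.dbf)
echo "bytes=$SZ remainder=$((SZ % 2048))"
A non-zero remainder is exactly the situation that stopped the database from starting.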

  • How to delete a 6.2gb file from my ipad.

I downloaded 6.2gb of holiday video from my PC through iTunes to iMovie and synced it onto my iPad. It used up 6.2gb on my iPad (per Settings) but does not appear anywhere at all. How can I find and delete the 6.2gb from my iPad?

I assume you meant that you synced the finished file via iTunes to your iPad. If you did this, did you make sure you selected the 'make iPad version' option from the toolbar in iTunes?
If you didn't, then sync your iPad again but select no movies to be synced. This should remove the 6.2gb file. Just as a thought, I have a number of digital copy movies from DVDs on my iPad and none are more than 2.5gb, and that's for a 2 hr 30 min movie. How long is yours?
If it's HD then it may not play on an iPad, as the display is only 1024 pixels wide in landscape.

  • I downloaded a 1.2GB file in Aptosid linux with Firefox and cannot locate the 'Download' directory where the file went by default - please show location, ta

    Where is the default ' Downloads ' location?
    I have downloaded a 1.2GB file and it shows in the Downloads window, but I am unable to locate it in the filesystem.
    I forgot to make a Downloads directory in my home directory first, and have since made one.
    Thanks
    Denis

If you can find the downloads window (keyboard shortcut Ctrl+Shift+Y in Linux), right-clicking on the file entry should give an option to open the folder it is in.
If the option is greyed out, the file was probably only put in a temporary location and may have been deleted. You could try searching by name, date and size.
    You may be interested in
    * [[What to do if you can't download or save files]]
Note you could have looked at the prefs to try to figure out the location that would have been used, but not once you have reset them.
Does your distro not provide the newest releases of Firefox? You appear to be on Iceweasel 9; the standard release of Firefox is now 14.

  • Solaris 10:unable to mount a solaris root file system

    Hi All,
I am trying to install Solaris 10 x86 on a ProLiant DL385 server with a Smart Array 6i. I downloaded the driver from the HP web site; on booting up installation CD 1 and adding the device driver, it sees the device but now says it can't mount it. Any clues what I need to do?
    Screen Output:
    Unable to mount a Solaris root file system from the device
    DISK: Target 0, Bios primary drive - device 0x80
    on Smart Array 6i Controller on Board PCI bus 2, at Dev 4
Error message from mount:
/pci@0,0/pci1022,7450@7/pcie11,4091@4/cmdk@0,0:a: can't open - no vtoc
Any assistance would be appreciated.

    Hi,
I read Message 591 (Aug 2003) and the problem is quite the same. A brief description: I have an ASUS laptop with HDD1 (60GB) and a USB storage HDD (henceforth HDD2, 100GB). I installed Solaris 10 x86 on HDD2 (partition c2t0d0s0). At the end of installation I removed the DVD and, using BIOS features, switched the boot to HDD2. All OK: I got the Sun blue screen and chose the active Solaris option, but at the beginning of the boot I received the following error message:
    Screen Output:
    Unable to mount a Solaris root file system from the device
    DISK: Target 0: IC25N060 ATMR04-0 on Board ....
Error message from mount:
/pci@0,0/pci-ide2,5/ide@1/cmdk@0,0:a: can't open
Any assistance would be appreciated.
    Regards

  • EA4500 router and media server file limit

I purchased an EA4500 router yesterday and it arrived today. I set it up and copied my media library over to a new Seagate Expansion drive (2TB), which currently holds 320GB of files: a Music folder, a Pictures folder and a Video folder. Thing is, only some of the files are showing up. It looks very much like some stupid limitation on file numbers. I specifically purchased this router because of the media server, and now it is useless.
Is there a fix for this silliness? I can't seem to find a way to turn the file limit off. I truly hope I haven't purchased another rubbish router. The TP-Link router I replaced had no such limit and was easily 2/3 of the price.
    Yours, not amused

Yes, I checked that before I bought the Seagate Expansion 2TB drive. The router's compatibility list shows 1.5 and 3TB drives in that range, and I would say the 2TB is very likely to be supported also.
The files are bog-standard file types! Some files show up but not all of them.
FLAC (you missed that filetype off the supported audio types, btw) and MP3 show up, but not all of them.
MPEG2 files show and play, but not all are visible.
None of the images in my photo library show up, and they are all JPEG. No esoteric file formats.
It seems, though, that the media server just doesn't show anything after it reaches a file limit. It really isn't up to much if that is the case, and for the money this router should be better. Furthermore, folders with audio albums in them don't display the contents in the correct alphanumeric order. I was playing an album by Cassandra Wilson and immediately noticed that the songs were in the wrong order. Something is totally wrong there... the server has to be displaying them like that.
I have Marantz CR603 all-in-one hi-fis in 3 different rooms of the house (living room and two bedrooms). Sometimes 2 can connect simultaneously, but 3 won't, as the server falls over. Even with two units connected it will suddenly disconnect for no reason. My connection isn't slow, it is fast, and yet accessing the media server is very, very slow, especially with simultaneous access.
It seems to me that the media server element is a stripped-down token effort, but all the sales garb for the router doesn't mention anything about its silly limitations.
I think the best thing for me to do now is return this to Amazon for a full refund and find another brand of router. It was bad enough 'adjusting' to the cloud management - a firmware download initiated on install and installed itself. Some of the config pages don't work properly (that needs an urgent fix as well... IP address fields with only 3 fields that won't let you enter the last 3 octets of an address, so you can't apply the setting), and the dumbed-down way of presenting the options makes it a pain to set up if you are used to the usual way of setting up a router. If I set up a router manually it takes a fraction of the time that the hand-holding setup nonsense takes. I get the idea, and it looks pretty and all big-buttony, the way everything is going these days, but there should be a proper 'normal' advanced mode for people who don't need their hands held through setup and configuration.
Very disappointed. It's a fast router as well.

  • 4GB file Limit

I have an iPod classic with 30GB. My music takes up about 6GB. I want to use the rest of the drive as a backup for the important files on my PC; the backup is about 15GB in size. When I try to back up to a directory on my iPod, the backup always fails a few bytes short of 4GB. I have seen various references to a 4GB file limit but didn't see any confirmation that such a limit exists.
    I'd like to know
    1)Is there a limit to the size of individual files on an iPod?
    2)Is there a way to remove this seemingly arbitrary limit?
    Thanks,
    Ron MacRae

    Is there a limit to the size of individual files on an iPod?
Yes, 4GB. See: http://docs.info.apple.com/article.html?artnum=61131 (bottom of page)
    Is there a way to remove this seemingly arbitrary limit?
    It's a FAT32 file system limitation and cannot be removed.
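If a Unix-like shell is available, one workaround is to split the backup into chunks below the 4GB cap before copying them to the iPod, then reassemble on the PC to restore. A hedged sketch; the paths, mount point and 2000m chunk size are all illustrative:
# Split the backup image into sub-4GB pieces on the iPod.
split -b 2000m backup.img /mnt/ipod/backup/backup.part.
# To restore, concatenate the pieces back into one file.
cat /mnt/ipod/backup/backup.part.* > backup.img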

  • Open file limit in limits.conf not being enforced

So I am running an Arch installation without a graphical UI and I'm running elasticsearch on it as its own user (elasticsearch). Since elasticsearch needs to be able to handle more than the 4066 files that seems to be the default, I edited /etc/security/limits.conf:
    #* soft core 0
    #* hard rss 10000
    #@student hard nproc 20
    #@faculty soft nproc 20
    #@faculty hard nproc 50
    #ftp hard nproc 0
    #@student - maxlogins 4
    elasticsearch soft nofile 65000
    elasticsearch hard nofile 65000
    * - rtprio 0
    * - nice 0
    @audio - rtprio 65
    @audio - nice -10
    @audio - memlock 40000
I restart the system, but the limit is seemingly still 4066. What gives? I read on the wiki that in order to enforce the values you need a PAM-enabled login. I don't have a graphical login manager, and
    grep pam_limits.so /etc/pam.d/*
gives me this:
    /etc/pam.d/crond:session required pam_limits.so
    /etc/pam.d/polkit-1:session required pam_limits.so
    /etc/pam.d/su:session required pam_limits.so
    /etc/pam.d/system-auth:session required pam_limits.so
    /etc/pam.d/system-services:session required pam_limits.so
    Any ideas on what I have to do to raise the open file limit here?
    Thanks

Seems like adding the LimitNOFILE parameter to the systemd service file did the trick, but it still doesn't explain why limits.conf isn't being enforced.
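For reference, a minimal sketch of that systemd fix done as a drop-in rather than an edit of the packaged service file; the unit name elasticsearch.service is an assumption based on the setup described above:
# Create a drop-in that raises the per-service open file limit.
mkdir -p /etc/systemd/system/elasticsearch.service.d
cat > /etc/systemd/system/elasticsearch.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=65000
EOF
# Reload unit files and restart the service so the limit applies.
systemctl daemon-reload
systemctl restart elasticsearch
This also explains the limits.conf behaviour: pam_limits.so only runs for PAM login sessions, and a service started directly by systemd never passes through one, so /etc/security/limits.conf is simply never consulted for it.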
    Last edited by Zygote (2014-07-17 10:04:43)

  • Solaris Virtual File System

Has the Solaris virtual file system changed since Solaris 2.5.1? How much?

AFAIK, the VFS is not an official (and documented) interface
and may change from Solaris release to Solaris release
(perhaps even with a new kernel patch).
Otherwise, you could probably get the Solaris 8 Foundation Source
and use it as the definitive reference documentation ;-)

  • System Check error : Actual open files limit:7000, needed 8000 nodes:

    Hi,
The system check shows the error 'Actual open files limit:7000, needed 8000 nodes:' in our BIA system.
The SAP BIA Admin guide suggests the open files limit should not be less than 8000. I verified at OS
level that the ulimit for open files (-n) is displayed as '8000', which is good.
Please suggest how to fix this error. What if we increase the ulimit value for open files at OS level?
    Thanks in advance
    Regards,
    Srinivas.

    Hello Srinivas,
    please see SAP Note <a href="http://service.sap.com/sap/support/notes/1273695">1273695</a>.
    Regards,
    Marc
    SAP Customer Solution Adoption (CSA)
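Beyond the Note, it can help to check the limit actually in effect for the running server process rather than for a fresh login shell, since a daemon keeps whatever limits it inherited at start time. A hedged sketch for a reasonably recent Linux kernel; the process-matching pattern TREX is an assumption:
# Find the server process and read its effective limits.
PID=$(pgrep -f TREX | head -n 1)
grep 'open files' /proc/$PID/limits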

  • Database files limit

    I work on Oracle database Version 10 R 10.2.0.4.0
    My database has db_files = 200 specified as a part of Initialization parameter.
I have 200 database files already in my database. Today I tried to create 4 database files, and for some reason my database allowed me to create those files even though it exceeded the database files limit.
    Will I be able to create database objects in these database files?
Any reply is truly appreciated.
    Thanks in advance
    J

    J1604 wrote:
    I work on Oracle database Version 10 R 10.2.0.4.0
    My database has db_files = 200 specified as a part of Initialization parameter.
I have 200 database files already in my database. Today I tried to create 4 database files, and for some reason my database allowed me to create those files even though it exceeded the database files limit.
    Will I be able to create database objects in these database files?
Any reply is truly appreciated.
    Thanks in advance
    J
    From the fine Reference Manual:
    DB_FILES specifies the maximum number of database files that can be opened for this database. (emphasis mine)
    What does DBA_DATA_FILES say about the STATUS and ONLINE_STATUS of the files in question?
    What does V$DATAFILE have to say?
    Will you be able to create database objects in these database files?   What does it cost to try it and see for yourself?  With the proviso that you cannot directly specify what file an object goes in.  You can only specify a tablespace.  If that tablespace has only one data file the test is simple.  If the tablespace contains more than one data file, it gets more complex because oracle will use data files within a TS as it sees fit.
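A quick way to compare the current state against the parameter, a minimal sketch (the connect string is a placeholder):
sqlplus -s system/manager <<'EOF'
show parameter db_files
select count(*) from v$datafile;
select file_name, status, online_status from dba_data_files;
EOF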

  • Can we share 2GB file?

    Hi Folks,
Can we share a >2GB file using NFS?
    Regards,
    Hameed....

This is interesting, though; I wasn't aware that you could run 'share' on a file. But I did some testing and it seems to work.
It makes me wonder how to access it, though. I don't think you can mount a file; you would have to mount the directory it resides in and then access it.
It's of course possible that you would be able to access a shared file with automount and /net.
Anyway, the man page indicates that the design is intended to be used on directories...
    .7/M.
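For the common case, sharing the directory rather than the file sidesteps the question, and NFSv3's 64-bit offsets handle files over 2GB (NFSv2 is limited to 2GB). A hedged sketch with placeholder paths and host name:
# On the server: share the directory holding the large file.
share -F nfs -o ro /export/data
# On the client: mount it, forcing NFSv3.
mount -F nfs -o vers=3 server:/export/data /mnt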
