File limitation

Hi,
what is the maximum file size that is recommended to transfer via XI?
would a 50 MB file work? is it too big?
Thanks,
        Udi

We had to deal with 1.5GB, but I tested only a chunk of it, about 1MB, on a DEV environment.
I recommend you look around the forum and learn from other people's experience as well.
I think 50MB might turn into 0.5GB once converted to XML... XI will handle the file faster if you divide it (remember we are talking about a message broker, and it expects request-response behavior, not all the master data in one go).
You have to do a bit of research and test your system after you've increased the memory allocation for the adapter. If the file is drawn into XI fast enough and with no memory problems, you can split it by business logic inside the mappings (user-defined functions, 1:n mappings...); see the sketch at the end of this reply.
I hope the example file the customer gave you represents the "real life" file size, because if you develop the interface thinking the file is a maximum of 50-60MB and in PROD you get 100MB... things can be trouble.
Good luck, Udi. Points please...
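Splitting is normally done inside the mapping (a user-defined function or a 1:n message split), but you can also pre-split the flat file before the adapter picks it up. A minimal plain-Java sketch of that idea, assuming a simple line-based flat file; the class name and chunk size are illustrative, not XI APIs:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class FileSplitter {
    // Splits a large flat file into parts of linesPerChunk lines each,
    // so each part stays small enough for the adapter to handle comfortably.
    public static void split(String inPath, int linesPerChunk) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(inPath));
        BufferedWriter out = null;
        try {
            String line;
            int lineNo = 0;
            int chunkNo = 0;
            while ((line = in.readLine()) != null) {
                if (lineNo % linesPerChunk == 0) {
                    if (out != null) out.close();
                    out = new BufferedWriter(new FileWriter(inPath + ".part" + chunkNo++));
                }
                out.write(line);
                out.newLine();
                lineNo++;
            }
        } finally {
            if (out != null) out.close();
            in.close();
        }
    }

    public static void main(String[] args) throws IOException {
        split(args[0], 50000); // e.g. 50,000 records per part
    }
}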
Nimrod.

Similar Messages

  • 2GB OR NOT 2GB - FILE LIMITS IN ORACLE

    Product: ORACLE SERVER
    Date written: 2002-04-11
    2GB OR NOT 2GB - FILE LIMITS IN ORACLE
    ======================================
    Introduction
    ~~~~~~~~~~~~
    This article describes "2Gb" issues. It gives information on why 2Gb
    is a magical number and outlines the issues you need to know about if
    you are considering using Oracle with files larger than 2Gb in size.
    It also looks at some other file-related limits and issues.
    The article has a Unix bias, as this is where most of the 2Gb issues
    arise, but there is information relevant to other (non-Unix) platforms.
    Articles giving port-specific limits are listed in the last section.
    Topics covered include:
    Why is 2Gb a Special Number ?
    Why use 2Gb+ Datafiles ?
    Export and 2Gb
    SQL*Loader and 2Gb
    Oracle and other 2Gb issues
    Port Specific Information on "Large Files"
    Why is 2Gb a Special Number ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Many CPUs and system call interfaces (APIs) in use today use a word
    size of 32 bits. This word size imposes limits on many operations.
    In many cases the standard APIs for file operations use a 32-bit signed
    word to represent both file size and current position within a file (byte
    displacement). A 'signed' 32-bit word uses the topmost bit as a sign
    indicator, leaving only 31 bits to represent the actual value (positive or
    negative). The largest positive number that can be represented in
    31 bits is 0x7FFFFFFF hexadecimal, which is +2147483647 decimal.
    This is ONE less than 2Gb.
    Files of 2Gb or more are generally known as 'large files'. As one might
    expect problems can start to surface once you try to use the number
    2147483648 or higher in a 32bit environment. To overcome this problem
    recent versions of operating systems have defined new system calls which
    typically use 64-bit addressing for file sizes and offsets. Recent Oracle
    releases make use of these new interfaces but there are a number of issues
    one should be aware of before deciding to use 'large files'.
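    To see the wrap-around concretely, a small Java illustration (a Java
    int is a 32-bit signed value, just like the file offsets described
    above):

    public class TwoGbLimit {
        public static void main(String[] args) {
            int max = Integer.MAX_VALUE;   // 0x7FFFFFFF = 2147483647, one less than 2Gb
            System.out.println(max);       // 2147483647
            System.out.println(max + 1);   // wraps around to -2147483648
            long big = 2147483648L;        // a 2Gb file size needs a 64-bit value
            System.out.println(big);       // 2147483648
        }
    }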
    What does this mean when using Oracle ?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The 32bit issue affects Oracle in a number of ways. In order to use large
    files you need to have:
    1. An operating system that supports 2Gb+ files or raw devices
    2. An operating system which has an API to support I/O on 2Gb+ files
    3. A version of Oracle which uses this API
    Today most platforms support large files and have 64bit APIs for such
    files.
    Releases of Oracle from 7.3 onwards usually make use of these 64bit APIs
    but the situation is very dependent on platform, operating system version
    and the Oracle version. In some cases 'large file' support is present by
    default, while in other cases a special patch may be required.
    At the time of writing there are some tools within Oracle which have not
    been updated to use the new APIs, most notably tools like EXPORT and
    SQL*LOADER, but again the exact situation is platform and version specific.
    Why use 2Gb+ Datafiles ?
    ~~~~~~~~~~~~~~~~~~~~~~~~
    In this section we will try to summarise the advantages and disadvantages
    of using "large" files / devices for Oracle datafiles:
    Advantages of files larger than 2Gb:
    On most platforms Oracle7 supports up to 1022 datafiles.
    With files < 2Gb this limits the database size to less than 2044Gb.
    This is not an issue with Oracle8 which supports many more files.
    In reality the maximum database size would be less than 2044Gb due
    to maintaining separate data in separate tablespaces. Some of these
    may be much less than 2Gb in size.
    Fewer files to manage for smaller databases.
    Fewer file handle resources required
    Disadvantages of files larger than 2Gb:
    The unit of recovery is larger. A 2Gb file may take between 15 minutes
    and 1 hour to backup / restore depending on the backup media and
    disk speeds. An 8Gb file may take 4 times as long.
    Parallelism of backup / recovery operations may be impacted.
    There may be platform specific limitations - Eg: Asynchronous IO
    operations may be serialised above the 2Gb mark.
    As handling of files above 2Gb may need patches, special configuration
    etc., there is an increased risk involved compared to smaller files.
    Eg: On certain AIX releases Asynchronous IO serialises above 2Gb.
    Important points if using files >= 2Gb
    Check with the OS Vendor to determine if large files are supported
    and how to configure for them.
    Check with the OS Vendor what the maximum file size actually is.
    Check with Oracle support if any patches or limitations apply
    on your platform , OS version and Oracle version.
    Remember to check again if you are considering upgrading either
    Oracle or the OS in case any patches are required in the release
    you are moving to.
    Make sure any operating system limits are set correctly to allow
    access to large files for all users.
    Make sure any backup scripts can also cope with large files.
    Note that there is still a limit to the maximum file size you
    can use for datafiles above 2Gb in size. The exact limit depends
    on the DB_BLOCK_SIZE of the database and the platform. On most
    platforms (Unix, NT, VMS) the limit on file size is around
    4194302*DB_BLOCK_SIZE. For example, with an 8Kb DB_BLOCK_SIZE this
    works out to roughly 32Gb per datafile.
    Important notes generally
    Be careful when allowing files to automatically resize. It is
    sensible to always limit the MAXSIZE for AUTOEXTEND files to less
    than 2Gb if not using 'large files', and to a sensible limit
    otherwise. Note that due to <Bug:568232> it is possible to specify
    a value of MAXSIZE larger than Oracle can cope with, which may
    result in internal errors after the resize occurs. (Errors
    typically include ORA-600 [3292])
    On many platforms Oracle datafiles have an additional header
    block at the start of the file so creating a file of 2Gb actually
    requires slightly more than 2Gb of disk space. On Unix platforms
    the additional header for datafiles is usually DB_BLOCK_SIZE bytes
    but may be larger when creating datafiles on raw devices.
    2Gb related Oracle Errors:
    These are a few of the errors which may occur when a 2Gb limit
    is present. They are not in any particular order.
    ORA-01119 Error in creating datafile xxxx
    ORA-27044 unable to write header block of file
    SVR4 Error: 22: Invalid argument
    ORA-19502 write error on file 'filename', blockno x (blocksize=nn)
    ORA-27070 skgfdisp: async read/write failed
    ORA-02237 invalid file size
    KCF:write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx
    file limit exceed.
    Unix error 27, EFBIG
    Export and 2Gb
    ~~~~~~~~~~~~~~
    2Gb Export File Size
    ~~~~~~~~~~~~~~~~~~~~
    At the time of writing most versions of export use the default file
    open API when creating an export file. This means that on many platforms
    it is impossible to export a file of 2Gb or larger to a file system file.
    There are several options available to overcome 2Gb file limits with
    export such as:
    - It is generally possible to write an export > 2Gb to a raw device.
    Obviously the raw device has to be large enough to fit the entire
    export into it.
    - By exporting to a named pipe (on Unix) one can compress, zip or
    split up the output.
    See: "Quick Reference to Exporting >2Gb on Unix" <Note:30528.1>
    - One can export to tape (on most platforms)
    See "Exporting to tape on Unix systems" <Note:30428.1>
    (This article also describes in detail how to export to
    a unix pipe, remote shell etc..)
    Other 2Gb Export Issues
    ~~~~~~~~~~~~~~~~~~~~~~~
    Oracle has a maximum extent size of 2Gb. Unfortunately there is a problem
    with EXPORT on many releases of Oracle such that if you export a large table
    and specify COMPRESS=Y then it is possible for the NEXT storage clause
    of the statement in the EXPORT file to contain a size above 2Gb. This
    will cause import to fail even if IGNORE=Y is specified at import time.
    This issue is reported in <Bug:708790> and is alerted in <Note:62436.1>
    An export will typically report errors like this when it hits a 2Gb
    limit:
    . . exporting table BIGEXPORT
    EXP-00015: error on row 10660 of table BIGEXPORT,
    column MYCOL, datatype 96
    EXP-00002: error in writing to export file
    EXP-00002: error in writing to export file
    EXP-00000: Export terminated unsuccessfully
    There is a secondary issue reported in <Bug:185855> which indicates that
    a full database export generates a CREATE TABLESPACE command with the
    file size specified in BYTES. If the filesize is above 2Gb this may
    cause an ORA-2237 error when attempting to create the file on IMPORT.
    This issue can be worked around by creating the tablespace prior to
    importing by specifying the file size in 'M' instead of in bytes.
    <Bug:490837> indicates a similar problem.
    Export to Tape
    ~~~~~~~~~~~~~~
    The VOLSIZE parameter for export is limited to values less than 4Gb.
    On some platforms it may be only 2Gb.
    This is corrected in Oracle 8i. <Bug:490190> describes this problem.
    SQL*Loader and 2Gb
    ~~~~~~~~~~~~~~~~~~
    Typically SQL*Loader will error when it attempts to open an input
    file larger than 2Gb with an error of the form:
    SQL*Loader-500: Unable to open file (bigfile.dat)
    SVR4 Error: 79: Value too large for defined data type
    The examples in <Note:30528.1> can be modified for use with SQL*Loader
    with large input data files.
    Oracle 8.0.6 provides large file support for discard and log files in
    SQL*Loader but the maximum input data file size still varies between
    platforms. See <Bug:948460> for details of the input file limit.
    <Bug:749600> covers the maximum discard file size.
    Oracle and other 2Gb issues
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    This section lists miscellaneous 2Gb issues:
    - From Oracle 8.0.5 onwards 64bit releases are available on most platforms.
    An extract from the 8.0.5 README file introduces these - see <Note:62252.1>
    - DBV (the database verification file program) may not be able to scan
    datafiles larger than 2Gb reporting "DBV-100".
    This is reported in <Bug:710888>
    - "DATAFILE ... SIZE xxxxxx" clauses of SQL commands in Oracle must be
    specified in 'M' or 'K' to create files larger than 2Gb otherwise the
    error "ORA-02237: invalid file size" is reported. This is documented
    in <Bug:185855>.
    - Tablespace quotas cannot exceed 2Gb on releases before Oracle 7.3.4.
    Eg: ALTER USER <username> QUOTA 2500M ON <tablespacename>
    reports
    ORA-2187: invalid quota specification.
    This is documented in <Bug:425831>.
    The workaround is to grant users UNLIMITED TABLESPACE privilege if they
    need a quota above 2Gb.
    - Tools which spool output may error if the spool file reaches 2Gb in size.
    Eg: sqlplus spool output.
    - Certain 'core' functions in Oracle tools do not support large files -
    See <Bug:749600> which is fixed in Oracle 8.0.6 and 8.1.6.
    Note that this fix is NOT in Oracle 8.1.5 nor in any patch set.
    Even with this fix there may still be large file restrictions as not
    all code uses these 'core' functions.
    Note though that <Bug:749600> covers CORE functions - some areas of code
    may still have problems.
    Eg: CORE is not used for SQL*Loader input file I/O
    - The UTL_FILE package uses the 'core' functions mentioned above and so is
    limited by 2Gb restrictions in Oracle releases which do not contain this fix.
    <Package:UTL_FILE> is a PL/SQL package which allows file IO from within
    PL/SQL.
    Port Specific Information on "Large Files"
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Below are references to information on large file support for specific
    platforms. Although every effort is made to keep the information in
    these articles up-to-date it is still advisable to carefully test any
    operation which reads or writes from / to large files:
    Platform See
    ~~~~~~~~ ~~~
    AIX (RS6000 / SP) <Note:60888.1>
    HP <Note:62407.1>
    Digital Unix <Note:62426.1>
    Sequent PTX <Note:62415.1>
    Sun Solaris <Note:62409.1>
    Windows NT Maximum 4Gb files on FAT
    Theoretical 16Tb on NTFS
    ** See <Note:67421.1> before using large files
    on NT with Oracle8
    *2 There is a problem with DBVERIFY on 8.1.6
    See <Bug:1372172>

    I'm not aware of a packaged PL/SQL solution for this in Oracle 8.1.7.3 - however it is very easy to create such a program...
    Step 1 - Write a simple Java program like the one listed:
    import java.io.File;
    public class fileCheckUtl {
        // Returns 1 if the named file exists, 0 otherwise
        public static int fileExists(String fileName) {
            File x = new File(fileName);
            if (x.exists())
                return 1;
            else
                return 0;
        }
        public static void main(String[] args) {
            int i = fileCheckUtl.fileExists(args[0]);
            System.out.println(i);
        }
    }
    Step 2 - Load this into the Oracle database using loadjava:
    loadjava -verbose -resolve -user user/pw@db fileCheckUtl.java
    The output should be something like this:
    creating : source fileCheckUtl
    loading : source fileCheckUtl
    creating : fileCheckUtl
    resolving: source fileCheckUtl
    Step 3 - Create a PL/SQL wrapper for the Java Class:
    CREATE OR REPLACE FUNCTION FILE_CHECK_UTL (file_name IN VARCHAR2) RETURN NUMBER AS
    LANGUAGE JAVA
    NAME 'fileCheckUtl.fileExists(java.lang.String) return int';
    Step 4 - Test it:
    SQL> select file_check_utl('f:\myjava\fileCheckUtl.java') from dual
    2 /
    FILE_CHECK_UTL('F:\MYJAVA\FILECHECKUTL.JAVA')
    1

  • Help: I downloaded "Yellow Submarine", which is 294MB, but my iBooks 2 keeps forbidding me from opening it, saying it exceeds the file limit. Please tell me what I should do

    Help: I downloaded "Yellow Submarine", which is 294MB, but my iBooks 2 keeps forbidding me from opening it, saying it exceeds the file limit. Please tell me what I should do?

    Having the same issue, can someone please enlighten us?

  • Finding hard and soft open file limits from within jvm in linux

    Hi All,
    I have a problem where I need to find out the hard and soft open file limits for the process in Linux from within a Java program. When I execute ulimit from the terminal it gives separate values for hard and soft open file limits.
    From shell if I run the command then the output is given below:
    $ ulimit -n
    1024
    $ ulimit -Hn
    4096
    The java program is given below:
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.io.Reader;
    import java.io.StringWriter;
    import java.io.Writer;
    public class LinuxInteractor {
        public static int executeCommand(String command, boolean waitForResponse, OutputHandler handler) {
            int shellExitStatus = -1;
            ProcessBuilder pb = new ProcessBuilder("bash", "-c", command);
            pb.redirectErrorStream(true);
            try {
                Process shell = pb.start();
                if (waitForResponse) {
                    // To capture output from the shell
                    InputStream shellIn = shell.getInputStream();
                    // Wait for the shell to finish and get the return code
                    shellExitStatus = shell.waitFor();
                    convertStreamToStr(shellIn, handler);
                    shellIn.close();
                }
            } catch (IOException e) {
                System.out.println("Error occurred while executing Linux command. Error Description: "
                        + e.getMessage());
            } catch (InterruptedException e) {
                System.out.println("Error occurred while executing Linux command. Error Description: "
                        + e.getMessage());
            }
            return shellExitStatus;
        }

        public static String convertStreamToStr(InputStream is, OutputHandler handler) throws IOException {
            if (is != null) {
                Writer writer = new StringWriter();
                char[] buffer = new char[1024];
                try {
                    Reader reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
                    int n;
                    while ((n = reader.read(buffer)) != -1) {
                        String output = new String(buffer, 0, n);
                        writer.write(buffer, 0, n);
                        if (handler != null)
                            handler.execute(output);
                    }
                } finally {
                    is.close();
                }
                return writer.toString();
            } else {
                return "";
            }
        }

        public abstract static class OutputHandler {
            public abstract void execute(String str);
        }

        public static void main(String[] args) {
            OutputHandler handler = new OutputHandler() {
                @Override
                public void execute(String str) {
                    System.out.println(str);
                }
            };
            System.out.print("ulimit -n : ");
            LinuxInteractor.executeCommand("ulimit -n", true, handler);
            System.out.print("ulimit -Hn : ");
            LinuxInteractor.executeCommand("ulimit -Hn", true, handler);
        }
    }
    If I run this program the output is given below:
    $ java LinuxInteractor
    ulimit -n : 4096
    ulimit -Hn : 4096
    I used Ubuntu 12.04, Groovy Version: 1.8.4, JVM: 1.6.0_29 for this execution.
    Please help me understand this behavior and how I can get the correct result from within the Java program.
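    One plausible explanation for the 4096/4096 output: the HotSpot JVM
    raises its own soft file descriptor limit to the hard limit at startup,
    so the bash child spawned from Java inherits 4096 for both values. On a
    Sun/HotSpot JVM the limit can also be read directly, without shelling
    out, via the non-standard com.sun.management extension; a minimal
    sketch, assuming that JVM:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    public class FdLimit {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
                com.sun.management.UnixOperatingSystemMXBean unix =
                        (com.sun.management.UnixOperatingSystemMXBean) os;
                // The descriptor limit as this JVM sees it (its soft limit,
                // which HotSpot may already have raised to the hard limit)
                System.out.println("max fds : " + unix.getMaxFileDescriptorCount());
                System.out.println("open fds: " + unix.getOpenFileDescriptorCount());
            }
        }
    }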

    Moderator Action:
    As mentioned in one of the earlier responses:
    @OP this is not a Java question. I suggest you take it elsewhere, i.e. to a Unix or Linux forum. You posted this to a Java programming forum.
    It is not a Java programming inquiry.
    This off-topic thread is locked.
    Additionally, you have answered your own question.
    Don't bother posting it to one of the OS forums. It will get deleted as a duplicate cross-post.

  • Writing to file limiting system performance

    Hello,
    I really could use some help with my VI in terms of writing data.  I’ve had a LOT of help optimizing my code and am trying to enhance the performance in terms of data acquisition.  However, it seems as though writing to a data file is really limiting the frequency I can sample at.  I’ve done some research and understand that writing data at every iteration of the while loop and the build array function slows things down.  How would I modify the code so that the array buffer would store maybe 5000 data points before writing to a file, then clearing the array?  That would keep the array size small, as well as reduce the number of times the program is performing the write to file function.  Is there a better way of doing this?  I’m open to any other ideas as well.    
    I am taking data from 14 channels, and would like to sample at 1 kHz each. The task right now is created within Measurement and Automation Explorer, and the number of samples is at 100. I also use a buffer indicator, which will generally grow out of control, no matter how much I modify the number of samples and the frequency. The length of my test can last upwards of 6 hours, so it needs to work that long without crashing.
    The code and subVIs are attached. Hopefully it's all there.
    Thanks for your help,
    Alex
    Attachments:
    Test Program.zip 295 KB
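    The accumulate-then-flush pattern described above (buffer ~5000 points,
    write once, clear) would normally be built in LabVIEW with a queue-based
    producer/consumer design; here is the idea itself, sketched in Java
    purely for illustration (class name and threshold are illustrative):

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class ChunkedLogger {
        private static final int FLUSH_THRESHOLD = 5000; // points held before each write
        private final List<double[]> buffer = new ArrayList<double[]>();
        private final BufferedWriter out;

        public ChunkedLogger(String path) throws IOException {
            out = new BufferedWriter(new FileWriter(path));
        }

        // Call once per acquisition loop iteration with one sample per channel
        public void add(double[] sample) throws IOException {
            buffer.add(sample);
            if (buffer.size() >= FLUSH_THRESHOLD) {
                flush();
            }
        }

        // Write everything held so far, then clear the buffer so it stays small
        public void flush() throws IOException {
            for (double[] s : buffer) {
                StringBuilder line = new StringBuilder();
                for (int i = 0; i < s.length; i++) {
                    if (i > 0) line.append('\t');
                    line.append(s[i]);
                }
                out.write(line.toString());
                out.newLine();
            }
            buffer.clear();
            out.flush();
        }
    }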

    Lynn,
    Yeah, I'll have to keep an eye on the block diagram size in the future. It can get unwieldy.
    I tried incorporating the Recent History Buffer example into my code. I did have a few hang-ups, which are giving me some trouble. Mostly, how do I connect my waveform data to the Buffer VI? Will I be able to have all my channels connect to this? Also, ultimately I will have two write-to-file VIs. Can the buffer differentiate which file to write to?
    Thanks,
    Alex
    Attachments:
    Instrument Panel V1.1 (Labview 8.0).vi 159 KB

  • Memory cards File limitation - IMPORTANT

    Hello all,
    This may be old news to many of the techies out there. But just wanted to get this out for other people that may not know this.
    Memory cards are formatted in FAT (File Allocation Table) format. This format (also used on PC hard drives) has a specific limit on the number of files it may have in a single folder.
    The limit is 1000 (one thousand) files.
    So if you are trying to move 1000 files into the root folder of a 2 GB memory card, chances are it will fail or give an error message.
    This limitation is not Nokia's but part of the FAT format. So to avoid problems keep this in mind if you are trying to save a lot of pictures or songs in a single folder: it won't take more than 1000 individual files. Sometimes when you drag and drop, all the hidden files will also be moved, so you may not know that you are transferring more than 1000 files. You have to change the options in Windows to show hidden files and folders so that you know what is actually being transferred.
    Of course you can have more than 1000 files, you just have to put them in more than one folder.
    This is also true for the N91 and N91 8GB HDD.
    Enjoy!
    640K Should be enough for everybody
    El_Loco Nokia Video Blog

    People, please read my post.
    You can have 10,000+ files on a FAT format card or HDD, but you can't have more than 1000 (one thousand) in a single folder or the root folder (i.e. if you just copy all the files to E
    Also, I have used an N93 with a 1 and 2 GB card and it works, albeit it gets a bit slower when there is a lot of data on the card and the card is getting low on memory.
    So again, you can have lots and lots of files, just not in a single folder on the memory card.
    EDIT: Read my signature.
    I've seen video footage of Bill Gates actually saying the 640K comment. Or at least I think I have... anyways, the fact is that up until Windows 98 (or was it 95 OSR2) they were still struggling with this memory limit. So even if he didn't say it, they sure designed the SW architecture to be limited by 640 KB.
    Message Edited by el_loco on 08-Feb-2007 03:15 PM
    640K Should be enough for everybody
    El_Loco Nokia Video Blog

  • Question on EAR file Limitations

    To Whom it may concern,
    We are utilizing WebLogic 9.2 R3 in a 32-bit Windows Server environment and I am having trouble deploying an EAR file. The EAR file contains the help files for our IBM application, but it seems to not want to show many of the files when you look at the help for the applications. I believe it has to do with the fact that the number of files in the EAR file has exceeded 65535. The number is around 69000 files.
    I removed several application help folders to bring the build down to 65100. I redeployed the EAR and the help files seem to appear.
    Is there a way to enable WebLogic to deploy the EAR file if the number of files in the EAR is past 65535? Also is there a technote on the limitations of WebLogic 9.2 32-bit EAR files?
    Thanks in Advance!
    Edited by: user2237078 on Jul 26, 2011 4:59 PM
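    The 65535 figure matches the 16-bit entry-count field of the original
    ZIP format (an EAR is a ZIP archive), which is most likely the ceiling
    being hit here. One quick way to count the entries, sketched with the
    standard java.util.zip API (note that JDKs without ZIP64 support may
    themselves misreport archives holding more than 65535 entries):

    import java.util.zip.ZipFile;

    public class CountEarEntries {
        public static void main(String[] args) throws Exception {
            // Pass the path to the EAR on the command line
            ZipFile ear = new ZipFile(args[0]);
            try {
                System.out.println(ear.getName() + " contains " + ear.size() + " entries");
            } finally {
                ear.close();
            }
        }
    }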

    An EAR should have an "application.xml" in the META-INF directory within the JAR. You might take a look at it. I suspect there is an EJB module specified by that name.

  • Problem with loading a Sequence File - Limited Windows Account

    Dear Sirs,
    I have a problem and very little time to fix it, so I'm asking for your kind help.
    I have a production PC, with a Debug Deploy License of TS 3.1 and LV 7.1; the operating system is Win XP SP2.
    No problem when I log in with my Administrator Windows account; on the contrary, when I try to log in with a Limited Windows account, I'm not able to load my Sequence File, and the text that appears is attached to this message.
    Can someone advise me of a solution to quickly solve my problem?
    Thank you very much and Best Regards.
    Stefano 
    Attachments:
    TS_error_winXP_login_limited_account.jpg 224 KB

    Stefano,
    if you open files in TestStand, TestStand tracks this in different configuration files (for example for the "most recently used" list). Therefore, TestStand tries to save these changes in the ini files. It seems to me that the installation directory of TestStand is not writable by your user login in Windows.
    So you have different solutions here, the simplest being:
    Give your user the right to write files into the <TestStand> folder.
    hope this helps,
    Norbert B.
    Message Edited by Norbert B on 11-16-2006 07:28 AM
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • Solaris 9 - Flarcreate File Limitation

    Hello All,
    Background: Migration from a Solaris9 system w/ Oracle to Solaris10 container
    Method: Create a flar file of the existing system to install into a Solaris9 container on the new Solaris 10 box.
    Issue: The created flar file does not contain any files larger than 2GB (the Oracle DB files are approx 4.9GB each, totalling around 380GB).
    I've researched the issue and it appears to be related to the flarcreate command using cpio, which has a file size limitation - this isn't an issue in Solaris 10 because you can specify a switch that allows flarcreate to use pax, instead of cpio.
    Can anyone recommend a way to get around this in Solaris9? Besides the obvious solution of excluding the database files and manually moving them across via a 3rd party backup solution or tar/ftp.
    Thanks,
    Marcus

    If anyone else hits this issue: it seems there is a known bug in flarcreate on Solaris 9, which makes the p2v process of creating a zone or container on Solaris 10 difficult.
    Files over 2GB are missed by flarcreate. Luckily it was a critical file and we spotted it straight away, not in a few years' time ;-)
    The Solaris 10 fix was added at some time to 119534-17:
    6256048 flarcreate will not copy single file larger than 2GB
    As a workaround we used flarcreate and then a manual copy of files over roughly 2GB:
    find / -size +3900000 -exec cp {} /mnt/NFS \;
    At least that's the plan anyway ;-)
    Cheers
    Neil

  • Imported music file limited in length?

    I'm trying to splice apart a long music file. In the main upper Tracks window there is a little purple, left-facing arrow on the right that won't let me play the rest of the music beyond it. I'm kind of new to this software. Can someone help me?

    Drag the purple arrow to the end of your track and you "should" be good to go. Previously GB had a limit of 999 measures, and from what I have read, GB3 now has 1999, so you are just dealing with the End Of Song Marker (that is what the purple arrow is called) being NOT at the end of your song. Personally, I think it should automatically go to the end of any song that is within its limits. But sometimes you have to deal with GB a bit to get what you want out of it. This is one of those instances for you.

  • How to dynamically load jar files - limiting scope to that thread

    Dynamically loading jar files has been discussed a lot. I have read quite a few posts, articles, and demo code for doing just that. However, I have yet to find a solution to my problem. Most people modify their system class loader and are happy. I have done that and was happy for a time. Occasionally, you will see reference to an application server or Tomcat or some other large project that has successfully been able to load and unload jar files, allow for dynamic deployment of code, etc. However, I have not been able to achieve similar success, and my problem is much less complicated.
    I have an application that executes a thread to send a given file/message to a standard JMS Server Queue. Depending on the parameters selected by the user, this thread may need to communicate with one of a number of JMS Servers, ie. JBoss, WebLogic, EAServer, Glassfish, etc. All of which can be done with the same code, but each needs to load their own flavor of JMS Client Jar files. In this instance, spawning a separate JVM for each communication would work from a classloader perspective. However, I need to keep it in the family and run under the same JVM, albeit each JMS Server Connection will be created and maintained in separate Threads.
    I am close, I am doing the following...
    1. Creating a new URLClassLoader in the run() method of each thread.
    2. Set this threads contextClassLoader to the new URLClassLoader.
    3. Load the javax.jms.JMSException class with the URLClassLoader.loadClass() method.
    4. Create an initialContext object within this thread.
    Note: I read that the InitialContext and subsequent context lookup calls would use the Thread's
    contextClassLoader for finding/loading classes.
    5. Perform context.lookup calls for a connectionFactory and Queue name.
    6. Create JMS Connection, etc. Send Message.
    Most of this seems to work. However, I am still getting a NoClassDefFoundError exception for the javax.jms.JMSException class (note step #3, which tried to cure this, unsuccessfully).
    If I include one of the JMS Client jar files (i.e. wljmsclient.jar for WebLogic) in the classpath then it works for all the different JMS Servers, but I do not have confidence that each of the providers implemented these classes that now resolve the same way. It may work for now, but I believe I am just lucky.
    Can anyone shine some light on this for me and all the others who have wanted to dynamically load classes/jar files on a per Thread basis?
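    A minimal sketch of steps 1-2 and the context-loader handoff described
    above (jar path and class names are illustrative; this shows the
    approach being attempted, not a guaranteed fix for the
    NoClassDefFoundError):

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class JmsSenderThread extends Thread {
        private final File clientJar; // provider-specific JMS client jar

        public JmsSenderThread(File clientJar) {
            this.clientJar = clientJar;
        }

        @Override
        public void run() {
            try {
                // Step 1: a URLClassLoader private to this thread
                URLClassLoader loader = new URLClassLoader(
                        new URL[] { clientJar.toURI().toURL() },
                        getClass().getClassLoader());
                // Step 2: JNDI's InitialContext consults the thread context class loader
                Thread.currentThread().setContextClassLoader(loader);
                // Steps 4-6 would follow here: create the InitialContext, look up
                // the ConnectionFactory and Queue, create the connection, send.
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }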

    Thanks to everyone - I got it working!
    First, BenSchulz's dumpClassLoader() method helped me to visualize the classLoader hierarchy. I am still not completely sure I understand why my initial class was always found by the systemClassLoader, but knowing that was the step I needed to find the solution.
    Second, kdgregory suggested that I use a "glue class". I thought that I already was using a "glue class" because I did not have any JMSClient specific classes exposed to the rest of the application. They were all handled by my QueueAdmin class. However...
    The real problem turned out to be that my two isolating classes (the parent "MessageSender" and the child "QueueAdmin") were contained within the same jar file that was included in the classpath. This meant that no matter what I did, the classes were loaded by the systemClassLoader. Isolating them in classes was just the first step. I had to remove them from my jar file and create another jar file just for those JMSClient-specific classes. Then this jar file was only included in the custom classLoader that I created when I wanted to instantiate a JMSClient session.
    I had to create an interface in the primary jar file that could be loaded by the systemClassLoader to provide the stubs for the individual methods that I needed to call in the MessageSender/QueueAdmin Classes. These JMSClient specific classes had to implement the interface so as to provide a relationship between the systemClassLoader classes and the custom classLoader classes.
    Finally, when I loaded and instantiated the JMSClient specific classes with the custom classLoader I had to cast them to the interface class in order to make the method calls necessary to send the messages to the individual JMS Servers.
    Pseudocode / concept ....
    Primary Jar File - Included in ClassPath:
    Class<?> cls = customLoader.loadClass("JMSClient.MessageSender");
    JMSClientInterface jmsClient = (JMSClientInterface) cls.newInstance();
    jmsClient.sendMessage();
    JMSClient Jar File - Loaded by Custom ClassLoader Only:
    public class MessageSender implements Primary.JMSClientInterface {
        public void sendMessage() throws Exception {
            Class<?> cls = getClass().getClassLoader().loadClass("JMSClient.QueueAdmin");
            QueueAdmin queueAdmin = (QueueAdmin) cls.newInstance();
            queueAdmin.JMSClientSpecificMethod();
        }
    }

  • IAd Unit sizes: I am a bit confused about the file limits

    Say I have the bottom banner for iPad (hr) and the full-size iPad banner, what is the max iAdUnit size?
    Is it 350kb for the full-size iPad banner (including all the images included on that page), plus 200kb for the bottom banner?
    Or how does it even work with sizes?

    It appears that Lightroom will warn you when your original file (or XMP sidecar) is out of sync with the Lightroom database. In other words, if you import an image with no metadata, then add metadata within Lightroom, the Lightroom database will have metadata that the file (or XMP sidecar) will not. That is what the warning is concerning.
    Nothing more, and no worries if you choose to ignore it. All current data would be in the database.
    Actually, I suppose it is a little more complicated than that, but basically speaking, there's nothing to worry about. That being said, if you edit metadata in an outside application, and then tell Lightroom to sync the folder, I expect the warning will appear because the file (or XMP sidecar) now has data that the Lightroom database doesn't. The point is still the same...
    Anyone with more than a day's worth of knowledge care to jump in and correct me?

  • NAM2 5.1(2) Capture to File Limitation?

    I recently upgraded my NAM2 from v4.2(1) to 5.1(2). It now appears that I can only run a single concurrent capture when the destination is the disk. This has occurred on multiple NAMs so I assume it's either operating as designed or I downloaded faulty code. Any thoughts? Thanks!

    Sorry - just saw your reply. Here is the version information:
    Cisco Network Analysis Module (WS-SVC-NAM-2) Software version 5.1(2) RELEASE SOFTWARE [fc4]
    Maintenance image version: 2.1(5)
    BIOS Version: 4.0-Rel 6.0.9
    NAM Daughter Card Micro code version: 1.34.1.28 (NAM)
    Fri Mar 23 16:20:36 2012 Patch: nam-app.strong-crypto-patchK9-5.1.2-1 Description: Strong Crypto Patch for NAM.

  • Stdio - file descriptor limits - Solaris 10

    Hi
    New to Solaris from HP-UX and we are porting an application.
    I run into a problem whereby we run out of file descriptors - application is TCP/IP and file I/O intensive. Sometimes it happens, sometimes not.
    It manifests itself as an error when calling setsockopt.
    Increasing the file limits in /etc/system has not relieved the problem.
    A Google search suggests there is a hard limit of 255 on file descriptors for 32-bit applications using stdio - does this still apply? Any workarounds?
    Specs:
    Solaris 10 01/06
    SunOS saturn 5.10 Generic_118822-25 sun4u sparc SUNW,Sun-Fire-v240
    Thanks in advance.

    What shell do you start the application from?
    If you use sh/bash/ksh, type "ulimit -a" to see what limits the process will inherit from the shell. If the value for 'open files' is very low you can increase it with:
    ulimit -n <new value>
    for example:
    ulimit -n 4096
    If you are using tcsh/csh, type "limit -h" to view the limits.
    The value you set in /etc/system is the maximum allowed number of file descriptors per process. It means that the process is allowed to increase its own limit of open files up to that value, but it doesn't mean that the process gets the highest limit automatically.
    See also:
    man ulimit
    man setrlimit
    7/M.

  • HELP! File already open for writing?

    I've got a file that has been Stuffed (using Stuffit) and now I'm trying to unstuff it and/or open it but get the "File is already open with write permission (-49)" error with an "OK". When I click OK, it stops doing what it was doing. But this file has not been opened for months. What do you think happened to this file prior to stuffing it? Could it have been open and renamed while it was still open? Is there a way that I can open this file? This is a file that can be very helpful with the reconstruction of the next year's edition. We're a printing company. Any input would be great.

    There are a multitude of possibilities as to how a file might get corrupted. Specific to StuffIt is that older versions had a file size limitation, about 2GB of the total to be compressed, but the user does not get any warning about this. You may want to check the support section of their website. Hopefully, there is an uncompressed backup elsewhere.
