File lock timeout length

When a user opens a document and a file lock timeout period is set on it in the database, how does SharePoint 2013 determine the length of the lock period?  This becomes a problem when a user's system crashes: the file lock from their
original session remains in place until the file lock timeout expires.
I already know there's an unsupported way to fix this in the database, but Microsoft's official recommendation is to wait "10 minutes", yet on most files I see the lock timeout set hours in the future.  This causes users an enormous
amount of frustration: they must either wait multiple hours or save a copy and merge changes later.  Is there a fix to lower the file lock timeout value to something more reasonable?  Why was this designed so badly?

It actually depends on your session timeout settings.
The lock prevents other users from modifying or overwriting the document.
Check whether the links below answer the rest of your questions.
Office has to keep extending that short-term lock. You may check if this is the issue:
http://paulliebrand.com/2008/01/04/document-is-locked-for-editing/ or check this
http://paulliebrand.com/2010/04/12/document-is-locked-for-editing-part-2/
If this helped you resolve your issue, please mark it Answered

Similar Messages

  • Cannot open file. Internal error: Tride to lock 0 length block

    I am working in PageMaker 6.5.
    When I open the PageMaker file, the following message appears:
    cannot open file.
    Internal error: Tride to lock 0 length block
    8009:20540
    Please help me solve the problem.

    My guess is that file is toast. Do you have a backup?
    Bob

  • File lock() method problem

    I know this may be a common question, but can someone explain why this code:
    import java.io.*;
    import java.nio.channels.*;
    import java.util.*;

    public class TestFileLock {
         private static File file;
         private static RandomAccessFile fileRandom;
         private static FileChannel fileChannel;
         private static FileLock fileLock;
         private static String process;

         public static void main(String[] args) {
              process = args[0];
              try {
                   file = new File("/home/fauxn/work/blast/java/java_blast/test.log");
                   //fileWriter = new FileWriter(file, true);
                   fileRandom = new RandomAccessFile(file, "rw");
                   fileChannel = fileRandom.getChannel();
                   for (int i = 0; i < 1000; i++) {
                        writeLogFile(process + ": happy days\n");
                   }
              } catch (Exception exception) {
                   System.out.println(exception.getMessage());
                   exception.printStackTrace();
              }
         }

         /**
          * Locks the logFile, appends the string to the file, and unlocks the file.
          * @author Noel Faux.
          * @param s The string to be appended to the file.
          */
         private static void writeLogFile(String s) throws IOException {
              System.out.println(process + ": trying to lock the log file");
              fileLock = fileChannel.tryLock();
              while (fileLock == null) {
                   System.out.println(s + ": waiting!!!!");
                   fileLock = fileChannel.tryLock();
              }
              System.out.println(s + ": logfile locked");
              fileRandom.seek(fileRandom.length());
              fileRandom.write(s.getBytes());
              fileLock.release();
              System.out.println(s + ": logfile unlocked");
         }
    }
    produces this error:
    Invalid argument
    java.io.IOException: Invalid argument
    at sun.nio.ch.FileChannelImpl.lock0(Native Method)
    at sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:528)
    at java.nio.channels.FileChannel.tryLock(FileChannel.java:967)
    at TestFileLock.writeLogFile(TestFileLock.java:43)
    at TestFileLock.main(TestFileLock.java:23)
    on an Intel Linux box with Java 1.4, while this code works fine:
    import java.io.*;
    import java.nio.channels.*;
    import java.util.*;

    public class TestFileLock {
         private static File file;
         private static RandomAccessFile fileRandom;
         private static FileChannel fileChannel;
         private static FileLock fileLock;
         private static String process;

         public static void main(String[] args) {
              process = args[0];
              try {
                   file = new File("/home/fauxn/work/blast/java/java_blast/test.log");
                   //fileWriter = new FileWriter(file, true);
                   fileRandom = new RandomAccessFile(file, "rw");
                   fileChannel = fileRandom.getChannel();
                   for (int i = 0; i < 1000; i++) {
                        writeLogFile(process + ": happy days\n");
                   }
              } catch (Exception exception) {
                   System.out.println(exception.getMessage());
                   exception.printStackTrace();
              }
         }

         /**
          * Locks the logFile, appends the string to the file, and unlocks the file.
          */
         private static void writeLogFile(String s) throws IOException {
              System.out.println(process + ": trying to lock the log file");
              fileLock = fileChannel.tryLock(0, fileRandom.length(), false);
              while (fileLock == null) {
                   System.out.println(s + ": waiting!!!!");
                   fileLock = fileChannel.tryLock(0, fileRandom.length(), false);
              }
              System.out.println(s + ": logfile locked");
              fileRandom.seek(fileRandom.length());
              fileRandom.write(s.getBytes());
              fileLock.release();
              System.out.println(s + ": logfile unlocked");
         }
    }
    Any suggestions welcome, or is this a bug?
    Thanks in advance :)

    It's a known bug. The no-argument tryLock() method calls the parameterized
    tryLock method as follows:
    tryLock(0, Long.MAX_VALUE, false);
    Unfortunately, under Linux the parameter Long.MAX_VALUE is too big for
    the underlying OS file locking. This causes the IOException to be thrown.
    It has been fixed in version 1.4.1, I believe.
    For more info:
    http://developer.java.sun.com/developer/bugParade/bugs/4532474.html
    matfud
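    For anyone stuck on a pre-1.4.1 VM, the usual workaround (which is what the second listing above does) is to pass an explicit, bounded region to tryLock instead of relying on the no-argument overload. Below is a minimal sketch of that idea; the file name is a placeholder and this is not code from the thread:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;

    public class BoundedLockExample {
         public static void main(String[] args) throws IOException {
              RandomAccessFile raf = new RandomAccessFile("test.log", "rw");
              FileChannel channel = raf.getChannel();
              // Lock only the file's current extent rather than the default
              // (0, Long.MAX_VALUE) range that triggers the Linux bug in 1.4.0.
              FileLock lock = channel.tryLock(0, Math.max(1, raf.length()), false);
              try {
                   if (lock != null) {
                        raf.seek(raf.length());
                        raf.write("appended under a bounded lock\n".getBytes());
                   }
              } finally {
                   if (lock != null) {
                        lock.release();
                   }
                   channel.close();
                   raf.close();
              }
         }
    }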

  • CFFILE Holding File Locks

    I have a CF page that reads some tab-delimited text files for
    processing. After it reads a file it then calls a CF page to move
    the file to a new location. It moves the file, but it does not
    delete the old version -- and a filesystem delete on the file fails
    -- it appears as if the CF server has the file locked. If I shut
    down the server and restart it, it clears the lock.
    <cfif #url.vendor# is "LOWES">
      <cflock timeout="10" scope="server" throwOnTimeout="no" type="exclusive">
        <cffile action="move"
                source="c:\edifiles\export\#url.fileno#"
                destination="c:\edifiles\export\done\lowes\#url.fileno#">
      </cflock>
    </cfif>

    Generally when manipulating files you use a named
    <cflock> instead of a scoped one, like so:
    <cflock name="#Variables.myFileName#" timeout="5" type="exclusive">
      <!--- file operations here --->
    </cflock>
    In the second example in your code I'm not sure about the
    function of the loop at all... I guess you are trying to delay to
    make sure the copy operation completes before deleting. But looping
    from 1 to 10 is arbitrary: it depends on other concurrent
    processes, server speed, RAM, etc., and so it is basically
    unreliable.

  • Really persistent file lock

    I've got a problem where connections with certain file locks do not seem to close for more than a day, even on reboot / shutdown of the client.
    We have a .NET/C# based document processor that runs in every user session that requires it, and it manages to put some weird file lock on its log files, which are located in users' home directories (unable to change that).
    For some reason our OES11 server every now and then does not close the connection holding the log file open, and users cannot start the processor in the morning because it cannot access its own log file.
    This occasionally happens with normal workstations running XP, but more frequently on Win2k8r2 with XenApp, although all our XenApp servers get rebooted daily.
    I would expect the OES server to close the connections upon reboot of the XenApp servers or when the workstation is turned off for 10+ hours. It appears to take somewhere over 24 hours for the connection / lock to clear, but I have nothing to back this up.
    Can anybody explain this behaviour and/or come up with possible workarounds? I always assumed connections were actively checked to see if they were alive.
    I also need to have a talk with the people where this amazing piece of .NET software came from about the way they lock files.
    Thanks

    In article <[email protected]>, Conz wrote:
    > For some reason our OES11 server every now and then does not close the
    > connection holding the log file open and users can not start the
    > processor in the morning because it can not access its own log file.
    >
    The default watchdog timeout process isn't as effective as it used to be
    on NetWare, but there are things you can do. I had hit this with OES2 and
    mostly resolved it with http://www.novell.com/support/kb/doc.php?id=7004848
    and while looking for that TID found this useful-looking one:
    http://www.novell.com/support/kb/doc.php?id=3138614
    Andy Konecny
    Knowledge Partner (voluntary SysOp)
    KonecnyConsulting.ca in Toronto
    Andy's Profile: http://forums.novell.com/member.php?userid=75037

  • File Locking Issue with Concurrent Locks

    Hi. I'm trying to implement a procedure using the FileLock class for two instances of the same program (running on different servers) accessing files shared over an NFS mount in Unix (this is using Java 1.5). 99% of the time, this works great. One box will get the lock, and the other will respect that lock and not touch the file. However, the other 1% of the time, both boxes seem to access the same file simultaneously, allowing BOTH to acquire locks on the same file.
    I'm wondering if anyone has any advice on how to solve this problem. I've been searching all over but haven't found a solution. Like I said, the locking works most of the time - I've even independently verified that it does (and that it's not just timing) with a test program. It's only when both programs access the file at the exact same time that there seems to be a problem.
    If it helps, here is the gist of my code:
      FileLock javaLock = null;
      FileChannel fileChannel = null;
      for (int i = 0; i < fileArray.length; i++) {
           javaLock = null;
           fileChannel = null;
           if (!fileArray[i].exists()) {
                continue;
           }
           try {
                RandomAccessFile raLockFile = new RandomAccessFile(fileArray[i], "rw");
                try {
                     fileChannel = raLockFile.getChannel();
                     if (fileArray[i].exists()) {
                          javaLock = fileChannel.tryLock();
                     }
                } catch (OverlappingFileLockException ofle) {
                     // Error printed here
                     continue;
                } catch (Exception e) {
                     // Error
                     javaLock = null;
                }
                if (javaLock == null) {
                     continue; // File is already locked, move to next one
                }
                // Processing is done on the file here
           } catch (Exception e) {
                // Error
           } finally {
                try {
                     if (javaLock != null) {
                          javaLock.release();
                     }
                     if (fileChannel != null) {
                          fileChannel.close();
                     }
                } catch (Exception e) {
                     // Error
                }
           }
      }
    Edited by: kksmith on Jul 21, 2008 7:31 AM
    Edited by: kksmith on Jul 21, 2008 7:32 AM

    I don't have the errors on hand at the moment, but most were File Not Found IOExceptions (because one box would finish processing the file and rename it out from under the other box). Using a bunch of System.out.println()s I was able to determine that both boxes were locking the same specific files, producing those errors.
    The other weird thing that would happen is that the file would be processed correctly on one box, but somehow leave behind a zero-length version of itself, which the other box would finish processing normally as if we had received a zero-length file (as we have special code in place for that condition). This is probably the biggest problem, since I can trap File Not Founds (though it's messy and kind of defeats the purpose of using locks to begin with), but I don't have an easy way to catch this (and I can't just toss out all zero-length files, because we want to know if we legitimately got a zero-length file in).
    As far as I can tell, though, this problem doesn't occur outside of an NFS mount, but I haven't tested it extensively enough to be sure (since it's a random timing thing, it's difficult to reproduce outside of just dumping a ton of files on it). I'm not entirely sure how our NFS system is configured, since that is completely out of my hands (the system admins have control over all of that). But I believe it has most of the latest patches.
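    No definitive fix was posted in this thread, but a common workaround is to avoid byte-range locks on NFS altogether and use a separate lock file created by an atomic filesystem operation instead. The sketch below uses the classic hard-link trick (link creation is atomic over NFS, unlike FileChannel locks); it assumes java.nio.file from Java 7, so it would need adapting for the Java 1.5 setup described above, and all names are illustrative:

    import java.io.IOException;
    import java.nio.file.*;

    public class NfsLockFile {
         /**
          * Tries to acquire an advisory lock by hard-linking a per-host scratch
          * file to a shared lock-file name. Only one host can create the link,
          * so only one host can hold the lock at a time.
          */
         public static boolean tryAcquire(Path dir, String name, String hostId) throws IOException {
              Path scratch = dir.resolve(name + ".lock." + hostId);
              Path lock = dir.resolve(name + ".lock");
              Files.write(scratch, new byte[0]);        // per-host scratch file
              try {
                   Files.createLink(lock, scratch);     // atomic: fails if the lock exists
                   return true;
              } catch (FileAlreadyExistsException e) {
                   return false;                        // another host holds the lock
              } finally {
                   Files.delete(scratch);               // scratch file is no longer needed
              }
         }

         public static void release(Path dir, String name) throws IOException {
              Files.deleteIfExists(dir.resolve(name + ".lock"));
         }
    }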

  • Lock Timeouts and Heap Space Exhaustion

    I'm having some trouble figuring out the best way to handle resource constraints in my application. Generally speaking, the application works well for about a day or so after starting, but inevitably starts generating "Lock timeout" messages and eventually runs out of heap space.
    Here is the main entity class:
    http://github.com/justindthomas/flower_as/blob/master/src/java/name/justinthomas/flower/analysis/statistics/StatisticalInterval.java
    Here is a supporting, persistent class:
    http://github.com/justindthomas/flower_as/blob/master/src/java/name/justinthomas/flower/analysis/statistics/StatisticalFlow.java
    This is the data accessor:
    http://github.com/justindthomas/flower_as/blob/master/src/java/name/justinthomas/flower/analysis/statistics/StatisticsAccessor.java
    And here is the class that controls the insertion and retrieval of data:
    http://github.com/justindthomas/flower_as/blob/master/src/java/name/justinthomas/flower/analysis/statistics/StatisticsManager.java
    The general flow is that a netflow packet is received from a sensor and the StatisticsManager normalizes the flow as it is inserted into the database. The normalization converts the summarized flow into several "resolutions." For example, one resolution might be 5000 ms. So the StatisticsManager takes the flow's (end time / 5000) - (start time / 5000) and divides the volume by the result, inserting that data into the database. It sounds kind of confusing, but it results in a dataset that allows me to query for netflow activity over any time frame.
    It also means that the database is queried as it is written to; existing data is updated more frequently than new data is written.
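    As an illustration only, the normalization described above amounts to something like the following sketch (the 5000 ms resolution comes from the post; the class is a simplified stand-in for the linked source, and the +1 guards the case where a flow starts and ends inside a single interval):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class FlowNormalizer {
         static final long RESOLUTION_MS = 5000;

         /** Spread a flow's volume evenly across the 5000 ms intervals it spans. */
         public static Map<Long, Double> normalize(long startMs, long endMs, double volume) {
              long first = startMs / RESOLUTION_MS;
              long last = endMs / RESOLUTION_MS;
              long spanned = (last - first) + 1;

              Map<Long, Double> perInterval = new LinkedHashMap<Long, Double>();
              for (long i = first; i <= last; i++) {
                   perInterval.put(i, volume / spanned); // each interval gets an equal share
              }
              return perInterval;
         }

         public static void main(String[] args) {
              // A flow from t=0 to t=12000 ms carrying 3000 bytes spans three
              // 5-second intervals, so each interval is credited 1000 bytes.
              System.out.println(normalize(0, 12000, 3000));
         }
    }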
    Regardless, Sleepycat seems to be holding on to more data than it needs to. Once an interval has passed (maybe 5 minutes or so), it is unlikely to be accessed again until queried to create charts. However, the memory usage grows out of control despite there being no need to keep anything but recently entered entries in cache.
    This is how the trouble generally starts (note that I've increased the timeout to 15000 ms to try to accommodate longer wait times, but that just seems to delay the onset of the issue by a day or so):
    [#|2010-10-23T13:46:00.854-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=64;_ThreadName=Thread-1;|addStatisticalSeconds Failed: (JE 4.0.104) Lock expired. Locker 30264205 -1_pool-6-thread-8_ThreadLocker: waited for lock on database=persist#Statistics#name.justinthomas.flower.analysis.statistics.StatisticalInterval LockAddr:32426808 node=17057517 type=READ grant=WAIT_NEW timeoutMillis=15000 startTime=1287866745448 endTime=1287866760795
    Owners: [<LockInfo locker="29616818 -1_pool-6-thread-5_ThreadLocker" type="WRITE"/>]
    Waiters: [<LockInfo locker="7246740 -1_pool-6-thread-6_ThreadLocker" type="READ"/>, <LockInfo locker="26940477 -1_pool-6-thread-2_ThreadLocker" type="WRITE"/>, <LockInfo locker="5099094 -1_pool-6-thread-4_ThreadLocker" type="READ"/>]
    |#]
    [#|2010-10-23T15:00:41.343-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=65;_ThreadName=Thread-1;|addStatisticalSeconds Failed: (JE 4.0.104) Lock expired. Locker 14184769 -1_pool-6-thread-1_ThreadLocker: waited for lock on database=persist#Statistics#name.justinthomas.flower.analysis.statistics.StatisticalInterval LockAddr:32730917 node=17057517 type=READ grant=WAIT_NEW timeoutMillis=15000 startTime=1287871223679 endTime=1287871241341
    Owners: [<LockInfo locker="23975039 -1_pool-6-thread-8_ThreadLocker" type="WRITE"/>]
    Waiters: [<LockInfo locker="19650664 -1_pool-6-thread-6_ThreadLocker" type="READ"/>]
    |#]
    [#|2010-10-23T15:32:13.090-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=66;_ThreadName=Thread-1;|addStatisticalSeconds Failed: (JE 4.0.104) Lock expired. Locker 17937364 -1_pool-6-thread-7_ThreadLocker: waited for lock on database=persist#Statistics#name.justinthomas.flower.analysis.statistics.StatisticalInterval LockAddr:20265315 node=17057517 type=WRITE grant=WAIT_NEW timeoutMillis=15000 startTime=1287873113904 endTime=1287873133089
    Owners: [<LockInfo locker="3251671 -1_pool-6-thread-1_ThreadLocker" type="WRITE"/>]
    Waiters: [<LockInfo locker="32174859 -1_pool-6-thread-8_ThreadLocker" type="READ"/>, <LockInfo locker="33186148 -1_pool-6-thread-4_ThreadLocker" type="WRITE"/>, <LockInfo locker="17825718 -1_pool-6-thread-2_ThreadLocker" type="READ"/>]
    |#]
    [#|2010-10-23T15:32:13.096-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=64;_ThreadName=Thread-1;|addStatisticalSeconds Failed: (JE 4.0.104) Lock expired. Locker 32174859 -1_pool-6-thread-8_ThreadLocker: waited for lock on database=persist#Statistics#name.justinthomas.flower.analysis.statistics.StatisticalInterval LockAddr:20265315 node=17057517 type=READ grant=WAIT_NEW timeoutMillis=15000 startTime=1287873118064 endTime=1287873133092
    Owners: [<LockInfo locker="3251671 -1_pool-6-thread-1_ThreadLocker" type="WRITE"/>]
    Waiters: [<LockInfo locker="33186148 -1_pool-6-thread-4_ThreadLocker" type="WRITE"/>, <LockInfo locker="17825718 -1_pool-6-thread-2_ThreadLocker" type="READ"/>]
    |#]
    [#|2010-10-23T15:32:13.367-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=67;_ThreadName=Thread-1;|addStatisticalSeconds Failed: (JE 4.0.104) Lock expired. Locker 33186148 -1_pool-6-thread-4_ThreadLocker: waited for lock on database=persist#Statistics#name.justinthomas.flower.analysis.statistics.StatisticalInterval LockAddr:20265315 node=17057517 type=WRITE grant=WAIT_NEW timeoutMillis=15000 startTime=1287873118366 endTime=1287873133366
    Owners: [<LockInfo locker="3251671 -1_pool-6-thread-1_ThreadLocker" type="WRITE"/>]
    Waiters: [<LockInfo locker="17825718 -1_pool-6-thread-2_ThreadLocker" type="READ"/>, <LockInfo locker="25145711 -1_pool-6-thread-6_ThreadLocker" type="READ"/>, <LockInfo locker="5544029 -1_pool-6-thread-5_ThreadLocker" type="READ"/>]
    |#]
    [#|2010-10-23T15:33:14.030-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=68;_ThreadName=Thread-1;|addStatisticalSeconds Failed: (JE 4.0.104) Lock expired. Locker 31602565 -1_pool-6-thread-5_ThreadLocker: waited for lock on database=persist#Statistics#name.justinthomas.flower.analysis.statistics.StatisticalInterval LockAddr:11219992 node=17057517 type=WRITE grant=WAIT_NEW timeoutMillis=15000 startTime=1287873175916 endTime=1287873194019
    Owners: [<LockInfo locker="27649147 -1_pool-6-thread-2_ThreadLocker" type="WRITE"/>]
    Waiters: [<LockInfo locker="3895581 -1_pool-6-thread-7_ThreadLocker" type="WRITE"/>, <LockInfo locker="8345933 -1_pool-6-thread-8_ThreadLocker" type="WRITE"/>, <LockInfo locker="12576013 -1_pool-6-thread-6_ThreadLocker" type="WRITE"/>, <LockInfo locker="5695501 -1_pool-6-thread-1_ThreadLocker" type="WRITE"/>]
    |#]
    [#|2010-10-23T15:33:23.334-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=64;_ThreadName=Thread-1;|addStatisticalSeconds Failed: (JE 4.0.104) Lock expired. Locker 8345933 -1_pool-6-thread-8_ThreadLocker: waited for lock on database=persist#Statistics#name.justinthomas.flower.analysis.statistics.StatisticalInterval LockAddr:11219992 node=17057517 type=WRITE grant=WAIT_NEW timeoutMillis=15000 startTime=1287873184851 endTime=1287873203333
    Owners: [<LockInfo locker="3895581 -1_pool-6-thread-7_ThreadLocker" type="WRITE"/>]
    Waiters: [<LockInfo locker="12576013 -1_pool-6-thread-6_ThreadLocker" type="WRITE"/>, <LockInfo locker="5695501 -1_pool-6-thread-1_ThreadLocker" type="WRITE"/>, <LockInfo locker="13327115 -1_pool-6-thread-4_ThreadLocker" type="READ"/>, <LockInfo locker="11939897 -1_pool-6-thread-5_ThreadLocker" type="READ"/>]
    |#]
    [#|2010-10-23T15:33:23.344-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=69;_ThreadName=Thread-1;|addStatisticalSeconds Failed: (JE 4.0.104) Lock expired. Locker 12576013 -1_pool-6-thread-6_ThreadLocker: waited for lock on database=persist#Statistics#name.justinthomas.flower.analysis.statistics.StatisticalInterval LockAddr:11219992 node=17057517 type=WRITE grant=WAIT_NEW timeoutMillis=15000 startTime=1287873184893 endTime=1287873203343
    Owners: [<LockInfo locker="3895581 -1_pool-6-thread-7_ThreadLocker" type="WRITE"/>]
    Waiters: [<LockInfo locker="5695501 -1_pool-6-thread-1_ThreadLocker" type="WRITE"/>, <LockInfo locker="13327115 -1_pool-6-thread-4_ThreadLocker" type="READ"/>, <LockInfo locker="11939897 -1_pool-6-thread-5_ThreadLocker" type="READ"/>]
    |#]
    Those errors go on and on and on, until I eventually see this:
    [#|2010-10-23T17:37:29.876-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=71;_ThreadName=Thread-1;|Exception in thread "ContainerBackgroundProcessor[StandardEngine[com.sun.appserv]]" |#]
    [#|2010-10-23T17:37:34.915-0700|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=23;_ThreadName=Thread-1;|In main loop, we have serious trouble: java.lang.OutOfMemoryError: Java heap space|#]
    [#|2010-10-23T17:37:56.516-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=66;_ThreadName=Thread-1;|Exception in thread "pool-6-thread-7" |#]
    [#|2010-10-23T17:39:11.060-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=23;_ThreadName=Thread-1;|Exception in thread "{felix.fileinstall.poll=5000, felix.fileinstall.bundles.new.start=true, service.pid=org.apache.felix.fileinstall.fd8877ce-71aa-41d2-8ddc-15ce996cde1b, felix.fileinstall.dir=/opt/glassfishv3/glassfish/domains/domain1/autodeploy/bundles/, felix.fileinstall.filename=org.apache.felix.fileinstall-autodeploy-bundles.cfg, service.factorypid=org.apache.felix.fileinstall, felix.fileinstall.debug=1}" |#]
    [#|2010-10-23T17:39:11.070-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=66;_ThreadName=Thread-1;|java.lang.OutOfMemoryError: Java heap space
         at java.util.IdentityHashMap.init(IdentityHashMap.java:261)
         at java.util.IdentityHashMap.<init>(IdentityHashMap.java:207)
         at com.sleepycat.je.utilint.IdentityHashMap.<init>(IdentityHashMap.java:25)
         at com.sleepycat.je.cleaner.LocalUtilizationTracker.<init>(LocalUtilizationTracker.java:39)
         at com.sleepycat.je.recovery.Checkpointer.flushDirtyNodes(Checkpointer.java:665)
         at com.sleepycat.je.recovery.Checkpointer.syncDatabase(Checkpointer.java:604)
         at com.sleepycat.je.dbi.DatabaseImpl.sync(DatabaseImpl.java:977)
         at com.sleepycat.je.dbi.DatabaseImpl.handleClosed(DatabaseImpl.java:863)
         at com.sleepycat.je.Database.closeInternal(Database.java:458)
         at com.sleepycat.je.Database.close(Database.java:314)
         at com.sleepycat.je.SecondaryDatabase.close(SecondaryDatabase.java:331)
         at com.sleepycat.persist.impl.Store.closeDb(Store.java:1454)
         at com.sleepycat.persist.impl.Store.close(Store.java:1059)
         at com.sleepycat.persist.EntityStore.close(EntityStore.java:630)
         at name.justinthomas.flower.analysis.persistence.FlowReceiver.addFlow(FlowReceiver.java:94)
         at name.justinthomas.flower.analysis.persistence.FlowReceiver.addFlow(FlowReceiver.java:65)
         at name.justinthomas.flower.collector.FlowWorker.parseData(FlowWorker.java:382)
         at name.justinthomas.flower.collector.FlowWorker.v9(FlowWorker.java:111)
         at name.justinthomas.flower.collector.FlowWorker.run(FlowWorker.java:61)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
         at java.lang.Thread.run(Thread.java:636)
    |#]
    [#|2010-10-23T17:39:11.124-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=23;_ThreadName=Thread-1;|java.lang.OutOfMemoryError: Java heap space
    |#]
    [#|2010-10-23T17:39:11.141-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=72;_ThreadName=Thread-1;|Exception in thread "pool-6-thread-3" |#]
    [#|2010-10-23T17:39:11.144-0700|SEVERE|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=72;_ThreadName=Thread-1;|java.lang.OutOfMemoryError: Java heap space
    |#]
    It's very frustrating, because things run so well at first and then just deteriorate into a resource nightmare. Any suggestions would be welcome. The application is running with 3 CPU cores and 2GB RAM.
    Edited by: justindthomas on Oct 23, 2010 7:19 PM: I initially tried to use the forum's "URL" mechanism, but that doesn't seem to work worth anything, so I un-did it.

    I've disabled that thread for now. While debugging that, I ran into a SecondaryIntegrityException and I read that I shouldn't use secondary indexes without also using transactions. I enabled transactional processing, but the locking issues grew far worse. I opted to find ways to not use secondary indexes instead.

    You're right that with secondaries it is important to use txns. But I'm not sure why you're having such severe locking problems with txns. Were you using a txn with a cursor, to perform a scan? If so, I can probably suggest ways of doing that without the txn, if you can describe what you're doing and/or point me to your source code. Or, perhaps you've decided not to use secondaries, and this isn't an issue anymore?
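    For reference, one way to do the kind of scan mark mentions without a transaction holding read locks is a read-uncommitted cursor. This is a hedged sketch against the Berkeley DB JE DPL API; the generic key/entity types are illustrative, not taken from the linked source:

    import com.sleepycat.je.CursorConfig;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.persist.EntityCursor;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.PrimaryIndex;

    public class ScanExample {
         /** Scan a primary index without acquiring long-lived read locks. */
         static <K, E> void scan(EntityStore store, Class<K> keyClass, Class<E> entityClass)
                   throws DatabaseException {
              PrimaryIndex<K, E> index = store.getPrimaryIndex(keyClass, entityClass);
              // Null txn + READ_UNCOMMITTED: the cursor neither blocks on nor
              // holds record locks, so writers are not stalled by the scan.
              EntityCursor<E> cursor = index.entities(null, CursorConfig.READ_UNCOMMITTED);
              try {
                   for (E entity : cursor) {
                        // process entity
                   }
              } finally {
                   cursor.close(); // close cursors promptly to release resources
              }
         }
    }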
    --mark

  • File locking with OSX Server & Microsoft Word

    We have a small office LAN based on an AirPort Extreme base station. Periodically the connection to the shared folder where we store documents breaks, and Microsoft Word reports it. When that happens and the user reconnects to the share, the user gets the file read-only message for the Word document that was open at the time.
    I have found that in order to clear the read-only flag, I have to restart the server and open the Word file on the server with the ID of the user who was editing the file on the remote computer. Only that seems to clear the file locking that causes the file to be read-only.
    I cannot figure out a less drastic way to release the lock file. I don't see any hidden temp files (via ls -a) in the same folder as the document that is locked for editing, or anything in the root directory of the shared folder under .TemporaryItems/folders.<UID> that seems to be holding the lock.
    I have run chflags nouchg <Word file> from Terminal, but that seems to have no effect either. I also used xattr to see if there were any locks held that way, but I don't see anything there either.
    File sharing is to Macs only, which are configured to use only AFP through OS X Server.
    Does anyone understand how the file locks on Word documents can be released directly, without restarting the server?
    I should add that the underlying problem seems to be WiFi related. The behavior is that the WiFi connection seems to break long enough that the shared volume disconnects. Outlook also causes a break. This behavior seems to have started with 10.9 and the purchase of new Retina Display MacBook Pros. I turned off App Nap on the Office applications, but that has not clearly helped. I have also been told the problem seems more likely to occur when Word is open in the background and another program like Outlook is in the foreground.
    Any suggestions appreciated.

    Apple writes an operating system and also produces file sharing software as part of it, to be used on a file server. Apple provides documentation for third-party software developers on how they should write software to work with Apple's software, and also gives those developers early access to new versions so they can test and, if needed, make adjustments and issue updates to cope with any Apple changes.
    Some third-party developers are good at dealing with this, some are bad, and some totally ignore what Apple does and give the impression they don't care whether their product works properly or not. I think we can all judge where Microsoft sits.
    It appears Microsoft has never paid any particular attention to how Apple expects file locking to be handled when accessing files on a Mac server. There have been problems for years and years with Office. Two other problems I have seen, which seem different from yours but are probably related, are -
    With Office documents it is supposed to be possible for more than one person to edit the same file at the same time; consider it a miracle if this actually works.
    With Office, in particular Word, there is an auto-save function. Unfortunately, the way this seems to be implemented, Word creates a new temporary file each time it auto-saves the document and keeps all the previous temporary files open. This eventually means potentially over a hundred temporary files are open for just one Word document, and you can then hit the limit on the total number of files you are allowed to have open at the same time. At that point further auto-saves fail, and you also encounter great difficulty doing a real full save of the document.
    I do not hold Apple completely blameless over this issue; it is likely their file-locking implementations change too often and have inadequate documentation. However, even considering this, a company the size of Microsoft, with the amount of sales (and profit) they derive from Office for the Mac, has no excuse at all for failing to put in the effort to resolve such clearly critical problems.
    We could go on and on about other areas where Microsoft doesn't play by the (software) rules. Even in Windows, Office does not obey the standard print dialog rules Microsoft specified themselves!
    Unfortunately, not only is Office for Mac upgraded infrequently, but even when new paid-for upgrades are released Microsoft has a history of still not fixing bugs. It goes without saying that a mere free update is even less likely to actually fix a bug; typically such free updates only address security issues. The next version of Office is going to be Office for Mac 2014; see http://www.macworld.com/article/2106643/microsoft-will-release-a-new-version-of-office-for-mac-this-year.html
    One area I confidently predict Microsoft will not resolve in Office for Mac 2014 is the fact that Word for Mac still does not support right-to-left languages like Hebrew and Arabic. This is despite the fact that OS X itself has supported this for years and years, and despite the fact that other Mac programs support it, including the free TextEdit and Pages - both of which can read Word files. Some people may remember that at one point the Israeli Government temporarily banned all Microsoft software over this issue. See http://apple-beta.slashdot.org/story/03/10/15/2215249/israeli-government-suspends-microsoft-contracts This issue goes back over TEN years!!
    I note that Microsoft has now taken their OfficeForMac blog offline, probably due to the weight of criticism. I would not say it is due to outright anti-Microsoft hate; that war ended long ago. We just want products that work. I myself do use Microsoft products, even at home, where they are the best solution; for example, I use Microsoft Media Center. Sadly, this is now being neglected by Microsoft.

  • Intermittent Lock Timeout Exceptions with JE 5.0.58

    Hi,
    During tests, the system continues to experience lock timeout problems from time to time, even with the latest version, 5.0.58. The collection contains data which is typically removed shortly after it is inserted, so records usually have a short lifetime. There are multiple writer threads and one delete/reader thread.
    Any tips? Note the timeout is already a healthy 3 minutes long, so I don't think that's the problem. Stack trace and details below.
    Maybe the best thing to do is to reduce the lock timeout in order not to block the data pipeline, if a simple retry fixes it?
    Thanks,
    Karl
    ERROR@00:08:03 com.sleepycat.je.LockTimeoutException: (JE 5.0.58) Lock expired. Locker 879535510 -1_dbreader--Thread-5_ThreadLocker: waited for lock on database=persist#ListenerStore#com.procon.data.msg.local.BerkeleyDBMessage LockAddr:
    892328566 LSN=0xffffffff/0x5882af type=READ grant=WAIT_NEW timeoutMillis=180000 startTime=1344323103818 endTime=1344323283850
    Owners: [<LockInfo locker="1242902705 -1_CAL-dbwriter--Thread-18_ThreadLocker" type="WRITE"/>]
    Waiters: []
    com.sleepycat.je.txn.LockManager.newLockTimeoutException(LockManager.java:664)
    com.sleepycat.je.txn.LockManager.makeTimeoutMsgInternal(LockManager.java:623)
    com.sleepycat.je.txn.SyncedLockManager.makeTimeoutMsg(SyncedLockManager.java:97)
    com.sleepycat.je.txn.LockManager.lockInternal(LockManager.java:390)
    com.sleepycat.je.txn.LockManager.lock(LockManager.java:276)
    com.sleepycat.je.txn.BasicLocker.lockInternal(BasicLocker.java:118)
    com.sleepycat.je.txn.Locker.lock(Locker.java:443)
    com.sleepycat.je.dbi.CursorImpl.lockLN(CursorImpl.java:2589)
    com.sleepycat.je.dbi.CursorImpl.lockLN(CursorImpl.java:2390)
    com.sleepycat.je.dbi.CursorImpl.searchAndPosition(CursorImpl.java:2118)
    com.sleepycat.je.Cursor.searchInternal(Cursor.java:2666)
    com.sleepycat.je.Cursor.searchAllowPhantoms(Cursor.java:2576)
    com.sleepycat.je.Cursor.searchNoDups(Cursor.java:2430)
    com.sleepycat.je.Cursor.search(Cursor.java:2397)
    com.sleepycat.je.Cursor.readPrimaryAfterGet(Cursor.java:3703)
    com.sleepycat.je.SecondaryCursor.readPrimaryAfterGet(SecondaryCursor.java:1470)
    com.sleepycat.je.SecondaryCursor.retrieveNext(SecondaryCursor.java:1448)
    com.sleepycat.je.SecondaryCursor.getNext(SecondaryCursor.java:560)
    com.sleepycat.util.keyrange.RangeCursor.doGetNext(RangeCursor.java:897)
    com.sleepycat.util.keyrange.RangeCursor.getNext(RangeCursor.java:451)
    com.sleepycat.persist.BasicCursor.next(BasicCursor.java:80)
    com.sleepycat.persist.BasicIterator.hasNext(BasicIterator.java:49)
    com.procon.data.msg.local.BerkeleyDBMessageStore.queryMessages(BerkeleyDBMessageStore.java:498)
    com.procon.listener.DatabaseReader.runMain(DatabaseReader.java:161)
    com.procon.base.BasicRunnable.run(BasicRunnable.java:274)
    java.lang.Thread.run(Thread.java:662)

    Karl,
    With a 3 minute timeout, you may have a true deadlock (which is sometimes expected) or a bug in your app where you neglect to close a cursor or end a transaction. Increasing the lock timeout is usually not the solution, but the first step is to diagnose what is causing it, and you haven't given us enough information to do that. You haven't said what isolation modes you're using, what transactions are active, how many records are involved in each transaction, etc.
    I suggest that you read:
    http://docs.oracle.com/cd/E17277_02/html/TransactionGettingStarted/index.html
    in particular the Concurrency chapter.
    Also see:
    http://www.oracle.com/technetwork/database/berkeleydb/je-faq-096044.html#HowdoIdebugalocktimeout
    for how to get more information about this particular lock conflict.
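    As an aside on the retry idea raised above: if a retry does turn out to be appropriate, the usual JE pattern is to abort the transaction and retry on LockConflictException (the JE 4+ parent of LockTimeoutException and DeadlockException). A hedged sketch, not code from this thread:

    import com.sleepycat.je.Environment;
    import com.sleepycat.je.LockConflictException;
    import com.sleepycat.je.Transaction;

    public class RetryExample {
         interface TxnWork {
              void run(Transaction txn);
         }

         static final int MAX_RETRIES = 3;

         /** Run a unit of work, aborting and retrying on lock conflicts. */
         static void runWithRetry(Environment env, TxnWork work) {
              for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
                   Transaction txn = env.beginTransaction(null, null);
                   boolean committed = false;
                   try {
                        work.run(txn);     // perform reads/writes with this txn
                        txn.commit();
                        committed = true;
                        return;
                   } catch (LockConflictException e) {
                        if (attempt == MAX_RETRIES) {
                             throw e;      // give up after the last attempt
                        }
                   } finally {
                        if (!committed) {
                             txn.abort();  // release this txn's locks before retrying
                        }
                   }
              }
         }
    }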
    --mark

  • Field in data file exceeds maximum length

    Hi,
    I am trying to run the following SQL*Loader control job on my Oracle 11gR2 database. Running the SQL*Loader control job results in the ‘Field in data file exceeds maximum length’ error message. Below, I am listing the control file. Please suggest. Thanks.
    It's giving me an error when I run SQL*Loader on it:
    Record 61: Rejected - Error on table RMS_TABLE, column GEOM.SDO_POINT.X.
    Field in data file exceeds maximum length.
    Here is my SQL Loader Control file,
    LOAD DATA
    INFILE *
    TRUNCATE
    CONTINUEIF NEXT(1:1) = '#'
    INTO TABLE RMS_TABLE
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS (
       Status NULLIF Status = BLANKS,
       Score,
       Match_type NULLIF Match_type = BLANKS,
       Match_addr NULLIF Match_addr = BLANKS,
       Side NULLIF Side = BLANKS,
       User_fld NULLIF User_fld = BLANKS,
       Addr_type NULLIF Addr_type = BLANKS,
       ARC_Street NULLIF ARC_Street = BLANKS,
       ARC_City NULLIF ARC_City = BLANKS,
       ARC_State NULLIF ARC_State = BLANKS,
       ARC_ZIP NULLIF ARC_ZIP = BLANKS,
       INCIDENT_N NULLIF INCIDENT_N = BLANKS,
       CDATE NULLIF CDATE = BLANKS,
       CTIME NULLIF CTIME = BLANKS,
       DISTRICT NULLIF DISTRICT = BLANKS,
       LOCATION NULLIF LOCATION = BLANKS,
       MAPLOCATIO NULLIF MAPLOCATIO = BLANKS,
       LOCATION_T NULLIF LOCATION_T = BLANKS,
       DAYCODE NULLIF DAYCODE = BLANKS,
       CAUSE NULLIF CAUSE = BLANKS,
       GEOM COLUMN OBJECT
         (SDO_GTYPE       INTEGER EXTERNAL,
          SDO_POINT COLUMN OBJECT
            (X            FLOAT EXTERNAL,
             Y            FLOAT EXTERNAL))
    )
    CREATE TABLE RMS_TABLE (
      Status VARCHAR2(1),
      Score NUMBER,
      Match_type VARCHAR2(2),
      Match_addr VARCHAR2(120),
      Side VARCHAR2(1),
      User_fld VARCHAR2(120),
      Addr_type VARCHAR2(20),
      ARC_Street VARCHAR2(100),
      ARC_City VARCHAR2(40),
      ARC_State VARCHAR2(20),
      ARC_ZIP VARCHAR2(10),
      INCIDENT_N VARCHAR2(9),
      CDATE VARCHAR2(10),
      CTIME VARCHAR2(8),
      DISTRICT VARCHAR2(4),
      LOCATION VARCHAR2(128),
      MAPLOCATIO VARCHAR2(100),
      LOCATION_T VARCHAR2(42),
      DAYCODE VARCHAR2(1),
      CAUSE VARCHAR2(17),
      GEOM MDSYS.SDO_GEOMETRY);

    Hi,
    Looks like you have a problem with record 61 in your data file. Can you please post it in a reply?
    Regards
    Ivan

  • Office 2010 & 2007 - Excel and Access File Locking Out On the Network With Multiple Users

    This is also posted in the Office 2010 - IT Pro General Discussions forum, but it was suggested that I repost here, since a definitive answer was not found.
    Hi,
    An issue that's happening is that Excel and Access files are locking on the network. We're currently using Office 2007 and 2010.
    Here are some different scenarios that are happening:
    When opening the file, it is locked out by “User X”, which is the person that has the file locked out, and no one else can open the file.
    When opening the file, it is locked out by “User Y”, which is NOT the actual person (it is locked out by “User X”), and no one else can access the file.
    When opening the file, it is locked out by “…another user”, which is generic, and no one else can access the file.
    The two more common events are incidents 1 and 2, with 3 happening less commonly.
    This message will continue until the sessions are closed through computer management on the file server.
    The file server is running Windows Server 2003.
    This does happen on both Windows XP and Windows 7 clients.
    This does happen for users using Office 2007 and Office 2010.
    There are two sets of Office 2010 users when it comes to patches. Everyone has the most current patches with Office 2010 SP2, while anyone that has Microsoft Project 2010 is using all the current updates prior to Office 2010 SP2.
    All users that are using Office 2007 have all the current patches and service packs.
    Another variable is that we have users that will leave a file open on the network for 3+ days, and after a while it will lock the file out.
    Also, we have Shadow Copy running daily on the system, which I'm not sure impacts anything if a file is open during that time.
    Any ideas on how to mitigate the lock out issues would be appreciated.
    Thanks,
    Binary Process
    Edit November 12, 2013: This issue can occur whether or not another person actually has the file open. If the person doesn't have the file open, then there is a hung connection which needs to be disconnected through Computer Management on the file server.

    Hi Binary,
    I know that the description of the hotfix does not relate to the issue. The purpose of installing it is to upgrade the SMB-related files.
    A similar issue I encountered before:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/b7fcc59b-52d9-4a02-863a-1a529bcb8cb1/temp-doc-etc-files-dont-close-after-a-file-closes-this-causes-locked-files?forum=winserverfiles
    It was resolved by upgrading the SMB files, so maybe it will help in your case.
    Another hotfix which may be related:
    http://support.microsoft.com/kb/983458
    If you have any feedback on our support, please send to [email protected]

  • File locked message when saving a file with Photoshop

    We've been seeing a problem with several users that is hard to reproduce. Every once in a while, when they try to save a file in Photoshop on the server, they get a dialogue saying 'the file is locked, use Finder info to unlock the file'.
    This file is located on the server, and the problem only happens when doing a 'Save', not with a 'Save as'. The server is running 10.5.5 and the clients are 10.4.11. This is happening with Photoshop versions CS1, 2 and 3. It also happened with Illustrator CS3 once.
    We've been experimenting with different scenarios using different ACLs and POSIX settings. Also, when the client reboots their machine, the exact same 'Save' action works again.

    I've been receiving complaints about a similar issue over the past few weeks, except with Quark and Excel. I wonder if it's related to the 10.5.5 update. Are you sure you didn't get the version numbers reversed in your post? We're having the issue with 10.4.11 SERVER and 10.5.5 CLIENTS.
    It's a file locking issue, so you probably won't fix it by changing the ACLs or POSIX permissions. The server thinks a different process still has the file open, so it's preventing you from changing the file. "Save As" works because it creates a new file.
    You can probably eject & reconnect the volume from the client instead of fully rebooting.
    The question is what's keeping the file open. You can run:
    sudo lsof | grep <filename>
    ... in Terminal to see whether a given Mac has the file open. If the server comes back with a response to that command (don't worry what it says - you just want to see if anything comes back), then you know that the file is locked.
    Next, go to each client that could have the file open and run the same command. If none of them have it open, you're probably looking at the same glitch that I've been seeing. The server thinks a file is open but none of the clients do.

  • File locked, intern was using (file sharing). odd?

    An intern was working on my Excel file via file sharing.
    Sometimes, when I just go to open it, it says: file locked - Julie working on it.... OK.
    But what's odd is that she left the office, closed, saved, and quit, and I'm still getting this on the file.

    Check the privileges.
    Other resources
    Mac 101: File sharing
    Mac OS X: File can't be moved if locked
    Unable to move, unlock, modify, or copy an item in Mac OS X

  • Flat file with fixed lengths to XI 3.0 using a Central File Adapter---Error

    Hi
    According to the following link
    /people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter
    In the Adapter Monitor I got the following error in the sender adapter:
    Last message processing started 23:47:35 2008-10-25, Error: Conversion of complete file content to XML format failed around position 0 with java.lang.Exception: ERROR converting document line no. 1 according to structure 'Substr':java.lang.Exception: Consistency error: field(s) missing - specify 'lastFieldsOptional' parameter to allow this
      last retry interval started 23:47:35 2008-10-25
      length 15,000 secs
    Can someone help me out?
    Thanks
    Ram

    From the blog you referenced -
    /people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter
    go to step 4,
    additional parameters,
    and add as the last entry:
    <recordset structure>.lastFieldsOptional            Yes
    e.g.,
      Substr.lastFieldsOptional            Yes

  • On load, getting error:  Field in data file exceeds maximum length

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0    Production
    TNS for Solaris: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    I'm trying to load a table, small in size (110 rows, 6 columns).  One of the columns, called NOTES, errors when I run the load.  It says that the column size exceeds the max limit.  As you can see here, the table column is set to 4000 bytes:
    CREATE TABLE NRIS.NRN_REPORT_NOTES (
      NOTES_CN      VARCHAR2(40 BYTE)               DEFAULT sys_guid()            NOT NULL,
      REPORT_GROUP  VARCHAR2(100 BYTE)              NOT NULL,
      AREACODE      VARCHAR2(50 BYTE)               NOT NULL,
      ROUND         NUMBER(3)                       NOT NULL,
      NOTES         VARCHAR2(4000 BYTE),
      LAST_UPDATE   TIMESTAMP(6) WITH TIME ZONE     DEFAULT systimestamp          NOT NULL
    )
    TABLESPACE USERS
    RESULT_CACHE (MODE DEFAULT)
    PCTUSED    0
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    STORAGE    (
                INITIAL          80K
                NEXT             1M
                MINEXTENTS       1
                MAXEXTENTS       UNLIMITED
                PCTINCREASE      0
                BUFFER_POOL      DEFAULT
                FLASH_CACHE      DEFAULT
                CELL_FLASH_CACHE DEFAULT
               )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    I did a little investigating, and it doesn't add up.
    When I run
    select max(lengthb(notes)) from NRIS.NRN_REPORT_NOTES
    I get a return of
    643
    That tells me that the largest instance of that column is only 643 bytes.  But EVERY insert is failing.
    Here is the loader file header, and first couple of inserts:
    LOAD DATA
    INFILE *
    BADFILE './NRIS.NRN_REPORT_NOTES.BAD'
    DISCARDFILE './NRIS.NRN_REPORT_NOTES.DSC'
    APPEND INTO TABLE NRIS.NRN_REPORT_NOTES
    Fields terminated by ";" Optionally enclosed by '|'
    (
      NOTES_CN,
      REPORT_GROUP,
      AREACODE,
      ROUND NULLIF (ROUND="NULL"),
      NOTES,
      LAST_UPDATE TIMESTAMP WITH TIME ZONE "MM/DD/YYYY HH24:MI:SS.FF9 TZR" NULLIF (LAST_UPDATE="NULL")
    )
    BEGINDATA
    |E2ACF256F01F46A7E0440003BA0F14C2|;|DEMOGRAPHICS|;|A01003|;3;|Demographic results show that 46 percent of visits are made by females.  Among racial and ethnic minorities, the most commonly encountered are Native American (4%) and Hispanic / Latino (2%).  The age distribution shows that the Bitterroot has a relatively small proportion of children under age 16 (14%) in the visiting population.  People over the age of 60 account for about 22% of visits.   Most of the visitation is from the local area.  More than 85% of visits come from people who live within 50 miles.|;07/29/2013 16:09:27.000000000 -06:00
    |E2ACF256F02046A7E0440003BA0F14C2|;|VISIT DESCRIPTION|;|A01003|;3;|Most visits to the Bitterroot are fairly short.  Over half of the visits last less than 3 hours.  The median length of visit to overnight sites is about 43 hours, or about 2 days.  The average Wilderness visit lasts only about 6 hours, although more than half of those visits are shorter than 3 hours long.   Most visits come from people who are fairly frequent visitors.  Over thirty percent are made by people who visit between 40 and 100 times per year.  Another 8 percent of visits are from people who report visiting more than 100 times per year.|;07/29/2013 16:09:27.000000000 -06:00
    |E2ACF256F02146A7E0440003BA0F14C2|;|ACTIVITIES|;|A01003|;3;|The most frequently reported primary activity is hiking/walking (42%), followed by downhill skiing (12%), and hunting (8%).  Over half of the visits report participating in relaxing and viewing scenery.|;07/29/2013 16:09:27.000000000 -06:00
    Here is the full beginning of the loader log, ending after the first row's rejection.  (They ALL show the same error.)
    SQL*Loader: Release 10.2.0.4.0 - Production on Thu Aug 22 12:09:07 2013
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Control File:   NRIS.NRN_REPORT_NOTES.ctl
    Data File:      NRIS.NRN_REPORT_NOTES.ctl
      Bad File:     ./NRIS.NRN_REPORT_NOTES.BAD
      Discard File: ./NRIS.NRN_REPORT_NOTES.DSC
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 256000 bytes
    Continuation:    none specified
    Path used:      Conventional
    Table NRIS.NRN_REPORT_NOTES, loaded from every logical record.
    Insert option in effect for this table: APPEND
       Column Name                  Position   Len  Term Encl Datatype
    NOTES_CN                            FIRST     *   ;  O(|) CHARACTER
    REPORT_GROUP                         NEXT     *   ;  O(|) CHARACTER
    AREACODE                             NEXT     *   ;  O(|) CHARACTER
    ROUND                                NEXT     *   ;  O(|) CHARACTER
        NULL if ROUND = 0X4e554c4c(character 'NULL')
    NOTES                                NEXT     *   ;  O(|) CHARACTER
    LAST_UPDATE                          NEXT     *   ;  O(|) DATETIME MM/DD/YYYY HH24:MI:SS.FF9 TZR
        NULL if LAST_UPDATE = 0X4e554c4c(character 'NULL')
    Record 1: Rejected - Error on table NRIS.NRN_REPORT_NOTES, column NOTES.
    Field in data file exceeds maximum length...
    I am not seeing why this would be failing.

    Hi,
    The problem is that delimited data defaults to CHAR(255)..... very helpful, I know.....
    What you need to do is tell sqlldr that the data is longer than this,
    so change NOTES to NOTES CHAR(4000) in your control file and it should work.
    cheers,
    harry
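    Applying harry's suggestion to the control file quoted above, the field list would read (only the NOTES entry changes):
      NOTES_CN,
      REPORT_GROUP,
      AREACODE,
      ROUND NULLIF (ROUND="NULL"),
      NOTES CHAR(4000),
      LAST_UPDATE TIMESTAMP WITH TIME ZONE "MM/DD/YYYY HH24:MI:SS.FF9 TZR" NULLIF (LAST_UPDATE="NULL")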
