ThreadInterruptedException in Environment.sync JE 4.1.10

After a series of updates, my code calls Environment.sync and intermittently gets a ThreadInterruptedException. I have done some searches but have not found a similar problem. I would appreciate any suggestions on what may be causing this or how to debug it further. Running JE 4.1.10.
Caused by: com.sleepycat.je.ThreadInterruptedException: (JE 4.1.10) /home/atpdata/arbmaps/intl Channel closed, may be due to thread interrupt THREAD_INTERRUPTED: InterruptedException may cause incorrect internal state, unable to continue. Environment is invalid and must be closed.
at com.sleepycat.je.log.FileManager$LogEndFileDescriptor.force(FileManager.java:2720)
at com.sleepycat.je.log.FileManager$LogEndFileDescriptor.access$500(FileManager.java:2390)
at com.sleepycat.je.log.FileManager.syncLogEnd(FileManager.java:1713)
at com.sleepycat.je.log.FSyncManager.executeFSync(FSyncManager.java:275)
at com.sleepycat.je.log.FSyncManager.fsync(FSyncManager.java:226)
at com.sleepycat.je.log.FileManager.groupSync(FileManager.java:1740)
at com.sleepycat.je.log.LogManager.multiLog(LogManager.java:427)
at com.sleepycat.je.log.LogManager.log(LogManager.java:334)
at com.sleepycat.je.log.LogManager.log(LogManager.java:323)
at com.sleepycat.je.log.LogManager.logForceFlush(LogManager.java:176)
at com.sleepycat.je.recovery.Checkpointer.doCheckpoint(Checkpointer.java:777)
at com.sleepycat.je.dbi.EnvironmentImpl.invokeCheckpoint(EnvironmentImpl.java:1832)
at com.sleepycat.je.Environment.sync(Environment.java:1473)
at com.farecompare.atpcore.atparbitrarylocationmapmodule.impl.berkeleydb.components.storage.ArbitraryMapStorageImpl.endFeed(ArbitraryMapStorageImpl.java:168)
at com.farecompare.atpcore.atparbitrarylocationmapmodule.impl.berkeleydb.components.feedprocessing.FeedProcessingControllerImpl.endFeed(FeedProcessingControllerImpl.java:54)
at com.farecompare.atpcore.atparbitrarylocationmapmodule.impl.berkeleydb.components.feedprocessing.FeedProcessingEventHandler.processEndFeed(FeedProcessingEventHandler.java:115)
at com.farecompare.atpcore.atparbitrarylocationmapmodule.impl.berkeleydb.components.feedprocessing.FeedProcessingEventHandler.notify(FeedProcessingEventHandler.java:73)
at com.farecompare.atpcore.atparbitrarylocationmapmodule.impl.berkeleydb.BerkeleyDbArbitraryMapModule.notify(BerkeleyDbArbitraryMapModule.java:94)
at com.farecompare.atpcore.feedprocessingmodule.impl.components.notifier.FeedProcessingEventNotifier.notifyListeners(FeedProcessingEventNotifier.java:65)
... 12 more
Caused by: java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:352)
at com.sleepycat.je.log.FileManager$LogEndFileDescriptor.force(FileManager.java:2711)
... 30 more

Interestingly, I have now also seen the exception during a put. I am not sure where the thread interrupt is coming from. Does anyone have insight into the java.nio.channels.ClosedChannelException? Looking at the EnvironmentConfig class, there are some properties that mention NIO, but they are all deprecated with the comment "NIO is no longer used by JE and this parameter has no effect." Why would I be getting a java.nio exception?
StackTrace:
com.sleepycat.je.ThreadInterruptedException: (JE 4.1.10) Environment must be closed, caused by: com.sleepycat.je.ThreadInterruptedException: Environment invalid because of previous exception: (JE 4.1.10) /home/atpdata/storageengine/intl/fares Channel closed, may be due to thread interrupt THREAD_INTERRUPTED: InterruptedException may cause incorrect internal state, unable to continue. Environment is invalid and must be closed.
at com.sleepycat.je.ThreadInterruptedException.wrapSelf(ThreadInterruptedException.java:91)
at com.sleepycat.je.dbi.EnvironmentImpl.checkIfInvalid(EnvironmentImpl.java:1455)
at com.sleepycat.je.Database.checkEnv(Database.java:1778)
at com.sleepycat.je.Database.put(Database.java:1050)
at com.farecompare.atpcore.storageenginemodule.impl.berkeleydb.components.ControlDataStore.setFeedSubTimestamp(ControlDataStore.java:141)
at com.farecompare.atpcore.storageenginemodule.impl.berkeleydb.components.FeedProcessingController.updateFeedTimes(FeedProcessingController.java:274)
at com.farecompare.atpcore.storageenginemodule.impl.berkeleydb.components.FeedProcessingController.endChangeFeedProcessing(FeedProcessingController.java:245)
at com.farecompare.atpcore.storageenginemodule.impl.berkeleydb.components.FeedProcessingController.endFeedProcessing(FeedProcessingController.java:108)
at com.farecompare.atpcore.storageenginemodule.impl.berkeleydb.components.DbEnvironment.endFeedProcessing(DbEnvironment.java:407)
at com.farecompare.atpcore.storageenginemodule.impl.berkeleydb.BerkeleyDbStorageEngineModule.internalEndFeedProcessing(BerkeleyDbStorageEngineModule.java:280)
at com.farecompare.atpcore.storageenginemodule.impl.AbstractStorageEngineModule.endFeedProcessing(AbstractStorageEngineModule.java:247)
at com.farecompare.atpcore.storageenginemodule.impl.composite.CompositeStorageEngineModule.endFeedProcessing(CompositeStorageEngineModule.java:205)
at com.farecompare.atpcore.feedprocessingmodule.impl.FeedProcessingModule.notifyStorageEngineEndFeedProcessing(FeedProcessingModule.java:483)
at com.farecompare.atpcore.feedprocessingmodule.impl.FeedProcessingModule.endFeedProcessing(FeedProcessingModule.java:468)
at com.farecompare.atpcore.feedprocessingmodule.impl.FeedProcessingModule.performFeedProcessing(FeedProcessingModule.java:415)
at com.farecompare.atpcore.feedprocessingmodule.impl.FeedProcessingModule.processSubTypes(FeedProcessingModule.java:337)
at com.farecompare.atpcore.feedprocessingmodule.impl.FeedProcessingModule.notify(FeedProcessingModule.java:311)
at com.farecompare.atpcore.feedfilesetmodule.impl.FeedFileSetModule.performNotify(FeedFileSetModule.java:678)
at com.farecompare.atpcore.feedfilesetmodule.impl.FeedFileSetModule.performCopyAndNotify(FeedFileSetModule.java:616)
at com.farecompare.atpcore.feedfilesetmodule.impl.FeedFileSetModule.access$000(FeedFileSetModule.java:66)
at com.farecompare.atpcore.feedfilesetmodule.impl.FeedFileSetModule$1.performTask(FeedFileSetModule.java:570)
at com.farecompare.atpcore.feedfilesetmodule.impl.FeedFileSetModule$FeedFileSetTask.run(FeedFileSetModule.java:1152)
at java.lang.Thread.run(Thread.java:662)
Caused by: com.sleepycat.je.ThreadInterruptedException: Environment invalid because of previous exception: (JE 4.1.10) /home/atpdata/storageengine/intl/fares Channel closed, may be due to thread interrupt THREAD_INTERRUPTED: InterruptedException may cause incorrect internal state, unable to continue. Environment is invalid and must be closed.
at com.sleepycat.je.log.FileManager$LogEndFileDescriptor.force(FileManager.java:2720)
at com.sleepycat.je.log.FileManager$LogEndFileDescriptor.access$500(FileManager.java:2390)
at com.sleepycat.je.log.FileManager.syncLogEnd(FileManager.java:1713)
at com.sleepycat.je.log.FSyncManager.executeFSync(FSyncManager.java:275)
at com.sleepycat.je.log.FSyncManager.fsync(FSyncManager.java:226)
at com.sleepycat.je.log.FileManager.groupSync(FileManager.java:1740)
at com.sleepycat.je.log.LogManager.multiLog(LogManager.java:427)
at com.sleepycat.je.log.LogManager.log(LogManager.java:334)
at com.sleepycat.je.log.LogManager.log(LogManager.java:323)
at com.sleepycat.je.log.LogManager.logForceFlush(LogManager.java:176)
at com.sleepycat.je.recovery.Checkpointer.doCheckpoint(Checkpointer.java:777)
at com.sleepycat.je.recovery.Checkpointer.onWakeup(Checkpointer.java:507)
at com.sleepycat.je.utilint.DaemonThread.run(DaemonThread.java:162)
... 1 more
Caused by: java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:339)
at com.sleepycat.je.log.FileManager$LogEndFileDescriptor.force(FileManager.java:2711)
... 13 more
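For context, the java.nio exception has nothing to do with JE's old, deprecated NIO options. The stack trace shows the fsync going through sun.nio.ch.FileChannelImpl.force, and FileChannel is an interruptible channel: interrupting any thread that is in (or entering) write()/force() closes the channel and throws ClosedByInterruptException. This can be reproduced outside JE entirely; the file and class names below are illustrative only:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Reproduces the failure mode outside JE: interrupting a thread while it is
// doing FileChannel I/O closes the channel and raises
// ClosedByInterruptException. JE sees that during fsync, wraps it as
// ThreadInterruptedException, and invalidates the environment.
public class InterruptedFsyncDemo {

    static String interruptedWriteResult() throws Exception {
        Path file = Files.createTempFile("je-demo", ".log");
        final Throwable[] seen = new Throwable[1];
        Thread writer = new Thread(() -> {
            try (FileChannel ch =
                     FileChannel.open(file, StandardOpenOption.WRITE)) {
                ByteBuffer buf = ByteBuffer.allocate(1024);
                while (true) {              // write/fsync until interrupted
                    buf.rewind();
                    ch.write(buf);
                    ch.force(true);         // the call in JE's stack trace
                }
            } catch (IOException e) {
                seen[0] = e;                // ClosedByInterruptException lands here
            }
        });
        writer.start();
        Thread.sleep(200);
        writer.interrupt();                 // whatever interrupts your thread does this
        writer.join();
        Files.deleteIfExists(file);
        return seen[0].getClass().getSimpleName();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(interruptedWriteResult());
    }
}
```

So the thing to hunt for is whatever calls Thread.interrupt on a thread that touches JE, e.g. ExecutorService.shutdownNow, Future.cancel(true), or an application container cancelling tasks.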
Edited by: user4173612 on Feb 15, 2012 6:27 AM

Similar Messages

  • Berkeley DB JE hangs while calling the Environment.sync method

    Hi,
    I am developing an application in Java using the JE edition of Berkeley DB. I am facing an issue: during the shutdown operation I call the sync methods of the EntityStore and Environment classes.
    EntityStore.sync() completes, but control never comes out of the Environment.sync() call. Below is the code I use to perform the sync operation.
    if (myEnv != null) {
        try {
            entityStore.sync(); // sync the EntityStore
            log.error("Finished Syncing of Entity Store");
            myEnv.sync(); // sync the Environment; control never returns from this call
            log.error("Finished Syncing of Environment");
        } catch (DatabaseException dbe) {
            log.error("DataBase exception during close operation ", dbe);
        } catch (IllegalStateException ie) {
            logger.fatal("Cursor not closed: ", ie);
        }
    }
    During my unit testing I was changing the system date and performing some db operations. While shutting down the application, the above code gets executed, but the system hangs and control never comes out of the Environment.sync call. Can someone tell me why sync causes the system to hang?
    Thanks in advance.

    Hello,
    You did not mention the version of BDB JE. In any case for BDB JE questions the correct forum is at:
    Berkeley DB Java Edition
    Thanks,
    Sandra

  • Dev and Prod Environment Sync.

    Hello Experts,
    I would like to ask for your kind help in the following matter: how to resync the dev and production environments.
    For some (inexplicable) reason, the production and development servers are out of sync. Mainly, the SLD content is at different versions; the production one is older. And there is no QA environment.
    Besides that, the business systems were created manually on each server and have inconsistencies. No SLD information was transported between them, only the repository and some directory information. Other directory information was created directly in the production environment, and I am afraid some development took place directly on the production server too.
    So I have been asked to produce a plan to get all this mess into shape. So far, at a macro level, I have these tasks planned:
    10.- Backup prod and dev servers.
    20.- Load latest CIM model and CR content on both servers (and see what happens).
    30.- Connect the SAP systems as SLD data sources to prod SLD.
    40.- Redo configuration in prod to point to automatically created business systems (and retest).
    50.- Cleanup of manually created business systems the prod environment.
    60.- Config the prod SLD as data source for dev SLD.
    70.- Redo configuration in dev server to make it match prod SLD.
    80.- Cleanup of manually created business systems the dev environment.
    90.- Create in the dev server the repository objects that were directly created in prod.
    100.- Test everything for consistency.
    Please help me improve the task list/order.
    -Sam.

    Hi Sam
    You are planning a lot of activity on a production environment. Two comments:
    1. When you have your dev SLD configured as you require, with the CIM model, PI Content, and Business Systems updated, etc., perhaps you should do a level of regression testing on this environment before embarking on changing the live env.
    2. Once you are happy with the dev env, it is possible to export and import the SLD content manually from dev to live to align both systems; refer to this link:
    https://websmp104.sap-ag.de/~sapidb/011000358700000315022005E.PDF
    This is available through the Export link on the Administration page.
    Thanks
    Damien

  • Service Manager 2010 lab environment syncing with production database

    I am trying to setup a lab environment for SCSM 2010 to test the upgrade to SCSM 2012. I have been following the instructions found here:
    http://technet.microsoft.com/en-us/library/hh914226.aspx
    I have successfully made it to step 16. However, when I open the SCSM console in the lab environment, it still shows the Data Warehouse and Reporting buttons. When I create a new ticket in either the lab or production environment, it shows up in both consoles, even though the production console is connected to the production server and the lab console is connected to the lab server.
    Any ideas on why the lab environment is still syncing with the production server?
    Thanks

    Thread of the Necro-Dancer:
    Regardless, the step Katie was apparently implying (but which seems to be missing from her description) is the bit where you back up the database and restore it to a separate instance. After this is done, you can install a new management server targeting this new instance, promote the new management server to the workflow server, and run the isolated database using the new management server.
    I would recommend, however, that you use the supported method of producing an upgrade lab with production data, which is very similar to the method Katie implied, includes directions covering all of the steps involved, and allows you to periodically restore production data to the lab database with minimal overhead.
    I didn't notice the date before now. I just look at the latest unanswered posts, so I'm not sure how I got into this one. TechNet has been acting weird lately; I get an internal server error far too often.
    http://codebeaver.blogspot.dk/

  • How can I close BDB  Environment and EntityStore in RMI Application

    Hi,all.
    I create only one instance of the Environment object and one instance of the EntityStore object when the RMI server is first started, like this:
    public class CalculatorServer {
        protected Environment env;
        protected EntityStore store;

        public CalculatorServer() {
            try {
                initBDB();
                String hostkey = "java.rmi.server.hostname";
                String rmisHost = "127.0.0.1";
                int port = 30000;
                System.setProperty(hostkey, rmisHost);
                Registry reg = LocateRegistry.createRegistry(port);
                reg.bind("/service/testrmi", new CalculatorImpl().initObject());
            } catch (RemoteException e) {
                e.printStackTrace();
            } catch (AlreadyBoundException e) {
                e.printStackTrace();
            }
        }

        public void initBDB() {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            /* configure transactions */
            envConfig.setTransactional(false);
            envConfig.setLockTimeout(5, TimeUnit.SECONDS);
            env = new com.sleepycat.je.Environment(new File("test1"), envConfig);

            StoreConfig storeConfig = new StoreConfig();
            storeConfig.setAllowCreate(true);
            /* configure transactions */
            storeConfig.setTransactional(false);
            /* enable deferred write */
            storeConfig.setDeferredWrite(true);
            store = new EntityStore(env, "entityStore", storeConfig);
        }

        public static void main(String[] args) {
            new CalculatorServer();
        }
    }
    1) In this approach, do I need to close the Environment and EntityStore?
    thanks

    The code you posted is not the BDB JE code you're using, so it doesn't help to diagnose the problem you're having. Please post your EnvironmentConfig and StoreConfig setup code, and your code that uses transactions, if any.
    If your store is not transactional (StoreConfig.setTransactional(false)), the data is not guaranteed to be durable/persistent until you call Environment.sync or close the Environment cleanly. If your store is transactional, durability is controlled by the TransactionConfig you use, or by the Durability parameter of the Transaction.commit method. The default durability is set using EnvironmentMutableConfig.setDurability.
    Be sure to close the EntityStore and Environment cleanly when your program exits, as I mentioned earlier.
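    The clean-close discipline can be sketched with stand-in classes (the real com.sleepycat types are not assumed to be on the classpath here); the point is the order, store before environment, and the try/finally so the environment is closed even if the store close throws:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the close order described above, with stand-ins: Store plays
    // the role of EntityStore (whose close() syncs deferred-write data) and
    // Env plays the role of Environment (whose close() checkpoints).
    public class CloseOrderDemo {
        static final List<String> closed = new ArrayList<>();

        static class Store implements AutoCloseable {
            public void close() { closed.add("store"); }
        }
        static class Env implements AutoCloseable {
            public void close() { closed.add("env"); }
        }

        public static void shutdown(Store store, Env env) {
            try {
                if (store != null) store.close();   // store first
            } finally {
                if (env != null) env.close();       // environment last, always
            }
        }

        static String demo() {
            closed.clear();
            shutdown(new Store(), new Env());
            return String.join(",", closed);
        }

        public static void main(String[] args) {
            System.out.println(demo());
        }
    }
    ```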
    --mark

  • Licensing of apps for multiple iPads in an enterprise environment

    I have 21 iPads in an enterprise environment, syncing to one iPad. Am I violating licensing terms? These are 21 iPads used by 21 board members each Friday. Do I need to purchase this app 21 times? If so, how?

    Yes, you may be. The applicable terms of sale are these:
    (ii) If you are a commercial enterprise or educational institution, you may download and sync an App Store Product for use by either (a) a single individual on one or more iOS Devices you own or control or (b) multiple individuals, on a single shared iOS Device you own or control. For example, a single employee may use the Product on both the employee's iPhone and iPad, or multiple students may serially use the Product on a single iPad located at a resource center or library.
    If these iPads are under your own control and you just loan them out for the board meetings, then that might fall under clause (a), though I'm no lawyer and I don't speak for Apple.
    If these iPads are kept by the board members for their use, then your company needs to purchase the app twenty-one times, and the only way at present to do that is through twenty-one different iTunes Store accounts. So each board member should probably buy through his or her own iTunes Store account, and get reimbursed by the company if appropriate. I know this is awkward; perhaps Apple will come up with a better mechanism in the future.
    Regards.
    Message was edited by: Dave Sawyer

  • How do you recommend a JE configuration according to app. construction?

    Hi to all,
    When we use JE with the following configuration, CPU utilization is consistently high and sometimes low.
    What JE configuration would you recommend for an application constructed as described below?
    Our application and its use of JE are as follows.
    - App. logic and JE usage, as follows:
    The application creates 4 Environments (each one has a data-store Database and a class-store Database). The Environments are not linked to each other; they are completely independent.
    The app executes concurrent non-transactional 1600 inserts (Database.put), 1600 updates (Database.put), and 1600 deletes (Database.put) per second (that is, 1600/4 inserts, 1600/4 updates, and 1600/4 deletes per environment).
    It also uses SerialBinding to store app objects that implement java.io.Serializable.
    Environment.sync is called once every few minutes.
    Insert and update objects are the same, and the object size is approximately 500 bytes to 1 KB.
    80% of inserted records will be updated once within one second and then deleted within 5 seconds.
    20% of inserted records will be updated 10 times over 28.5 hours and then deleted.
    We have 50 GB of disk space and 24 CPUs in total, model name: Intel(R) Xeon(R) CPU X5675 @ 3.07GHz.
    - Our JVM settings, as follows:
    -Xmx33g
    -Xms33g
    -XX:+UseParNewGC
    -XX:+UseConcMarkSweepGC
    -XX:CMSInitiatingOccupancyFraction=50
    -XX:+PrintTenuringDistribution
    -verbose:gc
    -XX:+PrintGCApplicationConcurrentTime
    -XX:+PrintGCApplicationStoppedTime
    - je.properties, as follows:
    ## Environment
    je.env.isReadOnly=false
    je.env.isTransactional=false
    je.maxMemoryPercent=30
    je.log.bufferSize=20971520
    je.log.numBuffers=2
    je.log.fileMax=209715200
    je.cleaner.minUtilization=30
    je.cleaner.minFileUtilization=5
    je.cleaner.threads=2
    je.cleaner.lockTimeout=100000
    je.cleaner.expunge=true
    je.cleaner.bytesInterval=104857600
    je.checkpointer.bytesInterval=104857600
    je.checkpointer.highPriority=true
    - Latest JE statistics:
              (1)CacheTotalBytes= The total amount of JE cache in use, in bytes.
              (2)DataBytes= The amount of JE cache used for holding data, keys and internal Btree nodes, in bytes
              (3)AdminBytes= The number of bytes of JE cache used for log cleaning metadata and other administrative structures
              (4)LockBytes= The number of bytes of JE cache used for holding locks and transactions.
              (5)NEvictPasses= Number of eviction passes, an indicator of the eviction activity level.
              (6)NCacheMiss= The total number of requests for database objects which were not in memory.
              (7)TotalLogSize= An approximation of the current total log size in bytes.
              (8)SharedCacheTotalBytes= The total amount of the shared JE cache in use, in bytes.
              (9)NTotalLocks= Total locks currently in lock table.
         Environment-1:(1)=174,757,042 (2)=132,804,551 (3)=9,235 (4)=216 (5)=0 (6)=188 (7)=284,132,682 (8)=0 (9)=2
         Environment-2:(1)=174,803,884 (2)=132,850,463 (3)=10,165 (4)=216 (5)=0 (6)=179 (7)=277,346,265 (8)=0 (9)=2
         Environment-3:(1)=174,739,257 (2)=132,776,071 (3)=19,930 (4)=216 (5)=0 (6)=184 (7)=278,147,194 (8)=0 (9)=2
         Environment-4:(1)=174,767,092 (2)=132,776,471 (3)=47,365 (4)=216 (5)=0 (6)=185 (7)=292,163,876 (8)=0 (9)=2
         Sum of Data-store-Database.count = 1137265
    - Latest CPU utilization:
         top - 10:33:58 up 102 days, 22:42, 3 users, load average: 2.55, 3.22, 3.48
         Tasks: 403 total, 1 running, 402 sleeping, 0 stopped, 0 zombie
         Cpu0 : 2.6%us, 2.3%sy, 0.0%ni, 95.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu1 : 6.8%us, 1.3%sy, 0.0%ni, 91.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu2 : 3.3%us, 0.3%sy, 0.0%ni, 96.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu3 : 8.1%us, 2.3%sy, 0.0%ni, 89.3%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu4 : 7.5%us, 2.0%sy, 0.0%ni, 90.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu5 : 7.5%us, 1.3%sy, 0.0%ni, 90.9%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu6 : 7.1%us, 0.6%sy, 0.0%ni, 92.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu7 : 8.4%us, 1.0%sy, 0.0%ni, 90.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu8 : 6.5%us, 2.3%sy, 0.0%ni, 91.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu9 : 7.5%us, 2.6%sy, 0.0%ni, 89.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu10 : 31.2%us, 0.3%sy, 0.0%ni, 68.2%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu11 : 2.9%us, 0.6%sy, 0.0%ni, 96.1%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu12 : 6.8%us, 0.3%sy, 0.0%ni, 92.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu13 : 9.2%us, 2.9%sy, 0.0%ni, 87.6%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu14 : 8.1%us, 0.6%sy, 0.0%ni, 91.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu15 : 12.1%us, 5.2%sy, 0.0%ni, 81.8%id, 0.0%wa, 0.0%hi, 1.0%si, 0.0%st
         Cpu16 : 2.6%us, 0.6%sy, 0.0%ni, 96.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu17 : 12.6%us, 5.5%sy, 0.0%ni, 80.3%id, 0.0%wa, 0.3%hi, 1.3%si, 0.0%st
         Cpu18 : 7.5%us, 0.6%sy, 0.0%ni, 91.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu19 : 11.3%us, 4.9%sy, 0.0%ni, 83.2%id, 0.0%wa, 0.0%hi, 0.6%si, 0.0%st
         Cpu20 : 6.8%us, 0.3%sy, 0.0%ni, 92.5%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu21 : 8.5%us, 8.1%sy, 0.0%ni, 82.1%id, 0.0%wa, 0.0%hi, 1.3%si, 0.0%st
         Cpu22 : 26.9%us, 0.6%sy, 0.0%ni, 72.1%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu23 : 6.5%us, 0.3%sy, 0.0%ni, 92.2%id, 0.0%wa, 0.3%hi, 0.6%si, 0.0%st
         Mem: 49454804k total, 26187100k used, 23267704k free, 1933064k buffers
         Swap: 50331640k total, 66092k used, 50265548k free, 4475980k cached
         PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
         22509 mtsmsc 18 0 34.3g 17g 14m S 260.8 37.6 1552:12 java
         top - 10:16:56 up 102 days, 22:25, 2 users, load average: 5.43, 4.32, 3.31
         Tasks: 400 total, 1 running, 399 sleeping, 0 stopped, 0 zombie
         Cpu0 : 7.3%us, 0.7%sy, 0.0%ni, 92.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu1 : 7.7%us, 2.0%sy, 0.0%ni, 90.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu2 : 1.7%us, 0.0%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu3 : 6.3%us, 1.3%sy, 0.0%ni, 92.1%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu4 : 2.3%us, 0.3%sy, 0.0%ni, 97.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu5 : 6.6%us, 1.3%sy, 0.0%ni, 91.7%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu6 : 27.7%us, 0.0%sy, 0.0%ni, 72.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu7 : 7.0%us, 1.0%sy, 0.0%ni, 92.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu8 : 7.6%us, 0.7%sy, 0.0%ni, 91.4%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu9 : 7.0%us, 2.7%sy, 0.0%ni, 90.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu10 : 6.3%us, 2.0%sy, 0.0%ni, 91.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu11 : 7.3%us, 2.3%sy, 0.0%ni, 90.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu12 : 3.0%us, 2.0%sy, 0.0%ni, 95.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu13 : 9.3%us, 3.3%sy, 0.0%ni, 86.4%id, 0.0%wa, 0.0%hi, 1.0%si, 0.0%st
         Cpu14 : 5.0%us, 0.3%sy, 0.0%ni, 94.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu15 : 11.3%us, 5.6%sy, 0.0%ni, 81.8%id, 0.0%wa, 0.3%hi, 1.0%si, 0.0%st
         Cpu16 : 6.3%us, 0.3%sy, 0.0%ni, 93.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu17 : 11.9%us, 5.6%sy, 0.0%ni, 81.5%id, 0.0%wa, 0.0%hi, 1.0%si, 0.0%st
         Cpu18 : 40.9%us, 0.0%sy, 0.0%ni, 59.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu19 : 10.6%us, 4.3%sy, 0.0%ni, 84.4%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st
         Cpu20 : 2.3%us, 0.7%sy, 0.0%ni, 96.7%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Cpu21 : 14.9%us, 6.6%sy, 0.0%ni, 76.8%id, 0.0%wa, 0.3%hi, 1.3%si, 0.0%st
         Cpu22 : 5.6%us, 0.3%sy, 0.0%ni, 94.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
         Cpu23 : 3.3%us, 0.7%sy, 0.0%ni, 95.7%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
         Mem: 49454804k total, 33344956k used, 16109848k free, 1931968k buffers
         Swap: 50331640k total, 66092k used, 50265548k free, 11654696k cached
         PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
         22509 mtsmsc 18 0 34.3g 17g 14m S 260.4 37.6 1511:00 java
    Many thanks in advance.
    Edited by: 906759 on Jan 16, 2012 1:12 AM

    "When we use JE with the following configuration, CPU utilization is consistently high and sometimes low." Do you consider this a problem? Or are you just asking for general tuning advice? Normally, high CPU utilization is considered a good thing.
    Some additional questions:
    * What version of JE are you using? If you are writing a new app, be sure to use JE 5.0.
    * What is the total/maximum size of your data set?
    * What is the key size (byte length)? (I see the data size is 0.5 to 1.0 KB)
    * Are there no read operations?
    * You mention throughput numbers (for example, "80% of inserted records will be updated once within one second"). Is this a goal, the measured value in your test, or both?
    A couple of initial comments:
    * Calling Environment.sync is not appropriate for durability in JE 5.0, and will negatively impact performance. Instead use Environment.flushLog.
    * Using multiple Environments in a single process is not recommended for maximum write performance. A single Environment with a shared cache (EnvironmentConfig.setSharedCache(true)) will result in better write throughput, because all writes are appending to the same log file and disk head movement is minimized.
    * You may see better performance by using the DPL (persist package) or a custom binding, rather than SerialBinding. In general, if you find you are CPU-limited, take several thread dumps during that period to see if most threads are executing in the same place.
    Performance tuning is an iterative process. Be sure to try only one change at a time to determine whether the impact is positive or negative.
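    The shared-cache suggestion above can be sketched as a configuration change; this is an illustrative fragment (in code the equivalent call is EnvironmentConfig.setSharedCache(true), and the assumption is that your JE version supports the je.sharedCache parameter):

    ```
    ## je.properties sketch: let all Environments opened in this process share
    ## a single JE cache instead of maintaining four separate ones.
    je.sharedCache=true
    ```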
    --mark

  • InDesign CS6 opens all documents from the previous day on startup

    First of all, I am on a Mac using a Mobile Account, and I have no admin control over this setup. Every 2 hours or so, the local environment syncs with the server and saves my 'computer'.
    The problem is that when I open InDesign in the morning to start the day, hundreds of documents begin to open. They appear to be every document opened the previous day (or more).
    InDesign does not crash. The documents do not appear to be recovering, just opening.
    I tried to clear the recovery cache, but that had no effect.
    Is there something I can clear to prevent this from happening?

    I was hoping to find an answer here, as they are not sure either.
    I think Larry's answer is right: restart (without the Option key) and uncheck 'Reopen windows'. It will stay unchecked unless you check it on a later restart.

  • Bulk-loading performance

    I'm loading Twitter stream data into JE. There are about 2 million data items daily, each about 1 KB. I have a user class and a twit (status) class, and for each twit I update the user; I also have secondaries on twits for replies, and use DPL. In fact this is all in Scala, but it works with JE just fine, as it should. Since each twit insertion updates its user (e.g. incrementing the user's total twit count), originally I had a transaction for each UserTwit insertion and several threads working on inserting, similar to the architecture I first developed for PostgreSQL. However, that was too slow. So I switched to a single thread, no transactions, and deferred write. Here's what happens with that: the loading works very quickly through all twits, in about 10-20 minutes, and then spends about 1-2 hours on store.sync; store.close; env.sync; env.close. Do I need to sync both if I have only one DPL store and nothing else in this environment, and do I lose any extra time with two syncs? Should I do anything special to stop the checkpointer thread or the cleaner thread?
    I already have 2,000+ small 10 MB .jdb files, and wonder how I can agglomerate them into, say, 1 GB files each, since that is about how much the database grows daily.
    Overall, PostgreSQL performance is about 2-4 hours per bulk load, similar to BDB JE. I implemented exactly the same loading logic with the PG and BDB backends, and hoped that BDB would be faster, but so far it is not by an order of magnitude... And this is given that PG doesn't use a RAM cache, while with JE I specify a cache size of 50 GB explicitly, and it takes about 15 GB of RAM when quickly going through the put phase, before hanging for an hour or two in sync.
    The project, tfitter, is open source, and is available at github:
    http://github.com/alexy/tfitter/tree/master
    I use certain tricks to convert the Java classes to and from Scala's, but all the time is spent in sync, so it's a JE question --
    I'd appreciate any recommendations to make it faster with the JE.
    Cheers,
    Alexy
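
    For reference, a minimal sketch of the deferred-write setup described above (assuming the DPL StoreConfig/EntityStore API; the store name and directory are illustrative):

    ```java
    import java.io.File;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.StoreConfig;

    public class DeferredWriteLoad {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            Environment env = new Environment(new File("/tmp/tfitter-env"), envConfig);

            StoreConfig storeConfig = new StoreConfig();
            storeConfig.setAllowCreate(true);
            storeConfig.setDeferredWrite(true); // no per-operation durability during the load
            EntityStore store = new EntityStore(env, "tfitter", storeConfig);

            // ... bulk puts via the store's primary indexes ...

            store.sync();  // flush the deferred-write databases once, at the end
            store.close();
            env.close();   // performs a final checkpoint on close
        }
    }
    ```

    With only one store in the environment, syncing both the store and the environment duplicates work; one of the two should suffice, and Environment.close runs a final checkpoint in any case.
    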

    Alexy,
    A few of us were talking about your question and had some more options to add. Without more detailed data, such as the stats obtained from Environment.getStats() or the thread dumps Charles and Gordon (gojomo) suggested, our suggestions are a bit hypothetical.
    Gordon's point about GC options and Charlie's suggestion of je.checkpointer.highPriority are CPU oriented. Charlie's point about EntityStore.sync vs Environment.sync is also in that category. You should try those suggestions because they will certainly reduce the workload somewhat. (If you need to sync essentially everything in an environment, it is less overhead to call Environment.sync, but if only some of the entity stores need syncing, it is more worthwhile to call EntityStore.sync.)
    However, your last post implied that you are more I/O bound during the sync phase. In particular, are you finding that you have a small number of on-disk files before the call to sync, and a great many afterwards? In that case, the sync is dumping out the bulk of the modified objects at that time, and it may be useful to change the .jdb file size during this phase by setting je.log.fileMax through EnvironmentConfig.setConfigParam().
    JE issues an fsync at the boundary of each .jdb file, so increasing the .jdb file size dramatically can reduce the number of fsyncs and improve your write throughput. As a smaller, secondary benefit, JE stores some metadata on a per-file basis, and increasing the file size can reduce that overhead, though generally that is a minor issue. You can see the number of fsyncs issued through Environment.getStats().
    There are issues to be careful about when changing the .jdb file size. The file is the unit of log cleaning. Increasing the log file size can make later log cleaning expensive if that data becomes obsolete later. If the data is immutable, that is not a concern.
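    As a sketch, the file-size change might look like this when opening the environment for the bulk-load phase (assuming EnvironmentConfig.setConfigParam; the 1 GB value is illustrative, not a recommendation):

    ```java
    import java.io.File;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class BigLogFiles {
        public static Environment openForBulkLoad(File dir) {
            EnvironmentConfig config = new EnvironmentConfig();
            config.setAllowCreate(true);
            // Illustrative: raise je.log.fileMax from the ~10 MB default to 1 GB
            // so far fewer per-file fsyncs occur during the bulk-load sync phase.
            config.setConfigParam("je.log.fileMax",
                                  String.valueOf(1024L * 1024 * 1024));
            return new Environment(dir, config);
        }
    }
    ```

    Per the caveat above, larger files make log cleaning coarser, so this is most attractive when the loaded data is immutable.
    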
    Enabling the write disk cache can also help during the write phase.
    Again, send us any stats or thread dumps that you generate during the sync phase.
    Linda

  • Inexplicable

    First of all, I'm running Oracle 9i (OEM 9.2.0.1.0), but the 9i forum seems to have been decommissioned... whatever.
    I've been trying to set up automated backup for a database I've got on a standalone computer for prototyping. When I submitted the backup job, with the default backup configuration, it gave me a 'Job successfully entered. Check Active Job log for job status.' message, or something to that effect. On checking the log, I found the job listed as failed at the moment of submission, the reason given being that the preferred credentials did not have the 'Log on as a batch job' privilege. Using the Local Security Settings utility I gave said privilege to Everyone and, upon resubmitting the job, received the same error.
    Well, restarts solve problems, right? So I restarted the computer, only to find that I could no longer log into the OMS, getting the following error:
    'VKT-1000: Unable to connect to the management server. Please verify that you have entered the correct host name and the status of the Oracle Management Server.'
    So I logged into OEM in standalone mode and shut down/restarted the OMS database as a way to 'verify status', to no effect.
    So the question remains: what is going on here?
    Note that I've been self-taught so it's quite likely that there's some horribly idiotic thing that I completely missed.

    I'm not sure, Vinoth, but the first suspect is that recovery is doing a lot of work when re-opening the environment with the smaller cache, and this starts a cycle of eviction/cleaning.
    Are you doing a normal shutdown before re-opening with the smaller cache?
    In any case perhaps the JE 5 change that impacts this situation is this one:
    Improvements were made to recovery (Environment open) performance by changing the behavior of checkpoints in certain cases. Recovery should always be very quick after the following types of checkpoints:
    - When CheckpointConfig.setMinimizeRecoveryTime(true) is used along with an explicit checkpoint performed by calling the Environment.checkpoint method.
    - When Environment.sync is called.
    - When Environment.close is called, since it performs a final checkpoint.
    In addition, a problem was fixed where periodic checkpoints (performed by the checkpointer thread or by calling Environment.checkpoint) would cause long recovery times under certain circumstances. As a part of this work, the actions invoked by ReplicatedEnvironment.shutdownGroup() were streamlined to use the setMinimizeRecoveryTime() option and to reduce spurious timeouts during the shutdown processing. [#19559]
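    A short sketch of the setMinimizeRecoveryTime checkpoint mentioned above (assuming the JE 5 CheckpointConfig API):

    ```java
    import com.sleepycat.je.CheckpointConfig;
    import com.sleepycat.je.Environment;

    public class QuickRecoveryCheckpoint {
        // Run an explicit checkpoint tuned so a later Environment open recovers quickly.
        public static void checkpointForFastRecovery(Environment env) {
            CheckpointConfig config = new CheckpointConfig();
            config.setMinimizeRecoveryTime(true); // favor recovery time over checkpoint cost
            config.setForce(true);                // run even if little data has been written
            env.checkpoint(config);
        }
    }
    ```
    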
    --mark

  • Unexpected internal Exception (RelatchRequiredException)

    Hi,
    I have recently upgraded to version 5.0.34 and was testing our application when I saw the following error:
    com.sleepycat.je.EnvironmentFailureException: (JE 5.0.34) com.sleepycat.je.utilint.RelatchRequiredException UNEXPECTED_EXCEPTION: Unexpected internal Exception, may have side effects.
    at com.sleepycat.je.EnvironmentFailureException.unexpectedException(EnvironmentFailureException.java:286)
    at com.sleepycat.je.tree.BIN.fetchTarget(BIN.java:1268)
    at com.sleepycat.je.Cursor.checkForInsertion(Cursor.java:3006)
    at com.sleepycat.je.Cursor.retrieveNextAllowPhantoms(Cursor.java:2926)
    at com.sleepycat.je.Cursor.retrieveNextNoDups(Cursor.java:2789)
    at com.sleepycat.je.Cursor.retrieveNext(Cursor.java:2763)
    at com.sleepycat.je.Cursor.getNext(Cursor.java:1116)
    As a rough overview, the application has many threads (transactionally) inserting into and removing from three separate tables. Commits are non-synchronous (i.e. Transaction.commitNoSync() is being used). A separate thread regularly calls Environment.flushLog(true).
    I've only seen this once so far, when the application was under very heavy load...
    Any ideas?
    Cheers,
    Rob

    OK... I've been doing some more testing and have some more information.
    Firstly, I have seen this a few more times, though this particular error is by no means easily reproducible.
    I have determined that it occurs on both jdk 1.7.0 and 1.6.0_26 (both linux, 64bit) as I wanted to rule out jdk 1.7 as a factor.
    Since I had never seen this error when using v4.x of BDB, I looked at the code change I had made simultaneously with the upgrade: previously, in order to synchronously flush the asynchronous transaction commits to disk, I was calling Environment.sync(), whereas with v5.0.34 I was calling Environment.flushLog(true) (since this had demonstrated better performance and covered exactly the use case that I desired).
    After switching this code back to sync() I could not reproduce the issue. Moreover neither did I see any of the other perplexing errors that I had been investigating (see below).
    The part of my code where the error was occurring is the only place where cursors are actively used. All other transactional activity is either single-record inserts or single-record deletes driven by a unique key. The area of the code where I saw this exception being thrown looks somewhat like this:
    DatabaseEntry value = new DatabaseEntry();
    value.setPartial(0, 0, true);
    cursor = database.openCursor(tx, null);
    status = cursor.getSearchKeyRange(key, value, LockMode.RMW);
    while (status == OperationStatus.SUCCESS) {
        keyObject = binding.entryToObject(key);
        if (keyObject.getId() != id) {
            break;
        } else {
            status = cursor.delete();
            status = cursor.getNext(key, value, LockMode.RMW);
        }
    }
    cursor.close();
    tx.commitNoSync();
    The idea is that the actual primary key for the table is a compound of (id, number); we start the cursor from the key (id, 0) and delete all entries until we find an entry with a different id.
    Now as it happens, for the test I am running I know that there will only ever be at most one entry of the form (id, 0).
    And if I change the code to remove the cursor, and instead just do a straight delete operation on the database, I again see no errors.
    The other thing to note is that in addition to the Unexpected Internal Exception described above, I was seeing far more frequent LockTimeout exceptions... moreover, backing off and retrying the transactions didn't seem to work: the locks seemed to be permanently held. These timeouts also only occurred in the above part of my code, and not in any of the other transactions it was performing.
    After changing to use Environment.sync() (rather than flushLog), or after removing the cursor (but keeping flushLog), these lock timeout exceptions also went away.
    Finally, I was seeing an even stranger (to me) error... Occasionally my tests seemed to complete fine, but when I came to shut down my process, closing the environment reported that some transactions were still uncommitted. Again, the only instances were from the transactions involving the cursor above. As above, the only way to get through that code without committing is (AFAICT) by some method throwing an exception... and I was seeing no such exceptions reported.
    Once again, either changing from flushLog(true) to sync(), or replacing the cursor with a straight delete stopped these issues from occurring.
    So, in summary I was seeing a number of weird behaviours that only seemed to occur when I was using a cursor to delete records from my database, and when I was using flushLog() to flush the log records to disk.
    Hope this is helpful... If there's anything more I can do to help you debug this issue, please let me know,
    Cheers,
    Rob

  • Sync or ASync for a Data Warehouse environment?

    We have a 7 TB DW environment that we're using High Availability on. Almost all of the data is bulk loaded nightly/ weekly. We've been running this in Sync-commit mode, but lately the Transaction Log in our Primary DB has grown to > 1/2 TB waiting on
    the bulk loads to be committed on the Secondary server. 
    Would this scenario be mitigated if we were using ASync-commit mode? Would the TLog, no longer waiting for the Secondary to synchronize, be able to shrink as it normally would (of course considering TLog backups, etc.)?
    TIA, ChrisRDBA

    Hi,
    Whether you use sync or async mirroring (I guess you are talking about mirroring), the logs generated by a transaction would be the same. IMO the problem here is the volume of log being generated. Of course, in sync mirroring a transaction waits for the commit on the mirror before it commits on the principal, so in sync mode the commit is delayed, but the transaction log itself is not affected.
    From what you describe, I suggest you break the bulk load operation into smaller batches. That is, decrease the amount of data being loaded at once, followed (if possible) by more frequent transaction log backups. Async would help to speed up commits, but remember that the chance of data loss in case of disaster increases with async mirroring, and you need Enterprise edition to take advantage of async mirroring.

  • Custom Table Entry not in Sync between Different Environment

    Hi All,
    We have different environments: a Dev system, a Quality system, a performance testing system, etc. The custom table entries in these environments are not in sync, but they should be. How can we sync all the entries across all environments without any manual work? Kindly help with this.
    Thanks

    Hello Mohammed,
    It depends on what type of tables you are referring to. If you have a table of type 'Customizing', you have to transport your table entries across the different environments, and they will be in sync.
    If you are referring to tables of type 'Master Data', they have to be maintained in all environments.
    Tables of type 'Transaction Data' by default have different records, based on your day-to-day transactions.
    You cannot have all custom tables with the same data in all environments. Hope this helps.
    Rgds,
    Vijay.

  • #DATASYNC & #CONTEXT error in BI 4.1 with ECC 6 environment

    I am getting the #DATASYNC & #CONTEXT error in BI 4.1 with an ECC 6 environment. When I add an Excel sheet as a secondary source for a BO report, I get this error because of the data below:
    Ecc data
    Profit center -- 000100
    Fiscal Year --- 2,014
    Posting Period --- 1 to 12
    Amount 2000
    Excel Data
    Profit center -- 000100
    Discharge Date --- Q1 2014, Q2 2014,
    Discharge Port --- Excelerate or Expediate, etc
    I have created a report using ECC data: Profit center, Fiscal Year, Posting Period, and Amount.
    Now, when I add Discharge Date and Discharge Port from the Excel data, I get the #DATASYNC & #CONTEXT error, or 'Incompatible object'.
    This is because I am not able to merge these objects with the ECC objects. Can anyone suggest what I can do to achieve this?
    Cheers
    Murali Durairaj

    Hi,
    Have you created two different data providers (one for ECC, another for the Excel data) in the WebI report?
    If yes, check whether you are able to see sample data in the Excel data provider:
    Run the WebI report -- Design Mode -- Edit -- select the Excel data provider -- check that you can see the sample data from the Excel sheet.
    If you are able to see the sample data, then merge the common objects (merging is possible only if both objects have the same data types), and check the data in the report.

  • Best Prctice to sync two environment Planning data

    Hi Experts,
    What would be the best feasible option to keep two environments in sync?
    1) Shared Services console
    2) Using the migration utility (Utility.bat)
    Version: 11.1.2
    Regards
    Kumar

    Thanks Alp/John for the help.
    It’s very useful information regarding migration using LCM.
    1. The LCM utility is best for automating the migration, whereas Shared Services is mostly used for defining migrations and running them manually. That answers my Q1 (the LCM utility is better in my scenario).
    2. Planning data migration is essentially Essbase data migration, so you can use either LCM or export/import (export/import is OK, perhaps via MaxL). Could you please hint at how to use LCM to migrate Planning (Essbase) data? I am not sure whether LCM in Shared Services will allow exporting Planning application data; you can migrate Essbase data using LCM, but it allows only Essbase application data.
    Regards
    Kumar
