SQL Toolkit crashing with multiple threads

Hello everyone and Happy New Year!
I was hoping someone might be able to shed some light on this problem. I am updating an older application to use multiple threads. The thread that is causing the problem right now is created by an asynchronous timer.
I am using CVI 2010, and I think the SQL toolkit is version 2.2.
If I execute an SQL statement from the main thread, there is no problem.
stat = DBInit (DB_INIT_MULTITHREADED);
hdbc = DBConnect( "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=sample.mdb;Mode=ReadWrite|Share Deny None" );
hstmt = DBActivateSQL( hdbc, "SELECT * FROM SAMPLES" );
DBDeactivateSQL(hstmt);
DBDisconnect(hdbc);
If I add code to do the same functions in a timer callback, it causes a stack overflow error.
.. start main thread
stat = DBInit (DB_INIT_MULTITHREADED);
NewAsyncTimer (5.0, -1, 1, &gfn_quicktest, 0);
 .. end main thread
.. and then the timer callback
int CVICALLBACK gfn_quicktest (int reserved, int timerId, int event, void *callbackData, int eventData1, int eventData2)
{
    int hdbc = DBConnect( "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=params\\sample.mdb;Mode=ReadWrite|Share Deny None" );
    int hstmt = DBActivateSQL( hdbc, "SELECT * FROM SAMPLES" );
    DBDeactivateSQL(hstmt);
    DBDisconnect(hdbc);
    return 0;
}
The program crashes with a stack overflow error when the DBActivateSQL statement is called.
I understand that the ODBC driver for Access may not support multithreading, but I am only connecting to that database from a single thread, with those two statements only, so it should be fine?
Any insight would be appreciated,
Thanks,
Ed.

I just tried this using the sample Access database that comes with CVI. It uses a DSN instead of an .mdb file, but it worked fine. I don't see any reason multithreading would be a problem here if you are opening and closing the connection in the same code segment. I do notice that you are using "params" in the async callback's connection string. Where does this come from? Maybe try using the sample database and see if that works.
National Instruments
Product Support Engineer
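For what it's worth, the "open and close the connection in the same thread" pattern under discussion can be sketched outside CVI. This is an illustrative Python sketch using sqlite3 as a stand-in for the Access/ODBC database (the query and all names are made up), not the CVI SQL Toolkit API:

```python
# Hedged sketch: each thread creates, uses, and closes its OWN connection,
# mirroring the advice above; no connection object is shared across threads.
import sqlite3
import threading

results = []
results_lock = threading.Lock()

def worker(n):
    # Per-thread connection: opened and closed entirely inside this thread.
    conn = sqlite3.connect(":memory:")
    row = conn.execute("SELECT ?", (n,)).fetchone()
    with results_lock:
        results.append(row[0])
    conn.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If the CVI crash persists even with this discipline, the driver itself (the Jet OLE DB provider here) may still be the thread-unsafe piece.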

Similar Messages

  • How do you disable "Run with Multiple Threads" in a standalone EXE

    I have a program written in LabVIEW 6.1.  After moving to a different hardware platform, my program started crashing at the same point every time it is run.  I eventually found out that the cause of the crash is that the new hardware has a dual-core processor.  I confirmed this by disabling "Run with multiple threads", and now the program works fine.  What I need to know now is how to disable the same setting in a built EXE file, since as far as I know the "Run with multiple threads" setting only affects execution in the LabVIEW development environment.
    Thanks for any help,
    Dave

    Greg McKaskle once posted that using a non-reentrant (VI is NOT re-entrant) wrapper VI to make the calls to the dll will prevent simultaneous execution of the dll.
    Ben
    Ben Rayner

  • How to process the records in a table with multiple threads using PL/SQL & Java

    I have a table containing millions of records, and the number of records also keeps increasing because a high-speed process is populating this table.
    I want to process this table using multiple Java threads, but the condition is that each record must be processed only once, by exactly one thread. After processing I need to delete that record from the table.
    Here is what I am thinking. I will put the code to process the records in PL/SQL procedure and call it by multiple threads of Java to make the processing concurrent.
    Java Thread.1 }
    Java Thread.2 }
    .....................} -------------> PL/SQL Procedure to process and delete Records ------> <<<Table >>>
    Java Thread.n }
    But the problem is: how can I prevent a record from being picked up by another thread while it is being processed (so it is not processed multiple times)?
    I am very familiar with PL/SQL code. The only issue I am facing is how to fetch/process/delete each record exactly once.
    I can change the structure of table to add any new column if needed.
    Thanks in advance.
    Edited by: abhisheak123 on Aug 2, 2009 11:29 PM

    Check if you can use bucket logic in your PL/SQL code.
    By bucket I mean making multiple buckets of your data to be processed, so that each bucket contains different rows, and then calling the PL/SQL procedure in parallel, once per bucket.
    Let's say there is a column create_date and a column processed_flag in your table.
    Your PL/SQL code should take 2 parameters, start_date and end_date.
    Now if you want to process data between, say, 01-Jan and 06-Jan, a wrapper program should first create 6 buckets, each of one day, and then call the PL/SQL proc in parallel for these 6 different buckets.
    Regards
    Arun
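Arun's bucket idea can be sketched in Python as a hedged illustration (in-memory tuples stand in for table rows; all names are invented): each worker gets a disjoint date bucket, so every row is processed exactly once.

```python
# Hedged sketch: partition the data by create_date into per-day buckets and
# hand each bucket to its own worker, so no row is ever processed twice.
import datetime
import threading

# (create_date, id) tuples standing in for table records: 6 days x 10 rows.
rows = [(datetime.date(2009, 1, d), i) for d in range(1, 7) for i in range(10)]

processed = []
plock = threading.Lock()

def process_bucket(day):
    # Each worker sees a disjoint slice of the data, like the PL/SQL proc
    # being called with its own start_date/end_date.
    mine = [r for r in rows if r[0] == day]
    with plock:
        processed.extend(mine)

days = sorted({r[0] for r in rows})
threads = [threading.Thread(target=process_bucket, args=(d,)) for d in days]
for t in threads:
    t.start()
for t in threads:
    t.join()
```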

  • iPhoto/Preview crashing with multiple user accounts

    Hi everybody,
    I've been stuck for a while now with my iMac and parental-controlled user accounts.
    For a few months, iPhoto and Preview have kept crashing at startup in these other accounts.
    All works fine in my own administrator account. I'll copy the first part of the crash report; maybe that will help.
    I think the problem started when I connected a photo camera (Sony W125) directly to the computer.
    It seems that viewing pictures (PDF/iPhoto) has been corrupted in this way.
    I'm working on an late 2011 iMac 2,5 Ghz Intel Core i5 /12GB 1333 / OSX 10.7.5
    Already done:
    removed plists
    repaired permissions
    re-installed Lion
    re-installed iPhoto
    gave full permissions to all other accounts (turning off parental controls, in other words)
    made a new user account with full permissions except administrator rights (iPhoto also crashes there)
    tried some searching in the root account, but was afraid to damage stuff
    looked around on the web, but I couldn't find people with the same problem
    Thanks a lot for your help in advance.
    Michiel
    Process:         iPhoto [1058]
    Path: /Applications/iPhoto.app/Contents/MacOS/iPhoto
    Identifier:      com.apple.iPhoto
    Version:         9.4.2 (9.4.2)
    Build Info:      iPhotoProject-710042000000000~2
    Code Type:       X86 (Native)
    Parent Process:  launchd [807]
    Date/Time:       2012-11-05 10:30:59.667 +0100
    OS Version:      Mac OS X 10.7.5 (11G63)
    Report Version:  9
    Interval Since Last Report:          247 sec
    Crashes Since Last Report:           2
    Per-App Crashes Since Last Report:   1
    Anonymous UUID: 19E6D82E-8DD9-4044-B141-C67D09268E1F
    Crashed Thread:  0 Dispatch queue: com.apple.main-thread
    Exception Type:  EXC_BAD_INSTRUCTION (SIGILL)
    Exception Codes: 0x0000000000000001, 0x0000000000000000
    Application Specific Information:
    dyld: launch, running initializers
    /usr/lib/libSystem.B.dylib
    xpchelper reply message validation: code signature invalid
    The code signature is not valid: The operation couldn’t be completed. (OSStatus error 100005.)
    Application Specific Signatures:
    code signature invalid
    Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
    0 libxpc.dylib                      0x96b2254e runtime_init + 2014
    1 libdispatch.dylib                         0x92d10c27 dispatch_once_f + 50
    2 libxpc.dylib                      0x96b22d92 _xpc_runtime_set_domain + 350
    3 libxpc.dylib                      0x96b1f9af _libxpc_initializer + 578
    4 libSystem.B.dylib                        0x94ba77a7 libSystem_initializer + 199
    5 dyld                                  0x8fef1203 ImageLoaderMachO::doModInitFunctions(ImageLoader::LinkContext const&) + 251
    6 dyld                                  0x8fef0d68 ImageLoaderMachO::doInitialization(ImageLoader::LinkContext const&) + 64
    7 dyld                                  0x8feee2c8 ImageLoader::recursiveInitialization(ImageLoader::LinkContext const&, unsigned int, ImageLoader::InitializerTimingList&) + 256
    8 dyld                                  0x8feee25e ImageLoader::recursiveInitialization(ImageLoader::LinkContext const&, unsigned int, ImageLoader::InitializerTimingList&) + 150

    After searching the web and the discussion here, here's my minimal impact solution for multiple Macs with multiple users in a household:
    1) Set up Mac1 for myself only
    2) Set up Mac2 for the wife and kid
    3) Set up each Mac to backup to the Airport base station using Time Machine (this would create two separate backups on the Airport base station's drive, which from what I've read has its own problems)
    4) On Mac1 setup "empty user accounts" for the wife and kid. These will not have any files in them - just an access mechanism. If they want to access their files, they can use Time Machine's "The Browse Other Backup Disks Option" to get their file from Mac2, work on it and then drop it in the Shared Folder. Next time they are on Mac2, remember to copy the updated/created file from the Shared Folder into their Mac2 user account. If possible, get Time Machine to not backup the "empty user accounts".
    5) Do the same for me on Mac2.
    Not the most elegant solution, but until Apple get off their backside and make this seamless, I can't think of anything else :-( .
    P.S. iCloud is not a solution since it costs hundreds of dollars a year, uses up internet data allowance, and is slow.

  • Firefox 21.0 crashes with multiple tabs

    When I open Firefox on my Note 2 with multiple tabs already open, it hangs for some time, crashes, and requests a restart. After the restart it works fine, with my closed tabs listed on the first page.

    please find the report ids below.
    Submitted Crash Reports
    Report ID Date Submitted
    bp-2062a152-8225-408a-b1ac-5635c2130528 05/28/13 20:43
    bp-7461017c-b545-4eb7-948c-37a112130524 05/24/13 07:20
    bp-0a28877a-d4b4-437b-83fe-3799e2130524 05/24/13 07:20
    bp-566e728e-b3a5-42b6-aeb2-ae7a32130524 05/24/13 07:15
    bp-c92986f0-3cc0-44c7-a8aa-150c02130524 05/24/13 07:07
    bp-00efba8f-9bf9-4c37-bf63-ff8222130523 05/23/13 17:45
    I couldn't find anything myself. Please check the reports.

  • How can I use SQL TOOLKIT concurrently with Database Connectivity ?

    I have installed LabVIEW 6.1 with the Database Connectivity Toolkit and the SQL Compatibility Toolkit (i.e. the _SQL folder). I am trying to make the transition from the SQL Toolkit VIs to the Database Connectivity toolkit, but for compatibility with existing systems I would like to be able to run the two sets of VIs concurrently (though not in the same app).
    When I read into 6.1 a connection VI that I wrote with LabVIEW 6.0 and the SQL Toolkit, the connection reference type gets changed from a number to type connection (see attachments). Does compatibility mean that my SQL Toolkit VIs are converted to a form compatible with the new ADO?
    Can I use the SQL Toolkit VIs or the Database Connectivity VIs in the same installation of 6.1?
    Can I have the SQL Toolkit VIs appear on the functions palette and function as they did when only the SQL Toolkit was installed?
    Attachments:
    CNNCT.vi ‏20 KB
    CNNCT.vi ‏22 KB

    In response to your #2 below:
    Actually it is possible to have the old SQL Toolkit and the new Database
    Connectivity in the same installation of LabVIEW. I have only tried it on
    6i, but don't see why it wouldn't work on 6.1. The trick is not to install
    the SQL Toolkit compatibility VIs. The old SQL Toolkit uses the Intersolv
    dll through ODBC while the new Database Connectivity uses ADO, so it is
    possible to use both methods not only in the same LabVIEW install, but in
    the same running application. It has been a while since I originally did
    this, so I am posting only to mention that it is possible, not exactly
    how to do it. If anyone is interested in more details, just respond.
    Brian
    "Jeff B" wrote in message
    news:[email protected]...
    > First, direct answers to your direct questions:
    >
    > 1. Does compatibility mean that my SQL toolkit VIs are converted to a
    > form compatible with the new ADO?
    >
    > Yes
    >
    > 2. Can I use the SQL Toolkit VIs or the Database connectivity VIs in
    > the same installation of 6.1 ?
    >
    > No
    >
    > 3. Can I have the SQL Toolkit VIs appear on the functions palette
    > and function as they did when only the SQL toolkit was installed?
    >
    > No
    >
    >
    > Now an elaboration:
    >
    > Having the old SQL Toolkit and the new Database Connectivity Toolset
    > installed on the same version of LabVIEW on the same computer is not
    > supported.
    >
    > Once you install the Database Connectivity Toolset, any VIs written
    > with the SQL Toolkit will run, but with the ADO layer, as you
    > suspected.
    >
    > Internally, the only way we can have both the SQL Toolkit and the
    > Database Connectivity Toolset installed on the same computer for
    > troubleshooting customer issues is to have them installed on different
    > versions of LabVIEW. I, for example, have LabVIEW 5.1.2, 6.0.3, and
    > 6.1 all installed on my computer, and I have the SQL Toolkit installed
    > on LabVIEW 5.1.2, and the Database Connectivity Toolset installed on
    > LabVIEW 6.0.3. In this configuration I can still run SQL Toolkit VIs
    > independent of the Database Connectivity Toolset if I open and run
    > them in LabVIEW 5.1.2.

  • Problem with multiple threads accessing the same Image

    I'm trying to draw into one Image from multiple threads. It works fine for a while, but then suddenly the image stops updating. The threads are still running, but the image won't update. I'm using double buffering, and the threads are simply drawing counters into the Image at different speeds.
    It seems like the Image gets deadlocked or something. Does anyone have any idea what's behind this behavior, or perhaps a better way to do this?
    Any help will be appreciated.
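One hedged way to picture a fix (illustrative Python, with a plain list standing in for the Image; not the poster's actual code): give the shared image a single lock and take it for every draw, so no two threads mutate it at once.

```python
# Hedged sketch: several threads drawing "counters" into one shared image
# buffer; a single lock per draw keeps the updates consistent.
import threading

WIDTH = 8
image = [0] * WIDTH          # stands in for the shared Image
image_lock = threading.Lock()

def draw_counter(offset, ticks):
    for _ in range(ticks):
        with image_lock:     # one writer in the image at a time
            image[offset] += 1

threads = [threading.Thread(target=draw_counter, args=(i, 100)) for i in range(WIDTH)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```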

    Sorry kglad, I didn't mean to be rude. By "no coding
    errors" I meant the animation itself runs with no errors. I'm sure
    you could run the 20 instances with no freezing (that's why I made
    the post :) ). But I'm afraid it is an animation for a client, so I
    cannot distribute the code.
    Perhaps I didn't explain the situation clearly enough (in part
    because of my poor English):
    - By 20 instances I mean 20 separated embedded objects in the
    html
    - The animation is relatively simple: a lit candle. In
    each cycle I calculate the next position of the flame (which
    oscillates from left to right). The flame is composed of 4
    concentric gradients. There are NO loops, only an 'onEnterFrame'
    function refreshing the flame each time.
    - It's true that I have plenty of variables at the _root level.
    If that could be the problem, how can I work around it?
    - It is also my first time trying to embed so many objects at the
    same time. No idea if the problem could be the way I embed the
    objects from the html :(
    - The only thing I can guess is that when a cycle of one of
    the objects is running, the other 19 objects must wait their turn.
    That would explain why the more instances I run, the worse the
    results I get. In that case, I wonder if there's a way to run them
    in a kind of asynchronous mode, just guessing...
    Any other comments would be appreciated. Anyway, thanks a lot,
    everybody, for your collaboration.

  • Premiere Elements 9 crashes with multiple NVIDIA graphics cards under Windows 7

    I've just installed Premiere Elements 9 and found that it will not run at all with multiple NVIDIA graphics cards active under Windows 7.  Per all the other discussions, I updated the NVIDIA drivers and Quicktime support to the latest versions.  (NVIDIA version 260.99).  I have both a 9400 GT and an 8400 GS installed, to support multiple monitors.  They both use the same driver.  The PRE 9 organizer will run, but as soon as I create or load a project it quickly crashes.
    On a long shot I disabled the 8400 GS card in the Device Manager and now I can run PRE 9.  Of course this means giving up one of my monitors.  Any other ideas?

    Over in the Premiere Pro area (or maybe it was in the Hardware section... didn't pay a lot of attention 'cause it really didn't matter to me) I read a message about using two nVidia cards with no problems (other than no dual-card SLI support)
    I **think** that was with two SAME model nVidia cards
    And, of course, PPro is completely different code, so PreEl may simply not work with two nVidia cards

  • SQL Adapter Crashes with large XML set returned by SQL stored procedure

    Hello everyone. I'm running BizTalk Server 2009 32 bit on Windows Server 2008 R2 with 8 GB of memory.
    I have a Receive Port with the Transport Type being SQL and the Receive Pipeline being XML Receive.
    I have a Send Port which processes the XML from this Receive Port and creates an HIPAA 834 file.
    Once a large file has been created (approximately 1.6 GB in XML format, 32 MB in EDI form), a second file of 1.7 GB fails to be created.
    I get the following error in the Event Viewer:
    Event Type: Warning
    Event Source: BizTalk Server 2009
    Event Category: (1)
    Event ID: 5740
    Date:  10/28/2014
    Time:  7:15:31 PM
    User:  N/A
    The adapter "SQL" raised an error message. Details "HRESULT="0x80004005" Description="Unspecified error"
    Is there a way to change some BizTalk server settings to help in the processing of this large XML set without the SQL adapter crashing?
    Paul

    Could you check SQL Profiler to trace and determine whether you are facing a deadlock?
    Is your adapter running under 64 bits?
    Have you considered the possibility of using the SqlBulkInsert adapter?
    http://blogs.objectsharp.com/post/2005/10/23/Processing-a-Large-Flat-File-Message-with-BizTalk-and-the-SqlBulkInsert-Adapter.aspx

  • Please help me with multiple threads

    I don't understand much about multiple threads, so can anyone help me with this? Can you show me some small, simple code using multiple threads?

    Do you understand much about google? yahoo?
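To actually answer the question, here is about the smallest multiple-threads example that does something (Python; the work function is arbitrary and invented for illustration):

```python
# Minimal multiple-threads example: start a few threads, have each do a tiny
# unit of work, and wait for them all to finish.
import threading

results = []
lock = threading.Lock()

def work(n):
    # The "work": square the input and record it under a lock, since
    # several threads append to the same list.
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=work, args=(i,)) for i in range(5)]
for t in threads:
    t.start()     # all five threads run concurrently
for t in threads:
    t.join()      # wait for every thread to finish
```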

  • Read ini file with multiple threads

    I have a state machine architecture, but I have multiple threads. For instance, one is dealing with listening for multiple TCP connections and storing their refnums to an array in a functional global. Another thread is accessing these refnums and collecting the data it's receiving from each TCP connection. This data is queued up and then dequeued in a third thread, which does some modification and sends it out via serial.
    My question is: when you have a situation like this and have to read an ini file, where should you read it? It seems like the most logical place would be outside your loops, so you can get all the TCP and serial info (port, baud rate, etc.) and then wire it to your create-listener or initialize-serial-connection calls despite them being in different threads. But then again, normal state machine architecture would want an "initialize" case. If you did this, though, which loop would you put the init case in? And you would then have to worry about synchronizing loops, because you wouldn't want one to try to create a listener while another thread was still reading ini data, which would include the port to listen on. Maybe I'm overthinking this, haha. Suggestions?
    Edit: one more question. Does it seem overkill that I have a TCP loop listening for data and queuing it up, and a separate loop sending out the processed data via serial? Or should I just have one state that gets TCP data and stores it in a shift register, then another state that sends it out via serial and returns the state machine to the TCP read state?
    Message Edited by for(imstuck) on 03-03-2010 01:13 PM
    Message Edited by for(imstuck) on 03-03-2010 01:17 PM
    CLA, LabVIEW Versions 2010-2013
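The "read the ini once, before any loop starts" option can be sketched like this (illustrative Python; the sections, keys, and values are made up). The point is that the config is fully parsed before any thread exists, so no loop can ever race the config read:

```python
# Hedged sketch: parse the whole config up front, then pass each thread only
# the settings it needs; no thread ever touches the parser.
import configparser
import threading

INI_TEXT = """
[tcp]
port = 6340
[serial]
baud = 9600
"""

config = configparser.ConfigParser()
config.read_string(INI_TEXT)   # done once, before any thread exists

seen = {}
seen_lock = threading.Lock()

def tcp_loop(port):
    # Stands in for the listener loop; it received its port by value.
    with seen_lock:
        seen["tcp"] = port

def serial_loop(baud):
    # Stands in for the serial-out loop.
    with seen_lock:
        seen["serial"] = baud

t1 = threading.Thread(target=tcp_loop, args=(config.getint("tcp", "port"),))
t2 = threading.Thread(target=serial_loop, args=(config.getint("serial", "baud"),))
t1.start(); t2.start()
t1.join(); t2.join()
```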

    Most of the applications I work on at the moment are used for testing barcode and label printers. The test applications I design are focused on testing the printer's firmware, not the hardware. Within our applications we have three primary objects (unfortunately they are not native LabVIEW objects yet; they were developed before native LVOOP): a log object, a connection object (the communication interface to the printer) and a printer object. In any single instance of a test we have only a single printer, a single connection to the printer, and one or more discrete logs. Each instance of these objects represents a single, real physical entity; a singleton object is a virtual representation of the physical world.
    Let's take the log object, since that is the simplest of the objects described above. Naturally, for a given log file you have the log file name and path. We also provide other attributes, such as the maximum size of a single file (we allow log files to span multiple files), whether it is a comma-delimited file or contains raw data, whether timestamps should be included with a log entry, and so forth. Most of these attributes are static for a log file, with the exception of the name and such things as whether logging is actually enabled or disabled.
    If we split a wire and had multiple instances of the log file (the way native LVOOP actually works), the attribute for whether logging is currently enabled or disabled would only pertain to the specific instance, or specific wire, for that object. Since this truly represents a single item, one log file, we need that attribute to be shared by all references to the instance of the log object. Because of this, we can set an attribute on the log object in any task and it will be reflected in any other task that is using it. Think of the way an action engine or functional global works; however, in this case we provide discrete methods for the various actions.
    I hope that made some sense. If not let me know since I just whipped up this response.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
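The shared-state singleton Mark describes can be sketched outside LabVIEW (illustrative Python, Borg-style; the attribute names are invented, and this is not his actual implementation). Every handle aliases one shared state, so toggling "enabled" through one handle is visible through all the others:

```python
# Hedged sketch: every Log instance aliases ONE shared state dictionary,
# so an attribute set through any handle is seen by every other handle.
class Log:
    _shared = {"enabled": True, "entries": []}   # one state for all instances

    def __init__(self):
        # Borg-style singleton: each instance points at the shared dict.
        self.state = Log._shared

    def set_enabled(self, flag):
        self.state["enabled"] = flag

    def write(self, msg):
        if self.state["enabled"]:
            self.state["entries"].append(msg)

a = Log()                 # handle used in one "task"
b = Log()                 # handle used in another
a.write("first")          # logged while enabled
b.set_enabled(False)      # disabling through b...
a.write("dropped")        # ...is visible through a, so this is not logged
```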

  • After Effects CS6 crashing with multiple monitors

    I recently upgraded from CS4 to CS6 in after effects.
    CS4 ran smoothly and fine.
    However, when I start CS6 I get the typical loading screen; it then tells me that
    GPUSniffer.exe has stopped working. *I click to close the program.*
    Another window comes up:
    After Effects error: Crash in progress. Last logged message was: "<2076><GPUManager><2>Sniffer Result Code: 1
    Then it closes everything. I read online this can happen with multiple monitors. My main screen is connected via SVGA to my graphics card; however, my second (and smaller) screen is connected to the mobo and runs on VGA.
    My graphics card is fully updated, as are most of my other drivers.
    I can unplug the monitor and then AE boots fine, but I don't want to do this every time just for AE.
    My only options are leaving the error message there and waiting for AE to load, or clicking close and having everything crash.
    Thanks in advance,
    Snow.

    Nobody can say anything without exact system info, but using two different graphics adaptors is not the best of ideas. AE most likely tries to use your crappy intel chip or whatever and then crashes. You will have to disable it and only use your dedicated graphics.
    Mylenium

  • DB_INIT_CDB segfaults on put with multiple threads

    Hi,
    I am working on a project that uses several BDB databases, both btree and hash, inside one environment. The system is multithreaded, so I decided to use the Concurrent Data Store subsystem to manage the threads. However, this doesn't seem to be working too well.
    When two threads call db->put on the btree, BDB crashes. The stack trace is below:
    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x46e0a940 (LWP 24704)]
    0x00002aaaab492eb1 in __lock_get_internal () from /usr/local/BerkeleyDB.5.1/lib/libdb-5.1.so
    (gdb) ba
    #0 0x00002aaaab492eb1 in __lock_get_internal () from /usr/local/BerkeleyDB.5.1/lib/libdb-5.1.so
    #1 0x00002aaaab4937d2 in __lock_get () from /usr/local/BerkeleyDB.5.1/lib/libdb-5.1.so
    #2 0x00002aaaab4b53cf in __db_cursor () from /usr/local/BerkeleyDB.5.1/lib/libdb-5.1.so
    #3 0x00002aaaab4a4eb8 in __db_put () from /usr/local/BerkeleyDB.5.1/lib/libdb-5.1.so
    #4 0x00002aaaab4b81a9 in __db_put_pp () from /usr/local/BerkeleyDB.5.1/lib/libdb-5.1.so
    #5 0x00002aaaab787df8 in Db::put(DbTxn*, Dbt*, Dbt*, unsigned int) () from /usr/local/BerkeleyDB.5.1/lib/libdb_cxx-5.1.so
    #6 0x0000000000410da5 in BerkleyDbDataSlab::insertRecord (this=0x7261c0, rKey=0, value=0x46e0a050, size=<value optimized out>) at src/berkleydbbtree.cpp:69
    #7 0x000000000046a2a1 in ReadStoreManager::transferThread (this=0x6b0000, id=<value optimized out>) at src/rsmanager.cpp:55
    #8 0x00002aaaaaaca914 in thread_proxy () from /usr/lib/libboost_thread.so.1.46.1
    #9 0x00000035c560673d in start_thread () from /lib64/libpthread.so.0
    #10 0x0000003ed20d3d1d in clone () from /lib64/libc.so.6
    There are about 20 different databases in use here, and the setup is like so:
    Env Setup:
    u_int32_t env_flags = DB_CREATE | DB_INIT_CDB | DB_INIT_MPOOL;
    this->env = new DbEnv(0);
    env->set_cachesize(0, cacheSize, 1);
    u_int32_t m = 0;
    env->mutex_set_max(2000000);
    env->open(dir.c_str(), env_flags, 0);
    BTree database setup:
    db = new Db(env, 0);
    db->set_bt_compare(compare_double);
    db->set_flags(DB_DUPSORT);
    db->set_pagesize(pageSize);
    db->set_dup_compare(compare_double);
    u_int32_t oFlags = DB_CREATE;
    try {
    db->open(NULL, uuid.c_str(), NULL, DB_BTREE, oFlags, 0);
    This application is running on centos 5.5 with the latest version of bdb installed manually.
    Cheers
    Michael

    Hi Michael,
    820039 wrote:
    I am working on a project that uses several bdb databases both btree, and hashmap inside an environment. This system is multithreaded and as such I decided to use the concurrent data storage subsystem to manage the threads. However this doesn't seem to to be working to well.
    When two threads call db->put on the btree, bdb crashes.
    The Berkeley DB Concurrent Data Store product adds multiple-reader, single-writer capabilities to the Berkeley DB Data Store product. From what I understand, you have two threads that are writing without any serialization. You should either use the Berkeley DB Transactional Data Store, use just one thread to do the db->put operations, or do the serialization yourself, at the application level.
    Please let me know if it helps.
    Additional Documentation:
    The Berkeley DB products - http://download.oracle.com/docs/cd/E17076_01/html/programmer_reference/intro_products.html#id2707424
    Berkeley DB Concurrent Data Store Applications - http://download.oracle.com/docs/cd/E17076_01/html/programmer_reference/cam.html
    Berkeley DB Transactional Data Store Applications - http://download.oracle.com/docs/cd/E17076_01/html/programmer_reference/transapp.html
    Thanks,
    Bogdan
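Bogdan's third option, serializing the writes yourself at the application level, can be sketched like this (illustrative Python; a dict stands in for the btree and a single lock funnels all puts, which is the single-writer discipline CDS expects):

```python
# Hedged sketch: many threads want to write, but every put goes through one
# lock, so at most one writer touches the store at a time.
import threading

store = {}
write_lock = threading.Lock()

def put(key, value):
    # Application-level serialization of writes, as suggested above.
    with write_lock:
        store[key] = value

def writer(base):
    # Each writer thread puts 50 records with keys disjoint from the others.
    for i in range(50):
        put((base, i), i)

threads = [threading.Thread(target=writer, args=(b,)) for b in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```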

  • Berkeley DB XML crash with multiple readers (dbxml-2.5.16 and db-4.8.26)

    I am using Berkeley DB XML (v. 2.5.16 and the bundled underlying Berkeley DB 4.8.26, which I suppose is now fairly old) to manage an XML database which is read by a large number (order 100) of independent worker processes communicating via MPI. These processes only read from the database; a single master process performs writes.
    Everything works as expected with one or two worker processes. But with three or more, I am experiencing database panics with the error
    pthread lock failed: Invalid argument
    PANIC: Invalid argument
    From searching with Google I can see that issues arising from incorrectly setting up the environment to support concurrency are fairly common. But I have not been able to find a match for this problem, and as far as I can make out from the documentation I am using the correct combination of flags; I use DB_REGISTER and DB_RECOVER to handle the fact that multiple processes join the environment independently. Each process uses a single environment handle, and joins using
    DB_ENV* env;
    db_env_create(&env, 0);
    u_int32_t env_flags = DB_INIT_LOG | DB_INIT_MPOOL | DB_REGISTER | DB_RECOVER | DB_INIT_TXN | DB_CREATE;
    env->open(env, path to environment, env_flags, 0);
    Although the environment requests DB_INIT_TXN, I am not currently using transactions. There is an intention to implement this later, but my understanding was that concurrent reads would function correctly without the full transaction infrastructure.
    All workers seem to join the environment correctly, but then fail when an attempt is made to read from the database. They all try to access the same XML document in the same container (because it gives them instructions about what work to perform). However, the worker processes open each container with the read-only flag set:
    DbXml::XmlContainerConfig models_config;
    models_config.setReadOnly(true);
    DbXml::XmlContainer models = this->mgr->openContainer(path to container, models_config);
    Following the database panic, the stack trace is
    [lcd-ds283:27730] [ 0] 2   libsystem_platform.dylib            0x00007fff8eed35aa _sigtramp + 26
    [lcd-ds283:27730] [ 1] 3   ???                                 0x0000000000000000 0x0 + 0
    [lcd-ds283:27730] [ 2] 4   libsystem_c.dylib                   0x00007fff87890bba abort + 125
    [lcd-ds283:27730] [ 3] 5   libc++abi.dylib                     0x00007fff83aff141 __cxa_bad_cast + 0
    [lcd-ds283:27730] [ 4] 6   libc++abi.dylib                     0x00007fff83b24aa4 _ZL25default_terminate_handlerv + 240
    [lcd-ds283:27730] [ 5] 7   libobjc.A.dylib                     0x00007fff89ac0322 _ZL15_objc_terminatev + 124
    [lcd-ds283:27730] [ 6] 8   libc++abi.dylib                     0x00007fff83b223e1 _ZSt11__terminatePFvvE + 8
    [lcd-ds283:27730] [ 7] 9   libc++abi.dylib                     0x00007fff83b21e6b _ZN10__cxxabiv1L22exception_cleanup_funcE19_Unwind_Reason_CodeP17_Unwind_Exception + 0
    [lcd-ds283:27730] [ 8] 10  libdbxml-2.5.dylib                  0x000000010f30e4de _ZN5DbXml18DictionaryDatabaseC2EP8__db_envPNS_11TransactionERKNSt3__112basic_stringIcNS5_11char_traitsIcEENS5_9allocatorIcEEEERKNS_15ContainerConfigEb + 1038
    [lcd-ds283:27730] [ 9] 11  libdbxml-2.5.dylib                  0x000000010f2f348c _ZN5DbXml9Container12openInternalEPNS_11TransactionERKNS_15ContainerConfigEb + 1068
    [lcd-ds283:27730] [10] 12  libdbxml-2.5.dylib                  0x000000010f2f2dec _ZN5DbXml9ContainerC2ERNS_7ManagerERKNSt3__112basic_stringIcNS3_11char_traitsIcEENS3_9allocatorIcEEEEPNS_11TransactionERKNS_15ContainerConfigEb + 492
    [lcd-ds283:27730] [11] 13  libdbxml-2.5.dylib                  0x000000010f32a0af _ZN5DbXml7Manager14ContainerStore13findContainerERS0_RKNSt3__112basic_stringIcNS3_11char_traitsIcEENS3_9allocatorIcEEEEPNS_11TransactionERKNS_15ContainerConfigEb + 175
    [lcd-ds283:27730] [12] 14  libdbxml-2.5.dylib                  0x000000010f329f75 _ZN5DbXml7Manager13openContainerERKNSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEEPNS_11TransactionERKNS_15ContainerConfigEb + 101
    [lcd-ds283:27730] [13] 15  libdbxml-2.5.dylib                  0x000000010f34cd46 _ZN5DbXml10XmlManager13openContainerERKNSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEERKNS_18XmlContainerConfigE + 102
    Can I ask if it's clear to anyone what I am doing wrong?

    Is it possible that the root problem to this is in the MPI code or usage?  Because if the writer process crashes while holding an active transaction or open database handles, it could leave the environment in an inconsistent state that would result in the readers throwing a PANIC error when they notice the inconsistent environment.
    Thanks for looking into this.
    It looks like there was a small typo in the code I quoted, and I think this is what caused the segmentation fault or memory corruption. Although I checked a few times that the code snippet produced the expected results before posting it, I must have been unlucky in that it just happened not to segfault on those attempts.
    This is a corrected version:
    #include <cstdlib>
    #include <iostream>
    #include <string>
    #include <vector>
    #include "dbxml/db.h"
    #include "dbxml/dbxml/DbXml.hpp"
    #include "boost/mpi.hpp"
    static std::string envname = std::string("test");
    static std::string pkgname = std::string("packages.dbxml");
    static std::string intname = std::string("integrations.dbxml");
    int main(int argc, char *argv[])
    {
        boost::mpi::environment  mpi_env;
        boost::mpi::communicator mpi_world;
        if(mpi_world.rank() == 0)
        {
            std::cerr << "-- Writer creating environment" << std::endl;
            DB_ENV *env;
            int dberr = ::db_env_create(&env, 0);
            std::cerr << "**   creation response = " << dberr << std::endl;
            if(dberr > 0) std::cerr << "**   " << ::db_strerror(dberr) << std::endl;
            std::cerr << "-- Writer opening environment" << std::endl;
            u_int32_t env_flags = DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | DB_REGISTER | DB_RECOVER | DB_INIT_TXN | DB_CREATE;
            dberr = env->open(env, envname.c_str(), env_flags, 0);
            std::cerr << "**   opening response = " << dberr << std::endl;
            if(dberr > 0) std::cerr << "**   " << ::db_strerror(dberr) << std::endl;
            // set up XmlManager object
            DbXml::XmlManager *mgr = new DbXml::XmlManager(env, DbXml::DBXML_ADOPT_DBENV | DbXml::DBXML_ALLOW_EXTERNAL_ACCESS);
            // create containers - these will be used by the workers
            DbXml::XmlContainerConfig pkg_config;
            DbXml::XmlContainerConfig int_config;
            pkg_config.setTransactional(true);
            int_config.setTransactional(true);
            std::cerr << "-- Writer creating containers" << std::endl;
            DbXml::XmlContainer packages     = mgr->createContainer(pkgname.c_str(), pkg_config);
            DbXml::XmlContainer integrations = mgr->createContainer(intname.c_str(), int_config);
            std::cerr << "-- Writer instructing workers" << std::endl;
            std::vector<boost::mpi::request> reqs(mpi_world.size() - 1);
            for(unsigned int i = 1; i < mpi_world.size(); i++)
            {
                reqs[i - 1] = mpi_world.isend(i, 0); // instruct workers to open the environment
            }
            // wait for all messages to be received
            boost::mpi::wait_all(reqs.begin(), reqs.end());
            std::cerr << "-- Writer waiting for termination responses" << std::endl;
            // wait for workers to advise successful termination
            unsigned int outstanding_workers = mpi_world.size() - 1;
            while(outstanding_workers > 0)
            {
                boost::mpi::status stat = mpi_world.probe();
                switch(stat.tag())
                {
                    case 1:
                        mpi_world.recv(stat.source(), 1);
                        outstanding_workers--;
                        break;
                }
            }
            delete mgr; // exit, closing database and environment
        }
        else
        {
            mpi_world.recv(0, 0);
            std::cerr << "++ Reader " << mpi_world.rank() << " beginning work" << std::endl;
            DB_ENV *env;
            ::db_env_create(&env, 0);
            u_int32_t env_flags = DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | DB_REGISTER | DB_RECOVER | DB_INIT_TXN | DB_CREATE;
            env->open(env, envname.c_str(), env_flags, 0);
            // set up XmlManager object
            DbXml::XmlManager *mgr = new DbXml::XmlManager(env, DbXml::DBXML_ADOPT_DBENV | DbXml::DBXML_ALLOW_EXTERNAL_ACCESS);
            // open containers which were set up by the master
            DbXml::XmlContainerConfig pkg_config;
            DbXml::XmlContainerConfig int_config;
            pkg_config.setTransactional(true);
            pkg_config.setReadOnly(true);
            int_config.setTransactional(true);
            int_config.setReadOnly(true);
            DbXml::XmlContainer packages     = mgr->openContainer(pkgname.c_str(), pkg_config);
            DbXml::XmlContainer integrations = mgr->openContainer(intname.c_str(), int_config);
            mpi_world.isend(0, 1);
            delete mgr; // exit, closing database and environment
        }
        return (EXIT_SUCCESS);
    }
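    As an aside, the writer's termination loop (probe the communicator, then dispatch on the message tag, decrementing a counter of outstanding workers) is a standard collector pattern. Here is a minimal sketch of the same handshake using only standard C++ threads rather than Boost.MPI; the names Mailbox, Message, and run_handshake are made up for illustration:

    ```cpp
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Tag values mirror the MPI protocol above:
    // tag 0 = "start work", tag 1 = "worker finished".
    struct Message { int source; int tag; };

    // A thread-safe queue standing in for the MPI message layer.
    class Mailbox {
    public:
        void send(Message m) {
            std::lock_guard<std::mutex> lk(mtx_);
            q_.push(m);
            cv_.notify_one();
        }
        Message recv() {
            std::unique_lock<std::mutex> lk(mtx_);
            cv_.wait(lk, [this] { return !q_.empty(); });
            Message m = q_.front();
            q_.pop();
            return m;
        }
    private:
        std::mutex mtx_;
        std::condition_variable cv_;
        std::queue<Message> q_;
    };

    // Collector loop: wait until every worker has reported termination,
    // dispatching on the message tag as the switch(stat.tag()) loop does.
    int run_handshake(int workers) {
        Mailbox inbox;
        std::vector<std::thread> pool;
        for (int i = 1; i <= workers; ++i)
            pool.emplace_back([&inbox, i] {
                // ... a real worker would open its containers and read here ...
                inbox.send({i, 1});   // advise successful termination
            });
        int finished = 0;
        int outstanding = workers;
        while (outstanding > 0) {
            Message m = inbox.recv();  // analogous to probe() followed by recv()
            switch (m.tag) {
                case 1:
                    ++finished;
                    --outstanding;
                    break;
            }
        }
        for (auto &t : pool) t.join();
        return finished;
    }

    int main() {
        std::cout << run_handshake(4) << " workers finished" << std::endl;
        return 0;
    }
    ```

    The point of the sketch is that the handshake itself is unremarkable: each worker posts one tag-1 message and the collector loops until its outstanding count reaches zero, so the protocol logic is unlikely to be the source of the PANIC.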
    This reproducibly causes the crash on OS X Mavericks 10.9.1. I have also checked that it reproducibly crashes on a virtualized OS X Mountain Lion 10.8.5, but I do not see any crashes on a virtualized Ubuntu 13.10. My full code likewise works as expected with a large number of readers under the virtualized Ubuntu. I am compiling with clang and libc++ on OS X, and gcc 4.8.1 and libstdc++ on Ubuntu, but using Open MPI in both cases. Edit: I have also compiled with clang and libc++ on Ubuntu, and it works equally well.
    Because the virtualized OS X experiences the crash, I hope the fact that it works on Ubuntu is not just an artefact of virtualization. (Unfortunately I don't currently have a physical Linux machine with which to check.) In that case the implication would seem to be that it's an OS X-specific problem. 2nd edit (14 Feb 2014): I have now managed to test on a physical Linux cluster, and it appears to work as expected. Therefore it does appear to be an OS X-specific issue.
    On either OS X 10.8 or 10.9, the crash produces this output:
    -- Writer creating environment
    **   creation response = 0
    -- Writer opening environment
    **   opening response = 0
    -- Writer creating containers
    ++ Reader 7 beginning work
    -- Writer instructing workers
    -- Writer waiting for termination responses
    ++ Reader 1 beginning work
    ++ Reader 2 beginning work
    ++ Reader 3 beginning work
    ++ Reader 4 beginning work
    ++ Reader 5 beginning work
    ++ Reader 6 beginning work
    pthread lock failed: Invalid argument
    PANIC: Invalid argument
    PANIC: fatal region error detected; run recovery
    PANIC: fatal region error detected; run recovery
    PANIC: fatal region error detected; run recovery
    PANIC: fatal region error detected; run recovery
    PANIC: fatal region error detected; run recovery
    PANIC: fatal region error detected; run recovery
    PANIC: fatal region error detected; run recovery
    libc++abi.dylib: terminate called throwing an exception
    [mountainlion-test-rig:00319] *** Process received signal ***
    [mountainlion-test-rig:00319] Signal: Abort trap: 6 (6)
    [mountainlion-test-rig:00319] Signal code:  (0)
    David

  • Acrobat Pro 9.5 crashing with Multiple prints

    I have Acrobat 9.5 Pro on multiple machines in my office. When trying to print to ANY printer, even to another PDF, if I select more than 200-ish pages (8 1/2 x 11, simple prints), it crashes Adobe Acrobat. This happens whether it's a binder or a portfolio. Also, printing from Acrobat really bogs down the systems.
    The repair tool says no repair is needed, and I know my systems are powerful enough (these machines are built for high-end graphics CAD use).
    Windows 7 Ultimate
    Intel i7-2700k CPU @3.5GHz  3.90 GHz
    16 GB ram
    64bit OS
    Any suggestions or patches would be gratefully received. I would really like to be able to send my 400+ page PDF to print and walk away.
    -Lisa

    I already have 9.5.4.
    I have over 500 GB of free space on my hard drive.
    I print to B&W, to color, and just back to the Adobe driver (to create another PDF as a test, and it still crashes if I attempt more than about 200 pages). I have tried it with 6 different printers, even at economy print quality. I always keep all my drivers (like software updates) up to date; in fact, I checked to make sure they all were when I got the first crash.
    If I select more than about 200 pages, the entire program crashes; it doesn't even attempt to start printing.
    Printer spooling is not on.
    I am just printing simple B&W line drawings (at 8 1/2 x 11).
    Here are just a few printers I have tried...
    Kyocera TASKalfa 520i KX PS and PCL
    Xerox 4110 PCL & PS
    Xerox 4112 PCL & PS
    Xerox 7545 PS & PCL
    Xerox Phaser 7760
