Berkeley DB and DB optimization

Hi,
I have been testing BerkeleyDB-4.7.25 with 16 *.bdb files using BTREE on a 64-bit Linux server. Each *.bdb file grows to approximately 3.2 GB.
I have run a set of operations that includes puts/gets/updates/deletes.
I would like to ask a couple of questions, please:
1)
Is there any Berkeley DB tool or function to optimize the *.bdb files after deletions? (See the compaction sketch below.)
2)
I have been running db_stat -e (please find the output of db_stat below) and trying to improve some of the DB_CONFIG parameters.
set_flags DB_TXN_WRITE_NOSYNC
set_cachesize 0 2147483648 1
mutex_set_max 1000000
set_tx_max 500000
set_lg_regionmax 524288
set_lg_bsize 4194304
set_lg_max 20971520
set_lk_max_locks 10000
set_lk_max_lockers 10000
set_lk_max_objects 10000
I have increased the cache size, but it does not seem to be helping to improve the operation response times.
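For reference, the DB_CONFIG lines above correspond roughly to the following C-API calls, made on the environment handle before DB_ENV->open (a sketch only; error handling omitted):
==================================================
#include <db.h>

/* Programmatic equivalent of the DB_CONFIG file above. */
void configure_env(DB_ENV *dbenv)
{
    dbenv->set_flags(dbenv, DB_TXN_WRITE_NOSYNC, 1);
    dbenv->set_cachesize(dbenv, 0, 2147483648UL, 1); /* 2GB in 1 region */
    dbenv->mutex_set_max(dbenv, 1000000);
    dbenv->set_tx_max(dbenv, 500000);
    dbenv->set_lg_regionmax(dbenv, 524288);
    dbenv->set_lg_bsize(dbenv, 4194304);             /* 4MB log buffer */
    dbenv->set_lg_max(dbenv, 20971520);              /* 20MB log files */
    dbenv->set_lk_max_locks(dbenv, 10000);
    dbenv->set_lk_max_lockers(dbenv, 10000);
    dbenv->set_lk_max_objects(dbenv, 10000);
}
==================================================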
I would really appreciate any help.
Would the use of DB_SYSTEM_MEM (creating the shared regions in system shared memory) help?
Would preallocating the db files help?
Would increasing the log buffer help?
Would increasing the log size help (based on the values for data written since the last checkpoint in db_stat)?
Could you please help?
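Regarding question 1: btree databases in this release can be compacted in place with DB->compact, and db_dump/db_load can rebuild a file from scratch. A minimal compaction sketch (the open handle is assumed, error handling omitted):
==================================================
#include <string.h>
#include <db.h>

/* Walk the whole tree (NULL start/stop keys), coalesce underfull
 * pages, and with DB_FREE_SPACE return emptied pages at the end of
 * the file to the filesystem. */
int compact_whole_db(DB *dbp)
{
    DB_COMPACT c_data;

    memset(&c_data, 0, sizeof(c_data));
    return dbp->compact(dbp, NULL, NULL, NULL, &c_data,
        DB_FREE_SPACE, NULL);
}
==================================================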
Thanks,
Mariella
This is the output of db_stat -e:
0x40988 Log magic number
14 Log version number
4MB Log record cache size
0 Log file mode
20Mb Current log file size
72M Records entered into the log (72944260)
92GB 761MB 385KB 636B Log bytes written
1GB 805MB 40KB 747B Log bytes written since last checkpoint
6596982 Total log file I/O writes
0 Total log file I/O writes due to overflow
7295 Total log file flushes
39228 Total log file I/O reads
4749 Current log file number
18526992 Current log file offset
4748 On-disk log file number
20970984 On-disk log file offset
1 Maximum commits in a log flush
1 Minimum commits in a log flush
4MB 512KB Log region size
303613 The number of region locks that required waiting (0%)
100 Last allocated locker ID
0x7fffffff Current maximum unused locker ID
9 Number of lock modes
10000 Maximum number of locks possible
10000 Maximum number of lockers possible
10000 Maximum number of lock objects possible
40 Number of lock object partitions
16 Number of current locks
274 Maximum number of locks at any one time
7 Maximum number of locks in any one bucket
0 Maximum number of locks stolen by for an empty partition
0 Maximum number of locks stolen for any one partition
100 Number of current lockers
108 Maximum number of lockers at any one time
16 Number of current lock objects
176 Maximum number of lock objects at any one time
4 Maximum number of lock objects in any one bucket
0 Maximum number of objects stolen by for an empty partition
0 Maximum number of objects stolen for any one partition
118M Total number of locks requested (118356655)
118M Total number of locks released (118356639)
119802 Total number of locks upgraded
16 Total number of locks downgraded
20673 Lock requests not available due to conflicts, for which we waited
0 Lock requests not available due to conflicts, for which we did not wait
0 Number of deadlocks
0 Lock timeout value
0 Number of locks that have timed out
500000 Transaction timeout value
0 Number of transactions that have timed out
7MB 768KB The size of the lock region
5019 The number of partition locks that required waiting (0%)
328 The maximum number of times any partition lock was waited for (0%)
0 The number of object queue operations that required waiting (0%)
280 The number of locker allocations that required waiting (0%)
958 The number of region locks that required waiting (0%)
4 Maximum hash bucket length
2GB Total cache size
1 Number of caches
1 Maximum number of caches
2GB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
150M Requested pages found in the cache (92%)
12M Requested pages not found in the cache (12855704)
8449044 Pages created in the cache
12M Pages read into the cache (12855704)
20M Pages written from the cache to the backing file (20044721)
32M Clean pages forced from the cache (32698230)
1171137 Dirty pages forced from the cache
9227380 Dirty pages written by trickle-sync thread
505880 Current total page count
356352 Current clean page count
149528 Current dirty page count
262147 Number of hash buckets used for page location
184M Total number of times hash chains searched for a page (184542797)
34 The longest hash chain searched for a page
945M Total number of hash chain entries checked for page (945465289)
430 The number of hash bucket locks that required waiting (0%)
34 The maximum number of times any hash bucket lock was waited for (0%)
5595 The number of region locks that required waiting (0%)
0 The number of buffers frozen
0 The number of buffers thawed
0 The number of frozen buffers freed
34M The number of page allocations (34375350)
76M The number of hash buckets examined during allocations (76979039)
18 The maximum number of hash buckets examined for an allocation
33M The number of pages examined during allocations (33869157)
4 The max number of pages examined for an allocation
2 Threads waited on page I/O
Pool File: file_p10.bdb
4096 Page size
0 Requested pages mapped into the process' address space
9376233 Requested pages found in the cache (92%)
800764 Requested pages not found in the cache
526833 Pages created in the cache
800764 Pages read into the cache
1179504 Pages written from the cache to the backing file
Pool File: file_p3.bdb
4096 Page size
4658/8873223 File/offset for last checkpoint LSN
Thu Apr 30 22:00:23 2009 Checkpoint timestamp
0x806584b8 Last transaction ID allocated
500000 Maximum number of active transactions configured
0 Active transactions
8 Maximum active transactions
6653112 Number of transactions begun
60327 Number of transactions aborted
6592785 Number of transactions committed
144048 Snapshot transactions
257302 Maximum snapshot transactions
0 Number of transactions restored
185MB 24KB Transaction region size
90116 The number of region locks that required waiting (0%)
Active transactions:
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
129MB 720KB Mutex region size
108 The number of region locks that required waiting (0%)
4 Mutex alignment
200 Mutex test-and-set spins
1000000 Mutex total count
331261 Mutex free count
668739 Mutex in-use count
781915 Mutex maximum in-use count
Mutex counts
331259 Unallocated
16 db handle
1 env dblist
2 env handle
1 env region
43 lock region
274 logical lock
1 log filename
1 log flush
2 log region
16 mpoolfile handle
16 mpool filehandle
17 mpool file bucket
1 mpool handle
262147 mpool hash bucket
262147 mpool buffer I/O
1 mpool region
1 mutex region
1 twister
1 txn active list
1 transaction checkpoint
144050 txn mvcc
1 txn region

user11096811 wrote:
i have same question

What is the question exactly? What DB release are you using?

user11096811 wrote:
the app throws com.sleepycat.db.LockNotGrantedException. what should i do?

The LockNotGrantedException being thrown is a subclass of DeadlockException.
A LockNotGrantedException is thrown when a lock requested using the Environment.getLock or Environment.lockVector methods, where the noWait flag or lock timers were configured, could not be granted before the wait-time expired.
Additionally, LockNotGrantedException is thrown when a Concurrent Data Store database environment configured for lock timeouts was unable to grant a lock in the allowed time.
Additionally, LockNotGrantedException is thrown when lock or transaction timeouts have been configured and a database operation has timed out. Applications can handle all deadlocks by catching the DeadlockException. You can read more on how to configure the locking subsystem and resolve deadlocks at [The Locking Subsystem|http://www.oracle.com/technology/documentation/berkeley-db/db/gsg_txn/JAVA/lockingsubsystem.html].
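For reference, in the C API the same condition surfaces as a DB_LOCK_DEADLOCK (or DB_LOCK_NOTGRANTED, when lock/transaction timeouts are configured) return code, and the standard response is the same: abort the transaction to release its locks, then retry. A minimal sketch; the function name and retry limit are assumptions:
==================================================
#include <db.h>

/* Retry a transactional put when it loses a deadlock or times out. */
int put_with_retry(DB_ENV *dbenv, DB *dbp, DBT *key, DBT *data)
{
    DB_TXN *txn;
    int attempt, ret;

    for (attempt = 0; attempt < 5; attempt++) {
        if ((ret = dbenv->txn_begin(dbenv, NULL, &txn, 0)) != 0)
            return ret;
        if ((ret = dbp->put(dbp, txn, key, data, 0)) == 0)
            return txn->commit(txn, 0);
        txn->abort(txn);        /* release locks before retrying */
        if (ret != DB_LOCK_DEADLOCK && ret != DB_LOCK_NOTGRANTED)
            return ret;         /* a real error: do not retry */
    }
    return ret;                 /* still deadlocking after 5 tries */
}
==================================================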
Thanks,
Bogdan Coman

Similar Messages

  • Check and Update  Optimizer Statistics Error

    Hello,
    I have scheduled 'Check and Update Optimizer Statistics' from DB13 but I always get this error.
      BR0280I BRCONNECT time stamp: 2009-01-05 01.00.46
      BR0301E SQL error -20000 at location stats_ind_collect-3, SQL statement:
      'BEGIN DBMS_STATS.GATHER_INDEX_STATS (OWNNAME => '"SAPSR3"', INDNAME => '"/BIC/B0000412000KE"', ESTIMATE_PERCENT => 30, DEGREE => NULL, NO_INVALIDATE => FALSE); END;'
      ORA-20000: index "SAPSR3"."/BIC/B0000412000KE"  or partition of such index is in unusable state
      ORA-06512: at "SYS.DBMS_STATS", line 10610
      ORA-06512: at "SYS.DBMS_STATS", line 10645
      ORA-06512: at line 1
      BR0886E Checking/collecting statistics failed for index SAPSR3./BIC/B0000412000KE
    Tried checking the table /BIC/B0000412000KE and got this from the check:
      Enhancement category for table missing
      Enhancement category for include or subtype missing
      Table /BIC/B0000412000 was checked with warnings
    I am still checking what this means, but I would appreciate it if you have any idea on how to solve it.
    Thank you.
    Best Regards,
    Julius

    Hi Julius,
    Have you tried scheduling the update stats via DB13 'Immediately'? Does it still give you the same error when it is done 'immediately'?
    Try out DB02 and check your whole database along with missing indexes. Refresh the database and update the histories.
    Regards,
    Pranay

  • DB13 Check and update optimizer stat error: ORA-01652: unable to extend tem

    Hi SAP Gurus,
    When running Check and Update Optimizer Statistics in DB13, an error occurs.
    BR0280I BRCONNECT time stamp: 2011-03-10 06.35.52                    
    BR0301E SQL error -1652 at location stats_tab_collect-16             
    ORA-01652: unable to extend temp segment by 12137 in tablespace SYSTEM
    BR0886E Checking/collecting statistics failed for table SAPR3.ACCTIT 
    BR0280I BRCONNECT time stamp: 2011-03-10 06.36.49                    
    BR0850I 3 of 39479 objects processed - 3.522 of 342.781 units done   
    BR0204I Percentage done: 1.03%, estimated end time: 15:47            
    Looking at tablespace SYSTEM in DB02, the percent used is 58. The auto-extend feature is OFF. Will turning auto-extend ON remove the error?

    It seems like your user has (by mistake) tablespace SYSTEM as its temporary tablespace.
    Connect as sysdba in your database and check which temp tablespaces are available, and their sizes:
    select tablespace_name, sum(bytes)/1024/1024 "Size of TEMP TBS in MB" from dba_temp_files group by tablespace_name;
    Check which users have SYSTEM as their temp tablespace:
    select username, temporary_tablespace from dba_users where temporary_tablespace like 'SYSTEM' order by 1;
    Change those users to have one of your temp tablespaces:
    alter user &username temporary tablespace &&TempTBS;
    It's also a good idea to have autoextend on for your temp TBS, but remember to set maxsize so you don't fill up your disk system.
    Hope this solves your problems.
    Regards
    Audun
    DBA

  • Check and update optimizer statistics failed

    Hello Friends,
    Every time I run the job 'Check and update optimizer statistics' from DB13 on my BI production server, it fails. Please view the logs:
    BR0301E SQL error -20000 at location stats_tab_collect-20, SQL statement:                                                                    
    'BEGIN DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => '"SAPSR3"', TABNAME => '"/BIC/B0000228000"', ESTIMATE_PERCENT => NULL, METHOD_OPT => 'FOR ALL
    ORA-20000: index "SAPSR3"."/BIC/B0000228000KE"  or partition of such index is in unusable state                                              
    ORA-06512: at "SYS.DBMS_STATS", line 13159                                                                               
    ORA-06512: at "SYS.DBMS_STATS", line 13179                                                                               
    ORA-06512: at line 1                                                                               
    BR0886E Checking/collecting statistics failed for table SAPSR3./BIC/B0000228000                                                              
    BR0280I BRCONNECT time stamp: 2009-05-19 18.17.19                                                                               
    BR0883I Table selected to collect statistics after check: SAPSR3./BIC/B0000229000 (161130/1480:0:0)                                          
    BR0280I BRCONNECT time stamp: 2009-05-19 18.17.19                                                                               
    BR0881I Collecting statistics for table SAPSR3./BIC/B0000229000 with method/sample E/P10 ...                                                 
    BR0280I BRCONNECT time stamp: 2009-05-19 18.17.21                                                                               
    BR0301E SQL error -20000 at location stats_tab_collect-20, SQL statement:                                                                    
    'BEGIN DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => '"SAPSR3"', TABNAME => '"/BIC/B0000229000"', ESTIMATE_PERCENT => 10, METHOD_OPT => 'FOR ALL C
    ORA-20000: index "SAPSR3"."/BIC/B0000229000KE"  or partition of such index is in unusable state                                              
    ORA-06512: at "SYS.DBMS_STATS", line 13159                                                                               
    ORA-06512: at "SYS.DBMS_STATS", line 13179                                                                               
    ORA-06512: at line 1                                                                               
    BR0886E Checking/collecting statistics failed for table SAPSR3./BIC/B0000229000                                                              
    BR0280I BRCONNECT time stamp: 2009-05-19 18.17.21                                                                               
    BR0883I Table selected to collect statistics after check: SAPSR3./BIC/B0000230000 (0/13545:0:0)                                              
    BR0280I BRCONNECT time stamp: 2009-05-19 18.17.21                                                                               
    BR0881I Collecting statistics for table SAPSR3./BIC/B0000230000 with method/sample E/P30 ...                                                 
    BR0280I BRCONNECT time stamp: 2009-05-19 18.17.21                                                                               
    BR0301E SQL error -20000 at location stats_tab_collect-20, SQL statement:                                                                    
    'BEGIN DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => '"SAPSR3"', TABNAME => '"/BIC/B0000230000"', ESTIMATE_PERCENT => 30, METHOD_OPT => 'FOR ALL C
    ORA-20000: index "SAPSR3"."/BIC/B0000230000KE"  or partition of such index is in unusable state                                              
    ORA-06512: at "SYS.DBMS_STATS", line 13159                                                                               
    ORA-06512: at "SYS.DBMS_STATS", line 13179                                                                               
    ORA-06512: at line 1                                                                               
    BR0886E Checking/collecting statistics failed for table SAPSR3./BIC/B0000230000                                                              
    Please help me to resolve the issue.

    Some of your PSA tables are inconsistent (e.g. /BIC/B0000228000) - check the PSA partitioning in RSRV to correct them, or delete data from them using SE14, and try running the job again.

  • Re: Full Optimize and Lite Optimize Disabled

    Hi,
    We have a BPC app.
    What we need now is for a particular group of users to have no access to run Full Optimize or Lite Optimize.
    Please can anyone advise how to disable this functionality or remove authorization for this particular set of users so that they cannot execute any optimization.
    Regards.

    Hi,
    Follow below steps:
    1. Create a Team and assign users those should have an access for Full and light optimization
    2. Go to Manage data > Maintain data management > Manage Packages
        --> Select the team which you created in step 1
        --> Select add package
        --> Choose process chain /CPMB/LIGHT_OPTIMIZE
        --> Give the package a name and description
        --> Select task type as User Package
        --> Click Save
    3. Create one more package for Full Optimization, as explained in step 2, under the same team
    Now users who belong to this new team will be able to run both Full and Lite optimizations, provided the Execute task is assigned under their task profile.
    By following the above steps you can assign many packages to a team, and you can modify team access from Manage Team user package access if you need.
    Hope it helps..
    Regards,
    Raju

  • Cannot execute planning operators Unconstrained and Profit Optimization in SAP2

    Hi,
    We cannot execute supply planning operators Unconstrained and Profit Optimization with user JSMITH in planning area SAP2, although these jobs were successfully executed by the same user in the past.
    Any idea as to what could be the problem?
    We are using SOP 3.0.2.1, EPM version 10.0 SP 18 patch 2 build 8873.
    Thanks,
    Bernard

    Looks like you are running in batch mode. Perhaps the background user for your system is not set up (an operational issue for which you can file a message).
    If you run in simulate mode (using the simulate button), does it work? If not, perhaps your current period is not May 2014 and the data set assumes this. (You can change the current period offset with something similar to: update "SAP_SFND"."sap.sop.sopfnd.catalogue::SOPDM_PLANAREASET" set planoffset = 0 where plset = 'SM1BASESAPMODEL1')

  • Touch Optimizer and Mouse Optimizer don't work since updates

    I noticed during the last update that the Mouse Optimizer and Touch Optimizer programs don't work. When you go to open them, it says the application failed to start because its side-by-side configuration is incorrect. Any ideas?

    Fortunately, I had a ghosted image of my HD, so I just had to roll it back one week. It is now where it was one week prior to these upgrades, and there is no sign of any sort of "optimizers".
    I had the update program run again, and sure enough these same upgrades are showing up again. But upon reading them more carefully, they are for the 8 series, which might be why they create a problem on the 5 series. Unfortunately, a complete restore is the only way to be rid of these "upgrades".
    They really should only be sent down to series 8 machines. I didn't read it too carefully and installed it myself; next time I will be a little more careful...

  • Differece between SNP Optimizer and Deployment Optimizer

    Hi,
    Can anyone please list down the difference in the planning method for a deployment optimizer and SNP Optimizer?
    Thanks & Regards,
    Sanjog Mishrikotkar

    Hi Sanjog,
    First of all, if we understand the difference between an SNP heuristic planning run, which finds the source of supply with dates and quantities, and a deployment planning run, which CONFIRMS the supply, then it is easy to understand the difference between the SNP and Deployment Optimizers.
    The Optimizer, as you know, optimizes based on costs and objectively tries to MINIMIZE them. So while the SNP Optimizer finds the most cost-effective source of supply with dates and quantities, identifying where in the supply chain it is better to store or move the product, the Deployment Optimizer generates the best way to CONFIRM whether the supply can ACTUALLY be made for the next few days. Deployment precedes the TLB run, which after confirmation puts the quantities on a transport load to build orders for execution (shipping).
    Both are cost-based and use the same cost information; however, one plans and the other confirms. During a deployment optimization run, the Optimizer may decide to confirm the supply from a different source than what the SNP Optimizer planned, based on the available-to-deploy stock quantities and the cost of confirming the supply. The Deployment Optimizer will apply fair-share and push/pull rules and looks at the push and pull deployment horizons, which the SNP Optimizer cannot. The difference is also in the planning time range: you plan SNP supply for a mid- to long-term time range, while deployment looks at confirming the supply in the next few days from TODAY.
    So, in short, first understand the difference between an SNP heuristic and a deployment heuristic, and apply the same principle to cost-based optimization. This should tell you the difference between the two.
    Try reading this ... the first para is on the Deployment Optimizer ...
    http://help.sap.com/saphelp_scm50/helpdata/en/1c/4d7a375f0dbc7fe10000009b38f8cf/frameset.htm
    Read the first paragraph as well as the 'Distribution based on Lowest Costs' section.
    Hope you find this answer useful. Reward points if it is.
    Regards,
    Ambrish Mathur

  • Differences listed between ASCP Optimization and Inventory Optimization ?

    Does anyone have a ready-made document listing the differences between the ASCP optimization and Inventory Optimization modules?
    thx for your help.
    thx and rgds,
    Pankaj

    Hi,
    As explained, IO will recommend time-phased safety stocks; it also provides various constraint options, which we can make use of as per our requirements.
    In ASCP, following optimization are available:
    1. Maximize inventory turns
    2. Maximize plan profit
    3. Maximize on-time delivery
    If you want to use the same optimization in IO, we can fulfill the requirement with the available IO constraints - say, maximize on-time delivery through the enforce-service-level constraint.
    Hope it will help you.
    Tks
    M J

  • Berkeley DB and Tuxedo

    Dear all,
    I am trying to set up Berkeley DB from Sleepycat Software (an open
    source database implementation) as a backend database for Tuxedo with
    X/Open transaction support on a HP-UX 11 System. According to the
    documentation, this should work. I have successfully compiled and
    started the resource manager (called DBRM) and from the logs
    everything looks fine.
    The trouble starts, however, when I try to start services that use
    DBRM. The startup call for opening the database enviroment ("database
    enviroment" is a Berkeley DB specific term that refers to a grouping
    of files that are opened together with transaction support) fails with
    the error message
    error: 12 (Not enough space)
    Some digging in the documentation for Berkeley DB reveals the
    following OS specific snippet (DBENV->open is the function call that
    causes the error message above):
    <quote>
    An ENOMEM error is returned from DBENV->open or DBENV->remove.
    Due to the constraints of the PA-RISC memory architecture, HP-UX
    does not allow a process to map a file into its address space
    multiple times. For this reason, each Berkeley DB environment may
    be opened only once by a process on HP-UX, i.e., calls to
    DBENV->open will fail if the specified Berkeley DB environment
    has been opened and not subsequently closed.
    </quote>
    OK. So it appears that a call to DBENV->open does a mmap and that
    cannot happen twice on the same file in the same process. Looking at
    the source for the resource manager DBRM it appears, that there is
    indeed a Berkeley DB enviroment that is opened (once), otherwise
    transactions would not work. A ps -l on the machine in question looks
    like this (I have snipped a couple of columns to fit into a newsreader):
    UID PID PPID C PRI NI ADDR SZ TIME COMD
    101 29791 1 0 155 20 1017d2c00 84 0:00 DBRM
    101 29787 1 0 155 20 10155bb00 81 0:00 TMS_QM
    101 29786 1 0 155 20 106d54400 81 0:00 TMS_QM
    101 29790 1 0 155 20 100ed2200 84 0:00 DBRM
    0 6742 775 0 154 20 1016e3f00 34 0:00 telnetd
    101 29858 6743 2 178 20 100ef3900 29 0:00 ps
    101 29788 1 0 155 20 100dfc500 81 0:00 TMS_QM
    101 29789 1 0 155 20 1024c8c00 84 0:00 DBRM
    101 29785 1 0 155 20 1010d7e00 253 0:00 BBL
    101 6743 6742 0 158 20 1017d2e00 222 0:00 bash
    So every DBRM is started as its own process and the service process
    (which does not appear above) would be its own process as well. So how
    can it happen that mmap on the same file is called twice in the same
    process? What exactly does tmboot do in terms of startup code? Is it
    just a couple of fork/execs or is there more involved?
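    In case it helps anyone hitting the same ENOMEM: the usual way to respect this constraint is to open the environment exactly once per process and reuse the DB_ENV handle everywhere. A minimal sketch (the guard function and flag choices are my assumptions, not taken from the DBRM source):
    ==================================================
    #include <db.h>

    static DB_ENV *dbenv;   /* one environment handle per process */

    int open_env_once(const char *home)
    {
        int ret;

        if (dbenv != NULL)  /* already open: reuse, never re-open */
            return 0;
        if ((ret = db_env_create(&dbenv, 0)) != 0)
            return ret;
        ret = dbenv->open(dbenv, home, DB_CREATE | DB_INIT_MPOOL |
            DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN, 0);
        if (ret != 0) {
            dbenv->close(dbenv, 0);
            dbenv = NULL;
        }
        return ret;
    }
    ==================================================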
    Thanks for any suggestions,
    Joerg Lenneis
    email: [email protected]

    Peter Holditch:
    Joerg,
    Comments in-line.
    Joerg Lenneis wrote:[snip]
    I have no experience of Berkeley DB. Normally the xa_open routine provided by
    your database, and called by tx_open, will connect the server process itself to
    the database. What that means is database specific. I expect in the case of
    Berkeley DB, it has done the mmap for you. I guess the open parameters in your
    code above are also in your OPENINFO string in the Tuxedo ubbconfig file?
    It does not sound to me like you have a problem.

    Fortunately, I do not any more. Your comments and looking at the
    source for the xa interface have put me on the right track. What I did
    not realise is that (as you point out in the paragraph above) a
    Tuxedo service process that uses a resource manager gets the
    following structure linked in:
    const struct xa_switch_t db_xa_switch = {
    "Berkeley DB", /* name[RMNAMESZ] */
    TMNOMIGRATE, /* flags */
    0, /* version */
    __db_xa_open, /* xa_open_entry */
    __db_xa_close, /* xa_close_entry */
    __db_xa_start, /* xa_start_entry */
    __db_xa_end, /* xa_end_entry */
    __db_xa_rollback, /* xa_rollback_entry */
    __db_xa_prepare, /* xa_prepare_entry */
    __db_xa_commit, /* xa_commit_entry */
    __db_xa_recover, /* xa_recover_entry */
    __db_xa_forget, /* xa_forget_entry */
    __db_xa_complete /* xa_complete_entry */
    };
    This is database specific, of course, so it would look different for,
    say, Oracle. The entries in that structure are pointers to various
    functions which are called by Tuxedo on behalf of the server process
    on startup and whenever transaction management is necessary. xa_open
    does indeed open the database, which means opening an environment
    with a mmap somewhere in the case of Berkeley DB. In my code I then
    tried to open the environment again (you are right, the OPENINFO string
    is the same in ubbconfig as in my code), which led to the error message
    posted in my initial message.
    I had previously thought that the service process would contact the
    resource manager via some IPC mechanism for opening the database.
    If I am mistaken, then things look a bit dire. Provided that this is
    even the correct thing to do I could move the tx_open() after the call
    to env->open, but this would still mean there are two mmaps in the
    same process. I also need both calls to i) initiate the transaction
    subsystem and ii) get hold of the pointer DB_ENV *env which is the
    handle for all subsequent DB access.
    In the case of servers using OCI to access Oracle, there is an OCI API that
    allows a connection established through xa to be associated with an OCI
    connection endpoint. I suspect there is an equivalent function provided by
    Berkeley DB?

    There is not, but see my comments below about how to get to the
    Berkeley DB environment.
    [snip]
    I doubt it. xa works because xa routines are called in the same thread as the
    data access routines. Typically, a server thread will run like this...
    xa_start(Tuxedo Transaction ID) /* this is done by the Tux. service dispatcher
    before your code is executed */
    manipulate_data(whatever parameters necessary) /* this is the code you wrote in
    your service routine */
    xa_end() /* Tuxedo calls this after your service calls tpreturn or tpforward */
    The association between the Tuxedo Transaction ID and the data manipulation is
    made by the database because of this calling sequence.

    OK, this makes sense. Good to know this as well ...
    [snip]
    For somebody else trying this, here is the correct way:
    ==================================================
    /* dbp is assumed to be declared at file scope: DB *dbp; */
    int
    tpsvrinit(int argc, char *argv[])
    {
        int ret;

        if (tpopen() < 0)
            userlog("error tpopen");
        userlog("startup, opening database\n");
        if ((ret = db_create(&dbp, NULL, DB_XA_CREATE)) != 0) {
            userlog("error %i db_create: %s", ret, db_strerror(ret));
            return -1;
        }
        if ((ret = dbp->open(dbp, "sometablename", NULL, DB_BTREE,
                DB_CREATE, 0644)) != 0) {
            userlog("error %i db->open", ret);
            return -1;
        }
        return 0;
    }
    ==================================================
    What happens is that the call to the xa_open() function implicitly
    opens the Berkeley DB environment for the database in question, which is
    given in the OPENINFO string in the configuration file. It is an error
    to specify the environment in the call to db_create() in such a
    context. Calls that change the database do not need an environment
    specified, and the calls to begin/commit/abort transactions that are
    normally used by Berkeley DB (which use the environment) are superseded
    by tpopen(), tpclose() and friends. It would be an error to use those
    calls anyway.
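    To make the division of labour concrete, a service routine under this scheme might look like the sketch below (names and buffer handling are illustrative). Note that no DB_TXN is passed to put: the transaction Tuxedo started via xa_start is already associated with the service thread:
    ==================================================
    #include <string.h>
    #include <atmi.h>   /* Tuxedo: TPSVCINFO, tpreturn() */
    #include <db.h>

    extern DB *dbp;     /* opened in tpsvrinit() as above */

    void STORE(TPSVCINFO *rqst)
    {
        DBT key, data;

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data = rqst->data;              /* request buffer as key */
        key.size = (u_int32_t)rqst->len;
        data.data = "stored";
        data.size = sizeof("stored");

        /* NULL txn: under XA the operation joins Tuxedo's transaction. */
        if (dbp->put(dbp, NULL, &key, &data, 0) != 0)
            tpreturn(TPFAIL, 0, rqst->data, 0L, 0);
        else
            tpreturn(TPSUCCESS, 0, rqst->data, 0L, 0);
    }
    ==================================================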
    Thank you very much Peter for your comments which have helped a lot.
    Joerg Lenneis
    email: [email protected]

  • CSV vhdx files and SAN optimization - sequential or random access or...?

    Is there a best practice on the SAN optimization of LUNs for CSV VHDX files - e.g. sequential vs. random access?
    We recently set up a small two-node Hyper-V 2012 R2 Failover Cluster. As I was creating LUNs for the CSV vhdx files, our SAN (like most, I think) has some pre-set optimization options which are more or less sequential vs. random access. There's now the abstraction
    layer of shared VHDX files and the actual data those VHDXs are being used to store.  Are there any best-practices for SAN optimization in this regard?
    In other words, I could see:
    A. Cluster-shared VHDXs are accessed (more-or-less) based on the type of data they're used for
    B. All cluster-shared VHDXs are (more-or-less) accessed sequentially
    C. All cluster-shared VHDXs are (more-or-less) accessed randomly.
    I have one source that says that for a relatively simple SMB setup like we have that "C" is the recommendation.  I'm curious if anyone else has run into this or seen an official best-practice...?

    There was a good article published recently by Jose Barreto about CSV performance counters. See:
    Cluster Shared Volume: Performance Counters
    http://blogs.msdn.com/b/clustering/archive/2014/06/05/10531462.aspx
    You can run DiskSPD or Intel I/O Meter yourself to see what workload you'll get @ CSV with 10+ VMs doing different I/O types. We did, and you'll get
    4-8KB 100% random reads and writes (just make sure you gather statistics for a long time).
    So that's the type of workload you could optimize your LUN for @ the SAN level.
    StarWind Virtual SAN clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Oracle Mobile Server with SQLLite/Berkeley Db and dbsql

    Hi all,
    i am not sure if i am correct here but hopefully i am.
    In the past we have had Oracle Mobile Server with Oracle Lite.
    We decided to switch to the new Mobile Server because Oracle Web-to-Go is no longer supported and is incompatible with Windows 7. My administrator did the migration of the Mobile Server, but the migration utility reported that the available applications are incompatible.
    So I decided to create a completely new publication with a Java application. The new publication contains only one publication item. For the first tests I simply wanted to spool out the data contained in my local database.
    In the bin directory of the sqlite folder I found a utility named "dbsql". I understood it this way: I can attach to an existing database file and take a look into that database.
    If I call dbsql.exe BerkeleyTest all seems to be ok. But if I try to select some data from that file I only get an error message that the database is in the wrong format or encrypted. What am I doing wrong here?
    Am I right that the SQL interface (I need that interface because I don't want to rewrite the data access layer of my app) is only available in SQLite but not in "BerkeleyDb"?
    Is anyone here able to help me a little bit with my problem?
    Regards!
    Martin

    I do not know much about Oracle Mobile Server with Oracle Lite - does it use SQLite or BDB? I do know that databases created by SQLite cannot be read by Berkeley DB SQL (of which dbsql.exe is a part), and databases created by Berkeley DB SQL cannot be read by SQLite. Also, databases created by Berkeley DB outside of the SQL API cannot be read by the BDB SQL API. You can open BDB SQL databases with BDB outside of the SQL API, but I would not recommend that outside of a few BDB utilities described in the documentation. So if your BerkeleyTest database was created by SQLite, or by BDB outside of the SQL API, then it makes sense that dbsql.exe returns an error when trying to read it.
    Calling dbsql.exe BerkeleyTest does not open the database, that happens when the first operation is performed on it, which is why you did not get an error until you tried to select something.
    Lauren Foutz
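    To illustrate the lazy-open behaviour described above: the BDB SQL API presents the standard SQLite3 C interface, so a quick format check can be written as in this sketch (assuming the program links against Berkeley DB's SQL library rather than stock SQLite; the file name is the one from the post):
    ==================================================
    #include <stdio.h>
    #include "sqlite3.h"   /* header shipped with the BDB SQL build */

    int main(void)
    {
        sqlite3 *db;

        /* Succeeds even for a non-SQL file: the open is lazy. */
        if (sqlite3_open("BerkeleyTest", &db) != SQLITE_OK) {
            fprintf(stderr, "open: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        /* The first real statement is where a format or encryption
         * mismatch is reported. */
        if (sqlite3_exec(db, "SELECT count(*) FROM sqlite_master;",
                NULL, NULL, NULL) != SQLITE_OK)
            fprintf(stderr, "query: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 0;
    }
    ==================================================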

  • Oracle 11.2.0.2 and nchar optimizer problem

    Hello,
    sorry in advance for not being able to give all details to narrow better this possible problem.
    I admit it could be elsewhere.....
    I have a complex query where, in particular, an nchar(2) field (say col1) of table tab is involved.
    I'm experiencing a sort of problem with the optimizer estimating the number of rows it will get from this condition, and so the corresponding overall cost of the query.
    the table contains about 300k records and this field has at this moment 2 different values
    blankblank --> almost all records
    AA --> about 30 records
    there is at the moment a single index IND1 defined on this column (not a bitmap index)
    Both table and index are analyzed
    if my query contains the condition
    AND TAB.COL1 = ' ' ---> single blank
    Then the optimizer erroneously estimates in the explain plan that it will get about 30 rows
    |* 5 | INDEX RANGE SCAN     | IND1 |     29 |     |     3 (0)
    | 00:00:01 |
    and the overall cost is about 98, but the query actually doesn't come to an end ... (I waited 10 minutes)
    if my query contains the condition
    AND TAB.COL1 = '  ' ---> double space
    it doesn't use that wrong path and completes in about 1 second.
    If I replicate the query with single space on a test db with 11.2.0.1 it behaves correctly.
    If I force it to behave as the default wrong query in 11.2.0.2 (I have to use 3 hints to duplicate it)
    I get
    |* 5 | INDEX RANGE SCAN     | IND1 | 135K|     | 306 (1)
    | 00:00:04 |
    and overall cost is 304K so it is not used as a possible path...
    Does anyone know if anything changed between 11.2.0.1 and 11.2.0.2 for nchar that could cause these different behaviours?
    (BTW: also in another 11.1.0.6 db I don't have this problem)
    The query is created by the application and I cannot put in the necessary number of spaces.....
    And I'm afraid this problem could exist also for other tables where I have nchar(N) fields with N >2 too....
    Thanks in advance,
    Gianluca

    1) do you have a frequency histogram on this column?
    and
    2) is the client application using bind variables or string literals when querying?

  • Constant declaration and the optimizer

    Assume there is a PL/SQL package A with a defined constant 'constA'.
    Assume we have an SQL query in PL/SQL package B that references A.constA.
    Will the optimizer always give the same plan for the query whether I use the literal constant or the reference A.constA?
    I assume that replacement of references to A.constA would have to be done before the optimizer begins its evaluation. I assume that does not happen; therefore, using A.constA has prevented the optimizer from using the statistics on the table.
    I.e., it is true that use of literals may provide better performance than references to declared constants.

    Hi,
    Will the optimizer always give the same plan for the query whether I use the literal constant or the reference A.constA?

    Assuming bind peeking is not turned off and all other parameters influencing the CBO are the same - yes.
    "The same plan" in this context means not "the same piece of memory in the library cache" but "the same execution path", because a query with a literal and one with a constant will result in different SQL queries, hence different shareable parent cursors.

    It is true that use of literals may provide better performance than references to declared constants.

    Yes. If you have a constant to use in SQL, then it's better to use it as a literal rather than a bind (using a PL/SQL constant in a query results in using a bind variable - but maybe that behavior will be changed sometime).
    The main issue with using binds for constants is the case where you use several constants for exactly the same query and there's data skew. The CBO does not handle that well until 11g (which introduced adaptive cursor sharing).

  • Font embed and PDF optimizer question

    using Quark 7, Acrobat 8 Professional, non-Intel Mac OS 10.4.11. According to the Quark dialog, the fonts are embedded. When I look in PDF Optimizer, the fonts don't show up. Does this mean they were never embedded? Also, when I save a file as PDF optimized I get a zero KB figure in the Finder panel. Is this correct?
    Thanks

    Although you may have embedding turned on, that doesn't necessarily mean the fonts are embedded. The only fonts allowed to be embedded are those from Adobe, plus those from other font houses that have agreements with Adobe to allow it. If you're trying to embed Microsoft fonts, forget it; they share nothing with anyone. The chances of it happening are, as the old saying goes, *slim* and *none*, *and Slim just left town*.
    Also, if you receive a PDF from someone else, they may have run it through Optimizer to reduce the size and removed all instances of embedded fonts.
    If you use common fonts between Windows and Apple - say, for example, Arial - there should only be subtle differences in the look of the PDF if you have 'use system fonts' turned on.


    Hi, I worked out a way to call a function whose name is in a String variable. That is: function myFunction()      trace("Hello!"); var callFunction:String = "myFunction"; this[callFunction]();     // "Hello!" However, this does not work with "core" f