Secondary database performance and CacheMode

This is somewhat a follow-on thread related to: Lock/isolation with secondary databases
In the same environment, I'm seeing fairly poor performance numbers for my queries, which are essentially a series of key range-scans on my secondary index.
Example output:
08:07:37.803 BDB - Retrieved 177 entries out of index (177 ranges, 177 iters, 1.000 iters/range) in: 87ms
08:07:38.835 BDB - Retrieved 855 entries out of index (885 ranges, 857 iters, 0.968 iters/range) in: 346ms
08:07:40.838 BDB - Retrieved 281 entries out of index (283 ranges, 282 iters, 0.996 iters/range) in: 101ms
08:07:41.944 BDB - Retrieved 418 entries out of index (439 ranges, 419 iters, 0.954 iters/range) in: 160ms
08:07:44.285 BDB - Retrieved 2807 entries out of index (2939 ranges, 2816 iters, 0.958 iters/range) in: 1033ms
08:07:50.422 BDB - Retrieved 253 entries out of index (266 ranges, 262 iters, 0.985 iters/range) in: 117ms
08:07:52.095 BDB - Retrieved 2838 entries out of index (3021 ranges, 2852 iters, 0.944 iters/range) in: 835ms
08:07:58.253 BDB - Retrieved 598 entries out of index (644 ranges, 598 iters, 0.929 iters/range) in: 193ms
08:07:59.912 BDB - Retrieved 143 entries out of index (156 ranges, 145 iters, 0.929 iters/range) in: 32ms
08:08:00.788 BDB - Retrieved 913 entries out of index (954 ranges, 919 iters, 0.963 iters/range) in: 326ms
08:08:03.087 BDB - Retrieved 325 entries out of index (332 ranges, 326 iters, 0.982 iters/range) in: 103ms
To explain those numbers: a "range" corresponds to a sortedMap.subMap() call (i.e. a range scan between a start and end key), and "iters" is the number of iterations over the subMap results needed to find the entry we were after (an implementation detail).
In most cases iters/range is close to 1, which means only one key is traversed per subMap() call - so, in essence, 500 entries means 500 ostensibly random range scans, each taking only the first item.
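For concreteness, here's a stripped-down sketch of what one of those range scans looks like through the collections API (the String key type and the way the StoredSortedMap is obtained are placeholders, not the real application code):

import java.util.Iterator;
import java.util.Map;
import java.util.SortedMap;
import com.sleepycat.collections.StoredIterator;
import com.sleepycat.collections.StoredSortedMap;

public class RangeScanSketch {
    // One "range" in the log above corresponds to one subMap() call like this;
    // each step of the iterator is one "iter", and usually only the first
    // entry is taken, which is why iters/range stays close to 1.
    static Object firstInRange(StoredSortedMap<String, Object> index,
                               String startKey, String endKey) {
        SortedMap<String, Object> range = index.subMap(startKey, endKey);
        Iterator<Map.Entry<String, Object>> it = range.entrySet().iterator();
        try {
            return it.hasNext() ? it.next().getValue() : null;
        } finally {
            StoredIterator.close(it); // release the underlying JE cursor
        }
    }
}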
However, it seems rather slow - 2807 entries take 1033ms, which works out to a key/query rate of roughly 2700 keys/sec.
Here's performance profile output of this process happening (via jvisualvm): https://img.skitch.com/20120718-rbrbgu13b5x5atxegfdes8wwdx.jpg
Here's stats output after it running for a few minutes:
I/O: Log file opens, fsyncs, reads, writes, cache misses.
bufferBytes=3,145,728
endOfLog=0x143b/0xd5b1a4
nBytesReadFromWriteQueue=0
nBytesWrittenFromWriteQueue=0
nCacheMiss=1,954,580
nFSyncRequests=11
nFSyncTime=12,055
nFSyncTimeouts=0
nFSyncs=11
nFileOpens=602,386
nLogBuffers=3
nLogFSyncs=96
nNotResident=1,954,650
nOpenFiles=100
nRandomReadBytes=6,946,009,825
nRandomReads=2,577,442
nRandomWriteBytes=1,846,577,783
nRandomWrites=1,961
nReadsFromWriteQueue=0
nRepeatFaultReads=317,585
nSequentialReadBytes=2,361,120,318
nSequentialReads=653,138
nSequentialWriteBytes=262,075,923
nSequentialWrites=257
nTempBufferWrites=0
nWriteQueueOverflow=0
nWriteQueueOverflowFailures=0
nWritesFromWriteQueue=0
Cache: Current size, allocations, and eviction activity.
adminBytes=248,252
avgBatchCACHEMODE=0
avgBatchCRITICAL=0
avgBatchDAEMON=0
avgBatchEVICTORTHREAD=0
avgBatchMANUAL=0
cacheTotalBytes=2,234,217,972
dataBytes=2,230,823,768
lockBytes=224
nBINsEvictedCACHEMODE=0
nBINsEvictedCRITICAL=0
nBINsEvictedDAEMON=0
nBINsEvictedEVICTORTHREAD=0
nBINsEvictedMANUAL=0
nBINsFetch=7,104,094
nBINsFetchMiss=575,490
nBINsStripped=0
nBatchesCACHEMODE=0
nBatchesCRITICAL=0
nBatchesDAEMON=0
nBatchesEVICTORTHREAD=0
nBatchesMANUAL=0
nCachedBINs=575,857
nCachedUpperINs=8,018
nEvictPasses=0
nINCompactKey=268,311
nINNoTarget=107,602
nINSparseTarget=468,257
nLNsFetch=1,771,930
nLNsFetchMiss=914,516
nNodesEvicted=0
nNodesScanned=0
nNodesSelected=0
nRootNodesEvicted=0
nThreadUnavailable=0
nUpperINsEvictedCACHEMODE=0
nUpperINsEvictedCRITICAL=0
nUpperINsEvictedDAEMON=0
nUpperINsEvictedEVICTORTHREAD=0
nUpperINsEvictedMANUAL=0
nUpperINsFetch=11,797,499
nUpperINsFetchMiss=8,280
requiredEvictBytes=0
sharedCacheTotalBytes=0
Cleaning: Frequency and extent of log file cleaning activity.
cleanerBackLog=0
correctedAvgLNSize=87.11789
estimatedAvgLNSize=82.74727
fileDeletionBacklog=0
nBINDeltasCleaned=2,393,935
nBINDeltasDead=239,276
nBINDeltasMigrated=2,154,659
nBINDeltasObsolete=35,516,504
nCleanerDeletions=96
nCleanerEntriesRead=9,257,406
nCleanerProbeRuns=0
nCleanerRuns=96
nClusterLNsProcessed=0
nINsCleaned=299,195
nINsDead=2,651
nINsMigrated=296,544
nINsObsolete=247,703
nLNQueueHits=2,683,648
nLNsCleaned=5,856,844
nLNsDead=88,852
nLNsLocked=29
nLNsMarked=5,767,969
nLNsMigrated=23
nLNsObsolete=641,166
nMarkLNsProcessed=0
nPendingLNsLocked=1,386
nPendingLNsProcessed=1,415
nRepeatIteratorReads=0
nToBeCleanedLNsProcessed=0
totalLogSize=10,088,795,476
Node Compression: Removal and compression of internal btree nodes.
cursorsBins=0
dbClosedBins=0
inCompQueueSize=0
nonEmptyBins=0
processedBins=22
splitBins=0
Checkpoints: Frequency and extent of checkpointing activity.
lastCheckpointEnd=0x143b/0xaf23b3
lastCheckpointId=850
lastCheckpointStart=0x143a/0xf604ef
nCheckpoints=11
nDeltaINFlush=1,718,813
nFullBINFlush=398,326
nFullINFlush=483,103
Environment: General environment wide statistics.
btreeRelatchesRequired=205,758
Locks: Locks held by data operations, latching contention on lock table.
nLatchAcquireNoWaitUnsuccessful=0
nLatchAcquiresNoWaitSuccessful=0
nLatchAcquiresNoWaiters=0
nLatchAcquiresSelfOwned=0
nLatchAcquiresWithContention=0
nLatchReleases=0
nOwners=2
nReadLocks=2
nRequests=10,571,692
nTotalLocks=2
nWaiters=0
nWaits=0
nWriteLocks=0
My database(s) are sizeable, but they live on an SSD in a machine with more RAM than DB size (16GB vs 10GB). I have CacheMode.EVICT_LN turned on, though I'm wondering whether it may actually be harmful; toggling it doesn't seem to make a dramatic difference.
Really, I only want the secondary DB cached (as that is where all the read queries happen); however, I'm not sure it's (meaningfully) possible to cache only a secondary DB, as presumably it needs to look up the primary DB's leaf nodes to return data anyway.
Additionally, the updates to the DB(s) tend to be fairly large - i.e. potentially modifying ~500,000 entries at a time (about 2.3% of the DB) - which I worry tends to blow away the secondary DB cache (though I don't know how to prove that one way or the other).
I understand that different CacheModes can be set on separate databases (and even at the cursor level); however, it's somewhat opaque how this works in practice.
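For reference, here's a minimal sketch of where those knobs live (the database names, the duplicates setting on the secondary and the EVICT_LN-on-primary split are illustrative assumptions, not a recommendation):

import com.sleepycat.je.CacheMode;
import com.sleepycat.je.Cursor;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.Environment;
import com.sleepycat.je.SecondaryConfig;
import com.sleepycat.je.SecondaryDatabase;
import com.sleepycat.je.SecondaryKeyCreator;

public class CacheModeSketch {
    static void open(Environment env, SecondaryKeyCreator keyCreator) {
        // Primary: evict LNs after each operation so its data records
        // don't compete with the secondary's nodes for cache space.
        DatabaseConfig primaryConfig = new DatabaseConfig();
        primaryConfig.setAllowCreate(true);
        primaryConfig.setCacheMode(CacheMode.EVICT_LN);
        Database primary = env.openDatabase(null, "primary", primaryConfig);

        // Secondary: leave the default cache mode so its nodes stay resident.
        SecondaryConfig secConfig = new SecondaryConfig();
        secConfig.setAllowCreate(true);
        secConfig.setSortedDuplicates(true);
        secConfig.setKeyCreator(keyCreator);
        SecondaryDatabase secondary =
            env.openSecondaryDatabase(null, "secondary", primary, secConfig);

        // The mode can also be overridden per cursor for individual scans.
        Cursor cursor = secondary.openCursor(null, null);
        cursor.setCacheMode(CacheMode.UNCHANGED);
        cursor.close();
        secondary.close();
        primary.close();
    }
}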
I've tried to run DbCacheSize, but the combination of variable-length keys and key prefixing makes it almost impossible to get meaningful numbers out of it (or at the very least, rather confusing :)
So, my questions are:
- Is this actually slow in the first place (ie: 2700 random keys/sec)?
- Can I speed this up with caching? (I've failed so far)
- Is it possible (or useful) to cache a secondary DB in preference to the primary?
- Would switching from using a StoredSortedMap to raw (possibly reusable) cursors give me a significant advantage?
Thanks so much in advance,
fb.

nBINsFetchMiss=575,490

The first step in tuning the JE cache, as related to performance, is to ensure that nBINsFetchMiss goes to zero. That tells you that you've sized your cache large enough to hold all internal nodes (I know you have lots of memory, but we need to prove that by looking at the stats).
If all your internal nodes are in cache, that means your entire secondary DB is in cache, because you've configured duplicates (right?). A dup DB does not keep its LNs in cache, so it consists of nothing but internal nodes in cache.
If you're using EVICT_LN (please do!), you also want to make sure that nEvictPasses=0, and I see that it is.
Here are some random hints:
+ In general always use getStats(new StatsConfig().setClear(true)). If you don't clear the stats every time interval, then they are cumulative and it's almost impossible to correlate them to what's going on in that time interval. (A small sampling sketch follows these hints.)
+ If you're starting with a non-empty env, first load the entire data set and clear the stats, so the fetches for populating the cache don't show up in subsequent stats.
+ If you're having trouble using DbCacheSize, you may want to find out experimentally how much cache is needed to hold the internal nodes, for a given data set in your app. You can do this simply by reading your data set into cache. When nEvictPasses becomes non-zero, the cache has overflowed. This is going to be much more accurate than DbCacheSize anyway.
+ When you measure performance, you need to collect the JE stats (as you have) plus all app performance info (txn rate, etc) for the same time interval. They need to be correlated. The full set of JE environment settings, database settings, and JVM params is also needed.
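A minimal per-interval sampling sketch along those lines ("env" is a placeholder and the getter names assume the JE 4/5 EnvironmentStats API):

import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentStats;
import com.sleepycat.je.StatsConfig;

public class StatsSampler {
    static void sample(Environment env) {
        StatsConfig config = new StatsConfig();
        config.setClear(true);              // reset counters each interval

        EnvironmentStats stats = env.getStats(config);
        // nBINsFetchMiss should trend to zero once all INs are in cache;
        // nEvictPasses going non-zero means the cache has overflowed.
        System.out.println("nBINsFetchMiss=" + stats.getNBINsFetchMiss()
                + " nLNsFetchMiss=" + stats.getNLNsFetchMiss()
                + " nEvictPasses=" + stats.getNEvictPasses());
    }
}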
On the question of using StoredSortedMap.subMap vs a Cursor directly: there may be an optimization you can make if your LNs are not in cache - and they're not if you're using EVICT_LN, or if you're not using EVICT_LN but not all LNs fit. However, I think you can make the same optimization using StoredSortedMap.
Namely, when using a key range (whatever API is used), it is necessary to read one key past the range you want, because that's the only way to find out whether there are more keys in the range. If you use subMap or the Cursor API in the most obvious way, this will not only have to find the next key outside the range but will also fetch its LN. I'm guessing this is part of the reason you're seeing a lower operation rate than you might expect. (However, note that you're actually getting double the rate you mention from a JE perspective, because each secondary read is actually two JE reads, counting the secondary DB and the primary DB.)
Before I write a bunch more about how to do that optimization, I think it's worth confirming that the extra LN is being fetched. If you do the measurements as I described, and you're using EVICT_LN, you should be able to get the ratio of LNs fetched (nLNsFetchMiss) to the number of range lookups. So if there is only one key in each range, and I'm right about reading one key beyond it, you'll see twice as many LNs fetched as operations.
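For example, the check could be as small as comparing a cleared-interval stat against an application-side counter (rangeLookups below is an app-maintained counter and an assumption, not something JE provides):

static double lnFetchesPerRange(Environment env, long rangeLookups) {
    StatsConfig statsConfig = new StatsConfig();
    statsConfig.setClear(true);                      // per-interval numbers only
    EnvironmentStats stats = env.getStats(statsConfig);
    // With EVICT_LN on and roughly one entry returned per range, a ratio near
    // 2.0 supports the "one key past the range" theory; near 1.0 does not.
    return (double) stats.getNLNsFetchMiss() / rangeLookups;
}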
--mark

Similar Messages

  • Problem using secondary database, sequence (and custom tuple binding)

    I get an exception when I try to open a Sequence to a database that has a custom tuple binding and a secondary database. I have a guess what the issue is (below), but it boils down to my custom tuple-binding being invoked when opening the sequence. Here is the exception:
    java.lang.IndexOutOfBoundsException
    at com.sleepycat.bind.tuple.TupleInput.readUnsignedInt(TupleInput.java:414)
    at com.sleepycat.bind.tuple.TupleInput.readInt(TupleInput.java:233)
    at COM.shopsidekick.db.community.Shop_URLTupleBinding.entryToObject(Shop_URLTupleBinding.java:72)
    at com.sleepycat.bind.tuple.TupleBinding.entryToObject(TupleBinding.java:73)
    at COM.tagster.db.community.SecondaryURLKeyCreator.createSecondaryKey(SecondaryURLKeyCreator.java:38)
    at com.sleepycat.je.SecondaryDatabase.updateSecondary(SecondaryDatabase.java:546)
    at com.sleepycat.je.SecondaryTrigger.databaseUpdated(SecondaryTrigger.java:42)
    at com.sleepycat.je.Database.notifyTriggers(Database.java:1343)
    at com.sleepycat.je.Cursor.putInternal(Cursor.java:770)
    at com.sleepycat.je.Cursor.putNoOverwrite(Cursor.java:352)
    at com.sleepycat.je.Sequence.<init>(Sequence.java:139)
    at com.sleepycat.je.Database.openSequence(Database.java:332)
    Here is my code:
    // URL ID DB
    DatabaseConfig urlDBConfig = new DatabaseConfig();
    urlDBConfig.setAllowCreate(true);
    urlDBConfig.setReadOnly(false);
    urlDBConfig.setTransactional(true);
    urlDBConfig.setSortedDuplicates(false); // No sorted duplicates (can't have them with a secondary DB)
    mURLDatabase = mDBEnv.openDatabase(txn, "URLDatabase", urlDBConfig);
    // Reverse URL lookup DB table
    SecondaryConfig secondaryURLDBConfig = new SecondaryConfig();
    secondaryURLDBConfig.setAllowCreate(true);
    secondaryURLDBConfig.setReadOnly(false);
    secondaryURLDBConfig.setTransactional(true);
    TupleBinding urlTupleBinding = DataHelper.instance().createURLTupleBinding();
    SecondaryURLKeyCreator secondaryURLKeyCreator = new SecondaryURLKeyCreator(urlTupleBinding);
    secondaryURLDBConfig.setKeyCreator(secondaryURLKeyCreator);
    mReverseLookpupURLDatabase = mDBEnv.openSecondaryDatabase(txn, "SecondaryURLDatabase", mURLDatabase, secondaryURLDBConfig);
    // Open the URL ID sequence
    SequenceConfig urlIDSequenceConfig = new SequenceConfig();
    urlIDSequenceConfig.setAllowCreate(true);
    urlIDSequenceConfig.setInitialValue(1);
    mURLSequence = mURLDatabase.openSequence(txn, new DatabaseEntry(URLID_SEQUENCE_NAME.getBytes("UTF-8")), urlIDSequenceConfig);
    My secondary key creator class looks like this:
    import java.io.IOException;
    import com.sleepycat.bind.tuple.TupleBinding;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.SecondaryDatabase;
    import com.sleepycat.je.SecondaryKeyCreator;

    public class SecondaryURLKeyCreator implements SecondaryKeyCreator {
        // Member variables
        private TupleBinding mTupleBinding; // The tuple binding

        // Constructor.
        public SecondaryURLKeyCreator(TupleBinding iTupleBinding) {
            mTupleBinding = iTupleBinding;
        }

        // Create the secondary key.
        public boolean createSecondaryKey(SecondaryDatabase iSecDB, DatabaseEntry iKeyEntry, DatabaseEntry iDataEntry, DatabaseEntry oResultEntry) {
            try {
                URLData urlData = (URLData) mTupleBinding.entryToObject(iDataEntry);
                String URL = urlData.getURL();
                oResultEntry.setData(URL.getBytes("UTF-8"));
            } catch (IOException willNeverOccur) {
            }
            // Success
            return true;
        }
    }
    I think I understand what is going on, and I only noticed it now because I added more fields to my custom data (and tuple binding):
    com.sleepycat.je.Sequence.java line 139 (version 3.2.44) does this:
    status = cursor.putNoOverwrite(key, makeData());
    makeData creates a byte array of size MAX_DATA_SIZE (50 bytes) -- which has nothing to do with my custom data.
    The trigger causes a call to SecondaryDatabase.updateSecondary(...) on the secondary DB.
    updateSecondary calls createSecondaryKey in my SecondaryKeyCreator, which calls entryToObject() in my tuple binding, which calls TupleInput.readString(), etc. to read my custom data. Since what is being read extends beyond the 50-byte array, I get the exception.
    I didn't notice this before because my custom tuple binding used to read fewer than 50 bytes.
    I think the problem is that my tuple binding is being invoked at all at this point -- opening a sequence -- since there is no data on which it can act.

    Hi,
    It looks like you're making a common mistake with sequences, which is to store the sequence itself in a database that is also used for application data. The sequence should normally be stored in a separate database to prevent configuration conflicts and actual data conflicts between the sequence record and the application records.
    I suggest that you create another database whose only purpose is to hold the sequence record. This database will contain only a single record -- the sequence. If you have more than one sequence, storing all sequences in the same database makes sense and is safe.
    The database used for storing sequences should not normally have any associated secondary databases and should not be configured for duplicates.
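    A sketch of that suggestion, reusing the variable names from the code above (the database name "SequenceDatabase" is an assumption):
    // Dedicated database that holds only the sequence record -- no secondary
    // database is associated with it and no custom binding is involved.
    DatabaseConfig seqDBConfig = new DatabaseConfig();
    seqDBConfig.setAllowCreate(true);
    seqDBConfig.setTransactional(true);
    Database mSequenceDatabase = mDBEnv.openDatabase(txn, "SequenceDatabase", seqDBConfig);
    // Open the URL ID sequence against the dedicated database instead of mURLDatabase.
    SequenceConfig urlIDSequenceConfig = new SequenceConfig();
    urlIDSequenceConfig.setAllowCreate(true);
    urlIDSequenceConfig.setInitialValue(1);
    mURLSequence = mSequenceDatabase.openSequence(txn, new DatabaseEntry(URLID_SEQUENCE_NAME.getBytes("UTF-8")), urlIDSequenceConfig);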
    --mark

  • How to use the mirrored and log shipped secondary database for update or insert operations

    Hi,
    I am doing a DR test where I need to test the mirrored and log shipped secondary database, but without stopping the mirroring or log shipping procedures. Is there a way to get the data out of the mirrored and log shipped database into another database for update
    or insert operations?
    A database snapshot can be used only for the mirrored database, but updates cannot be done. Also, the secondary database of log shipping cannot be used for a database snapshot. Any ideas on how this can be implemented?
    Thanks,
    Preetha

    Hmm, in this case I think you need Merge Replication; otherwise it defeats the purpose of DR in that case.
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • Need help with sorting records in primary and secondary databases

    Hi,
    I would like to store data into primary and secondary db in different order. For the main primary, I want it to be ordered by login_ts instead of uuid which is the key.
    For the user secondary database, I want it to be ordered by sip_user. For the timestamp secondary db, I want it to be ordered by login_ts.
    This is what I have right now,
    this is for main
    uuid=029ae227-a188-4ba8-aea4-7cbc26783d6 sip_user=200003 login_ts=1264327630 logout_ts=
    uuid=22966f76-8c8a-4ab4-b832-b36e8f8e14d sip_user=200003 login_ts=1264327688 logout_ts=
    uuid=e1846e4a-e1f5-406d-b903-55905a2533a sip_user=200003 login_ts=1264327618 logout_ts=
    uuid=e2f9a3cb-02d1-47ff-8af8-a3a371e20b5 sip_user=200003 login_ts=1264327613 logout_ts=
    this is for user search
    uuid=029ae227-a188-4ba8-aea4-7cbc26783d6 sip_user=200003 login_ts=1264327630 logout_ts=
    uuid=22966f76-8c8a-4ab4-b832-b36e8f8e14d sip_user=200003 login_ts=1264327688 logout_ts=
    uuid=e1846e4a-e1f5-406d-b903-55905a2533a sip_user=200003 login_ts=1264327618 logout_ts=
    uuid=e2f9a3cb-02d1-47ff-8af8-a3a371e20b5 sip_user=200003 login_ts=1264327613 logout_ts=
    this is for timestamp
    uuid=029ae227-a188-4ba8-aea4-7cbc26783d6 sip_user=200003 login_ts=1264327630 logout_ts=
    uuid=22966f76-8c8a-4ab4-b832-b36e8f8e14d sip_user=200003 login_ts=1264327688 logout_ts=
    uuid=e1846e4a-e1f5-406d-b903-55905a2533a sip_user=200003 login_ts=1264327618 logout_ts=
    uuid=e2f9a3cb-02d1-47ff-8af8-a3a371e20b5 sip_user=200003 login_ts=1264327613 logout_ts=
    but what I want is :
    this is for main
    uuid=e2f9a3cb-02d1-47ff-8af8-a3a371e20b5 sip_user=200003 login_ts=1264327613 logout_ts=
    uuid=e1846e4a-e1f5-406d-b903-55905a2533a sip_user=200004 login_ts=1264327618 logout_ts=
    uuid=029ae227-a188-4ba8-aea4-7cbc26783d6 sip_user=200003 login_ts=1264327630 logout_ts=
    uuid=22966f76-8c8a-4ab4-b832-b36e8f8e14d sip_user=200005 login_ts=1264327688 logout_ts=
    this is for user search
    uuid=e2f9a3cb-02d1-47ff-8af8-a3a371e20b5 sip_user=200003 login_ts=1264327613 logout_ts=
    uuid=029ae227-a188-4ba8-aea4-7cbc26783d6 sip_user=200003 login_ts=1264327630 logout_ts=
    uuid=e1846e4a-e1f5-406d-b903-55905a2533a sip_user=200004 login_ts=1264327618 logout_ts=
    uuid=22966f76-8c8a-4ab4-b832-b36e8f8e14d sip_user=200004 login_ts=1264327688 logout_ts=
    this is for timestamp
    uuid=e2f9a3cb-02d1-47ff-8af8-a3a371e20b5 sip_user=200003 login_ts=1264327613 logout_ts=
    uuid=e1846e4a-e1f5-406d-b903-55905a2533a sip_user=200003 login_ts=1264327618 logout_ts=
    uuid=029ae227-a188-4ba8-aea4-7cbc26783d6 sip_user=200004 login_ts=1264327630 logout_ts=
    uuid=22966f76-8c8a-4ab4-b832-b36e8f8e14d sip_user=200004 login_ts=1264327688 logout_ts=
    Right now, I have:
    int compare_login_ts(DB *dbp, const DBT *a, const DBT *b)
    {
         int time_a = atoi((char *)a->data);
         int time_b = atoi((char *)b->data);
         return time_a - time_b;
    }
    for the timestamp secondary, I set that compare function:
         if ((ret = (*sdb)->set_bt_compare(*sdb, compare_login_ts)) != 0) {
             /* handle error */
         }
    Does anyone know how I can make it sort accordingly?

    Hi,
    DB->set_bt_compare() is used to compare keys in a Btree database. In the callback function, both DBTs are keys, not data. Please refer to http://www.oracle.com/technology/documentation/berkeley-db/db/api_reference/C/dbset_bt_compare.html.
    If you want any field in the data to be sorted, you might create a secondary index on it and define the compare function as you wish.
    Regards,
    Emily Fu, Oracle Berkeley DB

  • Setting up a primary and secondary Database in Oracle 10G

    Hi Experts
    can you please tell me the steps involved in creating a primary and secondary database? This is the first time I am going to configure this setup. Please lend a helping hand.
    Thanks a lot in advance,
    Ram

    Absolutely glad to help.
    Step 1: Clarify what it is you are trying to build. Are you talking about a Standby Database? Perhaps Physical or Logical Data Guard? If so what protection mode? Stand-alone or RAC? Or are you just trying to dup an existing database on another server in which case you can just use RMAN.
    Step 2: Go to http://tahiti.oracle.com and read the relevant docs
    Step 3: Go to http://metalink.oracle.com and look at the Knowledge Base docs
    If you have any question thereafter contact us and be sure to include your version number, not a marketing label. 10g is a marketing label that refers to everything from the 10.1 Beta through 10.2.0.4.

  • New HP EVA6000 SAN and now bad database performance problems

    Hello,
    we changed our SAN hardware to an HP EVA6000 and moved all data there.
    The storage system is intended to serve all servers (file/print, Exchange, Oracle databases and MSSQL databases).
    Following HP's best practice papers we created one big disc group (FATA hard discs) and created virtual discs for our servers.
    After doing this the database performance was terribly bad!
    Multiple random IO in particular is far worse.
    As a first countermeasure we created a second disc group on faster hard discs and moved the database contents there. We analyzed the IO and moved several database files to different virtual discs.
    The performance is better now, but still not like the 4-year-old SAN system!
    Of course we questioned HP and even had them do a performance analysis, but up to now we have no solution... The performance analysis report will be available on Thursday.
    Has anybody had the same experience, or how did you configure the database and EVA SAN to achieve appropriate performance?

    Hi,
    I'm not an Oracle person, but do work with EVA SAN's.
    Your 48-disk FATA disk group is capable of 4800 I/O operations per second [48 x 100], but the disks are only rated for a 30% duty cycle and spin at 7200 rpm.
    The 16 FC drive disk group's I/O operations rating depends on the speed the disks spin at: 10K rpm disks are rated at 130 I/O per sec [2080 for the group], and 15K rpm disks at 170 I/O per sec [2720 for the group]. Both are rated for a 100% duty cycle.
    I seem to recall having read somewhere that Oracle prefers to have its logs on separate storage from its data.
    If your shelves have the spare disk slots I would put in 72 GB 15K rpm disks up to the capacity required {+ overhead} + head room for reasonably predictable growth over the anticipated life of the equipment.
    Here is a link to the HP Best Practice guide for EVA 4/6/8000's
    http://h71028.www7.hp.com/ERC/downloads/4AA0-2787ENW.pdf
    I hope this helps you understand the storage you are working with a bit better. The old saying of "more heads make for better performance" is still true; however, budget can have some effect on performance. ;-)
    Jim

  • Application and Database Performance Issue ?

    Hi
    I am designing tables; can anyone suggest which is best for database and application performance?
    1) One table with more columns, so the developer can work with a single query
    2) Divide the table into two parts, and the developer works with two queries
    3) I can use table partitioning
    Also I would like to know the maximum number of records that can be stored in a table in 11g and 10g.
    regards

    user9098698 wrote:
    Hi
    I am designing tables; can anyone suggest which is best for database and application performance?
    1) One table with more columns, so the developer can work with a single query
    2) Divide the table into two parts, and the developer works with two queries
    This decision should come from normalizing your data to 3NF. Only after that is done should you consider de-normalizing it for performance, and then only after careful testing and consideration of other options.
    3) I can use table partitioning
    ????
    Also I would like to know the maximum number of records that can be stored in a table in 11g and 10g.
    regards

  • Autogrowth, shrink and database performance

    I have already asked some questions about Autogrowth and shrink. But I have some doubts related to it.
    1) I have a database which grows about 300 MB every month. So is it best to set its autogrowth to 300 MB? Currently 10 MB is set.
    2) I have many small databases which really hold only about 50 MB of data, but whose physical size is about 900 MB (so 850 MB is free space). I have not shrunk these databases, because if I shrink them, autogrowth will occur again when data gets added. But adding data to
    these databases is rare, so I think they will not grow more than 200 MB over the next year. So should I shrink the database? If I keep it as it is, will that cause any performance problem? Or will having more free space (more than 90%) in the database cause any problem?
    Most people seem to think a large physical size will cause performance problems. Is that true or not?
    3) The SQL Server 2008 R2 Express database size limit is 10 GB. Is that the physical size of both the mdf and ldf files together? I need to consider this when deciding whether to shrink the database at any point.

    1) If this is a database where you expect performance, the current autogrowth setting has the following disadvantages:
    - It will affect performance. Database files grow too often and it's unnecessary work for SQL Server during normal operations. It takes time to grow database files, especially if you have not turned Instant File Initialization on, which is highly recommended
    by the way. I would prefer to grow databases off peak hours and only once a year.
    - The database file(s) get fragmented. If SQL Server keeps allocating small 10 MB chunks the file will eventually have hundreds of fragments instead of just a few. Fragmentation reduces IO performance.
    I would set the database file sizes to what you expect them to grow to within a year, plus 10%, and set autogrowth to perhaps 300 MB so it only kicks in if your calculations were off. Next year you manually grow the database again according to your new 1-year
    prediction. That way you will hopefully get less fragmentation and very few operational disturbances due to autogrowth in peak hours.
    2)
    Keeping the database as is will not cause anything other than the database consuming unnecessary disk space; a lot of free space does not affect performance negatively. SQL Server does not do more IO just because you have large file(s). Having too little
    free space in a db can cause internal fragmentation and will affect performance negatively. However, you should not fill up the spindles to more than 80% (some say 85%), since your storage will be noticeably slower.
    You can shrink the database, but that usually causes performance problems because the shrink operation creates massive internal fragmentation in the database. You can fix that by rebuilding all indexes and tables, starting with the clustered ones first.
    So don't shrink the database so much that it must immediately grow again to accommodate all the index rebuilds.
    Hope this helps!
    Peter

  • Direct IO,Asynchronous IO and relationship with database performance

    What is the concept of Direct IO and Asynchronous IO from the DBA perspective? How do they relate to database performance?
    Any simple explanation will be highly appreciated. Thanks in advance.

    918868 wrote:
    What is the concept of Direct IO and Asynchronous IO from the DBA perspective? How do they relate to database performance?
    Any simple explanation will be highly appreciated. Thanks in advance.
    Yet another interview question from you?

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • FAQ's, intros and memorable discussions in the Performance and Tuning Forum

    Welcome to the SDN ABAP Performance and Tuning Forum!
    In addition to release-dependent information available by:
    - pressing the F1 key on an ABAP statement,
    - or searching for them in transaction ABAPDOCU,
    - using the [SDN ABAP Development Forum Search|https://www.sdn.sap.com/irj/sdn/directforumsearch?threadid=&q=&objid=c42&daterange=all&numresults=15&rankby=10001],
    - the information accessible via the [SDN ABAP Main Wiki|https://wiki.sdn.sap.com/wiki/display/ABAP],
    - the [SAP Service Marketplace|http://service.sap.com] and see [SAP Note 192194|https://service.sap.com/sap/support/notes/192194] for search tips,
    - the 3 part [How to write guru ABAP code series|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f2dac69e-0e01-0010-e2b6-81c1e8e5ce50] ... (use the search to easily find the other 2 documents...)
    ... this "sticky post" lists some threads from the ABAP forums as:
    - An introduction for new members / visitors on topics discussed in threads,
    - An introduction to how the forums are used and the quality expected,
    - A collection of some threads which provided useful answers to frequently asked questions, and,
    - A collection of some memorable threads if you feel like reading some ABAP related material.
    The listed threads will be enhanced from time to time. Please feel welcome to post to [this thread|Suggestions thread for ABAP FAQ sticky] to suggest any additional inclusions.
    Note: When asking a question in the forum, please also provide sufficient information so that the question can be answered usefully, do not repeat interview-type questions, and once a question is closed please indicate which solution was useful - to help others who search for it.

    ABAP Performance and Tuning
    Read Performance   => Gurus take over the discussion from Guests caught cheating the points-system.
    SELECT INTO TABLE => Initial questions often result in interesting follow-up discussions.
    Inner Joins vs For all Entries. => Including infos about system parameters for performance optimization.
    Inner Join Vs Database view Vs for all entries => Useful literature recommended by performance guru YukonKid.
    Inner Joins vs For All Entries - performance query => Performance legends unplugged... read the blogs as well.
    The ABAP Runtime Trace (SE30) - Quick and Easy => New tricks in old tools. See other blogs by the same author as well.
    Skip scan used instead of (better?) range scan => Insider information on how index access works.
    DELETE WHERE sample case that i would like to share with you => Experts discussing the deletion of data from internal tables.
    Impact of Order of fields in Secondary  index => Also discussing order of fields in WHERE-clause
    "SELECT SINGLE" vs. "SELECT UP TO 1 ROWS" => Better for performance or semantics?
    into corresponding fields of table VERSUS into table => detailed discussion incl. runtime measurements
    Indexes making program run slower... => Everything you ever wanted to know about Oracle indexes.
    New! Mass reading standard texts (STXH, STXL) => avoiding single calls to READ_TEXT for time-critical processes
    New! Next Generation ABAP Runtime Analysis (SAT) => detailed introduction to the successor of SE30
    New! Points to note when using FOR ALL ENTRIES => detailed blog on the pitfall(s) a developer might face when using FAE
    New! Performance: What is the best way to check if a record exist on a table ? => Hermann's tips on checking existence of a record in a table
    Message was edited by: Oxana Noa Zubarev

  • How ADBC connection is benefits by using SAP HANA as secondary database ?

    Hi,
    I have one more important question.
    How does an ADBC connection benefit from using SAP HANA as a secondary database, in terms of performance, when accessing data from the HANA database?
    I have 2 options; which is better for good performance when accessing the data?
    1. In ABAP reports, will using CONNECTION ('HDB') in the SELECT statements improve the
       performance?
       e.g.: select * from BSEG into TABLE IT_TAB CONNECTION ('HDB').
    2. Or create the stored procedure in HANA Studio and call it
       from ABAP as below using Native SQL:
         EXEC SQL.
           SET CONNECTION ('HDB')
         ENDEXEC.
         EXEC SQL.
           EXECUTE PROCEDURE proc (IN p_in1
                                   OUT p_out1 OUT p_out2)
         ENDEXEC.
    Regards,
    Pravin
    Message was edited by: Jens Weiler
    Branched from http://scn.sap.com/thread/3498161

    Hi Pravin,
    Option 1: In this case ADBC might even worsen the performance due to the overhead in the ADBC framework. OpenSQL is the method to go here, as OpenSQL - from the ABAP point of view - features the optimal communication with the database while ADBC has overhead like constructor-calls for the statement, parameter binding, etc.
    Option 2: In this case ADBC is comparable with EXEC SQL but features more options, e.g. clean concept of multiple connection (connection objects via CL_SQL_CONNECTION), exception handling, etc. So I strongly propose to favour ADBC over EXEC SQL, but not simply for performance reasons. You might have a look at the ABAP Language Help in your system on more information on ADBC and the advantages over Exec SQL.
    Cheers,
      Jasmin

  • Secondary database connections

    I am looking for information on secondary database connections, especially with Oracle. Is there any other information than note 323151? (I am using the MiniSAP system for testing, therefore I do not have official SAP customer status.)
    I found something about this on help.sap.com, but this is more like an overview. I am looking for some examples on what to configure and how to use it.
    Any help would be appreciated.
    Thanks

    Hi Klaus,
    You need to prepare entries in the TNSNAMES.ORA file (on the Oracle database server). The system guys will know where to find it.
    Oracle File TNSNAMES.ORA (contains also the lines below)
    texd.world = (DESCRIPTION = (ADDRESS = (COMMUNITY = tcp.world)
                 (PROTOCOL = TCP) (Host = <servername>) (Port = 1521))
                 (CONNECT_DATA = (SID = <SID>) (GLOBAL_NAME = texd.world)
                 (SERVER = DEDICATED)))
    Replace <servername> with the actual servername of the oracle database.
    Replace <SID> with the SAP system ID (Like DEV for development or PRD for production).
    In SAP use transactiom SM30 to maintain table DBCON.
    Give the connection a name (this is used in your code). Example = MYCONNECTION.
    Set DBMS to ORA.
    Set the username to a user with sufficient rights
    Supply (2x) the password for this user
    Set the Verb.info to texd.world
    Do NOT check (leave unchecked) the Permanent checkbox.
    Save your work.
    In ABAP code you can connect to and use this connection like this:
    * Declaration
    DATA: WA TYPE T000.
    * Init connection
    EXEC SQL.
      CONNECT TO 'MYCONNECTION' AS 'MYDB'
    ENDEXEC.
    * Open connection
    EXEC SQL.
      SET CONNECTION 'MYDB'
    ENDEXEC.
    * Do your trick
    EXEC SQL PERFORMING your_form.
      SELECT * INTO :WA FROM T000
    ENDEXEC.
    * Stop connection
    EXEC SQL.
      DISCONNECT 'MYDB'
    ENDEXEC.
    FORM your_form.
      WRITE: / wa-mandt, wa-mtext.
    ENDFORM.
    Further information on using this (besides note 323151) can be found in notes 339092 and 178949.
    See also http://www.akadia.com/services/ora_dblinks.html
    Hope this helps you on your way.
    Regards,
    Rob.

  • Concurrency questions, Secondary Databases, etc.

    Hi,
    i have the following requirements:
    - Multiple threads; every thread opens one or more databases
    - Databases have RefCounting if they are used in more than one thread
    - Every database has a SecondaryDatabase associated
    - All threads are performing only put operations on the databases
    - Keys and SecondaryKeys are unique (no duplicates)
    I tested with normal Databases and SecondaryDatabases:
    - no Transactions used
    - deferredWrite is on true
    Everything worked and the performance is within our expectations.
    my Questions now:
    - Does this setup work as long as all threads are only writing (put)?
    (I read in another post that SecondaryDatabases work only with transactions...
    I think that's true only if you read/write?)
    - Is there anything I should take care of? I already checked my SecondaryKeyCreator
    for concurrency issues...
    - Does it help (in this setup) to disable the CheckpointerThread? The Databases are
    synced and closed after all writes are finished. We don't need recovery...
    - Are there any penalties if i increase the LogFileSize? We are writing around 80 to 150GB
    of data and with the default size (10MB) we get a lot of files...
    - Caching is a non-issue as long as we are only writing... is this correct?
    Sorry for the amount of questions & thanks in advance for any answers!
    Greets,
    Chris

    Hi Chris,
    - Does this setup work as long as all threads are only writing (put)? (I read in another post that SecondaryDatabases work only with transactions... I think that's true only if you read/write?)
    When using secondaries, if you don't configure transactions and there is an exception during the write operation, corruption can result. If you are reading and writing, lock conflict exceptions are likely to occur -- with transactions you can simply retry, but without transactions you can't. Since you are not reading, it is unlikely that this type of exception will occur. See below for more.
    - Is there anything I should take care of? I already checked my SecondaryKeyCreator for concurrency issues...
    Since your secondary keys are unique, you'll get an exception if you attempt to write a primary record containing a secondary key that already exists. To avoid corruption, you'll have to prevent this from happening. If you are assigning secondary keys from a sequence, or something similar, then you'll be fine. Another way is to check the keys for existence in the secondary before the write. To do this, open the secondary as a regular database (openDatabase not openSecondaryDatabase). You don't want to read the primary (that could cause lock conflicts), which is what happens when you use openSecondaryDatabase and read via the secondary.
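    A sketch of that existence check (env, the database name and candidateSecKey are placeholders, and the config flags must match how the secondary was originally created):
    // Open the secondary's underlying database as a plain Database so the
    // lookup never touches the primary records.
    DatabaseConfig checkConfig = new DatabaseConfig();
    checkConfig.setAllowCreate(false);
    checkConfig.setDeferredWrite(true);           // flags must match how the secondary was created
    Database secAsPlainDb = env.openDatabase(null, "mySecondaryDb", checkConfig);
    DatabaseEntry key = new DatabaseEntry(candidateSecKey);
    DatabaseEntry data = new DatabaseEntry();
    data.setPartial(0, 0, true);                  // key-only read, no LN fetch
    boolean alreadyUsed =
        secAsPlainDb.get(null, key, data, LockMode.DEFAULT) == OperationStatus.SUCCESS;
    // Only proceed with the primary put when alreadyUsed is false.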
    - Does it help (in this setup) to disable the CheckpointerThread? The Databases are synced and closed after all writes are finished. We don't need recovery...
    Yes, if you don't care about recovery time, then disabling the checkpointer during the write operations will reduce the amount of disk space used and overall overhead.
    When you say you don't need recovery, what do you mean? In general, this means that if there is a crash, you can either 1) revert to a backup or 2) recreate the data from scratch.
    - Are there any penalties if I increase the LogFileSize? We are writing around 80 to 150GB of data and with the default size (10MB) we get a lot of files...
    The log cleaner may become inefficient if the log files are too large, so I don't recommend a file size larger than 50 MB.
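    For reference, a sketch of those two environment settings (the parameter constants assume the standard JE EnvironmentConfig API; envHomeDir is a placeholder):
    // Disable the background checkpointer and cap log files at 50 MB, per the
    // advice above; remember to sync/checkpoint manually before closing.
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setAllowCreate(true);
    envConfig.setConfigParam(EnvironmentConfig.ENV_RUN_CHECKPOINTER, "false");
    envConfig.setConfigParam(EnvironmentConfig.LOG_FILE_MAX, String.valueOf(50 * 1024 * 1024));
    Environment env = new Environment(envHomeDir, envConfig);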
    - Caching is a non-issue as long as we are only writing... is this correct?
    The JE cache is not just a cache, it's the memory space for internal information and the Btree. For good performance during the write operations you should configure the cache large enough to hold all internal Btree nodes. The DbCacheSize program (in com.sleepycat.je.util) can be used to calculate this size.
    An exception to this rule is when you are inserting keys sequentially. If both the primary keys and secondary keys are assigned and written sequentially, then the cache size can normally be much smaller, perhaps only 10 MB. But this is an unusual use case, especially with secondaries.
    --mark

  • Logshipping secondary database

    Hi Team,
    I want to access the log shipping secondary database for reporting purposes; the database is in standby mode.
    Could anybody please guide me on how we can give read-only access?
    Thanks
    subu

    Hi,
    There is no big deal in achieving this. You need to create a login for the user and map it to the database. The important point here is the "disconnect users" option you select when restoring the database (while configuring log shipping): in that case, the moment a restore
    starts your users will be disconnected, so you should plan for this beforehand. If you uncheck this option, no restore will be performed while users are connected.
    You can set the restore frequency to match users' requests; if you set it to every 15 minutes it is sure to cause issues with report queries.
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
