Statspack analysis of a 9i database
Hi experts,
Please help me interpret this Statspack report from my production 9i database, and advise some recommendations based on it.
Elapsed: 3.75 (min) 225 (sec)
DB Time: 7.84 (min) 470.65 (sec)
Cache: 10,016 MB
Block Size: 8,192 bytes
Transactions: 2.01 per second
Performance Summary
Physical Reads: 15,666/sec MB per second: 122.39 MB/sec
Physical Writes: 22/sec MB per second: 0.17 MB/sec
Single-block Reads: 1,412.69/sec Avg wait: 0.03 ms
Multi-block Reads: 1,916.26/sec Avg wait: 0.05 ms
Tablespace Reads: 3,346/sec Writes: 22/sec
Top 5 Events
Event Percentage of Total Timed Events
CPU time 79.89%
PX Deq: Execute Reply 6.38%
db file scattered read 4.32%
SQL*Net more data from dblink 4.29%
db file sequential read 2.00%
Tablespace I/O Stats
Tablespace Read/s Av Rd(ms) Blks/Rd Writes/s Read% % Total IO
TS_CCPS 3,117 0 2.5 0 100% 92.5%
TS_OTHERS 204 0.2 26.2 1 99% 6.09%
TS_AC_POSTED03 19 1.9 127 2 89% 0.63%
Load Profile
Logical reads: 42,976/s Parses: 39.41/s
Physical reads: 15,666/s Hard parses: 5.43/s
Physical writes: 22/s Transactions: 2.01/s
Rollback per transaction: 0% Buffer Nowait: 100%
4 Recommendations:
Your database has relatively high logical I/O at 42,976 reads per second. Logical Reads includes data block reads from both memory and disk. High LIO is sometimes associated with high CPU activity. CPU bottlenecks occur when the CPU run queue exceeds the number of CPUs on the database server, and this can be seen by looking at the "r" column in the vmstat UNIX/Linux utility or within the Windows performance manager. Consider tuning your application to reduce unnecessary data buffer touches (SQL Tuning or PL/SQL bulking), using faster CPUs or adding more CPUs to your system.
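To see where that logical I/O is coming from, a query along these lines (a sketch; the top-10 cutoff is arbitrary) lists the statements with the most buffer gets since instance startup:

```sql
-- Top 10 SQL statements by logical I/O (buffer gets), 9i-compatible
SELECT *
  FROM (SELECT buffer_gets,
               executions,
               ROUND(buffer_gets / GREATEST(executions, 1)) AS gets_per_exec,
               sql_text
          FROM v$sql
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 10;
```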
You are performing more than 15,666 disk reads per second. High disk latency can be caused by too-few physical disk spindles. Compare your read times across multiple datafiles to see which datafiles are slower than others. Disk read times may be improved if contention is reduced on the datafile, even though read times may be high due to the file residing on a slow disk. You should identify whether the SQL accessing the file can be tuned, as well as the underlying characteristics of the hardware devices.
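Comparing read times across datafiles, as suggested above, can be done with a sketch like this against v$filestat (times are recorded in centiseconds when timed_statistics is enabled, hence the *10 to get milliseconds):

```sql
-- Average read time per datafile; slow or contended files float to the top
SELECT d.name,
       f.phyrds,
       ROUND(f.readtim * 10 / GREATEST(f.phyrds, 1), 2) AS avg_read_ms
  FROM v$filestat f, v$datafile d
 WHERE f.file# = d.file#
 ORDER BY avg_read_ms DESC;
```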
Check your average disk read speed later in this report and ensure that it is under 7ms. Assuming that the SQL is optimized, the only remaining solutions are the addition of RAM for the data buffers or a switch to solid-state disks. Give careful consideration to these tablespaces with high read I/O: TS_CCPS, TS_OTHERS, TS_AC_POSTED03, TS_RATING, TS_GP.
You have more than 1,222 unique SQL statements entering your shared pool, with the resulting overhead of continuous RAM allocation and freeing within the shared pool. A hard parse is expensive because each incoming SQL statement must be re-loaded into the shared pool, with the associated overhead of shared pool RAM allocation and memory management. Once loaded, the SQL must then be completely re-checked for syntax and semantics and an executable generated. Excessive hard parsing can occur when your shared_pool_size is too small (and reentrant SQL is paged out) or when you have non-reusable SQL statements without host variables. See the cursor_sharing parameter for an easy way to make SQL reentrant, and remember that you should always use host variables in your SQL so that it can be reentrant.
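The parse counters behind this recommendation can be pulled directly from v$sysstat; a hard-parse count anywhere near the 5.43/sec seen in this report, against 39.41 total parses/sec, points at literal SQL:

```sql
-- Cumulative parse statistics since instance startup
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)',
                'execute count', 'session cursor cache hits');
```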
Instance Efficiency
Buffer Hit: 69.13% In-memory Sort: 100%
Library Hit: 96.4% Latch Hit: 99.99%
Memory Usage: 95.04% Memory for SQL: 64.19%
2 Recommendations:
Your Buffer Hit ratio is 69.13%. The buffer hit ratio measures the probability that a data block will be in the buffer cache upon a re-read of the data block. If your database has a large number of frequently referenced table rows (a large working set), then investigate increasing your db_cache_size. For specific recommendations, see the output from the data buffer cache advisory utility (using the v$db_cache_advice utility). Also, a low buffer hit ratio is normal for applications that do not frequently re-read the same data blocks. Moving to SSD will alleviate the need for a large data buffer cache.
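The cache advisory mentioned above can be queried directly; a sketch (requires db_cache_advice = ON, which carries a small CPU overhead while enabled):

```sql
-- Estimated physical reads at candidate buffer cache sizes
SELECT size_for_estimate    AS cache_mb,
       buffers_for_estimate,
       estd_physical_read_factor,
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND advice_status = 'ON'
 ORDER BY size_for_estimate;
```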
Your shared pool may be filled with non-reusable SQL, with 95.04% memory usage. The Oracle shared pool contains Oracle's library cache, which is responsible for collecting, parsing, interpreting, and executing all of the SQL statements that go against the Oracle database. You can check the dba_hist_librarycache table in Oracle 10g to see your historical library cache RAM usage.
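On 9i, where dba_hist_librarycache is not yet available, the current library cache state can be checked with a sketch like this; a high reloads-to-pins ratio suggests reusable SQL is being aged out of the shared pool:

```sql
-- Library cache hit and reload ratios per namespace
SELECT namespace, gets, gethitratio, pins, pinhitratio,
       reloads, invalidations
  FROM v$librarycache
 ORDER BY namespace;
```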
SQL Statistics
Wait Events
Event Waits Wait Time (s) Avg Wait (ms) Waits/txn
PX Deq: Execute Reply 137 30 219 0.3
db file scattered read 431,159 20 0 951.8
SQL*Net more data from dblink 51,140 20 0 112.9
db file sequential read 317,856 9 0 701.7
io done 6,842 5 1 15.1
db file parallel read 21 1 52 0.0
local write wait 250 1 4 0.6
db file parallel write 825 1 1 1.8
SQL*Net message from dblink 208 1 3 0.5
log file parallel write 2,854 1 0 6.3
0 Recommendations:
Instance Activity Stats
Statistic Total per Second per Trans
SQL*Net roundtrips to/from client 87,889 390.6 194.0
consistent gets 10,141,287 45,072.4 22,387.0
consistent gets - examination 884,579 3,931.5 1,952.7
db block changes 100,342 446.0 221.5
execute count 18,913 84.1 41.8
parse count (hard) 1,222 5.4 2.7
parse count (total) 8,868 39.4 19.6
physical reads 3,525,003 15,666.7 7,781.5
physical reads direct 539,879 2,399.5 1,191.8
physical writes 5,132 22.8 11.3
physical writes direct 29 0.1 0.1
redo writes 1,598 7.1 3.5
session cursor cache hits 4,378 19.5 9.7
sorts (disk) 0 0.0 0.0
sorts (memory) 4,988 22.2 11.0
table fetch continued row 310 1.4 0.7
table scans (long tables) 82 0.4 0.2
table scans (short tables) 18,369 81.6 40.6
workarea executions - onepass 0 0.0 0.0
5 Recommendations:
You have high network activity with 390.6 SQL*Net roundtrips to/from client per second, which is a high amount of traffic. Review your application to reduce the number of calls to Oracle by encapsulating data requests into larger pieces (i.e. make a single SQL request to populate all online screen items). In addition, check your application to see if it might benefit from bulk collection by using PL/SQL "forall" or "bulk collect" operators.
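The bulking idea can be sketched in PL/SQL; the table and column names here are hypothetical, and both constructs are available in 9i:

```sql
DECLARE
  TYPE t_id_tab IS TABLE OF orders.order_id%TYPE;
  l_ids t_id_tab;
BEGIN
  -- One round trip to fetch the whole working set
  SELECT order_id BULK COLLECT INTO l_ids
    FROM orders
   WHERE status = 'OPEN';

  -- One round trip to apply all updates, instead of a row-by-row loop
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE orders
       SET status = 'QUEUED'
     WHERE order_id = l_ids(i);
END;
/
```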
You have 3,931.5 consistent gets - examination per second. "Consistent gets - examination" differs from regular consistent gets. It is used to read undo blocks for consistent read purposes, but also for the first part of an index read and hash cluster I/O. To reduce this logical I/O, you may consider moving your indexes to a large blocksize tablespace. Because index splitting and spawning are controlled at the block level, a larger blocksize will result in a flatter index tree structure.
You have high update activity with 446.0 db block changes per second. The db block changes are a rough indication of total database work. This statistic indicates (on a per-transaction level) the rate at which buffers are being dirtied, and you may want to optimize your database writer (DBWR) process. You can determine which sessions and SQL statements have the highest db block changes by querying the v$session and v$sesstat views.
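A sketch of the per-session query suggested above, using the standard 9i views:

```sql
-- Sessions generating the most db block changes
SELECT s.sid, s.username, st.value AS block_changes
  FROM v$session s, v$sesstat st, v$statname n
 WHERE st.sid = s.sid
   AND st.statistic# = n.statistic#
   AND n.name = 'db block changes'
 ORDER BY st.value DESC;
```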
You have high disk reads with 15,666.7 per second. Reduce disk reads by increasing your data buffer size or speed up your disk read speed by moving to SSD storage. You can monitor your physical disk reads by hour of the day using AWR to see when the database has the highest disk activity.
You have high small-table full-table scans, at 81.6 per second. Verify that your KEEP pool is sized properly to cache frequently referenced tables and indexes. Moving frequently referenced tables and indexes to SSD or the WriteAccelerator will significantly increase the speed of small-table full-table scans.
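Assigning a hot segment to the KEEP pool is a one-line change; the owner and table names below are hypothetical:

```sql
-- Cache a small, frequently scanned table in the KEEP pool
-- (sized by db_keep_cache_size, 1GB in this report)
ALTER TABLE app_owner.lookup_codes STORAGE (BUFFER_POOL KEEP);

-- Verify which segments are currently assigned to the KEEP pool
SELECT owner, segment_name, buffer_pool
  FROM dba_segments
 WHERE buffer_pool = 'KEEP';
```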
Buffer Pool Advisory
Current: 3,599,469,418 disk reads
Optimized: 1,207,668,233 disk reads
Improvement: 66.45% fewer
The Oracle buffer cache advisory utility indicates 3,599,469,418 disk reads during the sample interval. Oracle estimates that doubling the data buffer size (by increasing db_cache_size) will reduce disk reads to 1,207,668,233, a 66.45% decrease.
Init.ora Parameters
Parameter Value
cursor_sharing similar
db_block_size 8,192
db_cache_size 8GB
db_file_multiblock_read_count 32
db_keep_cache_size 1GB
hash_join_enabled true
log_archive_start true
optimizer_index_caching 90
optimizer_index_cost_adj 25
parallel_automatic_tuning false
pga_aggregate_target 2GB
query_rewrite_enabled true
session_cached_cursors 300
shared_pool_size 2.5GB
optimizer_cost_model choose
1 Recommendations:
You are not using large blocksizes for your index tablespaces. Some published benchmarks suggest that indexes build flatter tree structures in larger blocksizes.
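If you decide to try it, a sketch of the mechanics (tablespace, datafile path, and index name are hypothetical; a matching non-default buffer cache must exist before the tablespace can be created):

```sql
-- Allocate a cache for the non-default blocksize
ALTER SYSTEM SET db_16k_cache_size = 256M;

-- Create a 16K-blocksize tablespace and move an index into it
CREATE TABLESPACE ts_idx_16k
  DATAFILE '/u01/oradata/prod/ts_idx_16k01.dbf' SIZE 2G
  BLOCKSIZE 16K;

ALTER INDEX app_owner.orders_pk REBUILD TABLESPACE ts_idx_16k;
```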
Systemwide Tuning using STATSPACK Reports [ID 228913.1] and http://jonathanlewis.wordpress.com/statspack-examples/ should be useful.
Similar Messages
-
Hyperion Analyzer with Relational Database
Can Analyzer be used on a relational database, or only Essbase? Thanks, Cathy
Yes, Analyzer can be used with any relational database.
-
Schema analyzing is taking too much time.
Hi All,
A schema analyze in our database should take at most 2 hours. It has been running for the last 18 hours.
What do I need to check?
Ranjan
so many questions & so few answers.
"A schema analyze in our database should take max 2 hours." Based upon what evidence do you make this claim?
"It has been running for the last 18 hours." Post the SQL and results that prove the above is true.
You can NOT necessarily depend upon LAST_ANALYZED to determine if/when statistics were last run.
ANALYZE has been obsoleted & replaced by DBMS_STATS
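The DBMS_STATS equivalent of a schema analyze looks like this (the schema name and options are examples only):

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'APP_OWNER',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- gather index statistics too
END;
/
```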
do as below so we can know complete Oracle version & OS name.
Post via COPY & PASTE complete results of
SELECT * from v$version; -
Hi ,
I installed Analyzer using a MySQL database. Can anyone tell me if there is an open-source GUI to do backups and restores of MySQL tables? If anyone has this, please share the link...
Thank you!!! Have a read of: http://www.devshed.com/c/a/MySQL/Backing-up-and-restoring-your-MySQL-Database/
Cheers
John
http://john-goodwin.blogspot.com/ -
I created a unicode-mode application in Essbase XTD Analytic Servers. When I tried to use Analyzer to query databases, I could not link a database in the unicode-mode application. Is it true that I can't use Analyzer for a unicode database? Or is there something wrong with how I created the unicode-mode database? Sandy
Hi Denis,
I didn't change font in BEx Query Designer, everything is default. Font is Arial.
In a Crystal Report calling the same BEx query, I can read unicode characters, but in Web Intelligence I can't.
I think the problem occurs because of something I haven't configured in the Universe or on the local machine, but I don't know how to check that.
If you have any ideas, please help me.
Thanks -
Hi Guys .
Please, how do you run analyze on a schema, database, or table?
I have run Gather All Schema Statistics, and now I am told by my boss to run analyze schema. How do you do this, please?
Platform is 11.5.9
OS: Solaris
Thanks in advance
Look at the SQL Area report for your database. If you see more than maybe 1 SYS query in the top ten, you need to gather stats for SYS as well (9i and above).
SQLAREAT.SQL - SQL Area, Plan and Statistics for Top DML (expensive SQL in terms of logical or physical reads)
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=238684.1
Collecting Statistics with Oracle Apps 11i
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=368252.1 -
How to disable the timer job for "Database Statistics"
I want to disable the database statistics timer job using PowerShell, but I can't find its exact name in the list of timer jobs.
I tried Get-SPTimerJob to manually look for it, tried Get-SPTimerJob SPDatabaseStatisticsJobDefinition, and also looked for the stsadm name job-database-statistics.
I even tried to look for it in the list of timer job definitions for a web application, with no luck.
I want to disable the timer job so that I can run proc_DefragmentIndices on the SharePoint config and SharePoint admin databases.
SELECT DB_ID() AS [sharepoint_admincontent_cab0c190-db30-4f2c-8b71-504b81d2b5d1];
GO
SELECT * FROM sys.dm_db_index_physical_stats
(1, NULL, NULL, NULL , 'DETAILED');
db_id  object_id   index_id  index_type          alloc_unit   depth  level  avg_fragmentation_in_percent
1      1131151075  1         CLUSTERED INDEX     IN_ROW_DATA  2      0      52.63
1      1131151075  1         CLUSTERED INDEX     IN_ROW_DATA  2      1      0
1      1131151075  2         NONCLUSTERED INDEX  IN_ROW_DATA  2      0      80
Thanks
Nate
Thank you Trevor,
As per this blog (http://www.houberg.com/2008/05/16/sharepoint-database-indexes-and-statistics/) it seems like
Microsoft allows this (not explicitly though)
I ran the Health Analyzer rule for "Databases used by SharePoint have fragmented Indexes" for all servers.
I am still seeing a lot of cluster and non clustered indexes having avg_fragmentation_in_percent
> 60%.
SELECT * FROM sys.dm_db_index_physical_stats (7, NULL, NULL, NULL , 'DETAILED');
I finally thought that I should rebuild the indexes to fix the issue, but please suggest if that is not the correct approach to resolve the high fragmented indexes.
thanks
Nate -
HI,
My SCM 5.0system is running on oracle 10g . I have checked LC & found below error in DB analyzer logs.
W3 11523 primary key range accesses, selectivity 0.01%: 140299528 rows read, 12593 rows qualified
CON: PKeyRgSel < 0.3
VAL: 0.01 < 0.3
W3 Selects and fetches selectivity 0.21%: 197822 selects and fetches, 71523587 rows read, 152554 rows qualified
CON: SelFetSel < 0.3
VAL: 0.21 < 0.3
* W3 76125 primary key range accesses, selectivity 0.11%: 71444927 rows read, 76461 rows qualified
CON: PKeyRgSel < 0.3
VAL: 0.11 < 0.3
W2 Number of symbol resolutions to create call stacks: 234
CON: SymbolResolutions > 0
VAL: 234 > 0
LC Version - X64/HPUX 7.6.03 Build 012-123-169-237
As I am new to MaxDB, I need experts' advice on the above issues.
Hello,
You got the WARNING messages in the DB analyzer protocol. Those are NOT errors.
You used the DB analyzer to find the bottleneck in liveCache.
It's the performance analysis tool for the database.
1. In general, the MAXDB library has the explanations about the DB analyzer warning messages.
http://maxdb.sap.com/doc/7_7/default.htm -> Database Analyzer
In the database analyzer messages section, the Optimizer Strategies and Selects and Fetches documents show the "User Response" for warnings like:
W3 11523 primary key range accesses, selectivity 0.01%...
W3 Selects and fetches selectivity 0.21% ...
W3 76125 primary key range accesses, selectivity 0.11%...
=>Find what liveCache application scenario was running at that time.
Repeat this application scenario & create the SQL trace. Find the statement that causes this warning.
2. If you are not able to find the reason for those warnings on your system => create the SAP message to help you on this issue.
Thank you and best regards, Natalia Khlopina -
Open Source Log Analyzer Project
Hi people,
I have a question: is there an open-source project which analyzes logs from a database? I mean, I have a table (a log table, in syslog message format) and I need to analyze it with a web-based project. Do you know any open-source project that does this? Thanks
Huh? How is this question related to JSF?
Anyway, is Google installed on your machine? After feeding it the topic title "Open Source Log Analyzer Project", it told me something about AWStats and SourceTree and so on. I (and Google) can't be of more help.
You can also consider writing one yourself with help of smart coding and nice API's like JFreeChart. -
How to findout the sharepoint job which responsible for database re indxing
Hi
In SharePoint 2010 I configured RBS storage for a web application content database.
Our farm has two web front-end servers, two application servers, two index servers, and one database server.
When users upload BLOBs to a SharePoint library, we sometimes face an RBS storage space problem:
Exception:Microsoft.Data.SqlRemoteBlobs.BlobStoreException: There was a generic database
error. For more information, see the included exception. --->
System.Data.SqlClient.SqlException: RBS Error. Original Error: Number 1101,
Severity 17, State 12, Procedure -, Line 1,<o:p></o:p>
Message: Could not allocate a new page for
database 'WSS_Content_80' because of insufficient disk space in filegroup 'PRIMARY'.
When I asked our DBA, he said there is one SharePoint job running which saves audit data every day, and the database is re-indexed every time, so the SharePoint content database size keeps increasing.
So how do I find out the job responsible for database re-indexing?
adil
Audit data is created when an audit event is triggered. Auditing is configured on a per-Site Collection basis.
http://office.microsoft.com/en-us/sharepoint-server-help/configure-audit-settings-for-a-site-collection-HA102031737.aspx
There is a Health Analyzer rule named "Databases used by SharePoint have outdated index statistics".
http://technet.microsoft.com/en-us/library/hh564122(v=office.14).aspx
Trevor Seward
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
Possible Issues with Duplicate Training Database?
Hello everyone!
First off, please forgive my ignorance! I've only been working with databases lightly over the last couple of years, and thus far only with MySQL and MS SQL.
Recently the company I work for purchased a new software package that required Oracle. We purchased a copy of Oracle 10.2 as part of the deal, and the company that sold it to us and set it up for us has denied us dev access to the database, on our own server, because it would expose their underlying data schema. I wasn't in charge when the decision was made and we spent the money on a crippled copy of Oracle, but that's in the past.
Because we don't have dev access to Oracle we're unable to create a training database for this new software in house. We have to pay this company to create a test environment for us. I was told it would cost $500, until the tech said he was going to install Oracle on our training server.
At this point I stopped him and said at $500 they'll either be breaching their licensing agreement, or giving us a free copy! The other possibility is that they were going to install this and then just bill us for Oracle later on, despite quoting the job at $500.
The tech who was about to do the installation went to check with someone more senior and then returned to inform me that installing the second copy of Oracle would indeed require more licensing fees.
This verified what I already had told him, and I said we want a training database on our existing Oracle server, and then training copies of the new software can just point to that training database.
He now needs to take time with the more senior technicians while they're "analyzing the existing database server/environment". He was trying to talk me out of installing a training database on the existing server, and instead buying another copy of Oracle, which I believe to be a total cash grab. He had no solid reason why he couldn't do it, but refused to anyway and said he needs to talk to a more senior tech (the same one who said that yes, we'll need to purchase a second copy of Oracle rather than use a second database on the existing server).
So, these two databases, one with live data and one a duplicate of that live data used in a training environment where changes won't be reported against, should be able to coexist peacefully, should they not? The same version of the same software accessing the same data from the same server, just a different database, shouldn't present any problems, with my limited knowledge of Oracle! There will only ever be maybe 5 connections to the training environment at a time, and the server should be able to handle twice as many connected users as its current maximum (edit: expected) load.
I know this wouldn't be a problem with MySQL, or MS SQL, and Oracle is a superior database package, so can anyone think of any legitimate reason that might back up the claim that having that duplicate database will be a problem?
Thanks in advance for any guidance here!
Thank you very much John!
We have a per user license and I don't expect our training needs to exceed even a half dozen people at a time to start (A small, significant, fraction of the initial user base). In the future we can easily recommission other resources, or purchase more. The whole project is in the earlier stages anyway, so the needs are smaller now (but the money we spend on licensing could be recurring).
I understand the requirements angle from a server perspective, but those points you brought up drill down into what I need to know a bit more.
I doubt rollback segments would be necessary, I was thinking more along the lines of scheduled replication/duplication of the production database on a weekly or monthly basis, just to keep the material at least somewhat up to date.
I also don't think temporary table space would put much load on storage either, at least in the long run.
RAM isn't much an issue with a 64 bit version running either, for our needs now at least.
My instinct, reinforced by what you mention about the potential impact of altering the existing database schema, is that it's basically safer to create a separate database, but less resource-intensive to alter the schema of an existing Oracle database to suit training needs (in this situation, maybe, making some vague assumptions about the software), if done correctly (although still carrying risks).
I do wish I had the time to read up on this more now, and I will as soon as I have the time.
I probably will download 10g express edition to familiarize myself a bit more now, as soon as I come up with an application for it, to at least try out whatever I read.
Thanks for the advice -
Write-up for 10g database analysis
Hi Gurus,
I need to analyze a 10g database in detail (it has too many database-related issues), but I cannot connect to it directly because of legal restrictions.
Please let me know what are the main points I need to get information about during preparing the write-up.
Thanks
Amitava.
amitavachatterjee1975 wrote:
Hi Gurus,
I need to analyze a 10g database in details (which have too many issues-database related) since I cannot do direct connection to it cause of legal restrictions.
Please let me know what are the main points I need to get information about during preparing the write-up.
Thanks
Amitava.
What is " the write-up" that you are preparing? A justification for granting access to the database?
What are the "issues" that you are supposed to address/investigate? -
Poor performance of the BDB cache
I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
Overview
Stone Steps maintains a fork of a web log analysis tool, the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
The Database
Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
sequences (maintains record IDs for all other tables)
urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
downloads (p), downloads.values (s), downloads.xfer (s)
agents (p), agents.values (s), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
referrers (p), referrers.values (s), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
search (p), search.values (s), search.hits (s)
users (p), users.values (s), users.hits (s), users.groups.hits (sf)
errors (p), errors.values (s), errors.hits (s)
dhosts (p), dhosts.values (s)
statuscodes (HTTP status codes)
totals.daily (31 days)
totals.hourly (24 hours)
totals (one record)
countries (a couple of hundred countries)
system (one record)
visits.active (active visits - variable length)
downloads.active (active downloads - variable length)
All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
Database Size
One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
urls (p):
8192 Underlying database page size
2031 Overflow key/data size
1471636 Number of unique keys in the tree
1471636 Number of data items in the tree
193 Number of tree internal pages
577738 Number of bytes free in tree internal pages (63% ff)
55312 Number of tree leaf pages
145M Number of bytes free in tree leaf pages (67% ff)
2620 Number of tree overflow pages
16M Number of bytes free in tree overflow pages (25% ff)
urls.hits (s)
8192 Underlying database page size
2031 Overflow key/data size
2 Number of levels in the tree
823 Number of unique keys in the tree
1471636 Number of data items in the tree
31 Number of tree internal pages
201970 Number of bytes free in tree internal pages (20% ff)
45 Number of tree leaf pages
243550 Number of bytes free in tree leaf pages (33% ff)
2814 Number of tree duplicate pages
8360024 Number of bytes free in tree duplicate pages (63% ff)
0 Number of tree overflow pages
The Testbed
I'm running all these tests using the latest BDB (v4.6) built from the source on Win2K3 server (release version). The test machine is 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like described before at a speed of 20K records/sec.
BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
I also used a code profiler to analyze SSW and BDB performance.
The Problem
Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
Overall, the 20K rec/sec quoted above drop down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
The Tests
SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
In memory mode, the entire BDB is dumped to disk at the end of the run. First, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed was as good as stopped.
Then I flipped options and used DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance and even though OS cache was filling up, it was being flushed as well and, eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options lead to never-ending tests.
In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
Some of the other things I tried/observed:
* I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
* I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
* I experimented with page size, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
* The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
* I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.
I have been able to improve processing speed up to 6-8 times with these two techniques:
1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
Hello Stone,
I am facing a similar problem, and I too hope to resolve the same with memp_trickle. I had these queries.
1. what was the % of clean pages that you specified?
2. At what interval was this thread calling memp_trickle?
This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
Regards,
Nishith.
>
2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use them are generated. This improved speed from 4K rec/sec to 14K rec/sec.
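The second technique can be illustrated with a small stand-alone sketch (std::map/std::multimap standing in for the primary and secondary databases; the real code would use Db::associate). Maintaining a secondary index per insert costs one random secondary write per Db::put; building it once at the end, as Db::associate with DB_CREATE does, is a single sequential pass over the primary.

```cpp
#include <map>
#include <string>

struct Record {
    int id;
    std::string city;
};

// Primary "database": id -> record.
using Primary = std::map<int, Record>;

// Secondary "database": city -> id. This is the index that Db::associate
// would otherwise keep up to date on every Db::put.
using Secondary = std::multimap<std::string, int>;

// Deferred build, analogous to calling Db::associate with DB_CREATE at the
// end of the load: one bulk pass over the primary instead of one random
// secondary insert per record during the load.
Secondary build_secondary(const Primary& primary) {
    Secondary secondary;
    for (const auto& kv : primary)
        secondary.emplace(kv.second.city, kv.first);
    return secondary;
}
```

The trade-off is that the secondary is unavailable (and stale if queried) until the bulk build runs, which is fine here because the reports that need it only run at the end.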
Changing the length of a key field in a table
Hi,
I want to increase the length of the field from 2 to 4 in a standard SAP table and deliver it to the customers. This field is a key field in table. This field from this table is also used in view and view clusters.
What are the implications of this change for the customers? The customers will already have data in this field and they must not lose any of it. Will the existing data remain at length 2, or do they have to run a conversion?
Regards,
Srini.
Edited by: Srinivasa Raghavachar on Feb 7, 2008 12:45 PM

hi,
The database table can be adjusted to the changed definition in the ABAP Dictionary in three different ways:
1. By deleting the database table and creating it again. The table on the database is deleted, the inactive table is activated in the ABAP Dictionary, and the table is created again on the database. Any data existing in the table is lost.
2. By changing the database catalog (ALTER TABLE). The definition of the table on the database is simply changed. Existing data is retained. However, indexes on the table might have to be rebuilt.
3. By converting the table. This is the most time-consuming way to adjust a structure.
If the table does not contain any data, it is deleted in the database and created again with its new
structure. If data exists in the table, there is an attempt to adjust the structure with ALTER TABLE. If the
database system used is not able to do so, the structure is adjusted by converting the table.
The following example shows the steps necessary during a conversion.
Starting situation: Table TAB was changed in the ABAP Dictionary. The length of field 3 was reduced from 60 to 30 places.
Before: Field 1 NUMC 6, Field 2 CHAR 8, Field 3 CHAR 60
After:  Field 1 NUMC 6, Field 2 CHAR 8, Field 3 CHAR 30
The ABAP Dictionary therefore has an active version of the table (field 3 has a length of 60 places) and an inactive version (field 3 now has 30 places).
The active version of the table was created in the database, which means that field 3 currently has 60
places in the database. A secondary index with the ID A11, which was also created in the database, is
defined for the table in the ABAP Dictionary.
The table already contains data.
Step 1: The table is locked against further structure changes. If the conversion terminates due to an
error, the table remains locked. This lock mechanism prevents further structure changes from being
made before the conversion has been completed correctly. Data could be lost in such a case.
Step 2: The table in the database is renamed. All the indexes on the table are deleted. The name of the new (temporary) table is formed from the prefix QCM and the table name; the temporary table for table TAB is therefore QCMTAB.
Step 3: The inactive version of the table is activated in the ABAP Dictionary. The table is created on the
database with its new structure and with the primary index. The structure of the database table is the
same as the structure in the ABAP Dictionary after this step. The database table, however, does not
contain any data.
The system also tries to set a database lock for the table being converted. If the lock is set, application
programs cannot write to the table during the conversion.
The conversion continues, however, even if the database lock cannot be set. In that case application programs can write to the table, and since not all of the data might have been loaded back into the table yet, the table data can become inconsistent.
You should therefore always make sure that no applications access the table being converted
during the conversion process.
Step 4: The data is loaded back from the temporary table (QCM table) into the new table (with MOVE-CORRESPONDING).
The data exists in the database table and in the temporary table after this step.
When you reduce the size of fields, for example, the extra places are truncated when you reload the
data.
Since the data exists in both the original table and temporary table during the conversion, the storage
requirements increase during the process. You should therefore verify that sufficient space is available in
the corresponding tablespace before converting large tables.
There is a database commit after every 16 MB when the data is copied from the QCM table to the original table. A conversion process therefore needs about 16 MB of rollback segment space. The existing database lock is released with the commit and then requested again before the next data area is converted.
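The copy loop just described can be sketched as follows. This is a hedged illustration, not SAP code: Row and copy_with_commits are hypothetical names, and the "commit" is simulated by a counter, but the batching logic mirrors the conversion's behavior of committing after each 16 MB portion.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical record: only its size matters for the batching logic.
struct Row {
    std::size_t bytes;
};

// Copy rows from the QCM table to the new table, "committing" after every
// commit_every bytes, as the conversion commits after each 16 MB portion.
// Each commit releases the database lock (and frees rollback space); the
// lock is re-requested before the next portion. Returns the commit count.
int copy_with_commits(const std::vector<Row>& qcm,
                      std::vector<Row>& target,
                      std::size_t commit_every) {
    std::size_t pending = 0;
    int commits = 0;
    for (const Row& r : qcm) {
        target.push_back(r);          // MOVE-CORRESPONDING of one record
        pending += r.bytes;
        if (pending >= commit_every) {
            ++commits;                // COMMIT: lock released, rollback freed
            pending = 0;              // lock re-requested for the next area
        }
    }
    if (pending > 0) ++commits;       // final commit for the remainder
    return commits;
}
```

The portion-wise commit is why only about 16 MB of rollback space is needed regardless of table size, and why other sessions can briefly acquire the lock between portions.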
When you reduce the size of keys, only one record can be reloaded if there are several records whose
key cannot be distinguished. It is not possible to say which record this will be. In such a case you should
clean up the data of the table before converting.
Step 5: The secondary indexes defined in the ABAP Dictionary for the table are created again.
Step 6: The temporary table (QCM table) is deleted.
Step 7: The lock set at the beginning of the conversion is deleted.
If the conversion terminates, the table remains locked and a restart log is written.
Caution: The data of a table is not consistent during conversion. Programs therefore should not access
the table during conversion. Otherwise a program could for example use incorrect data when reading the
table since not all the records were copied back from the temporary table. Conversions therefore
should not run during production! You must at least deactivate all the applications that use tables to
be converted.
You must clean up terminated conversions. Programs that access the table might otherwise run
incorrectly. In this case you must find out why the conversion terminated (for example overflow of the
corresponding tablespace) and correct it. Then continue the terminated conversion.
Since the data exists in both the original table and temporary table during conversion, the storage
requirements increase during conversion. If the tablespace overflows when you reload the data from the
temporary table, the conversion will terminate. In this case you must extend the tablespace and start the
conversion in the database utility again.
If you shorten the key of a table (for example when you remove or shorten the field length of key fields),
you cannot distinguish between the new keys of existing records of the table. When you reload the data
from the temporary table, only one of these records can be loaded back into the table. It is not possible
to say which record this will be. If you want to copy certain records, you have to clean up the table
before the conversion.
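The key-shortening hazard above can be demonstrated with a small sketch (hypothetical helper, std::map standing in for the database table): when two old keys collapse to the same shortened key, only one record survives the reload, and which one is not defined.

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Simulate reloading records after the key field was shortened: each key is
// truncated to new_key_len, and when several old keys collapse to the same
// new key only one record survives. In this sketch the last row wins, but
// the real conversion gives no guarantee about which record is kept.
std::map<std::string, std::string>
reload_with_shorter_key(
        const std::vector<std::pair<std::string, std::string>>& rows,
        std::size_t new_key_len) {
    std::map<std::string, std::string> table;
    for (const auto& r : rows)
        table[r.first.substr(0, new_key_len)] = r.second;  // overwrites dupes
    return table;
}
```

This is why the text recommends cleaning up the table data before the conversion: deduplicate on the future key yourself, so you decide which records survive instead of the conversion deciding arbitrarily.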
During a conversion, the data is copied back to the database table from the temporary table with the
ABAP statement MOVE-CORRESPONDING. Therefore only those type changes that can be executed
with MOVE-CORRESPONDING are allowed. All other type changes cause the conversion to be
terminated when the data is loaded back into the original table. In this case you have to recreate the old
state prior to conversion. Using database tools, you have to delete the table, rename the QCM table to
its old name, reconstruct the runtime object (in the database utility), set the table structure in the
Dictionary back to its old state and then activate the table.
If a conversion terminates, the lock entry for the table set in the first step is retained. The table can no
longer be edited with the maintenance tools of the ABAP Dictionary (Transaction SE11).
A terminated conversion can be analyzed with the database utility (Transaction SE14) and then
resumed. The database utility provides an analysis tool with which you can find the cause of the error
and the current state of all the tables involved in the conversion.
You can usually find the precise reason for termination in the object log. If the object log does not
provide any information about the cause of the error, you have to analyze the syslog or the short dumps.
If there is a terminated conversion, two options are displayed as pushbuttons in the database utility:
After correcting the error, you can resume the conversion where it terminated with the Continue
adjustment option.
There is also the Unlock table option. This option only deletes the existing lock entry for the table.
You should never choose Unlock table for a terminated conversion if the data only exists in the
temporary table, i.e. if the conversion terminated in step 3 or 4.
Hope this is helpful. Do reward. -
Subsequent update of record, long time to appear in Journalized View
Hi,
I'm running some integration tests that insert into a source table, commit the insert, then update that record and commit the update. The CSCN numbers are widely spaced; for example, the insert is 69997742 and the update is 70000579. I have a scheduled scenario running every few minutes that, as a first step, extends the window and locks the subscriber.
What I'm seeing is that the insert gets propagated to the target immediately, but then the CSCN number doesn't change for a long time, and at some random point in the future it is updated and the update record makes it through to the target. I'm seeing time differences of 7-11 minutes between the arrival of the insert at the target and the arrival of the update.
Does anyone know how to decrease this "latency", is there a way of speeding up the time it takes for the cscn number to increment?
I found the following two tuning commands and have tried them, but I'm still seeing a long period of time between the insert and update,
begin dbms_capture_adm.alter_capture(capture_name=>'CDC$C_SAMS_INTEG', checkpoint_retention_time => 7); end;
begin dbms_capture_adm.set_parameter('CDC$C_SAMS_INTEG', 'parallelism','4'); end;
Any ideas would be appreciated!
Cheers
Damian

We temporarily solved the problem by switching to synchronous CDC. When we ran a performance analyzer over the database while data capture was taking place, we found LogMiner was taking around 6 minutes to query the data dictionary, which looked like this bug: https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=458214.1 which is apparently fixed. So either the bug isn't fixed or we haven't configured the database correctly. Does anyone know a good tutorial for getting the right configuration for archive logging, the number of redo logs and their size, retention policies, etc.?