Performance Tuning Memory
I have an application that I expect will push the bounds of the process memory limit on Windows. I have done a lot of optimization to make sure that I am using as little memory as possible. I expect it to be around 650M given my current data set.
When I run the application, I have to bump the heap up to almost twice that size so that I don't run out of memory. I believe that the virtual machine is allocating memory for the different generations in order to prepare to garbage collect all of these objects. The objects will never need to be garbage collected, though, because they will live for the life of the app. I've looked through the garbage collection FAQs and documents, and none of the VM parameters have enabled me to bring the size of the app down to the size I expect it to be.
Any ideas?
My last resort is to write it in C++ so I have complete control, but I would much rather have this written in Java.
Michael Connor
Hi,
I don't have an answer to your problem, but a tip: go and look for "jvmstat", a GC and memory visualisation tool. It will help you find out what's going on in your VM.
Then you can try to tweak your JVM with all those pretty -XX options.
Documentation starting point: http://java.sun.com/docs/performance/
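As an illustration only (the flag values below are hypothetical and need tuning against jvmstat output), pinning the heap and shrinking the young generation can stop the VM from reserving extra copy space for long-lived objects:

```sh
# Hypothetical example - tune the numbers against your own measurements.
java -Xms700m -Xmx700m \
     -XX:NewSize=32m -XX:MaxNewSize=32m \
     -verbose:gc \
     MyApp    # MyApp stands in for your real main class
```

Since the objects live for the life of the app, they all end up in the old generation anyway; a small young generation means less memory set aside for copying collections.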
Peter
Similar Messages
-
Oracle Memory Issue/ performance tuning
I have Oracle 9i running on a Windows 2003 server. 2 GB of memory is allocated to the Oracle DB (even though the server has 14 GB of memory).
Recently, the Oracle process has been slow running queries.
I ran the Windows Task Manager. Here are the numbers that I see:
Mem usage: 556660k
page Faults: 1075029451
VM size: 1174544 K
I am not sure how to analyze this data. Why is the page fault count so huge, and why is Mem usage half of VM size?
How can I do performance tuning on this box?
I'm having a similar issue with Oracle 10g R2 64-bit on Windows 2003 x64. Performance on complicated queries is abysmal because [I think] most of the SGA is sitting in a page file, even though there is plenty of physical RAM to be had. Performance on simple queries is probably bad also, but it's not really noticeable. Anyway, page faults skyrocket when I hit the "go" button on big queries. Our legacy system runs our test queries in about 5 minutes, but the new system takes at least 30 if not 60. The new system has 24 gigs of RAM, but at this point I'm only allocating 1 gig to the SGA and 1/2 gig to the PGA. Windows reports oracle.exe has 418,000K in RAM and 1,282,000K in the page file (I rounded a bit). When I had the PGA set to 10 gigs, the page usage jumped to over 8 gigs.
I tried adding ORA_LPENABLE=1 to the registry, but this issue seems to be independent. Interestingly, the amount of RAM taken by oracle.exe goes down a bit (to around 150,000K) when I do this. I also added "everyone" to the security area "lock pages in memory", but again, this is probably unrelated.
I did an OS datafile copy and cloned the database to a 32-bit windows machine (I had to invalidate and recompile all objects to get this to work), and this 32-bit test machine now has the same problem.
Any ideas? -
hi,
I have to do performance tuning for one program; it is taking 67000 secs in background execution and 1000 secs for some variants. It is an ALV report.
Please suggest how to proceed to change the code.
Performance tuning for Data Selection Statement
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
CATT - Computer Aided Testing Tool
http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
Test Workbench
http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
ECATT - Extended Computer Aided testing tool.
http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
Just refer to these links...
You can go to transaction SE30 to get a runtime analysis of your program. Also try transaction SCI, which is the SAP Code Inspector. -
Performance tuning in EP6 SP14
Hi,
We just migrated our development EP5 SP6 portal to NW04 EP6 SP14 and are seeing some performance problems with a limited number of users (about 3).
Please point me to some good clear documentation related to performance tuning. Better yet, please tell me some things you have done in your EP6 portal to improve performance.
Any help is greatly appreciated.
Regards,
Rick
Hi,
In general we have experienced that NW04 is faster than earlier versions.
One of the big questions is if the response time is SERVER time (used for processing on the J2EE server), NETWORK time (latency and many roundtrips), CLIENT time (rendering, virus-scanning etc) or a combination.
1) Is the capacity on the server OK? Are CPU utilization and queue length high? Is memory swapping?
2) A quick server-side optimization is logging: please verify that log and trace levels are ERROR/FATAL or NONE on J2EE, and also avoid logging on IIS if an IIS proxy is used.
3) Using an HTTP trace you can see if the NW portal for some reason generates more roundtrips (more GETs) than the old portal for the same content. What is the network latency? Try to do a "ping <server>" from a client PC and look at the time (latency); if it is below 20 ms, the network shouldn't make a big difference.
4) On the client, try to call the portal with anti-virus de-activated; if the delta/difference in response times is HUGE, your client could be too old (don't underestimate the client). Maybe the compression should be set differently to avoid compressing (server) and uncompressing (client) - this is a tradeoff with network latency, however.
Also client-side caching (in browser) is important.
These are just QUICK points to consider - tuning is complex.
BR
Tom Bo -
Hello All,
We have created some reports using Interactive Reporting Studio. The volume of data in that Oracle database is huge, and some tables of the relational database have above 3-4 crore rows individually. We have created the .oce connection file using the 'Oracle Net' option. The Oracle client version is 10g. We earlier created pivots, charts and reports in those .bqy files but had to delete those wherever possible to decrease the processing time for getting the reports generated.
But deleting those from the file and retaining just the results section (the bare minimum part of the file) has not yet fully solved the performance issue. Still, in some reports, the system gives the error message 'Out of Memory' at the time of processing. The memory of the client PCs from which the reports are being generated is 1 - 1.5 GB. For some reports, it even takes 1-2 hours to save the results after processing. In some cases, the PCs hang at the time of processing. When we extract the query of those reports into SQL and run them in TOAD/SQL*Plus, they do not take nearly as much time as IR.
Would you please help us out with the aforesaid issue ASAP? Please share your views/tips/suggestions etc. in respect of performance tuning for IR. All replies would be highly appreciated.
Regards,
Raj
SQL*Plus & Toad are tools that send SQL and spool results; IR is a tool that sends a request to the database to run SQL and then fiddles with the results before the user is even told data has been received. You need to minimize the time spent by IR manipulating results into objects the user isn't even asking for.
When a request is made to the database, Hyperion will wait until all of the results have been received. Once ALL of the results have been received, IR will make multiple passes to apply sorts, filters and computed items existing in the results section. For some unknown reason, those three steps are performed more inefficiently than they would be in a table section. Only after all of the computed items have been calculated, all filters applied and all sorts sorted will IR start to calculate any reports, charts and pivots. After all that is done, the report stops processing and the data has been "returned".
To increase performance, you need to fine-tune your IR services and your BQY docs. Replicate your DAS on your server - it can only transfer 2 GB before it dies, restarts, and your requested document hangs. You can replicate the DAS multiple times, and should do so, to make sure there are enough resources available for any concurrent users to make requests and have data delivered to them.
To tune your bqy documents...
1) Your Results section MUST be free of any sorts, filters, or computed items. Create a staging table and put any sorts or local filters there. Move as many of your computed items to your database request line and ask the database to make the calculation (either directly or through stored procedures) so you are not at the mercy of the client machine. Any computed items that cannot be moved to the request line, need to be put on your new staging table.
2) Ask the users to choose filters. Programmatically build dynamic filters based on what the user is looking for. The goal is to cast a net only as big as the user needs so you are not bringing back unnecessary data. Otherwise, you will bring your server and client machines to a grinding halt.
3) Halt any report pagination. Build your reports from their own tables and put a dummy filter on each table that forces 0 rows until the report is invoked. Hyperion will paginate every report BEFORE it even tells the user it has results, so this will prevent the user from waiting an hour while 1000s of pages are paginated across multiple reports.
4) Halt any object rendering until requested. Same as above - create a system programmatically for the user to tell the bqy what they want, so they are not waiting forever for a pivot and 2 reports to compile and paginate when they want just a chart.
5) Save compressed documents.
6) Unless this document can be run as a job, there should be NO results stored with the document but if you do save results with the document, store the calculations too so you at least don't have to wait for them to pass again.
7) Remove all duplicate images and keep the image file size small.
Hope this helps!
PS: I forgot to mention - aside from results sections, in documents where the results are NOT saved, additional table sections take up very, very small bits of file size and, as long as there are no excessively large images, the same is true for reports, pivots and charts. Additionally, the impact of file size only matters when the user is requesting the document. The file size is never an issue when the user is processing the report, because it has already been delivered to them and cached (in Workspace and in the web client).
Edited by: user10899957 on Feb 10, 2009 6:07 AM -
We have to investigate a reporting solution where things are getting slow (it may be hardware, database design, or network matters).
I have read a lot in MSDN and some books about performance tuning on SQL Server 2008 R2 (or other), but frankly, I feel a little lost in all that stuff.
I'm looking for practical steps for doing the tuning. Does someone have a recipe for that - a success story?
My (brainstorm) methodology would follow these steps:
Resource bottlenecks: CPU, memory, and I/O bottlenecks
tempdb bottlenecks
A slow-running user query : Missing indexes, statistics,...
Use performance counters: there are many - can someone give us a list of the most important ones?
How to do fine tuning of the SQL Server configuration
SSRS, SSIS configuration?
And then make the recommendations.
Thanks
"there is no Royal Road to Mathematics, in other words, that I have only a very small head and must live with it..."
Edsger W. Dijkstra
Hello,
There is no clearly defined step-by-step recipe for performance tuning. Your first goal is to find the cause - drill down to the factor making SQL Server slow: it can be a poorly written query, missing indexes, outdated stats, a RAM crunch, a CPU crunch, and so on and so forth.
I generally refer to the doc below for SQL Server tuning:
http://technet.microsoft.com/en-us/library/dd672789(v=sql.100).aspx
For SSIS tuning I refer to the docs below:
http://technet.microsoft.com/library/Cc966529#ECAA
http://msdn.microsoft.com/en-us/library/ms137622(v=sql.105).aspx
When I face an issue I generally look at wait stats; they give you an idea of which resource a query was waiting on.
-- By Jonathan Kehayias
SELECT TOP 10
wait_type ,
max_wait_time_ms wait_time_ms ,
signal_wait_time_ms ,
wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms ,
100.0 * wait_time_ms / SUM(wait_time_ms) OVER ( )
AS percent_total_waits ,
100.0 * signal_wait_time_ms / SUM(signal_wait_time_ms) OVER ( )
AS percent_total_signal_waits ,
100.0 * ( wait_time_ms - signal_wait_time_ms )
/ SUM(wait_time_ms) OVER ( ) AS percent_total_resource_waits
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0 -- remove zero wait_time
AND wait_type NOT IN -- filter out additional irrelevant waits
( 'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
'SQLTRACE_BUFFER_FLUSH','CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
'LAZYWRITER_SLEEP', 'SLEEP_SYSTEMTASK', 'SLEEP_BPOOL_FLUSH',
'BROKER_EVENTHANDLER', 'XE_DISPATCHER_WAIT', 'FT_IFTSHC_MUTEX',
'CHECKPOINT_QUEUE', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
'BROKER_TRANSMITTER', 'FT_IFTSHC_MUTEX', 'KSOURCE_WAKEUP',
'LAZYWRITER_SLEEP', 'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BAD_PAGE_PROCESS',
'DBMIRROR_EVENTS_QUEUE', 'BROKER_RECEIVE_WAITFOR',
'PREEMPTIVE_OS_GETPROCADDRESS', 'PREEMPTIVE_OS_AUTHENTICATIONOPS',
'WAITFOR', 'DISPATCHER_QUEUE_SEMAPHORE', 'XE_DISPATCHER_JOIN',
'RESOURCE_QUEUE' )
ORDER BY wait_time_ms DESC
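Aside: the arithmetic in the SELECT list above - splitting each wait into a signal (CPU-queue) component and a resource component, and expressing it as a share of the total - can be sketched in isolation (the wait figures below are made up, not real DMV output):

```python
def split_waits(waits):
    """waits: {wait_type: (wait_time_ms, signal_wait_time_ms)}.
    Return, per wait type, the resource portion of the wait
    (total minus signal) and its percent of all wait time."""
    total = sum(wait_ms for wait_ms, _ in waits.values())
    out = {}
    for wait_type, (wait_ms, signal_ms) in waits.items():
        out[wait_type] = {
            "resource_wait_time_ms": wait_ms - signal_ms,
            "percent_total_waits": 100.0 * wait_ms / total,
        }
    return out

# Made-up sample: I/O waits dominate, with a parallelism wait second.
sample = {"PAGEIOLATCH_SH": (6000, 500),
          "CXPACKET": (3000, 1500),
          "WRITELOG": (1000, 100)}
print(split_waits(sample)["PAGEIOLATCH_SH"])
# resource wait 5500 ms, 60% of total wait time
```

A high signal share across the board points at CPU pressure rather than the named resources.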
Use the link below to analyze wait stats:
http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
HTH
PS: for reporting services you can post in SSRS forum
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
hi,
I have developed a report program. It's taking too much time to fetch the records, so what steps do I have to consider to improve the performance? Urgent, please.
Hi,
Check these links:
Performance tuning for Data Selection Statement & Others
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
http://www.sapdevelopment.co.uk/perform/performhome.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_Introduction.asp
1. Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
2. Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
3. SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
6. Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
7. Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
8. Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
http://sap.genieholdings.com/abap/performance.htm
http://www.dbis.ethz.ch/research/publications/19.pdf
Reward Points if it is Useful.
Thanks,
Manjunath MS -
HI Experts,
Can someone help me tune an Oracle BPM engine 11g (11.1.1.5)?
Any tuning recommendation for a mid-size engine
(ideally from someone who really implemented it and got a significant performance improvement)?
We tried almost everything, but the engine is really slow with 5 concurrent users.
Please don't give a URL to the middleware performance tuning guide.
Configuration:
Hardware
Sparc-t3-2
1.65 GHz
2 Core virtual
Memory 16 GB
Setting - Total 2 containers: 1 managed server each, 1 Admin.
Heap space 4 GB, Perm space 1 GB
Client :
50 Concurrent User.
10 BPMN Process (each process got around 10-12 activity), 8 BPEL Process,
1000-1500 live instances as of now. Need to cater for more in future (around 50,000).
Oracle Support Document 1456488.1 (Slow startup of WebLogic Servers on SPARC Series: T1000, T2000, T5xx0 and T3-*) can be found at:
https://support.oracle.com/epmos/faces/ui/km/DocumentDisplay.jspx?id=1456488.1 -
Hi All,
I was given a task to tune an Oracle 10g database. I am really new to memory tuning, although I have done some SQL tuning earlier. My server is in a remote location and I cannot log in to the Enterprise Manager GUI. I will be using SQL Developer or PL/SQL Developer for this. My application is a web-based application.
I have following queries with this respect:
- How should I start... Should I use tkprof or AWR.
- How to enable these tools.
- How to view its reports
- What should I check in these reports
- Will just increasing RAM improve performance, or should we also increase the hard disk?
- What are CPU cost and I/O?
Please help.
Thanks & Regards.
dbdan wrote:
Hi All,
I had given a task to tune oracle 10g database. I am really new in memory tuning although I had some SQL Tuning earlier. My server is in remote location and I can not login to Enterprise Manager GUI. I will be using SQL Developer or PL/SQL Developer for this. My application is web based application.
I have following queries with this respect:
- How should I start... Should I use tkprof or AWR.
- How to enable these tools.
- How to view its reports
- What should I check in these reports
- Will just increasing RAM improves performance or should we also increase Hard Disk?
- What is CPU Cost and I/O?
Please help.
Thanks & Regards.
Here is something you might try as a starting point:
Capture the output of the following (to a table, send to Excel, or spool to a file):
SELECT
STAT_NAME,
VALUE
FROM
V$OSSTAT
ORDER BY
STAT_NAME;
SELECT
STAT_NAME,
VALUE
FROM
V$SYS_TIME_MODEL
ORDER BY
STAT_NAME;
SELECT
EVENT,
TOTAL_WAITS,
TOTAL_TIMEOUTS,
TIME_WAITED
FROM
V$SYSTEM_EVENT
WHERE
WAIT_CLASS != 'Idle'
ORDER BY
EVENT;
Wait a known amount of time (5 minutes or 10 minutes).
Execute the above SQL statements again.
Subtract the starting values from the ending values, and post the results for any items where the difference is greater than 0. The Performance Tuning Guide (especially the 11g version) will help you understand what each item means.
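The subtraction step can be sketched as follows (the stat names and values below are placeholders; in practice the two snapshots come from the queries above):

```python
def snapshot_delta(start, end):
    """Return only the stats whose value increased between two snapshots."""
    return {name: end[name] - start[name]
            for name in end
            if name in start and end[name] - start[name] > 0}

# Placeholder snapshots taken some known interval apart.
start = {"BUSY_TIME": 1000, "IDLE_TIME": 5000, "DB CPU": 200}
end   = {"BUSY_TIME": 1600, "IDLE_TIME": 5000, "DB CPU": 350}
print(snapshot_delta(start, end))  # {'BUSY_TIME': 600, 'DB CPU': 150}
```

Stats whose delta is zero drop out, which is exactly the "post only where the difference is greater than 0" filter.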
To repeat what Ed stated, do not randomly change parameters (even if someone claims that they have successfully made the parameter change 100s of times).
You could also try a Statspack report, but it might be better to start with something which produces less than 70 pages of output.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
I moved our internal application from a P3 500 MHz 1U server to a dual-core Xeon 1.6 GHz server.
The newer box has a larger HD, more RAM, and a beefier CPU, yet my application is running at the same speed or a smidgen faster than on the old server. Oh, I went from a Win2k, IIS 5.0 box to Windows Server 2003 running IIS 6, MySQL 5.0.
I did a Windows Performance Monitor look-see. I added all the CF services and watched the graphs do their thing. I noticed that after I ran a query on my application, the jrun service Avg. Req Time is pinned at 100%. Even several minutes after inactivity on the box, this process is still at 100%. Anyone know if this is causing my performance woes? Anyone know what else I can look at to increase the speed of the app (barring a serious code rebuild)?
Thanks!
chris
Anyone know what else I can look at to increase the speed of the app (barring a serious code rebuild)?
There are some tweaks and tips, but I will let more knowledgeable folks speak to those. But I am afraid your problem may be a code issue. From the brief description, I would be concerned that there is some kind of memory leak happening in your code, and until you track it down and plug it up, no amount of memory or performance tuning is going to do you much good.
Performance Tuning for ECC 6.0
Hi All,
I have an ECC 6.0 / EP 7.0 (ABAP+JAVA) system. My system is very slow. I have Oracle 10.2.0.1.
Can you please guide me on how to do these steps in the system:
1) Reorganization should be done at least for the top 10 huge tables and their indexes
2) Unaccessed data can be taken out by SAP archiving
3) Apply the relevant corrections for top SAP standard objects
4) CBO update statistics must be the latest for all SAP and customer objects
I have never done performance tuning and want to do it on this system.
Regards,
Jitender
Hi,
Below are the details from ST06. Please suggest what I should do; the system performance is very bad.
I require your inputs for performance tuning.
CPU
Utilization user % 3 Count 2
system % 3 Load average 1min 0.11
idle % 1 5 min 0.21
io wait % 93 15 min 0.22
System calls/s 982 Context switches/s 1752
Interrupts/s 4528
Memory
Physical mem avail Kb 6291456 Physical mem free Kb 93992
Pages in/s 473 Kb paged in/s 3784
Pages out/s 211 Kb paged out/s 1688
Pool
Configured swap Kb 26869896 Maximum swap-space Kb 26869896
Free in swap-space Kb 21631032 Actual swap-space Kb 26869896
Disk with highest response time
Name md3 Response time ms 51
Utilization 2 Queue 0
Avg wait time ms 0 Avg service time ms 51
Kb transfered/s 2 Operations/s 0
Current parameters in the system
System: sapretail_RET_01 Profile Parameters for SAP Buffers
Date and Time: 08.01.2009 13:27:54
Buffer Name Comment
Profile Parameter Value Unit Comment
Program buffer PXA
abap/buffersize 450000 kB Size of program buffer
abap/pxa shared Program buffer mode
|
CUA buffer CUA
rsdb/cua/buffersize 3000 kB Size of CUA buffer
The number of max. buffered CUA objects is always: size / (2 kB)
|
Screen buffer PRES
zcsa/presentation_buffer_area 4400000 Byte Size of screen buffer
sap/bufdir_entries 2000 Max. number of buffered screens
|
Generic key table buffer TABL
zcsa/table_buffer_area 30000000 Byte Size of generic key table buffer
zcsa/db_max_buftab 5000 Max. number of buffered objects
|
Single record table buffer TABLP
rtbb/buffer_length 10000 kB Size of single record table buffer
rtbb/max_tables 500 Max. number of buffered tables
|
Export/import buffer EIBUF
rsdb/obj/buffersize 4096 kB Size of export/import buffer
rsdb/obj/max_objects 2000 Max. number of objects in the buffer
rsdb/obj/large_object_size 8192 Bytes Estimation for the size of the largest object
rsdb/obj/mutex_n 0 Number of mutexes in Export/Import buffer
|
OTR buffer OTR
rsdb/otr/buffersize_kb 4096 kB Size of OTR buffer
rsdb/otr/max_objects 2000 Max. number of objects in the buffer
rsdb/otr/mutex_n 0 Number of mutexes in OTR buffer
|
Exp/Imp SHM buffer ESM
rsdb/esm/buffersize_kb 4096 kB Size of exp/imp SHM buffer
rsdb/esm/max_objects 2000 Max. number of objects in the buffer
rsdb/esm/large_object_size 8192 Bytes Estimation for the size of the largest object
rsdb/esm/mutex_n 0 Number of mutexes in Exp/Imp SHM buffer
|
Table definition buffer TTAB
rsdb/ntab/entrycount 20000 Max. number of table definitions buffered
The size of the TTAB is nearly 100 bytes * rsdb/ntab/entrycount
|
Field description buffer FTAB
rsdb/ntab/ftabsize 30000 kB Size of field description buffer
rsdb/ntab/entrycount 20000 Max. number / 2 of table descriptions buffered
FTAB needs about 700 bytes per used entry
|
Initial record buffer IRBD
rsdb/ntab/irbdsize 6000 kB Size of initial record buffer
rsdb/ntab/entrycount 20000 Max. number / 2 of initial records buffered
IRBD needs about 300 bytes per used entry
|
Short nametab (NTAB) SNTAB
rsdb/ntab/sntabsize 3000 kB Size of short nametab
rsdb/ntab/entrycount 20000 Max. number / 2 of entries buffered
SNTAB needs about 150 bytes per used entry
|
Calendar buffer CALE
zcsa/calendar_area 500000 Byte Size of calendar buffer
zcsa/calendar_ids 200 Max. number of directory entries
|
Roll, extended and heap memory EXTM
ztta/roll_area 3000000 Byte Roll area per workprocess (total)
ztta/roll_first 1 Byte First amount of roll area used in a dialog WP
ztta/short_area 3200000 Byte Short area per workprocess
rdisp/ROLL_SHM 16384 8 kB Part of roll file in shared memory
rdisp/PG_SHM 8192 8 kB Part of paging file in shared memory
rdisp/PG_LOCAL 150 8 kB Paging buffer per workprocess
em/initial_size_MB 4092 MB Initial size of extended memory
em/blocksize_KB 4096 kB Size of one extended memory block
em/address_space_MB 4092 MB Address space reserved for ext. mem. (NT only)
ztta/roll_extension 2000000000 Byte Max. extended mem. per session (external mode)
abap/heap_area_dia 2000000000 Byte Max. heap memory for dialog workprocesses
abap/heap_area_nondia 2000000000 Byte Max. heap memory for non-dialog workprocesses
abap/heap_area_total 2000000000 Byte Max. usable heap memory
abap/heaplimit 40000000 Byte Workprocess restart limit of heap memory
abap/use_paging 0 Paging for flat tables used (1) or not (0)
|
Statistic parameters
rsdb/staton 1 Statistic turned on (1) or off (0)
rsdb/stattime 0 Times for statistic turned on (1) or off (0)
Regards,
Jitender -
LDAP Performance Tuning In Large Deployments - dir_chkcredentialsonreadonly parameter
Calendar users are experiencing long delays when logging in or updating a meeting with many attendees or dates. This is especially noticeable after migrating from Calendar Server 1.0x to Calendar Server 3.x.
At this time, calendar performance can be improved by up to 30% by reconfiguring the calendar server to bind to its directory server either anonymously or as a specific user. The default is to bind as the user requesting directory information.
There is a parameter that can be added to the server configuration file by editing the /users/unison/misc/unison.ini file. For anonymous binds, add the dir_chkcredentialsonreadonly line to the [DAS] section:
[DAS]
dir_updcalonly = TRUE
dir_connection = persistent
dir_service = LDAP,v2,NSCP,1
enable = TRUE
dir_chkcredentialsonreadonly = FALSE
For binding as a specific user, also add the following to the LDAP,..,...,,
section:
[LDAP...]
binddn = dn
bindpwd = password
Going forward, we are working on other changes in the next versions of the
Calendar Client and Calendar Server to improve performance. Please check with
your Netscape Sales contact for announcements on the availability of these
versions.
Thank you very much. I am looking now for a good performance tuning book written by Jonathan Lewis. I don't think Jonathan can come to Spain and give lessons... Anyway, I will email him...
But could you please clarify 2 points for me:
1- Should I manually modify memory parameters like the buffer cache, shared pool, large pool etc. if those areas are spotted as small, and as causes of performance problems, in the AWR, ADDM or ASH reports, even if the memory is automatically managed? If yes, why did Oracle name it "automatic memory management" if I have to set some memory values manually?
2- When an ADDM report suggests that I increase the SGA size, where did ADDM get this recommendation from? I mean, is the recommendation based on statistics collected from both Oracle and the OS? I am asking because, in a report I ran 3 weeks ago, ADDM suggested I increase the SGA to 10GB (total memory of the server is 16GB). I made the change, and from that moment the server is swapping... and now the ADDM report suggests I increase the SGA again, to 12GB.
Best regards -
LDAP Performance Tuning In Large Deployments - numconnect parameter
Tuning the LDAP connections
(numconnect parameter)
This parameter translates directly into the number of unidas processes that will be launched when Calendar Server is started. A process takes time to load, uses RAM and, when active, CPU cycles. And unidas maintains an LDAP client connection to a Directory Server, which can only support a fixed number of these connections. Since a calendar client does not require constant directory access, having a matching number of unidas processes (to match the uniengd "client" processes) is not a good configuration.
Basically, a calendar client will make many requests for LDAP information, even if the event information being retrieved is not currently viewable. For example, if the calendar client is displaying a week view with 20 events and each event has 5 attendees, that translates into at least 100 separate LDAP search requests for the given name and surname of each attendee. What this means is that an "active" calendar user will require the services of a calendar server unidas connection quite often.
The recommendation is that you increase the number of unidas connections to match the number of "active" calendar users. Our experience is that at least 20% of the number of configured users (lck_users in the /users/unison/misc/unison.ini file) are actually logged in, and 10% of those calendar users are active. For example, if you have 3000 configured calendar users, 600 are logged in, and 10% of the logged in are active, which translates into at least 60 unidas connections.
Keep in mind that configured vs logged in vs active might be different at each
customer site, so please adjust your number of unidas connections
accordingly. To set this up, edit the /users/unison/log/unison.ini file and add
the numconnect parameter to the section noted (where "hostname" is the name of
your local host):
[LCK]
lck_users = 600
[hostname,unidas]
numconnect = 60
The calendar server will need to be restarted after making changes
to the /users/unison/log/unison.ini file, before those changes will
take effect.
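As an aside, the sizing arithmetic above (20% of configured users logged in, 10% of those active, together with the 250-connection ceiling for Calendar Server 4.x noted below) can be sketched as:

```python
def unidas_connections(configured_users, logged_in_pct=0.20, active_pct=0.10,
                       cap=250):
    """Estimate unidas connections as the number of active users,
    never exceeding the Calendar Server 4.x ceiling of 250."""
    active = int(configured_users * logged_in_pct * active_pct)
    return min(active, cap)

print(unidas_connections(3000))   # 60, matching the worked example above
print(unidas_connections(50000))  # 1000 would be too many; capped at 250
```

The percentages are only defaults; adjust them to the configured vs. logged in vs. active ratios observed at each site.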
Note: Due to some architectural changes in the Calendar Server 4.x, the total
number of DAS connections should never be set higher than 250.
Recommendations for num_connect would be a maximum of 5% of logged on users.
However, keep in mind that 250 das connections is a very high number.
Example:
[LCK]
lck_users = 5000
[hostname,unidas]
numconnect = 250 -
Hi,
Can anyone guide me on how to do performance tuning in an ECC5,
MSSQL 2000, Windows 2003 system?
Here the RAM size is 3.5 GB.
Page file size was 9 GB (I increased that to 11.5 GB, i.e. 3*RAM + 1 GB).
I also increased the abap/buffersize to 400000.
Regards,
Arun
Hi Arun,
Go through the following links, which are very useful for understanding the parameter values.
Zero administration memory management
Regarding Virtual Memory?
Regards,
Hari.
PS: Points are welcome. -
Hello,
I'm working through the book Oracle Database 10g Performance Tuning Tips & Techniques by R. Niemiec. In chapter 15 there is a topic - Top 10 "Memory Abusers" as a Percent of All Statements - which I do not understand correctly.
Maybe someone has this book and can help me on this?
I executed the SQL statement given in this book and it gives me about 66 percent for the top ten statements. The example has 44 percent.
I don't understand the ranking given for the example; for the example the result is 60 points. Why?
I don't know if I'm allowed to paste the rating part here.
Greets,
Hannibal
I customized the query on 10g a little:
SELECT *
FROM (SELECT rank() over(ORDER BY buffer_gets DESC) AS rank_bufgets,
(100 * ratio_to_report(buffer_gets) over()) pct_bufgets,
sql_text,
executions,
disk_reads,
cpu_time,
elapsed_time
FROM v$sqlarea)
WHERE rank_bufgets < 11;
In my opinion, the above query helps find and prioritize the problematic queries run since the last database startup, so the magic number and the conditions are not so important :)
If you are on 10g and have the cost option, an AWR report for a problematic period will assist you for this purpose; before 10g, a STATSPACK report will. Also, after 10gR2, v$sqlstats is very useful. Search and check them according to your version here: http://tahiti.oracle.com
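For what it's worth, the ranking and ratio_to_report() arithmetic in that query amounts to the following (the statement rows below are made up for illustration):

```python
def top_n_by(rows, key, n=10):
    """Rank rows by one metric, attaching each row's percent of the
    column total (what ratio_to_report() computes in the SQL above)."""
    total = sum(row[key] for row in rows)
    ranked = sorted(rows, key=lambda row: row[key], reverse=True)
    return [dict(row, rank=i + 1, pct=100.0 * row[key] / total)
            for i, row in enumerate(ranked[:n])]

# Made-up v$sqlarea-style rows.
stmts = [{"sql": "q1", "buffer_gets": 700},
         {"sql": "q2", "buffer_gets": 200},
         {"sql": "q3", "buffer_gets": 100}]
top = top_n_by(stmts, "buffer_gets", n=2)
print(top[0])  # q1 ranks first with 70% of all buffer gets
```

Whether the top ten account for 44% or 66% of the total just reflects how skewed the workload is; the ranking logic is the same.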