T2000 consumer tuning.
Hello all.
I've got a couple of T2000s in for LDAP performance testing. Apart from IP stack and memory tuning, I'm thinking I should also look at max-thread tuning. Somewhere in the back of my mind I'm sure there are some gotchas with T2000s. Am I right in assuming a 32-core T2000 should give best results if I set the maximum number of threads to, say, 30?
Any thoughts, please.
From the few tests that we've done with the T2000, the best results are achieved when the maximum number of threads is about twice the number of cores.
So for a T2000 with 32 cores, set the maximum number of working threads to 64.
Ludovic.
Similar Messages
-
Performance tuning in database side
Hi All,
I have a query which takes 2 hours to run. Since it is a vendor query, I am not supposed to tune the query itself.
The response time needs to be reduced. So far, I have done the following analysis:
1) All the indexes are working fine
2) Cost is high while sorting. Hence, I tried to increase sort_area_size at the session level. No benefit.
3) cursor_sharing set to SIMILAR at the session level, but no benefit. The default was EXACT.
Could anyone tell me which parameters in v$parameter need to be altered?
My DB is 9.2.0.8.0
Thanks in advance
Regards
A.Gopal
Before you do anything, I suggest proper root cause analysis. Will this sort_area_size setting at session level really be useful? See the template tuning threads:
How to post a SQL tuning request: https://forums.oracle.com/forums/thread.jspa?threadID=863295
When your query takes too long: https://forums.oracle.com/forums/thread.jspa?messageID=1812597
Only by knowing where the time is going and why can anyone suggest a course of action that is relevant.
Do an extended trace (event 10046) at level 12 and run the trace file through tkprof.
Identify waits and any significant differences between estimates and actuals.
If your actual execution metrics indicate that you are spending excessive time sorting or on temp space usage, then you can consider changing the sort_area_size parameters, etc.
"All the indexes are working fine" - what is this meant to mean?
If you don't know anything about your execution plan or where the time is spent in the query, how can you judge the effectiveness of the indexes?
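A minimal sketch of the 10046/tkprof workflow described above; the tracefile_identifier value and the trace file name in the tkprof step are illustrative, not from the original post:

```sql
-- Tag the trace file so it is easy to find in the user_dump_dest directory
ALTER SESSION SET tracefile_identifier = 'vendor_query';
-- Enable extended SQL trace: level 12 = level 4 (binds) + level 8 (waits)
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... run the slow vendor query here ...

-- Switch the trace off again
ALTER SESSION SET events '10046 trace name context off';
-- Then format the raw trace on the server (file name is illustrative):
--   tkprof ORCL_ora_12345_vendor_query.trc out.txt sort=exeela,fchela waits=yes
```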
Edited by: Dom Brooks on Nov 21, 2012 3:02 PM -
Consumer/customer relations payment load report
Is there any standard report for consumer/customer relations payment load, country-specific?
Hi Rajan,
There is a lot of scope for performance tuning the code:
1. READ statements should use BINARY SEARCH (on a sorted table)
2. SELECT statements should list fields in the same order as they exist in the table
3. The SELECT WHERE clause should list fields in the same order as they exist in the table
4. The internal tables seem to have a huge number of fields. If possible, please limit the internal tables to only the required fields.
Best regards,
Prashant -
MDBs in 9.1 continue to consume JMS queues even after being deleted
<b>We have an MDB application that reads a batch message off of a JMS queue, archives it in a database, parses the batch message into individual messages and writes them onto other JMS queues to be consumed by another application. Everything was running fine in Weblogic 8.1.5. However, due to problems with XA drivers and the MSDTC(predictable SQL server crashes), we decided to upgrade to Weblogic 9.1 to take advantage of the LLR option.</b>
<b>First, we had an issue where our MDBs were causing the following exception:</b>
<i>####<May 26, 2006 7:42:12 PM EDT> <Error> <JMX> <ist-clft2> <wltest1> <ExecuteThread: '1' for queue: 'default'> <<WLS Kernel>> <> <> <1148686932991> <BEA-149500> <An exception occurred while registering the MBean null.
java.lang.IllegalArgumentException: Registered more than one instance with the same objectName : com.bea:ServerRuntime=wltest1,MessageDrivenEJBRuntime=RhapsodyMDB_DMBModule!JMSServer4@DMB_BEAN_QUEUE,Name=RhapsodyMDB_DMBModule!JMSServer4@DMB_BEAN_QUEUE,ApplicationRuntime=DataBrokerEAR1_2,Type=EJBPoolRuntime,EJBComponentRuntime=DataBrokerEJB new:[email protected] existing weblogic.ejb.container.monitoring.EJBPoolRuntimeMBeanImpl@7db003
at weblogic.management.jmx.ObjectNameManagerBase.registerObject(ObjectNameManagerBase.java:146)
at weblogic.management.mbeanservers.internal.WLSObjectNameManager.lookupObjectName(WLSObjectNameManager.java:133)
at weblogic.management.jmx.modelmbean.WLSModelMBeanFactory.registerWLSModelMBean(WLSModelMBeanFactory.java:86)
at weblogic.management.mbeanservers.internal.RuntimeMBeanAgent$1.registered(RuntimeMBeanAgent.java:104)
at weblogic.management.provider.internal.RegistrationManagerImpl.invokeRegistrationHandlers(RegistrationManagerImpl.java:205)
at weblogic.management.provider.internal.RegistrationManagerImpl.register(RegistrationManagerImpl.java:85)
at weblogic.management.runtime.RuntimeMBeanDelegate.register(RuntimeMBeanDelegate.java:320)
at weblogic.management.runtime.RuntimeMBeanDelegate.<init>(RuntimeMBeanDelegate.java:257)
at weblogic.management.runtime.RuntimeMBeanDelegate.<init>(RuntimeMBeanDelegate.java:222)
at weblogic.ejb.container.monitoring.EJBPoolRuntimeMBeanImpl.<init>(EJBPoolRuntimeMBeanImpl.java:32)
at weblogic.ejb.container.monitoring.MessageDrivenEJBRuntimeMBeanImpl.<init>(MessageDrivenEJBRuntimeMBeanImpl.java:49)
at weblogic.ejb.container.manager.MessageDrivenManager.initialize(MessageDrivenManager.java:503)
at weblogic.ejb.container.manager.MessageDrivenManager.setup(MessageDrivenManager.java:120)
at weblogic.ejb.container.manager.MessageDrivenManager.setup(MessageDrivenManager.java:146)
at weblogic.ejb.container.deployer.MessageDrivenBeanInfoImpl.createMDManager(MessageDrivenBeanInfoImpl.java:1481)
at weblogic.ejb.container.deployer.MessageDrivenBeanInfoImpl.createDDMDManagers(MessageDrivenBeanInfoImpl.java:1378)
at weblogic.ejb.container.deployer.MessageDrivenBeanInfoImpl.onDDMembershipChange(MessageDrivenBeanInfoImpl.java:1285)
at weblogic.jms.common.CDS$DD2Listener.listChange(CDS.java:454)
at weblogic.jms.common.CDSServer$DDHandlerChangeListener.statusChangeNotification(CDSServer.java:167)
at weblogic.jms.dd.DDHandler.callListener(DDHandler.java:318)
at weblogic.jms.dd.DDHandler.callListeners(DDHandler.java:344)
at weblogic.jms.dd.DDHandler.run(DDHandler.java:282)
at weblogic.jms.common.SerialScheduler.run(SerialScheduler.java:37)
at weblogic.work.ExecuteRequestAdapter.execute(ExecuteRequestAdapter.java:21)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)
>
####<May 26, 2006 7:42:13 PM EDT> <Info> <EJB> <ist-clft2> <wltest1> <ExecuteThread: '1' for queue: 'default'> <<WLS Kernel>> <> <> <1148686933069> <BEA-010060> <The Message-Driven EJB: RhapsodyMDB has connected/reconnected to the JMS destination: weblogic.jms.DMB_BEAN_QUEUE.></i>
<b>
Generally this happened after there were cluster communication issues. Multicast messages were lost and our MDB reconnected to the JMS queues, as indicated by the log below:</b>
<i>####<May 30, 2006 5:19:06 PM EDT> <Info> <EJB> <AMTC-RAP-STG3> <RAPBEA1S> <[ACTIVE] ExecuteThread: '54' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1149023946040> <BEA-010060> <The Message-Driven EJB: DataBrokerMDB has connected/reconnected to the JMS destination: weblogic.jms.PHINMS_DMB_QUEUE.>
####<May 30, 2006 5:19:10 PM EDT> <Info> <Cluster> <AMTC-RAP-STG3> <RAPBEA1S> <[ACTIVE] ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1149023950228> <BEA-000112> <Removing RAPBEA3S jvmid:720875810499147484S:cmts-rap-bea3:[7005,-1,-1,-1,-1,-1,-1]:DMBstg:RAPBEA3S from cluster view due to timeout.>
####<May 30, 2006 5:19:11 PM EDT> <Info> <Cluster> <AMTC-RAP-STG3> <RAPBEA1S> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1149023951009> <BEA-000115> <Lost 2 multicast message(s).>
####<May 30, 2006 5:19:11 PM EDT> <Info> <Cluster> <AMTC-RAP-STG3> <RAPBEA1S> <[ACTIVE] ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1149023951040> <BEA-000111> <Adding RAPBEA3S with ID 720875810499147484S:cmts-rap-bea3:[7005,-1,-1,-1,-1,-1,-1]:DMBstg:RAPBEA3S to cluster: DMBstg_cluster view.></i>
<b>
This would cause the queues to eventually have hundreds of consumers and cause the server to fail.
Basically, it seems as though the MDBs that are supposed to stop instead continue and attempt to process, while new threads connect to the JMS queues.
I tried undeploying our application and deleting it from the configuration. However, there were still consumers on the respective queues, and when I sent messages I got a "Class Not Found" exception: the EJB was undeployed and deleted from the configuration, but the MDB component was not, and it continued to listen for messages. In 8.1.5, as soon as the application was undeployed, there were zero consumers on the JMS queues.
I have read the posts about a soon-to-be-released fix that would have the MDBs connect only to the local queues and not go out to the cluster. Would that fix my issue?
Is there something to configure in the deployment descriptor that will cause it to disconnect and not spawn so many consumers on the JMS queues?
Why is it that the number of MDB consumers on the JMS queues stayed static in 8.1.5, but is erratic in 9.1, even after I set our 9.1 server to use the 8.1.5 execute queue policy? Help would be much appreciated.</b>
I recommend contacting customer support. There's a known problem with MDBs listening to distributed destinations that are local to the same cluster as the MDB; your problem may be related (the clue is that the stack trace contains jms.dd.DDHandler.callListeners()). The problem is that the MDB connects to all physical queues in a distributed destination rather than just the local queue.
Tom -
Dear all,
We have 11gR2 RAC Standard Edition installed on Solaris 10. Using
Statspack, we found that the queries with the highest elapsed times are consuming
CPU. We identified the queries, but how do we tune them?
In Enterprise Edition we can use the tuning advisors, but how
do we do the same in Standard Edition?
Kai
Well,
again there is a point to think about here. My experience:
I don't know whether the database was earlier migrated from Enterprise Edition to Standard Edition, or has an Enterprise Edition license.
When I started generating an AWR report, the output came back with empty fields; after some work I found we needed to change the management pack parameter (control_management_pack_access) so that the tuning pack is enabled.
Then I was able to generate the AWR report. But at no point are you asked whether you have an Enterprise Edition or Standard Edition license... ;-)
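For reference, the no-extra-license alternative on Standard Edition is Statspack; a rough sketch of that workflow, assuming Statspack has been installed with the standard spcreate.sql script and you are connected as the PERFSTAT user:

```sql
-- Take a snapshot before and after the workload of interest
EXEC statspack.snap;          -- snapshot 1
-- ... run the workload you want to analyze ...
EXEC statspack.snap;          -- snapshot 2
-- Then generate the report for the interval (prompts for the two snap IDs):
-- SQL> @?/rdbms/admin/spreport.sql
```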
I think we can still generate AWR or ADDM reports with Standard Edition, even though it is a cost option. -
Contention for latches related to the shared pool was consuming significant database time
We are having a performance issue on our database. When I look at the ADDM output, I see that there is contention for latches. Below is the ADDM report.
ADDM Report for Task 'ADDM:1775307360_12808'
Analysis Period
AWR snapshot range from 12807 to 12808.
Time period starts at 10-MAY-11 01.00.15 PM
Time period ends at 10-MAY-11 02.00.23 PM
Analysis Target
Database 'ADVFDWP' with DB ID 1775307360.
Database version 11.1.0.7.0.
ADDM performed an analysis of all instances.
Activity During the Analysis Period
Total database time was 27827 seconds.
The average number of active sessions was 7.71.
Summary of Findings
Description Active Sessions Recommendations
Percent of Activity
1 Shared Pool Latches 6.43 | 83.42 0
2 Top SQL by DB Time 2.41 | 31.24 3
3 "Concurrency" Wait Class 2.18 | 28.22 0
4 PL/SQL Execution 1.53 | 19.86 1
5 "User I/O" wait Class 1.33 | 17.24 0
6 Hard Parse 1.24 | 16.14 0
7 Undersized Buffer Cache .83 | 10.73 0
8 CPU Usage .7 | 9.02 0
9 Top SQL By I/O .31 | 4.04 1
10 Top Segments by I/O .24 | 3.12 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Findings and Recommendations
Finding 1: Shared Pool Latches
Impact is 6.43 active sessions, 83.42% of total activity.
Contention for latches related to the shared pool was consuming significant
database time in some instances.
Instances that were significantly affected by this finding:
Number Name Percent Impact ADDM Task Name
1 ADVFDWP1 99.31 ADDM:1775307360_1_12808
Check the ADDM analysis of affected instances for recommendations.
Finding 2: Top SQL by DB Time
Impact is 2.41 active sessions, 31.24% of total activity.
SQL statements consuming significant database time were found.
Recommendation 1: SQL Tuning
Estimated benefit is 1.07 active sessions, 13.82% of total activity.
Action
Run SQL Tuning Advisor on the SQL statement with SQL_ID "fdk73nhpt93a5".
Related Object
SQL statement with SQL_ID fdk73nhpt93a5.
INSERT INTO SFCDM.F_LOAN_PTFL_MOL_SNPSHT SELECT * FROM
F_LOAN_PTFL_MOL_SNPSHT_STG
Recommendation 2: SQL Tuning
Estimated benefit is 1 active sessions, 12.96% of total activity.
Action
Tune the PL/SQL block with SQL_ID "7nvgzsgy9ydn9". Refer to the "Tuning
PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide and
Reference".
Related Object
SQL statement with SQL_ID 7nvgzsgy9ydn9.
begin
insert into SFCDM.F_LOAN_PTFL_MOL_SNPSHT select * from
F_LOAN_PTFL_MOL_SNPSHT_STG;
end;
Recommendation 3: SQL Tuning
Estimated benefit is .4 active sessions, 5.2% of total activity.
Action
Investigate the SQL statement with SQL_ID "fcvfq2gzmxu0t" for possible
performance improvements.
Related Object
SQL statement with SQL_ID fcvfq2gzmxu0t.
select
a11.DT_YR_MO DT_YR_MO,
a11.IND_SCRTZD IND_SCRTZD,
a13.CD_LNSTAT CD_LNSTAT_INTGRTD,
sum(a11.CNT_LOAN) WJXBFS1,
sum(a11.AMT_PART_EOP_UPB) WJXBFS2,
sum(a11.AMT_LST_VLD_PART_UPB) WJXBFS3
from
SFCDM.F_LOAN_PTFL_MOL_SNPSHT
a11
join
SFCDM.D_DETD_LNSTAT_CURR
a12
on
(a11.ID_CYCL_CLOS_DETD_LNSTAT_SRGT = a12.ID_DETD_LNSTAT_SRGT)
join
SFCDM.D_LNSTAT_CD
a13
on
(a12.ID_LNSTAT_CD_SRGT = a13.ID_LNSTAT_CD_SRGT)
join
SFCDM.D_LOAN_CHARTC_CURR_MINI
a14
on
(a11.ID_LOAN_CHARTC_SRGT = a14.ID_LOAN_CHARTC_SRGT)
where
(a11.DT_YR_MO in (201103)
and a14.CD_SFCRM_LOAN_BUS_LI not in ('L', 'T', 'W')
and a13.CD_LNSTAT in (14)
and not exists
(select * from SFCDM.F_LOAN_PTFL_MOL_SNPSHT s
where s.id_loan_syst_gend = a11.id_loan_syst_gend
and s.dt_yr_mo ...
It is worth checking the actual size of the shared pool, e.g.
select pool,sum(bytes)/1024/1024/1024 from v$sgastat group by pool;
The parameters you have posted suggest you have set a minimum but no maximum, so it could be very large.
Next up is looking for unshared SQL, i.e.
select column1 from some_table where column2='A_VALUE';
select column1 from some_table where column2='Another_Value';
where the code should be using binds instead of literals, for security and performance reasons. A simple way to find this is to look in v$sql for SQL having the same plan_hash_value but different sql_ids, and compare the sql_fulltext of each statement.
Also a possibility is SQL with many child cursors. This is trickier, as the cause may vary and may not be easy to fix. Check the contents of v$sql for SQL that has high values in the child_number column, and investigate the contents of v$sql_shared_cursor for the reason there are multiple child cursors.
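A rough version of those two checks, assuming the usual v$sql columns (the thresholds of 10 and 20 are arbitrary starting points, not part of the original advice):

```sql
-- 1) Candidate unshared SQL: one plan shared by many distinct statements,
--    which usually means literals where binds should be used
SELECT plan_hash_value, COUNT(DISTINCT sql_id) AS stmt_count
FROM   v$sql
GROUP  BY plan_hash_value
HAVING COUNT(DISTINCT sql_id) > 10
ORDER  BY stmt_count DESC;

-- 2) Statements with many child cursors; look the sql_id up in
--    v$sql_shared_cursor to see why the cursors could not be shared
SELECT sql_id, COUNT(*) AS child_count
FROM   v$sql
GROUP  BY sql_id
HAVING COUNT(*) > 20
ORDER  BY child_count DESC;
```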
Chris -
HI Experts,
Can someone help me to tune an Oracle BPM engine 11g (11.1.1.5)?
Any tuning recommendations for a mid-size engine?
(Ideally from someone who really implemented something that gave a significant performance improvement.)
I've tried almost everything, but the engine is really slow with 5 concurrent users.
Please don't just give a URL to the middleware performance tuning guide.
Configuration:
Hardware
Sparc-t3-2
1.65 GHz
2 Core virtual
Memory 16 GB
Setting – Total 2 containers. 1 manage server each, 1 Admin.
Heap space 4 GB, Perm Space – 1gb
Client :
50 Concurrent User.
10 BPMN Process (each process got around 10-12 activity), 8 BPEL Process,
1000-1500 live instances as of now. Need to cater for more in future (around 50,000).
Oracle Support Document 1456488.1 (Slow startup of WebLogic Servers on SPARC Series: T1000, T2000, T5xx0 and T3-*) can be found at:
https://support.oracle.com/epmos/faces/ui/km/DocumentDisplay.jspx?id=1456488.1 -
Solaris 10 Sparc T2000/T1000 slow scp/sftp between hosts
We have 4 x T2000 and 2 x T1000 Sun servers, all running Solaris 10 SPARC edition. When uploading an agent to all these servers, I was unable to maintain a connection faster than 6 MB/s. All our OEL 6.3 hosts on the same VLANs/network/switches are able to copy the same files around at 85 MB/s.
Our entire network is gigabit. We have been troubleshooting along the way and have got down to two Sun hosts on the same switch and VLAN, to rule out firewalling and other factors.
The switch ports on the Cisco show no errors, no errors are seen on the Solaris servers, and the links have trained up at the configured gigabit speed.
So I'm fairly certain there is something OS-wise on the Solaris hosts that is affecting performance and causing some of our backup jobs to be quite long, but I'm not sure what to do now.
I've seen some online articles about tuning and adjusting certain TCP options, but I'm unsure as to what would be best.
Has anyone else experienced issues such as this, and if so, what was done to resolve it? Thanks.
Hi,
this is just a guess, but the T1000 and T2000 are really slow when doing single-thread CPU work. When you do scp, a single thread or process on each machine has to do all the encryption, and this is most likely what is slowing you down. To analyze this further, look at the output of prstat (or top). If I am right, you will see one scp/ssh process using all the CPU time of one of your cores/threads. In a T2000 with 8 cores, 32 threads, this will be shown in prstat as 6.7% utilization.
Bjoern -
Query Tuning Parameters to be considered
Hi
Parameters to be considered for tuning the QUERY Performance.
I have a query like
SELECT C.TARGET_GUID, C.METRIC_GUID, C.STORE_METRIC, C.SCHEDULE, C.COLL_NAME,
       M.METRIC_NAME, M.EVAL_FUNC
FROM   MGMT_METRIC_COLLECTIONS_REP R, MGMT_METRIC_COLLECTIONS C, MGMT_METRICS M
WHERE  C.SUSPENDED = 0
AND    C.IS_REPOSITORY = 1
AND    (C.LAST_COLLECTED_TIMESTAMP IS NULL
        OR C.LAST_COLLECTED_TIMESTAMP + C.SCHEDULE / 1440 < SYSDATE)
AND    C.METRIC_GUID = M.METRIC_GUID
AND    R.TARGET_GUID = C.TARGET_GUID
AND    R.METRIC_GUID = C.METRIC_GUID
AND    R.COLL_NAME = C.COLL_NAME;
Could you please tell me all the parameters to consider for a query to perform well? I also want to know the execution time, disk I/O, wait time, locks held by this SQL, etc. Can I get these details from any views? I also want to find the top 10 resource-consuming SQL statements without going to Enterprise Manager. Kindly let me know how I can find this.
Thanks a lot,
I guess we gave a couple of views to you. The plans for the query can be seen from V$SQL_PLAN. AFAIK there is no view that shows the waits for an individual query; waits are shown for the session/system in V$SESSION_WAIT/V$SYSTEM_EVENT.
SQL statements don't get locked; the underlying objects get locked, which you can see from V$LOCKED_OBJECT. The disk I/O and other characteristics are best shown by tracing the query. You should trace the query with SQL_TRACE or with the 10046 wait event; you will see all these things in the trace file.
About the top 10 queries, I have told you about the Statspack report. If you are on 10g and have a license for the Tuning Pack, then you can use AWR reporting also.
For the sessions which are more resource consuming, you may use the query given in this link.
http://searchoracle.techtarget.com/generic/0,295582,sid41_gci1049265,00.html#
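Without Enterprise Manager, one quick way to list heavy SQL is to query v$sql directly; a sketch (the ordering column and top-10 cutoff are arbitrary choices, not from the linked article):

```sql
-- Top 10 SQL by logical I/O; order by DISK_READS or ELAPSED_TIME instead
-- if those better match what "resource consuming" means for you
SELECT *
FROM  (SELECT hash_value, executions, buffer_gets, disk_reads,
              SUBSTR(sql_text, 1, 80) AS sql_text
       FROM   v$sql
       ORDER  BY buffer_gets DESC)
WHERE rownum <= 10;
```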
HTH
Aman.... -
JAVA execution consumed significant database time
Dear all,
DB 11g on Solaris.
I have a time-consuming package. This package unloads data in the current DB and loads data from another remote DB on the same network. Below is the recommendation from the Oracle ADDM report.
This session is a workflow session. What can I do about "JAVA execution consumed significant database time"?
JAVA execution consumed significant database time.
Recommendation 1: SQL Tuning
Estimated benefit is .5 active sessions, 43.98% of total activity.
Action
Tune the PL/SQL block with SQL_ID "6bd4fvsx8n42v". Refer to the "Tuning
PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide and
Reference".
Related Object
SQL statement with SQL_ID 6bd4fvsx8n42v.
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN OWF_USER.START_METS_REFRESH(SYSDATE );
:mydate := next_date; IF broken THEN :b := 1; ELSE :b := 0; END IF;
END;
Please advise.
Kai
Hi Forstmann,
Thanks for your update.
I have also collected an ADDM report; an extract of the Node 1 report is below:
FINDING 1: 40% impact (22193 seconds)
Cluster multi-block requests were consuming significant database time.
RECOMMENDATION 1: SQL Tuning, 6% benefit (3313 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"59qd3x0jg40h1". Look for an alternative plan that does not use
object scans.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Inter-instance messaging was consuming significant database
time on this instance. (55% impact [30269 seconds])
SYMPTOM: Wait class "Cluster" was consuming significant database
time. (55% impact [30271 seconds])
FINDING 3: 13% impact (7008 seconds)
Read and write contention on database blocks was consuming significant
database time.
NO RECOMMENDATIONS AVAILABLE
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Inter-instance messaging was consuming significant database
time on this instance. (55% impact [30269 seconds])
SYMPTOM: Wait class "Cluster" was consuming significant database
time. (55% impact [30271 seconds])
Any help from your side , please?
Thanks,
Sunand -
Performance tuning, it's urgent, I would appreciate your help
hi
I have a problem with performance tuning.
SELECT stlty
stlnr
stlkn
stpoz
idnrk
FROM stpo
INTO TABLE t_detail
FOR ALL ENTRIES IN t_header
*MOD04{
WHERE stlnr eq t_header-stlnr
AND postp EQ p_postp
AND lkenz NE c_pos
AND stlty NE c_null.
WHERE
stlty IN s_stlty
AND stlnr EQ t_header-stlnr
AND postp EQ p_postp
AND lkenz EQ space.
*}MOD04
*}MOD03
IF sy-subrc <> 0. "t_detail[] IS INITIAL.
MESSAGE s045. "No Data found for the given selection criteria
LEAVE LIST-PROCESSING.
ELSE.
SORT t_detail BY stlnr.
ENDIF. "t_detail[] IS INITIAL.
I am getting a performance problem: the trace displays the fetch request from STPO as 65%.
Do I have to use some other table than STPO? There is no other table; how do I do it? I have even checked the indexes used.
I would be extremely thankful for your valuable answers.
Hi,
check this, you might get some ideas...
Performance Analysis
Interpreting and correcting performance violations
PURPOSE
Each and every rule that results in a performance violation is outlined and explained in detail: what to avoid while coding a program, and how to correct such violations if they exist in the code.
It is a general practice to use Select * from <database>. This statement populates all the fields of the structure from the database.
The effect is many-fold:
It increases the time to retrieve data from the database
There is a large amount of unused data in memory
It increases the processing time from the work area or internal tables
It is always a good practice to retrieve only the required fields. Always use the syntax Select f1 f2 fn from <database>
e.g. Do not use the following statement:-
Data: i_mara like mara occurs 0 with header line.
Data: i_marc like marc occurs 0 with header line.
Select * from mara
Into table i_mara
Where matnr in s_matnr.
Select * from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Instead use the following statement:-
Data: begin of i_mara occurs 0,
Matnr like mara-matnr,
End of i_mara.
Data: begin of i_marc occurs 0,
Matnr like marc-matnr,
Werks like marc-werks,
End of i_marc.
Select matnr from mara
Into table i_mara
Where matnr in s_matnr.
The objective here is to identify all select statements in the program where there is a possibility to restrict the data retrieval from database. It identifies all select statements, which do not use the where condition effectively. It is a good practice to restrict the amount of data as this reduces the data retrieval time, improves the processing time and occupies less memory. Always use as many conditions in the where clause as possible.
e.g. Do not use the following statement:-
Select matnr from mara
Into table i_mara.
Loop at i_mara.
If i_mara-matnr+0(1) <> 'A'.
Delete i_mara index sy-tabix.
Endif.
Endloop.
Instead use the following statement:-
Select matnr from mara
Into table i_mara
Where matnr like 'A%'.
Limit the access to database as far as possible. Accessing the same table multiple times increases the load on the database and may affect other programs accessing the same table. It is a good practice to retrieve all the relevant data from database in a single select statement into an internal table. All the processing should be done with this internal table.
e.g. Do not use the following statement:-
Select vbeln vkorg from vbak
Into table i_vbak
Where vbeln in s_vbeln.
Loop at i_vbak.
Case i_vbak-vkorg.
When 'IJI1'.
Select hkunnr from kunh
Into table i_kunh
Where vkorg = 'IJI1'.
When 'IJI2'.
Select hkunnr from kunh
Into table i_kunh
Where vkorg = 'IJI2'.
When 'IJI3'.
Select hkunnr from kunh
Into table i_kunh
Where vkorg = 'IJI3'.
When 'IJI4'.
Select hkunnr from kunh
Into table i_kunh
Where vkorg = 'IJI4'.
Endcase.
Endloop.
Instead use the following statement:-
Select vbeln vkorg from vbak
Into table i_vbak
Where vbeln in s_vbeln.
Select hkunnr from kunh
Into table i_kunh
Where vkorg in ('IJI1','IJI2','IJI3','IJI4').
Loop at i_vbak.
Case i_vbak-vkorg.
When 'IJI1'.
Read table i_kunh with key vkorg = i_vbak-vkorg.
When 'IJI2'.
Read table i_kunh with key vkorg = i_vbak-vkorg.
When 'IJI3'.
Read table i_kunh with key vkorg = i_vbak-vkorg.
When 'IJI4'.
Read table i_kunh with key vkorg = i_vbak-vkorg.
Endcase.
Endloop.
When the size of the record or the number of records in the internal table is large, doing a linear search is time consuming. It is always a good practice to search a record by binary search (Always Sort the table before doing a binary search). The difference is felt especially in the production environment where the live data is usually huge. As of release 4.0 there are new types of internal tables like SORTED and HASHED which can be effectively used to further reduce the search time and processing time.
e.g. Do not use the following statement:-
Select matnr from mara
Into table i_mara
Where matnr in s_matnr.
Select matnr werks from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Loop at I_mara.
Read table i_marc with key matnr = I_mara-matnr.
Endloop.
Instead use the following statement:-
Select matnr from mara
Into table i_mara
Where matnr in s_matnr.
Select matnr werks from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Sort I_marc by matnr.
Loop at I_mara.
Read table i_marc with key
matnr = I_mara-matnr
binary search.
Endloop.
It is a good practice to search records from internal tables using a binary search. But extreme caution needs to be applied as it may either increase the time or may cause run time termination if it is not sorted.
Always the sort the internal table by the required keys before performing a binary search.
e.g. Do not use the following statement:-
Select matnr from mara
Into table i_mara
Where matnr in s_matnr.
Select matnr werks from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Loop at I_mara.
Read table i_marc with key matnr = I_mara-matnr binary search.
Endloop.
Instead use the following statement:-
Select matnr from mara
Into table i_mara
Where matnr in s_matnr.
Select matnr werks from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Sort I_marc by matnr.
Loop at I_mara.
Read table i_marc with key
matnr = I_mara-matnr
binary search.
Endloop.
It is a general practice to use Read table <itab>. This statement populates all the fields of the structure into the work area.
The effect is many-fold:
It increases the time to retrieve data from the internal table
There is a large amount of unused data in the work area
It increases the processing time from the work area later
It is always a good practice to retrieve only the required fields. Always use the syntax Read table <itab> transporting f1 f2 fn. If just a check is being performed for the existence of a record, use Read table <itab> transporting no fields
e.g. Do not use the following statement:-
data: i_vbak like vbak occurs 0 with header line.
data: i_vbap like vbap occurs 0 with header line.
Loop at i_vbak.
read table i_vbap with key
vbeln = i_vbak-vbeln binary search.
If sy-subrc = 0 and i_vbap-posnr = 00010.
endif.
Endloop.
Instead use the following statement:-
data: i_vbak like vbak occurs 0 with header line.
data: i_vbap like vbap occurs 0 with header line.
Loop at i_vbak.
read table i_vbap transporting posnr with key
vbeln = i_vbak-vbeln binary search.
If sy-subrc = 0 and i_vbap-posnr = 00010.
endif.
Endloop.
There are many ways in which a select statement can be optimized. Effective use of primary and secondary indexes is one of them. Very little attention is paid especially to the secondary indexes. The following points should be noted before writing a select statement:-
Always use the fields in the where clause in the same order as the keys in the database table
Define the secondary indexes in the database for the fields which are most frequently used in the programs
Always try to use the best possible secondary index in the where clause if it exists
Do not have many secondary indexes defined for a table
Use as many keys both primary and secondary as possible to optimize data retrieval
As of release 4.5 it is now possible to define the secondary index in the where clause using %_HINT.
e.g. Do not use the following statement:-
Assuming a secondary index is defined on the field vkorg in table vbak
Select vbeln vkorg from vbak
Into table i_vbak
Where vbeln in s_vbeln.
Loop at i_vbak.
Case i_vbak-vkorg.
When 'IJI1'.
When 'IJI2'.
Endcase.
Endloop.
Instead use the following statement:-
Select vbeln vkorg from vbak
Into table i_vbak
Where vbeln in s_vbeln and
vkorg in ('IJI1','IJI2').
Loop at i_vbak.
Case i_vbak-vkorg.
When 'IJI1'.
When 'IJI2'.
Endcase.
Endloop.
This rule identifies all select statements in the program which use Select ... UP TO 1 ROWS. It is a general practice to use this syntax to retrieve a single record from a database table. If all the primary keys of the table are used in the where clause, it is better practice to use Select SINGLE.
e.g. Do not use the following statement:-
select matnr up to 1 rows from mara
into mara-matnr
where matnr = itab-matnr.
Instead use the following statement:-
select single matnr from mara
into mara-matnr
where matnr = itab-matnr.
Table buffering is used to optimize data access from the database. However, if buffering is used on tables which are updated frequently, it has an adverse effect. The buffer area is refreshed at regular intervals by the system; if a selection is done from such a table before the buffer has been refreshed, values are fetched first from the outdated buffer and then again from the database. This increases the load on the system and also the processing time.
It is always a good practice to bypass the buffer in the select statement for tables which are buffered but updated frequently.
e.g. Do not use the following statement:-
select matnr from mara
into table i_mara
where matnr in s_matnr.
Instead use the following statement:-
select matnr bypassing buffer
from mara
into table i_mara
where matnr in s_matnr.
Most of the performance issues are related to database. It should be an inherent practice to reduce the access to database as far as possible and use it in the most optimal manner.
Just as with select statements, database updates also affect performance. When records are to be inserted into a database table, do it in one go: populate the internal table completely with the records to be inserted, then update the table from the internal table instead of inserting each record within a loop.
e.g. Do not use the following statement:-
Loop at i_mara.
Insert into mara values i_mara.
Endloop.
Instead use the following statement:-
Loop at i_mara.
* prepare the record in the work area, then update the internal table
Modify i_mara.
Endloop.
Insert mara from table i_mara.
The Read statement fetches a record from an internal table using an implicit or an explicit key. When no key or index is specified in the search criteria, the key fields in the work area (header line) of the internal table are used implicitly for the search. SAP recommends explicit search criteria: specify the key or index to be used for reading the record from the internal table. This is faster than an implicit search.
e.g. Do not use the following statement:-
Loop at i_mara.
i_marc-matnr = i_mara-matnr.
read table i_marc.
Endloop.
Instead use the following statement:-
Loop at i_mara.
read table i_marc with key
matnr = i_mara-matnr.
Endloop.
When an internal table is sorted without specifying the key fields, the default sort key (the standard key of the table) is used. This may affect the performance. Always specify the key fields when sorting.
e.g. Do not use the following statement:-
Data : begin of i_mara occurs 0,
ersda like mara-ersda,
ernam like mara-ernam,
laeda like mara-laeda,
aenam like mara-aenam,
vpsta like mara-vpsta,
End of i_mara.
Sort i_mara.
Loop at i_mara.
Endloop.
Instead use the following statement:-
Data : begin of i_mara occurs 0,
ersda like mara-ersda,
ernam like mara-ernam,
laeda like mara-laeda,
aenam like mara-aenam,
vpsta like mara-vpsta,
End of i_mara.
Sort i_mara by ersda aenam.
Loop at i_mara.
Endloop.
Whenever values are passed to a subroutine, assign type declarations to the formal parameters. If no specific type declaration is possible, use TYPE ANY. This improves the performance. It is also recommended by SAP, and untyped parameters are flagged during the extended program check (EPC) of the program.
e.g. Do not use the following statement:-
perform type_test using p_var1 p_var2 p_var3.
form type_test using p_var1 p_var2 p_var3.
endform.
Instead use the following statement:-
perform type_test using p_var1 p_var2 p_var3.
form type_test using p_var1 type c
p_var2 type any
p_var3 like mara-matnr.
endform.
This is similar to the type declarations for subroutines explained in Rule 13, except that the type declarations are maintained for field-symbols. More help is provided by SAP under TYPE ASSIGNMENTS.
e.g. Do not use the following statement:-
field-symbols: <fs1>, <fs2>, <fs3>.
Instead use the following statement:-
field-symbols: <fs1> type c,
<fs2> like mara-matnr,
<fs3> like marc-werks.
The Do Enddo loop does not have a terminating condition; this has to be handled explicitly within the loop construct, which has some effect on the performance. The While Endwhile loop, on the other hand, checks a condition before each pass. Hence it performs better and is also safer to use.
e.g. Do not use the following statement:-
Do.
If count > 20.
Exit.
Endif.
Count = count + 1.
Enddo.
Instead use the following statement:-
While count < 20.
Count = count + 1.
Endwhile.
Do not use the CHECK statement within Loop Endloop. The behaviour of a failed check statement varies with the type of enclosing structure. For example, within loop endloop it moves to the next loop pass, whereas in form endform it terminates the subroutine. Thus the outcome may not be as expected. It is always safer to use If Endif.
e.g. Do not use the following statement:-
Loop at i_mara.
Check i_mara-matnr = 121212.
Endloop.
Instead use the following statement:-
Loop at i_mara.
If i_mara-matnr = 121212.
Endif.
Endloop.
This rule is similar to rule 16, except that it applies to Select Endselect statements. The effect is similar to that in loop endloop.
e.g. Do not use the following statement:-
Select matnr from mara
Into i_mara.
Check i_mara-matnr = 121212.
Endselect.
Instead use the following statement:-
Select matnr from mara
Into i_mara.
If i_mara-matnr = 121212.
Endif.
Endselect.
This comparison is also available in SAP Tips & Tricks. As can be seen there, the time measured for the same logical unit of code using if elseif endif is almost twice that of case endcase.
It is always advisable to use case endcase as far as possible.
e.g. Do not use the following statement:-
IF C1A = 'A'. WRITE '1'.
ELSEIF C1A = 'B'. WRITE '2'.
ELSEIF C1A = 'C'. WRITE '3'.
ELSEIF C1A = 'D'. WRITE '4'.
ELSEIF C1A = 'E'. WRITE '5'.
ELSEIF C1A = 'F'. WRITE '6'.
ELSEIF C1A = 'G'. WRITE '7'.
ELSEIF C1A = 'H'. WRITE '8'.
ENDIF.
Instead use the following statement:-
CASE C1A.
WHEN 'A'. WRITE '1'.
WHEN 'B'. WRITE '2'.
WHEN 'C'. WRITE '3'.
WHEN 'D'. WRITE '4'.
WHEN 'E'. WRITE '5'.
WHEN 'F'. WRITE '6'.
WHEN 'G'. WRITE '7'.
WHEN 'H'. WRITE '8'.
ENDCASE.
Use the Sort statement on the internal table instead of the Order by clause at database level, especially when the number of records is small. When using the Order by clause we need to ensure that it uses an index; this is not a concern with Sort. Also, the performance is significantly better with the Sort statement, especially for a small number of records.
e.g. Do not use the following statement:-
SELECT * FROM SBOOK
WHERE
CARRID = 'LH ' AND
CONNID = '0400' AND
FLDATE = '19950228'
ORDER BY PRIMARY KEY.
ENDSELECT.
Instead use the following statement:-
DATA I_SBOOK LIKE SBOOK OCCURS 0 WITH HEADER LINE.
SELECT * FROM SBOOK
INTO TABLE I_SBOOK
WHERE
CARRID = 'LH ' AND
CONNID = '0400' AND
FLDATE = '19950228'.
SORT I_SBOOK BY CARRID CONNID FLDATE.
It is observed that break points are hard coded in programs during testing. Some are soft break points, some are hard-coded, and some are user-specific. They are left in the program, transported, and cause production problems later. This rule helps you identify all the hard-coded break points left in the program.
This rule is self-explanatory. Hence no examples are provided. -
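Although the rule text itself provides no example, the statements such a rule typically scans for look like the following (an illustrative sketch; the exact patterns depend on the rule's implementation, and the user name is hypothetical):

```abap
* Hard-coded break point: stops execution for every user,
* including in production.
BREAK-POINT.

* User-specific break point: stops execution only for the named
* user (hypothetical name), but should still be removed before
* the program is transported.
BREAK developer01.
```

Session ("soft") break points set in the debugger are not stored in the source code, so only hard-coded statements like the above survive a transport.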
Performance tuning for a select statement consisting of a View and a DB Table
Hi Gurus ,
I have a select query which consumes a lot of time to execute. It selects details from a custom view ZV_MKPF_MSEG (a join of MKPF and MSEG) and the table MARA, as follows:
SELECT GI~BUKRS "company code
GI~WERKS "Plant
GI~BWART "Movement Type
GI~BUDAT "Posting Date
GI~MBLNR "Mat Doc Number
GI~ZEILE "Item
GI~KOSTL "cost center
GI~MENGE "Quantity
GI~SHKZG "Debit credit ind
MA~MATKL MA~MATNR MA~MEINS
INTO CORRESPONDING FIELDS OF TABLE PT_GITAB
FROM ZV_MKPF_MSEG AS GI INNER JOIN MARA AS MA
ON MA~MATNR = GI~MATNR
WHERE GI~BUKRS IN S_BUKRS AND
GI~WERKS IN S_WERKS AND
GI~BUDAT IN S_BUDAT AND
GI~BWART IN S_BWART AND
GI~MBLNR IN S_MBLNR AND
MA~MATKL IN S_MATKL
ORDER BY GI~MBLNR GI~ZEILE.
How should we rewrite this query to improve the performance? And could anybody give some idea about the use of indexes here, since both MSEG and MKPF have indexes attached?
Thanx
Kylie
Edited by: kylietisha on Jun 7, 2010 5:01 PM
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting - post locked
Rob -
Performance tuning--- Automatic workload repository (AWR &ADDM)
hi all,
i am in need of some kind of demo or a step-by-step procedure to implement the Oracle 10g Automatic Workload Repository (AWR) and ADDM.
At least pointing me to a Metalink note would work fine; actually, I need real implementation details.
i will appreciate it if anyone can post their real implementation experience/procedure - that would help me a lot! I have to do this in the next 2 days.
thanks in adv.
This procedure appears to be called by ADDM, which generates a report based on AWR data. How did you "turn off" AWR? Is it from EM?
Here's more information about ADDM
http://www.oracle.com/technology/oramag/oracle/04-may/o34tech_talking.html
The problem is, this shouldn't greatly impact your system performance. This is actually part of Oracle's so-called self-tuning effort. How did you determine that it consumes a lot of memory? -
Cluster multi-block requests were consuming significant database time
Hi,
DB : 10.2.0.4 RAC ASM
OS : AIX 5.2 64-bit
We are facing severe performance issues, and CPU idle time is dropping to 20%. Based on the AWR report, the top 5 events show that the problem is on the cluster side. I have placed the 1st node's AWR report here for your suggestions.
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
PROD 1251728398 PROD1 1 10.2.0.4.0 YES msprod1
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 26177 26-Jul-11 14:29:02 142 37.7
End Snap: 26178 26-Jul-11 15:29:11 159 49.1
Elapsed: 60.15 (mins)
DB Time: 915.85 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 23,504M 23,504M Std Block Size: 8K
Shared Pool Size: 27,584M 27,584M Log Buffer: 14,248K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 28,126.82 2,675.18
Logical reads: 526,807.26 50,105.44
Block changes: 3,080.07 292.95
Physical reads: 962.90 91.58
Physical writes: 157.66 15.00
User calls: 1,392.75 132.47
Parses: 246.05 23.40
Hard parses: 11.03 1.05
Sorts: 42.07 4.00
Logons: 0.68 0.07
Executes: 930.74 88.52
Transactions: 10.51
% Blocks changed per Read: 0.58 Recursive Call %: 32.31
Rollback per transaction %: 9.68 Rows per Sort: 4276.06
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.87 Redo NoWait %: 100.00
Buffer Hit %: 99.84 In-memory Sort %: 99.99
Library Hit %: 98.25 Soft Parse %: 95.52
Execute to Parse %: 73.56 Latch Hit %: 99.51
Parse CPU to Parse Elapsd %: 9.22 % Non-Parse CPU: 99.94
Shared Pool Statistics Begin End
Memory Usage %: 68.11 71.55
% SQL with executions>1: 94.54 92.31
% Memory for SQL w/exec>1: 98.79 98.74
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 18,798 34.2
gc cr multi block request 46,184,663 18,075 0 32.9 Cluster
gc buffer busy 2,468,308 6,897 3 12.6 Cluster
gc current block 2-way 1,826,433 4,422 2 8.0 Cluster
db file sequential read 142,632 366 3 0.7 User I/O
RAC Statistics DB/Inst: PROD/PROD1 Snaps: 26177-26178
Begin End
Number of Instances: 2 2
Global Cache Load Profile
~~~~~~~~~~~~~~~~~~~~~~~~~ Per Second Per Transaction
Global Cache blocks received: 14,112.50 1,342.26
Global Cache blocks served: 619.72 58.94
GCS/GES messages received: 2,099.38 199.68
GCS/GES messages sent: 23,341.11 2,220.01
DBWR Fusion writes: 3.43 0.33
Estd Interconnect traffic (KB) 122,826.57
Global Cache Efficiency Percentages (Target local+remote 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer access - local cache %: 97.16
Buffer access - remote cache %: 2.68
Buffer access - disk %: 0.16
Global Cache and Enqueue Services - Workload Characteristics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg global enqueue get time (ms): 0.6
Avg global cache cr block receive time (ms): 2.8
Avg global cache current block receive time (ms): 3.0
Avg global cache cr block build time (ms): 0.0
Avg global cache cr block send time (ms): 0.0
Global cache log flushes for cr blocks served %: 11.3
Avg global cache cr block flush time (ms): 1.7
Avg global cache current block pin time (ms): 0.0
Avg global cache current block send time (ms): 0.0
Global cache log flushes for current blocks served %: 0.0
Avg global cache current block flush time (ms): 4.1
Global Cache and Enqueue Services - Messaging Statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg message sent queue time (ms): 0.1
Avg message sent queue time on ksxp (ms): 2.4
Avg message received queue time (ms): 0.0
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.0
% of direct sent messages: 6.27
% of indirect sent messages: 93.48
% of flow controlled messages: 0.25
Time Model Statistics DB/Inst: PROD/PROD1 Snaps: 26177-26178
-> Total time in database user-calls (DB Time): 54951s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
sql execute elapsed time 54,618.2 99.4
DB CPU 18,798.1 34.2
parse time elapsed 494.3 .9
hard parse elapsed time 397.4 .7
PL/SQL execution elapsed time 38.6 .1
hard parse (sharing criteria) elapsed time 27.3 .0
sequence load elapsed time 5.0 .0
failed parse elapsed time 3.3 .0
PL/SQL compilation elapsed time 2.1 .0
inbound PL/SQL rpc elapsed time 1.2 .0
repeated bind elapsed time 0.8 .0
connection management call elapsed time 0.6 .0
hard parse (bind mismatch) elapsed time 0.3 .0
DB time 54,951.0 N/A
background elapsed time 1,027.9 N/A
background cpu time 518.1 N/A
Wait Class DB/Inst: PROD/PROD1 Snaps: 26177-26178
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
Cluster 50,666,311 .0 30,236 1 1,335.4
User I/O 419,542 .0 811 2 11.1
Network 4,824,383 .0 242 0 127.2
Other 797,753 88.5 208 0 21.0
Concurrency 212,350 .1 121 1 5.6
Commit 16,215 .0 53 3 0.4
System I/O 60,831 .0 29 0 1.6
Application 6,069 .0 6 1 0.2
Configuration 763 97.0 0 0 0.0
Second node top 5 events are as below,
Top 5 Timed Events
Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 25,959 42.2
db file sequential read 2,288,168 5,587 2 9.1 User I/O
gc current block 2-way 822,985 2,232 3 3.6 Cluster
read by other session 345,338 1,166 3 1.9 User I/O
gc cr multi block request 991,270 831 1 1.4 Cluster
Each node has 95 GB of RAM; the SGA is 51 GB and the PGA is 14 GB.
Any inputs from your side are greatly helpful to me ,please.
Thanks,
Sunand
Hi Forstmann,
Thanks for your update.
I have also collected the ADDM report; an extract of the Node 1 report is below.
FINDING 1: 40% impact (22193 seconds)
Cluster multi-block requests were consuming significant database time.
RECOMMENDATION 1: SQL Tuning, 6% benefit (3313 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"59qd3x0jg40h1". Look for an alternative plan that does not use
object scans.
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Inter-instance messaging was consuming significant database
time on this instance. (55% impact [30269 seconds])
SYMPTOM: Wait class "Cluster" was consuming significant database
time. (55% impact [30271 seconds])
FINDING 3: 13% impact (7008 seconds)
Read and write contention on database blocks was consuming significant
database time.
NO RECOMMENDATIONS AVAILABLE
SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Inter-instance messaging was consuming significant database
time on this instance. (55% impact [30269 seconds])
SYMPTOM: Wait class "Cluster" was consuming significant database
time. (55% impact [30271 seconds])
Any help from your side , please?
Thanks,
Sunand -
Why does our slapd process consume a lot of memory (1.1 GB)?
Our LDAP installation (4.1) on Solaris 5.8 consumes too much memory. It is now at 1.1 GB and rising. I have tuned the dbcachesize and cachesize parameters in the config file, but that has not helped.
I'd appreciate any help I can get in solving this problem. Also, I can't upgrade our installation due to other issues.
Which exact version are you using? There were some memory fixes in the service packs.