No one wants my Performance Tuning experience?
I worked as a DBA for 2 years at an IT major, where my work was confined to performance tuning and monitoring, so I had no opportunity to gain experience with backup & recovery, RMAN, and other core DBA responsibilities.
I had to leave in July last year to pursue another career interest. Unfortunately, that didn't work out.
So, as of January 15th, I have decided to resume my career as a DBA.
I have been to three interviews, but each time my lack of experience in backup & recovery and other core DBA responsibilities has weakened my case.
I am losing confidence in my chances of getting a job. Most (almost 95%) of IT companies want a minimum of 3 years' experience, and that too with backup & recovery.
I have not yet earned the OCA (I have passed only 1Z0-042), so I am planning to take 1Z0-007 to become an OCA.
My questions to all:
1. How much value does the OCA hold in the market, along with 2 years' experience? (I am based in India, more specifically Pune.)
2. How can I get practical experience of backup & recovery at home?
3. I don't have the finances for the training part of the OCP, so how can I make my resume look strong?
4. Doesn't any company want a DBA who hasn't had experience in backup & recovery? Is performance tuning & monitoring experience such a waste?
Cheers,
Kunwar
user12591638 wrote:
I worked as a DBA for 2 years at an IT major, where my work was confined to performance tuning and monitoring, so I had no opportunity to gain experience with backup & recovery, RMAN, and other core DBA responsibilities.
I had to leave in July last year to pursue another career interest. Unfortunately, that didn't work out.
So, as of January 15th, I have decided to resume my career as a DBA.
I have been to three interviews, but each time my lack of experience in backup & recovery and other core DBA responsibilities has weakened my case.
I am losing confidence in my chances of getting a job. Most (almost 95%) of IT companies want a minimum of 3 years' experience, and that too with backup & recovery.
I have not yet earned the OCA (I have passed only 1Z0-042), so I am planning to take 1Z0-007 to become an OCA.
OK. India has produced a lot of DBAs over the past few years, so I suspect the supply/demand situation is not in your favour, especially given the global economic climate.
And because of the career break, employers will often have other candidates who look stronger, and they will have certifications.
So you've been looking for a month and got 3 interviews... that's actually not bad.
My questions to all:
1. How much value does the OCA hold in the market, along with 2 years' experience? (I am based in India, more specifically Pune.)
In general, not a lot. However, the only thing stopping you getting a 10g DBA OCA is an SQL exam pass, and if you can't pass that exam then you don't deserve the job. Given that you've passed 1Z0-042, an employer will ask why you haven't passed the SQL exam (especially if you're into tuning and performance). People in India seem to love 1Z0-007... I prefer to see people who have taken 1Z0-051 or 1Z0-047. However, I suspect you cannot afford to spend the extra time on 1Z0-047, as you need to concentrate on backup/recovery.
2. How can I get practical experience of backup & recovery at home?
Hopefully you've got kit to practice on.
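That home kit is all you need for backup & recovery drills. As a minimal sketch (the connection step and the deliberate-damage step are illustrative, not from the thread), an RMAN practice session on a throwaway test database might look like:

```
-- From the OS shell, connect RMAN to a disposable test instance:
--   rman target /

CONFIGURE CONTROLFILE AUTOBACKUP ON;    -- keep a recoverable control file copy
BACKUP DATABASE PLUS ARCHIVELOG;        -- full backup including archived redo

-- Now break something on purpose (e.g. remove a datafile at OS level),
-- then practise the repair:
STARTUP MOUNT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN;
```

Repeating drills like this for a lost datafile, a lost control file, and point-in-time recovery covers much of what 1Z0-043 and interview questions probe.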
3. I don't have the finances for the training part of the OCP, so how can I make my resume look strong?
Resumes are not my best area. However, in your case, studying for and passing 1Z0-043 will help: 1Z0-043 is 50% backup and recovery, so it has synergy with your weak area. If finance is an issue, consider a WDP course if you can get one. Some eligible courses are cheaper than others, but even those may be beyond reach.
4. Doesn't any company want a DBA who hasn't had experience in backup & recovery? Is performance tuning & monitoring experience such a waste?
Obviously performance tuning and monitoring is not a waste... but remember that Oracle is increasingly attempting to automate these areas in later versions, and different techniques are available in 11gR2 compared to 9i.
Cheers,
Kunwar
You might find RMAN Recipes for Oracle Database 11g: A Problem-Solution Approach (ISBN-13: 978-1-59059-851-1) a little useful.
Rgds bigdelboy. (I've been composing this over a 2-hour period with several distractions and have run out of time, so some bits might not make sense... I may even have said something silly.)
Similar Messages
-
Oracle Application Performance Tuning
Hello all,
Please forgive me if I am asking this in the wrong section.
I am a software engineer with 2 years of experience. I have been working as a performance tuning resource for Oracle EBS in my company; the work mostly involves SQL tuning, dealing with indexes, etc. In this time I took up an interest in DBA work, and I completed my OCA in Oracle 10g. Now my company is giving me an opportunity to move into an Application DBA team, and I am totally confused about it.
Becoming an Apps DBA would mean that the effort I put into the 10g certification will be of no use; there are other papers for Apps DBA certification. Should I stay put in performance tuning and wait for a core DBA chance, or should I accept the Apps DBA opportunity?
Also, does my experience in SQL performance tuning hold any value as such, without exposure to DBA activities?
Kindly guide me in this regard, as I am very confused at this juncture.
Regards,
Jithin
Jithin wrote:
Hello bigdelboy, thank you for your reply.
Yes, by Oracle Apps DBA I meant Oracle EBS.
Clearing 1Z0-046 is an option. However, my doubt is: will clearing both the Admin I and Admin II exams be of any help without practical experience? The EBS DBA work involves support, patching, cloning, new node setup, etc.
Also, is my performance tuning experience going to help me in any way on the journey forward as a DBA / EBS DBA?
Thank you for your valuable time.
Regards,
Jithin Sarath
The way I read it is this:
People notice you are capable of understanding and performance-tuning SQL. You must have some knowledge of Oracle EBS, and in fact you also have a DBA OCA.
So there is a 98%+ chance you can be made into a good Oracle Apps DBA (and core DBA). If I were in their position, I'd want you on my team too; this is the way we used to bring on DBAs in the old days before certification, and it still has a lot of merit. OK, you can only do limited tasks at first... but the number of tasks you can do will increase dramatically over a number of months.
I would imagine the Oracle Apps DBAs will be delighted to have someone on board who can performance-tune SQL, which can sometimes seem more like an art than a science. The patching etc. can be taught in small doses. If they are a good team they won't try to give you everything at once; they'll get you to learn one procedure at a time.
And remember: if in doubt, ask; and if you don't understand, ask again. Be safe rather than sorry on actions that could be critical.
If you're worried about Linux, Linux Recipes for Oracle DBAs (Apress) might be a good book to read, though it could be expensive in India.
Likewise, Oracle Applications DBA Field Guide may suit, and even RMAN Recipes for Oracle Database 11g: A Problem-Solution Approach may have some value if you need to get up to speed quickly in those areas.
These are not perfect, but they sometimes consolidate the information neatly; however, only buy them if you feel they are cheap enough. You may well buy them and feel disappointed. (These all happen to be by Apress, who have produced a number of good books... they've also published some rubbish.)
And go over the 2-day DBA examples and the Linux install examples on OTN as well... they are free, compared to the books I mentioned.
Rgds -bigdelboy. -
Hyperion EPM 11.1.2 Performance tuning
Hi
Can anyone advise on performance tuning of Hyperion 11.1.2 Planning, BI+ Workspace, and EAS? We are having issues with slow-running web forms and FR reports, calcs and business rules taking a long time to open and run, and logins taking a long time.
Thanks
Per CL's point, sizing servers depends on experience. Worst case, you can refer to the Oracle reference diagrams at: http://download.oracle.com/docs/cd/E17236_01/epm.1112/epm_install_start_here_11121/frameset.htm?ch04s02s04.html
On the 8 GB of RAM: it depends on what you have on the server. If you review the 100-user Planning reference configuration, you will note it uses 3 servers; so if you were to install everything on one machine, in simple terms add up the number of cores and the amount of memory on each, and that would be a 10-core machine with 32 GB of RAM.
Regarding virtual servers -- it really depends on the physical servers backing them. I've seen virtual work very well and really awfully. The really awful cases happened to be running on 4-year-old physical servers; the really good cases were on brand-new hardware dedicated to the environment. The newness of the infrastructure, and the management of it, make a world of difference. You want modern servers and people who know how to handle virtual machines for applications that actually perform work. Sometimes IT wants to give you bare minimums and make you prove your application is using them -- which is hard to do until you let all your users onto the application, at which point you have been given servers that are too small, and your users are not thrilled about the lack of performance. That can impede adoption, especially if they are moving from an Excel-centric world where they had a dedicated set of processors, and IT gives you one virtual server that is less powerful than one user's desktop.
Good luck,
John A. Booth
http://www.metavero.com -
Privileges for performance tuning
Can we assign any specific privileges to a DBA for performance tuning? I don't want to give the SYSDBA and SYSOPER privileges to the DBA.
The question should be why there is a need to tune anything in the Oracle database at all. What is the problem? Have you defined the problem in a very clear manner? Is it really a problem, or something else? Which performance tuning tool are you going to use, and why?
Performance tuning is only two words, but it is a very big topic; a topic on which 100 books and blogs have been written, and the writing still goes on.
What are you trying to tune, why, and what ORA error did you get? As such, there is no ready-made role in Oracle called something like "PT_ROLE". We just issue SQL against the dictionary objects, and as and when we get a privilege-related error, we grant SELECT on those SYS objects to the user, if he really needs it.
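A sketch of that approach (the role and grantee names here are invented for illustration; the privileges themselves are standard Oracle ones):

```sql
-- Grant read access to the dictionary and V$ views rather than SYSDBA:
CREATE ROLE tuning_role;                     -- hypothetical role name
GRANT SELECT_CATALOG_ROLE TO tuning_role;    -- DBA_* / V$* dictionary views
GRANT ADVISOR TO tuning_role;                -- allows running the SQL Tuning/Access advisors
GRANT SELECT ANY DICTIONARY TO tuning_role;  -- SYS-owned objects, only if really needed
GRANT tuning_role TO scott;                  -- hypothetical tuning user
```

This keeps the tuning user read-only against the dictionary, which is usually all that diagnosis needs.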
Regards
Girish Sharma -
I want to do 1Z0-054 11g Performance Tuning
I want to do 1Z0-054, 11g Performance Tuning.
For this, first 1Z0-007, 1Z0-047, or 1Z0-051 is needed, then 1Z0-052 and 1Z0-054.
1) Which of 1Z0-007, 1Z0-047, or 1Z0-051 is best?
2) Recommended books? Where will I get course materials?
Thanks
Harsh857317 wrote:
I want to do 1Z0-054, 11g Performance Tuning.
For this, first 1Z0-007, 1Z0-047, or 1Z0-051 is needed, then 1Z0-052 and 1Z0-054.
1) Which of 1Z0-007, 1Z0-047, or 1Z0-051 is best?
2) Recommended books? Where will I get course materials?
Thanks
Harsh
I always think it is useful to be precise about these things:
You can take the 1Z0-054 11g Performance Tuning exam whenever you like. Simply book it, turn up, and take it.
In the event you pass, you've passed the exam! If you don't, it is an expensive retake.
However .....
However, even having passed the exam, you would not be an 'Oracle Database 11g Performance Tuning Certified Expert' until Oracle confirms that certification to you, and until you have satisfied all the requirements Oracle has detailed here:
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=287
This indicates one of two additional prerequisites beyond the exam pass:
Either (1): a prior DBA 11g certification.
Or (2): verification of your attendance at the Oracle University Oracle Database 11g: Performance Tuning training course.
Above, you have more or less indicated the 11g DBA OCP credential requirements, omitting the mandatory training requirement ( http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=198 ).
The journey to 'Oracle Database 11g Performance Tuning Certified Expert' is a long one, and I suggest most people look at the basics first. Begin by becoming familiar with http://certification.oracle.com.
If you wish to continue, I would consider focusing on 1Z0-051 as a first exam, or possibly finding suitable Workforce Development Program training (assuming Oracle University is too expensive). Please be aware that, IMHO, Oracle is not responsible for WDP institutes, and I suspect some are very bad.
https://workforce.oracle.com/pls/wdp/new_home.main -
Hello, I want to perform a simple task. I have two Adobe ID accounts linked to two different email addresses. One account I do not want to use anymore, but it is linked to my main email address; that unwanted account is still in its free stage, and I have not purchased anything with it. My other Adobe ID account is linked to an email address I rarely use and don't particularly want to use. I have tried changing its linked email address to the one I regularly use, but it does not allow me, because that address is already linked to the other, obsolete Adobe ID account. Is there any solution to this? Please help.
Adobe contact information - http://helpx.adobe.com/contact.html may help
-
I want the latest TestKing for 1Z0-033 Performance Tuning
Can anybody send me the latest TestKing for 1Z0-033, Performance Tuning, to [email protected]?
Take a look here below :
Oracle Certification Program Candidate
Nicolas. -
Performance tuning of BPEL processes in SOA Suite 11g
Hi,
We are working with a customer on performance tuning of SOA Suite 11g; one of the areas is tuning the BPEL processes. I am new to this and started out by stress testing a Hello World process using the SoapUI tool. I would like help with the topics below:
1. How do I interpret the statistics collected during stress testing? Do we have any benchmark set that can indicate that the performance is OK?
2. Do we need to run stress tests for every BPEL process deployed?
3. Is there any performance tuning strategy documentation available? Or can anybody share his/her experiences to guide me?
Thanks in advance!
Sritama
1. How do I interpret the statistics collected during stress testing? Do we have any benchmark set that can indicate that the performance is OK?
You need to pay attention to:
java heap usage vs java heap capacity
java eden usage vs java eden capacity
JDBC pool initial connections vs JDBC pool capacity connections
if you are using linux: top
if you are using aix: topas
2. Do we need to run stress tests for every BPEL process deployed?
Yes, you need to test each BPEL process. You can use the JMeter tool.
Download JMeter from here: Apache JMeter
Other tools:
jstat
jstack
jps -v
Enterprise Manager
WebLogic Console
VisualVM
JRockit Mission Control
3. Is there any performance tuning strategy documentation available? Or can anybody share his/her experiences to guide me?
I recommend "Oracle SOA Suite 11g Performance Tuning Cookbook" http://www.amazon.com/Oracle-Suite-Performance-Tuning-Cookbook/dp/1849688842/ref=sr_1_1?ie=UTF8&qid=1378482031&sr=8-1&keywords=oracle+soa+suite+11g+performance+tuning+cookbook -
Hi. I procured an N70 last week. I installed lots of software and uninstalled some; now I feel it's slower than before. Please tell me how to do performance tuning on my mobile. Message edited by vasanthtt on 13-Sep-2006
06:44 AM
Make a phone backup and format the phone, then restore and install only the applications you want; slowing down after installs is not normal. Check with FExplorer whether you have processes running in the background.
-
Performance tuning in BPEL and ESB
Hi,
Can anyone tell me how to do performance tuning in BPEL and ESB?
How do we create web services in BPEL?
Hi,
Performance tuning in BPEL and ESB:
This is a very big topic; I can give you 2 points here.
In BPEL we should avoid the use of duplicate variables. The best way to do this is, whenever we are creating a new variable, to ask whether we can reuse an existing variable from inside the process; for example, when creating the input/output variables for an Invoke activity, we should check whether an existing variable can be used instead of creating a new one.
All the DB-related operations should be performed in one single composite.
How do we create web services in BPEL?
Not sure what you want to ask here, as a BPEL process is itself exposed as a web service.
-Yatan
Performance Tuning Query on Large Tables
Hi All,
I am new to the forums and have a very specific use case which requires performance tuning, but there are limitations on what changes I am actually able to make to the underlying data. Essentially I have two tables which contain what should be identical data but, for reasons of a less-than-optimal operational nature, the datasets differ in a number of ways.
Essentially I am querying call detail record data. Table 1 (referred to in my test code as TIME_TEST) is what I want to consider the master data, or the "ultimate truth" if you will. Table 1 contains the CALLED_NUMBER, which is always in a consistent format, along with the CALLED_DATE_TIME and DURATION (in seconds).
Table 2 (TIME_TEST_COMPARE) is a reconciliation table taken from a different source, and there are no consistent unique identifiers or PK-FK relations. This table contains a wide array of differing CALLED_NUMBER formats, hugely different from those in the master table. There is also scope for the timestamp to be out by up to 30 seconds (crazy, I know, but that's just the way it is, and I have no control over the source of this data). Finally, the duration (in seconds) can be out by up to 5 seconds +/-.
I want to create a join returning all of the master data, matching the master table to the reconciliation table on CALLED_NUMBER / CALL_DATE_TIME / DURATION. I have written the query, which works from a logic perspective, but it performs very badly (master table = 200,000 records, reconciliation table = 6,000,000+ records). I am able to add partitions (currently the tables are partitioned by month of CALL_DATE_TIME) and can also apply indexes. I cannot make any changes at this time to the ETL process loading the data into these tables.
I paste below the create table and insert scripts to recreate my scenario, and the query that I am using. Any practical suggestions for query/table optimisation would be greatly appreciated.
Kind regards
Mike
-------------- NOTE: ALL DATA HAS BEEN DE-SENSITISED
/* --- CODE TO CREATE AND POPULATE TEST TABLES ---- */
--CREATE MAIN "TIME_TEST" TABLE: THIS TABLE HOLDS CALLED NUMBERS IN A SPECIFIED/PRE-DEFINED FORMAT
CREATE TABLE TIME_TEST ( CALLED_NUMBER VARCHAR2(50 BYTE),
CALLED_DATE_TIME DATE, DURATION NUMBER );
COMMIT;
-- CREATE THE COMPARISON TABLE "TIME_TEST_COMPARE": THIS TABLE HOLDS WHAT SHOULD BE (BUT ISN'T) IDENTICAL CALL DATA.
-- THE DATA CONTAINS DIFFERING NUMBER FORMATS, SLIGHTLY DIFFERENT CALL TIMES (ALLOW +/-60 SECONDS - THIS IS FOR A GOOD, ALBEIT UNHELPFUL, REASON)
-- AND DURATIONS (ALLOW +/- 5 SECS)
CREATE TABLE TIME_TEST_COMPARE ( CALLED_NUMBER VARCHAR2(50 BYTE),
CALLED_DATE_TIME DATE, DURATION NUMBER );
COMMIT;
--CREATE INSERT DATA FOR THE MAIN TEST TIME TABLE
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 06:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 202);
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 08:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 19);
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 07:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 35);
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 09:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 30);
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 06:18:47 AM', 'MM/DD/YYYY HH:MI:SS AM'), 6);
INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 06:20:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 20);
COMMIT;
-- CREATE INSERT DATA FOR THE TABLE WHICH NEEDS TO BE COMPARED:
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'7721345675', TO_DATE( '11/09/2011 06:10:51 AM', 'MM/DD/YYYY HH:MI:SS AM'), 200);
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'00447721345675', TO_DATE( '11/09/2011 08:10:59 AM', 'MM/DD/YYYY HH:MI:SS AM'), 21);
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'07721345675', TO_DATE( '11/09/2011 07:11:20 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'+447721345675', TO_DATE( '11/09/2011 09:10:01 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'+447721345675#181345', TO_DATE( '11/09/2011 06:18:35 AM', 'MM/DD/YYYY HH:MI:SS AM')
, 6);
INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
DURATION ) VALUES (
'004477213456759777799', TO_DATE( '11/09/2011 06:19:58 AM', 'MM/DD/YYYY HH:MI:SS AM')
, 17);
COMMIT;
/* --- QUERY TO UNDERTAKE MATCHING WHICH REQUIRES OPTIMISATION --------- */
SELECT MAIN.CALLED_NUMBER AS MAIN_CALLED_NUMBER, MAIN.CALLED_DATE_TIME AS MAIN_CALL_DATE_TIME, MAIN.DURATION AS MAIN_DURATION,
COMPARE.CALLED_NUMBER AS COMPARE_CALLED_NUMBER,COMPARE.CALLED_DATE_TIME AS COMPARE_CALLED_DATE_TIME,
COMPARE.DURATION COMPARE_DURATION
FROM
( SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
FROM TIME_TEST
) MAIN
LEFT JOIN
( SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
FROM TIME_TEST_COMPARE
) COMPARE
ON INSTR(COMPARE.CALLED_NUMBER,MAIN.CALLED_NUMBER) <> 0
AND MAIN.CALLED_DATE_TIME BETWEEN COMPARE.CALLED_DATE_TIME-(60/86400) AND COMPARE.CALLED_DATE_TIME+(60/86400)
-- DURATION is in seconds, so the +/- 5 second tolerance applies directly (not scaled by 86400),
-- and the comparison must be against COMPARE.DURATION, not MAIN.DURATION against itself
AND MAIN.DURATION BETWEEN COMPARE.DURATION-5 AND COMPARE.DURATION+5;
What does your execution plan look like?
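To capture that plan, the standard tooling is EXPLAIN PLAN plus DBMS_XPLAN; a sketch using the poster's table names (the SELECT list is abbreviated for illustration):

```sql
EXPLAIN PLAN FOR
SELECT MAIN.CALLED_NUMBER, COMPARE.CALLED_NUMBER
FROM TIME_TEST MAIN
LEFT JOIN TIME_TEST_COMPARE COMPARE
  ON INSTR(COMPARE.CALLED_NUMBER, MAIN.CALLED_NUMBER) <> 0
 AND MAIN.CALLED_DATE_TIME BETWEEN COMPARE.CALLED_DATE_TIME - (60/86400)
                               AND COMPARE.CALLED_DATE_TIME + (60/86400);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Expect the plan to show a full scan of TIME_TEST_COMPARE: a join predicate wrapped in INSTR() cannot use a plain index on CALLED_NUMBER, so the usual fix is to normalise both numbers to a common form (e.g. the trailing digits) in an indexed expression or derived column before joining.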
-
Reg: Process Chain, query performance tuning steps
Hi All,
I came across a question like this: there is a process chain of 20 processes, of which 5 completed; at the 6th step an error occurred which cannot be rectified, and I should start the chain again from the 7th step. If I go to a particular step I can run that particular step, but how can I restart the entire chain from step 7? I know that I need to use a function module, but I don't know the name of the FM. Please, somebody help me out.
Please let me know the steps involved in query performance tuning and aggregate tuning.
Thanks & Regards
Omkar.K
Hi,
Process Chain
Method 1 (when it fails in a step/request)
/people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
How is it possible to restart a process chain at a failed step/request?
Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
You need to set the failed request/step to green in the database as well as you need to raise the event that will force the process chain to run to the end from the next request/step on.
Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
In the opened popup click on the tab 'Chain'.
In a parallel session goto transaction se16 for table rspcprocesslog and display the entries with the following selections:
1. copy the variant from the popup to the variante of table rspcprocesslog
2. copy the instance from the popup to the instance of table rspcprocesslog
3. copy the start date from the popup to the batchdate of table rspcprocesslog
Press F8 to display the entries of table rspcprocesslog.
Now open another session and goto transaction se37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the fm in test mode.
Now copy the entries of table rspcprocesslog to the input parameters of the function module like described as follows:
1. rspcprocesslog-log_id -> i_logid
2. rspcprocesslog-type -> i_type
3. rspcprocesslog-variante -> i_variant
4. rspcprocesslog-instance -> i_instance
5. enter 'G' for parameter i_state (sets the status to green).
Now press F8 to run the fm.
Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
Query performance tuning
General tips
Using aggregates and compression.
Using less and complex cell definitions if possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs (restricted and calculated key figures).
3. Avoid too many characteristics in the rows.
By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try
1. Use the different parameters in ST03 to see the two important figures: the aggregation ratio, and the records transferred to the front end vs. the records selected from the database.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
3. The +/- signs are the valuation of the aggregate; e.g. -3 is a valuation of the aggregate's design and usage. ++ means that its compression is good and it is accessed a lot (in effect, performance is good); -- means the compression ratio and access are not so good (performance is not so good). The greater the number of plus signs, the more useful the aggregate and the more queries it satisfies; the greater the number of minus signs, the worse the evaluation of the aggregate.
If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether it hit any aggregates while running; if it does not show any aggregates, you might want to redesign your aggregates for the query.
Your query performance can also depend on the selection criteria; since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X, or H); the advisable read mode is X.
5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
This is done by implementing the BW Statistics Business Content: you need to install it, feed it data, and then analyse via the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
Thanks,
JituK -
Performance Tuning for ECC 6.0
Hi All,
I have an ECC 6.0EP 7.0(ABAPJAVA). My system is very slow. I have Oracle 10.2.0.1.
Can you please guide me how to do these steps in the sytem
1) Reorganization should be done at least for the top 10 huge tables
and their indexes
2) Unaccessed data can be taken out by SAP Archiving
3) Apply the relevant corrections for the top SAP standard objects
4) CBO statistics must be up to date for all SAP and customer objects
I have never done performance tuning and want to do it on this system.
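For point 4, optimizer statistics can be refreshed with DBMS_STATS. A minimal sketch follows; the schema name SAPSR3 is an assumption (check your actual SAP schema), and note that on SAP systems statistics are normally driven via BRCONNECT rather than called by hand:

```sql
-- Gather fresh CBO statistics for the SAP schema (schema name assumed):
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SAPSR3',                      -- assumed; verify first
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,   -- let Oracle choose the sample
    cascade          => TRUE                           -- gather index statistics too
  );
END;
/
```

On a live SAP system, prefer the SAP-supported path (BRCONNECT update statistics, scheduled via DB13), since SAP maintains exceptions for tables that must not be analysed.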
Regards,
Jitender
Hi,
Below are the details from ST06. Please suggest what I should do; the system performance is very bad.
I require your inputs for performance tuning
CPU (2 CPUs)
Utilization: user 3%, system 3%, idle 1%, I/O wait 93%
Load average: 0.11 (1 min), 0.21 (5 min), 0.22 (15 min)
System calls/s: 982; Context switches/s: 1,752; Interrupts/s: 4,528
Memory
Physical memory available: 6,291,456 KB; Physical memory free: 93,992 KB
Pages in/s: 473 (3,784 KB/s paged in); Pages out/s: 211 (1,688 KB/s paged out)
Pool
Configured swap: 26,869,896 KB; Maximum swap space: 26,869,896 KB
Free swap space: 21,631,032 KB; Actual swap space: 26,869,896 KB
Disk with highest response time
Name: md3; Response time: 51 ms; Utilization: 2; Queue: 0
Avg wait time: 0 ms; Avg service time: 51 ms
KB transferred/s: 2; Operations/s: 0
Current parameters in the system
System: sapretail_RET_01 Profile Parameters for SAP Buffers
Date and Time: 08.01.2009 13:27:54
Buffer Name Comment
Profile Parameter Value Unit Comment
Program buffer PXA
abap/buffersize 450000 kB Size of program buffer
abap/pxa shared Program buffer mode
|
CUA buffer CUA
rsdb/cua/buffersize 3000 kB Size of CUA buffer
The number of max. buffered CUA objects is always: size / (2 kB)
|
Screen buffer PRES
zcsa/presentation_buffer_area 4400000 Byte Size of screen buffer
sap/bufdir_entries 2000 Max. number of buffered screens
|
Generic key table buffer TABL
zcsa/table_buffer_area 30000000 Byte Size of generic key table buffer
zcsa/db_max_buftab 5000 Max. number of buffered objects
|
Single record table buffer TABLP
rtbb/buffer_length 10000 kB Size of single record table buffer
rtbb/max_tables 500 Max. number of buffered tables
|
Export/import buffer EIBUF
rsdb/obj/buffersize 4096 kB Size of export/import buffer
rsdb/obj/max_objects 2000 Max. number of objects in the buffer
rsdb/obj/large_object_size 8192 Bytes Estimation for the size of the largest object
rsdb/obj/mutex_n 0 Number of mutexes in Export/Import buffer
|
OTR buffer OTR
rsdb/otr/buffersize_kb 4096 kB Size of OTR buffer
rsdb/otr/max_objects 2000 Max. number of objects in the buffer
rsdb/otr/mutex_n 0 Number of mutexes in OTR buffer
|
Exp/Imp SHM buffer ESM
rsdb/esm/buffersize_kb 4096 kB Size of exp/imp SHM buffer
rsdb/esm/max_objects 2000 Max. number of objects in the buffer
rsdb/esm/large_object_size 8192 Bytes Estimation for the size of the largest object
rsdb/esm/mutex_n 0 Number of mutexes in Exp/Imp SHM buffer
|
Table definition buffer TTAB
rsdb/ntab/entrycount 20000 Max. number of table definitions buffered
The size of the TTAB is nearly 100 bytes * rsdb/ntab/entrycount
|
Field description buffer FTAB
rsdb/ntab/ftabsize 30000 kB Size of field description buffer
rsdb/ntab/entrycount 20000 Max. number / 2 of table descriptions buffered
FTAB needs about 700 bytes per used entry
|
Initial record buffer IRBD
rsdb/ntab/irbdsize 6000 kB Size of initial record buffer
rsdb/ntab/entrycount 20000 Max. number / 2 of initial records buffered
IRBD needs about 300 bytes per used entry
|
Short nametab (NTAB) SNTAB
rsdb/ntab/sntabsize 3000 kB Size of short nametab
rsdb/ntab/entrycount 20000 Max. number / 2 of entries buffered
SNTAB needs about 150 bytes per used entry
|
Calendar buffer CALE
zcsa/calendar_area 500000 Byte Size of calendar buffer
zcsa/calendar_ids 200 Max. number of directory entries
|
Roll, extended and heap memory EXTM
ztta/roll_area 3000000 Byte Roll area per workprocess (total)
ztta/roll_first 1 Byte First amount of roll area used in a dialog WP
ztta/short_area 3200000 Byte Short area per workprocess
rdisp/ROLL_SHM 16384 8 kB Part of roll file in shared memory
rdisp/PG_SHM 8192 8 kB Part of paging file in shared memory
rdisp/PG_LOCAL 150 8 kB Paging buffer per workprocess
em/initial_size_MB 4092 MB Initial size of extended memory
em/blocksize_KB 4096 kB Size of one extended memory block
em/address_space_MB 4092 MB Address space reserved for ext. mem. (NT only)
ztta/roll_extension 2000000000 Byte Max. extended mem. per session (external mode)
abap/heap_area_dia 2000000000 Byte Max. heap memory for dialog workprocesses
abap/heap_area_nondia 2000000000 Byte Max. heap memory for non-dialog workprocesses
abap/heap_area_total 2000000000 Byte Max. usable heap memory
abap/heaplimit 40000000 Byte Workprocess restart limit of heap memory
abap/use_paging 0 Paging for flat tables used (1) or not (0)
|
Statistic parameters
rsdb/staton 1 Statistic turned on (1) or off (0)
rsdb/stattime 0 Times for statistic turned on (1) or off (0)
Regards,
Jitender -
Dear All,
In our project we are facing a lot of performance problems; users are complaining about the poor performance of a few reports. We are in the process of fine-tuning the reports by following the methods/suggestions provided by SAP (such as removing SELECT queries from loops, FOR ALL ENTRIES, binary search, etc.).
But I would still like to know from you what we can check from the BASIS perspective (all the settings) and also from the ABAP perspective to improve the performance.
I also have one more query: what is "table statistics", and what is it used for?
Please give us your valuable suggestions for improving the performance.
Thanks in advance!
Hi
<b>Ways of Performance Tuning</b>
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one Internal table
<b>Selection Criteria</b>
1. Restrict the data in the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
2. Select with selection list.
<b>Points # 1/2</b>
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much better optimized by the code written below, which avoids CHECK and selects with a selection list:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
<b>Select Statements Select Queries</b>
1. Avoid nested selects.
2. Select all the records in a single shot using the INTO TABLE clause of the SELECT statement rather than using APPEND statements.
3. When a base table has multiple indexes, the WHERE clause should be in the order of the index, either a primary or a secondary index.
4. For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
5. Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
<b>Point # 1</b>
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
<b>Point # 2</b>
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much better optimized by the code written below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
<b>Point # 3</b>
To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index that has the same order of fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables.
<b>Point # 4</b>
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
<b>Point # 5</b>
If all primary key fields are supplied in the WHERE condition, you can even use SELECT SINGLE.
SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two.
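A minimal sketch of such a fully qualified access (the key values are illustrative only; besides the client, SBOOK's key fields are CARRID, CONNID, FLDATE and BOOKID):
SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'
AND FLDATE = '20090101'
AND BOOKID = '00000001'.
Since at most one row can match a fully specified primary key, no ENDSELECT loop is needed.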
<b>Select Statements contd.. SQL Interface</b>
1. Use column updates instead of single-row updates to update your database tables.
2. For all frequently used Select statements, try to use an index.
3. Using buffered tables improves the performance considerably.
<b>Point # 1</b>
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
<b>Point # 2</b>
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
<b>Point # 3</b>
Bypassing the buffer increases the network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
<b>Select Statements contd Aggregate Functions</b>
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
<b>Select Statements contd For All Entries</b>
FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
<u>Points that must be considered for FOR ALL ENTRIES</u>
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
<b>Select Statements contd Select Over more than one Internal table</b>
1. It is better to use a view instead of nested SELECT statements.
2. To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferable only if all the primary key fields are available in the WHERE clause for the joined tables; if the primary keys are not provided, the join itself takes time.
3. Instead of using nested Select loops it is often better to use subqueries.
<b>Point # 1</b>
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be better optimized by extracting all the data from the view DD01V:
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
<b>Point # 2</b>
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
<b>Point # 3</b>
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
<b>Internal Tables</b>
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use BINARY SEARCH instead of a linear search, but don't forget to sort your internal table first.
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
4. A binary search using a secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
6. Modifying selected components using MODIFY itab ... TRANSPORTING f1 f2 ... accelerates the task of updating a line of an internal table.
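<b>Point # 1</b>
A minimal sketch of Point # 1 (SFLIGHT is used only for illustration):
* Preferred: explicit work area, declared separately from the table body
DATA: ITAB TYPE STANDARD TABLE OF SFLIGHT,
      WA   TYPE SFLIGHT.
LOOP AT ITAB INTO WA.
ENDLOOP.
* Obsolete: the header line doubles as an implicit work area, which is ambiguous
DATA HTAB TYPE SFLIGHT OCCURS 0 WITH HEADER LINE.
LOOP AT HTAB.
ENDLOOP.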
<b>Point # 2</b>
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
IS MUCH FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
<b>Point # 3</b>
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
IS FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
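<b>Point # 4</b>
A minimal sketch (CITY is assumed to be a non-key field of ITAB; illustrative only):
SORT ITAB BY CITY.
READ TABLE ITAB INTO WA WITH KEY CITY = 'ROME' BINARY SEARCH.
Sorting by the secondary field first allows the binary search, which runs in O( log2( n ) ) time instead of the O( n ) of a linear read.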
<b>Point # 5</b>
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
<b>Point # 6</b>
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
<b>Point # 7</b>
Modifying selected components only makes the program faster as compared to Modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
<b>Point # 8</b>
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent of the number of entries (i.e. O( 1 )).
<b>Point # 9</b>
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
<b>Point # 10</b>
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
<b>Point # 11</b>
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables using ITAB2[] = ITAB1[] is considerably faster than LOOP-APPEND-ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
<b>Point # 12</b>
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
<b>Point # 13</b>
SORT ITAB BY K. makes the program run faster as compared to SORT ITAB.
<b>Internal Tables contd
Hashed and Sorted tables</b>
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access, sorted tables are more optimized as compared to hashed tables.
<b>Point # 1</b>
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
<b>Point # 2</b>
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
<b>Reward if useful</b> -
How to Achieve Performance Tuning In BPM Studio
Please tell me how to achieve performance tuning in a BPM project, and let me know if you have any documentation for this.
Thanks in advance.
*5. Group Automatic Activities in a Single Transactional Boundary*
When you have several automatic activities in a sequence, recognize this as a potential performance improvement opportunity. The default behavior of Oracle BPM is during each Automatic activity's execution:
1. Initiate the transaction
2. Read the work item instance's variable information from the Engine's database
3. Execute the logic in the Automatic activity
4. If no system exception occurs, commit the transaction and write the instance variable information back to the Engine's database
Many times you'll instead want to speed up execution when there are several Automatic activities in a sequence. If three Automatic activities are in a sequence, then the four steps listed above will occur three times. By grouping these into a single transactional boundary, instead of 12 steps you would have:
1. Initiate the transaction
2. Read the work item instance's variable information from the Engine's database
3. Execute the logic in the first Automatic activity
4. Execute the logic in the second Automatic activity
5. Execute the logic in the third Automatic activity
6. If no system exception occurs, commit the transaction and write the instance variable information back to the Engine's database
This grouping of Automatic activities into a single transactional boundary can be done in one of these three ways:
1. Create a Group around the sequence of Automatic activities (lasso the three activities) -> right mouse click inside the dotted line -> click "Create a Group with Selection" -> click "Runtime" in the upper left corner -> click the checkbox "Is Atomic".
2. Instead of placing the Automatic activities in the process, add them in a Procedure and then call the Procedure from a new Automatic activity in the process.
3. In Oracle BPM 10g you can enable "Greedy" execution for the process by right mouse clicking the process's name in the Project Navigator tab -> click "Properties" -> click the "Advanced" tab -> click the "Enable Greedy Execution" radio button.
Dan