Performance tuning for new OFMW environment
Hello
We have just set up a new UAT environment entirely on the latest Oracle FMW products, which include:
1) OS -RHEL V5
2) WebCenter
3) SOA Suite
4) Database
5) IDM
6) HTTP Server
7) WebLogic Server
As part of the post-infrastructure setup plan we need to do performance tuning as well. Shall we do only WLS performance tuning, or do we need to tune each and every product in order to get optimum results?
Any guides or resources to help?
Please advise.
Regards
Dev
When you install SOA Suite (and probably WebCenter too), a prerequisite check is performed
that reports the packages and kernel parameters that need to be adjusted on your operating system.
For products that run on WebLogic Server you will probably have to tune the JVM; examples can be found here:
- http://middlewaremagic.com/weblogic/?p=6930 (discusses JRockit)
- http://middlewaremagic.com/weblogic/?p=7083 (focuses on Coherence, but the considerations can be applied to other environments as well)
Examples that set up the SOA Suite can be found here:
- http://middlewaremagic.com/weblogic/?p=6040 (discusses considerations when installing the SOA suite)
- http://middlewaremagic.com/weblogic/?p=6872 (discusses considerations when adding the web-tier to the SOA suite)
The enterprise deployment guides can help too:
- SOA Suite: http://download.oracle.com/docs/cd/E21764_01/core.1111/e12036/toc.htm
- WebCenter: http://download.oracle.com/docs/cd/E21764_01/core.1111/e12037/toc.htm
These also contain sections on integration with Oracle Identity Management and the Oracle HTTP Server.
Similar Messages
-
Performance Tuning for BAM 11G
Hi All
Can anyone guide me to any documents or tips related to performance tuning for BAM 11g on Linux?
It would help to know if you have any specific issue. There are a number of tweaks all the way from the DB to the browser.
Few key things to follow:
1. Make sure you create indexes on DOs. If there is a lot of old data in a DO that is no longer useful, delete it periodically. Similar to relational database indexes, defining indexes in Oracle BAM creates and maintains an ordered list of data object elements for fast retrieval.
2. Ensure that IE is set up to do automatic caching. This will help reduce server round trips.
3. Tune DB performance. This typically requires a DBA. Identify the SQL statements most likely to be causing the waits by looking at
the drilldown Top SQL Statements Ordered by Wait Time. Use SQL Analyze, EXPLAIN PLAN, or the tkprof utility to tune the queries that were identified.
Check the data object tables involved in the query for missing indexes.
4. Use batching (this is on by default in most cases).
5. Use a fast network.
6. Use profilers to look at machine load/CPU usage, and distribute components on different boxes if needed.
7. Use better server AND client hardware. BAM dashboards are heavy users of AJAX/JavaScript logic on the client. -
What are the steps for doing performance tuning for a particular program?
Check this link:
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
checkout these links:
www.sapgenie.com/abap/performance.htm
www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_Introduction.asp
-
Performance Tuning for Concurrent Reports
Hi,
Can you help me with Performance Tuning for Concurrent Reports/Requests ?
It was running fine but has suddenly started running slow.
Request Name: Participation Process: Compensation Program
What is your application release?
Please see if (Performance Issues With Participation Process: Compensation Workbench [ID 389979.1]) is applicable.
To enable trace/debug, please see (FAQ: Common Tracing Techniques within the Oracle Applications 11i/R12 [ID 296559.1] -- 5. How does one enable trace for a concurrent program INCLUDING bind variables and waits?).
Thanks,
Hussein -
Performance tuning for siebel CRM application on oracle database
Hi,
Please send me the link for performance tuning for the Siebel CRM application on an Oracle database. If there are any white papers, please send me the links.
Thanks,
Rajesh
Hi,
This Metalink document is very useful; if you have any other documents or links, please let me know.
Thanks once again
Rajesh -
Performance tuning for Oracle SOA 11g
Hi,
Ours is a SOA 11g environment, version 11.1.1.4.2, on a Windows box.
What are the performance tuning steps/guidelines that need to be followed for SOA 11g in a production environment?
Thanks in advance.
Thanks & Regards,
anvv sharma
http://download.oracle.com/docs/cd/E17904_01/core.1111/e10108/toc.htm
Regards,
Anuj -
Need help in Performance tuning for function...
Hi all,
I am using the algorithm below (the Luhn algorithm) to calculate the 15th check digit for an IMEI (phone SIM card).
But the function below takes about 6 minutes for 5 million records. I have 170 million records in a table and want to calculate the Luhn digit for all of them, which might take up to 4-5 hours. Please help me with performance tuning (a better way or better logic for the Luhn calculation) of the function below.
A Wikipedia link for the Luhn algorithm is provided below.
Create or Replace FUNCTION AddLuhnToIMEI (LuhnPrimitive VARCHAR2)
   RETURN VARCHAR2
AS
   Index_no     NUMBER (2) := LENGTH (LuhnPrimitive);
   Multiplier   NUMBER (1) := 2;
   Total_Sum    NUMBER (4) := 0;
   Plus         NUMBER (2);
   ReturnLuhn   VARCHAR2 (25);
BEGIN
   WHILE Index_no >= 1
   LOOP
      Plus       := Multiplier * (TO_NUMBER (SUBSTR (LuhnPrimitive, Index_no, 1)));
      Multiplier := (3 - Multiplier);
      Total_Sum  := Total_Sum + TO_NUMBER (TRUNC ( (Plus / 10))) + MOD (Plus, 10);
      Index_no   := Index_no - 1;
   END LOOP;
   ReturnLuhn := LuhnPrimitive || CASE
                                     WHEN MOD (Total_Sum, 10) = 0 THEN '0'
                                     ELSE TO_CHAR (10 - MOD (Total_Sum, 10))
                                  END;
   RETURN ReturnLuhn;
EXCEPTION
   WHEN OTHERS
   THEN
      RETURN (LuhnPrimitive);
END AddLuhnToIMEI;
http://en.wikipedia.org/wiki/Luhn_algorithm
Any sort of help is much appreciated.
Thanks
Rede
There is an unneeded TO_NUMBER call in it: TRUNC already returns a number.
The MOD calls can also be avoided in some steps: since doubling a single digit never gives more than 18, you can speed up the calculation with a simple subtraction.
create or replace
FUNCTION AddLuhnToIMEI_fast (LuhnPrimitive VARCHAR2)
   RETURN VARCHAR2
AS
   Multiplier   PLS_INTEGER := 2;
   Total_Sum    PLS_INTEGER := 0;
   Plus         PLS_INTEGER;
   rest         PLS_INTEGER;
   ReturnLuhn   VARCHAR2 (25);
BEGIN
   FOR Index_no IN REVERSE 1 .. LENGTH (LuhnPrimitive)
   LOOP
      Plus       := Multiplier * TO_NUMBER (SUBSTR (LuhnPrimitive, Index_no, 1));
      Multiplier := 3 - Multiplier;
      IF Plus < 10 THEN
         Total_Sum := Total_Sum + Plus;
      ELSE
         Total_Sum := Total_Sum + Plus - 9;
      END IF;
   END LOOP;
   rest := MOD (Total_Sum, 10);
   ReturnLuhn := LuhnPrimitive || CASE WHEN rest = 0 THEN '0' ELSE TO_CHAR (10 - rest) END;
   RETURN ReturnLuhn;
END AddLuhnToIMEI_fast;
/
My tests gave an improvement of about 40%.
The next step to try could be native compilation of this function. This can give an additional big boost.
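For comparison, the same optimized calculation can be sketched in Python (illustrative only; the function name and sample values are mine, not from the thread):

```python
def add_luhn_to_imei(primitive: str) -> str:
    """Append the Luhn check digit to a digit string.

    Same trick as the PL/SQL above: doubling one digit never
    exceeds 18, so the digit sum of the product is product - 9
    whenever the product is 10 or more.
    """
    total = 0
    multiplier = 2  # the rightmost digit is doubled first
    for ch in reversed(primitive):
        product = multiplier * int(ch)
        total += product - 9 if product >= 10 else product
        multiplier = 3 - multiplier  # alternate 2, 1, 2, 1, ...
    rest = total % 10
    return primitive + ("0" if rest == 0 else str(10 - rest))

print(add_luhn_to_imei("49015420323751"))  # -> 490154203237518
```

For 170 million rows, though, the biggest win is usually not the per-call logic but avoiding row-by-row calls entirely (e.g. a single set-based UPDATE, or at least bulk processing), plus native compilation as suggested above.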
-
Performance Tuning for ECC 6.0
Hi All,
I have ECC 6.0 and EP 7.0 (ABAP+Java). My system is very slow. I have Oracle 10.2.0.1.
Can you please guide me on how to do these steps in the system:
1) Reorganization should be done at least for the top 10 huge tables
and their indexes
2) Unaccessed data can be taken out by SAP Archiving
3) Apply the relevant corrections for the top SAP standard objects
4) CBO update statistics must be up to date for all SAP and customer objects
I have never done performance tuning and want to do it on this system.
Regards,
Jitender
Hi,
Below are the details from ST06. Please suggest what I should do; the system performance is very bad.
I require your inputs for performance tuning
CPU
Utilization user %            3    Count                     2
Utilization system %          3    Load average  1 min    0.11
Utilization idle %            1                  5 min    0.21
I/O wait %                   93                 15 min    0.22
System calls/s              982    Context switches/s     1752
Interrupts/s               4528
Memory
Physical mem avail Kb   6291456    Physical mem free Kb  93992
Pages in/s                  473    Kb paged in/s          3784
Pages out/s                 211    Kb paged out/s         1688
Pool
Configured swap Kb     26869896    Maximum swap-space Kb 26869896
Free in swap-space Kb  21631032    Actual swap-space Kb  26869896
Disk with highest response time
Name                        md3    Response time ms         51
Utilization                   2    Queue                     0
Avg wait time ms              0    Avg service time ms      51
Kb transferred/s              2    Operations/s              0
Current parameters in the system
System: sapretail_RET_01 Profile Parameters for SAP Buffers
Date and Time: 08.01.2009 13:27:54
Buffer Name Comment
Profile Parameter Value Unit Comment
Program buffer PXA
abap/buffersize 450000 kB Size of program buffer
abap/pxa shared Program buffer mode
|
CUA buffer CUA
rsdb/cua/buffersize 3000 kB Size of CUA buffer
The number of max. buffered CUA objects is always: size / (2 kB)
|
Screen buffer PRES
zcsa/presentation_buffer_area 4400000 Byte Size of screen buffer
sap/bufdir_entries 2000 Max. number of buffered screens
|
Generic key table buffer TABL
zcsa/table_buffer_area 30000000 Byte Size of generic key table buffer
zcsa/db_max_buftab 5000 Max. number of buffered objects
|
Single record table buffer TABLP
rtbb/buffer_length 10000 kB Size of single record table buffer
rtbb/max_tables 500 Max. number of buffered tables
|
Export/import buffer EIBUF
rsdb/obj/buffersize 4096 kB Size of export/import buffer
rsdb/obj/max_objects 2000 Max. number of objects in the buffer
rsdb/obj/large_object_size 8192 Bytes Estimation for the size of the largest object
rsdb/obj/mutex_n 0 Number of mutexes in Export/Import buffer
|
OTR buffer OTR
rsdb/otr/buffersize_kb 4096 kB Size of OTR buffer
rsdb/otr/max_objects 2000 Max. number of objects in the buffer
rsdb/otr/mutex_n 0 Number of mutexes in OTR buffer
|
Exp/Imp SHM buffer ESM
rsdb/esm/buffersize_kb 4096 kB Size of exp/imp SHM buffer
rsdb/esm/max_objects 2000 Max. number of objects in the buffer
rsdb/esm/large_object_size 8192 Bytes Estimation for the size of the largest object
rsdb/esm/mutex_n 0 Number of mutexes in Exp/Imp SHM buffer
|
Table definition buffer TTAB
rsdb/ntab/entrycount 20000 Max. number of table definitions buffered
The size of the TTAB is nearly 100 bytes * rsdb/ntab/entrycount
|
Field description buffer FTAB
rsdb/ntab/ftabsize 30000 kB Size of field description buffer
rsdb/ntab/entrycount 20000 Max. number / 2 of table descriptions buffered
FTAB needs about 700 bytes per used entry
|
Initial record buffer IRBD
rsdb/ntab/irbdsize 6000 kB Size of initial record buffer
rsdb/ntab/entrycount 20000 Max. number / 2 of initial records buffered
IRBD needs about 300 bytes per used entry
|
Short nametab (NTAB) SNTAB
rsdb/ntab/sntabsize 3000 kB Size of short nametab
rsdb/ntab/entrycount 20000 Max. number / 2 of entries buffered
SNTAB needs about 150 bytes per used entry
|
Calendar buffer CALE
zcsa/calendar_area 500000 Byte Size of calendar buffer
zcsa/calendar_ids 200 Max. number of directory entries
|
Roll, extended and heap memory EXTM
ztta/roll_area 3000000 Byte Roll area per workprocess (total)
ztta/roll_first 1 Byte First amount of roll area used in a dialog WP
ztta/short_area 3200000 Byte Short area per workprocess
rdisp/ROLL_SHM 16384 8 kB Part of roll file in shared memory
rdisp/PG_SHM 8192 8 kB Part of paging file in shared memory
rdisp/PG_LOCAL 150 8 kB Paging buffer per workprocess
em/initial_size_MB 4092 MB Initial size of extended memory
em/blocksize_KB 4096 kB Size of one extended memory block
em/address_space_MB 4092 MB Address space reserved for ext. mem. (NT only)
ztta/roll_extension 2000000000 Byte Max. extended mem. per session (external mode)
abap/heap_area_dia 2000000000 Byte Max. heap memory for dialog workprocesses
abap/heap_area_nondia 2000000000 Byte Max. heap memory for non-dialog workprocesses
abap/heap_area_total 2000000000 Byte Max. usable heap memory
abap/heaplimit 40000000 Byte Workprocess restart limit of heap memory
abap/use_paging 0 Paging for flat tables used (1) or not (0)
|
Statistic parameters
rsdb/staton 1 Statistic turned on (1) or off (0)
rsdb/stattime 0 Times for statistic turned on (1) or off (0)
Regards,
Jitender -
Performance tuning for Sales Order and its configuration data extraction
Here is the data-fetching subroutine of an extract report.
This report takes 2.5 hours to extract 36,000 records on the quality server.
Kindly provide me some suggestions for performance tuning it.
SELECT auart vkorg vtweg spart vkbur augru
kunnr yxinsto bstdk vbeln kvgr1 kvgr2 vdatu
gwldt audat knumv
FROM vbak
INTO TABLE it_vbak
WHERE vbeln IN s_vbeln
AND erdat IN s_erdat
AND auart IN s_auart
AND vkorg = p_vkorg
AND spart IN s_spart
AND vkbur IN s_vkbur
AND vtweg IN s_vtweg.
IF NOT it_vbak[] IS INITIAL.
SELECT mvgr1 mvgr2 mvgr3 mvgr4 mvgr5
yyequnr vbeln cuobj
FROM vbap
INTO TABLE it_vbap
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln
AND posnr = '000010'.
SELECT bstkd inco1 zterm vbeln
prsdt
FROM vbkd
INTO TABLE it_vbkd
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln.
SELECT kbetr kschl knumv
FROM konv
INTO TABLE it_konv
FOR ALL ENTRIES IN it_vbak
WHERE knumv = it_vbak-knumv
AND kschl = 'PN00'.
SELECT vbeln parvw kunnr
FROM vbpa
INTO TABLE it_vbpa
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln
AND parvw IN ('PE', 'YU', 'RE').
ENDIF.
LOOP AT it_vbap INTO wa_vbap.
IF NOT wa_vbap-cuobj IS INITIAL.
CALL FUNCTION 'VC_I_GET_CONFIGURATION'
EXPORTING
instance = wa_vbap-cuobj
language = sy-langu
TABLES
configuration = it_config
EXCEPTIONS
instance_not_found = 1
internal_error = 2
no_class_allocation = 3
instance_not_valid = 4
OTHERS = 5.
IF sy-subrc = 0.
READ TABLE it_config WITH KEY atnam = 'IND_PRODUCT_LINES'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_GQ'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_VKN'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_ZE'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_HQ'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_CALCULATED_INST_HOURS'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
ENDIF.
ENDIF.
ENDLOOP. " End of loop on it_vbap
Hello Jaya,
Here are some points which will increase the performance of the program:
1. VBAK & VBAP are header & item tables, so the relation is 1 to many. In this case you can use an inner join instead of multiple SELECT statements.
2. If you are confident handling inner joins, you can use a single statement to get the data from VBAK, VBAP & VBKD.
3. Before using FOR ALL ENTRIES, check that the internal table is not initial.
Also sort the internal table and delete adjacent duplicates.
4. Sort all the resultant internal tables by the required key fields and always read them using a binary search.
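The "sort once, then binary search" idea in point 4 can be sketched generically in Python (illustrative only; in ABAP this corresponds to SORT itab BY key followed by READ TABLE ... WITH KEY ... BINARY SEARCH):

```python
from bisect import bisect_left

def binary_read(sorted_rows, key):
    """Analog of ABAP READ TABLE ... BINARY SEARCH: sorted_rows is a
    list of (key, value) tuples sorted by key; returns the first
    matching row or None. Cost is O(log n) per lookup instead of a
    full O(n) scan for every record in the driving loop."""
    i = bisect_left(sorted_rows, (key,))  # (key,) sorts just before (key, anything)
    if i < len(sorted_rows) and sorted_rows[i][0] == key:
        return sorted_rows[i]
    return None

rows = sorted([("0000000002", "B"), ("0000000001", "A"), ("0000000003", "C")])
print(binary_read(rows, "0000000002"))  # -> ('0000000002', 'B')
```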
You will find a number of documents that give a fair idea of what should and should not be done in a program with performance issues.
There are also a number of function modules and BAPIs from which you can get the sales order details. You can try 'BAPISDORDER_GETDETAILEDLIST'.
Regards,
Selva K. -
Performance Tuning for OBIEE Reports
Hi Experts,
I had a requirement for which I ended up building a snowflake model in the physical layer, i.e. one dimension table with three snowflake tables (materialized views).
The key point is that the dimension table is used in most of the OOTB reports,
so all the reports use the other three snowflake tables in the join conditions, due to which the reports take longer than ever, around 10 minutes.
Can anyone suggest good performance tuning tips to tune the reports?
I created some indexes on the materialized view columns and on the dimension table columns.
I created the materialized views with cache enabled, refreshing only once in 24 hours, etc.
Is there anything else I can do to improve performance, or do I have to consider re-designing the physical layer without the snowflake?
Please provide your valuable suggestions and comments.
Thank You
Kumar
Kumar,
Most of the performance tuning should be done at the back end. Calculate all the aggregates in the repository itself and create a fast refresh for the MVs. You can also schedule an iBot to run the report every hour or so, so that the report data is cached; when the user runs the report, the BI Server then extracts the data from the cache.
Hope that helps
~Srix -
Performance Tuning for a report
Hi,
We have developed a program which updates 2 fields, namely Reorder Point and Rounding Value, on the MRP1 tab in transaction MM03.
To update the fields, we are using the BAPI BAPI_MATERIAL_SAVEDATA.
The problem is that when we upload the data using a txt file, the program takes a very long time. Recently, when we uploaded a file containing 200,000 records, it took 27 hours. Below is the main portion of the code (I have omitted the OPEN DATASET etc.). Please help us fine-tune this so that we can upload these 200,000 records in 2-3 hours.
select matnr from mara into table t_mara.
select werks from t001w into corresponding fields of table t_t001w .
select matnr werks from marc into corresponding fields of table t_marc.
loop at str_table into wa_table.
if not wa_table-partnumber is initial.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
INPUT = wa_table-partnumber
IMPORTING
OUTPUT = wa_table-partnumber.
endif.
clear wa_message.
read table t_mara into wa_mara with key matnr = wa_table-partnumber.
if sy-subrc is not initial.
concatenate 'material ' wa_table-partnumber ' does not exist'
into wa_message.
append wa_message to t_message.
endif.
read table t_t001w into wa_t001w with key werks = wa_table-HostLocID.
if sy-subrc is not initial.
concatenate 'plant ' wa_table-HostLocID ' does not exist' into
wa_message.
append wa_message to t_message.
else.
case wa_t001w-werks.
when 'DE40'
or 'DE42'
or 'DE44'
or 'CN61'
or 'US62'
or 'SG70'
or 'FI40'.
read table t_marc into wa_marc with key matnr = wa_table-partnumber
werks = wa_table-HostLocID.
if sy-subrc is not initial.
concatenate 'material' wa_table-partnumber ' not extended to plant'
wa_table-HostLocID into wa_message.
append wa_message to t_message.
endif.
when others.
concatenate 'plant ' wa_table-HostLocID ' not allowed'
into wa_message.
append wa_message to t_message.
endcase.
endif.
if wa_message is initial.
data: wa_headdata type BAPIMATHEAD,
wa_PLANTDATA type BAPI_MARC,
wa_PLANTDATAx type BAPI_MARCX.
wa_headdata-MATERIAL = wa_table-PartNumber.
wa_PLANTDATA-plant = wa_table-HostLocID.
wa_PLANTDATAX-plant = wa_table-HostLocID.
wa_PLANTDATA-REORDER_PT = wa_table-ROP.
wa_PLANTDATAX-REORDER_PT = 'X'.
wa_plantdata-ROUND_VAL = wa_table-EOQ.
wa_plantdatax-round_val = 'X'.
CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
EXPORTING
HEADDATA = wa_headdata
PLANTDATA = wa_PLANTDATA
PLANTDATAX = wa_PLANTDATAX
IMPORTING
RETURN = t_bapiret.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
write t_bapiret-message.
endif.
clear: wa_mara, wa_t001w, wa_marc.
endloop.
loop at t_message into wa_message.
write wa_message.
endloop.
Thanks in advance.
Peter
Hi Peter,
I would suggest a few changes in your code. Please refer to the procedure below to optimize it.
Steps:
Run an SE30 runtime analysis and find out whether the ABAP code or the database fetch is taking the time.
Run the extended program check or Code Inspector to remove any errors and warnings.
A few code changes I would suggest:
For the SELECT queries from T001W & MARC, remove the CORRESPONDING FIELDS clause, as this also reduces performance. (Define an internal table with only the required fields, in the order they are specified in the table, and execute a SELECT query to fetch those fields.)
Also add an initial check that str_table[] is not initial before you execute the loop.
Wherever you have used READ TABLE, sort these tables and use BINARY SEARCH.
Clear the work areas after every APPEND statement.
As I don't have an SAP system handy, I would also check whether the BAPI's importing parameter structure is a table. In case it is, I would pass all the records to that table directly and then pass it to the BAPI, rather than looping over every record and updating them one at a time.
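The last point, passing all records in one call instead of calling the BAPI per record, can be sketched generically in Python (illustrative only; the real change would be made in ABAP by filling the BAPI's table parameter once):

```python
def process_in_batches(records, batch_call, batch_size=1000):
    """Invoke batch_call once per chunk rather than once per record.

    Every call carries fixed overhead (RFC round trip, commit), so
    fewer, larger calls amortize that overhead across many records.
    """
    results = []
    for i in range(0, len(records), batch_size):
        results.extend(batch_call(records[i:i + batch_size]))
    return results

# e.g. 10 records in chunks of 4 -> 3 calls instead of 10
doubled = process_in_batches(list(range(10)),
                             lambda chunk: [r * 2 for r in chunk],
                             batch_size=4)
```

With 200,000 records, even modest per-call overhead multiplied by 200,000 calls dominates the runtime, which is why batching (and committing per batch rather than per record) usually helps far more than micro-tuning the loop body.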
Hope this helps to resolve your problem.
Have a nice day
Thanks -
Hi,
Can anyone guide me on how to do performance tuning in an ECC5,
MSSQL 2000, Windows 2003 system?
Here the RAM size is 3.5 GB.
The page file size was 9 GB (I increased it to 11.5 GB, i.e. 3*RAM + 1 GB).
I also increased abap/buffersize to 400000.
Regards,
Arun
Hi Arun,
Go through the following links, which are very useful for understanding the parameter values.
Zero administration memory management
Regarding Virtual Memory?
Regards,
Hari.
PS: Points are welcome. -
How to do performance tuning in EXadata X4 environment?
Hi, I am pretty new to Exadata X4, and we had a database (mixed OLTP/load) created and data loaded.
Now the application is being tested against this database on Exadata.
However, they claim the test results were slower than in the current production environment, and they sent out the explain plans, etc.
I would like advice from the pros here on what Exadata-specific tuning techniques I can use to find out why this is happening.
Thanks a bunch.
The db version is 11.2.0.4.
Hi 9233598 -
Database tuning on Exadata is still much the same as on any Oracle database - you should just make sure you are incorporating the Exadata specific features and best practice as applicable. Reference MOS note: Oracle Exadata Best Practices (Doc ID 757552.1) to help configuring Exadata according to the Oracle documented best practices.
When comparing test results with your current production system, drill down into specific test cases running specific SQL that is identified as running slower on the Exadata than on the non-Exadata environment. You need to determine what specifically is running slower on the Exadata environment and why. This may also turn into a review of the Exadata and non-Exadata architecture. How is the application connected to the database in the non-Exadata vs. Exadata environment - what are the differences, if any, in the network architecture in between and in the application layer?
You mention they sent the explain plans. Looking at the actual execution plans, not just the explain plans, is a good place to start... to identify what the difference is in the database execution between the environments. Make sure you have the execution plans of both environments to compare. I recommend using the Real Time SQL Monitor tool - access it through EM GC/CC from the performance page or using the dbms_sql_tune package. Execute the comparison SQL and use the RSM reports on both environments to help verify you have accurate statistics, where the bottlenecks are and help to understand why you are getting the performance you are and what can be done to improve it. Depending on the SQL being performed and what type of workload any specific statement is doing (OLTP vs Batch/DW) you may need to look into tuning to encourage Exadata smart scans and using parallelism to help.
The SGA and PGA need to be sized appropriately... depending on your environment and workload, and how these were sized previously, your SGA may be sized too big. Often the SGA sizes do not usually need to be as big on Exadata - this is especially true on DW type workloads. DW workload should rarely need an SGA sized over 16GB. Alternatively, PGA sizes may need to be increased. But this all depends on evaluating your environment. Use the AWR to understand what's going on... however, be aware that the memory advisors in AWR - specifically for SGA and buffer cache size - are not specific to Exadata and can be misleading as to the recommended size. Too large of SGA will discourage direct path reads and thus, smart scans - and depending on the statement and the data being returned it may be better to smart scan than a mix of data being returned from the buffer_cache and disk.
You also likely need to evaluate your indexes and indexing strategy on Exadata. You still need indexes on Exadata - but many indexes may no longer be needed and may need to be removed. For the most part you only need PK/FK indexes and true "OLTP" based indexes on Exadata. Others may be slowing you down, because they avoid taking advantage of the Exadata storage offloading features.
You may also want to evaluate and determine whether to enable other features that can help performance, including configuring huge pages at the OS and DB levels (see MOS notes: 401749.1, 361323.1 and 1392497.1) and write-back caching (see MOS note: 1500257.1).
I would also recommend installing the Exadata plugins into your EM CC/GC environment. These can help drill into the Exadata storage cells and see how things are performing at that layer. You can also look up and understand the cellcli interface to do this from command line - but the EM interface does make things easier and more visible. Are you consolidating databases on Exadata? If so, you should look into enabling IORM. You also probably want to at least enable and set an IORM objective - matching your workload - even with just one database on the Exadata.
I don't know your current production environment infrastructure, but I will say that if things are configured correctly OLTP transactions on Exadata should usually be faster or at least comparable - though there are systems that can match and exceed Exadata performance for OLTP operations just by "out powering" it from a hardware perspective. For DW operations Exadata should outperform any "relatively" hardware comparable non-Exadata system. The Exadata storage offloading features should allow you to run these type of workloads faster - usually significantly so.
Hope this helps.
-Kasey -
Performance Tuning For 11g Database
I've heard some Oracle DBAs suggest/recommend that specific parameters be tuned in the OS when using Linux as a dedicated Oracle 11g database server. I guess those parameters vary greatly based on utilization, hardware, and various other factors. I just wanted to ask if there are some soft or safe performance adjustments I can make which, regardless of the various unknown factors, would most likely benefit or improve database performance?
Here is what I know:
- OS = RHEL 6.2 or OEL 6.2 64-bit
- RAM = 4 GB
- x2 Quad Core Xeon CPU's
Thanks for shedding any light on this for me...
>
I've heard some Oracle DBAs suggest/recommend that specific parameters be tuned in the OS when using Linux as a dedicated Oracle 11g database server. I guess those parameters vary greatly based on utilization, hardware, and various other factors. I just wanted to ask if there are some soft or safe performance adjustments I can make which, regardless of the various unknown factors, would most likely benefit or improve database performance?
>
ALWAYS read and follow the instructions provided by Oracle in the installation guide. It discusses the very issue you mention.
http://docs.oracle.com/cd/E11882_01/install.112/e24321.pdf
There is even a section for 'Configuring Kernel Parameters for Linux'
After you read the installation guide and evaluate your own installation environment if you have any specific questions then post them. And if you intend to 'guess those parameters vary greatly based on utilization, hardware, and various other factors' then it should be obvious that you need to provide information on 'utilization, hardware, and various other factors' or we won't be able to help you with your specific use case. -
Performance tuning for ABAP Query (created from t-cd SQ01)
Hello all,
We created an ABAP Query report from transaction SQ01.
But the generated report has an inappropriate SQL statement which causes a performance problem.
To solve this issue, I guess the easiest ways are:
0. Give up using it.
1. Copy it to another object in the customer namespace.
2. Adjust the SQL statement.
But I'm wondering if there are appropriate ways to adjust the SQL statement of a query.
Could anybody give me any better idea?
Thank you
Yuko
You can try this: create two ranges, for objnr and cdtcode, and fill them like this:
ra_objnr-sign = 'I'.
ra_objnr-option = 'CP'.
ra_objnr-low = 'OR*'.
append ra_objnr.
ra_code-sign = 'I'.
ra_code-option = 'CP'.
ra_code-low = 'CO*'.
append ra_code.
SELECT objnr udate utime
FROM jcds
INTO TABLE it_jcds
WHERE objnr IN ra_objnr
AND stat = l_tj02t
AND cdtcode IN ra_code
AND inact = space.
Regards,
John.