RKKBABS0 Performance Issues (Background Processing of CO99) for PM Orders
We are experiencing extremely long run times when batch processing through program RKKBABS0 in ECC 6.0 (just upgraded). The issue appears to be that the program uses the production order numbers to search against the ESKN table, which contains no AUFNR or AUFPL information.
Has anyone experienced this same issue and how was it resolved?
Edited by: Ken Lundeen on Apr 9, 2010 9:17 PM (corrected table name: ESKN)
(I'm sorry you've waited over a year for a reply.)
We also have a performance issue. In our case we do not use service entry sheets with maintenance or production orders, so AUFNR is not populated in table ESKN. We are unable to 'complete business' on our maintenance and production orders using batch processing because of the performance.
We use an Oracle database, which falls back to a full table scan in this situation. A secondary index on (MANDT, AUFNR) is of no value anyway: we have about 12 million records with the client populated and a blank AUFNR field.
Our solution is a combination of a modification and a new index. OSS pilot note 1532483 ("Performance of RKKBABS0 CHECK_ENTRYSHEET when reading ESKN") is a modification that introduces code improvements, especially when running in the background and closing several orders. Because we only have one client, we also created a new index consisting only of AUFNR. Oracle does not add a row to a secondary index if all fields of the index are null, which keeps the new index very small. We then updated the Oracle statistics to ensure Oracle would choose the new index.
We can now 'complete business' a single order online in under a minute, and the batch program runs much more efficiently.
This is not a perfect solution, but it has been a useful workaround for us. I hope this is useful to you.
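For illustration only: Oracle omits a row from a B-tree index when every indexed column is NULL, which is why a single-column AUFNR index stays tiny when most rows are blank. The sketch below mimics that effect using Python's sqlite3; since SQLite does keep NULL rows in ordinary indexes, it uses a partial index to get the analogous result. All table and column names here are made up, not the real ESKN DDL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE eskn (packno INTEGER PRIMARY KEY, aufnr TEXT)")

# Mostly-blank AUFNR, as in the scenario described above
rows = [(i, None if i % 1000 else f"ORD{i:08d}") for i in range(1, 100001)]
cur.executemany("INSERT INTO eskn VALUES (?, ?)", rows)

# SQLite keeps NULL rows in a plain index, so a partial index is used
# here to mimic Oracle's behaviour of skipping all-NULL entries.
cur.execute("CREATE INDEX eskn_aufnr ON eskn(aufnr) WHERE aufnr IS NOT NULL")
cur.execute("ANALYZE")  # refresh optimizer statistics, as with DBMS_STATS

plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT packno FROM eskn WHERE aufnr = 'ORD00001000'"
).fetchall()
print(plan)  # the plan should show a search using eskn_aufnr
```

The point of the sketch is only the shape of the fix: a small, selective index over the few non-blank values plus refreshed optimizer statistics lets the lookup avoid a full scan.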
Similar Messages
-
Error in bdc (Background processing not possible for material)
Background processing not possible for material with serial number reqmt
Message no. M7419
Diagnosis
You tried to enter the count for a material managed with serial numbers using background processing. This function is not supported.
Procedure
Use the transaction 'Enter Inventory Count' (MI04) to enter the count for materials managed with serial numbers.
Can you run this BDC in foreground mode?
I am doubtful about that.
Please check.
Gaurav Sood -
Performance issues executing process flows after upgrading db to 10G
We have installed OWF 2.6.2, and initially our database was at 9.2. Last week we upgraded our database to 10g, and process flow executions are taking much longer: from 1 minute to 15 minutes.
Any ideas anyone what could be the cause of this performance issue?
Thanks,
Yanet
Hi,
The Oracle 10g database behaves differently with respect to table and index statistics. So check these, and check whether the mappings are updating the statistics at the right moments with respect to the ETL process and at the right interval.
Also, check your generated sources for how statistics are gathered (dbms_stats.gather...). Does the index that might play a vital role in Oracle 9i get new statistics, or only the table? Or only the table whose row count was doubled by this mapping?
You can always take matters into your own hands by letting OWB NOT generate the source for gathering statistics, and calling your own procedure in a post-mapping.
Regards,
André -
Performance Issue With Displaying Candidate Details for iRecruitment
We are on EBS R12.1.3. When we try to display the Candidate Details page in iRecruitment (iRecruitment > Vacancies > Applicants > click on an applicant), the page spins for quite a while before showing the results, and sometimes we see 500 errors.
We are on R12.1.3 and have also applied patch 10427777:R12.IRC.B. Are there any tuning steps for the iRecruitment page /oracle/apps/irc/candidateSearch/webui/CmAplSrchPG?
You have already applied the patch mentioned in note: Performance Issue With Displaying Candidate Details Page in 12.1.3 (Doc ID 1293164.1).
Also check this note: Performance Issue when Clicking on Candidate Name (Doc ID 1575164.1).
thanks -
Performance issue:Show id and Description for same dimension member
Hi,
I am connecting a cube to another reporting system, and I need to show the ID of each member resulting from a query. My first thought was to use the kind of code below; however, when I do the same thing with many dimensions (many cross joins), it slows my query down a lot. So how can I have, in the same dimension, a member showing both a description and an ID? I also have a lot of statements, so I can't have just two columns in the dimension, or I would need to duplicate the MDX, which could also hurt performance.
So I am trying to get as a result:
Dim1 | Dim2 | Dim3 | Measure
1 | 50 | 32 | 25.2
and also be able to get:
Dim1 | Dim2 | Dim3 | Measure
NameElement1Dim1 | NameElement50Dim2 | NameElement32Dim3 | 25.2
Thanks in advance
with member [Measures].[IdElement] as
<element>.currentmember.properties("KEY")
select
{
CROSSJOIN({[Measures].[IdElement]},{[METRIC].[Description].[All]}),
CROSSJOIN({[Measures].[value]},{<listmetricmdx>})
} on columns,
{
<pointofview>
<element_and_function>
<TimeBreakdown>
} on rows
<list_filter_clause>
) as list
where ((ElementName is null AND IdElement=0) OR (ElementName is not null))
<list_condition_metric>
but I have multiple
Hi Vincent,
In your query you use CrossJoin. The CrossJoin function can cause performance issues when there are a lot of properties to display. If you cross-join medium-sized or large-sized sets (e.g., sets that contain more than 100 items each),
you can end up with a result set that contains many thousands of items, enough to seriously impair performance. For the detailed information, please see:
http://sqlmag.com/data-access/cross-join-performance
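As a back-of-the-envelope check of that growth, multiplying even modest set sizes explodes quickly (the sizes below are made up for illustration):

```python
from itertools import product

# Three hypothetical dimension member sets of modest size
dims = [range(100), range(120), range(150)]

# A cross join produces one tuple per combination of members
count = sum(1 for _ in product(*dims))
print(count)  # 100 * 120 * 150 = 1,800,000 combinations
```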
In your MDX query, ensure you retrieve only the required data. Here are some useful links for your reference.
Configure memory setting:
http://social.msdn.microsoft.com/Forums/en/sqlanalysisservices/thread/bf70ca19-5845-403f-a85f-eac77c4495e6
Performance Tuning:
http://www.microsoft.com/downloads/details.aspx?FamilyID=3be0488d-e7aa-4078-a050-ae39912d2e43&displaylang=en
http://www.packtpub.com/article/query-performance-tuning-microsoft-analysis-services-part2
Regards,
Charlie Liao
TechNet Community Support -
Can we schedule background processing through SM37 for a BDC recording?
hi all,
someone please tell me: can we schedule background job processing through SM37 for a BDC recording program (table control)?
The data is not on the presentation server or the application server; I am actually fetching the data from the database into an internal table and processing it through the ME22N t-code via the recording, for STO (stock transfer order) updates.
thanks in advance
sarathi,
You can do this using the function modules 'JOB_OPEN' and 'JOB_CLOSE' together with the SUBMIT statement.
Please find the below example.
*Submit report as job(i.e. in background)
data: jobname like tbtcjob-jobname value
' TRANSFER TRANSLATION'.
data: jobcount like tbtcjob-jobcount,
host like msxxlist-host.
data: begin of starttime.
include structure tbtcstrt.
data: end of starttime.
data: starttimeimmediate like btch0000-char1.
* Job open
call function 'JOB_OPEN'
exporting
delanfrep = ' '
jobgroup = ' '
jobname = jobname
sdlstrtdt = sy-datum
sdlstrttm = sy-uzeit
importing
jobcount = jobcount
exceptions
cant_create_job = 01
invalid_job_data = 02
jobname_missing = 03.
if sy-subrc ne 0.
"error processing
endif.
* Insert process into job
SUBMIT zreport and return
with p_param1 = 'value'
with p_param2 = 'value'
user sy-uname
via job jobname
number jobcount.
if sy-subrc > 0.
"error processing
endif.
* Close job
starttime-sdlstrtdt = sy-datum + 1.
starttime-sdlstrttm = '220000'.
call function 'JOB_CLOSE'
exporting
event_id = starttime-eventid
event_param = starttime-eventparm
event_periodic = starttime-periodic
jobcount = jobcount
jobname = jobname
laststrtdt = starttime-laststrtdt
laststrttm = starttime-laststrttm
prddays = 1
prdhours = 0
prdmins = 0
prdmonths = 0
prdweeks = 0
sdlstrtdt = starttime-sdlstrtdt
sdlstrttm = starttime-sdlstrttm
strtimmed = starttimeimmediate
targetsystem = host
exceptions
cant_start_immediate = 01
invalid_startdate = 02
jobname_missing = 03
job_close_failed = 04
job_nosteps = 05
job_notex = 06
lock_failed = 07
others = 99.
if sy-subrc ne 0.
"error processing
endif. -
Processing log output for Purchase order
Dear All,
I am getting a problem when creating an IDoc and checking the processing log for the IDoc number.
I created a custom IDoc for purchase orders, since my client needs only some fields, with header and line item in one line. I have done this, and I can see it in SDATA of the EDIDD structure. When I create a purchase order and save it, an IDoc number is posted, and I can indeed see the file in my physical directory.
Now when I go into change mode of the PO (ME22N) to see the processing log, it is not showing the IDoc number in the purchase order output processing log popup.
The processing log will only show output based on standard output control (table NAST). How is your IDoc being created? Via a user exit or a BAdI? If so, it will not appear in the processing log.
It is being created as a custom IDoc, for which I wrote a Z function module and assigned it to PO processing code ME10. When I check the standard IDoc for the PO, it generates the IDoc in the processing log as well.
I am just placing my code here; please have a look and suggest if anything needs to be done.
FUNCTION Z_IDOC_OUTPUT_ORDERS.
""Local Interface:
*" IMPORTING
*" VALUE(OBJECT) LIKE NAST STRUCTURE NAST
*" VALUE(CONTROL_RECORD_IN) LIKE EDIDC STRUCTURE EDIDC
*" EXPORTING
*" VALUE(OBJECT_TYPE) LIKE WFAS1-ASGTP
*" VALUE(CONTROL_RECORD_OUT) LIKE EDIDC STRUCTURE EDIDC
*" TABLES
*" INT_EDIDD STRUCTURE EDIDD
*" EXCEPTIONS
*" ERROR_MESSAGE_RECEIVED
*" DATA_NOT_RELEVANT_FOR_SENDING
DATA: xdruvo. "print operation
DATA: neu VALUE '1', "reprint
h_kappl LIKE nast-kappl, "auxiliary field: application
h_parvw LIKE ekpa-parvw, "auxiliary field: partner function
h_ebeln LIKE ekko-ebeln. "auxiliary field: document number
CLEAR control_record_out.
xdruvo = neu.
h_kappl = object-kappl.
h_ebeln = object-objky.
h_parvw = object-parvw.
DATA:
LT_EDIDC LIKE EDIDC OCCURS 0 WITH HEADER LINE,
L_EDIDC LIKE EDIDC,
L_SEND_FLAG,
W_SDATA LIKE EDIDD-SDATA.
DATA: T_BDI_MODEL LIKE BDI_MODEL OCCURS 0 WITH HEADER LINE.
DATA: T_EDIDC LIKE EDIDC OCCURS 0 WITH HEADER LINE.
DATA: T_EDIDD LIKE EDIDD OCCURS 0 WITH HEADER LINE.
DATA: C_MESSAGE_TYPE LIKE EDIDC-MESTYP VALUE 'ZORDER'.
*- Call function module to determine if message is to be distributed
OBJECT_TYPE = 'BUS2012'.
MOVE control_record_in TO control_record_out.
CALL FUNCTION 'ALE_MODEL_DETERMINE_IF_TO_SEND'
EXPORTING
MESSAGE_TYPE = C_MESSAGE_TYPE
IMPORTING
IDOC_MUST_BE_SENT = L_SEND_FLAG
EXCEPTIONS
OWN_SYSTEM_NOT_DEFINED = 1
OTHERS = 2.
DATA : BEGIN OF EKKO_tAB OCCURS 0,
EBELN LIKE EKKO-EBELN,
F1 TYPE C VALUE ',',
BUKRS LIKE EKKO-BUKRS,
F2 TYPE C VALUE ',',
BSART LIKE EKKO-BSART,
F3 TYPE C VALUE ',',
LIFNR LIKE EKKO-LIFNR,
F4 TYPE C VALUE ',',
WAERS LIKE EKKO-WAERS,
F5 TYPE C VALUE ',',
BEDAT LIKE EKKO-BEDAT,
F6 TYPE C VALUE ',',
WERKS LIKE EKPO-WERKS,
F7 TYPE C VALUE ',',
PLIFZ LIKE EKPO-PLIFZ,
F8 TYPE C VALUE ',',
EBELP LIKE EKPO-EBELP,
F9 TYPE C VALUE ',',
MATNR LIKE EKPO-MATNR,
F10 TYPE C VALUE ',',
MENGE LIKE EKPO-MENGE,
F11 TYPE C VALUE ',',
MEINS LIKE EKPO-MEINS,
F12 TYPE C VALUE ',',
END OF EKKO_TAB.
DATA SDATA1 LIKE EKKO_tAB OCCURS 0 WITH HEADER LINE.
DATA EBELN LIKE EKKO-EBELN.
WRITE OBJECT-OBJKY TO EBELN.
SELECT T1~EBELN T1~BUKRS BSART LIFNR WAERS BEDAT WERKS PLIFZ EBELP MATNR MENGE MEINS
FROM EKKO AS T1
INNER JOIN EKPO AS T2 ON T2~EBELN = T1~EBELN
INTO CORRESPONDING FIELDS OF TABLE EKKO_TAB
WHERE
*T1~KAPPL = 'EF' AND
T1~EBELN = EBELN.
*T1~KSCHL = 'YEDI' .
DATA SDATA LIKE EDIDD-SDATA.
DATA NDATE LIKE SY-DATUM.
DATA NMENGE(17) TYPE C.
LOOP AT EKKO_tAB.
WRITE EKKO_TAB-MENGE TO NMENGE.
NDATE = EKKO_tAB-BEDAT + EKKO_tAB-PLIFZ.
CONCATENATE EKKO_tAB-EBELP ',' EKKO_tAB-BUKRS ',' EKKO_tAB-BSART EKKO_tAB-EBELN ',' EKKO_tAB-LIFNR ',' EKKO_tAB-BEDAT ','
NDATE ',' EKKO_tAB-BSART EKKO_tAB-EBELN ',' EKKO_tAB-EBELN ', 0,' EKKO_tAB-MATNR ','
NMENGE ',' EKKO_tAB-MEINS ',' EKKO_tAB-WERKS INTO SDATA.
MOVE SDATA TO: W_SDATA, T_EDIDD-SDATA.
MOVE 'ZORDERS' TO T_EDIDD-SEGNAM.
APPEND T_EDIDD.
ENDLOOP.
*call function 'L_IDOC_SEGMENT_CREATE'
* exporting
* i_segnam = 'ZORDERS'
* i_sdata = w_sdata
* exceptions
* others = 1.
*LT_EDIDC
call function 'L_IDOC_SEND'
tables
t_comm_idoc = LT_EDIDC
exceptions
error_distribute_idoc = 1
others = 2.
*DATA T_BDI_MODEL LIKE BDI_MODEL.
WRITE OBJECT-OBJKY TO T_BDI_MODEL.
READ TABLE T_BDI_MODEL INDEX 1. " maximum 1 recipient
L_EDIDC-DIRECT = 1.
L_EDIDC-DOCNUM = DOCNUM. "***
L_EDIDC-RCVPRN = 'HCM_00_785'.
L_EDIDC-RCVPOR = 'MM_PO_FILO'.
MOVE 'ZORDER' TO L_EDIDC-MESTYP.
MOVE 'ZPURIDOC' TO L_EDIDC-IDOCTP.
MOVE 'LS' TO L_EDIDC-RCVPRT.
MOVE T_BDI_MODEL-RCVSYSTEM TO L_EDIDC-RCVPRN.
*MOVE-CORRESPONDING L_EDIDC TO W_EDIDC.
*- Distribute the iDoc
BREAK-POINT.
CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE' "IN UPDATE TASK
EXPORTING
MASTER_IDOC_CONTROL = L_EDIDC
TABLES
COMMUNICATION_IDOC_CONTROL = LT_EDIDC
MASTER_IDOC_DATA = T_EDIDD
EXCEPTIONS
ERROR_IN_IDOC_CONTROL = 01
ERROR_WRITING_IDOC_STATUS = 02
ERROR_IN_IDOC_DATA = 03
SENDING_LOGICAL_SYSTEM_UNKNOWN = 04.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
READ TABLE LT_EDIDC INDEX 1.
control_record_out-direct = '1'.
CONTROL_RECORD_OUT-DOCNUM = DOCNUM. " ***
control_record_out-serial = sy-datum.
control_record_out-serial+8 = sy-uzeit.
control_record_out-mestyp = LT_EDIDC-mestyp.
control_record_out-idoctp = LT_EDIDC-idoctp.
control_record_out-SNDPRN = 'HCM_00_786'.
MOVE 'MM_PO_FILO' TO control_record_out-RCVPOR.
MOVE 'LI' TO control_record_out-SNDPRT.
MOVE 'SAPQIS' TO control_record_out-SNDPOR.
ENDFUNCTION. -
Depreciation calculation issue on settlement of asset for PM order
Hi,
When I settle the asset for a PM order, it starts depreciation from the day after settlement. But when I create an asset in the same asset class and post the values from FI, it calculates depreciation from the same day as the expense booking. Why is it not calculating from the same day when it comes through a PM order?
Help me to resolve the issue.
Cheers.
Hi Blaz,
Thanks for your reply. Whatever you say is right.
My issue is that I am creating assets in one asset class, and when:
1. The asset capitalization value is posted by F-90, say on 31.03.2009, the system calculates depreciation from 31.03.2009 (depreciation is day-wise).
2. For an asset in the same asset class, with the same depreciation key and other values, when I post values by settling the PM order (say on 31.03.2009), the system starts depreciation from 01.04.2009.
I am not able to analyze why the depreciation calculation differs by a day for the same parameters.
Thanks -
Issue in Process Controlled workflow for Shopping cart in Quality system.
Hello All,
I have configured a process-controlled workflow in SRM 7.0 with a custom resolver, and I am facing an issue: the workflow works well in Development, but in Quality the approvers are dropped after the SC is ordered.
The SC workflow drops the approvers picked up from the interface methods /SAPSRM/IF_EX_WF_RESP_RESOLVER~GET_AREA_TO_ITEM_MAP and /SAPSRM/IF_EX_WF_RESP_RESOLVER~GET_APPROVERS_BY_AREA_GUID of BAdI /SAPSRM/BD_WF_RESP_RESOLVER. The approvers can be seen in the shopping cart Approval Preview tab until the SC is ordered.
I have compared the OSS notes relevant to workflow; all of them have been transported. I also compared and checked the general workflow settings, BRF configuration, and process level settings in Dev and Quality; everything is the same.
Also, while debugging, the approvers can be seen in the decision set table in the CREATE_PROCESS_FORECAST method of class /SAPSRM/CL_WF_PROCESS_MANAGER.
Kindly let me know what else i can check to find the root cause.
Thank you in advance for help!
Regards
Prasuna.
Hello Vinita,
Thanks for the input, and sorry for the not-so-"ASAP" reply.
From what I'm seeing in your two screenshots, I strongly believe that the problem occurs even before the Z implementation /SAPSRM/IF_EX_WF_RESP_RESOLVER~GET_APPROVERS_BY_AREA_GUID (in which the FM ZSRM_GET_USER_FROM_PGRP is called). I think the problem could be in the process level determination ZSRM_WF_BRF_0EXP000_SC_APP100. Let me explain:
In your cases where no buyer is determined, the approval tab does not even contain a process level for buyer approval. If the problem were indeed in the implementation /SAPSRM/IF_EX_WF_RESP_RESOLVER~GET_APPROVERS_BY_AREA_GUID, then the process level would be there, but the system would display, instead of the name of the buyer (if the buyer determination fails), a red label with a message like: "With the strategy 'Buyer determination' an approver could not be determined" (please check the image at the end of the text).
I can propose a way to rule this out: implement the method /SAPSRM/IF_EX_WF_RESP_RESOLVER~GET_FALLBACK_AGENTS of class ZCL_BADI_SC_WC (in case you didn't know, in this method you can specify a "default" approver in case the determination of an approver in GET_APPROVERS_BY_AREA_GUID fails). The idea is to specify a default approver and see how it behaves:
If the user you indicated in the method GET_FALLBACK_AGENTS appears as approver, then yes, the problem arises from the implementation GET_APPROVERS_BY_AREA_GUID, in which case it could be a data problem (perhaps in PPOSA_BBP?). You could also check transaction SU53 with the affected users to see if there is a missing authorization object.
If, on the other hand, the "default" approver is not shown, it means the buyer determination process level is not even called, so you should check ZSRM_WF_BRF_0EXP000_SC_APP100 and the "determine process restart" logic in /SAPSRM/CL_WF_PROCESS_MANAGER in more detail (I have never used this method, so I cannot tell whether it could be the source of the problem).
Also, you could implement the method GET_FALLBACK_AGENTS in the following way, so the default approver would be the WF administrator indicated in customizing (or you could just append any user you want directly):
METHOD /SAPSRM/IF_EX_WF_RESP_RESOLVER~GET_FALLBACK_AGENTS.
DATA: lv_admin_expr TYPE swd_shead-admin_expr,
lv_admin TYPE swd_shead-wfi_admin,
lv_admin_type TYPE sy-input,
ls_agent TYPE /sapsrm/s_wf_approver.
CALL FUNCTION 'SWD_WF_DEFINITION_ADMIN_GET'
IMPORTING
default_admin_expr = lv_admin_expr
default_admin = lv_admin
default_admin_type = lv_admin_type.
ls_agent-approver_id = lv_admin.
APPEND ls_agent TO rt_agent.
ENDMETHOD.
Agent determination error:
Please let me know the result of the test with the implementation of method GET_FALLBACK_AGENTS. By doing this we can make sure whether the problem really is in method GET_APPROVERS_BY_AREA_GUID or earlier. I just made the test in our system, and I'm almost sure you won't get the default approver, but I could be wrong.
Any question please let me know.
Best regards
Cristian R. -
Performance issue in process lockbox
Hi all,
As per our client's request, we customized the lockbox process, splitting validation and processing separately. We have 11 files to process daily.
Nowadays we are receiving huge files with more than 20K records. Processing the lockboxes takes more than 8 hours to complete, and the lockbox process is set as incompatible with itself, so we are unable to run it in parallel either.
Our instance is 11.5, with no customization in the lockbox concurrent program.
Is there any way to improve the performance of the lockbox process?
Is there any way to run it in parallel?
Any input would be very helpful for us.
Please post your valuable inputs.
Thanks,
Sundar
Hussein/Srini,
I am trying to get the trace, and I am also looking into the profile values. Let me find out whether there is any improvement.
I have one more info to share.
From the log file, we found that a few blocks take more time. For one file, 3 blocks take half of the execution time.
From log file,
entering arlpid
Current system time is 24-AUG-2010 08:25:29
exiting arlpid
Current system time is 24-AUG-2010 09:13:41
entering arlvcc
Current system time is 24-AUG-2010 09:19:19
exiting arlvcc
Current system time is 24-AUG-2010 10:20:55
entering arlvin-111
Current system time is 24-AUG-2010 10:23:42
exiting arlvin
Current system time is 24-AUG-2010 11:16:33
The total time taken to process the 3211 file is ~350 minutes, but the above three blocks cost more: they consumed ~170 minutes, i.e. half of the execution time.
What are these blocks?
Is there any way to analyse these blocks and improve the performance?
Performance issue with select query and for all entries.
hi,
I have a report to be performance-tuned.
The database table has around 20 million entries and 25 fields.
The report fetches the distinct values of two fields using one SELECT query; this first SELECT fetches around 150 entries from the table for those 2 fields.
Then it applies some logic, eliminates some entries, and brings the count down to around 80-90.
Then it applies a SELECT query on the same table again, using FOR ALL ENTRIES on the internal table with those 80-90 entries.
In short, it accesses the same database table twice.
So I tried to read the database table into an internal table, apply the logic there, and delete the unwanted entries, but it gave me a memory dump; it won't take that huge amount of data into ABAP memory.
Are around 80-90 entries too many for FOR ALL ENTRIES?
The logic applied to eliminate the entries from the internal table is too long to be converted into a WHERE clause for a single SELECT.
I really can't find a way out.
Please help.
chinmay kulkarni wrote:
Chinmay,
Even though you tried to ask the question with detailed explanation, unfortunately it is still not clear.
It is perfectly fine to access the same database twice. If that is working for you, I don't think there is any need to change the logic. As Rob mentioned, 80 or 8000 records is not a problem in "for all entries" clause.
>
> so, i tried to get the database table in internal table and apply the logic on internal table and delete the unwanted entries.. but it gave me memory dump, and it wont take that huge amount of data into abap memory...
>
It is not clear what you tried to do here. Did you try to bring all 20 million records into an internal table? That will certainly cause the program to short dump with memory shortage.
> the logic that is applied to eliminate the entries from internal table is too long, and hence cannot be converted into where clause to convert it into single select..
>
That is fine. Actually, it is better (performance wise) to do much of the work in ABAP than writing a complex WHERE clause that might bog down the database. -
Performance issue in 'Select statement with for all entries'
Hi,
The following SELECT statement is taking too much time.
SELECT * FROM /rb04/yc5_mver
INTO TABLE g_it_mver
FOR ALL ENTRIES IN l_it_inva
WHERE matnr EQ l_it_inva-matnr AND
werks EQ l_it_inva-werks AND
indei EQ l_it_inva-indei AND
gjahr IN g_r_gjahr.
Internal table l_it_inva is having too many records.
Is there any way to optimize it.
Regards,
Tintu
Hi Tintu,
First check that table l_it_inva is not initial (FOR ALL ENTRIES with an empty driver table would select every row).
Then sort internal table l_it_inva by matnr werks indei, and delete adjacent duplicates from l_it_inva comparing matnr werks indei, so that the driver table contains no duplicate keys (note that gjahr comes from the range g_r_gjahr, not from l_it_inva).
Then use following select query.
SELECT * FROM /rb04/yc5_mver
INTO TABLE g_it_mver
FOR ALL ENTRIES IN l_it_inva
WHERE matnr EQ l_it_inva-matnr AND
werks EQ l_it_inva-werks AND
indei EQ l_it_inva-indei AND
gjahr IN g_r_gjahr.
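The sort-and-dedup advice above can be pictured outside ABAP as well; here is a rough Python sketch of deduplicating the driver keys before a batched lookup (the key values are invented for illustration):

```python
# Hypothetical driver rows, as (matnr, werks, indei) keys with duplicates
l_it_inva = [
    ("MAT1", "1000", "A"),
    ("MAT2", "1000", "B"),
    ("MAT1", "1000", "A"),  # duplicate key
    ("MAT3", "2000", "A"),
]

# Sort, then drop adjacent duplicates -- the same effect as
# SORT ... / DELETE ADJACENT DUPLICATES in ABAP
l_it_inva.sort()
deduped = [row for i, row in enumerate(l_it_inva)
           if i == 0 or row != l_it_inva[i - 1]]
print(deduped)       # three unique keys remain
print(len(deduped))  # 3
```

Shrinking the driver table this way means the database is asked for each key combination only once.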
Regards,
Vijay -
Need help in improving performance of prorating quantities to stores for existing orders
I have code written to allocate quantities to stores for an existing order. Suppose there is a supplier order with a quantity of 100, and this needs to be distributed among 4 stores which have demands of 50, 40, 30 and 20. Since total demand does not equal the available quantity, the available quantity needs to be allocated to the stores using an algorithm.
The algorithm allocates to the stores in small pieces of the inner size. The inner size is the quantity within the pack of packs, i.e. a pack has 4 pieces and each piece internally has 10 pieces; this 10 is called the inner size.
While allocating, each store is first provided one inner size of quantity, and this loop continues until the available quantity runs out.
Ex:
store1=10
store2=10
store3=10
store4=10
second time:
store1=10(old)+10
store2=10(old)+10
store3=10(old)+10
store4=10(old)+10 -- demand fulfilled
third time
store1=20(old)+10
store2=20(old)+10
-- available quantity is over and hence stopped.
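The round-robin walk-through above can be condensed into a minimal Python sketch (the function name and numbers are just the example's made-up values; this illustrates only the loop, not the database logic):

```python
def prorate(available, demands, innersize):
    """Round-robin allocate `available` in innersize chunks until it
    runs out, skipping stores whose demand is already met."""
    alloc = [0] * len(demands)
    while available >= innersize:
        progressed = False
        for i, demand in enumerate(demands):
            if alloc[i] < demand and available >= innersize:
                alloc[i] += innersize
                available -= innersize
                progressed = True
        if not progressed:  # every demand met; stop early
            break
    return alloc

print(prorate(100, [50, 40, 30, 20], 10))  # [30, 30, 20, 20]
```

This reproduces the walk-through: store4's demand of 20 is met in round two, and the quantity runs out while topping up store1 and store2 in round three.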
My code below-
=================================================
int prorate_allocation()
{
    char *function = "prorate_allocation";
    long t_cnt_st;
    int t_innersize;
    int t_qty_ordered;
    int t_cnt_lp;
    bool t_complete;
    sql_cursor alloc_cursor;

    /* cursor to get orders, their items, the inner size and available qty */
    EXEC SQL DECLARE c_order CURSOR FOR
        SELECT oh.order_no,
               ol.item,
               isc.inner_pack_size,
               ol.qty_ordered
          FROM ABRL_ALC_CHG_TEMP_ORDHEAD oh,
               ordloc ol,
               item_supp_country isc
         WHERE oh.order_no=ol.order_no
           AND oh.supplier=isc.supplier
           AND ol.item=isc.item
           AND EXISTS (SELECT 1 FROM abrl_alc_chg_details aacd WHERE oh.order_no=aacd.order_no)
           AND ol.qty_ordered>0;

    char v_order_no[10];
    char v_item[25];
    double v_innersize;
    char v_qty_ordered[12];
    char v_alloc_no[11];
    char v_location[10];
    char v_qty_allocated[12];
    int *store_quantities;
    bool *store_processed_flag;

    EXEC SQL OPEN c_order;
    if (SQL_ERROR_FOUND)
    {
        sprintf(err_data,"CURSOR OPEN: cursor=c_order");
        strcpy(table,"ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
        WRITE_ERROR(SQLCODE,function,table,err_data);
        return(-1);
    }
    EXEC SQL ALLOCATE :alloc_cursor;
    while(1)
    {
        EXEC SQL FETCH c_order INTO :v_order_no,
                                    :v_item,
                                    :v_innersize,
                                    :v_qty_ordered;
        if (SQL_ERROR_FOUND)
        {
            sprintf(err_data,"CURSOR FETCH: cursor=c_order");
            strcpy(table,"ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
            WRITE_ERROR(SQLCODE,function,table,err_data);
            return(-1);
        }
        if (NO_DATA_FOUND) break;
        t_qty_ordered = atoi(v_qty_ordered);
        t_innersize   = (int)v_innersize;
        t_cnt_lp      = t_qty_ordered/t_innersize;
        t_complete    = FALSE;
        EXEC SQL SELECT COUNT(*) INTO :t_cnt_st
          FROM abrl_alc_chg_ad ad,
               alloc_header ah
         WHERE ah.alloc_no=ad.alloc_no
           AND ah.order_no=:v_order_no
           AND ah.item=:v_item
           AND ad.qty_allocated!=0;
        if (SQL_ERROR_FOUND)
        {
            sprintf(err_data,"SELECT: ALLOC_DETAIL, count = %ld\n",t_cnt_st);
            strcpy(table,"ALLOC_DETAIL");
            WRITE_ERROR(SQLCODE,function,table,err_data);
            return(-1);
        }
        if (t_cnt_st>0)
        {
            store_quantities     = (int *) calloc(t_cnt_st,sizeof(int));
            store_processed_flag = (bool *) calloc(t_cnt_st,sizeof(bool));
            EXEC SQL EXECUTE
            BEGIN
                OPEN :alloc_cursor FOR SELECT ad.alloc_no,
                                              ad.to_loc,
                                              ad.qty_allocated
                                         FROM alloc_header ah,
                                              abrl_alc_chg_ad ad
                                        WHERE ah.alloc_no=ad.alloc_no
                                          AND ah.item=:v_item
                                          AND ah.order_no=:v_order_no
                                        ORDER BY ad.qty_allocated DESC;
            END;
            END-EXEC;
            while (t_cnt_lp>0)
            {
                EXEC SQL WHENEVER NOT FOUND DO break;
                for (int i=0;i<t_cnt_st;i++)
                {
                    EXEC SQL FETCH :alloc_cursor INTO :v_alloc_no,
                                                      :v_location,
                                                      :v_qty_allocated;
                    if (store_quantities[i]!=atoi(v_qty_allocated))
                    {
                        store_quantities[i]=store_quantities[i]+t_innersize;
                        t_cnt_lp--;
                        if (t_cnt_lp==0)
                        {
                            EXEC SQL CLOSE :alloc_cursor;
                            break;
                        }
                    }
                    else
                    {
                        if (store_processed_flag[i]==FALSE)
                        {
                            store_processed_flag[i]=TRUE;
                            t_cnt_st--;
                            if (t_cnt_st==0)
                            {
                                t_complete=TRUE;
                                break;
                            }
                        }
                    }
                }
                if (t_complete==TRUE && t_cnt_lp!=0)
                {
                    for (int i=0;i<t_cnt_st;i++)
                    {
                        store_quantities[i]=store_quantities[i]+t_innersize;
                        t_cnt_lp--;
                        if (t_cnt_lp==0)
                        {
                            EXEC SQL CLOSE :alloc_cursor;
                            break;
                        }
                    }
                }
            } /* END OF WHILE */
            EXEC SQL EXECUTE
            BEGIN
                OPEN :alloc_cursor FOR SELECT ad.alloc_no,
                                              ad.to_loc,
                                              ad.qty_allocated
                                         FROM alloc_header ah,
                                              abrl_alc_chg_ad ad
                                        WHERE ah.alloc_no=ad.alloc_no
                                          AND ah.item=:v_item
                                          AND ah.order_no=:v_order_no
                                        ORDER BY ad.qty_allocated DESC;
            END;
            END-EXEC;
            EXEC SQL WHENEVER NOT FOUND DO break;
            for (int i=0;i<t_cnt_st;i++)
            {
                EXEC SQL FETCH :alloc_cursor INTO :v_alloc_no,
                                                  :v_location,
                                                  :v_qty_allocated;
                EXEC SQL UPDATE abrl_alc_chg_ad
                            SET qty_allocated=:store_quantities[i]
                          WHERE to_loc=:v_location
                            AND alloc_no=:v_alloc_no;
                if (SQL_ERROR_FOUND)
                {
                    sprintf(err_data,"UPDATE: ALLOC_DETAIL, location = %s , alloc_no =%s\n", v_location,v_alloc_no);
                    strcpy(table,"ALLOC_DETAIL");
                    WRITE_ERROR(SQLCODE,function,table,err_data);
                    return(-1);
                }
                EXEC SQL UPDATE ABRL_ALC_CHG_DETAILS
                            SET PROCESSED='Y'
                          WHERE LOCATION=:v_location
                            AND alloc_no=:v_alloc_no
                            AND PROCESSED IN ('E','U');
                if (SQL_ERROR_FOUND)
                {
                    sprintf(err_data,"UPDATE: ABRL_ALC_CHG_DETAILS, location = %s , alloc_no =%s\n", v_location,v_alloc_no);
                    strcpy(table,"ABRL_ALC_CHG_DETAILS");
                    WRITE_ERROR(SQLCODE,function,table,err_data);
                    return(-1);
                }
            }
            EXEC SQL COMMIT;
            EXEC SQL CLOSE :alloc_cursor;
            free(store_quantities);
            free(store_processed_flag);
        } /* END OF IF */
    } /* END OF OUTER WHILE LOOP */
    EXEC SQL CLOSE c_order;
    if (SQL_ERROR_FOUND)
    {
        sprintf(err_data,"CURSOR CLOSE: cursor = c_order");
        strcpy(table,"ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
        WRITE_ERROR(SQLCODE,function,table,err_data);
        return(-1);
    }
    return(0);
} /* end prorate_allocation*/I have a code written to allocate quantities to stores for an existing order. Suppose there is a supplier order with quantity of 100 and this needs to distributed among 4 stores which has a demand of 50,40,30 and 20. Since total demand not equal to available quantity. the available quantity needs to be allocated to stores using an algorithm.
ALgorithm is like allocating the stores in small pieces of innersize. Innersize is nothing but
quantity within the pack of packs i.e. pack has 4 pieces and each pieces internally has 10 pieces,
this 10 is called innersize.
While allocating, each store is provided quantities of innersize first and this looping continues
until available quantity is over
Ex:
store1=10
store2=10
store3=10
store4=10
second time:
store1=10(old)+10
store2=10(old)+10
store3=10(old)+10
store4=10(old)+10--demand fulfilled
third time
store1=20(old)+10
store2=20(old)+10
-- available quantity is over and hence stopped.
My code below-
=================================================
int prorate_allocation()
{
    char *function = "prorate_allocation";
    long  t_cnt_st;
    int   t_innersize;
    int   t_qty_ordered;
    int   t_cnt_lp;
    bool  t_complete;
    sql_cursor alloc_cursor;

    /* Cursor to get each order, its item, the inner pack size and the
       available (ordered) quantity. */
    EXEC SQL DECLARE c_order CURSOR FOR
        SELECT oh.order_no,
               ol.item,
               isc.inner_pack_size,
               ol.qty_ordered
          FROM ABRL_ALC_CHG_TEMP_ORDHEAD oh,
               ordloc ol,
               item_supp_country isc
         WHERE oh.order_no = ol.order_no
           AND oh.supplier = isc.supplier
           AND ol.item     = isc.item
           AND EXISTS (SELECT 1 FROM abrl_alc_chg_details aacd
                        WHERE oh.order_no = aacd.order_no)
           AND ol.qty_ordered > 0;

    char   v_order_no[10];
    char   v_item[25];
    double v_innersize;
    char   v_qty_ordered[12];
    char   v_alloc_no[11];
    char   v_location[10];
    char   v_qty_allocated[12];
    int   *store_quantities;
    bool  *store_processed_flag;

    EXEC SQL OPEN c_order;
    if (SQL_ERROR_FOUND)
    {
        sprintf(err_data, "CURSOR OPEN: cursor=c_order");
        strcpy(table, "ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
        WRITE_ERROR(SQLCODE, function, table, err_data);
        return (-1);
    }

    EXEC SQL ALLOCATE :alloc_cursor;

    while (1)
    {
        EXEC SQL FETCH c_order INTO :v_order_no,
                                    :v_item,
                                    :v_innersize,
                                    :v_qty_ordered;
        if (SQL_ERROR_FOUND)
        {
            sprintf(err_data, "CURSOR FETCH: cursor=c_order");
            strcpy(table, "ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
            WRITE_ERROR(SQLCODE, function, table, err_data);
            return (-1);
        }
        if (NO_DATA_FOUND) break;

        t_qty_ordered = atoi(v_qty_ordered);
        t_innersize   = (int)v_innersize;
        if (t_innersize <= 0) continue;           /* guard against division by zero */
        t_cnt_lp   = t_qty_ordered / t_innersize; /* whole inner packs to spread */
        t_complete = FALSE;

        /* How many allocation detail rows exist for this order/item? */
        EXEC SQL SELECT COUNT(*) INTO :t_cnt_st
          FROM abrl_alc_chg_ad ad,
               alloc_header ah
         WHERE ah.alloc_no = ad.alloc_no
           AND ah.order_no = :v_order_no
           AND ah.item     = :v_item
           AND ad.qty_allocated != 0;
        if (SQL_ERROR_FOUND)
        {
            sprintf(err_data, "SELECT: ALLOC_DETAIL, count = %ld\n", t_cnt_st);
            strcpy(table, "ALLOC_DETAIL");
            WRITE_ERROR(SQLCODE, function, table, err_data);
            return (-1);
        }

        if (t_cnt_st > 0)
        {
            store_quantities     = (int *)  calloc(t_cnt_st, sizeof(int));
            store_processed_flag = (bool *) calloc(t_cnt_st, sizeof(bool));

            EXEC SQL EXECUTE
                BEGIN
                    OPEN :alloc_cursor FOR SELECT ad.alloc_no,
                                                  ad.to_loc,
                                                  ad.qty_allocated
                                             FROM alloc_header ah,
                                                  abrl_alc_chg_ad ad
                                            WHERE ah.alloc_no = ad.alloc_no
                                              AND ah.item     = :v_item
                                              AND ah.order_no = :v_order_no
                                            ORDER BY ad.qty_allocated DESC;
                END;
            END-EXEC;

            /* Hand out inner packs round-robin across the allocations
               until the ordered quantity is used up. */
            while (t_cnt_lp > 0)
            {
                EXEC SQL WHENEVER NOT FOUND DO break;
                for (int i = 0; i < t_cnt_st; i++)
                {
                    EXEC SQL FETCH :alloc_cursor INTO :v_alloc_no,
                                                      :v_location,
                                                      :v_qty_allocated;
                    if (store_quantities[i] != atoi(v_qty_allocated))
                    {
                        store_quantities[i] += t_innersize;
                        t_cnt_lp--;
                        if (t_cnt_lp == 0)
                        {
                            EXEC SQL CLOSE :alloc_cursor;
                            break;
                        }
                    }
                    else if (store_processed_flag[i] == FALSE)
                    {
                        store_processed_flag[i] = TRUE;
                        t_cnt_st--;
                        if (t_cnt_st == 0)
                        {
                            t_complete = TRUE;
                            break;
                        }
                    }
                }
                if (t_complete == TRUE && t_cnt_lp != 0)
                {
                    for (int i = 0; i < t_cnt_st; i++)
                    {
                        store_quantities[i] += t_innersize;
                        t_cnt_lp--;
                        if (t_cnt_lp == 0)
                        {
                            EXEC SQL CLOSE :alloc_cursor;
                            break;
                        }
                    }
                }
            } /* END OF WHILE */

            /* Re-open the ref cursor and write the prorated quantities back. */
            EXEC SQL EXECUTE
                BEGIN
                    OPEN :alloc_cursor FOR SELECT ad.alloc_no,
                                                  ad.to_loc,
                                                  ad.qty_allocated
                                             FROM alloc_header ah,
                                                  abrl_alc_chg_ad ad
                                            WHERE ah.alloc_no = ad.alloc_no
                                              AND ah.item     = :v_item
                                              AND ah.order_no = :v_order_no
                                            ORDER BY ad.qty_allocated DESC;
                END;
            END-EXEC;

            EXEC SQL WHENEVER NOT FOUND DO break;
            for (int i = 0; i < t_cnt_st; i++)
            {
                EXEC SQL FETCH :alloc_cursor INTO :v_alloc_no,
                                                  :v_location,
                                                  :v_qty_allocated;

                EXEC SQL UPDATE abrl_alc_chg_ad
                            SET qty_allocated = :store_quantities[i]
                          WHERE to_loc   = :v_location
                            AND alloc_no = :v_alloc_no;
                if (SQL_ERROR_FOUND)
                {
                    sprintf(err_data,
                            "UPDATE: ALLOC_DETAIL, location = %s , alloc_no = %s\n",
                            v_location, v_alloc_no);
                    strcpy(table, "ALLOC_DETAIL");
                    WRITE_ERROR(SQLCODE, function, table, err_data);
                    return (-1);
                }

                EXEC SQL UPDATE ABRL_ALC_CHG_DETAILS
                            SET PROCESSED = 'Y'
                          WHERE LOCATION  = :v_location
                            AND alloc_no  = :v_alloc_no
                            AND PROCESSED IN ('E', 'U');
                if (SQL_ERROR_FOUND)
                {
                    sprintf(err_data,
                            "UPDATE: ABRL_ALC_CHG_DETAILS, location = %s , alloc_no = %s\n",
                            v_location, v_alloc_no);
                    strcpy(table, "ABRL_ALC_CHG_DETAILS");
                    WRITE_ERROR(SQLCODE, function, table, err_data);
                    return (-1);
                }
            }
            EXEC SQL COMMIT;
            EXEC SQL CLOSE :alloc_cursor;
            free(store_quantities);
            free(store_processed_flag);
        } /* END OF IF */
    } /* END OF OUTER WHILE LOOP */

    EXEC SQL CLOSE c_order;
    if (SQL_ERROR_FOUND)
    {
        sprintf(err_data, "CURSOR CLOSE: cursor = c_order");
        strcpy(table, "ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
        WRITE_ERROR(SQLCODE, function, table, err_data);
        return (-1);
    }
    return (0);
} /* end prorate_allocation */ -
The request could not be submitted for background processing.
Post Author: Chriss
CA Forum: Administration
It's a BOE XI SR2 install on a Win2k3 server, with a print cluster of two print spools handling 3000+ printers. I discovered this error to be intermittent and only on one of the spools. It turned out that the only common factor was an HP4250 print driver. I backed all the 4250s down to 4200 drivers and the intermittent error ("Error in File. The request could not be submitted for background processing.") went from about 100 a day to zero. The other spool had a different version of the HP4250 driver and would on rare occasion cause this error, "Error in File ... Page header or footer longer than a page.", but never the background processing error.
For reference, when I got this error in XI R1, this was the solution for 'the error with one name and many causes': the error "The request could not be submitted for background processing" can be related to a corrupt or wrong-versioned crpe32.dll in the Crystal bin folder. Renaming it to crpe32.dll_bak and then using the repair command in the "Add/Remove Programs" tool in the "Control Panel" will reinstall the correct dll. Then restart the Crystal services.
Post Author: krishna.moorthi
CA Forum: Administration
For Crystal reports :
Error : "The request could not be submitted for background processing"
I think this was not related to a corrupt or wrong-versioned crpe32.dll;
the scenario below is another reason for getting this error.
I got the error when the main report (Crystal Reports 10) had more than 2 subreports and the proper tables were not assigned to those subreports.
Example: (this code raises the above-mentioned error.)
rpt.SetDataSource(Exdataset);
rpt.Subreports["subreportname1"].SetDataSource(Exdataset); // should be Exdataset.Tables[1]
rpt.Subreports["subreportname2"].SetDataSource(Exdataset); // should be Exdataset.Tables[2] -
Performance issue in DB need help with analysing this ADDM report
Hi,
My environment:
Os: RHEL5U3 / 11.1.0.7 64 bit / R12.1.1 64 bit
Issue:
For the past few days we have been facing a serious performance problem in our Production instance. The issue normally occurs occasionally, for 5 to 10 minutes per day. At the time of the issue we are not able to access the EBS application; it takes too long to load. But on the backend all the Oracle, listener and apps services are up and running. There are no locks at the table or session level. CPU and memory usage is normal.
We monitored this issue using Enterprise Manager and found many waiting sessions in the Active Sessions tab. At these times the EBS application cannot be accessed and loads far too slowly. After some time the waiting sessions in the Active Sessions tab return to normal, and when we try to access the EBS application it works fine.
We tried to find the cause of the issue by running an ADDM report, but I am not able to understand what it says. Kindly advise.
ADDM Report for Task 'TASK_42656'
Analysis Period
AWR snapshot range from 14754 to 14755.
Time period starts at 17-APR-12 11.00.22 AM
Time period ends at 17-APR-12 12.00.33 PM
Analysis Target
Database 'PRD' with DB ID 1789440879.
Database version 11.1.0.7.0.
ADDM performed an analysis of instance PRD, numbered 1 and hosted at
advgrpdb.advgroup.ae.
Activity During the Analysis Period
Total database time was 18674 seconds.
The average number of active sessions was 5.17.
Summary of Findings
    Description                Active Sessions         Recommendations
                               (% of Activity)
  1 Top SQL by DB Time         3.43 | 66.33            5
  2 Buffer Busy                2.52 | 48.81            5
  3 Buffer Busy                1.39 | 26.81            2
  4 Log File Switches           .91 | 17.56            1
  5 Buffer Busy                 .56 | 10.87            2
  6 Undersized SGA              .38 |  7.37            1
  7 Commits and Rollbacks       .28 |  5.42            1
  8 Undo I/O                    .18 |  3.53            0
  9 CPU Usage                   .13 |  2.57            1
 10 Top SQL By I/O              .11 |  2.21            1
Findings and Recommendations
Finding 1: Top SQL by DB Time
Impact is 3.43 active sessions, 66.33% of total activity.
SQL statements consuming significant database time were found.
Recommendation 1: SQL Tuning
Estimated benefit is 1.59 active sessions, 30.8% of total activity.
Action
Investigate the SQL statement with SQL_ID "a49xsqhv0h31b" for possible
performance improvements.
Related Object
SQL statement with SQL_ID a49xsqhv0h31b.
SELECT R.Conc_Login_Id, R.Request_Id, R.Phase_Code, R.Status_Code,
P.Application_ID, P.Concurrent_Program_ID, P.Concurrent_Program_Name,
R.Enable_Trace, R.Restart, DECODE(R.Increment_Dates, 'Y', 'Y', 'N'),
R.NLS_Compliant, R.OUTPUT_FILE_TYPE, E.Executable_Name,
E.Execution_File_Name, A2.Basepath, DECODE(R.Stale, 'Y', 'C',
P.Execution_Method_Code), P.Print_Flag, P.Execution_Options,
DECODE(P.Srs_Flag, 'Y', 'Y', 'Q', 'Y', 'N'), P.Argument_Method_Code,
R.Print_Style, R.Argument_Input_Method_Code, R.Queue_Method_Code,
R.Responsibility_ID, R.Responsibility_Application_ID, R.Requested_By,
R.Number_Of_Copies, R.Save_Output_Flag, R.Printer, R.Print_Group,
R.Priority, U.User_Name, O.Oracle_Username,
O.Encrypted_Oracle_Password, R.Cd_Id, A.Basepath,
A.Application_Short_Name, TO_CHAR(R.Requested_Start_Date,'YYYY/MM/DD
HH24:MI:SS'), R.Nls_Language, R.Nls_Territory,
R.Nls_Numeric_Characters, DECODE(R.Parent_Request_ID, NULL, 0,
R.Parent_Request_ID), R.Priority_Request_ID, R.Single_Thread_Flag,
R.Has_Sub_Request, R.Is_Sub_Request, R.Req_Information,
R.Description, R.Resubmit_Time, TO_CHAR(R.Resubmit_Interval),
R.Resubmit_Interval_Type_Code, R.Resubmit_Interval_Unit_Code,
TO_CHAR(R.Resubmit_End_Date,'YYYY/MM/DD HH24:MI:SS'),
Decode(E.Execution_File_Name, NULL, 'N', Decode(E.Subroutine_Name,
NULL, Decode(E.Execution_Method_Code, 'I', 'Y', 'J', 'Y', 'N'),
'Y')), R.Argument1, R.Argument2, R.Argument3, R.Argument4,
R.Argument5, R.Argument6, R.Argument7, R.Argument8, R.Argument9,
R.Argument10, R.Argument11, R.Argument12, R.Argument13, R.Argument14,
R.Argument15, R.Argument16, R.Argument17, R.Argument18, R.Argument19,
R.Argument20, R.Argument21, R.Argument22, R.Argument23, R.Argument24,
R.Argument25, X.Argument26, X.Argument27, X.Argument28, X.Argument29,
X.Argument30, X.Argument31, X.Argument32, X.Argument33, X.Argument34,
X.Argument35, X.Argument36, X.Argument37, X.Argument38, X.Argument39,
X.Argument40, X.Argument41, X.Argument42, X.Argument43, X.Argument44,
X.Argument45, X.Argument46, X.Argument47, X.Argument48, X.Argument49,
X.Argument50, X.Argument51, X.Argument52, X.Argument53, X.Argument54,
X.Argument55, X.Argument56, X.Argument57, X.Argument58, X.Argument59,
X.Argument60, X.Argument61, X.Argument62, X.Argument63, X.Argument64,
X.Argument65, X.Argument66, X.Argument67, X.Argument68, X.Argument69,
X.Argument70, X.Argument71, X.Argument72, X.Argument73, X.Argument74,
X.Argument75, X.Argument76, X.Argument77, X.Argument78, X.Argument79,
X.Argument80, X.Argument81, X.Argument82, X.Argument83, X.Argument84,
X.Argument85, X.Argument86, X.Argument87, X.Argument88, X.Argument89,
X.Argument90, X.Argument91, X.Argument92, X.Argument93, X.Argument94,
X.Argument95, X.Argument96, X.Argument97, X.Argument98, X.Argument99,
X.Argument100, R.number_of_arguments, C.CD_Name,
NVL(R.Security_Group_ID, 0), NVL(R.org_id, 0) FROM
fnd_concurrent_requests R, fnd_concurrent_programs P, fnd_application
A, fnd_user U, fnd_oracle_userid O, fnd_conflicts_domain C,
fnd_concurrent_queues Q, fnd_application A2, fnd_executables E,
fnd_conc_request_arguments X WHERE R.Status_code = 'I' And
((R.OPS_INSTANCE is null) or (R.OPS_INSTANCE = -1) or
(R.OPS_INSTANCE =
decode(:dcp_on,1,FND_CONC_GLOBAL.OPS_INST_NUM,R.OPS_INSTANCE))) And
R.Request_ID = X.Request_ID(+) And R.Program_Application_Id =
P.Application_Id(+) And R.Concurrent_Program_Id =
P.Concurrent_Program_Id(+) And R.Program_Application_Id =
A.Application_Id(+) And P.Executable_Application_Id =
E.Application_Id(+) And P.Executable_Id =
E.Executable_Id(+) And P.Executable_Application_Id =
A2.Application_Id(+) And R.Requested_By = U.User_Id(+) And R.Cd_Id
= C.Cd_Id(+) And R.Oracle_Id = O.Oracle_Id(+) And Q.Application_Id =
:q_applid And Q.Concurrent_Queue_Id = :queue_id And (P.Enabled_Flag
is NULL OR P.Enabled_Flag = 'Y') And R.Hold_Flag = 'N' And
R.Requested_Start_Date <= Sysdate And ( R.Enforce_Seriality_Flag =
'N' OR ( C.RunAlone_Flag = P.Run_Alone_Flag And (P.Run_Alone_Flag =
'N' OR Not Exists (Select Null From Fnd_Concurrent_Requests Sr
Where Sr.Status_Code In ('R', 'T') And Sr.Enforce_Seriality_Flag =
'Y' And Sr.CD_id = C.CD_Id)))) And Q.Running_Processes <=
Q.Max_Processes And R.Rowid = :reqname And
((P.Execution_Method_Code != 'S' OR
(R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN
((0,98),(0,100),(0,31721),(0,31722),(0,31757))) AND
((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN
((510,40112),(510,40113),(510,41497),(510,41498),(530,41859),(530,418
60),(535,41492),(535,41493),(535,41494)))) FOR UPDATE OF
R.status_code NoWait
Rationale
SQL statement with SQL_ID "a49xsqhv0h31b" was executed 4686 times and
had an average elapsed time of 1.2 seconds.
Rationale
Waiting for event "buffer busy waits" in wait class "Concurrency"
accounted for 85% of the database time spent in processing the SQL
statement with SQL_ID "a49xsqhv0h31b".
Rationale
Waiting for event "log file switch (checkpoint incomplete)" in wait
class "Configuration" accounted for 9% of the database time spent in
processing the SQL statement with SQL_ID "a49xsqhv0h31b".
Recommendation 3: SQL Tuning
Estimated benefit is .56 active sessions, 10.91% of total activity.
Action
Investigate the SQL statement with SQL_ID "5d7957yktf3nn" for possible
performance improvements.
Related Object
SQL statement with SQL_ID 5d7957yktf3nn.
UPDATE ICX_SESSIONS SET TIME_OUT = :B2 WHERE SESSION_ID = :B1
Rationale
SQL statement with SQL_ID "5d7957yktf3nn" was executed 266 times and had
an average elapsed time of 7.6 seconds.
Rationale
Waiting for event "buffer busy waits" in wait class "Concurrency"
accounted for 86% of the database time spent in processing the SQL
statement with SQL_ID "5d7957yktf3nn".
Rationale
Waiting for event "log file switch (checkpoint incomplete)" in wait
class "Configuration" accounted for 7% of the database time spent in
processing the SQL statement with SQL_ID "5d7957yktf3nn".
Finding 2: Buffer Busy
Impact is 2.52 active sessions, 48.81% of total activity.
Read and write contention on database blocks was consuming significant
database time.
Recommendation 1: Application Analysis
Estimated benefit is 1.42 active sessions, 27.44% of total activity.
Action
Trace the cause of object contention due to SELECT statements in the
application using the information provided.
Related Object
Database object with ID 34562.
Rationale
The SELECT statement with SQL_ID "a49xsqhv0h31b" was significantly
affected by "buffer busy" waits.
Related Object
SQL statement with SQL_ID a49xsqhv0h31b.
SELECT R.Conc_Login_Id, R.Request_Id, R.Phase_Code, R.Status_Code,
P.Application_ID, P.Concurrent_Program_ID, P.Concurrent_Program_Name,
R.Enable_Trace, R.Restart, DECODE(R.Increment_Dates, 'Y', 'Y', 'N'),
R.NLS_Compliant, R.OUTPUT_FILE_TYPE, E.Executable_Name,
E.Execution_File_Name, A2.Basepath, DECODE(R.Stale, 'Y', 'C',
P.Execution_Method_Code), P.Print_Flag, P.Execution_Options,
DECODE(P.Srs_Flag, 'Y', 'Y', 'Q', 'Y', 'N'), P.Argument_Method_Code,
R.Print_Style, R.Argument_Input_Method_Code, R.Queue_Method_Code,
R.Responsibility_ID, R.Responsibility_Application_ID, R.Requested_By,
R.Number_Of_Copies, R.Save_Output_Flag, R.Printer, R.Print_Group,
R.Priority, U.User_Name, O.Oracle_Username,
O.Encrypted_Oracle_Password, R.Cd_Id, A.Basepath,
A.Application_Short_Name, TO_CHAR(R.Requested_Start_Date,'YYYY/MM/DD
HH24:MI:SS'), R.Nls_Language, R.Nls_Territory,
R.Nls_Numeric_Characters, DECODE(R.Parent_Request_ID, NULL, 0,
R.Parent_Request_ID), R.Priority_Request_ID, R.Single_Thread_Flag,
R.Has_Sub_Request, R.Is_Sub_Request, R.Req_Information,
R.Description, R.Resubmit_Time, TO_CHAR(R.Resubmit_Interval),
R.Resubmit_Interval_Type_Code, R.Resubmit_Interval_Unit_Code,
TO_CHAR(R.Resubmit_End_Date,'YYYY/MM/DD HH24:MI:SS'),
Decode(E.Execution_File_Name, NULL, 'N', Decode(E.Subroutine_Name,
NULL, Decode(E.Execution_Method_Code, 'I', 'Y', 'J', 'Y', 'N'),
'Y')), R.Argument1, R.Argument2, R.Argument3, R.Argument4,
R.Argument5, R.Argument6, R.Argument7, R.Argument8, R.Argument9,
R.Argument10, R.Argument11, R.Argument12, R.Argument13, R.Argument14,
R.Argument15, R.Argument16, R.Argument17, R.Argument18, R.Argument19,
R.Argument20, R.Argument21, R.Argument22, R.Argument23, R.Argument24,
R.Argument25, X.Argument26, X.Argument27, X.Argument28, X.Argument29,
X.Argument30, X.Argument31, X.Argument32, X.Argument33, X.Argument34,
X.Argument35, X.Argument36, X.Argument37, X.Argument38, X.Argument39,
X.Argument40, X.Argument41, X.Argument42, X.Argument43, X.Argument44,
X.Argument45, X.Argument46, X.Argument47, X.Argument48, X.Argument49,
X.Argument50, X.Argument51, X.Argument52, X.Argument53, X.Argument54,
X.Argument55, X.Argument56, X.Argument57, X.Argument58, X.Argument59,
X.Argument60, X.Argument61, X.Argument62, X.Argument63, X.Argument64,
X.Argument65, X.Argument66, X.Argument67, X.Argument68, X.Argument69,
X.Argument70, X.Argument71, X.Argument72, X.Argument73, X.Argument74,
X.Argument75, X.Argument76, X.Argument77, X.Argument78, X.Argument79,
X.Argument80, X.Argument81, X.Argument82, X.Argument83, X.Argument84,
X.Argument85, X.Argument86, X.Argument87, X.Argument88, X.Argument89,
X.Argument90, X.Argument91, X.Argument92, X.Argument93, X.Argument94,
X.Argument95, X.Argument96, X.Argument97, X.Argument98, X.Argument99,
X.Argument100, R.number_of_arguments, C.CD_Name,
NVL(R.Security_Group_ID, 0), NVL(R.org_id, 0) FROM
fnd_concurrent_requests R, fnd_concurrent_programs P, fnd_application
A, fnd_user U, fnd_oracle_userid O, fnd_conflicts_domain C,
fnd_concurrent_queues Q, fnd_application A2, fnd_executables E,
fnd_conc_request_arguments X WHERE R.Status_code = 'I' And
((R.OPS_INSTANCE is null) or (R.OPS_INSTANCE = -1) or
(R.OPS_INSTANCE =
decode(:dcp_on,1,FND_CONC_GLOBAL.OPS_INST_NUM,R.OPS_INSTANCE))) And
R.Request_ID = X.Request_ID(+) And R.Program_Application_Id =
P.Application_Id(+) And R.Concurrent_Program_Id =
P.Concurrent_Program_Id(+) And R.Program_Application_Id =
A.Application_Id(+) And P.Executable_Application_Id =
E.Application_Id(+) And P.Executable_Id =
E.Executable_Id(+) And P.Executable_Application_Id =
A2.Application_Id(+) And R.Requested_By = U.User_Id(+) And R.Cd_Id
= C.Cd_Id(+) And R.Oracle_Id = O.Oracle_Id(+) And Q.Application_Id =
:q_applid And Q.Concurrent_Queue_Id = :queue_id And (P.Enabled_Flag
is NULL OR P.Enabled_Flag = 'Y') And R.Hold_Flag = 'N' And
R.Requested_Start_Date <= Sysdate And ( R.Enforce_Seriality_Flag =
'N' OR ( C.RunAlone_Flag = P.Run_Alone_Flag And (P.Run_Alone_Flag =
'N' OR Not Exists (Select Null From Fnd_Concurrent_Requests Sr
Where Sr.Status_Code In ('R', 'T') And Sr.Enforce_Seriality_Flag =
'Y' And Sr.CD_id = C.CD_Id)))) And Q.Running_Processes <=
Q.Max_Processes And R.Rowid = :reqname And
((P.Execution_Method_Code != 'S' OR
(R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) IN
((0,98),(0,100),(0,31721),(0,31722),(0,31757))) AND
((R.PROGRAM_APPLICATION_ID,R.CONCURRENT_PROGRAM_ID) NOT IN
((510,40112),(510,40113),(510,41497),(510,41498),(530,41859),(530,418
60),(535,41492),(535,41493),(535,41494)))) FOR UPDATE OF
R.status_code NoWait
UPDATE ICX_SESSIONS SET LAST_CONNECT = SYSDATE WHERE SESSION_ID = :B1
Recommendation 1: Schema Changes
Estimated benefit is .03 active sessions, .62% of total activity.
Action
Consider rebuilding the TABLE "APPLSYS.FND_LOGIN_RESP_FORMS" with object
ID 34651 using a higher value for PCTFREE.
Related Object
Database object with ID 34651.
Rationale
The UPDATE statement with SQL_ID "cqc5crhxxt36t" was significantly
affected by "buffer busy" waits.
Related Object
SQL statement with SQL_ID cqc5crhxxt36t.
UPDATE FND_LOGIN_RESP_FORMS FLRF SET END_TIME = SYSDATE WHERE
FLRF.LOGIN_ID = :B2 AND FLRF.LOGIN_RESP_ID = :B1 AND FLRF.END_TIME IS
NULL AND (FLRF.FORM_ID, FLRF.FORM_APPL_ID) = (SELECT F.FORM_ID,
F.APPLICATION_ID FROM FND_FORM F, FND_APPLICATION A WHERE F.FORM_NAME
= :B4 AND F.APPLICATION_ID = A.APPLICATION_ID AND
A.APPLICATION_SHORT_NAME = :B3 )
Symptoms That Led to the Finding:
Wait class "Concurrency" was consuming significant database time.
Impact is 2.53 active sessions, 48.87% of total activity.
Finding 4: Log File Switches
Impact is .91 active sessions, 17.56% of total activity.
Log file switch operations were consuming significant database time while
waiting for checkpoint completion.
This problem can be caused by use of hot backup mode on tablespaces. DML to
tablespaces in hot backup mode causes generation of additional redo.
Recommendation 1: Database Configuration
Estimated benefit is .91 active sessions, 17.56% of total activity.
Action
Verify whether incremental shipping was used for standby databases.
Symptoms That Led to the Finding:
Wait class "Configuration" was consuming significant database time.
Impact is .91 active sessions, 17.63% of total activity.
Finding 5: Buffer Busy
Impact is .56 active sessions, 10.87% of total activity.
A hot data block with concurrent read and write activity was found. The block
belongs to segment "ICX.ICX_SESSIONS" and is block 243489 in file 36.
Recommendation 1: Application Analysis
Estimated benefit is .56 active sessions, 10.87% of total activity.
Action
Investigate application logic to find the cause of high concurrent read
and write activity to the data present in this block.
Related Object
Database block with object number 37562, file number 36 and block
number 243489.
Rationale
The SQL statement with SQL_ID "5d7957yktf3nn" spent significant time on
"buffer busy" waits for the hot block.
Related Object
SQL statement with SQL_ID 5d7957yktf3nn.
UPDATE ICX_SESSIONS SET TIME_OUT = :B2 WHERE SESSION_ID = :B1
Rationale
The SQL statement with SQL_ID "326up1aym56dd" spent significant time on
"buffer busy" waits for the hot block.
Related Object
SQL statement with SQL_ID 326up1aym56dd.
UPDATE ICX_SESSIONS SET LAST_CONNECT = SYSDATE WHERE SESSION_ID = :B1
Recommendation 2: Schema Changes
Estimated benefit is .56 active sessions, 10.87% of total activity.
Action
Consider rebuilding the TABLE "ICX.ICX_SESSIONS" with object ID 37562
using a higher value for PCTFREE.
Related Object
Database object with ID 37562.
Symptoms That Led to the Finding:
Wait class "Concurrency" was consuming significant database time.
Impact is 2.53 active sessions, 48.87% of total activity.
Finding 6: Undersized SGA
Impact is .38 active sessions, 7.37% of total activity.
The SGA was inadequately sized, causing additional I/O or hard parses.
The value of parameter "sga_target" was "4096 M" during the analysis period.
Recommendation 1: Database Configuration
Estimated benefit is .12 active sessions, 2.33% of total activity.
Action
Increase the size of the SGA by setting the parameter "sga_target" to
4608 M.
Symptoms That Led to the Finding:
Wait class "User I/O" was consuming significant database time.
Impact is .7 active sessions, 13.57% of total activity.
Hard parsing of SQL statements was consuming significant database time.
Impact is .13 active sessions, 2.51% of total activity.
Contention for latches related to the shared pool was consuming
significant database time.
Impact is 0 active sessions, .03% of total activity.
Wait class "Concurrency" was consuming significant database time.
Impact is 2.53 active sessions, 48.87% of total activity.
Finding 7: Commits and Rollbacks
Impact is .28 active sessions, 5.42% of total activity.
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
were consuming significant database time.
Recommendation 1: Host Configuration
Estimated benefit is .28 active sessions, 5.42% of total activity.
Action
Investigate the possibility of improving the performance of I/O to the
online redo log files.
Rationale
The average size of writes to the online redo log files was 163 K and
the average time per write was 68 milliseconds.
Symptoms That Led to the Finding:
Wait class "Commit" was consuming significant database time.
Impact is .28 active sessions, 5.42% of total activity.
Finding 8: Undo I/O
Impact is .18 active sessions, 3.53% of total activity.
Undo I/O was a significant portion (26%) of the total database I/O.
No recommendations are available.
Symptoms That Led to the Finding:
The throughput of the I/O subsystem was significantly lower than
expected.
Impact is .08 active sessions, 1.46% of total activity.
Wait class "User I/O" was consuming significant database time.
Impact is .7 active sessions, 13.57% of total activity.
Finding 9: CPU Usage
Impact is .13 active sessions, 2.57% of total activity.
Time spent on the CPU by the instance was responsible for a substantial part
of database time.
Recommendation 1: SQL Tuning
Estimated benefit is .13 active sessions, 2.57% of total activity.
Finding 10: Top SQL By I/O
Impact is .11 active sessions, 2.21% of total activity.
Individual SQL statements responsible for significant user I/O wait were
found.
Recommendation 1: SQL Tuning
Estimated benefit is .11 active sessions, 2.22% of total activity.
Action
Run SQL Tuning Advisor on the SQL statement with SQL_ID "b3pnc5yctv2z5".
Related Object
SQL statement with SQL_ID b3pnc5yctv2z5.
INSERT INTO ZX_TRANSACTION_LINES_GT( APPLICATION_ID ,ENTITY_CODE
,EVENT_CLASS_CODE ,TRX_ID ,TRX_LEVEL_TYPE ,TRX_LINE_ID ,LINE_CLASS
,LINE_LEVEL_ACTION ,TRX_LINE_TYPE ,TRX_LINE_DATE
,LINE_AMT_INCLUDES_TAX_FLAG ,LINE_AMT ,TRX_LINE_QUANTITY ,UNIT_PRICE
,PRODUCT_ID ,PRODUCT_ORG_ID ,UOM_CODE ,PRODUCT_CODE ,SHIP_TO_PARTY_ID
,SHIP_FROM_PARTY_ID ,BILL_TO_PARTY_ID ,BILL_FROM_PARTY_ID
,SHIP_FROM_PARTY_SITE_ID ,BILL_FROM_PARTY_SITE_ID
,SHIP_TO_LOCATION_ID ,SHIP_FROM_LOCATION_ID ,BILL_TO_LOCATION_ID
,SHIP_THIRD_PTY_ACCT_ID ,SHIP_THIRD_PTY_ACCT_SITE_ID ,HISTORICAL_FLAG
,TRX_LINE_CURRENCY_CODE ,TRX_LINE_CURRENCY_CONV_DATE
,TRX_LINE_CURRENCY_CONV_RATE ,TRX_LINE_CURRENCY_CONV_TYPE
,TRX_LINE_MAU ,TRX_LINE_PRECISION ,HISTORICAL_TAX_CODE_ID
,TRX_BUSINESS_CATEGORY ,PRODUCT_CATEGORY ,PRODUCT_FISC_CLASSIFICATION
,LINE_INTENDED_USE ,PRODUCT_TYPE ,USER_DEFINED_FISC_CLASS
,ASSESSABLE_VALUE ,INPUT_TAX_CLASSIFICATION_CODE ,ACCOUNT_CCID
,BILL_THIRD_PTY_ACCT_ID ,BILL_THIRD_PTY_ACCT_SITE_ID ,TRX_LINE_NUMBER
,TRX_LINE_DESCRIPTION ,PRODUCT_DESCRIPTION ,USER_UPD_DET_FACTORS_FLAG
,DEFAULTING_ATTRIBUTE1 ) SELECT :B4 ,:B3 ,:B2
,PRL.REQUISITION_HEADER_ID ,:B1 ,PRL.REQUISITION_LINE_ID ,'INVOICE'
,NVL(PRL.TAX_ATTRIBUTE_UPDATE_CODE,'UPDATE') ,'ITEM'
,NVL(PRL.NEED_BY_DATE, SYSDATE) ,'N' ,NVL(PRL.AMOUNT,
PRL.UNIT_PRICE*PRL.QUANTITY) ,PRL.QUANTITY ,PRL.UNIT_PRICE
,PRL.ITEM_ID ,(SELECT FSP.INVENTORY_ORGANIZATION_ID FROM
FINANCIALS_SYSTEM_PARAMS_ALL FSP WHERE FSP.ORG_ID=PRL.ORG_ID)
,(SELECT MUM.UOM_CODE FROM MTL_UNITS_OF_MEASURE MUM WHERE
MUM.UNIT_OF_MEASURE=PRL.UNIT_MEAS_LOOKUP_CODE) ,MSIB.SEGMENT1
,PRL.DESTINATION_ORGANIZATION_ID ,PV.PARTY_ID ,PRH.ORG_ID
,PV.PARTY_ID ,PVS.PARTY_SITE_ID ,PVS.PARTY_SITE_ID
,PRL.DELIVER_TO_LOCATION_ID ,(SELECT HZPS.LOCATION_ID FROM
HZ_PARTY_SITES HZPS WHERE HZPS.PARTY_SITE_ID = PVS.PARTY_SITE_ID)
,(SELECT LOCATION_ID FROM HR_ALL_ORGANIZATION_UNITS WHERE
ORGANIZATION_ID=PRH.ORG_ID) ,PRL.VENDOR_ID ,PRL.VENDOR_SITE_ID ,NULL
,NVL(PRL.CURRENCY_CODE, :B9 ) ,NVL2(PRL.CURRENCY_CODE, PRL.RATE_DATE,
SYSDATE) ,NVL2(PRL.CURRENCY_CODE, PRL.RATE, :B8 )
,NVL2(PRL.CURRENCY_CODE, PRL.RATE_TYPE, :B7 )
,FC.MINIMUM_ACCOUNTABLE_UNIT ,NVL(FC.PRECISION, 2) ,NULL
,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.TRX_BUSINESS_CATEGORY, NULL),
NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_CATEGORY, NULL), NULL )
,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_FISC_CLASSIFICATION,
NULL), NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.LINE_INTENDED_USE, NULL), NULL )
,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.PRODUCT_TYPE, NULL), NULL )
,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.USER_DEFINED_FISC_CLASS, NULL),
NULL ) ,DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.ASSESSABLE_VALUE, NULL), NULL )
,DECODE(:B6 , 'REQIMPORT', PRL.TAX_NAME,
DECODE(PRL.TAX_ATTRIBUTE_UPDATE_CODE, 'CREATE',
NVL2(PRL.PARENT_REQ_LINE_ID, ZXLDET.INPUT_TAX_CLASSIFICATION_CODE,
NULL), NULL ) ) ,NVL((SELECT PRD.CODE_COMBINATION_ID FROM
PO_REQ_DISTRIBUTIONS_ALL PRD WHERE PRD.REQUISITION_LINE_ID =
PRL.REQUISITION_LINE_ID AND ROWNUM = 1), MSIB.EXPENSE_ACCOUNT )
,PV.VENDOR_ID ,PVS.VENDOR_SITE_ID ,PRL.LINE_NUM ,PRL.ITEM_DESCRIPTION
,PRL.ITEM_DESCRIPTION ,(SELECT 'Y' FROM DUAL WHERE :B6 = 'REQIMPORT'
AND PRL.TAX_NAME IS NOT NULL) ,PRL.DESTINATION_ORGANIZATION_ID FROM
PO_REQUISITION_HEADERS_ALL PRH, PO_REQUISITION_LINES_ALL PRL,
ZX_LINES_DET_FACTORS ZXLDET, PO_VENDORS PV, PO_VENDOR_SITES_ALL PVS,
MTL_SYSTEM_ITEMS_B MSIB, FND_CURRENCIES FC WHERE
PRH.REQUISITION_HEADER_ID = :B5 AND PRH.REQUISITION_HEADER_ID =
PRL.REQUISITION_HEADER_ID AND ZXLDET.APPLICATION_ID(+) = :B4 AND
ZXLDET.ENTITY_CODE(+) = :B3 AND ZXLDET.EVENT_CLASS_CODE(+) = :B2 AND
ZXLDET.TRX_LEVEL_TYPE(+) = :B1 AND ZXLDET.TRX_LINE_ID(+) =
PRL.PARENT_REQ_LINE_ID AND PV.VENDOR_ID(+) = PRL.VENDOR_ID AND
PVS.VENDOR_SITE_ID(+) = PRL.VENDOR_SITE_ID AND
MSIB.INVENTORY_ITEM_ID(+) = PRL.ITEM_ID AND MSIB.ORGANIZATION_ID(+) =
PRL.ORG_ID AND FC.CURRENCY_CODE(+) = PRL.CURRENCY_CODE AND
NVL(PRL.MODIFIED_BY_AGENT_FLAG, 'N') = 'N' AND NVL(PRL.CANCEL_FLAG,
'N') = 'N' AND NVL(PRL.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' AND
PRL.LINE_LOCATION_ID IS NULL AND PRL.AT_SOURCING_FLAG IS NULL
Rationale
SQL statement with SQL_ID "b3pnc5yctv2z5" was executed 3 times and had
an average elapsed time of 138 seconds.
Rationale
Average time spent in User I/O wait events per execution was 137
seconds.
Symptoms That Led to the Finding:
Wait class "User I/O" was consuming significant database time.
Impact is .7 active sessions, 13.57% of total activity.
Additional Information
Miscellaneous Information
Wait class "Application" was not consuming significant database time.
Wait class "Network" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
The database's maintenance windows were active during 100% of the analysis
period.
Regards
Athish

"Few days are am facing serious of performance problem in our Production instance"
For production issues, please log a SR.
Was this working before? If yes, any changes been done recently?
Do you have the statistics collected up to date?
Please see these docs.
AutoInvoice Performance Issue When Processing Tax [ID 1059275.1]
R12 : System Hangs When Attempting To Save Blanket Release After Applying Patch 11817843 [ID 1333336.1]
Thanks,
Hussein
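For readers landing on this thread later: the two concrete schema/configuration recommendations in the ADDM output above (rebuilding the hot tables with a higher PCTFREE, and growing the SGA to 4608 M) would normally be acted on with statements along these lines. This is only a sketch: the PCTFREE value is illustrative, a table MOVE leaves the table's indexes UNUSABLE until they are rebuilt, and the sga_target change only takes effect online if sga_max_size already allows it; test in a non-production instance first.

```sql
-- Leave more free space per block to reduce buffer busy contention
-- (ADDM flagged ICX.ICX_SESSIONS and APPLSYS.FND_LOGIN_RESP_FORMS)
ALTER TABLE icx.icx_sessions MOVE PCTFREE 30;
-- MOVE marks the table's indexes UNUSABLE; rebuild each of them:
-- ALTER INDEX icx.<index_name> REBUILD;

-- Grow the SGA to the size ADDM suggested
ALTER SYSTEM SET sga_target = 4608M SCOPE = BOTH;
```

For the log file switch and log file sync findings, the usual direction is larger and/or additional online redo log groups on faster storage rather than a parameter change.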
Maybe you are looking for
-
Two Accounts. No updates for 90 days. How to Phase out one account?
I now live in Australia. I have always had an iPhone. And when I moved, I had to create an Australian account to get access to my ANZ banking app here. Now I have two accounts that I have purchased apps under. These two accounts coexsisted with minor
-
Extended Characters not showing up in DM Studio text.
Hello, For some reason when I'm entering text into DM Studio (text area, label, etc) and my text contains a special character (® or ©, for instance), it shows up as a small box, indicating it can't represent that character. It used to show up just fi
-
Hi, We have a workflow to be triggered for Record Add and Record Update operations. It has steps: Start, Assignment 1, Assignment 2, Syndicate and Stop. But when any record is added or Updated, its not Syndicating to Port (Outbound Port is defined in
-
Error while installing Jdev 11.1.1.16 on Mac -"There is only 1 MB available
Hello, I am trying to install Jdev 11g on my macbook and every time I try getting below error. Insufficient disk space ! installer requires:- 832 MB for middleware home at /users/Kumar/Oracle/middleware, There is only 1 MB available at /Users/Kumar/O
-
How can I update Camera raw for Photoshop CS5 to access images from Canon Rebel T4i?
How can I update Camera Raw for Photoshop CS5 to access images from Canon Rebel T4i? The updated version of Camera Raw 7 says it only works with CS6. Outside of buying a new Photoshop, is there anything I can do?