Logic taking too much time to execute
Dear Friends,
My internal table i_bkpf has approx. 300,000 (3 lakh) records, and another table i_del has 200,000 (2 lakh) records. I have to delete these 2 lakh records of i_del from i_bkpf.
I wrote the following piece of code for this, but it takes a very long time and ultimately I encounter a timeout.
Kindly have a look at the following code and suggest how I can improve it.
sort i_bkpf by belnr.
sort i_del by belnr.
loop at i_del.
DELETE I_BKPF WHERE BELNR = I_DEL-BELNR AND
GJAHR = I_DEL-GJAHR AND
BUKRS = I_DEL-BUKRS.
ENDLOOP.
Regards,
Alok.
Hi
Do it like this:
sort i_bkpf by bukrs belnr gjahr.
sort i_del by bukrs belnr gjahr.
loop at i_del.
read table I_BKPF With key BUKRS = I_DEL-BUKRS
BELNR = I_DEL-BELNR
GJAHR = I_DEL-GJAHR
binary search.
if sy-subrc = 0.
delete i_bkpf index sy-tabix.
endif.
ENDLOOP.
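For readers outside ABAP: the gain comes from the READ ... BINARY SEARCH locating each row in O(log n), instead of DELETE ... WHERE scanning all of i_bkpf for every i_del entry. A minimal sketch of the same idea in Python (table and field names mirror the ABAP, purely for illustration):

```python
import bisect

def delete_matching(i_bkpf, i_del):
    """Remove from i_bkpf each row whose (bukrs, belnr, gjahr) key occurs
    in i_del, locating it by binary search on a sorted key list."""
    key = lambda r: (r["bukrs"], r["belnr"], r["gjahr"])
    i_bkpf.sort(key=key)                    # like: sort i_bkpf by bukrs belnr gjahr
    keys = [key(r) for r in i_bkpf]         # parallel list of sort keys
    for d in i_del:
        idx = bisect.bisect_left(keys, key(d))       # the binary search
        if idx < len(keys) and keys[idx] == key(d):  # sy-subrc = 0
            del i_bkpf[idx]                          # delete i_bkpf index sy-tabix
            del keys[idx]
    return i_bkpf

i_bkpf = [{"bukrs": "1000", "belnr": "0002", "gjahr": "2007"},
          {"bukrs": "1000", "belnr": "0001", "gjahr": "2007"}]
i_del  = [{"bukrs": "1000", "belnr": "0001", "gjahr": "2007"}]
i_bkpf = delete_matching(i_bkpf, i_del)
```

With 300,000 rows, this turns roughly 300,000 × 200,000 comparisons into about 200,000 × log2(300,000) ≈ 3.6 million.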
Reward points for useful answers.
Regards
Anji
Similar Messages
-
Owb job taking too much time to execute
While creating a job in OWB, I am using three tables, a joiner and an aggregator, which are all joined through another joiner to load into the final table. The output is correct, but the generated SQL query is very complex, with many sub-queries, so it takes a long time to execute. Please help me in reducing the cost.
- KC

It depends on what kind of code it generates at each stage. The first step would be to collect stats for all the tables used and check the SQL generated using EXPLAIN PLAN. See which sub-query or inline view creates the most cost.
Generate SQL at various stages and see if you can achieve the same with a different operator.
The other option would be passing HINTS to the tables selected.
- K -
Comma Separated Value Taking too Much Time to Execute
Hi,
select ES_DIAGNOSIS_CODE DC from ecg_study WHERE PS_PROTOCOL_ID LIKE 'H6L-MC-LFAN'
The above query returns a comma-separated value.
I am using the query below to split the comma-separated value, but it is taking a lot of time to return the data.
select DC from (
  with t as ( select ES_DIAGNOSIS_CODE DC from ecg_study WHERE PS_PROTOCOL_ID LIKE 'H6L-MC-LFAN' )
  select REGEXP_SUBSTR (DC, '[^,]+', 1, level) DC from t
  connect by level <= length(regexp_replace(DC,'[^,]*'))+1 )
Please suggest an alternative way to split this comma-separated value.
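For what the inner query computes: each token is a maximal run of non-comma characters, and `length(regexp_replace(DC,'[^,]*'))+1` counts the commas left after stripping the non-comma runs, i.e. the element count. The same logic in Python (the diagnosis codes below are made up for illustration):

```python
import re

def split_csv_field(dc):
    """Mirror REGEXP_SUBSTR(DC, '[^,]+', 1, level): every maximal
    run of non-comma characters is one token."""
    return re.findall(r"[^,]+", dc)

def element_count(dc):
    """Mirror length(regexp_replace(DC, '[^,]*')) + 1: strip the
    non-comma runs, count the commas that remain, add one."""
    return len(re.sub(r"[^,]*", "", dc)) + 1

codes = "E11.9,I10,J45"   # hypothetical ES_DIAGNOSIS_CODE value
tokens = split_csv_field(codes)
```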
Thanks
Sudhir

Nikolay Savvinov wrote:
Hi BluShadow,
I know that this function is fast with varchar2 strings from several years of using it. With CLOBs one may need something faster, but the OP didn't mention CLOBs.
Best regards,
Nikolay

Just because you perceive it to be fast doesn't mean it's faster than doing it in SQL alone.
For starters you are context switching from the SQL engine to PL/SQL to call it.
Then in your code you are doing this...
select substr(v_str,v_last_break+1, decode(v_nxt_break,0,v_length, v_nxt_break-v_last_break-1)) into v_result from dual;
which is context switching back from the PL/SQL engine to the SQL engine for each entry in the string.
Why people do that I don't know... when PL/SQL alone could do it without a context switch e.g.
v_result := substr(v_str,v_last_break+1, case when v_nxt_break = 0 then v_length else v_nxt_break-v_last_break-1 end);
So, if you still think it's faster than pure SQL (which is what the OP is using), please go ahead and prove it to us. -
My ALV Report is taking too much time to execute
Hi Friends,
My ALV report is taking a long time to execute (more than 1.5 hours). Please suggest the changes to be made to improve the performance. It is very urgent; please respond as soon as possible.
Thanks & Regards,
Sunil Maurya
Report is as follows :
REPORT YSEG_PROFIT.
TABLES : ZSEGMENT, coep.
TYPE-POOLS: slis.
DATA : BEGIN OF I_COEP OCCURS 0,
BELNR LIKE COEP-BELNR,
BUZEI LIKE COEP-BUZEI,
PERIO LIKE COEP-PERIO,
WOGBTR LIKE COEP-WOGBTR,
OBJNR LIKE COEP-OBJNR,
KSTAR LIKE COEP-KSTAR,
PAOBJNR LIKE COEP-PAOBJNR,
KVGR5 LIKE ZSEGMENT-KVGR5,
KAUFN LIKE CE4KBL1_ACCT-KAUFN,
END OF I_COEP.
DATA : BEGIN OF I_SECTOR OCCURS 0,
KVGR5 LIKE ZSEGMENT-KVGR5,
END OF I_SECTOR.
DATA : BEGIN OF I_AUFK OCCURS 0,
OBJNR LIKE AUFK-OBJNR,
PSPEL LIKE AUFK-PSPEL,
KDAUF LIKE AUFK-KDAUF,
KDPOS LIKE AUFK-KDPOS,
END OF I_AUFK.
DATA : BEGIN OF I_VBAKP OCCURS 0,
OBJNR LIKE VBAP-OBJNR,
KVGR5 LIKE VBAK-VBELN,
END OF I_VBAKP.
DATA : BEGIN OF I_PRPS OCCURS 0,
OBJNR LIKE PRPS-OBJNR,
PSPHI LIKE PRPS-PSPHI,
ASTNR LIKE PROJ-ASTNR,
END OF I_PRPS.
DATA : BEGIN OF I_OUTPUT OCCURS 0,
KSTAR LIKE COEP-KSTAR,
MCTXT LIKE CSKU-MCTXT,
S01 LIKE COEP-WOGBTR,
S02 LIKE COEP-WOGBTR,
S03 LIKE COEP-WOGBTR,
S04 LIKE COEP-WOGBTR,
S05 LIKE COEP-WOGBTR,
S06 LIKE COEP-WOGBTR,
S07 LIKE COEP-WOGBTR,
S08 LIKE COEP-WOGBTR,
S09 LIKE COEP-WOGBTR,
OTH like COEP-WOGBTR,
TOTAL LIKE COEP-WOGBTR,
END OF I_OUTPUT.
DATA : BEGIN OF I_AFVC OCCURS 0,
OBJNR LIKE AFVC-OBJNR,
PROJN LIKE AFVC-PROJN,
PROJ LIKE PROJ-PSPID,
PSPNR LIKE PROJ-PSPNR,
END OF I_AFVC.
DATA : BEGIN OF I_PROJ OCCURS 0,
PSPNR LIKE PROJ-PSPNR,
ASTNR LIKE PROJ-ASTNR,
END OF I_PROJ.
DATA : I_NP LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
DATA : I_NV LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
DATA : I_DETAIL LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
DATA : I_WB LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
DATA : I_PR LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
data : t_fieldcat_sum_rep TYPE slis_t_fieldcat_alv.
data : t_fieldcat_det_rep TYPE slis_t_fieldcat_alv.
DATA: k_fieldcat TYPE slis_fieldcat_alv.
* Declaration by Sunil Maurya for sorting
DATA : GT_SORT TYPE SLIS_T_SORTINFO_ALV,
GS_SORT TYPE SLIS_SORTINFO_ALV.
DATA : GT_SORT1 TYPE SLIS_T_SORTINFO_ALV,
GS_SORT1 TYPE SLIS_SORTINFO_ALV.
*data : it_sortcat type slis_t_sortinfo_alv.
*DATA : k_sortcat like line of it_sortcat.
**data : wa_sort like line of it_sortcat.
* Declaration by Sunil Maurya for sorting
constants : c_user_command TYPE char30 VALUE 'USER_COMMAND'.
*Selection screen
SELECTION-SCREEN BEGIN OF BLOCK A WITH FRAME TITLE TEXT-001.
PARAMETER : P_PERIO1 LIKE COEP-PERIO obligatory,
P_PERIO2 LIKE COEP-PERIO MODIF ID D1,
P_GJAHR LIKE COEP-GJAHR obligatory.
select-options : P_KSTAR for COEP-KSTAR,
P_GSBER FOR COEP-GSBER.
SELECT-OPTIONS : S_KVGR5 FOR ZSEGMENT-KVGR5 MODIF ID D1.
SELECTION-SCREEN END OF BLOCK A.
INITIALIZATION.
S_KVGR5-OPTION = 'BT' .
S_KVGR5-LOW = 'S01'.
S_KVGR5-HIGH = 'S09'.
APPEND S_KVGR5.
AT SELECTION-SCREEN OUTPUT.
LOOP AT SCREEN.
IF SCREEN-GROUP1 = 'D1'.
SCREEN-INPUT = '0'.
MODIFY SCREEN.
ENDIF.
ENDLOOP.
START-OF-SELECTION.
PERFORM GET_SECTORS.
PERFORM GET_DATA_COEP.
PERFORM VALIDATE_SECTOR.
PERFORM GROUP_OUTPUT.
PERFORM BUILD_SORTCAT. " Inserted by by Sunil Maurya for sorting
PERFORM BUILD_CATLOG.
PERFORM DISPLAY_OUTPUT.
*& Form GET_DATA_COEP
*       text
*  -->  p1        text
*  <--  p2        text
FORM GET_DATA_COEP .
SELECT BELNR BUZEI PERIO WOGBTR OBJNR KSTAR PAOBJNR
FROM COEP INTO TABLE I_COEP WHERE
KOKRS = 'KBL' AND
PERIO >= P_PERIO1 AND
PERIO <= P_PERIO2 AND
PERIO = P_PERIO1 AND
GJAHR = P_GJAHR AND
KSTAR NE '' AND
OBJNR NE '' AND
KSTAR in P_KSTAR AND
GSBER IN P_GSBER. "AND
*      BELNR = '0103991827' .
SORT I_COEP BY OBJNR.
DELETE I_COEP WHERE OBJNR+0(2) <> 'VB' AND
OBJNR+0(2) <> 'PR' AND
OBJNR+0(2) <> 'NV' AND
OBJNR+0(2) <> 'NP' AND
OBJNR+0(2) <> 'AO'.
SORT I_COEP BY KSTAR.
DELETE I_COEP WHERE KSTAR+0(5) <> '00003' AND
KSTAR+0(5) <> '00004'.
LOOP AT I_COEP WHERE OBJNR+0(2) = 'NP'.
MOVE I_COEP TO I_NP.
APPEND I_NP.
CLEAR : I_NP, I_COEP.
ENDLOOP.
LOOP AT I_COEP WHERE OBJNR+0(2) = 'NV'.
MOVE I_COEP TO I_NV.
APPEND I_NV.
CLEAR : I_NV, I_COEP.
ENDLOOP.
LOOP AT I_COEP WHERE OBJNR+0(2) = 'VB' .
MOVE I_COEP TO I_WB.
APPEND I_WB.
CLEAR : I_WB, I_COEP.
ENDLOOP.
LOOP AT I_COEP WHERE OBJNR+0(2) = 'PR' .
MOVE I_COEP TO I_PR.
APPEND I_PR.
CLEAR : I_PR, I_COEP.
ENDLOOP.
*Inserted by Sunil Maurya for PAOBJNR = "AO....."
Data : ind type sy-tabix.
loop at i_coep where OBJNR+0(2) = 'AO'.
ind = sy-tabix.
select single KAUFN into i_coep-KAUFN from CE4KBL1_ACCT where PAOBJNR =
i_coep-PAOBJNR.
select single KVGR5 into i_coep-kvgr5 from vbak where vbeln =
i_coep-kaufn.
modify i_coep index ind.
clear i_coep.
endloop.
*Inserted by Sunil Maurya for PAOBJNR = "AO....."
*LOOP AT I_COEP WHERE OBJNR+0(2) = 'AO' .
*MOVE I_COEP TO I_AO.
*APPEND I_AO.
*CLEAR : I_AO, I_COEP.
*ENDLOOP.
ENDFORM. " GET_DATA_COEP
*& Form GET_SECTORS
*       text
*  -->  p1        text
*  <--  p2        text
FORM GET_SECTORS .
DATA : L_FR TYPE I,
L_TO TYPE I.
DATA : L_CH1(1).
LOOP AT S_KVGR5.
IF S_KVGR5-OPTION = 'EQ'.
I_SECTOR-KVGR5 = S_KVGR5-LOW.
APPEND I_SECTOR.
CLEAR : I_SECTOR.
CONCATENATE '5' S_KVGR5-LOW+1(2) INTO I_SECTOR-KVGR5.
APPEND I_SECTOR.
CLEAR : I_SECTOR.
ENDIF.
IF S_KVGR5-OPTION = 'BT'.
L_FR = S_KVGR5-LOW+1(2).
L_TO = S_KVGR5-HIGH+1(2).
WHILE L_FR <= L_TO.
L_CH1 = L_FR.
CONCATENATE 'S0' L_CH1 INTO I_SECTOR-KVGR5.
APPEND I_SECTOR.
CONCATENATE '50' L_CH1 INTO I_SECTOR-KVGR5.
APPEND I_SECTOR.
CLEAR : I_SECTOR, L_CH1.
L_FR = L_FR + 1.
ENDWHILE.
ENDIF.
ENDLOOP.
ENDFORM. " GET_SECTORS
*& Form VALIDATE_SECTOR
*       text
*  -->  p1        text
*  <--  p2        text
FORM VALIDATE_SECTOR .
*get data from AUFK for NP & NV type intab
IF I_NP[] IS NOT INITIAL.
SELECT OBJNR PSPEL KDAUF KDPOS FROM AUFK
INTO TABLE I_AUFK
FOR ALL ENTRIES IN I_NP
WHERE OBJNR = I_NP-OBJNR.
*Push this data in I_WB where order no exist in AUFK
LOOP AT I_AUFK WHERE KDAUF NE ''.
I_WB-OBJNR = I_AUFK-OBJNR.
APPEND I_WB.
CLEAR : I_AUFK, I_WB.
ENDLOOP.
*Push this data in I_PR where order no exist in AUFK
LOOP AT I_AUFK WHERE PSPEL NE ''.
I_PR-OBJNR = I_AUFK-OBJNR.
APPEND I_PR.
CLEAR : I_AUFK, I_PR.
ENDLOOP.
ENDIF.
SELECT B~OBJNR A~KVGR5 FROM VBAK AS A INNER JOIN VBAP AS B
ON A~VBELN = B~VBELN
INTO TABLE I_VBAKP
FOR ALL ENTRIES IN I_WB
WHERE B~VBELN = I_WB-OBJNR+2(10).
SORT I_VBAKP BY OBJNR.
SORT I_COEP BY OBJNR.
LOOP AT I_WB.
READ TABLE I_VBAKP WITH KEY OBJNR = I_WB-OBJNR.
IF SY-SUBRC = 0.
READ TABLE I_SECTOR WITH KEY KVGR5 = I_VBAKP-KVGR5.
IF SY-SUBRC <> 0.
DELETE I_COEP WHERE OBJNR = I_WB-OBJNR.
loop at i_coep where objnr = i_wb-objnr.
i_coep-kvgr5 = 'OTH'.
modify i_coep.
clear : i_coep.
endloop.
ELSE.
READ TABLE I_COEP WITH KEY BELNR = I_WB-BELNR BUZEI = I_WB-BUZEI.
READ TABLE I_COEP WITH KEY OBJNR = I_WB-OBJNR.
IF SY-SUBRC = 0.
I_COEP-KVGR5 = I_SECTOR-KVGR5.
MODIFY I_COEP INDEX SY-TABIX.
LOOP AT I_COEP WHERE OBJNR = I_WB-OBJNR.
I_COEP-KVGR5 = I_SECTOR-KVGR5.
MODIFY I_COEP.
CLEAR : I_COEP.
ENDLOOP.
ENDIF.
ENDIF.
ELSE.
DELETE I_COEP WHERE OBJNR = I_WB-OBJNR.
loop at i_coep where objnr = i_wb-objnr.
i_coep-kvgr5 = 'OTH'.
modify i_coep.
clear : i_coep.
endloop.
ENDIF.
CLEAR : I_VBAKP, I_SECTOR, I_COEP.
ENDLOOP.
IF I_PR[] IS NOT INITIAL.
SELECT A~OBJNR A~PSPHI B~ASTNR
FROM PRPS AS A INNER JOIN PROJ AS B
ON A~PSPHI = B~PSPNR
INTO TABLE I_PRPS
FOR ALL ENTRIES IN I_PR
WHERE A~OBJNR = I_PR-OBJNR.
ENDIF.
IF I_NV[] IS NOT INITIAL.
SELECT OBJNR PROJN FROM AFVC INTO TABLE I_AFVC
FOR ALL ENTRIES IN I_NV
WHERE OBJNR = I_NV-OBJNR AND
PROJN <> '' .
LOOP AT I_AFVC.
CALL FUNCTION 'CONVERSION_EXIT_ABPSP_OUTPUT'
EXPORTING
INPUT = I_AFVC-PROJN
IMPORTING
OUTPUT = I_AFVC-PROJ.
I_AFVC-PROJ = I_AFVC-PROJ+0(9).
CALL FUNCTION 'CONVERSION_EXIT_KONPD_INPUT'
EXPORTING
INPUT = I_AFVC-PROJ
IMPORTING
OUTPUT = I_AFVC-PSPNR.
MODIFY I_AFVC.
CLEAR : I_AFVC.
ENDLOOP.
SELECT PSPNR ASTNR FROM PROJ INTO TABLE I_PROJ
FOR ALL ENTRIES IN I_AFVC
WHERE PSPNR = I_AFVC-PSPNR.
LOOP AT I_NV.
I_PRPS-OBJNR = I_NV-OBJNR.
READ TABLE I_AFVC WITH KEY OBJNR = I_NV-OBJNR.
IF SY-SUBRC = 0.
READ TABLE I_PROJ WITH KEY PSPNR = I_AFVC-PSPNR.
IF SY-SUBRC = 0.
I_PRPS-ASTNR = I_PROJ-ASTNR.
ENDIF.
ENDIF.
APPEND I_PRPS.
I_PR-OBJNR = I_NV-OBJNR.
APPEND I_PR.
CLEAR : I_NV, I_AFVC, I_PROJ, I_PR.
ENDLOOP.
ENDIF.
SORT I_PRPS BY OBJNR.
LOOP AT I_PR.
READ TABLE I_PRPS WITH KEY OBJNR = I_PR-OBJNR.
IF SY-SUBRC = 0.
READ TABLE I_SECTOR WITH KEY KVGR5 = I_PRPS-ASTNR+5(3).
IF SY-SUBRC <> 0.
DELETE I_COEP WHERE OBJNR = I_PR-OBJNR.
loop at i_coep where objnr = i_pr-objnr.
i_coep-kvgr5 = 'OTH'.
modify i_coep.
clear : i_coep.
endloop.
ELSE.
READ TABLE I_COEP WITH KEY OBJNR = I_PR-OBJNR.
IF SY-SUBRC = 0.
CONCATENATE 'S' I_SECTOR-KVGR5+1(2) INTO I_COEP-KVGR5.
MODIFY I_COEP INDEX SY-TABIX.
LOOP AT I_COEP WHERE OBJNR = I_PR-OBJNR.
CONCATENATE 'S' I_SECTOR-KVGR5+1(2) INTO I_COEP-KVGR5.
MODIFY I_COEP.
CLEAR : I_COEP.
ENDLOOP.
ENDIF.
ENDIF.
ELSE.
DELETE I_COEP WHERE OBJNR = I_PR-OBJNR.
loop at i_coep where objnr = i_pr-objnr.
i_coep-kvgr5 = 'OTH'.
modify i_coep.
clear : i_coep.
endloop.
ENDIF.
CLEAR : I_PR, I_PRPS, I_SECTOR, I_COEP.
ENDLOOP.
ENDFORM. " VALIDATE_SECTOR
*& Form GROUP_OUTPUT
*       text
*  -->  p1        text
*  <--  p2        text
FORM GROUP_OUTPUT .
LOOP AT I_COEP.
I_OUTPUT-KSTAR = I_COEP-KSTAR.
IF I_COEP-KVGR5 = 'S01'.
I_OUTPUT-S01 = I_COEP-WOGBTR.
ENDIF.
IF I_COEP-KVGR5 = 'S02'.
I_OUTPUT-S02 = I_COEP-WOGBTR.
ENDIF.
IF I_COEP-KVGR5 = 'S03'.
I_OUTPUT-S03 = I_COEP-WOGBTR.
ENDIF.
IF I_COEP-KVGR5 = 'S04'.
I_OUTPUT-S04 = I_COEP-WOGBTR.
ENDIF.
IF I_COEP-KVGR5 = 'S05'.
I_OUTPUT-S05 = I_COEP-WOGBTR.
ENDIF.
IF I_COEP-KVGR5 = 'S06'.
I_OUTPUT-S06 = I_COEP-WOGBTR.
ENDIF.
IF I_COEP-KVGR5 = 'S07'.
I_OUTPUT-S07 = I_COEP-WOGBTR.
ENDIF.
IF I_COEP-KVGR5 = 'S08'.
I_OUTPUT-S08 = I_COEP-WOGBTR.
ENDIF.
IF I_COEP-KVGR5 = 'S09'.
I_OUTPUT-S09 = I_COEP-WOGBTR.
ENDIF.
IF I_COEP-KVGR5 = 'OTH' OR I_COEP-KVGR5 = ''.
I_OUTPUT-OTH = I_COEP-WOGBTR.
ENDIF.
COLLECT I_OUTPUT.
CLEAR : I_COEP, I_OUTPUT.
ENDLOOP.
LOOP AT I_OUTPUT.
SELECT SINGLE MCTXT FROM CSKU
INTO I_OUTPUT-MCTXT
WHERE KTOPL = 'KBL' AND
SPRAS = SY-LANGU AND
KSTAR = I_OUTPUT-KSTAR.
I_OUTPUT-TOTAL = I_OUTPUT-S01 + I_OUTPUT-S02 + I_OUTPUT-S03
+ I_OUTPUT-S04 + I_OUTPUT-S05 + I_OUTPUT-S06
+ I_OUTPUT-S07 + I_OUTPUT-S08 + I_OUTPUT-S09
+ I_OUTPUT-OTH.
MODIFY I_OUTPUT.
CLEAR : I_OUTPUT.
ENDLOOP.
ENDFORM. " GROUP_OUTPUT
*& Form BUILD_CATLOG
*       text
*  -->  p1        text
*  <--  p2        text
FORM BUILD_CATLOG .
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'KSTAR'.
k_fieldcat-seltext_l = text-002.
k_fieldcat-hotspot = 'X'.
APPEND k_fieldcat TO t_fieldcat_sum_rep.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'MCTXT'.
k_fieldcat-seltext_l = text-003.
k_fieldcat-hotspot = 'X'.
APPEND k_fieldcat TO t_fieldcat_sum_rep.
READ TABLE I_SECTOR WITH KEY KVGR5 = 'S01'.
IF SY-SUBRC = 0.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'S01'.
k_fieldcat-seltext_l = text-004.
k_fieldcat-hotspot = 'X'.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
ENDIF.
READ TABLE I_SECTOR WITH KEY KVGR5 = 'S02'.
IF SY-SUBRC = 0.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'S02'.
k_fieldcat-seltext_l = text-005.
k_fieldcat-hotspot = 'X'.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
ENDIF.
READ TABLE I_SECTOR WITH KEY KVGR5 = 'S03'.
IF SY-SUBRC = 0.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'S03'.
k_fieldcat-seltext_l = text-006.
k_fieldcat-hotspot = 'X'.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
ENDIF.
READ TABLE I_SECTOR WITH KEY KVGR5 = 'S04'.
IF SY-SUBRC = 0.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'S04'.
k_fieldcat-seltext_l = text-007.
k_fieldcat-hotspot = 'X'.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
ENDIF.
READ TABLE I_SECTOR WITH KEY KVGR5 = 'S05'.
IF SY-SUBRC = 0.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'S05'.
k_fieldcat-seltext_l = text-008.
k_fieldcat-hotspot = 'X'.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
ENDIF.
READ TABLE I_SECTOR WITH KEY KVGR5 = 'S06'.
IF SY-SUBRC = 0.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'S06'.
k_fieldcat-seltext_l = text-009.
k_fieldcat-hotspot = 'X'.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
ENDIF.
READ TABLE I_SECTOR WITH KEY KVGR5 = 'S07'.
IF SY-SUBRC = 0.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'S07'.
k_fieldcat-seltext_l = text-010.
k_fieldcat-hotspot = 'X'.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
ENDIF.
READ TABLE I_SECTOR WITH KEY KVGR5 = 'S08'.
IF SY-SUBRC = 0.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'S08'.
k_fieldcat-seltext_l = text-011.
k_fieldcat-hotspot = 'X'.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
ENDIF.
READ TABLE I_SECTOR WITH KEY KVGR5 = 'S09'.
IF SY-SUBRC = 0.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'S09'.
k_fieldcat-seltext_l = text-012.
k_fieldcat-hotspot = 'X'.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
ENDIF.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'OTH'.
k_fieldcat-seltext_l = text-019.
k_fieldcat-hotspot = 'X'.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'TOTAL'.
k_fieldcat-seltext_l = text-020.
k_fieldcat-do_sum = 'X'. "Statement inserted by Sunil Maurya
APPEND k_fieldcat TO t_fieldcat_sum_rep.
*=======================================================================
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'BELNR'.
k_fieldcat-seltext_l = text-013.
APPEND k_fieldcat TO t_fieldcat_det_rep.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'BUZEI'.
k_fieldcat-seltext_l = text-014.
APPEND k_fieldcat TO t_fieldcat_det_rep.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'PERIO'.
k_fieldcat-seltext_l = text-015.
APPEND k_fieldcat TO t_fieldcat_det_rep.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'WOGBTR'.
k_fieldcat-seltext_l = text-016.
APPEND k_fieldcat TO t_fieldcat_det_rep.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'OBJNR'.
k_fieldcat-seltext_l = text-018.
APPEND k_fieldcat TO t_fieldcat_det_rep.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'KSTAR'.
k_fieldcat-seltext_l = text-002.
APPEND k_fieldcat TO t_fieldcat_det_rep.
CLEAR k_fieldcat.
k_fieldcat-fieldname = 'KVGR5'.
k_fieldcat-seltext_l = text-017.
APPEND k_fieldcat TO t_fieldcat_det_rep.
*==============================================================
* Statements inserted by Sunil Maurya for sorting
CLEAR GS_SORT.
GS_SORT-FIELDNAME = 'KSTAR'.
GS_SORT-SPOS = 1.
GS_SORT-UP = 'X'.
APPEND GS_SORT TO GT_SORT.
CLEAR GS_SORT1.
GS_SORT1-FIELDNAME = 'KSTAR'.
GS_SORT1-SPOS = 1.
GS_SORT1-UP = 'X'.
GS_SORT1-SUBTOT = 'X'.
APPEND GS_SORT1 TO GT_SORT1.
*CLEAR GS_SORT1.
*GS_SORT1-FIELDNAME = 'WOGBTR'.
*GS_SORT1-SPOS = 2.
*GS_SORT1-UP = 'X'.
*GS_SORT1-SUBTOT = 'X'.
*APPEND GS_SORT1 TO GT_SORT1.
FORM build_sortcat.
k_sortcat-spos = 1.
k_sortcat-fieldname = 'KSTAR'.
k_sortcat-up = 'X'.
k_sortcat-down = 'X'.
APPEND k_sortcat TO it_sortcat.
clear k_sortcat.
ENDFORM.
* Statements inserted by Sunil Maurya for sorting
ENDFORM. " BUILD_CATLOG
*& Form DISPLAY_OUTPUT
*       text
*  -->  p1        text
*  <--  p2        text
FORM DISPLAY_OUTPUT .
DATA l_repid TYPE syrepid.
l_repid = sy-repid.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
i_callback_program = l_repid
it_fieldcat = t_fieldcat_sum_rep
i_callback_user_command = c_user_command
i_save = 'A'
it_sort = GT_SORT[] "Statements inserted by Sunil
TABLES
t_outtab = I_OUTPUT
EXCEPTIONS
program_error = 1
OTHERS = 2.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
ENDFORM. " DISPLAY_OUTPUT
*& Form user_command
*       text
FORM user_command USING r_ucomm LIKE sy-ucomm
rs_selfield TYPE slis_selfield.
DATA: l_repid TYPE syrepid.
l_repid = sy-repid.
CASE r_ucomm.
WHEN '&IC1'.
clear : i_detail. refresh : i_detail.
LOOP AT I_COEP WHERE KVGR5 = rs_selfield-fieldname.
MOVE i_coep to i_detail.
append i_detail.
clear : i_detail, i_coep.
endloop.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
i_callback_program = l_repid
it_fieldcat = t_fieldcat_det_rep
i_callback_user_command = c_user_command
i_save = 'A'
it_sort = GT_SORT1[] "Inserted by Sunil
TABLES
t_outtab = i_detail
EXCEPTIONS
program_error = 1
OTHERS = 2.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
ENDCASE.
ENDFORM.

Run the SE30 with internal tables, then examine the top lines.
see
SE30
The ABAP Runtime Trace (SE30) - Quick and Easy
Define your tables as sorted tables or at least use binary search; otherwise these sorts are useless
and the nested loop is slow.
SORT I_VBAKP BY OBJNR.
SORT I_COEP BY OBJNR.
LOOP AT I_WB.
READ TABLE I_VBAKP WITH KEY OBJNR = I_WB-OBJNR.
IF SY-SUBRC = 0.
READ TABLE I_SECTOR WITH KEY KVGR5 = I_VBAKP-KVGR5.
IF SY-SUBRC <> 0.
DELETE I_COEP WHERE OBJNR = I_WB-OBJNR.
loop at i_coep where objnr = i_wb-objnr.
i_coep-kvgr5 = 'OTH'.
modify i_coep.
clear : i_coep.
endloop.
ELSE.
READ TABLE I_COEP WITH KEY BELNR = I_WB-BELNR BUZEI = I_WB-BUZEI.
READ TABLE I_COEP WITH KEY OBJNR = I_WB-OBJNR.
IF SY-SUBRC = 0.
I_COEP-KVGR5 = I_SECTOR-KVGR5.
MODIFY I_COEP INDEX SY-TABIX.
LOOP AT I_COEP WHERE OBJNR = I_WB-OBJNR.
I_COEP-KVGR5 = I_SECTOR-KVGR5.
MODIFY I_COEP.
CLEAR : I_COEP.
ENDLOOP.
This is all very slow.
Read my blog on internal tables:
Measurements on internal tables: Reads and Loops:
Runtimes of Reads and Loops on Internal Tables
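The nonlinearity those measurements show can be reproduced in any language: a linear READ costs O(n) per probe, so probing every row is O(n²), while a keyed (hashed or binary-searched) lookup keeps the total near O(n). An illustrative measurement in Python (not ABAP, but the same effect):

```python
import time

def probe_all(table, use_index):
    """Probe every key once: either scan the list each time (like a READ on
    an unsorted standard table) or build a hash index once (like a hashed
    table, or a sorted table with binary search)."""
    index = {row["key"]: row for row in table} if use_index else None
    t0 = time.perf_counter()
    for row in table:
        if use_index:
            _ = index[row["key"]]                                 # O(1) per probe
        else:
            _ = next(r for r in table if r["key"] == row["key"])  # O(n) per probe
    return time.perf_counter() - t0

table = [{"key": i} for i in range(2000)]
linear_time = probe_all(table, use_index=False)
indexed_time = probe_all(table, use_index=True)
```

Even at only 2,000 rows the indexed probes finish orders of magnitude faster; the gap grows quadratically with table size.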
If you really want to identify all bugs, then try this
Z_SE30_COMPARE
A Tool to Compare Runtime Measurements: Z_SE30_COMPARE
Nonlinearity Check
Nonlinearity Check Using the Z_SE30_COMPARE
It needs a bit of experience, but then it is very powerful!
That is also a waste of time, though not a nested loop. You sort and re-sort the same table,
but the sorts are useless: the DELETEs are still sequential on standard tables.
Put all the stuff into ONE loop on I_COEP.
SORT I_COEP BY OBJNR.
DELETE I_COEP WHERE OBJNR+0(2) <> 'VB' AND
OBJNR+0(2) <> 'PR' AND
OBJNR+0(2) <> 'NV' AND
OBJNR+0(2) <> 'NP' AND
OBJNR+0(2) <> 'AO'.
SORT I_COEP BY KSTAR.
DELETE I_COEP WHERE KSTAR+0(5) <> '00003' AND
KSTAR+0(5) <> '00004'.
LOOP AT I_COEP WHERE OBJNR+0(2) = 'NP'.
MOVE I_COEP TO I_NP.
APPEND I_NP.
CLEAR : I_NP, I_COEP.
ENDLOOP.
LOOP AT I_COEP WHERE OBJNR+0(2) = 'NV'.
MOVE I_COEP TO I_NV.
APPEND I_NV.
CLEAR : I_NV, I_COEP.
ENDLOOP.
LOOP AT I_COEP WHERE OBJNR+0(2) = 'VB' .
MOVE I_COEP TO I_WB.
APPEND I_WB.
CLEAR : I_WB, I_COEP.
ENDLOOP.
LOOP AT I_COEP WHERE OBJNR+0(2) = 'PR' .
MOVE I_COEP TO I_PR.
APPEND I_PR.
CLEAR : I_PR, I_COEP.
ENDLOOP.
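The "ONE loop" advice means classifying each row while scanning I_COEP once, instead of four filtered loops (each a full scan) plus sorts. A rough sketch of that single pass, in Python with illustrative names:

```python
def partition_by_objnr(i_coep):
    """Dispatch each row to its bucket by the first two characters of the
    object number, touching the table exactly once."""
    buckets = {"NP": [], "NV": [], "VB": [], "PR": [], "AO": []}
    for row in i_coep:
        prefix = row["objnr"][:2]
        if prefix in buckets:      # rows with any other prefix are dropped
            buckets[prefix].append(row)
    return buckets

rows = [{"objnr": "NP123"}, {"objnr": "VB456"}, {"objnr": "XX789"}]
parts = partition_by_objnr(rows)
```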
There is probably more. But with the compare tool you can find everything.
Siegfried -
Taking too much time using BufferedWriter to write to a file
Hi,
I'm using the method extractItems(), given below, to write data to a file. This method takes too much time to execute when the number of records in the enumeration is 10,000 and above; to be precise, it takes around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much more. Has somebody faced this problem before, and if so, what could be the cause? This is very high-priority work, and it would be really helpful if someone could give me some info on this.
Thanks in advance.
public String extractItems() throws InternalServerException{
try{
String extractFileName = getExtractFileName();
FileWriter fileWriter = new FileWriter(extractFileName);
BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr );
System.out.println("Before -1");
CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
System.out.println("After -1");
PrintWriter out = new PrintWriter(bufferWrt);
System.out.println("Before -2");
TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
System.out.println("After -2");
XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
Enumeration allitems = itemSet.allItems();
System.out.println("the batch size : " +itemSet.getBatchSize());
XDForm frm = itemSet.getXDForm();
XDFormProperty[] props = frm.getXDFormProperties();
System.out.println("Before -3");
bufferWrt.newLine();
long startTime ,startTime1 ,startTime2 ,startTime3;
startTime = System.currentTimeMillis();
System.out.println("time here is--before-while : " +startTime);
while(allitems.hasMoreElements()){
String aRow = "";
XDItem item = (XDItem)allitems.nextElement();
for(int i =0 ; i < props.length; i++){
String value = item.getStringValue(props[i]);
if(value == null || value.equalsIgnoreCase("null"))
value = "";
if(i == 0)
aRow = value;
else
aRow += ("\t" + value);
}
startTime1 = System.currentTimeMillis();
System.out.println("time here is--before-writing to buffer --new: " +startTime1);
bufferWrt.write(aRow.toCharArray());
bufferWrt.flush();//added by rosmon to check extra time taken for extraction//
bufferWrt.newLine();
startTime2 = System.currentTimeMillis();
System.out.println("time here is--after-writing to buffer : " +startTime2);
}
startTime3 = System.currentTimeMillis();
System.out.println("time here is--after-while : " +startTime3);
out.close();//added by rosmon to check extra time taken for extraction//
bufferWrt.close();
fileWriter.close();
System.out.println("After -3");
return extractFileName;
} catch(Exception e){
e.printStackTrace();
throw new InternalServerException(e.getMessage());
}
}

Hi fiontan,
Thanks a lot for the response!!!
Yeah, I know it's a lot of code, but I thought it would be more informative if the whole function was quoted.
I'm in fact using the PrintWriter to wrap the BufferedWriter but am not using the print() method.
Does it save any time by using the print() method??
The place where the delay is occurring is the while loop shown below:
while(allitems.hasMoreElements()){
String aRow = "";
XDItem item = (XDItem)allitems.nextElement();
for(int i =0 ; i < props.length; i++){
String value = item.getStringValue(props[i]);
if(value == null || value.equalsIgnoreCase("null"))
value = "";
if(i == 0)
aRow = value;
else
aRow += ("\t" + value);
startTime1 = System.currentTimeMillis();
System.out.println("time here is--before-writing to buffer --out.flush() done: " +startTime1);
bufferWrt.write(aRow.toCharArray());
out.flush();//added by rosmon to check extra time taken for extraction//
bufferWrt.flush();//added by rosmon to check extra time taken for extraction//
bufferWrt.newLine();
startTime2 = System.currentTimeMillis();
System.out.println("time here is--after-writing to buffer : " +startTime2);
What exactly happens is that after a few loops it just seems to sleep for around 20 seconds and then again starts off and ............it goes on till the records are done.
Please do let me know if you have any idea as to why this is happening! This bug is giving me a scare.
thanks in advance -
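Two things in the loop above defeat the BufferedWriter: calling flush() after every line forces a real I/O operation per row, and aRow += ... re-copies the row for each field, which is quadratic in row width. A sketch of the fixed pattern, in Python for brevity (build each line with one join, flush once at the end):

```python
import io

def write_rows(rows, out):
    """Write tab-separated rows through a buffered stream: build each line
    with a single join (no quadratic += concatenation) and flush only once,
    so the buffer actually batches the underlying writes."""
    for row in rows:
        out.write("\t".join("" if v is None else str(v) for v in row))
        out.write("\n")
    out.flush()   # one flush at the end, not one per line

buf = io.StringIO()
write_rows([["a", None, "c"], ["d", "e", "f"]], buf)
```

The Java equivalent is StringBuilder (or PrintWriter.print) per row, with the single flush/close after the loop.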
Code taking too much time to output
The following code is taking too much time to execute (sometimes giving TIME_OUT).
ind = sy-tabix.
SELECT SINGLE * FROM mseg INTO mseg
WHERE bwart = '102' AND
lfbnr = itab-mblnr AND
ebeln = itab-ebeln AND
ebelp = itab-ebelp.
IF sy-subrc = 0.
DELETE itab INDEX ind.
CONTINUE.
ENDIF.
Is there any other way to write this code to reduce the execution time?
Thanks

Hi,
I think you are executing this code in a loop which is causing the problem. The rule is "Never put SELECT statements inside a loop".
Try to rewrite the code as follows:
* Outside the loop
SELECT *
from MSEG
into table lt_mseg
for all entries in itab
where bwart = '102' AND
lfbnr = itab-mblnr AND
ebeln = itab-ebeln AND
ebelp = itab-ebelp.
Then inside the loop, do a READ on the internal table
Loop at itab.
read table lt_mseg with key lfbnr = itab-mblnr
                            ebeln = itab-ebeln
                            ebelp = itab-ebelp. "bwart = '102' is already filtered in lt_mseg
if sy-subrc eq 0. "a matching 102 movement exists, so delete, as in the original logic
delete itab. "index is automatically determined here from SY-TABIX
endif.
endloop.
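The same "one bulk fetch, then in-memory probes" pattern, sketched in Python (field names mirror the ABAP, and the key match on LFBNR/EBELN/EBELP reflects the original WHERE clause; illustrative only):

```python
def drop_reversed(itab, lt_mseg):
    """Delete from itab each row that has a matching movement-type-102
    record in the pre-fetched MSEG rows: one O(1) probe per row instead
    of a SELECT inside the loop."""
    keys = {(m["lfbnr"], m["ebeln"], m["ebelp"])
            for m in lt_mseg if m["bwart"] == "102"}
    return [row for row in itab
            if (row["mblnr"], row["ebeln"], row["ebelp"]) not in keys]

itab = [{"mblnr": "M1", "ebeln": "P1", "ebelp": "10"},
        {"mblnr": "M2", "ebeln": "P2", "ebelp": "10"}]
lt_mseg = [{"bwart": "102", "lfbnr": "M1", "ebeln": "P1", "ebelp": "10"}]
itab = drop_reversed(itab, lt_mseg)
```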
I think this should optimise performance. You can check your code's performance using SE30 or ST05.
Hope this helps! Please revert if you need anything else!!
Cheers,
Shailesh.
Always provide feedback for helpful answers! -
Report taking too much time in the portal
Hi friends,
we have developed a report on the ODS, and we have published it on the portal.
The problem is that when several users execute the report at the same time, it takes too much time, so the performance is very poor.
Is there any way to sort out this issue? For example, can we send the report to each user's mail ID so that they do not have to log in to the portal, or can we create the same report on the cube?
What would be the main difference if the report were built on the cube instead of the ODS?
Please help me.
Thanks in advance,
Sridath

Hi
Try this to improve performance of query
Find the query run-time. These SAP Notes show where to find it:
Note 557870 - FAQ: BW Query Performance
Note 130696 - Performance trace in BW
This info may be helpful.
General tips
Using aggregates and compression.
Using fewer and less complex cell definitions if possible:
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs (restricted and calculated key figures).
3. Avoid too many characteristics in rows.
By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from left side menu > and there in system load history and distribution for a particular day > check query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run reporting agent at night and sending results to email. This will ensure use of OLAP cache. So later report execution will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important parameters aggregation ratio and records transferred to F/E to DB selected.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as its performance metric, measure query runtime.
3. To check the performance of the aggregates,see the columns valuation and usage in aggregates.
Open the Aggregates...and observe VALUATION and USAGE columns.
The valuation column shows plus and minus signs for the aggregate's design and usage. The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation. "+++++" means the aggregate is potentially very useful: its compression ratio is good and it is accessed often, so performance is good. "-----" means it is just overhead and can potentially be deleted.
In the usage column, you can see how often the aggregate has actually been used by queries.
Thus we can check the performance of the aggregate.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
performance ISSUE related to AGGREGATE
Note 356732 - Performance Tuning for Queries with Aggregates
Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Your query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
Implement the BW Statistics Business Content: you need to install it, feed it data, and analyse it through the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use report RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 202469 - Using the aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless by checking the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with statistics and execute it; you will get a STATUID. Copy this and look it up in the table.
This shows exactly which InfoObjects the query hits; if any of them is missing from the aggregate, the aggregate is useless for that query.
6. In SE11, check table RSDDAGGRDIR; you can find the last call-up of each aggregate there.
Generate Report in RSRT
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Business Intelligence Journal Improving Query Performance in Data Warehouses
http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Assign points if useful
Cheers
SM -
Hi,
I am running a SQL statement that takes 1 hour, but when I schedule the same code in a job using a package it takes 5 hours.
could anybody suggest why it is taking too much time?
Regards
Gagan
Use SQL trace and TKPROF with wait events to see where the time is being spent (or wasted).
See these informative threads:
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
HOW TO: Post a SQL statement tuning request - template posting
Also you can use V$SESSION and/or V$SESSION_LONGOPS to see what code is currently executing. -
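As a concrete sketch of the trace suggestion above, using the DBMS_MONITOR package available from Oracle 10g onwards. The username, SID/serial# and trace file name below are placeholders you must look up for your own job session:

```sql
-- 1. Find the session running the job (username is a placeholder)
SELECT sid, serial#, module
  FROM v$session
 WHERE username = 'SCOTT';

-- 2. Enable SQL trace with wait events for that session
EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

-- 3. Let the job run for a while, then disable tracing
EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 456);

-- 4. Format the raw trace file on the server with TKPROF, e.g.:
--    tkprof orcl_ora_1234.trc out.txt sys=no waits=yes sort=exeela
```

Comparing the TKPROF output of the interactive run against the job run should show where the extra 4 hours go (different plan, waits, etc.).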
ACCTIT table Taking too much time
Hi,
In SE16 on table ACCTIT I entered the G/L account number; when I execute it in production, it takes too much time to show the result.
Thanks
Hi,
Here are the details of the technical settings:
Name ACCTIT Transparent Table
Short text Compressed Data from FI/CO Document
Last Change SAP 10.02.2005
Status Active Saved
Data class APPL1 Transaction data, transparent tables
Size category 4 Data records expected: 24,000 to 89,000
Thanks
Statspack taking too much time
Hi
when t try to execute
exec perfstat.statspack.snap(i_snap_level=>10);
it takes too much time.
Can anybody suggest a solution?
Excerpt from $ORACLE_HOME/rdbms/admin/spdoc.txt:
Levels >= 10 Additional statistics: Parent and Child latches
This level includes all statistics gathered in the lower levels, and additionally gathers Parent and Child Latch information. Data gathered at this level can sometimes cause the snapshot to take longer to complete, i.e. this level can be resource intensive, and should only be used when advised by Oracle personnel.
-
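Given that note, a minimal sketch of the usual workaround: snap at a lower level (level 5 is the Statspack default) unless latch detail is explicitly required. The `modify_statspack_parameter` call is a standard Statspack procedure for changing the stored default level:

```sql
-- Take this snapshot at the default level 5 instead of level 10:
exec perfstat.statspack.snap(i_snap_level => 5);

-- Or change the default level stored for all future snapshots:
exec perfstat.statspack.modify_statspack_parameter(i_snap_level => 5);
```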
Hi Experts,
I am looking to pull VBRP-VBELN, i.e. the billing doc #, based on VGBEL, i.e. the sales doc #.
i.e.
select single * from vbrp into wa_vbrp
where vgbel = wa_vbap-vbeln
and posnr = wa_vbap-posnr.
But as there is no secondary index on VBRP for VGBEL and there are tons of records in VBRP, it is taking too much time.
So what is the alternative by which I can find the billing doc # from my sales doc #?
Thanks
Mr. Srinivas,
Just a suggestion: if you need only the header details, then why not extract data from VBRK (header for billing docs) and VBAK (header for sales docs)? These 2 tables contain only a single line per billing or sales doc, hence the performance should be better.
If my suggestion is not what you are looking for, then apologies for the same.
Regards,
Vivek
Alternatively, as Mr. Eric suggests, you can use VBFA:
VBFA-VBELN = VBRK-VBELN
VBFA-VBELV = VBAK-VBELN
Logic is VBFA-VBELN is the subsequent document & VBFA-VBELV is the preceding document.
Hope it helps. (But be sure the document created after the sales order is a billing document; there might be cases with delivery documents between the sales order and the billing document, so be careful.)
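A minimal ABAP sketch of the VBFA route described above, reusing `wa_vbap` from the question. Restricting the subsequent document category VBTYP_N to 'M' (invoice) is an assumption about which follow-on documents are wanted:

```abap
TYPES: BEGIN OF ty_flow,
         vbeln TYPE vbfa-vbeln,   " subsequent (billing) document
         posnn TYPE vbfa-posnn,   " subsequent item
       END OF ty_flow.
DATA lt_flow TYPE STANDARD TABLE OF ty_flow.

* Document flow: invoices ('M') that follow the sales order item.
* VBFA is keyed on the preceding document, so no secondary index is needed.
SELECT vbeln posnn
  FROM vbfa
  INTO TABLE lt_flow
  WHERE vbelv   = wa_vbap-vbeln    " preceding sales document
    AND posnv   = wa_vbap-posnr    " preceding item
    AND vbtyp_n = 'M'.             " subsequent doc category: invoice
```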
Edited by: Vivek on Jan 29, 2008 11:11 PM -
Sites Taking too much time to open and shows error
hi,
I've set up a SharePoint 2013 environment and created a site collection. Everything was working fine, but suddenly when I try to open that site collection or the Central Administration site it takes too much time to open a page; most of the time it does not open any page at all and shows the following error.
Even in the logs folder under the 15 hive, nothing useful is found. Please tell me why it takes about 10-12 minutes to open a site or any page and then shows the error above.
This usually happens if you are low on hardware resources. Check whether your machine conforms to the required software and hardware requirements.
https://technet.microsoft.com/en-us/library/cc262485.aspx
http://sharepoint.stackexchange.com/questions/58370/minimum-real-world-system-requirements-for-sharepoint-2013
Please remember to up-vote or mark the reply as answer if you find it helpful. -
Taking too much time to load application
Hi,
I have deployed a J2EE application on Oracle 10g version 10.1.2.0.2, but the application is taking too much time to load. After loading, everything works fast.
I have another 10g server (same version) in which the same application is loading very fast.
When I checked the apache error logs found this :-
[Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
[Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
[Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
[Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
[Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
[Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
[Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
[Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
[Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
[Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
[Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
[Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
[Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
[Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
Please HELP ME...
Hi, this is the solution given by your link:
A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
Problem
To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
Solution
The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
Oc4jUserKeepalive on
Oc4jConnTimeout 12000 (or a similar value)
Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
ajp.keepalive=true
For example:
java -Dajp.keepalive=true -jar oc4j.jar
Please tell me where, or in which file, I should put the option
java -Dajp.keepalive=true -jar oc4j.jar ?
-
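Regarding the follow-up question above: in a managed OracleAS 10g install, OC4J is started by OPMN, so JVM options like `-Dajp.keepalive=true` normally go into the `java-options` start parameter of the OC4J instance in `$ORACLE_HOME/opmn/conf/opmn.xml` rather than on a direct command line. A sketch only; the instance id `home` is taken from the logs above, and the existing option values shown are placeholders that should be merged with whatever is already configured:

```xml
<!-- $ORACLE_HOME/opmn/conf/opmn.xml, inside the OC4J component definition -->
<process-type id="home" module-id="OC4J">
  <module-data>
    <category id="start-parameters">
      <!-- append -Dajp.keepalive=true to the existing java-options value -->
      <data id="java-options" value="-server -Dajp.keepalive=true"/>
    </category>
  </module-data>
</process-type>
```

After editing, restart the instance (e.g. with `opmnctl restartproc`) so the new JVM options take effect.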
Creative Cloud is taking too much time to load and is not downloading the one-month trial of Photoshop I just paid for.
If the download has stalled, stop it and restart the download.
-
Hi, for the last two days my iPhone (iPhone 4 with iOS 5) has been very slow to open apps, and the notification window takes too much time to open when I swipe down. Help me resolve the issue.
The Basic Troubleshooting Steps are:
Restart... Reset... Restore...
iPhone Reset
http://support.apple.com/kb/ht1430
Try this First... You will Not Lose Any Data...
Turn the Phone Off...
Press and Hold the Sleep/Wake Button and the Home Button at the Same Time...
Wait for the Apple logo to Appear and then Disappear...
Usually takes about 15 - 20 Seconds... ( But can take Longer...)
Release the Buttons...
Turn the Phone On...
If that does not help... See Here:
Backing up, Updating and Restoring
http://support.apple.com/kb/HT1414