Performance issue with Export functionality in Discoverer
Hi,
I am running a Discoverer report which takes only 3 minutes to complete, but when we try to export it to Excel, it takes 2 hours.
There are only 1500 records to be exported.
Has anyone faced this type of issue?
Thanks for your help in advance.
Please post your version of Discoverer - are you exporting out of an EBS instance? Does this work on other PCs or for other workbooks? Please see if these MOS Docs are helpful:
245752.1 - Explaining Oracle BI Discoverer Session Memory Management And Server Cache Settings
237607.1 - ALERT: Required and Recommended Patch Levels For All Discoverer Versions
HTH
Srini
Similar Messages
-
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving Performance with Pipelined Table Functions" (http://www.oracle-developer.net/display.php?id=429).
Edit: The underlying query returns 500,000 rows in about 3 minutes. So there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB),100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Earthlink wrote:
"Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?"
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
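For example, here is a minimal sketch of wrapping one of the two procedures in a 10046 trace with wait events; the argA/argB values and the fetch variables below are placeholders, assuming the package posted above:
ALTER SESSION SET tracefile_identifier = 'pipeline_test';
ALTER SESSION SET events '10046 trace name context forever, level 8';
DECLARE
  rc     pipeline_example.resultset_typ;
  l_colc VARCHAR2(200);
  l_de   NUMBER;
  l_ed   NUMBER;
  l_one  NUMBER;
  l_zero NUMBER;
  l_rows PLS_INTEGER := 0;
BEGIN
  pipeline_example.with_pipeline('A', 'B', rc);  -- placeholder argA/argB values
  LOOP
    FETCH rc INTO l_colc, l_de, l_ed, l_one, l_zero;
    EXIT WHEN rc%NOTFOUND;
    l_rows := l_rows + 1;
  END LOOP;
  CLOSE rc;
  DBMS_OUTPUT.PUT_LINE(l_rows || ' rows fetched');
END;
/
ALTER SESSION SET events '10046 trace name context off';
-- then run tkprof on the resulting trace file with sys=no waits=yes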
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance issue with using MAX function in PL/SQL
Hello All,
We are having a performance issue with the logic below, where MAX is used to get the latest instance/record for a given input variable (p_in_header_id). The item_key has the format:
p_in_header_id - <number generated from a sequence>
This query takes around 1 minute 30 seconds to fetch even one record. Could someone please suggest a better way to form this logic and improve its performance?
We want the latest record for the item_key (which we get using MAX(begin_date)) for a given p_in_header_id value.
Query 1 :
SELECT item_key
  FROM wf_items
 WHERE item_type = 'xxxxzzzz'
   AND SUBSTR (item_key, 1, INSTR (item_key, '-') - 1) = p_in_header_id
   AND root_activity = 'START_REQUESTS'
   AND begin_date =
       (SELECT MAX (begin_date)
          FROM wf_items
         WHERE item_type = 'xxxxzzzz'
           AND root_activity = 'START_REQUESTS'
           AND SUBSTR (item_key, 1, INSTR (item_key, '-') - 1) = p_in_header_id);
Could someone please help us with this performance issue? We are really stuck because of it.
regards
First of all, thanks to all the gentlemen who replied.
I tried the ROW_NUMBER() option but it is still taking time. I have given the query and the tkprof results below. Even when it doesn't fetch any record (a valid case, because the input header id has no workflow request submitted and hence no entry in the wf_items table), see the time it has taken.
I also looked at the RANK and DENSE_RANK options that were suggested, but they still take time.
Any further suggestions or ideas as to how this could be resolved?
SELECT 'Y', 'Y', ITEM_KEY
FROM
( SELECT ITEM_KEY, ROW_NUMBER() OVER(ORDER BY BEGIN_DATE DESC) RN FROM
WF_ITEMS WHERE ITEM_TYPE = 'xxxxzzzz' AND ROOT_ACTIVITY = 'START_REQUESTS'
AND SUBSTR(ITEM_KEY,1,INSTR(ITEM_KEY,'-') - 1) = :B1
) T WHERE RN <= 1
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 1.57 0 0 0 0
Fetch 1 8700.00 544968.73 8180 8185 0 0
total 2 8700.00 544970.30 8180 8185 0 0
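For completeness, one approach not shown in the thread is a function-based index matching the SUBSTR predicate, combined with a single-pass MAX ... KEEP (DENSE_RANK LAST) query. This is only a sketch - the index name is hypothetical and it assumes you are permitted to add a custom index to WF_ITEMS:
CREATE INDEX wf_items_hdr_fbi
  ON wf_items (item_type, root_activity,
               SUBSTR (item_key, 1, INSTR (item_key, '-') - 1));

SELECT MAX (item_key) KEEP (DENSE_RANK LAST ORDER BY begin_date) AS item_key
  FROM wf_items
 WHERE item_type     = 'xxxxzzzz'
   AND root_activity = 'START_REQUESTS'
   AND SUBSTR (item_key, 1, INSTR (item_key, '-') - 1) = :B1;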
many thanks -
Hi Friends
I am having a performance issue with the function module HR_TIM_REPORT_ABSENCE_DATA, and my client has over 8 thousand employees. This function module is taking forever to read the data. Is there any other function module to read the absences data (IT2001)?
I used it as shown below. If I take out the FM 'HR_TIM_REPORT_ABSENCE_DATA_INI', the other function module does not work. Please suggest.
call function 'HR_TIM_REPORT_ABSENCE_DATA_INI'
exporting "Publishing to global memory
option_string = option_s "string of sel org fields
trig_string = trig_s "string of req data
alemp_flag = sw_alemp "all employee req
infot_flag = space "split per IT neccessary
sel_modus = sw_apa
importing
org_num = fdpos_lines "number of sel org fields
tables
fieldtab = fdtab "all org fields
field_sel = fieldnametab_m. "sel org fields
* Read all infotypes from the Absences type:
RP_READ_ALL_TIME_ITY PN-BEGDA PN-ENDDA.
* Central function unit to provide internal tables: abse, orgs, empl
call function 'HR_TIM_REPORT_ABSENCE_DATA'
exporting
pernr = pernr-pernr
begda = pn-begda
endda = pn-endda
IMPORTING
SUBRC = SUBRC_RTA
tables
absences = absences_01
org_fields = orgs
emp_fields = empl
* REFTAB =
* APLTAB =
awart_sel_p = awart_s[]
awart_sel_a = awart_s[]
abstp_sel = abstp_s[]
i0000 = p0000
i0001 = p0001
i0002 = p0002
i0007 = p0007
i2001 = p2001
i2002 = p2002
i2003 = p2003.
Thanks & Regards
Reddy
Guessing will not help you much; check with SE30 to get better insight.
SE30
The ABAP Runtime Trace (SE30) - Quick and Easy
What is the total time, and what are the Top 10 in the hit list?
Siegfried -
Issue with gui_download function module
Hi All,
I have an issue with the GUI_DOWNLOAD function module: it creates one extra line while downloading my internal table data into a text file, which I do not want. I have searched various threads but could not find a proper reply. Alternatively, please suggest another function module that will not create the extra line. Please help. Part 2 of the code follows:
INCLUDE RPPPXD00.
DATA : BEGIN OF COMMON PART A.
INCLUDE RPPPXD10.
DATA : END OF COMMON PART.
INCLUDE PC2RXTW0.
INCLUDE RPC2RX00.
DATA : BEGIN OF COMMON PART B.
INCLUDE RPC2CD00.
DATA : END OF COMMON PART.
INCLUDE RPPPXM00.
INCLUDE RPCMGR00.
AT SELECTION-SCREEN OUTPUT.
CONCATENATE SY-DATUM+2(6) SY-UZEIT+0(4) INTO REF_NO.
LOOP AT SCREEN.
IF R1 = 'X'.
IF SCREEN-NAME = 'FLN' OR SCREEN-NAME = '%_FLN_%_APP_%-TEXT' OR
SCREEN-NAME = 'BTC' OR SCREEN-NAME = '%_BTC_%_APP_%-TEXT' OR
SCREEN-NAME = 'PY_DT' OR SCREEN-NAME = '%_PY_DT_%_APP_%-TEXT'"SOC BY ANKITA"
OR SCREEN-NAME = 'ORG_ID' OR SCREEN-NAME = '%_ORG_ID_%_APP_%-TEXT'
OR SCREEN-NAME = 'ORG_AC' OR SCREEN-NAME = '%_ORG_AC_%_APP_%-TEXT'
OR SCREEN-NAME = 'DEPT_CD' OR SCREEN-NAME = '%_DEPT_CD_%_APP_%-TEXT'
OR SCREEN-NAME = 'REF_NO' OR SCREEN-NAME = '%_REF_NO_%_APP_%-TEXT'
OR SCREEN-NAME = 'PRS_BNK' OR SCREEN-NAME = '%_PRS_BNK_%_APP_%-TEXT'
OR SCREEN-NAME = 'TRANS_TY' OR SCREEN-NAME = '%_TRANS_TY_%_APP_%-TEXT'
OR SCREEN-NAME = 'TRANS_ID' OR SCREEN-NAME = '%_TRANS_ID_%_APP_%-TEXT'
OR SCREEN-NAME = 'TRANS_RK' OR SCREEN-NAME = '%_TRANS_RK_%_APP_%-TEXT'."EOC BY ANKITA
SCREEN-ACTIVE = 0.
ENDIF.
ENDIF.
IF R2 = 'X'.
IF SCREEN-NAME = 'FLN' OR SCREEN-NAME = '%_FLN_%_APP_%-TEXT' OR
SCREEN-NAME = 'BTC' OR SCREEN-NAME = '%_BTC_%_APP_%-TEXT' OR
SCREEN-NAME = 'PREPBY' OR SCREEN-NAME = '%_PREPBY_%_APP_%-TEXT'
OR SCREEN-NAME = 'APROBY' OR SCREEN-NAME = '%_APROBY_%_APP_%-TEXT'
OR SCREEN-NAME = 'PY_DT' OR SCREEN-NAME = '%_PY_DT_%_APP_%-TEXT' "SOC BY ANKITA
OR SCREEN-NAME = 'ORG_ID' OR SCREEN-NAME = '%_ORG_ID_%_APP_%-TEXT'
OR SCREEN-NAME = 'ORG_AC' OR SCREEN-NAME = '%_ORG_AC_%_APP_%-TEXT'
OR SCREEN-NAME = 'DEPT_CD' OR SCREEN-NAME = '%_DEPT_CD_%_APP_%-TEXT'
OR SCREEN-NAME = 'REF_NO' OR SCREEN-NAME = '%_REF_NO_%_APP_%-TEXT'
OR SCREEN-NAME = 'PRS_BNK' OR SCREEN-NAME = '%_PRS_BNK_%_APP_%-TEXT'
OR SCREEN-NAME = 'TRANS_TY' OR SCREEN-NAME = '%_TRANS_TY_%_APP_%-TEXT'
OR SCREEN-NAME = 'TRANS_ID' OR SCREEN-NAME = '%_TRANS_ID_%_APP_%-TEXT'
OR SCREEN-NAME = 'TRANS_RK' OR SCREEN-NAME = '%_TRANS_RK_%_APP_%-TEXT'."EOC BY ANKITA
SCREEN-ACTIVE = 0.
ENDIF.
ENDIF.
IF R3 = 'X'.
IF SCREEN-NAME = 'PREPBY' OR SCREEN-NAME = '%_PREPBY_%_APP_%-TEXT'
OR SCREEN-NAME = 'APROBY' OR SCREEN-NAME = '%_APROBY_%_APP_%-TEXT'
OR SCREEN-NAME = 'PY_DT' OR SCREEN-NAME = '%_PY_DT_%_APP_%-TEXT' "SOC BY ANKITA
OR SCREEN-NAME = 'ORG_ID' OR SCREEN-NAME = '%_ORG_ID_%_APP_%-TEXT'
OR SCREEN-NAME = 'ORG_AC' OR SCREEN-NAME = '%_ORG_AC_%_APP_%-TEXT'
OR SCREEN-NAME = 'REF_NO' OR SCREEN-NAME = '%_REF_NO_%_APP_%-TEXT'
OR SCREEN-NAME = 'DEPT_CD' OR SCREEN-NAME = '%_DEPT_CD_%_APP_%-TEXT'
OR SCREEN-NAME = 'PRS_BNK' OR SCREEN-NAME = '%_PRS_BNK_%_APP_%-TEXT'
OR SCREEN-NAME = 'TRANS_TY' OR SCREEN-NAME = '%_TRANS_TY_%_APP_%-TEXT'
OR SCREEN-NAME = 'TRANS_ID' OR SCREEN-NAME = '%_TRANS_ID_%_APP_%-TEXT'
OR SCREEN-NAME = 'TRANS_RK' OR SCREEN-NAME = '%_TRANS_RK_%_APP_%-TEXT'."EOC BY ANKITA
SCREEN-ACTIVE = 0.
ENDIF.
ENDIF.
IF R4 = 'X'.
IF SCREEN-NAME = 'PREPBY' OR SCREEN-NAME = '%_PREPBY_%_APP_%-TEXT'"SOC BY ANKITA
OR SCREEN-NAME = 'APROBY' OR SCREEN-NAME = '%_APROBY_%_APP_%-TEXT'
OR SCREEN-NAME = 'BTC' OR SCREEN-NAME = '%_BTC_%_APP_%-TEXT'."EOC BY ANKITA
SCREEN-ACTIVE = 0.
ENDIF.
ENDIF.
MODIFY SCREEN.
ENDLOOP.
START-OF-SELECTION.
SELECT SINGLE * FROM T549Q WHERE PERMO = '01'
AND PABRJ = PRD+0(4)
AND PABRP = PRD+4(2).
FR_DT = T549Q-BEGDA.
TO_DT = T549Q-ENDDA.
CONCATENATE FR_DT+0(4) FR_DT+4(2) INTO FR_P.
CONCATENATE TO_DT+0(4) TO_DT+4(2) INTO TO_P.
PN-PAPER = PRD.
PN-PERMO = '01'.
GET PERNR.
RP-PROVIDE-FROM-LAST P0003 SPACE PN-BEGDA PN-ENDDA.
RP_PROVIDE_FROM_LAST P0001 SPACE PN-BEGDA PN-ENDDA.
IF PNP-SW-FOUND EQ 1.
SN = SN + 1.
ITAB1-SNO = SN.
ITCC-SNO = SN.
ITAB1-ENO = PERNR-PERNR.
ITAB1-NAM = PERNR-ENAME.
ELSE.
REJECT.
ENDIF.
RP-INIT-BUFFER.
RP-SEL-CALC.
CALL FUNCTION 'RP_EVALUATION_PERIODS'
EXPORTING
LAST_CALCULATED_DAY = P0003-ABRDT
LAST_DAY_IN_PERIOD = TO_DT
RETROCALCULATED_DAY = RP-SEL-CALC-RRDAT
TABLES
DIR = RGDIR
EVP = EVP
EXCEPTIONS
RGDIR_EMPTY = 1
INTERNAL_ERROR = 2
OTHERS = 3.
DESCRIBE TABLE EVP LINES LIN.
IF LIN > 0.
LOOP AT EVP.
IF EVP-IAPER = TO_P AND EVP-PAPER = TO_P.
RX-KEY-PERNR = PERNR-PERNR.
UNPACK EVP-SEQNR TO RX-KEY-SEQNO.
RP-IMP-C2-TN.
READ TABLE BT INDEX 1.
READ TABLE WPBP INDEX 1.
READ TABLE TAX INDEX 1."CHANGES BY ANKITA
ITAB1-BAC = BT-BANKN.
ITAB1-BKEY = BT-BANKL .
ITAB1-DEP = WPBP-KOSTL.
ITAB1-BETRG = BT-BETRG."CHANGES BY ANKITA
ITAB1-TAXID = TAX-TAXID."CHANGES BY ANKITA
YEAR = VERSC-PAYDT+0(4) - 11.
MONTH = VERSC-PAYDT+4(2).
DAY = VERSC-PAYDT+6(2).
CONCATENATE YEAR MONTH DAY INTO ITAB1-PDT.
ITAB1-PDT = VERSC-PAYDT - 110000.
ITCC-DEP = WPBP-KOSTL.
LOOP AT RT WHERE LGART = '/559'.
ITAB1-BTFR = RT-BETRG.
ITCC-BTFR = RT-BETRG.
IF EVP-SRTZA = 'P'.
ITAB1-BTFR = ITAB1-BTFR - RT-BETRG.
ELSE.
ITAB1-BTFR = ITAB1-BTFR + RT-BETRG.
ENDIF.
ENDLOOP.
ENDIF.
ENDLOOP.
ENDIF.
APPEND: ITAB1, ITCC.
CLEAR: ITAB1, ITCC.
END-OF-SELECTION.
CONCATENATE 'Prepared By:' ` ` PREPBY INTO PREPBY.
CONCATENATE 'Approved By:' ` ` APROBY INTO APROBY.
IF R1 = 'X'.
FORMAT COLOR 2.
ULINE (127).
NEW-LINE.
WRITE: 2 'Sr No.', 10 'Emp Num', 27 'Name'.
WRITE: 57 'Department'.
WRITE: 72 ' Transfer Amount' RIGHT-JUSTIFIED.
WRITE: 92 'Bank Key', 107 'Bank AC. Number'.
WRITE:1 '|', 8 '|', 25 '|', 55 '|', 70 '|', 90 '|', 105 '|', 127 '|'.
NEW-LINE.
ULINE (127).
NEW-LINE.
FORMAT COLOR OFF.
LOOP AT ITAB1.
SN = SY-TABIX.
WRITE: 2 SN, 10 ITAB1-ENO, 27 ITAB1-NAM.
WRITE: 57 ITAB1-DEP.
WRITE: 72 ITAB1-BTFR.
WRITE: 92 ITAB1-BKEY, 107 ITAB1-BAC.
WRITE: 1 '|', 8 '|', 25 '|', 55 '|', 70 '|', 90 '|', 105 '|', 127 '|'.
ULINE (127).
NEW-LINE.
ENDLOOP.
SKIP 4.
ULINE 90(32).
NEW-LINE.
WRITE: 90 PREPBY.
SKIP 4.
ULINE 90(32).
NEW-LINE.
WRITE: 90 APROBY.
ENDIF.
IF R2 = 'X'.
LOOP AT ITCC.
COLLECT ITCC INTO ITCOL.
ENDLOOP.
FORMAT COLOR 2.
ULINE (44).
NEW-LINE.
WRITE:2 'Sr No.', 9 'Department'.
WRITE: 27 'Transfer Amount ' RIGHT-JUSTIFIED.
WRITE:1 '|', 8 '|', 25 '|', 44 '|'.
NEW-LINE.
ULINE (44).
NEW-LINE.
FORMAT COLOR OFF.
LOOP AT ITCOL.
SN = SY-TABIX.
WRITE: 2 SN, 9 ITCOL-DEP, 27 ITCOL-BTFR.
WRITE:1 '|', 8 '|', 25 '|', 44 '|'.
NEW-LINE.
ULINE (44).
NEW-LINE.
ENDLOOP.
ENDIF.
IF R3 = 'X'.
LOOP AT ITAB1.
CLEAR: ITTF, P3, P11, P13, P6, V_BAC.
LEN = STRLEN( ITAB1-BKEY ).
IF LEN < 3.
CONCATENATE ITAB1-BKEY '***' INTO P3.
ELSE.
LEN = LEN - 3.
LEN = 3.
P3 = ITAB1-BKEY+LEN(3).
ENDIF.
CLEAR LEN.
V_BAC = ITAB1-BAC.
REPLACE ALL OCCURRENCES OF '-' IN ITAB1-BAC WITH ''.
CONDENSE ITAB1-BAC NO-GAPS.
LEN = STRLEN( ITAB1-BAC )."if length of acc num > limit
IF LEN > 11.
IT_FAIL-EN = ITAB1-ENO.
IT_FAIL-BA = V_BAC.
APPEND IT_FAIL.
CLEAR: IT_FAIL.
CONTINUE.
ENDIF.
P11 = ITAB1-BAC.
CONCATENATE P11 '***********' INTO P11.
* The above step puts '*' in place of the unfilled characters of P11.
P13 = ITAB1-BTFR * 100.
P6 = ITAB1-PDT+2(6).
CONCATENATE ` ` P3 P11 BTC P13 P6 INTO STR.
ITTF-ROW = STR.
APPEND ITTF.
ENDLOOP.
IF ITTF[] IS NOT INITIAL.
CONCATENATE FLN SY-DATUM SY-UZEIT '.txt' INTO FILEPATH.
CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    FILENAME              = FILEPATH
    FILETYPE              = 'ASC'
    WRITE_FIELD_SEPARATOR = 'X'
  TABLES
    DATA_TAB              = ITTF
  EXCEPTIONS
    OTHERS                = 22.
SKIP 2.
IF SY-SUBRC <> 0.
WRITE:/ 'Unable to Download file at ', FILEPATH.
ELSE.
WRITE:/ 'File with following data downloaded at ', FILEPATH.
NEW-LINE.
SKIP 2.
LOOP AT ITTF.
WRITE:/ ITTF.
ENDLOOP.
ENDIF.
ELSE.
WRITE 'No Data, no file was downloaded'.
ENDIF.
IF IT_FAIL[] IS NOT INITIAL.
SKIP 2.
FORMAT COLOR 2.
WRITE 'Acc. No. of following employees exceeded the length limit'.
WRITE:/ 'So their entry was not created in the file'.
SKIP 1.
WRITE : 'Employee Number', 20 'Bank Acc. No.'.
FORMAT COLOR OFF.
LOOP AT IT_FAIL.
NEW-LINE.
WRITE : IT_FAIL-EN, 20 IT_FAIL-BA.
ENDLOOP.
ENDIF.
ENDIF.
IF R4 = 'X'."CHANGES BY ANKITA
WRITE:/ 'ERROR LOG - BANK A/C NO. CONTAINS ALPHANUMERIC'.
WRITE:/ 'EMPID' COLOR COL_POSITIVE,12 '|',15 'Receiving Bank Code' COLOR COL_POSITIVE,
40 '|','Receiver A/C No' COLOR COL_POSITIVE.
PERFORM EXTRACT_DATA.
SKIP 2.
ENDIF."EOC
RP-READ-PAYROLL-DIR.
-
Performance Issues with Crystal Reports 11 - Critical
Post Author: DJ Gaba
CA Forum: Exporting
I have migrated from Crystal Reports version 8 to version 11.
I am experiencing some performance issues with reports when displayed in version 11
Reports that were taking 2 seconds in version 8 are now taking 4-5 seconds in version 11.
I am using VB6 to export my report file to PDF.
Thanks
Post Author: synapsevampire
CA Forum: Exporting
Please don't post to multiple forums on the site with the same question.
I responded to your other post.
-k -
Performance issues with FDK in large XML documents
In my current project with FrameMaker 8 I'm experiencing severe performance issues with some FDK API calls.
The documents are about 3-8 MB in size. Formatted, they cover 150-250 pages.
When importing such an XML document I do some extensive "post-processing" using the FDK. This processing happens in Sr_EventHandler() during the SR_EVT_END_READER event. I noticed that some FDK function calls which modify the document's structure, like F_ApiSetAttribute() or F_ApiNewElementInHierarchy(), take several seconds, for the larger documents even minutes, to complete a single call. I tried to move some of these calls to earlier events, mostly to SR_EVT_END_ELEM, where the calls work without a delay. Unfortunately I can't rewrite the FDK client to move all of the lagging calls to earlier events.
Does anybody have a clue why such delays happen, and possibly can make a suggestion, how to solve this issue? Thank you in advance.
PS: I already thought of splitting such a document into smaller pieces using the FrameMaker book function. But I don't think the structure of the documents will permit such an automatic split, and it definitely isn't an option to change the document structure (the project is about migrating documents from Interleaf to XML with the constraint of keeping the document layout identical).
FP_ApplyFormatRules sounds really good - I'll give it a try on Monday. I wonder how I could have missed it, as I had already tried FP_Reformatting and FP_Displaying to no avail. By the way, what is actually meant by FP_Reformatting (when I used it I assumed it would do exactly what FP_ApplyFormatRules sounds like it does), or is that another of Lynne's well-kept secrets?
Thanks for all the helpful suggestions, guys. On Friday I already had my first improvements in a test version of my client: I did some (not all necessary) structural changes using XSLT pre-processing, and processing went down from 8 hours(!) to 1 hour. I was also playing with the idea of writing a wrapper around F_ApiNewElementInHierarchy() which pastes an appropriate element, created in a small flow on the reference pages, at the intended insertion location. But now, with FP_ApplyFormatRules on the horizon, I'm quite confident I can get even the complicated stuff under control - the parts that cannot be handled by the XSLT pre-processing because they depend on the actual formatting of the document at run-time and cannot be anticipated in pre-processing.
--Franz -
ERP Sales Order : Performance issues with Product Proposal
Hi
We are working on a CRM 2007 solution and are facing serious performance issues with the ERP Sales Order functionality provided in the ICWC of this version.
In our development we add items to the ERP cart as soon as the user clicks the 'New Sales Order' button on the sales order screen. We get the items via a very simple and optimized call to the ERP system and then add these entities to the item cart (the item collection, in simple terms).
Adding 10 items takes the application 10 seconds, which is far too long for just 10 items.
Can you please provide any Notes/alternative solution to resolve this issue.
Regards
Ajitabh
Hi Ajithabh,
Please apply the following SAP notes:
1061423 - Interaction Center ERP Order Performance improvement
1262277 - Performance: CRM value help causes dumps in ERP
1292817 - Performance: Reduce RFC calls during creation of ERP order.
1319885 - ERP sales order search with external reference
1326527 - Reducing number of RFC calls in IC ERP Sales Order
I hope it helps!
Regards,
Gabriel Santana -
Hi All,
Need some urgent help.
I am facing an issue with the function module 'SKWF_FIND_BY_QUERY' in a BW ECC 6.0 system.
As shown below, in the function module the table IT_PROPERTIES_RESULT gets populated with values based on inputs such as IT_CLASSES, IT_QUERY, and OBJ_TYPE 'L'.
IT_PROPERTIES_RESULT gets populated for some of the services sent through IT_QUERY but not for others.
call function 'SKWF_FIND_BY_QUERY'
  exporting
*   CONNECTION_SPACE   =
    OBJ_TYPE           = 'L'
*   PTYPE              =
*   X_STRICT           =
* importing
*   ERROR              =
  tables
    CLASSES            = IT_CLASSES
    QUERIES            = IT_QUERY
    RESULT_OBJECTS     = IT_LOIO
*   PROPERTIES_REQUEST =
    PROPERTIES_RESULT  = IT_PROPERTIES_RESULT.
The values are as follows:
Values populated in IT_CLASSES: BW_LO_TRAN
Values populated in IT_QUERY: 1) BW_QUERY, 2) /BIC/ZSERVICE
I would like to know whether there is any standard BW customizing transaction that maintains the properties returned in IT_PROPERTIES_RESULT and fetched through this function module.
Also, please suggest how this issue can be resolved.
Thanks & Regards,
Shailesh Nagar
Thanks Suhas. That definitely helped.
Also the following links helped.
http://help.sap.com/saphelp_nw70/helpdata/EN/86/1c8c3e94243446e10000000a114084/frameset.htm
/people/siegfried.szameitat/blog/2005/09/29/generic-extraction-via-function-module
Cheers,
Preethi -
BPC 7 performance issue with Microsoft 2003 vs Microsoft 2007
Hi All,
We need some help with our performance issues with BPC 7 on Microsoft 2007. We have noticed a significant difference between the two versions. Running the same report on BPC 7 MS 2003 takes roughly 5-6 minutes; running it on BPC 7 MS 2007 takes roughly 15-20 minutes. We've tried numerous things to figure out what could be causing this. Does anyone have the same issue, or can anyone shed some light on what I can do to eliminate the problem?
Thanks,
Elmer
Hi,
Did you install the following component on your application server (which is a prerequisite for running Excel 2007)? This note is part of the installation guide:
In order for the software to function correctly in Excel 2007, you must install the Microsoft Office 2007 system driver Data Connectivity Components, which you can get from the Microsoft web site. The components must be installed on the Application server. We recommend that you install all of the latest Microsoft Windows hotfixes prior to installing the Administration client. For information about which hotfixes to install, see http://www.microsoft.com/downloads/details.aspx?FamilyID=7554F536-8C28-4598-9B72-EF94E038C891&DisplayLang=en.
Hope this will help.
Kind Regards,
Patrick -
Performance Issue with Crosstab Reports Using Disco Viewer 10.1.2.48.18
We're experiencing a performance issue (retrieving 40,000 rows) with crosstab reports using Discoverer Viewer 10.1.2.48.18 (more than 1 minute executing "Building Page Axis" or executing a refresh).
Are there parameters to tune (in the pref.txt file) in order to reduce the "Building Page Axis" execution time?
Note: we get the same performance problem using Discoverer Desktop 10.1.2.48.18.
Thanks in advance for your help.
Hi
Well, if the same issue occurs in both Desktop and Viewer then you have your answer. It's not the way that Discoverer is running the workbook, it's the way the workbook has been constructed.
For a start, 40000 rows for a Crosstab is way over the top and WILL cause performance issues. This is because Discoverer has to create a bucket for every data point for every combination of items on the page, side and top axes. The more rows, page items and column headings that you have, the more buckets you have and therefore the longer it will take for Discoverer to work out the contents of every bucket.
Also, whenever you use page items or crosstabs, Discoverer has to retrieve all of the rows for the entire query, not just the first x rows as with a table. This is because it cannot possibly know how many buckets to create until it has all the rows.
You therefore need to:
a) apply sufficient filters to reduce the amount of data being returned to something manageable
b) reduce the number of page items, if used
c) reduce the number of items on the side or top axis of a crosstab
d) reduce the number of complex calculations, especially calculations that would generate a new bucket
If you have a lot of complex calculations, you should consider the use of a materialized view / summary folder to pre-calculate the values.
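As a rough illustration of that last point, here is a minimal materialized view sketch; the table and column names are hypothetical, not taken from the workbook above:
CREATE MATERIALIZED VIEW sales_summary_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT region,
       product,
       SUM (amount) AS total_amount,
       COUNT (*)    AS row_count
  FROM sales_fact
 GROUP BY region, product;
A summary folder in Discoverer could then be pointed at pre-aggregated data like this instead of recomputing the calculations for every bucket at query time.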
Does this help?
Best wishes
Michael Armstrong-Smith
URL: http://learndiscoverer.com
Blog: http://learndiscoverer.blogspot.com -
Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?
Thanks Kelly,
The answers would be the following:
1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
Assuming all will be populated, this would apply to all final material specs in the system, which could be ~25% of all material specs.
The cells will be numeric, free text, drop downs, and some calculated numeric.
Are we reaching the limits for UI performance?
Thanks -
Performance issues with Homesharing?
I have a Time Capsule as the base station for my wireless network, two AirPort Express units set up to extend the network around the house, an iMac i7 as the main iTunes library, a couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home Sharing. I've done several tests making sure nothing else is taking bandwidth; here is the list of issues:
1) With nothing else running, when I try playing a movie stored on my iMac via Home Sharing on an iPad 2, it stops and I have to keep pressing the play button over and over again. Typically the iPad downloads part of the movie first and then starts playing so that it copes with the bandwidth, but in many cases it doesn't.
2) When trying to play any iTunes content (movies, music, photos, etc.) from my Apple TV I can see my computer's library, but when I go into any of the menus, it says there's no content. I have to reboot the Apple TV and then the problem is fixed. It's just annoying that I have to reboot.
3) When watching a Netflix movie on my iPad, I send the sound via AirPlay to some speakers through an AirPort Express. At times I lose the connection to the speakers.
I've complained about Wi-Fi's instability, but here I tried to keep everything with Apple products to avoid any compatibility issues and stay within N wireless technology, which I understood was much more stable.
Does anyone have any suggestions?
Hi,
you should analyze the db after you have loaded the tables.
Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
If yes:
make sure your sequence caches values (ALTER SEQUENCE s CACHE 10000);
drop all unneeded indexes while loading, and disable triggers if possible.
How big is your Redo Log Buffer? When loading a large amount of data it may be an option to enlarge this buffer.
Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
Is it possible using a direct load? Or do you already direct load?
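For instance, a minimal direct-path load sketch (the table names are placeholders, and NOLOGGING assumes you can afford to re-run the load after a media recovery):
ALTER TABLE target_table NOLOGGING;

INSERT /*+ APPEND */ INTO target_table
SELECT *
  FROM staging_table;

COMMIT;  -- a direct-path insert must be committed before the table can be queried in the same session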
Dim -
Performance issues with the Vouchers index build in SES
Hi All,
We are currently performing an upgrade for: PS FSCM 9.1 to PS FSCM 9.2.
As part of the upgrade, the client wants Oracle SES to be deployed for some modules, including Purchasing and Payables (Vouchers).
We are facing severe performance issues with the Vouchers index build (data volume: approx. 8.5 million rows).
The index creation process runs for over 5 days.
Can you please share any information or issues that you may have faced on your project and how they were addressed?
Check the following logs for errors:
1. The message log from the process scheduler
2. search_server1-diagnostic.log in /search_server1/logs directory
If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance for SES.
Performance Issues with large XML (1-1.5MB) files
Hi,
I'm using XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I'm having serious performance issues with XPath queries.
When I run an XPath query against an element of SQL type VARCHAR2, I get good performance. But when I run a similar XPath query against an element of SQL type collection (VARRAY of VARCHAR2), performance is very poor.
I have also created indexes on extract() and analyzed my XMLType table and indexes, but I get no performance gain. I have also tried all the storage options available for collections, i.e. VARRAYs, nested tables, IOTs, LOBs, inline, etc., and all of these gave me the same bad performance.
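For reference, a sketch of the kind of extract-based function index mentioned above; the XMLType table name and XPath are hypothetical:
CREATE INDEX po_reference_idx ON purchaseorder_tab p
  (EXTRACTVALUE (VALUE (p), '/PurchaseOrder/Reference'));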
I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
I guess I'm running out of options and patience as well.;)
I would appreciate any ideas/suggestions, please help.....
Thanks;
Ramakrishna Chinta
Are you having similar symptoms to mine? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0
Maybe you are looking for
-
My code does not give me a result, and I get an error when moving to the next record - please see
Hi master sir, I imported this file, also importing javax.faces.event.ValueChangeEvent; then my error was removed. I am using this code in the button event: getMfatableDataProvider().cursorNext(); form1.discardSubmittedValues("virtualForm1"); and my textField is bound
-
TC - 802.11N can't see the TC when it is configured to 5GHz 'n only'
G'Day, I am trying to set up my TC to use the 5GHz band "n only"; when I do this I can't see the TC from either my MB Air or Mac Pro (both are n capable). Connecting on the 2.4GHz frequency is not a problem and the connection shows as being around 120
-
Subcontract process PO via delivery via shipment
Hi, can anyone let me know how to issue the material to a subcontract vendor PO through delivery via shipment? The present scenario is: create the subcontracting PO (PO with item category L), issue the material with reference to the PO through MB1B transfer posting us
-
What's the best forum in the UK for Sun ONE UDS
Does anyone know of a user group in the UK that meets regularly to discuss Sun ONE UDS?
-
Selectone_radio inside panel_list
Hi all!! I'm using a panel_list to display a list of products. I would like the table to have one radio_button per row to allow selecting one of the products. What would the JSP code be? Is it possible?