How to improve performance when using the BRM logical database
Hi All,
I am facing a performance issue when retrieving data from BKPF and the corresponding BSEG table. For the fiscal period in question there are around 6 million (60 lakh) records, and populating the final internal table with values from these tables takes a very long time.
When I tried to use the BRM logical database with SAP Query/QuickViewer, I hit the same issue.
Please suggest how I can improve the performance.
Thanks in advance
Chakradhar
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting - post locked
Rob
Similar Messages
-
PERFORMANCE BAD WHEN CURSORS ARE USED WITHIN PL/SQL BLOCKS
We are seeing poor database performance on Oracle 10g.
For some cursor selects, performance under Oracle 10g is significantly slower than under Oracle 9i. On a test system (Oracle 9i) the problem does not reproduce; on the 10g system it always does.
Plain execution of the underlying select statement ran at roughly the same speed on both databases. However, when a cursor was defined over the same statement and executed within a PL/SQL block (without involving the user interface), the timing matched the slow behavior observed on the user interface.
By adding the FIRST_ROWS hint, similarly fast timings can be achieved on both machines.
Conclusion: something in the optimizer settings of the Oracle 10g database must be fundamentally different from Oracle 9i, or Oracle 10g has a real problem. Analyzing and solving this general problem seems more reasonable to me than fixing individual performance problems one by one.
Can you help me? Many thanks.
Hello, thanks for the explanatory notes.
Here is the script concerned. The only difference is the hint (with hint = good performance; without hint = bad performance):
DECLARE
b BOOLEAN;
BEGIN
b := plogin.batch_login('**', '****', 717, FALSE);
prost.reload_context;
END;
DECLARE
l_ma_kuerzel VARCHAR2(100) DEFAULT NULL;
l_sta_id mitarbeiter_historie.sta_id%TYPE;
l_org_id organisationseinheit.org_id%TYPE;
l_pv_like mitarbeiter.ma_name%TYPE;
l_typ_id typ.typ_id%TYPE;
l_mihi_beginn VARCHAR2(40);
l_ma_ausgeschieden VARCHAR2(40);
l_ma_ldap mitarbeiter.ma_ldap%TYPE;
l_smodify_form VARCHAR2(80);
l_sform_typ VARCHAR2(80);
l_sheader VARCHAR2(200);
l_nurlsource NUMBER;
l_nurldestination prosturl.pur_id%type;
l_ma_like VARCHAR2(100) DEFAULT NULL;
l_nma_typ NUMBER;
l_bshow BOOLEAN;
l_counter NUMBER DEFAULT 0;
cursor ma_list_not_all_detail(
p_ma_like IN VARCHAR2 DEFAULT NULL,
p_ma_kuerzel IN VARCHAR2 DEFAULT NULL,
p_sta_id IN VARCHAR2 DEFAULT NULL,
p_org_id IN VARCHAR2 DEFAULT NULL,
p_typ_id IN VARCHAR2 DEFAULT NULL,
p_mihi_beginn IN VARCHAR2 DEFAULT NULL,
p_pv_like IN VARCHAR2 DEFAULT NULL,
p_ma_ausgeschieden IN VARCHAR2 DEFAULT NULL,
p_ma_ldap IN VARCHAR2 DEFAULT NULL
) IS
SELECT /*+ first_rows */
ma.ma_id ma_id
, view_fkt.display_ma(mihi.typ_id_mt
, view_fkt.cat_maname(ma.ma_name
, ma.ma_zusatz
, ma.ma_titel
, ma.ma_vorname)) name
, view_fkt.display_ma(mihi.typ_id_mt,ma.ma_kuerzel) ma_kuerzel
, typ.typ_value mt_kuerzel
, substr(org.typ_id,4,length(org.typ_id)) || ' ' || org.org_name||' ('||org.org_ktr||')' org_name
, to_char(mihi.mihi_beginn, 'dd.mm.yyyy') beginn
, decode(pv.ma_name ||' '|| pv.ma_titel ||' '|| pv.ma_vorname
, ' ',prost_cons.t_blank
, pv.ma_name||', '||pv.ma_titel||' '||pv.ma_vorname) pv_kuerzel
, mihi.sta_id sta_id
, nvl(to_char(ma.ma_ausgeschieden,'dd.mm.yyyy'), ' ') ausgeschieden
, nvl(to_char(mihi.mihi_wochenarbeitszeit,'90D00'),' ') wochenarbeitszeit
, nvl(to_char(mihi.mihi_taz_mo,'90D00'),' ') taz_mo
, nvl(to_char(mihi.mihi_taz_di,'90D00'),' ') taz_di
, nvl(to_char(mihi.mihi_taz_mi,'90D00'),' ') taz_mi
, nvl(to_char(mihi.mihi_taz_do,'90D00'),' ') taz_do
, nvl(to_char(mihi.mihi_taz_fr,'90D00'),' ') taz_fr
, nvl(to_char(mihi.mihi_taz_sa,'90D00'),' ') taz_sa
, nvl(to_char(mihi.mihi_taz_so,'90D00'),' ') taz_so
, nvl(ma.ma_ldap, ' ') ma_ldap
, mihi.mihi_beginn mihi_beginn
, mihi.mihi_order_no mihi_order_no
, mihi.mihi_order_pos mihi_order_pos
FROM organisationseinheit org
, typ typ
, mitarbeiter pv
, mitarbeiter ma
, v$mihi_id mid
, mitarbeiter_historie mihi
, v$access_orgs_th_t th
WHERE mihi.org_id = th.org_id
AND mid.mihi_id = mihi.mihi_id
AND ma.ma_id = mid.ma_id
AND ma.ma_delete = 'n'
AND ma.ma_virtualitaet = 'N'
AND (p_ma_like IS NULL
OR ma.ma_name LIKE p_ma_like)
AND (p_ma_kuerzel IS NULL
OR ma.ma_kuerzel LIKE p_ma_kuerzel)
AND (p_sta_id IS NULL
OR mihi.sta_id = p_sta_id)
AND (p_org_id IS NULL
OR org.org_id = p_org_id)
AND (p_typ_id IS NULL
OR typ.typ_id = p_typ_id)
AND mihi_beginn >= nvl(p_mihi_beginn,to_date('01.01.1960','dd.mm.yyyy'))
AND (p_pv_like IS NULL
OR pv.ma_name LIKE p_pv_like)
AND (ma.ma_ausgeschieden >= nvl(p_ma_ausgeschieden,to_date('01.01.1960','dd.mm.yyyy'))
AND ma.ma_ausgeschieden - 1 < nvl(p_ma_ausgeschieden,to_date('01.01.1960','dd.mm.yyyy'))
OR p_ma_ausgeschieden IS NULL)
AND (p_ma_ldap IS NULL
OR ma.ma_ldap LIKE p_ma_ldap)
AND pv.ma_id (+)= mihi.ma_id_pv
AND org.org_id (+)= mihi.org_id
AND typ.typ_id = mihi.typ_id_mt
ORDER BY upper(ma.ma_name), upper(ma.ma_vorname);
l_result ma_list_not_all_detail%ROWTYPE;
BEGIN
l_nMA_Typ := pmitarbeiter.cn_Incomplete_Ma;
l_ma_like := NULL;
l_ma_kuerzel := NULL;
l_sta_id := NULL;
l_org_id := 'KST0000421301';
l_typ_id := NULL;
l_mihi_beginn := NULL;
l_pv_like := NULL;
l_ma_ausgeschieden := NULL;
l_ma_ldap := NULL;
IF (l_ma_like IS NOT NULL
OR l_ma_kuerzel IS NOT NULL
OR l_sta_id IS NOT NULL
OR l_org_id IS NOT NULL
OR l_typ_id IS NOT NULL
OR l_mihi_beginn IS NOT NULL
OR l_pv_like IS NOT NULL
OR l_ma_ausgeschieden IS NOT NULL
OR l_ma_ldap IS NOT NULL) THEN
-- For incomplete employees a different cursor is used.
-- To make an employee complete, a location, working-time model,
-- department and daily working times must be filled in.
-- When the record is then saved, the affected fields are stored
-- and the Virtualitaet field is set to 'R' (was 'N').
l_counter := 0;
dbms_output.put_line(to_char(sysdate, 'sssss'));
FOR j IN ma_List_Not_All_Detail(
l_ma_like,
l_ma_kuerzel,
l_sta_id,
l_org_id,
l_typ_id,
l_mihi_beginn,
l_pv_like,
l_ma_ausgeschieden,
l_ma_ldap
) LOOP
l_counter := l_counter + 1;
dbms_output.put_line(l_counter);
dbms_output.put_line(j.ma_kuerzel);
END LOOP;
dbms_output.put_line(to_char(sysdate, 'sssss'));
END IF;
return;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(sqlerrm);
END;
=============
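The cursor in the script above relies throughout on the `(p_x IS NULL OR col LIKE p_x)` idiom, which lets a single statement serve any combination of optional filters but forces the optimizer to pick one plan for all of those combinations. As a hedged illustration of just the idiom (the names and rows here are invented for the example), the same logic in Python:

```python
def matches(row, **filters):
    """Return True if the row passes every supplied filter.

    A filter value of None means "no restriction", mirroring the
    SQL pattern (p_x IS NULL OR col = p_x) used in the cursor.
    """
    return all(
        value is None or row.get(column) == value
        for column, value in filters.items()
    )

rows = [
    {"org_id": "KST0000421301", "sta_id": "A"},
    {"org_id": "KST0000999999", "sta_id": "A"},
]

# Only org_id is restricted; sta_id=None is ignored, like a NULL parameter.
hits = [r for r in rows if matches(r, org_id="KST0000421301", sta_id=None)]
```

Because every parameter may be NULL, the database must choose one plan that works for all filter combinations; FIRST_ROWS biases it toward index-driven access, which is one plausible reason the hint helps here.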
Thank you -
Performance issue when a Direct I/O option is selected
Hello Experts,
One of my customers has a performance issue when the Direct I/O option is selected: they report an increase in memory usage with the Direct I/O storage option compared to the Buffered I/O option.
There are two BSO applications on the server. With Buffered I/O they experienced a high level of read and write I/Os. Using Direct I/O reduces the read and write I/Os, but dramatically increases memory usage.
Other Information -
a) Environment Details
HSS - 9.3.1.0.45, AAS - 9.3.1.0.0.135, Essbase - 9.3.1.2.00 (64-bit)
OS: Microsoft Windows x64 (64-bit) 2003 R2
b) What is the memory usage when Buffered I/O and Direct I/O is used? How about running calculations, database restructures, and database queries? Do these processes take much time for execution?
Application 1: Buffered 700MB, Direct 5GB
Application 2: Buffered 600MB to 1.5GB, Direct 2GB
Calculation times may increase from 15 minutes to 4 hours. Same with restructure.
c) What is the current Database Data cache; Data file cache and Index cache values?
Application 1: Buffered (Index 80MB, Data 400MB), Direct (Index 120MB; Data File 4GB, Data 480MB).
Application 2: Buffered (Index 100MB, Data 300MB), Direct (Index 700MB, Data File 1.5GB, Data 300MB)
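A rough tally of the cache settings in (c) against the memory figures in (b) suggests the Direct I/O data file cache accounts for most of the memory growth. This is a sketch only; the numbers come from this post, and the gap between cache totals and observed memory is other Essbase overhead:

```python
MB = 1
GB = 1024 * MB

# Application 1 cache settings from the post, in MB.
buffered_caches = {"index": 80 * MB, "data": 400 * MB}
direct_caches = {"index": 120 * MB, "data": 480 * MB, "data_file": 4 * GB}

buffered_total = sum(buffered_caches.values())  # 480 MB, near the ~700 MB observed
direct_total = sum(direct_caches.values())      # 4696 MB, near the ~5 GB observed
growth = direct_total - buffered_total          # the data file cache dominates
```

With Direct I/O, Essbase no longer leans on the operating system's file-system cache, so the data file cache must be sized (and paid for) explicitly, which is why memory use appears to balloon.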
d) What is the total size of the ess0000x.pag files and ess0000x.ind files?
Application 1: Page File 20GB, Index 1.7GB.
Application 2: Page 3GB, index 700MB.
Any suggestions on how to improve the performance when Direct I/O is selected? Any performance documents relating to above scenario would be of great help.
Thanks in advance.
Regards,
Sudhir
Sudhir,
Do you work at a help desk, or are you a consultant? You ask such a varied range of questions that I suspect the former. If you do work at a help desk, don't you have next-level support that could help you? If you are a consultant, I suggest teaming up with another consultant who knows more. You might also want to close some of your questions - you have 24 open - and perhaps give points to those who helped you. -
Performance issues when creating a Report / Query in Discoverer
Hi forum,
Hope you can help; this involves a performance issue when creating a report/query.
I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, which is far too long. If I remove the condition, the query time goes back to less than 5 seconds.
Please see attached the SQL Inspector Plan:
Before Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
AND-EQUAL
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_N1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
After Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
TABLE ACCESS FULL GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
Many thanks,
Lance
Hi Rod,
I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 minutes. As a test I changed it to (Batch Status||'' = 'Unposted'), and the query again returned within seconds.
I've been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
I think the problem is with the column using DECODE. When querying the column in TOAD, the raw value 'P' is returned, but in Discoverer the condition is applied to the decoded value 'Posted'. I'm not too sure how DECODE works, but I think this could be forcing full table scans. How do we get around this?
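If the folder really is built on a view that decodes a raw status column (as the DECODE listed in this thread suggests), one common workaround is to filter on the raw code rather than the decoded label, so an index on the status column can still be used. A hedged sketch in Python standing in for SQL; the mapping shown is only a small excerpt of the full DECODE:

```python
# A small excerpt of the DECODE mapping: raw status code -> display label.
STATUS_LABELS = {
    "P": "Posted",
    "U": "Unposted",
    "S": "Already selected for posting",
}

def label_to_code(label):
    """Invert the mapping so a user-facing condition such as
    "Batch Status = 'Posted'" can be rewritten as STATUS = 'P',
    a predicate that an index on the raw column can satisfy."""
    for code, text in STATUS_LABELS.items():
        if text == label:
            return code
    return None
```

In Discoverer terms this means exposing the raw JOURNAL_BATCH1.STATUS column (or a folder item on it) and conditioning on 'P' instead of on the decoded label.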
Lance
DECODE( JOURNAL_BATCH1.STATUS,
'+', 'Unable to validate or create CTA',
'+*', 'Was unable to validate or create CTA',
'-','Invalid or inactive rounding differences account in journal entry',
'-*', 'Modified invalid or inactive rounding differences account in journal entry',
'<', 'Showing sequence assignment failure',
'<*', 'Was showing sequence assignment failure',
'>', 'Showing cutoff rule violation',
'>*', 'Was showing cutoff rule violation',
'A', 'Journal batch failed funds reservation',
'A*', 'Journal batch previously failed funds reservation',
'AU', 'Showing batch with unopened period',
'B', 'Showing batch control total violation',
'B*', 'Was showing batch control total violation',
'BF', 'Showing batch with frozen or inactive budget',
'BU', 'Showing batch with unopened budget year',
'C', 'Showing unopened reporting period',
'C*', 'Was showing unopened reporting period',
'D', 'Selected for posting to an unopened period',
'D*', 'Was selected for posting to an unopened period',
'E', 'Showing no journal entries for this batch',
'E*', 'Was showing no journal entries for this batch',
'EU', 'Showing batch with unopened encumbrance year',
'F', 'Showing unopened reporting encumbrance year',
'F*', 'Was showing unopened reporting encumbrance year',
'G', 'Showing journal entry with invalid or inactive suspense account',
'G*', 'Was showing journal entry with invalid or inactive suspense account',
'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
'I', 'In the process of being posted',
'J', 'Showing journal control total violation',
'J*', 'Was showing journal control total violation',
'K', 'Showing unbalanced intercompany journal entry',
'K*', 'Was showing unbalanced intercompany journal entry',
'L', 'Showing unbalanced journal entry by account category',
'L*', 'Was showing unbalanced journal entry by account category',
'M', 'Showing multiple problems preventing posting of batch',
'M*', 'Was showing multiple problems preventing posting of batch',
'N', 'Journal produced error during intercompany balance processing',
'N*', 'Journal produced error during intercompany balance processing',
'O', 'Unable to convert amounts into reporting currency',
'O*', 'Was unable to convert amounts into reporting currency',
'P', 'Posted',
'Q', 'Showing untaxed journal entry',
'Q*', 'Was showing untaxed journal entry',
'R', 'Showing unbalanced encumbrance entry without reserve account',
'R*', 'Was showing unbalanced encumbrance entry without reserve account',
'S', 'Already selected for posting',
'T', 'Showing invalid period and conversion information for this batch',
'T*', 'Was showing invalid period and conversion information for this batch',
'U', 'Unposted',
'V', 'Journal batch is unapproved',
'V*', 'Journal batch was unapproved',
'W', 'Showing an encumbrance journal entry with no encumbrance type',
'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
'X', 'Showing an unbalanced journal entry but suspense not allowed',
'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
'Z', 'Showing invalid journal entry lines or no journal entry lines',
'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ), -
How to improve performance of slect
Hi friends,
Following code is from a report that computes the opening stock of a material. This particular statement accounts for 99% of the run time. Kindly advise how this select statement can be modified to improve the performance of the report.
DATA: BEGIN OF I_MARD OCCURS 0,
WERKS LIKE MARD-WERKS,
MATNR LIKE MARD-MATNR,
LGORT LIKE MARD-LGORT,
LABST LIKE MARD-LABST,
INSME LIKE MARD-LABST,
MEINS LIKE MARA-MEINS,
EINME LIKE MARD-LABST,
SPEME LIKE MARD-LABST,
RETME LIKE MARD-LABST,
END OF I_MARD.
SELECT MKPF~MBLNR MKPF~MJAHR MKPF~VGART MKPF~BUDAT
       MSEG~ZEILE MSEG~BWART MSEG~XAUTO MSEG~MATNR
       MSEG~WERKS MSEG~LGORT MSEG~SHKZG MSEG~MENGE
       MSEG~MEINS
  INTO CORRESPONDING FIELDS OF TABLE I_MKPF
  FROM MKPF AS MKPF JOIN MSEG AS MSEG
    ON MKPF~MBLNR = MSEG~MBLNR
   AND MKPF~MJAHR = MSEG~MJAHR
  FOR ALL ENTRIES IN I_MARD
  WHERE MKPF~BUDAT GE S_BUDAT-LOW
    AND MSEG~MATNR EQ I_MARD-MATNR
    AND MSEG~WERKS EQ I_MARD-WERKS
    AND MSEG~LGORT NE ''.
thanks
anu
Hi,
Remove the CORRESPONDING FIELDS OF TABLE addition; it drastically reduces performance!
The definition of table I_MKPF should look like this; only then can you drop INTO CORRESPONDING FIELDS OF TABLE:
DATA: BEGIN OF I_MKPF OCCURS 0,
        MBLNR TYPE MKPF-MBLNR,
        MJAHR TYPE MKPF-MJAHR,
        VGART TYPE MKPF-VGART,
        BUDAT TYPE MKPF-BUDAT,
        ZEILE TYPE MSEG-ZEILE,
        BWART TYPE MSEG-BWART,
        XAUTO TYPE MSEG-XAUTO,
        MATNR TYPE MSEG-MATNR,
        WERKS TYPE MSEG-WERKS,
        LGORT TYPE MSEG-LGORT,
        SHKZG TYPE MSEG-SHKZG,
        MENGE TYPE MSEG-MENGE,
        MEINS TYPE MSEG-MEINS,
      END OF I_MKPF.
Now the select statement becomes the following. When you use FOR ALL ENTRIES, you must always check that the driver table is not initial, because with an empty driver table the selection conditions are ignored:
IF NOT I_MARD[] IS INITIAL.
  SELECT MKPF~MBLNR MKPF~MJAHR MKPF~VGART MKPF~BUDAT
         MSEG~ZEILE MSEG~BWART MSEG~XAUTO MSEG~MATNR
         MSEG~WERKS MSEG~LGORT MSEG~SHKZG MSEG~MENGE
         MSEG~MEINS
    INTO TABLE I_MKPF
    FROM MKPF AS MKPF JOIN MSEG AS MSEG
      ON MKPF~MBLNR = MSEG~MBLNR
     AND MKPF~MJAHR = MSEG~MJAHR
    FOR ALL ENTRIES IN I_MARD
    WHERE MKPF~BUDAT GE S_BUDAT-LOW
      AND MSEG~MATNR EQ I_MARD-MATNR
      AND MSEG~WERKS EQ I_MARD-WERKS
      AND MSEG~LGORT NE ''.
ENDIF.
Anyway, since you are going for FOR ALL ENTRIES on the fields WERKS and MATNR, depending on your requirement it is better to SORT and DELETE ADJACENT DUPLICATES on table I_MARD before the select statement:
SORT I_MARD BY MATNR WERKS.
DELETE ADJACENT DUPLICATES FROM I_MARD COMPARING MATNR WERKS. " eliminates duplicate records when using FOR ALL ENTRIES
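The SORT / DELETE ADJACENT DUPLICATES advice above matters because FOR ALL ENTRIES effectively turns each distinct key in the driver table into a lookup; duplicate keys cost extra work for no extra rows. A sketch of the same preparation step in Python (the field names mirror the ABAP example):

```python
def dedupe_keys(rows):
    """Sort by (matnr, werks) and drop adjacent duplicates -- the
    analogue of SORT ... / DELETE ADJACENT DUPLICATES COMPARING
    before a FOR ALL ENTRIES selection."""
    rows = sorted(rows, key=lambda r: (r["matnr"], r["werks"]))
    unique = []
    for r in rows:
        if not unique or (unique[-1]["matnr"], unique[-1]["werks"]) != (r["matnr"], r["werks"]):
            unique.append(r)
    return unique

keys = dedupe_keys([
    {"matnr": "M1", "werks": "W1"},
    {"matnr": "M1", "werks": "W1"},  # duplicate key, dropped
    {"matnr": "M2", "werks": "W1"},
])
```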
Reward all helpful answers !!!
regards,
sai ramesh -
How to improve spreadsheet speed when single-threaded VBA is the bottleneck.
My brother works with massive Excel spreadsheets and needs to speed them up: gigabytes in size, often with a million rows and many sheets per workbook. He has already refined the sheets to take advantage of Excel's multithreaded recalculation and seen significant improvements, but he has hit a stumbling block. He uses extensive VBA code to aid clarity, but the VB engine is single-threaded, and these relatively simple functions can be called millions of times. Some functions are trivial (e.g. conversion functions), exist purely for clarity, and are easily unwound (at the expense of clarity); some could be unwound, but that would make the spreadsheets much more complex; and others cannot be unwound.
He's aware of http://www.analystcave.com/excel-vba-multithreading-tool/ and similar tools but they don't help as the granularity is insufficiently fine.
So what can he do? A search shows requests for multi-threaded VBA going back over a decade.
qts
Hi,
>> The VB engine is single-threaded, and these relatively simple functions can be called millions of times.
The Office object model uses single-threaded apartments, so if the bottleneck is Excel object-model operations, multithreading will not improve performance significantly.
>> How to improve spreadsheet speed when single-threaded VBA is the bottleneck.
Performance optimization should be driven by the business case. Since I'm not familiar with yours, I can only give some general suggestions from a technical perspective. According to your description, the spreadsheet has reached gigabytes in size with a data volume of about a million rows. If so, I suggest storing the data in SQL Server and then using analysis tools (e.g. Power Pivot). See "Create a memory-efficient Data Model using Excel 2013 and the Power Pivot add-in".
As ryguy72 suggested, you can also leverage other third-party data-processing tools according to your business requirements.
Regards,
Jeffrey
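One further general mitigation, not raised by either poster, is caching: if a trivial UDF is called millions of times with a limited set of distinct arguments, memoizing it removes most of the single-threaded work. A hedged sketch in Python rather than VBA; `to_fahrenheit` is an invented stand-in for the kind of conversion function described:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def to_fahrenheit(celsius):
    """A trivial conversion function of the kind the post describes.
    With the cache, repeated calls with the same argument do the
    arithmetic only once."""
    return celsius * 9 / 5 + 32

# A million calls, but only one actual computation.
for _ in range(1_000_000):
    to_fahrenheit(20)
```

In VBA the same idea can be approximated with a module-level Dictionary keyed by the argument; it only pays off when arguments repeat, so it is worth measuring first.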
-
Performance Issues when editing large PDFs
We are using Acrobat 9 and X Professional and are experiencing performance issues when attempting to edit large PDF files (Windows 7). When editing PDFs of 200+ pages, we see pregnant pauses (that feel like lockups), slow open times, and slow printing.
Are there any tips or tricks for working with these large documents that would improve performance?
You said "edit." If you are talking about actual editing, that should be done in the original document and a new PDF created. Acrobat is not a very good editing tool and should only be used for minor, critical edits.
If you are talking about simply using the PDF, a lot depends on its structure. If it is full of graphics, it will be slow. You can improve performance by using PDF Optimizer to reduce graphic resolution and the like. You may very likely have a bloated PDF that is causing the problem, and optimizing its structure should help.
Be sure to work on a copy. -
Oracle Retail 13 - Performance issues when open, save, approving worksheets
Hi Guys,
Recently we started facing performance issues when working with Oracle Retail 13 worksheets from the Java GUI on client desktops.
We run Oracle Retail 13.1 on Oracle Database 11g R1 and Application Server 10g at the latest release.
Issues:
- Opening, saving, or approving a worksheet with approximately 9,000 items takes up to 15 minutes.
- Even smaller worksheets take around 10 minutes just to open.
- Opening multiple worksheets likewise takes "ages", 10-15 minutes.
Questions:
- Is it expected performance for such worksheets?
- What is your experience with Oracle Retail 13 performance while working with worksheets - how much time does it normally take to open, edit, and save a worksheet?
- What are the average expected times for such operations?
Any feedback and hints would be much appreciated.
Cheers!!
Hi,
I guess you mean Order/Buyer worksheets?
This is not normal; it should be quicker, a matter of seconds to at most a minute.
Database side tuning is where I would look for clues.
And the obvious question: do you remember any changes that may have caused the issue? Are the table and index statistics freshly gathered?
Best regards, Erik Ykema -
How to improve performance of MediaPlayer?
I tried to use the MediaPlayer with an On2 VP6 FLV movie.
Showing a video with a resolution of 1024x768 works.
Showing a video with a resolution of 1280x720 and an average bitrate of 1700 kb/s causes the video to lag a couple of seconds behind the audio. VLC, Media Player Classic, and several other players have no problem with this video; only the FX MediaPlayer performs poorly.
Additionally, mouse events in a second stage (the first stage is used for the video) are not processed in 2 of 3 cases. If the MediaPlayer is switched off, the mouse events work reliably.
Does somebody know a solution for these problems?
Cheers
masim
duplicate thread..
How to improve performance of attached query -
How to solve performance issue
Hi
How can I resolve the performance issue in the following select query -
SELECT *
INTO CORRESPONDING FIELDS OF TABLE it_final
FROM ce1zcsc
WHERE paledger EQ c_10 "Currency Type
AND vrgar IN s_vrgar "Record Type
AND versi EQ space "Plan Version
AND perio IN r_perio "Period
AND bukrs IN s_bukrs. "Company Code
TABLE CE1ZCSC has around 173 fields,but it_final has around 105 fields.
The indexes are created for the following fields:
paledger
vrgar
versi
perio
bukrs and
prctr.
I am unsure whether I should be looking at the estimated CPU cost of the INDEX RANGE SCAN or of the TABLE ACCESS BY INDEX ROWID step in the execution plan for the SQL statement.
If anybody can provide me with informative documents on this performance issue, please do.
Hi,
Don't use "*" or CORRESPONDING FIELDS in the select query; instead declare all the required fields and use the INTO TABLE clause.
Let me know if you still face the same problem.
-Naveen. -
How to improve performance of Siebel Configurator
Hi All,
We are using Siebel Configurator to model item structures and wrote a few constraint rules on top of them. But the Configurator takes a long time to open when launched.
It behaves the same way even without the rules.
Any inputs on this would be highly appreciated.
RAM
duplicate thread..
How to improve performance of attached query -
How to add an issue if you are a responsible manager of a child WBS?
How can I add an issue if I am the responsible manager of a child WBS? I have some projects with many child WBS elements managed by other project managers (or other responsible managers) than the project's own manager. I'd like those other responsible managers to be able to add/edit/delete issues in their WBS elements. How can I do that?
Hi Yves
try adding the following at the end of your query
select cardcode,cardname,balance from ocrd
for browse
and execute it again
Edited by: Fasolis Vasilios on Nov 30, 2011 12:10 PM -
How to improve performance of the attached query
Hi,
How to improve performance of the below query? Please help; the explain plan is also attached -
SELECT Camp.Id,
rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount,
(SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
rCam.AccountKey as AccountKey
FROM Campaign Camp, rCamSit rCam, CamBilling, Site xSite
WHERE Camp.AccountKey = rCam.AccountKey
AND Camp.AvCampaignKey = rCam.AvCampaignKey
AND Camp.AccountKey = CamBilling.AccountKey
AND Camp.CampaignKey = CamBilling.CampaignKey
AND rCam.AccountKey = xSite.AccountKey
AND rCam.AvSiteKey = xSite.AvSiteKey
AND rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY') and
to_date('01-01-2011', 'DD-MM-YYYY')
GROUP By rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount
Explain Plan :-
Description Object_owner Object_name Cost Cardinality Bytes
SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
SORT AGGREGATE 1 13
VIEW GEMINI_REPORTING 14 1 13
HASH GROUP BY 14 1 103
NESTED LOOPS 13 1 103
HASH JOIN 12 1 85
TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
NESTED LOOPS 9 5 325
HASH JOIN 7 1 40
SORT UNIQUE 2 1 18
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1
duplicate thread..
How to improve performance of attached query -
How to improve performance of attached query
Hi,
How to improve performance of the below query? Please help; the explain plan is also attached -
(Same query and explain plan as in the previous thread.)
duplicate thread..
How to improve performance of attached query -
How to improve performance of query
Hi all,
How to improve performance of query.
please send :
[email protected]
thanks in advance
bhaskar
hi
go through the following links for performance
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
http://www.asug.com/client_files/Calendar/Upload/ASUG%205-mar-2004%20BW%20Performance%20PDF.pdf
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2