Bad performance of MKPF / MSEG selects
Hi,
I am working on a report that has a performance issue.
SELECT-OPTIONS: matnr FOR mara-matnr,
                mtart FOR mara-mtart,
                werks FOR mseg-werks NO INTERVALS,
                lgort FOR mseg-lgort NO INTERVALS,
                budat FOR mkpf-budat,        " OBLIGATORY
                reci  FOR mseg-bwart,        " OBLIGATORY NO INTERVALS
                iss   FOR mseg-bwart.
The SELECT queries are as follows, but they run inside a loop:
LOOP AT itab.
  AT NEW matnr.
    SELECT SUM( menge ) INTO qty1 FROM mkpf INNER JOIN mseg
        ON mkpf~mblnr = mseg~mblnr AND mkpf~mjahr = mseg~mjahr
      WHERE mseg~matnr = itab-matnr AND werks IN werks
        AND budat < budat-low AND shkzg = 'S'
        AND bwart IN ('561','105','101','309','701').

    SELECT SUM( menge ) INTO qty2 FROM mkpf INNER JOIN mseg
        ON mkpf~mblnr = mseg~mblnr AND mkpf~mjahr = mseg~mjahr
      WHERE mseg~matnr = itab-matnr AND werks IN werks
        AND budat < budat-low AND shkzg = 'H'
        AND bwart IN ('562','201','261','502','543','333','551').

    SELECT SUM( menge ) INTO r_qty1 FROM mkpf INNER JOIN mseg
        ON mkpf~mblnr = mseg~mblnr AND mkpf~mjahr = mseg~mjahr
      WHERE mseg~matnr = itab-matnr AND werks IN werks
        AND budat IN budat AND shkzg = 'S'
        AND bwart IN reci.

    SELECT SUM( menge ) INTO r_qty2 FROM mkpf INNER JOIN mseg
        ON mkpf~mblnr = mseg~mblnr AND mkpf~mjahr = mseg~mjahr
      WHERE mseg~matnr = itab-matnr AND werks IN werks
        AND budat IN budat AND shkzg = 'H'
        AND bwart IN reci.

    SELECT SUM( menge ) INTO q1_qty1 FROM mkpf INNER JOIN mseg
        ON mkpf~mblnr = mseg~mblnr AND mkpf~mjahr = mseg~mjahr
      WHERE mseg~matnr = itab-matnr AND werks IN werks
        AND budat IN budat AND shkzg = 'S'
        AND bwart IN iss.

    SELECT SUM( menge ) INTO q1_qty2 FROM mkpf INNER JOIN mseg
        ON mkpf~mblnr = mseg~mblnr AND mkpf~mjahr = mseg~mjahr
      WHERE mseg~matnr = itab-matnr AND werks IN werks
        AND budat IN budat AND shkzg = 'H'
        AND bwart IN iss.

    " calculations follow here...
  ENDAT.
ENDLOOP.
Its performance is very poor; it takes a long time to produce any output.
I want to replace this code with FOR ALL ENTRIES:
first, two SELECT statements on table MKPF
1) one for budat IN budat
2) one for budat < budat-low
and then, based on those, 8-10 SELECT statements from table MSEG.
How do I get all the data into one final table?
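For reference, the per-material SELECT SUM pattern above can also be collapsed into one aggregate query per quantity bucket, read once before the loop. A sketch for the first bucket, keeping the select-option and variable names from the post (lt_qty1, ty_qty and ls_qty1 are my own names):

```abap
TYPES: BEGIN OF ty_qty,
         matnr TYPE matnr,
         menge TYPE menge_d,
       END OF ty_qty.

DATA: lt_qty1 TYPE HASHED TABLE OF ty_qty WITH UNIQUE KEY matnr,
      ls_qty1 TYPE ty_qty.

* One database round trip for ALL materials instead of one per AT NEW matnr.
SELECT mseg~matnr SUM( mseg~menge )
  FROM mkpf INNER JOIN mseg
    ON mkpf~mblnr = mseg~mblnr AND mkpf~mjahr = mseg~mjahr
  INTO TABLE lt_qty1
  WHERE mseg~matnr IN matnr
    AND mseg~werks IN werks
    AND mkpf~budat < budat-low
    AND mseg~shkzg = 'S'
    AND mseg~bwart IN ('561','105','101','309','701')
  GROUP BY mseg~matnr.

* Later, inside the loop, a fast internal-table read replaces the SELECT:
READ TABLE lt_qty1 INTO ls_qty1 WITH TABLE KEY matnr = itab-matnr.
IF sy-subrc = 0.
  qty1 = ls_qty1-menge.
ENDIF.
```

The same pattern applies to the other five buckets (qty2, r_qty1/2, q1_qty1/2), each with its own SHKZG/BWART/BUDAT conditions.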
Thanks, sir. I tried it like that:
SELECT mblnr mjahr vgart blart blaum budat FROM mkpf INTO TABLE tt_mkpf
  WHERE budat IN s_budat.
IF sy-subrc = 0.
  SORT tt_mkpf BY mblnr mjahr budat.
ENDIF.

IF NOT tt_mkpf[] IS INITIAL.
  SELECT matnr mblnr mjahr zeile bwart werks lgort shkzg dmbtr menge
    FROM mseg INTO CORRESPONDING FIELDS OF TABLE tt_mseg_temp
    FOR ALL ENTRIES IN tt_mkpf
    WHERE mblnr = tt_mkpf-mblnr
      AND mjahr = tt_mkpf-mjahr
      AND ( shkzg = 'S' OR shkzg = 'H' )
      AND matnr IN s_matnr
      AND lgort IN s_lgort
      AND werks IN s_werks
      AND ( bwart IN s_reci OR bwart IN s_iss ).
ENDIF.

And for the second case:

SELECT mblnr mjahr vgart blart blaum budat FROM mkpf INTO TABLE tt_mkpf
  WHERE budat < s_budat-low.
After fetching the data from MSEG with the statement above: how do I combine the data of the two MKPF selections and MSEG into one final table, so that I can use LOOP AT tt INTO wa? I just need a little assistance on this.
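A minimal sketch of the merge step, assuming the tables from the post above (the final-table type ty_final and the work-area names are my own): loop over the MSEG rows, look up the matching MKPF header with a binary search on the sorted key, and append the combined row.

```abap
TYPES: BEGIN OF ty_final,
         matnr TYPE matnr,
         mblnr TYPE mblnr,
         mjahr TYPE mjahr,
         budat TYPE budat,
         bwart TYPE bwart,
         shkzg TYPE shkzg,
         menge TYPE menge_d,
       END OF ty_final.

DATA: tt_final TYPE TABLE OF ty_final,
      wa_final TYPE ty_final,
      wa_mseg  LIKE LINE OF tt_mseg_temp,
      wa_mkpf  LIKE LINE OF tt_mkpf.

LOOP AT tt_mseg_temp INTO wa_mseg.
* tt_mkpf was sorted by mblnr mjahr, so a binary search is safe here.
  READ TABLE tt_mkpf INTO wa_mkpf
       WITH KEY mblnr = wa_mseg-mblnr
                mjahr = wa_mseg-mjahr
       BINARY SEARCH.
  CHECK sy-subrc = 0.
  MOVE-CORRESPONDING wa_mseg TO wa_final.
  wa_final-budat = wa_mkpf-budat.   " posting date comes from the header
  APPEND wa_final TO tt_final.
ENDLOOP.
```

The second MKPF selection (budat < s_budat-low) can be merged the same way into the same final table, or into a second table if the two date buckets must stay separate.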
Similar Messages
-
Hello Experts,
I have an issue with a custom report in which I used an inner join on tables MKPF and MSEG. Sometimes the join statement takes 9-10 minutes to execute, and sometimes it executes within 1-2 minutes with the same test data.
I am not able to understand what is actually happening.
Please help.
code :
SELECT f~mblnr f~mjahr f~usnam f~bktxt p~bukrs
INTO TABLE itab
FROM mkpf AS f INNER JOIN mseg AS p
ON f~mblnr = p~mblnr AND f~mjahr = p~mjahr
WHERE f~vgart = 'WE'
AND f~budat IN p_budat
AND f~usnam IN p_sgtxt
AND p~bwart IN ('101','105')
AND p~werks IN p_werks
AND p~lgort IN p_lgort.
Regards,
Dipendra Panwar.

Hi Dipendra,
if you call a report twice in a row with the same test data, the second run should be faster, because some of the data remains in memory and need not be fetched from the database again. The same holds for the third and further runs, until the data in the SAP buffers is displaced by other programs.
For performance traces you should therefore measure a first (cold) run.
Regards,
Klaus -
Bad Performance in a query into table BKPF
Hi forum, I have a real problem with the second query, on table BKPF. Can somebody help me, please?
*THIS IS THE QUERY UNDER MSEG
SELECT tmseg~mblnr tmkpf~budat tmseg~belnr tmseg~bukrs tmseg~matnr
       tmseg~ebelp tmseg~dmbtr tmseg~waers tmseg~werks tmseg~lgort
       tmseg~menge tmseg~kostl
  FROM mseg AS tmseg JOIN mkpf AS tmkpf ON tmseg~mblnr = tmkpf~mblnr
  INTO CORRESPONDING FIELDS OF TABLE it_docs
  WHERE tmseg~bukrs IN se_bukrs AND
        tmkpf~budat IN se_budat AND
        tmseg~mjahr = d_gjahr AND
        ( tmseg~bwart IN se_bwart AND tmseg~bwart IN ('201','261') ).
IF sy-dbcnt > 0.
* I CREATE AWKEY FOR CONSULTING BKPF
LOOP AT it_docs.
CONCATENATE it_docs-mblnr d_gjahr INTO it_docs-d_awkey.
MODIFY it_docs.
ENDLOOP.
* THIS IS THE QUERY WITH BAD PERFORMANCE
* I NEED TO KNOW "BELNR" TO GO TO THE BSEG TABLE
SELECT belnr awkey
FROM bkpf
INTO CORRESPONDING FIELDS OF TABLE it_tmp
FOR ALL ENTRIES IN it_docs
WHERE
bukrs = it_docs-bukrs AND
awkey = it_docs-d_awkey AND
gjahr = d_gjahr AND
bstat = space .
THANKS

Hi Josue,
The bad performance is because you're not specifying the primary key fields of table BKPF in your WHERE condition, and BKPF is usually a big table.
What you really need is to create a new database index for table BKPF via the ABAP Dictionary on fields BUKRS, AWKEY, GJAHR and BSTAT. You'll find the performance of the program increases significantly after the new index is activated. But I would talk to Basis first to confirm they have no issues with you creating a new index for BKPF on the database system.
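Before creating a custom index, it may be worth checking whether adding AWTYP to the WHERE clause already helps: for material documents the reference key is normally AWTYP = 'MKPF' with AWKEY = MBLNR + MJAHR, and many systems ship a standard BKPF index on the AWTYP/AWKEY fields (please verify this in SE11 on your release). A sketch using the names from the post above:

```abap
SELECT belnr awkey
  FROM bkpf
  INTO CORRESPONDING FIELDS OF TABLE it_tmp
  FOR ALL ENTRIES IN it_docs
  WHERE bukrs = it_docs-bukrs
    AND awtyp = 'MKPF'            " reference transaction for material documents
    AND awkey = it_docs-d_awkey
    AND gjahr = d_gjahr
    AND bstat = space.
```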
Hope this helps.
Cheers,
Sougata. -
MKPF/ MSEG to get Sales Order No.
Dear All ,
Does anyone know how we can get the Sales Order No. from tables MKPF/MSEG? Which tables do we need to combine/join to get the Sales Order No.?
Thanks

Hi Orlang,
You also have VBELN (the sales order number) in table MSEG, as field KDAUF. Please check.
You can use the MBLNR field, available in both tables, to join them:
SELECT c~mblnr c~kdauf c~kdpos
  INTO CORRESPONDING FIELDS OF TABLE itab
  FROM mseg AS c
  INNER JOIN mkpf AS p ON p~mblnr = c~mblnr
                      AND p~mjahr = c~mjahr.   " join on the full key
Cheerz
Ram
Edited by: Ramchander Krishnamraju on Nov 19, 2009 7:18 AM -
Reporting on master data customer and bad performances : any workaround ?
Hello,
I've been asked to investigate bad performance encountered when reporting
on the specific master data zcustomer.
Basically this master data has a design quite similar to 0CUSTOMER; there are 96,000 entries in the master data table.
A simple query has been developed: the reporting is done on the master data zcustomer and its attributes: no key figure, no calculation, no restriction...
Nevertheless, the query cannot be executed: it runs for around 10 minutes in RSRT, then the private memory is exhausted and a short dump is generated.
I tried to build a very simple query on 0CUSTOMER, this time without the attributes... and it took more than 30 seconds before I got the results.
I checked the queries statistics :
3.x Analyzer Server 10 sec
OLAP: Read Texts : 20 sec
Why does reporting on this master data take so long, while displaying the content in SAP via "maintain master data" gives an immediate answer?
Is there any workaround?
Any help would be really appreciated.
thank you.
Raoul

Hi.
How much data have you got in the cube?
If you make no restrictions, you are asking the system to return data for all 96,000 customers. That is one thing that might take some time.
Also, using the attributes of this customer object, e.g. making selections on or displaying several of them, means that the system has to run through the 96,000 master data records to know what goes where in the report.
When you display the master data, you are by default displaying only the first 250 or so hits, and you are not joining against any cube or sorting the result set, so that is fast.
You should put some kind of restriction on other things than zcustomer (time, org. unit, version, etc.) to limit the dataset from the cube, but also a restriction on one of the zcustomer attributes, maybe with an index for it, and performance should improve.
br
Jacob -
Temporary LOBs - bad performance when nocache is used
Hello.
Please advise me what could be the reason for the bad performance of line 8 (the dbms_lob.Copy call) in the following anonymous block:
declare
  i integer;
  c clob := 'c';
  procedure LTrimSys(InCLOB in clob) is
    OutCLOB clob;
  begin
    DBMS_LOB.CREATETEMPORARY(OutCLOB, false, DBMS_LOB.call);
    dbms_lob.Copy(OutCLOB, InCLOB, dbms_lob.getlength(InCLOB));
    DBMS_LOB.freetemporary(OutCLOB);
  end;
begin
  for j in 1 .. 1000 loop
    LTrimSys(c);
  end loop;
end;
I have two practically identical 10.2.0.4.0 EE 64-bit databases on Windows.
On the first DB the elapsed time is 4 sec, on the second 0.2 sec.
I didn't find any important difference between the init parameters (hidden parameters included).
The first DB has more memory (PGA) than the second.
The main time events while executing the anonymous block on the first DB are:
PL/SQL execution elapsed time
DB CPU
sql execute elapsed time
DB time
On the second DB they are the same, but much smaller.
If I use caching of temporary LOBs, both DBs work fine, but I cannot understand why the first DB works slowly when I use nocache temporary LOBs.
What can be the reason?

I don't think that is the problem. See the following outputs:
select * from V$PGASTAT order by name

NAME                                   VALUE           UNIT
PGA memory freed back to OS            49016834031616  bytes
aggregate PGA auto target              170893312       bytes
aggregate PGA target parameter         1073741824      bytes
bytes processed                        95760297282560  bytes
cache hit percentage                   93,43           percent
extra bytes read/written               6724614496256   bytes
global memory bound                    107366400       bytes
max processes count                    115
maximum PGA allocated                  2431493120      bytes
maximum PGA used for auto workareas    372516864       bytes
maximum PGA used for manual workareas  531456          bytes
over allocation count                  102639421
process count                          57
recompute count (total)                117197176
total PGA allocated                    1042407424      bytes
total PGA inuse                        879794176       bytes
total PGA used for auto workareas      757760          bytes
total PGA used for manual workareas    0               bytes
total freeable PGA memory              75694080        bytes
select * from V$PGA_TARGET_ADVICE_HISTOGRAM where PGA_TARGET_FACTOR = 1

(every row has PGA_TARGET_FOR_ESTIMATE = 1073741824, PGA_TARGET_FACTOR = 1, ADVICE_STATUS = ON)

LOW_OPTIMAL_SIZE  HIGH_OPTIMAL_SIZE  ESTD_OPTIMAL_EXECUTIONS  ESTD_ONEPASS_EXECUTIONS  ESTD_MULTIPASSES_EXECUTIONS  ESTD_TOTAL_EXECUTIONS  IGNORED_WORKAREAS_COUNT
2199023255552     4398046510079      0                        0                        0                            0                      0
1099511627776     2199023255551      0                        0                        0                            0                      0
549755813888      1099511627775      0                        0                        0                            0                      0
274877906944      549755813887       0                        0                        0                            0                      0
137438953472      274877906943       0                        0                        0                            0                      0
68719476736       137438953471       0                        0                        0                            0                      0
34359738368       68719476735        0                        0                        0                            0                      0
17179869184       34359738367        0                        0                        0                            0                      0
8589934592        17179869183        0                        0                        0                            0                      0
4294967296        8589934591         0                        0                        0                            0                      0
2147483648        4294967295         0                        0                        0                            0                      0
1073741824        2147483647         0                        0                        0                            0                      0
536870912         1073741823         0                        0                        0                            0                      0
268435456         536870911          0                        0                        0                            0                      0
134217728         268435455          0                        376                      0                            376                    0
67108864          134217727          0                        0                        0                            0                      0
33554432          67108863           0                        0                        0                            0                      0
16777216          33554431           1                        0                        0                            1                      0
8388608           16777215           10145                    45                       0                            10190                  0
4194304           8388607            20518                    21                       0                            20539                  0
2097152           4194303            832                      1                        0                            833                    0
1048576           2097151            42440                    0                        0                            42440                  0
524288            1048575            393113                   7                        0                            393120                 0
262144            524287             10122                    2                        0                            10124                  0
131072            262143             22712                    0                        0                            22712                  0
65536             131071             110215                   0                        0                            110215                 0
32768             65535              0                        0                        0                            0                      0
16384             32767              0                        0                        0                            0                      0
8192              16383              0                        0                        0                            0                      0
4096              8191               0                        0                        0                            0                      0
2048              4095               83409618                 0                        0                            83409618               0
1024              2047               0                        0                        0                            0                      0
0                 1023               0                        0                        0                            0                      0
SELECT optimal_count, round(optimal_count*100/total, 2) optimal_perc,
       onepass_count, round(onepass_count*100/total, 2) onepass_perc,
       multipass_count, round(multipass_count*100/total, 2) multipass_perc
FROM
  (SELECT decode(sum(total_executions), 0, 1, sum(total_executions)) total,
          sum(OPTIMAL_EXECUTIONS) optimal_count,
          sum(ONEPASS_EXECUTIONS) onepass_count,
          sum(MULTIPASSES_EXECUTIONS) multipass_count
   FROM v$sql_workarea_histogram);

OPTIMAL_COUNT  OPTIMAL_PERC  ONEPASS_COUNT  ONEPASS_PERC  MULTIPASS_COUNT  MULTIPASS_PERC
12181507016    100           146042         0             0                0
-
Performance issue with MSEG table
Hi all,
I need to fetch materials (MATNR) based on the service order number (AUFNR) in the selection screen, but there is a performance issue with this. How can I overcome it?
Regards ,
Amit

Hi,
There could be various reasons for a performance issue with MSEG:
1) The database statistics of tables and indexes are not up to date; because of this the wrong index is chosen during execution.
2) Improper indexes: there is no index containing the fields mentioned in the WHERE clause of the statement. For this reason the CBO may have chosen a wrong index and done a range scan.
3) An optimizer bug in Oracle.
4) The table is very large; consider archiving.
Better to switch on an ST05 trace before you run these statements; it will give detailed information about where exactly the time is being spent during execution.
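If the requirement is only "materials per order", one option worth testing is to read via table AUFM (goods movements for order), which carries AUFNR and is typically much cheaper to access by order number than scanning MSEG. A sketch (s_aufnr and lt_mat are assumed names):

```abap
TYPES: BEGIN OF ty_mat,
         aufnr TYPE aufnr,
         matnr TYPE matnr,
       END OF ty_mat.

DATA lt_mat TYPE TABLE OF ty_mat.

* AUFM holds the goods movements per order and can be selected
* by order number directly.
SELECT aufnr matnr
  FROM aufm
  INTO TABLE lt_mat
  WHERE aufnr IN s_aufnr.

SORT lt_mat BY aufnr matnr.
DELETE ADJACENT DUPLICATES FROM lt_mat COMPARING aufnr matnr.
```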
Hope this helps
dileep -
LIS Info Structure with table MKPF/MSEG
Hi Gurus,
I have a view made up of table MKPF/MSEG (74 fields in totals). Based on my requirement (loading archived and non archived marerial movements data) I am looking to create an Info structure with the same 74 fields - to connect the LIS infostructures to BW.
I need to create characteristic and key figure field catalogs.
My issue is that while doing this I am not able to get all 74 fields I am looking for (I only get around 50).
I know how to create an Infostructure/ Catalogue (Transaction MC21/MC18)
Any Idea how to Create an Info Structure that would have all the fields from table MKPF and MSEG?
Regards
Edited by: Blaiso on Jun 2, 2011 6:10 PM

Hi,
Steps in LIS extraction:

T-code MC18 - create field catalog
1. Characteristic catalog
Application: 01 Sales and Distribution, 02 Purchasing, 03 Inventory Controlling, etc.
Catalog category: 1. Characteristic catalog, 2. Key figures catalog, 3. Date catalog. Select the characteristic catalog and press Enter; click on Characteristic, select the source table (it will display the relevant source fields), select the source field, Copy + Close, Copy.
Save. Similarly, create the key figures catalog.

T-code MC21 - create info structure
Example:
Info structure: S789
Application: 01
Choose Characteristic, select the catalog, select the fields, Copy + Close. Choose the key figures catalog, select the key figures, Copy + Close. Save and generate.

T-code MC24 - create updating
Info structure: S789
Update group: 01 (sales document, delivery, billing document); press Enter. Select the key figures, click on "Rules for key figures", choose "Suggest rules", copy, save and generate. Click on Updating (activate updating). Select the info structure and set the periodic split: 1. Daily, 2. Week, 3. Month, 4. Posting period. Updating: 1) No updating, 2) Synchronous updating (V1), 3) Asynchronous updating (V2), 4) Asynchronous updating (V3).

T-code LBW0 - connection of LIS information structures to SAP BW
Information structure: S789. Select the radio button "Setup LIS environment" and execute.
Select the radio button "Generate data source" and execute.
For delta update:
Select the radio button "Generate updating" and execute, then select the radio button "Activate / deactivate" and execute.

T-code SBIW - Display IMG (implementation guide): Settings for application-specific data sources - Logistics - Managing transfer information structures - Setup of statistical data - Application-specific setup of statistical data - Perform statistical setup - Sales.
Choose the activity:
Setup - orders, deliveries, billing.
Choose the activities, enter the info structure (S789), give the name of the run, date of termination, time of termination, and the number of tolerated faulty documents. Then execute.

T-code RSA3 - extractor checker
Give the data source name, e.g. 2LIS_01_S789, and execute; the result should contain some records. On the BW side: replicate the data source - assign the infosource - create the infocube - create update rules - create an infopackage and schedule the package with "initialize delta process".

For delta update:
On the R/3 side:
T-code MC25: set update to (V1), (V2) or (V3).
T-code LBW0: choose "Generate updating" and execute, then choose "Activate / deactivate" and execute.
On the BW side: create an infopackage and schedule the package with delta update.
The first time you schedule the infopackage, set updating in MC25 (R/3 side) to "No update" instead of selecting V1, V2 or V3.
When you then do the delta update, set updating in MC25 to V1, V2 or V3, then in LBW0 select the radio button "Activate / deactivate" and execute, and schedule the infopackage with delta update.

Modules for LIS: SD, MM, PP, QM.

Deltas for LIS:
After setting up the LIS environment, 2 transparent tables and 1 extract structure are generated for this particular info structure. In transaction SE11 you can view the tables 'SnnnBIW1' and 'SnnnBIW2', the structure 'SnnnBIWS', and the info structure itself, 'Snnn'.
The tables SnnnBIW1 and SnnnBIW2 are used to assist the delta update process within BW.
The extract structure 'SnnnBIWC' is used as an interface structure between the OLTP info structure and BW.
The OLTP system automatically creates an entry in the control table 'TMCBIW'. In transaction SE16 you'll see that for your particular info structure the field 'BIW active' has the value 'X' and the field 'BIW status' is filled with the value '1' (refers to table SnnnBIW1).
The original LIS update program RMCX#### is enhanced with the form routines 'form SnnnBIW1_update...' and 'form SnnnBIW2_update'.
With transaction SE38 you'll see at the end of the program (starting at line 870 / 1006) that the program is enhanced with 'BIW delta update' coding.
Via the flag 'Activate/Deactivate', the update process into the delta tables (SnnnBIW1/SnnnBIW2) is switched on/off. Table 'TMCBIW' defines which table is active for delta update.
Regards,
Prakash -
Very bad performance of NWBC in IE11
Hello colleagues!
I'm facing bad performance of NWBC in IE11 after an update to NW 7.02 SP16 / GRC 10 SP18. According to the PAM, IE 11 is supported.
When I log in and try to create an access request, I get a picture with a slowly spinning circle
(the message at the bottom of the window indicates that I've switched off all security settings and plug-ins).
After 1-1.5 minutes I get a normal screen.
If I select an entry from any drop-down list or perform any other action, the window hangs ("Not responding").
Has anyone experienced the same problem in GRC or similar products?
My search results for this issue were not successful.
I suspect that the problem is in NWBC, because webgui works fine, and hope that someone can provide the corrections.
I've tried to use HttpWatch, but it hasn't shown any useful information. I only identified that the big time consumption happens when loading one page (see below).
Could anyone advise how to resolve the issue?
System parameters:
SAP_BASIS 702 0016 SAPKB70216 SAP Basis system
SAP_ABA 702 0016 SAPKA70216 Components common to all applications
PI_BASIS 702 0016 SAPK-70216INPIBASIS Basis Plug-In
ST-PI 2008_1_700 0011 SAPKITLRDK SAP Solution Tools Plug-In
SAP_BW 702 0016 SAPKW70216 SAP Business Warehouse
GRCFND_A V1000 0018 SAPK-V1018INGRCFNDA GRC Foundation ABAP
ST-A/PI 01Q_700 0002 SAPKITAB7L Servicetools for other App./Netweaver 04
Regards,
Artem
P.S. notes that I have checked:
http://service.sap.com/sap/support/notes/1717650
http://service.sap.com/sap/support/notes/1674530
http://service.sap.com/sap/support/notes/1937379
http://service.sap.com/sap/support/notes/1926394
http://service.sap.com/sap/support/notes/2016738

Hello Samuli!
I've implemented all corrections up to Corrections for unified rendering 702/17 VII.
Program WDG_MAINTAIN_UR_MIMES shows the following output:
Status
ZIP Archive Path in MIME Repository: /sap/public/bc/ur/ur_mimes_nw7.zip
ZIP File Prefix: mimes/
ZIP Archive Timestamp: 11.03.2015 16:37:21
ZIP Archive LOIO: 0019BBCA3D421DEDB6E50FEB0CAFEA8B
ZIP Archive Size [Bytes]: 12.879.368
Deployed ZIP Archive Timestamp: 11.03.2015 16:37:21
Deployment Timestamp: 11.03.2015 17:09:09
Number of deployed files: 8.161
Deployed version.properties:
urversion: 7.33.3.80.0
urrelease: UnifiedR_03_REL
urchangelist: 222951
urtimestamp: 201410291102
urimplementationversion: 03.000.20141028171614.0000
urspnumber: 00
ursppatchlevel: 0
urtarget: NewYork
codeline: $File: //tc1/uicore/UnifiedR_03_REL/src/_version/version/version.properties $
CLUR_NW7=>VERSION_PROPERTIES:
urversion: 7.33.3.80.0
urrelease: UnifiedR_03_REL
urchangelist: 222951
jschangelist: 223208
urtimestamp: 201410291102
urimplementationversion: 03.000.20141028171614.0000
urspnumber: 00
ursppatchlevel: 0
urtarget: NewYork
codeline: $File: //tc1/uicore/UnifiedR_03_REL/src/_version/version/version.properties $
Previously I had 7.33.3.68.0.
So, I don't know what else I can do... If I roll back to Internet Explorer 9, the problem disappears; however, another user with IE9 experiences the same problem. Firefox works fine.
Could you please give me another idea to resolve this?
Regards,
Artem -
CMP 6.1 Entity bad performance.
I'm using entity 1.1 EJBs on WL 6.1 and facing very bad performance:
around 150 ms for an insert (I have 20 columns).
When accessing an order interface to read 2 fields in a session bean method: around
90 ms.
I'm very disappointed and confused. What should I look at
to increase the performance? Any important tuning or parameters? Should I use EJB
2.0 to get significant performance?
Thanks for any advice, because we are thinking of switching the whole application to stored
procedures: a solution without entity beans and fewer stateless session beans.
My config:
WL: 6.1 on Sun SPARC
DBMS: Sybase
Entity: WebLogic 6.0.0 EJB 1.1 RDBMS (weblogic-rdbms11-persistence-600.dtd)
Thanks

Historically it's hard to get good performance and scalability out of Sybase
without using stored procs. Dynamic SQL on Sybase just doesn't do as
well as procs. Oracle, on the other hand, can get very close to stored-proc
speed out of well-written dynamic SQL.
As far as WebLogic goes, my experience is that the focus of their testing for DB-related
stuff is Oracle, then DB2, then MS SQL Server. Sybase is usually last
on the list.
As far as the 6.1 CMP goes, I haven't used it much, but because of these other
things I would be cautious about using it with Sybase.
Joel
"Antoine Bas" <[email protected],> wrote in message
news:3cc7cdcf$[email protected]..
>
I'am using entity 1.1 EJB on WL 6.1 and facing very bad performances:
around 150ms for an insert (i have 20 columns).
When accessing an order interface to read 2 fields in a session beanmethod: around
90 ms.
I'am very disapointed and confused. What should I look up for
to increase the performance ? Any important tuning or parameters ? ShouldI use EJB
2.0 to have significant perf ?
Thanks for any advice because we are thinking to switch all theapplication on stored
procedures. A solution without Entity and fewer stateless session beans.
My config:
WL: 6.1 on Sun sparc
SGBD: Sybase
Entity: WebLogic 6.0.0 EJB 1.1 RDBMS(weblogic-rdbms11-persistence-600.dtd)
>
Thanks -
Bad performance when open a bi publisher report in excel
We use BI Publisher (XML Publisher) to create a customized report. For a small report, users like it very much. But for a bigger report, users complain about the performance when they open the file.
I know it is not a native Excel file; that may cause the bad performance. So I asked my user to save it as a new file in native Excel format. The new file is still worse than a normal Excel file when we open it.
I did a test: when we save a BI Publisher report to Excel format, the size shrinks to 4 MB. But if we "Copy All" and "Paste Special" (values only) into a new Excel file, the size is only 1 MB.
Is there any way to improve this? Users are complaining every day. Thanks!
I did a test today.
I created a test report.
Test 1: The original file from BIP in EBS is 10 MB. We save it to my local disk; when we open the file, it takes 43 sec.
Test 2: We save the file in native Excel format; the file size is 2.28 MB, and it takes 7 sec to open.
Test 3: We copy all cells and "Paste Special" (values only) into a new Excel file; the file size is 1.66 MB, and it takes only 1 sec to open.
Edited by: Rex Lin on 2010/3/31 11:26 PM

EBS or standalone BIP?
If EBS, see this thread for suggestions on performance tuning and hints and tips:
EBS BIP Performance Tuning - Definitive Guide?
Note also that I did end up rewriting my report as PL/SQL producing a CSV file, and have done so with several large reports in BIP on EBS.
Cheers,
Dave -
Bad performance when calling a function in where clause
Hi All,
I have a performance problem when executing a query that contains a function call in my where clause.
I have a query with some joins and a where clause with some regular filters. But one of these filters is a function, and its input parameters are columns of the tables used in the query.
When I run it with only a few rows in the tables, it is fine. But as the number of rows grows, performance degrades geometrically, even when my WHERE clause filters the result down to only a few rows.
If I take the function call off of the where clause, then run the query and then call the function for each returned row, performance is ok. Even when the number of returned rows is big.
But I need the function call to be in the where clause, because I can't build a procedure to execute it.
Does anyone have any clue on how to improve performance?
Thanks,
Rafael

You have given very little information...
>
If I take the function call off of the where clause, then run the query and then call the function for each returned row, performance is ok. Even when the number of returned rows is big. Can you describe how you measured the performance for a big result set without the function? For example, let's say 10,000 rows were returned (which is not really big, but it is a starting point). Did you fetch all 10,000 rows? A typical mistake is to execute the query in a tool like Oracle SQL Developer or TOAD and measure how fast the first couple of rows are returned, not the performance of the full select.
As you can see from this little detail there are many questions that you need to address first before we can drill down to the root of your problem. Best way is to go through the thread that Centinul linked and provide all that information first. During the cause of that you might discover that you learn things on the way that help a lot for later tuning problems/approaches.
Edited by: Sven W. on Aug 17, 2009 5:16 PM -
Bad performance updating purchase order (ME22N)
Hello!
Recently, we have been facing bad performance when updating purchase orders with transaction ME22N. The problem has occurred since we implemented change documents for a custom table T. T is used to store additional data for purchase order items, using BAdIs ME_PROCESS_PO_CUST and ME_GUI_PO_CUST.
I created a change document C_T for T using transaction SCDO. The update module of the change document is triggered in the method POST of BAdI ME_PROCESS_PO_CUST.
Checking transaction SM13, I noticed that the update requests of ME22N keep status INIT for several minutes before they are processed. I also tried excluding the call of the update module for change document C_T (in method POST); the performance problem still occurs!
The problem only occurs with transaction ME22N, thus I assume that the reason is the new change document C_T.
Thanks for your help!
Greetings,
Wolfgang

I agree with Vikram: we don't have enough information, not even a small hint on the usage of this field, so what answer do you expect? (The quality of an answer depends...) This analysis must be executed on your system...
From a technical point of view, BAPI_PO_CHANGE has an EXTENSIONIN table parameter; fill it using structures BAPI_TE_MEPOITEM[X], which already contain CI_EKPODB (*) and CI_EKPODBX (**).
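A rough sketch of filling EXTENSIONIN that way. The zz* field is hypothetical (your actual fields come from your CI_EKPODB include), and the flat-structure-to-container move assumes all fields involved are character-like:

```abap
DATA: ls_te  TYPE bapi_te_mepoitem,
      ls_tex TYPE bapi_te_mepoitemx,
      ls_ext TYPE bapiparex,
      lt_ext TYPE TABLE OF bapiparex.

* Fill the customer include fields for one PO item.
ls_te-po_item = '00010'.
* ls_te-zzmyfield = 'VALUE'.         " hypothetical field from CI_EKPODB

ls_ext-structure  = 'BAPI_TE_MEPOITEM'.
ls_ext-valuepart1 = ls_te.           " move the flat structure into the container
APPEND ls_ext TO lt_ext.

* The X-structure flags which fields are to be changed.
ls_tex-po_item = '00010'.
* ls_tex-zzmyfield = 'X'.
CLEAR ls_ext.
ls_ext-structure  = 'BAPI_TE_MEPOITEMX'.
ls_ext-valuepart1 = ls_tex.
APPEND ls_ext TO lt_ext.

* Pass lt_ext in the EXTENSIONIN table parameter of BAPI_PO_CHANGE,
* then call BAPI_TRANSACTION_COMMIT.
```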
Regards,
Raymond
(*) I guess you have used this include
(**) I guess you forgot this one (same field names but data element always BAPIUPDATE) -
Bad performance in web intelligence reports
Hi,
We use Business Objects with Web Intelligence documents and Crystal Reports.
We are supporting bad performance when we use the reports specilly when we need to change the drill options
Can someone telling me if exists some best practices to improve performance? What features should i look to?
Best Regards
Hi,
Thank you for your interest. I know this is an issue with many variables; that is why I need information about anything that could cause bad performance.
By bad performance I mean the time we spend running and refreshing report data.
We have reports with many lines, but the performance is bad even when only a few users are in the system.
Best Regards
João Fernandes -
Help: Bad performance in marketing documents!
Hello,
When creating an AR delivery note with about 10 lines, we have noticed that the creation of lines becomes slower and slower. This especially happens when tabbing out of the system field "Quantity": before going to the next field, it stays in the Quantity field for about 5 seconds!
The number of formatted searches in the AR delivery note is only 5, and only one is automatic. The number of user fields is about 5.
We have heard about bad performance when the number of lines with formatted searches increases in a document, but it is odd for this to happen with only about 10 lines.
We are using PL16, and this issue seems to have been solved already in PL10.
Could you throw some light on this?
Thanks in advance,

It is solved now.
It had to do with the automatic formatted search in 2 header fields.
If the automatic search is removed, the performance is OK.
Hope it helps you.