Help! Poor Performance for BW reporting in WAD
Hi, Gurus:
I have one BW report: running it in the BEx Analyzer returns a result within 10 seconds, but running it in WAD takes more than 4 minutes, even though it returns no data. I'm really confused by this behavior. Please help, thanks in advance!
Hi
Please check this link:
Re: Tips for Frontend performance - Web Reports (WAD).
Hope this helps.
Cheers
Chanda
Similar Messages
-
WAD-Analysis Item-Poor performance for new lines
Hi.
I use WAD7 for entering new records in IP.
According to our business requirements, in analysis item properties I have defined "Number of New Lines in Planning Queries" = 50.
After that I see extremely poor performance: it takes about 1-2 minutes until the page has loaded (note that it is an empty page - only new blank rows).
When I define new lines = 1 performance is very good (2-3 sec).
Does anybody know what could be the problem ?
Thanks.
Hello Andrey,
the number of new lines configured in WAD is completely unknown to the ABAP backend and has no impact on backend performance; in fact, the front end gets information about one 'line' of template cells, and this costs nothing.
Checks for characteristic relationships etc. may only have significant costs for cells of the result set, not for new lines.
Whether this problem comes from the ABAP backend, from the Java stack, or from the browser rendering can be checked
with an RSTT trace. Is the run time of the trace different between 1 and 50 new lines? Were different backend calls recorded?
If not, the problem comes from the Java stack or from the browser rendering. One can check the latter via the task manager.
To check (at least partially) the run time in the Java and/or ABAP stack, add the parameter &PROFILING=X to the URL; cf. SAP Note 1048691.
Regards,
Gregor -
Help with Sql for Annual Report per month
Hi, I have been given the task of creating an annual report by month that shows the company's profits per month, with totals in the last column, to show which branch had the highest income.
Branch||January||February||March||April||May||June....||Total||
ABC ||$0.00 ||$0.00 ||$0.00||$0.00||$0.00||$0.00||Total Amt||
DEF ||$18.01 ||$3.88 ||$18.01||$4.12||$18.01||$3.97||Total Amt||
Can anyone please give me an idea of how to write the SQL for this report? I am building sub-queries for every month by giving the dates for Jan/Feb/March, but I think this is not the right way to do it:
SELECT
  sum(a.commission) December,
  sum(b.commission) November
FROM
  ( SELECT
      c.account_id,
      c.officer,
      c.account_product_class_id,
      sum(c.dp_monthly_premium) Commission
    FROM contract c
    WHERE c.account_id = 109
      AND c.status = 'APPROVED'
      AND c.protection_effective BETWEEN '01-DEC-2009' AND '31-DEC-2009'
    GROUP BY c.account_id, c.officer, c.account_product_class_id
  ) a,
  ( SELECT
      c.account_id,
      c.officer,
      c.account_product_class_id,
      sum(c.dp_monthly_premium) Commission
    FROM contract c
    WHERE c.account_id = 109
      AND c.status = 'APPROVED'
      AND c.protection_effective BETWEEN '01-NOV-2009' AND '30-NOV-2009'
    GROUP BY c.account_id, c.officer, c.account_product_class_id
  ) b
I always have high hopes for this forum, so please help. Thanks in advance.
Edited by: Aditi_Seth on Jan 26, 2010 2:29 PM
You may try a group report on one simple query like:
Select
  c.account_id, c.officer, to_char(c.protection_effective, 'MM') month,
  sum(c.dp_monthly_premium) Commission
From
  contract c
Where
  c.status='APPROVED' and .....
Group by
  c.account_id,
  c.officer,
  to_char(c.protection_effective, 'MM')
Break/group on account_id, c.officer and to_char(c.protection_effective, 'MM'), and the total will be calculated automatically by Reports. -
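The same cross-tab can also be produced with plain conditional aggregation: one SUM(CASE ...) column per month, in a single pass over the table, instead of one self-joined subquery per month. A minimal sketch of the idea, with hypothetical miniature data and SQLite standing in for Oracle (so strftime plays the role of TO_CHAR):

```python
import sqlite3

# Tiny stand-in for the CONTRACT table described in the thread.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE contract (
    account_id INTEGER, officer TEXT, status TEXT,
    protection_effective TEXT,      -- ISO date, e.g. '2009-11-15'
    dp_monthly_premium REAL)""")
con.executemany("INSERT INTO contract VALUES (?,?,?,?,?)", [
    (109, 'DEF', 'APPROVED', '2009-11-05', 3.88),
    (109, 'DEF', 'APPROVED', '2009-12-10', 18.01),
    (109, 'DEF', 'PENDING',  '2009-12-20', 99.99)])  # filtered out by status

# One pass over the table: each month is a conditional SUM column,
# and the row total is the unconditional SUM.
rows = con.execute("""
    SELECT account_id, officer,
           SUM(CASE WHEN strftime('%m', protection_effective) = '11'
                    THEN dp_monthly_premium ELSE 0 END) AS november,
           SUM(CASE WHEN strftime('%m', protection_effective) = '12'
                    THEN dp_monthly_premium ELSE 0 END) AS december,
           ROUND(SUM(dp_monthly_premium), 2)          AS total
    FROM contract
    WHERE status = 'APPROVED'
    GROUP BY account_id, officer
""").fetchall()
print(rows)
```

In Oracle the month key would be TO_CHAR(protection_effective, 'MM') instead of strftime, and twelve CASE columns replace the twelve per-month subqueries of the original attempt.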
Poor performance for full screen desktop
Hi,
A full-screen desktop (displayed as a kiosk) of Linux with GNOME (I believe it's the same for all window managers) performs poorly; even with a command like ls you can see the delays.
It happens on the local network. Connection to the application server is SSH.
SGD server - Solaris 10 , Sun Fire 280. Application server is regular modern PC.
Regular windows performance is very good.
Any suggestions ?
Thanks
I think you will find the poor performance is only with GTK applications.
For example, if you go into a large directory of files, and do an ls -aRl, you will notice it is a lot slower with a gnome-terminal than it is with a plain xterm.
I think 4.3 will resolve this performance issue. -
Need help in Plsql for Oracle Reports
Hi, this is an urgent requirement; kindly guide me, since I couldn't work it out myself.
Main Table:
For example: consider this table (Card_No, Month and Year form a unique constraint):
Card_No Name Amount_1 Amount _2 Total Month Year
1 Justin 50 1000 1050 Jan 2011
1 Justin 100 500 600 Feb 2011
2 Charles 50 100 150 Jan 2011
1 Justin 100 50 150 Jan 2012
1 Justin 50 1000 1050 Feb 2012
1 Justin 100 500 600 Mar 2012
2 Charles 50 100 150 Jan 2012
1 Justin 100 50 150 Jan 2012
Now i need Reports like this:
1. When i select Year as 2012 and select amount1, i need report like this
Card_No Name Jan Feb Mar Apr May June . . . . Dec Total
1 Justin 100 50 100 0 0 0 . . . 0 250
2 Charles 50 0 0 0 0 0 . . . 0 50
2. If i select many years and select amount1, i need report like this
Card_No Name Year Jan Feb Mar Apr May June . . . . Dec Total
1 Justin 2012 100 50 100 0 0 0 . . . 0 250
1 Justin 2011 50 100 0 0 0 0 0 150
2 Charles 2012 50 0 0 0 0 0 . . . 0 50
2 Charles 2011 50 0 0 0 0 0 . . . 0 50
These are the two requirements
Kindly guide me; any other suggestions will also be helpful.
Thanks in advance
Actually, you need to use a matrix report as your report style to meet this requirement.
However, please try the file below if you use Oracle 10g (I am not quite sure, as it was checked a long time back):
https://docs.google.com/file/d/0B6k7l8hLvpK2UnJwUDFRR1N5d2s/edit
Best of luck
Edited by: AppsLearner on Aug 3, 2012 12:49 AM -
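For what it's worth, the multi-year layout in requirement 2 can also be done without a matrix report, by grouping on (card_no, year) and pivoting the months with conditional aggregation. A minimal sketch with hypothetical names, using SQLite via Python rather than Oracle Reports; only Jan-Mar are shown, the remaining months follow the same pattern:

```python
import sqlite3

# Tiny stand-in for the main table from the question (months stored as text).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE card_txn (
    card_no INTEGER, name TEXT, amount_1 REAL, month TEXT, year INTEGER)""")
con.executemany("INSERT INTO card_txn VALUES (?,?,?,?,?)", [
    (1, 'Justin', 100, 'Jan', 2012), (1, 'Justin',  50, 'Feb', 2012),
    (1, 'Justin', 100, 'Mar', 2012), (2, 'Charles', 50, 'Jan', 2012),
    (1, 'Justin',  50, 'Jan', 2011), (1, 'Justin', 100, 'Feb', 2011)])

# One row per (card, year); each month is a conditional SUM column and
# Total is the unconditional SUM over the whole year.
rows = con.execute("""
    SELECT card_no, name, year,
           SUM(CASE WHEN month = 'Jan' THEN amount_1 ELSE 0 END) AS jan,
           SUM(CASE WHEN month = 'Feb' THEN amount_1 ELSE 0 END) AS feb,
           SUM(CASE WHEN month = 'Mar' THEN amount_1 ELSE 0 END) AS mar,
           SUM(amount_1) AS total
    FROM card_txn
    GROUP BY card_no, name, year
    ORDER BY card_no, year DESC
""").fetchall()
for r in rows:
    print(r)
```

Restricting to a single year (requirement 1) is just an added WHERE year = :p_year, and in Oracle 11g and later the PIVOT clause can express the same thing more compactly.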
Poor performance for the 1st select everyday for AFRU table
Hello everyone, I have performance problems with the AFRU table. Every day, the first time I run a "Z" transaction it takes around 100-120 seconds, but the second and subsequent runs take only four seconds. What can I do to reduce the first execution time?
This is the select:
SELECT * FROM AFRU WHERE MANDT = :A0 AND CATSBELNR = :A1 AND BUDAT = :A2 AND PERNR = :A3 AND STOKZ <> :A4 AND STZHL = :A5
The execution plan for this SELECT uses index AFRU~ZCA with an acceptable cost of 6.319. Index AFRU~ZCA is a nonunique index with these columns: MANDT + CATSBELNR + BUDAT + PERNR.
I'll appreciate any ideas.
Thanks in advance,
Santi.
What database system are you using (ASE, Oracle, etc.)?
If ASE, for the general issue of the first execution of a query taking longer, the two most likely reasons would be:
a) the table's data has aged out of cache so the query has to do a lot of physical i/o to read the data back into cache
or
b) the query plan for the query has aged out of statement cache and needs to be recompiled.
This query looks pretty simple, so the data cache seems much more likely.
To get a better feel, some morning run the query with
set statistics io on
set statistics time on
then run it again and look for differences in the physical vs logical i/o numbers and compile vs execution times.
You could use a scheduled event (Job Scheduler, cron job) to run the query or some query very like it a little earlier in the day to prime the data cache with the table data. -
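If the daily cold-cache theory holds, the priming job can be as simple as a scheduled run of the same SELECT before business hours. A hypothetical crontab entry (server name, login, and script path are made up; prime_afru.sql would hold the SELECT against AFRU, and credentials should be handled per your site's standards):

```shell
# Run at 06:30 on weekdays, so the first user's query finds AFRU pages in cache.
30 6 * * 1-5 isql -S PRODSRV -U batch_user -P "$PRIME_PW" -i /opt/scripts/prime_afru.sql -o /tmp/prime_afru.log
```

The query itself does not need to return anything useful; its only job is to pull the table (or index) pages into the data cache.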
Poor performance for my wlan in conference rooms
Hi,
I have real problems in my conference rooms. I have deployed about 25 APs for my building: 1242, 1131, 3501 and 1142 models, in a 2-story building. I've used the WCS maps feature to generate a coverage-area map. I think the upstairs and downstairs APs are interfering with each other. It was suggested I put 3 APs into one conference room, each with its own a/b or g radio. How do I pinpoint what is causing the connectivity problems in my conference rooms?
Also, have you tried manually adjusting power levels? I believe once I start with that, I'll have to touch each and every AP. Any suggestions?
Thanks
Remember that when working with wireless we should always do a site survey to determine the site's current RF environment, where to locate the access points, how many access points to get, and where to install each AP so that the overlap between APs is no more than 15%.
Also, once the APs are installed, you can use the WLC options to check how many rogue APs the WLC's APs are seeing, since these rogues affect your APs: auto-RRM will not be able to determine which channel to use on the APs managed by your WLC, so you would need to configure it manually. You can also go to the Monitor tab and select each AP to see how it sees the signal of the other APs managed by the WLC.
For auto-RRM to work on the WLC, each AP needs to see at least 3 other APs with a good signal in order to set the correct power and channel.
Sent from Cisco Technical Support iPhone App -
Need help with performance for very very huge tables...
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production.
My DB has many tables and out of which I am interested in getting data from product and sales.
select /*+ parallel(32) */ count(1) from (
  select /*+ parallel(32) */ distinct prod_code from product pd, sales s
  where pd.prod_opt_cd is NULL
  and s.sales_id = pd.sales_id
  and s.creation_dts between to_date('2012-07-01','YYYY-MM-DD')
                         and to_date('2012-07-31','YYYY-MM-DD')
)
More information -
Total Rows in sales table - 18001217
Total rows in product table - 411800392
creation_dts dont have index on it.
I started the query in the background, but after 30 hours I saw this error:
ORA-01555: snapshot too old: rollback segment number 153 with name
Is there any other way to get the above data in an optimized way?
Formatting your query a bit (and removing the hints), it evaluates to:
SELECT COUNT(1)
FROM (SELECT DISTINCT prod_code
FROM product pd
INNER JOIN sales s
ON s.sales_id = pd.sales_id
WHERE pd.prod_opt_cd is NULL
AND s.creation_dts BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD')
AND TO_DATE('2012-07-31','YYYY-MM-DD')
);
This should be equivalent to:
SELECT COUNT(DISTINCT prod_code)
FROM product pd
INNER JOIN sales s
ON s.sales_id = pd.sales_id
WHERE pd.prod_opt_cd is NULL
AND s.creation_dts BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD')
AND TO_DATE('2012-07-31','YYYY-MM-DD');
On the face of it, that's a ridiculously simple query. If s.sales_id and pd.sales_id are both indexed, then I don't see why it would take a huge amount of time. Even having to perform a full table scan on the sales table because creation_dts isn't indexed shouldn't make it a 30-hour query. If either of those two is not indexed, then joining the two tables is a much uglier prospect. However, if you often join the product and sales tables (which seems likely), then not having those fields indexed would be contraindicated. -
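The claimed equivalence of the two forms is easy to sanity-check on a toy dataset. A small sketch with hypothetical miniature versions of the product and sales tables, run in SQLite via Python:

```python
import sqlite3

# Tiny stand-ins for PRODUCT and SALES, to check that both rewrites
# in the reply return the same count.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (sales_id INTEGER, creation_dts TEXT);
CREATE TABLE product (prod_code TEXT, prod_opt_cd TEXT, sales_id INTEGER);
INSERT INTO sales VALUES (1, '2012-07-05'), (2, '2012-07-20'), (3, '2012-08-01');
INSERT INTO product VALUES
  ('A', NULL, 1), ('A', NULL, 2),   -- same code twice: DISTINCT matters
  ('B', NULL, 2), ('C', 'X', 2),    -- C excluded: prod_opt_cd not null
  ('D', NULL, 3);                   -- D excluded: sale outside July
""")

q1 = con.execute("""
    SELECT COUNT(1) FROM (
      SELECT DISTINCT prod_code
      FROM product pd JOIN sales s ON s.sales_id = pd.sales_id
      WHERE pd.prod_opt_cd IS NULL
        AND s.creation_dts BETWEEN '2012-07-01' AND '2012-07-31')
""").fetchone()[0]

q2 = con.execute("""
    SELECT COUNT(DISTINCT prod_code)
    FROM product pd JOIN sales s ON s.sales_id = pd.sales_id
    WHERE pd.prod_opt_cd IS NULL
      AND s.creation_dts BETWEEN '2012-07-01' AND '2012-07-31'
""").fetchone()[0]

print(q1, q2)
```

Both forms count the distinct codes {A, B}; the second simply avoids materializing the inline view, which is the point of the rewrite.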
Poor performance on reports that were migrated from 6i to 10g
We are migrating from 6i client server to 10g reports server and getting poor performance on various reports. Reports that work in seconds in 6i are taking much longer to run or even timing out.
Reports Server:
Version 10.1.2.0.2
initEngine = 1
maxEngine = 20
minEngine = 1
engLife = 1
engLife = 1
maxIdle = 30
The reports are being called from 10g forms with the following:
T_repstr := '../reports/rwservlet?server=rep_aporaapp_frhome1'
|| '&report=' || T_prog_name
|| '&userid=' || T_nds_uid
|| '&destype=cache'
|| '&paramform=yes'
|| '&mode=Default'
|| '&desformat=pdf'
|| '&orientation=Landscape';
web.show_document(T_repstr,'_blank');
Using these settings and not hearing many complaints:
Init Engine 1
Max Engine 6
Min Engine 0
Eng Life 10
MaxIdle 30
Trace Error
Trace Replace
I set my Report Server Parameters
CACHE SIZE - 700
CACHE DIRECTORY = (you have to decide)
IDLE timeout 120
Max Connections 120
Max Queue Size 4000
trace options = trace_err
trace mode trace_replace -
Poor Performance in ETL SCD Load
Hi gurus,
We are facing some serious performance problems during an UPDATE step, which is part of a SCD type 2 process for Assets (SIL_Vert/SIL_AssetDimension_SCDUpdate). The source system is Siebel CRM. The tools for ETL processing are listed below:
Informatica PowerCenter 9.1.0 HotFix2 0902 357 (R181 D90)
Oracle BI Data Warehouse Administration Console (Dac Build AN 10.1.3.4.1.patch.20120711.0516)
The OOTB mapping for this step is a simple SELECT command - which retrieves historical records from the Dimension to be updated - and the Target table (W_ASSET_D), with no UPDATE Strategy. The session is configured to always perform UPDATEs. We also have set $$UDATE_ALL_HISTORY to "N" in DAC: this way we are only selecting the most recent records from the Dimension history, and the only columns that are effectively updated are the system columns of SCD (EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ...).
The problem is that the UPDATE command is executed individually by Informatica PowerCenter for each record in W_ASSET_D. For 2,486,000 UPDATEs, we had about 2 hours of processing - very poor performance for a single ETL step. Our W_ASSET_D has ~150M records today.
Some questions for the above:
- is this an expected average execution duration for this number of records?
- record-by-record updates are not optimal; this could easily be overcome with a BULK COLLECT/FORALL approach. Is there a way to optimize the method used by Informatica, or do we need to write our own PL/SQL script and run it from DAC?
Thanks in advance,
Guilherme
Hi,
Thank you for posting in Windows Server Forum.
Initially please check the configuration & requirement part for RemoteFX. You can follow below article for further research.
RemoteFX vGPU Setup and Configuration Guide for Windows Server 2012
http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
Hope it helps!
Thanks.
Dharmesh Solanki
TechNet Community Support -
Hi,
I have a problem updating several reports which either time out or take ages to respond when making simple changes. It doesn't seem to matter which mode you are in while making a change (whether in data or structure mode) nor the tool used (Webi or Rich Client).
I have read on other forums of users reporting similar problems, which result in an "unable to get the first page of the report" error.
To ensure it wasn't an issue inherited from a previous version of BO (as these reports were originally written in BOXI BI 3.1) I recreated the report from scratch only to hit the same issue when populating the report with various formula.
When this occurs (i.e. the unable to get the first page of the report error occurring) I am forced to save then close the Rich Client and then have to re-open the file each and every time.
We are currently using Business Objects BI 4.0 SP6 Edge. These reports consist of some 600+ variables; however, they never caused this issue in the older version of BO.
Please can someone suggest a solution to this issue as it is taking me much longer to make simple updates to these reports when it ought to be straight forward.
Cheers
Ian
Hi Farzana,
Thanks for your response. Yes, I had read this on a variety of forums and due to the poor performance wrote the report from scratch again.
Firstly, I took the structure of the report and saved it as a template. No issues. Then I added in the queries and variables. No issues. It was only once I had populated the report with the formulas/calculations (after about the halfway point) that I started to detect performance issues again.
This forced my hand and I used RESTful Web Service calls to complete the rest; otherwise it would have been a painful exercise. The report contains some 600+ variables and 750+ cells populated with formula calculations, so it is a busy report.
I would have thought others with complex reports might have reported similar performance issues, as this worked fine in our old BOXI 3.1 environment. -
Hi All,
I have users complaining of poor performance for TCP applications over a site to site VPN.
I would like to know if anyone knows what to look for when trying to see if we need to reduce the MTU on each side of the VPN.
I dont want to reconfigure MTU unless I have to because one of the two sites is the centre of the hub and I will most likely have to configure it for all of the other sites if I configure it for the one.
the VPNs run between 5510 devices running 7.08 and 8.21
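Before lowering the interface MTU everywhere, one thing worth looking at on the ASAs is the TCP MSS advertised across the tunnel: clamping the MSS affects only TCP and avoids per-site MTU changes. A hypothetical snippet; the value 1350 is just a common starting point for IPsec overhead, not a recommendation for your environment:

```text
! Global setting on each ASA (the platform default is 1380 bytes):
sysopt connection tcpmss 1350
```

Capturing traffic on both sides and looking for fragmentation or retransmissions would confirm whether MSS/MTU is actually the cause before changing anything.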
thanks very much for any help
Regards
Amanda Lalli-Cafini
I am having an issue with performance since changing our linked server connection to SQL 2012. The query in 2008 R2 ran in 9 seconds and now it takes around 10 minutes. When I trace the query on the 2 servers, the statement is completely different.
CREATE TABLE [dbo].[PS_OCC_ADDRESS_S](
[EMPLID] [varchar](11) NULL,
[ADDRESS_TYPE] [varchar](4) NULL,
[OCC_ADDR_TYP_DESCR] [varchar](30) NULL,
[ADDRESS1] [varchar](55) NULL,
[ADDRESS2] [varchar](55) NULL,
[ADDRESS3] [varchar](55) NULL,
[CITY] [varchar](30) NULL,
[STATE] [varchar](6) NULL,
[POSTAL] [varchar](12) NULL,
[COUNTY] [varchar](30) NULL,
[COUNTRY] [varchar](3) NULL,
[DESCR] [varchar](30) NULL,
[FERPA] [varchar](1) NULL,
[LASTUPDDTTM] [varchar](75) NULL,
[LASTUPDOPRID] [varchar](30) NULL
) ON [PRIMARY]
Statement:
SELECT EmplId, Address1, Address2, Address3, City, State = substring(State,1,2), Zip = substring(Postal,1,10), County, Country
--INTO tmp_Addr
FROM [Sql03].[DWHCRPT].[dbo].[PS_OCC_ADDRESS_S]
WHERE (Address_Type = 'PERM') and (EmplID IN (
SELECT UserID As EmplID FROM Collegium.dbo.Users
UNION
SELECT EmplID As EmplID FROM Emap.dbo.Applications
UNION
SELECT ID As EmplID FROM HS_Program_Application.dbo.Applications))
Any suggestions would be appreciated.
SELECT UserID As EmplID
INTO #temp1
FROM Collegium.dbo.Users
UNION
SELECT EmplID FROM Emap.dbo.Applications
UNION
SELECT ID FROM HS_Program_Application.dbo.Applications
....WHERE (Address_Type = 'PERM') and EmplID IN (SELECT EmplID FROM #temp1)
Even so, that may not fully account for 9 seconds versus 10 minutes. Good luck. -
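The rewrite being suggested (materialize the UNION of IDs once into a temp table, then semi-join against it, instead of re-evaluating the UNION inside the IN predicate) can be sketched like this, with throwaway local tables standing in for the linked-server ones, in SQLite via Python:

```python
import sqlite3

# Throwaway miniature tables; names loosely mirror the thread's schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (userid TEXT);
CREATE TABLE applications (emplid TEXT);
CREATE TABLE address (emplid TEXT, address_type TEXT, city TEXT);
INSERT INTO users VALUES ('100'), ('200');
INSERT INTO applications VALUES ('200'), ('300');
INSERT INTO address VALUES
  ('100', 'PERM', 'Springfield'),
  ('300', 'PERM', 'Shelbyville'),
  ('300', 'MAIL', 'Ogdenville'),
  ('400', 'PERM', 'Capital City');
""")

# Step 1: materialize the union of candidate IDs once, into a temp table.
con.executescript("""
CREATE TEMP TABLE ids AS
  SELECT userid AS emplid FROM users
  UNION
  SELECT emplid FROM applications;
""")

# Step 2: semi-join the address table against the temp table.
rows = con.execute("""
    SELECT a.emplid, a.city
    FROM address a
    WHERE a.address_type = 'PERM'
      AND a.emplid IN (SELECT emplid FROM ids)
    ORDER BY a.emplid
""").fetchall()
print(rows)
```

On SQL Server the win comes from keeping the ID list local, so the remote side is not dragged into re-evaluating the UNION for the distributed query.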
Performance issue in Report (getting time out error)
Hi experts,
I am doing performance tuning for a report that is getting a time-out error.
Please see the code below.
While looping over internal table IVBAP, after 25 minutes it shows a time-out error at this point:
SELECT MAX( ERDAT ) .
Please suggest alternative code for this.
Advance thanks
from
Nagendra
* Get Sales Order Details
CLEAR IVBAP.
REFRESH IVBAP.
SELECT VBELN POSNR MATNR NETWR KWMENG WERKS FROM VBAP
INTO CORRESPONDING FIELDS OF TABLE IVBAP
FOR ALL ENTRIES IN IVBAK
WHERE VBELN = IVBAK-VBELN
AND MATNR IN Z_MATNR
AND WERKS IN Z_WERKS
AND ABGRU = ' '.
* Check for Obsolete Materials - Get Product Hierarchy/Material Description
SORT IVBAP BY MATNR WERKS.
CLEAR: WK_MATNR, WK_WERKS, WK_PRDHA, WK_MAKTX,
WK_BLOCK, WK_MMSTA, WK_MSTAE.
LOOP AT IVBAP.
CLEAR WK_INVDATE. "I6677.sn
SELECT MAX( ERDAT ) FROM VBRP INTO WK_INVDATE WHERE
AUBEL EQ IVBAP-VBELN AND
AUPOS EQ IVBAP-POSNR.
IF SY-SUBRC = 0.
MOVE WK_INVDATE TO IVBAP-INVDT.
MODIFY IVBAP.
ENDIF. "I6677.e n
SELECT SINGLE * FROM MBEW WHERE "I6759.sn
MATNR EQ IVBAP-MATNR AND
BWKEY EQ IVBAP-WERKS AND
BWTAR EQ SPACE.
IF SY-SUBRC = 0.
MOVE MBEW-STPRS TO IVBAP-STPRS.
IVBAP-TOT = MBEW-STPRS * IVBAP-KWMENG.
MODIFY IVBAP.
ENDIF. "I6759.en
IF IVBAP-MATNR NE WK_MATNR OR IVBAP-WERKS NE WK_WERKS.
CLEAR: WK_BLOCK, WK_MMSTA, WK_MSTAE, WK_PRDHA, WK_MAKTX.
MOVE IVBAP-MATNR TO WK_MATNR.
MOVE IVBAP-WERKS TO WK_WERKS.
SELECT SINGLE MMSTA FROM MARC INTO MARC-MMSTA
WHERE MATNR = WK_MATNR
AND WERKS = WK_WERKS.
IF NOT MARC-MMSTA IS INITIAL.
MOVE '*' TO WK_MMSTA.
ENDIF.
SELECT SINGLE LVORM PRDHA MSTAE MSTAV FROM MARA
INTO (MARA-LVORM, MARA-PRDHA, MARA-MSTAE, MARA-MSTAV)
WHERE MATNR = WK_MATNR.
IF ( NOT MARA-MSTAE IS INITIAL ) OR
( NOT MARA-MSTAV IS INITIAL ) OR
( NOT MARA-LVORM IS INITIAL ).
MOVE '*' TO WK_MSTAE.
ENDIF.
MOVE MARA-PRDHA TO WK_PRDHA.
SELECT SINGLE MAKTX FROM MAKT INTO WK_MAKTX
WHERE MATNR = WK_MATNR
AND SPRAS = SY-LANGU.
ENDIF.
IF Z_BLOCK EQ 'B'.
IF WK_MMSTA EQ ' ' AND WK_MSTAE EQ ' '.
DELETE IVBAP.
CONTINUE.
ENDIF.
ELSEIF Z_BLOCK EQ 'U'.
IF WK_MMSTA EQ '' OR WK_MSTAE EQ ''.
DELETE IVBAP.
CONTINUE.
ENDIF.
ELSE.
IF WK_MMSTA EQ '' OR WK_MSTAE EQ ''.
MOVE '*' TO WK_BLOCK.
ENDIF.
ENDIF.
IF WK_PRDHA IN Z_PRDHA. "I4792
MOVE WK_BLOCK TO IVBAP-BLOCK.
MOVE WK_PRDHA TO IVBAP-PRDHA.
MOVE WK_MAKTX TO IVBAP-MAKTX.
MODIFY IVBAP.
ELSE. "I4792
DELETE IVBAP. "I4792
ENDIF. "I4792
IF NOT Z_ALNUM[] IS INITIAL. "I9076
SELECT SINGLE * FROM MAEX "I9076
WHERE MATNR = IVBAP-MATNR "I9076
AND ALNUM IN Z_ALNUM. "I9076
IF SY-SUBRC <> 0. "I9076
DELETE IVBAP. "I9076
ENDIF. "I9076
ENDIF. "I9076
ENDLOOP.
Hi Nagendra!
* Get Sales Order Details
CLEAR IVBAP.
REFRESH IVBAP.
* Check that IVBAK is not empty before the FOR ALL ENTRIES below (with an empty driver table it would select all rows):
SELECT VBELN POSNR MATNR NETWR KWMENG WERKS FROM VBAP
INTO CORRESPONDING FIELDS OF TABLE IVBAP
FOR ALL ENTRIES IN IVBAK
WHERE VBELN = IVBAK-VBELN
AND MATNR IN Z_MATNR
AND WERKS IN Z_WERKS
AND ABGRU = ' '.
* Check for Obsolete Materials - Get Product Hierarchy/Material Description
SORT IVBAP BY MATNR WERKS.
CLEAR: WK_MATNR, WK_WERKS, WK_PRDHA, WK_MAKTX,
WK_BLOCK, WK_MMSTA, WK_MSTAE.
Avoid SELECTs within the loop; instead, do the selection outside the loop into an internal table. You can then use a READ statement (and loop again if required).
LOOP AT IVBAP.
CLEAR WK_INVDATE. "I6677.sn
SELECT MAX( ERDAT ) FROM VBRP INTO WK_INVDATE WHERE
AUBEL EQ IVBAP-VBELN AND
AUPOS EQ IVBAP-POSNR.
IF SY-SUBRC = 0.
MOVE WK_INVDATE TO IVBAP-INVDT.
MODIFY IVBAP.
ENDIF. "I6677.e n
SELECT SINGLE * FROM MBEW WHERE "I6759.sn
MATNR EQ IVBAP-MATNR AND
BWKEY EQ IVBAP-WERKS AND
BWTAR EQ SPACE.
IF SY-SUBRC = 0.
MOVE MBEW-STPRS TO IVBAP-STPRS.
IVBAP-TOT = MBEW-STPRS * IVBAP-KWMENG.
MODIFY IVBAP.
ENDIF. "I6759.en
IF IVBAP-MATNR NE WK_MATNR OR IVBAP-WERKS NE WK_WERKS.
CLEAR: WK_BLOCK, WK_MMSTA, WK_MSTAE, WK_PRDHA, WK_MAKTX.
MOVE IVBAP-MATNR TO WK_MATNR.
MOVE IVBAP-WERKS TO WK_WERKS.
SELECT SINGLE MMSTA FROM MARC INTO MARC-MMSTA
WHERE MATNR = WK_MATNR
AND WERKS = WK_WERKS.
IF NOT MARC-MMSTA IS INITIAL.
MOVE '*' TO WK_MMSTA.
ENDIF.
SELECT SINGLE LVORM PRDHA MSTAE MSTAV FROM MARA
INTO (MARA-LVORM, MARA-PRDHA, MARA-MSTAE, MARA-MSTAV)
WHERE MATNR = WK_MATNR.
IF ( NOT MARA-MSTAE IS INITIAL ) OR
( NOT MARA-MSTAV IS INITIAL ) OR
( NOT MARA-LVORM IS INITIAL ).
MOVE '*' TO WK_MSTAE.
ENDIF.
MOVE MARA-PRDHA TO WK_PRDHA.
SELECT SINGLE MAKTX FROM MAKT INTO WK_MAKTX
WHERE MATNR = WK_MATNR
AND SPRAS = SY-LANGU.
ENDIF.
IF Z_BLOCK EQ 'B'.
IF WK_MMSTA EQ ' ' AND WK_MSTAE EQ ' '.
DELETE IVBAP.
CONTINUE.
ENDIF.
ELSEIF Z_BLOCK EQ 'U'.
IF WK_MMSTA EQ '' OR WK_MSTAE EQ ''.
DELETE IVBAP.
CONTINUE.
ENDIF.
ELSE.
IF WK_MMSTA EQ '' OR WK_MSTAE EQ ''.
MOVE '*' TO WK_BLOCK.
ENDIF.
ENDIF.
IF WK_PRDHA IN Z_PRDHA. "I4792
MOVE WK_BLOCK TO IVBAP-BLOCK.
MOVE WK_PRDHA TO IVBAP-PRDHA.
MOVE WK_MAKTX TO IVBAP-MAKTX.
MODIFY IVBAP.
ELSE. "I4792
DELETE IVBAP. "I4792
ENDIF. "I4792
IF NOT Z_ALNUM[] IS INITIAL. "I9076
SELECT SINGLE * FROM MAEX "I9076
WHERE MATNR = IVBAP-MATNR "I9076
AND ALNUM IN Z_ALNUM. "I9076
IF SY-SUBRC <> 0. "I9076
DELETE IVBAP. "I9076
ENDIF. "I9076
ENDIF. "I9076
endloop.
You have used many SELECT queries within LOOP ... ENDLOOP, which is a big hindrance as far as performance is concerned. Avoid this practice.
Thanks
Deepika -
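To make Deepika's advice concrete: each of the per-row lookups inside the loop (MARC, MARA, MAKT, MBEW) can be replaced by one array fetch before the loop plus an in-memory READ inside it. An untested sketch for the MARC lookup only; variable names are illustrative, and the classic NOT ... IS INITIAL form is used for compatibility with older releases:

```abap
* Buffer table for the MARC status lookup
TYPES: BEGIN OF TY_MARC,
         MATNR TYPE MARC-MATNR,
         WERKS TYPE MARC-WERKS,
         MMSTA TYPE MARC-MMSTA,
       END OF TY_MARC.
DATA: IT_MARC TYPE STANDARD TABLE OF TY_MARC,
      WA_MARC TYPE TY_MARC.

* One database round trip instead of one SELECT SINGLE per loop pass
IF NOT IVBAP[] IS INITIAL.
  SELECT MATNR WERKS MMSTA FROM MARC
    INTO TABLE IT_MARC
    FOR ALL ENTRIES IN IVBAP
    WHERE MATNR = IVBAP-MATNR
      AND WERKS = IVBAP-WERKS.
  SORT IT_MARC BY MATNR WERKS.
ENDIF.

LOOP AT IVBAP.
* In-memory lookup replaces SELECT SINGLE MMSTA FROM MARC
  READ TABLE IT_MARC INTO WA_MARC
       WITH KEY MATNR = IVBAP-MATNR
                WERKS = IVBAP-WERKS
                BINARY SEARCH.
  IF SY-SUBRC = 0 AND NOT WA_MARC-MMSTA IS INITIAL.
    MOVE '*' TO WK_MMSTA.
  ENDIF.
ENDLOOP.
```

The VBRP invoice-date lookup is trickier, because FOR ALL ENTRIES cannot be combined with MAX( ); one workaround is to fetch the ERDAT rows for all order items in one SELECT and determine the maximum per item in ABAP.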
Performance of Financial Reports against a ASO Cube to a BSO Cube
Hi All,
I am working on Financial Reporting and Essbase. I want to understand, and find relevant documentation on, the performance differences when a Financial Reporting report hits a BSO cube versus an ASO cube.
1. If there is a difference in the performance for an ASO vs BSO for Financial Reports, where can I find the document or details for it?
2. If there is a difference in the performance for an ASO vs BSO for Financial Reports, what is the reason for the same?
3. How can we improve the ASO performance for the reports?
Any insights for the same, would be highly appreciated.
Thanks
Ankur Jain.
Thanks Sean V,
It's quite surprising to me as well, which is why I am trying to drill into the FR documentation for anything like this. As of now, since I don't have access to the cube, I have nothing to add, nor the insight into the cube on which I could base an explanation of the cube and outline design.
But as soon as I get the access, I will bring this back and have a discussion on the forum.
Thanks for confirming what I had been explaining on the client side as well. Though I still need some basis to explain and prove it exactly, and possibly a prototype as well.
Thanks,
Ankur Jain. -
Poor performance of Report Writer reports (Special Ledger Library)
Greetings - We are running into problems with poor performance of reports that are written with the SAP Report Writer. The problem appears to be caused when SAP is using the primary-key index in our Special Purpose ledger (where the reports are generated). The index contains object fields that cannot be added to the report library (COBJNR, SOBJNR, ROBJNR). We have created alternate indices, but they are not being picked up with the Report Writer reports.
Are there any configurable or technical settings that we can work with in order to force the use of a specific index for a report? It seems logical that SAP would find the most efficient index to use, but with the reports that we are looking at, this does not appear to be the case.
Any help that can be offered will be greatly appreciated...We are currently using version 4.6C, but are planning an upgrade to ECC 6.0 later this year.
Thanks in advance -
Arjun,
Where (in which files) are these parameters? We cannot find them all.
Tomcat - Java properties; try again (you can tune the values below as per your system memory):
-XX:PermSize=256m
-XX:MaxPermSize=256m
-XX:NewSize=171m
-XX:MaxNewSize=171m
-XX:SurvivorRatio=2
-XX:TargetSurvivorRatio=90
-XX:+DisableExplicitGC
-XX:+UseTLAB
As a general update, it looks like we need to use the monitoring tools that are installed by default; we are now in the process of installing the database, etc.
Cheers