DropDown in table, bad performance.
Hi all.
We have a Web Dynpro application with a DropDownByIndex element in one column of a table.
The dropdown is filled with account numbers and works fine when there are only a few accounts.
But when there are more items (1,800 or more), it is too slow: it takes about 5 seconds to open the list of accounts, and scrolling is very slow too.
We tried different ways to fill the dropdown (supply functions, by code), and we also tried the same methods with a DropDownByKey element, but the result is always the same: very slow.
Does anybody know a solution for this, or have any suggestions? Or is this just the way the renderer works, so nothing can be done?
Thanks very much.
Regards
Hi Nithya,
I have a design restriction and it needs to be a dropdown; there is no other way.
That's why I need to tune this UI element in particular.
Thanks for your quick reply. I'll take your suggestion into consideration; in that case we could eventually use another method rather than a dropdown.
Hope somebody can help me with this issue.
Regards,
Cristian
Similar Messages
-
Bad Performance in a query into table BKPF
Hi forum, I have a real problem with the second query, the one on table
BKPF. Can somebody help me, please?
* This is the query on MSEG:
SELECT tmseg~mblnr tmkpf~budat tmseg~belnr tmseg~bukrs tmseg~matnr
       tmseg~ebelp tmseg~dmbtr tmseg~waers tmseg~werks tmseg~lgort
       tmseg~menge tmseg~kostl
  FROM mseg AS tmseg JOIN mkpf AS tmkpf ON tmseg~mblnr = tmkpf~mblnr
  INTO CORRESPONDING FIELDS OF TABLE it_docs
  WHERE tmseg~bukrs IN se_bukrs AND
        tmkpf~budat IN se_budat AND
        tmseg~mjahr = d_gjahr AND
        ( tmseg~bwart IN se_bwart AND tmseg~bwart IN ('201','261') ).
IF sy-dbcnt > 0.
* Create AWKEY for reading BKPF
  LOOP AT it_docs.
    CONCATENATE it_docs-mblnr d_gjahr INTO it_docs-d_awkey.
    MODIFY it_docs.
  ENDLOOP.
* This is the query with the bad performance:
* I need to know BELNR in order to go to the BSEG table
SELECT belnr awkey
FROM bkpf
INTO CORRESPONDING FIELDS OF TABLE it_tmp
FOR ALL ENTRIES IN it_docs
WHERE
bukrs = it_docs-bukrs AND
awkey = it_docs-d_awkey AND
gjahr = d_gjahr AND
bstat = space .
Thanks!
Hi Josue,
The bad performance is because you're not specifying the primary key fields of table BKPF in your WHERE condition, and BKPF is usually a big table.
What you really need is to create a new database index on table BKPF via the ABAP Dictionary on fields BUKRS, AWKEY, GJAHR and BSTAT. You'll find that the performance of the program increases significantly after the new index is activated. But I would talk to Basis first to confirm they have no issues with creating a new index for BKPF on the database system.
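A related pitfall worth ruling out here, sketched below under the assumption that it_docs can come back empty: with FOR ALL ENTRIES, an empty driver table causes the condition on it to be dropped entirely, so the SELECT reads all of BKPF.

```abap
* Hedged sketch: guard the FOR ALL ENTRIES read. If the driver table
* it_docs is empty, the WHERE conditions referring to it are ignored
* and the whole of BKPF is read.
IF it_docs[] IS NOT INITIAL.
  SELECT belnr awkey
    FROM bkpf
    INTO CORRESPONDING FIELDS OF TABLE it_tmp
    FOR ALL ENTRIES IN it_docs
    WHERE bukrs = it_docs-bukrs
      AND awkey = it_docs-d_awkey
      AND gjahr = d_gjahr
      AND bstat = space.
ENDIF.
```

This does not replace the index advice above; it just prevents the degenerate full-table case.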
Hope this helps.
Cheers,
Sougata. -
Bad performance updating purchase order (ME22N)
Hello!
Recently, we have been facing bad performance when updating purchase orders with transaction ME22N. The problem has occurred since we implemented change documents for a custom table T. T is used to store additional data for purchase order items, using BAdIs ME_PROCESS_PO_CUST and ME_GUI_PO_CUST.
I created a change document C_T for T using transaction SCDO. The update module of the change document is triggered in the method POST of BAdI ME_PROCESS_PO_CUST.
Checking transaction SM13, I noticed that the update requests of ME22N keep status INIT for several minutes before they are processed. I also tried excluding the call of the update module for change document C_T (in method POST) - the performance problem still occurs!
The problem only occurs with transaction ME22N; thus I assume that the reason is the new change document C_T.
Thanks for your help!
Greetings,
Wolfgang
I agree with Vikram: we don't have enough information, not even a small hint about the usage of this field, so which answer do you expect? (The quality of an answer depends on the quality of the question.) This analysis must be executed on your system...
From a technical point of view, BAPI_PO_CHANGE has an EXTENSIONIN table parameter; fill it using structure BAPI_TE_MEPOITEM[X], which already contains CI_EKPODB (*) and CI_EKPODBX (**).
Regards,
Raymond
(*) I guess you have used this include
(**) I guess you forgot this one (same field names but data element always BAPIUPDATE) -
Reporting on master data customer and bad performances : any workaround ?
Hello,
I've been asked to investigate bad performance encountered when reporting
on the custom master data object zcustomer.
Basically this master data has a design quite similar to 0customer; there are 96,000 entries in the master data table.
A simple query has been developed: the reporting is done on the master data zcustomer and its attributes; no key figure, no calculation, no restriction...
Nevertheless, the query cannot be executed: it runs for around 10 minutes in RSRT, then the private memory is exhausted and a short dump is generated.
I tried to build a very simple query on 0customer, this time without the attributes, and it took more than 30 seconds before I got the results.
I checked the queries statistics :
3.x Analyzer Server 10 sec
OLAP: Read Texts : 20 sec
How can reporting on this master data take so long, while at the same time, if I display the content in SAP by choosing "maintain master data", I get an immediate answer?
Is there any workaround?
Any help would be really appreciated.
thank you.
Raoul
Hi.
How much data have you got in the cube?
If you make no restrictions, you are asking the system to return data for all 96,000 customers. That alone might take some time.
Also, using the attributes of this customer object, e.g. making selections or displaying several of them, means that the system has to run through the 96,000 master data records to know what goes where in the report.
When you display the master data, you are by default displaying just the first 250 or so hits, and you are not joining against any cube or sorting the result set, so that is fast.
You should put some kind of restriction on things other than zcustomer (time, org. unit, version, etc.) to limit the dataset from the cube, but also a restriction on one of the zcustomer attributes, maybe with an index for it, and performance should improve.
br
Jacob -
Master column tree - bad performance
Hello,
I have a table with a master column that uses a recursive node without the Load Children event; the whole table loads at startup. I have very bad performance: every click on the little expansion triangle takes about 6-7 seconds, even though there is no round trip to the server, because all data is already on the client!
I need to show a list of products by their family (50 families, each with 10 products). How can I improve the performance (maybe server configuration, my code, or another method)?
I work with EP 7 SP12.
Thanks
Roni.
Hi Roni,
Yes, you have to add the elements on demand before loading each subtree. Best practice is to use the onLoadChildren event; loading the whole tree at once will always have performance issues.
Refer to these tutorial for creating tree using onLoadChildren event.
<a href="http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webdynpro/wd%20java/wd%20tutorials/constructing%20a%20recursive%20and%20loadable%20web%20dynpro%20tree.pdf">Recursive Loadable Tree</a>
<a href="http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webdynpro/wd%20java/wd%20tutorials/integration%20of%20a%20tree%20structure%20in%20a%20web%20dynpro%20table.pdf">Integration of Tree Structure in Table</a>
There are no limitations on this UI control, you can stay with any number of levels.
Regards
Srikanth -
5 tables Used - Performance Improvement Help Req...
Hello Experts,
I am using a 5-table join to fetch data in a query; will this degrade performance?
Please find the query below. In the WHERE clause: 1) le.locker_entry_id is the PRIMARY KEY column;
2) dc.object_type has an index on it; 3) a function-based index was created on upper(dt.partner_device_type).
Can any other code improvement be done, or should a view be created?
Please suggest.
SELECT count(*) FROM locker_entry le;            -- COUNT(*): 1762
SELECT count(*) FROM digital_compatibility dc;   -- COUNT(*): 227757
SELECT count(*) FROM digital_encode_profile dep; -- COUNT(*): 48
SELECT count(*) FROM device_type dt;             -- COUNT(*): 421
SELECT count(*) FROM digital_sku dsku;           -- COUNT(*): 26037
EXPLAIN PLAN FOR
SELECT
/*+ INDEX(dep, DIGITAL_ENCODE_PROFILE_ID_PK) */
DISTINCT le.locker_entry_id AS locker_entry_id,
dep.delivery_type
FROM locker_entry le,
digital_compatibility dc,
digital_encode_profile dep,
device_type dt,
digital_sku dsku
WHERE le.sku_id = dsku.legacy_id
AND le.digital_package_id = dsku.digital_package_id
AND dsku.digital_sku_id = dc.object_id
AND dc.encode_profile_id =dep.digital_encode_profile_id
AND dt.capability_set_id =dc.capability_set_id
AND le.locker_entry_id IN (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17, :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32)
AND dc.object_type =:"SYS_B_0"
AND upper(dt.partner_device_type) =:33;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 1 | 370 | 481 (3)|
| 1 | HASH UNIQUE | | 1 | 370 | 481 (3)|
| 2 | NESTED LOOPS | | 1 | 370 | 480 (3)|
| 3 | NESTED LOOPS | | 1 | 336 | 479 (3)|
|* 4 | HASH JOIN | | 1 | 193 | 478 (3)|
| 5 | MAT_VIEW ACCESS BY INDEX ROWID | DIGITAL_SKU | 1 | 48 | 5 (0)|
| 6 | NESTED LOOPS | | 16 | 1392 | 5 (0)|
| 7 | INLIST ITERATOR | | | | |
| 8 | TABLE ACCESS BY INDEX ROWID | LOCKER_ENTRY | 32 | 1248 | 1 (0)|
|* 9 | INDEX RANGE SCAN | LOCKER_ENTRY_ID_PK | 32 | | 1 (0)|
| 10 | BITMAP CONVERSION TO ROWIDS | | | | |
| 11 | BITMAP AND | | | | |
| 12 | BITMAP CONVERSION FROM ROWIDS| | | | |
|* 13 | INDEX RANGE SCAN | IDX_DIGITAL_SKU_LEGACY_ID | 1 | | 1 (0)|
| 14 | BITMAP CONVERSION FROM ROWIDS| | | | |
|* 15 | INDEX RANGE SCAN | IDX_DIGITAL_PACKAGE_ID | 1 | | 1 (0)|
|* 16 | MAT_VIEW ACCESS FULL | DIGITAL_COMPATIBILITY | 2098 | 217K| 472 (3)|
|* 17 | INDEX RANGE SCAN | DEVICE_TYPE_IDX | 1 | 143 | 1 (0)|
| 18 | MAT_VIEW ACCESS BY INDEX ROWID | DIGITAL_ENCODE_PROFILE | 1 | 34 | 1 (0)|
|* 19 | INDEX UNIQUE SCAN | DIGITAL_ENCODE_PROFILE_ID_PK | 1 | | 1 (0)|
Predicate Information (identified by operation id):
4 - access("DSKU"."DIGITAL_SKU_ID"=TO_NUMBER("DC"."OBJECT_ID"))
9 - access("LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:1) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:2) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:3) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:4) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:5) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:6) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:7) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:8) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:9) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:10) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:11) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:12) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:13) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:14) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:15) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:16) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:17) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:18) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:19) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:20) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:21) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:22) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:23) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:24) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:25) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:26) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:27) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:28) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:29) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:30) OR
"LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:31) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:32))
13 - access("LE"."SKU_ID"="DSKU"."LEGACY_ID")
15 - access("LE"."DIGITAL_PACKAGE_ID"="DSKU"."DIGITAL_PACKAGE_ID")
16 - filter("DC"."OBJECT_TYPE"=:SYS_B_0)
17 - access("DT"."CAPABILITY_SET_ID"="DC"."CAPABILITY_SET_ID" AND
UPPER("PARTNER_DEVICE_TYPE")=:33)
19 - access("DC"."ENCODE_PROFILE_ID"="DEP"."DIGITAL_ENCODE_PROFILE_ID")
Note
- 'PLAN_TABLE' is old version
Trace information
=================
recursive calls 17
db block gets 0
consistent gets 239
physical reads 61
redo size 0
bytes sent via SQL*Net to client 742
bytes received via SQL*Net from client 1361
SQL*Net roundtrips to/from client 2
sorts (memory) 5
sorts (disk) 0
Edited by: Linus on Oct 10, 2011 2:44 AM
Linus wrote:
Yes Bravid, I got the point about removing the hint from the query, unless there is a change in the execution plan with the index hint.
My concern is: are there any issues with keeping 5 tables in a single query, and will this degrade performance?
During certification practice I was somewhere told "NOT to use many table joins".
That is the reason I am asking you; sorry if I am wrong in any way.
Thanks...
There's nothing inherently wrong with joining lots of tables, and there isn't one specific thing that will degrade performance. You could have a query that joins 2 tables performing very badly and a query with 10 tables that performs very well. A lot of it comes down to the quality of the decisions the optimiser can make. To get the best decisions out of it, you need to make sure it has enough information to work with. That means making sure statistics are available and relevant for the data you're querying, and removing any hints unless you have a specific reason to use them.
Earlier this year I had the "joy" of working with a query that had 65 tables in it. It was an INSERT INTO...SELECT FROM, and it was a clear example of where a single statement really should have been about 6 or 7 separate statements. The reason in this case was that there were 6 or 7 different sections to the query that had essentially no relationship with each other; they were all outer joined and used lots of analytic functions and CASE statements to categorise each row and populate the same columns with different values depending on which query block it had originated from.
This is an extreme example, but the point is that you have to look at the statement and decide whether it does its job well. My personal preference is to avoid big "generic" SQL statements that cater for lots of different scenarios, because they can easily become overcomplicated and difficult to maintain and tune.
HTH
David
Edited by: Bravid on Oct 10, 2011 1:53 PM -
CMP 6.1 Entity bad performance.
I'm using 1.1 entity EJBs on WebLogic 6.1 and seeing very bad performance:
around 150 ms for an insert (I have 20 columns).
When accessing an order interface to read 2 fields in a session bean method: around
90 ms.
I'm very disappointed and confused. What should I look at
to increase the performance? Any important tuning or parameters? Should I use EJB
2.0 to get significantly better performance?
Thanks for any advice, because we are thinking of switching the whole application to stored
procedures: a solution without entity beans and with fewer stateless session beans.
My config:
WL: 6.1 on Sun sparc
SGBD: Sybase
Entity: WebLogic 6.0.0 EJB 1.1 RDBMS (weblogic-rdbms11-persistence-600.dtd)
Thanks
Historically it's hard to get good performance and scalability out of Sybase
without using stored procs. Dynamic SQL on Sybase just doesn't do as
well as procs. Oracle, on the other hand, can get very close to stored-proc
speed out of well-written dynamic SQL.
As far as WebLogic goes, my experience is that the focus of their testing for DB-related
stuff is Oracle, then DB2, then MS SQL Server. Sybase is usually last
on the list.
As for the 6.1 CMP, I haven't used it much, but because of these other
things I would be cautious about using it with Sybase.
Joel
-
Bad performance when opening a BI Publisher report in Excel
We use BI Publisher (XML Publisher) to create a customized report. For a small report, users like it very much. But for a bigger report, users complain about the performance when they open the file.
I know it is not a native Excel file, and that may cause the bad performance. So I asked my user to save it as a new file in native Excel format. The new file is still worse than a normal Excel file when we open it.
I did a test. When we save a BI Publisher report to Excel format, the size shrinks to 4 MB. But if we "Copy All" and "Paste Special" (values only) into a new Excel file, the size is only 1 MB.
Is there any way to improve this? Users are complaining every day. Thanks!
I did a test today.
I created a test report.
Test 1: The original file from BIP in EBS is 10 MB. We save it to my local disk; when we open the file, it takes 43 sec.
Test 2: We save the file in native Excel format; the file size is 2.28 MB, and it takes 7 sec to open.
Test 3: We copy all cells and "Paste Special" (values only) into a new Excel file. The file size is 1.66 MB, and it takes only 1 sec to open.
Edited by: Rex Lin on 2010/3/31 11:26 PM
EBS or standalone BIP?
If EBS, see this thread for suggestions on performance tuning and hints and tips:
EBS BIP Performance Tuning - Definitive Guide?
Note also that I did end up rewriting my report as PL/SQL producing a CSV file, and have done so with several large reports in BIP on EBS.
Cheers,
Dave -
How to populate values into a dropdown in a table UI element
Hi,
According to my scenario, I have a table with five records, and I have a column named DATE that contains 5 dropdowns with some values, i.e. dates from Jan 1 2008 to Dec 31 2008. Users should select only values that are in the dropdown. Can you tell me the code to populate values into a dropdown in a table UI element?
Thanks
Raju
Hi,
You can go for one of two dropdowns, DropDownByKey or DropDownByIndex, as per the requirement.
Create a context node for the table UI element; one attribute in it will be for your dropdown. Create elements for the context node and add these to the context.
Code example for DropDownByKey:
ISimpleType simpleType = wdContext.nodeProjEstiTable().getNodeInfo()
    .getAttribute("projphasname").getModifiableSimpleType();
IModifiableSimpleValueSet svs1 =
    simpleType.getSVServices().getModifiableSimpleValueSet();
svs1.clear();
for (int j = 0; j < projphasname.length; j++) {
    svs1.put(projphasname[j][1], projphasname[j][1]);
}
For DropDownByIndex you can work in the normal way, i.e. create elements for the respective context attribute.
Hope this may help you...
Deepak -
Dropdown in table element using webDynpro for ABAP
Hi All,
My requirement is:
I have a dropdown in a table UI element. Before I do a bind_table on the context, my dropdown should be loaded.
How can I do this?
Please help me with sample code.
thanks,
Hema.
Hi Hema Sundar,
You just need to create an attribute in your context node and bind it to the
SelectedKey property of the dropdown.
Populate the value of the attribute when you are fetching the data that fills
your internal table.
Also, if you want to populate the dropdown's data yourself,
you can refer to the code below:
DATA:
  lr_node      TYPE REF TO if_wd_context_node,
  lr_node_info TYPE REF TO if_wd_context_node_info,
  lt_nv        TYPE wdr_context_attr_value_list,
  ls_nv        TYPE wdr_context_attr_value.

lr_node = wd_context->get_child_node( if_componentcontroller=>wdctx_element ).
lr_node_info = lr_node->get_node_info( ).

* ... build lt_nv ...

lr_node_info->set_attribute_value_set(
  name      = 'ATTRIBUTE'
  value_set = lt_nv ).
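The elided "build lt_nv" step could look like the hedged sketch below. The keys and texts are made-up placeholders, under the assumption that each WDR_CONTEXT_ATTR_VALUE entry carries a VALUE (the key stored in the context attribute) and a TEXT (what the user sees in the dropdown):

```abap
* Hedged sketch of the "build lt_nv" step; keys and texts are
* hypothetical placeholders, not values from the original thread.
ls_nv-value = 'KEY1'.          " key written to the context attribute
ls_nv-text  = 'First entry'.   " text shown in the dropdown
APPEND ls_nv TO lt_nv.

ls_nv-value = 'KEY2'.
ls_nv-text  = 'Second entry'.
APPEND ls_nv TO lt_nv.
```

In practice you would fill this table from the same data source that fills the internal table backing the table UI element.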
Hope this helps.
Regards,
Ismail. -
Bad performance in web intelligence reports
Hi,
We use Business Objects with Web Intelligence documents and Crystal Reports.
We are experiencing bad performance when we use the reports, especially when we change the drill options.
Can someone tell me if there are any best practices to improve performance? Which features should I look at?
Best Regards
João Fernandes
Hi,
Thank you for your interest. I know this is an issue with many variables; that is why I need information about anything that could cause bad performance.
By bad performance I mean the time we spend running and refreshing report data.
We have reports with many lines, but the performance is bad even when only a few users are in the system.
Best Regards
João Fernandes -
Help: Bad performance in marketing documents!
Hello,
When creating an AR delivery note with about 10 lines, we have really noticed that the creation of lines becomes slower and slower. This especially happens when pressing Tab in the system field "Quantity": in fact, before moving on to the next field, the cursor stays in the Quantity field for about 5 seconds!
The number of formatted searches in the AR delivery note is only 5, and only one is automatic. The number of user fields is about 5.
We have heard about bad performance when the number of lines with formatted searches increases in a document, but it is odd for this to happen with only about 10 lines in the document.
We are using PL16, and this issue seems to have been solved already in PL10.
Could you throw some light on this?
Thanks in advance.
It is solved now.
It had to do with the automatic formatted searches in 2 header fields.
If the automatic search is removed, the performance is OK.
Hope it helps you, -
Bad performance on system, many swaps in export/import buffer
Hello,
I have an ECC 6.0 system on AIX with 6 application servers. There seems to be a performance problem on the system; it is most noticeable when people try to save a sales order, for example: this operation takes about 10 minutes.
Sometimes we get short dumps TSV_TNEW_PAGE_ALLOC_FAILED or MEMORY_NO_MORE_PAGING, but not very often.
I am not very good at analyzing performance issues, but from what I could see there are many swaps in the export/import, program and generic key buffers. Also the hit ratio of the export/import buffer is 88%, which I think is pretty low.
I know that the maximum accepted value is 10,000 swaps per day; is that right?
Can you please advise what needs to be done for these swaps to decrease and the hit ratio to increase? And also what else I should do to analyse the root cause of the bad performance of the system?
Many thanks,
manoliv
Hi,
sappfpar determines the minimum and maximum (worst-case) swap space requirements of an R/3 application server. It also checks the shared memory requirements and that the em/initial_size_MB and abap/heap_area_total parameters are correctly set, with the following procedure:
/usr/sap/<SYSTEMNAME>/SYS/exe/run/sappfpar check pf=/usr/sap/<SYSTEMNAME>/SYS/profile/<Profile name>
At the end of the list, the program reports the minimum swap space, maximum heap space, and worst-case swap space requirements.
Additional swap space requirements:
You will probably need to increase the size of the swap space on hosts on which R/3 application servers run.
As a rule of thumb, swap space should equal
3 x the size of main storage, or at least 1 GB, whichever is larger.
SAP recommends a swap space of 2-3 GB for optimal performance.
Determining current swap space availability: memlimits
You can find out how much swap space is currently available on your host system with R/3's memlimits program.
Here's how to run memlimits:
From the UNIX command prompt, run the R/3 memlimits program to check the size of the available swap space on the host system on which an R/3 application server is to run.
The application server must be stopped, not running.
/usr/sap/<SYSTEMNAME>/SYS/exe/run/memlimits | more
The available swap space is reported in the output line "Total available swap space:" at the end of the program output. The program also indicates whether this amount of swap space is adequate, and determines the size of the data segments in the system. -
Bad performance when deleting a report column in Webi (with "Design – Structure only")
Hi all,
One of our customers has recently upgraded from BO XI to BO 4.1. In the new BO 4.1, they encountered a performance issue when deleting a column in Webi (using "Design – Structure only" mode).
In "Design – Structure only" mode, it takes Webi about 10 seconds to complete after the customer right-clicks a report column and clicks "Delete". The customer said they only had to wait less than 1 second when doing the same in the old BO XI version.
The new BO version used is 4.1 SP02, installed on Windows Server 2008 R2 (server with 32 CPU cores and 32 GB of memory).
This bad performance happens in both the Webi web client and the Rich Client (in the Rich Client the performance is a little better: the 'delete column' action takes about 8 seconds to complete).
Does anyone know how to tune this performance in Webi? Thank you.
Besides, it seems that each time we make a change to the Webi report structure in IE or the Rich Client, Webi needs to interact with the server to upload the changes. Is there any option to change this behavior? Say, do not upload changes to the server when 'deleting a report column', and only trigger the upload after a set of actions (e.g. when clicking the "Save" button).
Thank you.
Regards,
Eton.
Hi all,
Could anyone help me with this? Thanks!
What the customer is concerned about now is that when they did the same column-editing action in BO XI R2 for the same report, they did not need to wait, whereas they have to wait at least 7-8 seconds in the BO 4.1 SP02 environment for this action to complete (data already purged, in structure-only mode).
One more piece of information about the report being edited: there are many sheets in the report (about 6-10 sheets per report). The customer doesn't want to split these sheets into different reports, as that would increase the effort for their end users to locate similar report sheets.
Regards,
Eton. -
Temporary LOBs - bad performance when nocache is used
Hello.
Please advise me what could be the reason for the bad performance of line 8 in the following anonymous block:
declare
  i integer;
  c clob := 'c';
  procedure LTrimSys(InCLOB in clob) is
    OutCLOB clob;
  begin
    DBMS_LOB.CREATETEMPORARY(OutCLOB, false, DBMS_LOB.call);
    dbms_lob.Copy(OutCLOB, InCLOB, dbms_lob.getlength(InCLOB));
    DBMS_LOB.freetemporary(OutCLOB);
  end;
begin
  for j in 1 .. 1000 loop
    LTrimSys(c);
  end loop;
end;
I have two practically identical 10.2.0.4.0 EE 64-bit databases on Windows.
On the first DB the elapsed time is 4 sec; on the second, 0.2 sec.
I didn't find any important difference between the init parameters (hidden parameters included).
The first DB has more memory (PGA) than the second.
The main time events while executing the anonymous block on the first DB are:
PL/SQL execution elapsed time
DB CPU
sql execute elapsed time
DB time
On the second DB they are the same, but much smaller.
If I use caching of temporary LOBs then both DBs work fine, but I cannot understand why the first DB works slowly when I use nocache temporary LOBs.
What can be the reason?
I don't think that is the problem. See the following outputs:
select * from V$PGASTAT order by name;

NAME                                    VALUE           UNIT
PGA memory freed back to OS             49016834031616  bytes
aggregate PGA auto target               170893312       bytes
aggregate PGA target parameter          1073741824      bytes
bytes processed                         95760297282560  bytes
cache hit percentage                    93.43           percent
extra bytes read/written                6724614496256   bytes
global memory bound                     107366400       bytes
max processes count                     115
maximum PGA allocated                   2431493120      bytes
maximum PGA used for auto workareas     372516864       bytes
maximum PGA used for manual workareas   531456          bytes
over allocation count                   102639421
process count                           57
recompute count (total)                 117197176
total PGA allocated                     1042407424      bytes
total PGA inuse                         879794176       bytes
total PGA used for auto workareas       757760          bytes
total PGA used for manual workareas     0               bytes
total freeable PGA memory               75694080        bytes
select * from V$PGA_TARGET_ADVICE_HISTOGRAM where PGA_TARGET_FACTOR = 1;

(PGA_TARGET_FOR_ESTIMATE = 1073741824, PGA_TARGET_FACTOR = 1 and ADVICE_STATUS = ON for all rows)

LOW_OPTIMAL_SIZE  HIGH_OPTIMAL_SIZE  ESTD_OPTIMAL_EXECUTIONS  ESTD_ONEPASS_EXECUTIONS  ESTD_MULTIPASSES_EXECUTIONS  ESTD_TOTAL_EXECUTIONS  IGNORED_WORKAREAS_COUNT
2199023255552     4398046510079      0                        0                        0                            0                      0
1099511627776     2199023255551      0                        0                        0                            0                      0
549755813888      1099511627775      0                        0                        0                            0                      0
274877906944      549755813887       0                        0                        0                            0                      0
137438953472      274877906943       0                        0                        0                            0                      0
68719476736       137438953471       0                        0                        0                            0                      0
34359738368       68719476735        0                        0                        0                            0                      0
17179869184       34359738367        0                        0                        0                            0                      0
8589934592        17179869183        0                        0                        0                            0                      0
4294967296        8589934591         0                        0                        0                            0                      0
2147483648        4294967295         0                        0                        0                            0                      0
1073741824        2147483647         0                        0                        0                            0                      0
536870912         1073741823         0                        0                        0                            0                      0
268435456         536870911          0                        0                        0                            0                      0
134217728         268435455          0                        376                      0                            376                    0
67108864          134217727          0                        0                        0                            0                      0
33554432          67108863           0                        0                        0                            0                      0
16777216          33554431           1                        0                        0                            1                      0
8388608           16777215           10145                    45                       0                            10190                  0
4194304           8388607            20518                    21                       0                            20539                  0
2097152           4194303            832                      1                        0                            833                    0
1048576           2097151            42440                    0                        0                            42440                  0
524288            1048575            393113                   7                        0                            393120                 0
262144            524287             10122                    2                        0                            10124                  0
131072            262143             22712                    0                        0                            22712                  0
65536             131071             110215                   0                        0                            110215                 0
32768             65535              0                        0                        0                            0                      0
16384             32767              0                        0                        0                            0                      0
8192              16383              0                        0                        0                            0                      0
4096              8191               0                        0                        0                            0                      0
2048              4095               83409618                 0                        0                            83409618               0
1024              2047               0                        0                        0                            0                      0
0                 1023               0                        0                        0                            0                      0
SELECT optimal_count, round(optimal_count*100/total, 2) optimal_perc,
       onepass_count, round(onepass_count*100/total, 2) onepass_perc,
       multipass_count, round(multipass_count*100/total, 2) multipass_perc
FROM (SELECT decode(sum(total_executions), 0, 1, sum(total_executions)) total,
             sum(OPTIMAL_EXECUTIONS) optimal_count,
             sum(ONEPASS_EXECUTIONS) onepass_count,
             sum(MULTIPASSES_EXECUTIONS) multipass_count
      FROM v$sql_workarea_histogram);

OPTIMAL_COUNT  OPTIMAL_PERC  ONEPASS_COUNT  ONEPASS_PERC  MULTIPASS_COUNT  MULTIPASS_PERC
12181507016    100           146042         0             0                0