CONNECT BY causes a performance issue in a table-based value set.
Hi,
In the PO Requisition Distribution DFF we have added some segments.
I have a value set for party details in the first segment (Attribute1), and it returns the party site ID (due to a dependency I can't make it return party_id).
Using the party_site_id I have to get all contact persons (I need to scan the entire organization hierarchy to find contacts) using organization relationships.
My table-type value set is written as below.
Table Name : HZ_PARTIES
Value : party_name
Id : party_id
SELECT party_name, party_id
FROM hz_parties
WHERE party_id IN
  (SELECT object_id
   FROM hz_relationships hr
   WHERE relationship_code = 'CONTACT'
     AND subject_type = 'ORGANIZATION'
     AND subject_table_name = 'HZ_PARTIES'
     AND object_type = 'PERSON'
     AND object_table_name = 'HZ_PARTIES'
     AND status = 'A'
   START WITH object_id =
     (SELECT party_id
      FROM hz_party_sites
      WHERE party_site_id = :$FLEX$.XX_PROJ_COUNTERPART_INST)
   CONNECT BY NOCYCLE PRIOR object_id = subject_id)
This works as expected but performance is poor: it takes anywhere from 20 seconds to a minute depending on data volume. Can this be tuned?
Any help will be appreciated.
Best Regards,
Ram
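One common way to tune this shape of query is to walk only the organization edges of the hierarchy and join the contacts once at the end, rather than filtering contacts inside the CONNECT BY. A minimal sketch of that rewrite as a recursive CTE, in Python/SQLite with toy data (the HZ table and column names are only mimicked, and the CHILD_OF code and all IDs are invented; in Oracle the same shape works with recursive subquery factoring):

```python
import sqlite3

# Toy model of hz_relationships: org 1 -> org 2 -> org 3, with person
# contacts attached to orgs 1 and 3.  All IDs are made up.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hz_relationships (
    subject_id INTEGER, object_id INTEGER,
    relationship_code TEXT, object_type TEXT, status TEXT);
INSERT INTO hz_relationships VALUES
    (1, 2, 'CHILD_OF', 'ORGANIZATION', 'A'),
    (2, 3, 'CHILD_OF', 'ORGANIZATION', 'A'),
    (1, 101, 'CONTACT', 'PERSON', 'A'),
    (3, 103, 'CONTACT', 'PERSON', 'A');
""")

# First walk the org hierarchy from the starting party (id 1), then pick
# up the CONTACT rows of every org reached -- the recursive-CTE analogue
# of START WITH ... CONNECT BY PRIOR object_id = subject_id.
rows = con.execute("""
WITH RECURSIVE orgs(id) AS (
    SELECT 1                                   -- START WITH the root org
    UNION
    SELECT r.object_id
    FROM hz_relationships r JOIN orgs o ON r.subject_id = o.id
    WHERE r.object_type = 'ORGANIZATION' AND r.status = 'A'
)
SELECT r.object_id
FROM hz_relationships r JOIN orgs o ON r.subject_id = o.id
WHERE r.relationship_code = 'CONTACT'
  AND r.object_type = 'PERSON' AND r.status = 'A'
ORDER BY r.object_id
""").fetchall()
contacts = [pid for (pid,) in rows]
print(contacts)
```

Keeping the recursive part small (org-to-org edges only) means the hierarchy walk touches far fewer rows than a CONNECT BY that carries the contact filters along.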
Hi Syed,
BP is right.
Just a note: the phrase "i have passed most of the primary keys in the query..." does not mean the key is used for database access. Only key fields in sequence, starting with the first one, will result in the use of an index. I.e., if the table's index fields are A B C D E F G, use of A, AB, ABC, ... will get the index used; CDE, BCD, or EFG will not use the index at all.
Regards,
Clemens
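The leading-column rule Clemens describes is easy to see in any plan output. A small Python/SQLite sketch (the table, columns, and index name are invented; plain B*Tree indexes in Oracle behave the same way):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (a INT, b INT, c INT, d TEXT);
CREATE INDEX idx_abc ON t (a, b, c);
""")

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN is the human-readable detail.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Predicate on the leading index column: the index is usable.
with_lead = plan("SELECT d FROM t WHERE a = 1")
# Predicate on a non-leading column only: full table scan instead.
without_lead = plan("SELECT d FROM t WHERE b = 1")

print(with_lead)      # e.g. SEARCH t USING INDEX idx_abc (a=?)
print(without_lead)   # e.g. SCAN t
```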
Similar Messages
-
SetAttribute causing performance issue.
Hi ,
I am using 11.1.1.4.0
Code:
DCIteratorBinding itr=ADFUtil.findIterator(iterator);
RowSetIterator rsi=itr.getRowSetIterator();
Row currRow=rsi.getCurrentRow();
currRow.setAttribute(id,null);
If I call setAttribute multiple times (say 10-20 times), it causes a severe performance issue.
Is there any reason for this?
Should we avoid using setAttribute()? If so, what should we use instead?
Any help is appreciated .
Thanks
Sazz

The use case is that a user would see an existing vacancy record and be able to update it.
GEVacancyFromNotificationVO1 is a query-based VO, and GETranVacancyVO1 is an updatable VO. Using a view criteria I pull the record into the updatable VO; it holds only one record at a time.
GEVacancyFromNotificationVO1 gets the details and sets them on the attributes of the updatable VO, as this VO includes many transient attributes that are required in my .jsff. Basically this data is not saved in the DB but is required to show in the UI.
Anyway, the thing is that with setAttribute called 20-30 times, performance is slow and sometimes the data is not set at all.
I used the AttributeListImpl class to create name-value pairs and created a new row for this VO using createAndInitRow(), and that works very fast. That is required for another use case and works perfectly. Only when I want to update an existing record do I have to update the same row (I can't create another row), so I face this performance issue, and sometimes the data doesn't get set properly: I get null in the DCIteratorBinding when I fetch the data in the bean class.
So my question is: why is setAttribute of AttributeListImpl so much faster than setAttribute of the Row class?
public void initializeFromNotification(String role, String emp) {
    ViewObjectImpl notifyVO = this.getGEVacancyFromNotificationVO1();
    ViewObjectImpl transVO = this.getGETranVacancyVO1();
    ViewObjectImpl geLoginPersonIdVO = this.getGELoginPersonIdVO1();
    ViewObjectImpl autoPopulatevo = this.getGEAutopopulateHireSysforCopyVacanciesVO1();
    ViewObjectImpl geNextApproverVO = this.getGENextApproverVO1();
    ViewObjectImpl transHireVo = getGEHireSystemReqTeamTransVO1();
    ViewObjectImpl gejobdesc = getGEJobDescTransVO1();
    Row row = notifyVO.first();
    if (row != null) {
        // query the trx table
        transVO.setApplyViewCriteriaName("VacancyNumberVC");
        transVO.setNamedWhereClauseParam("p_vac_num", row.getAttribute("VacancyNumber"));
        transVO.executeQuery();
        if (transVO.first() == null) {
            return;
        } else {
            transVO.setCurrentRow(transVO.first());
            Row currentRow = transVO.getCurrentRow();
            List<String> transColumns = Arrays.asList(currentRow.getAttributeNames());
            // setting values from notification VO to trans vacancy VO
            String arr[] = row.getAttributeNames();
            if (null != transVO.getCurrentRow()) {
                // AttributeListImpl attrList = new AttributeListImpl();
                for (String attr : arr) {
                    if (row.getAttribute(attr) != null) {
                        if (attr.equalsIgnoreCase("VacTrxId")) {
                            // skip the key attribute
                        } else if (transColumns.contains(attr)) {
                            if (currentRow.getAttribute(attr) == null) {
                                currentRow.setAttribute(attr, row.getAttribute(attr).toString());
                                if (role != null && role.startsWith("ORG_MGR")) {
                                    transVO.getCurrentRow().setAttribute("userRole", "INITIATOR_HM");
                                    transVO.getCurrentRow().setAttribute("userRoleDisplay", "Hiring Manager");
                                } else if (role != null && role.startsWith("HRM")) {
                                    transVO.getCurrentRow().setAttribute("userRole", "INITIATOR_HRM");
                                    transVO.getCurrentRow().setAttribute("userRoleDisplay", "HR Manager");
                                } else {
                                    transVO.getCurrentRow().setAttribute("userRole", "INITIATOR_RFO");
                                    transVO.getCurrentRow().setAttribute("userRoleDisplay", "RFO");
                                    transVO.getCurrentRow().setAttribute("EmpNumber", emp);
                                    geLoginPersonIdVO.setNamedWhereClauseParam("sso", emp);
                                    geLoginPersonIdVO.executeQuery();
                                    transVO.getCurrentRow().setAttribute("userPersonId",
                                            geLoginPersonIdVO.first().getAttribute(0));
-
WILL A BIG INDEX CAUSE A PERFORMANCE ISSUE?
In an indexed table, if there are a lot of inserts then the data will grow, and if the index is huge, can it really cause a performance issue?
Is there a document in Metalink that says that if the index is 50% of the data then we have to rebuild it? What are the basis and threshold for rebuilding an index?

A big index by itself won't cause a performance issue. There are other circumstances you should consider for the index.
First of all, which kind of index are you talking about? There are several kinds of indexes in Oracle. Assuming you mean a regular B*Tree index, you should consider factors such as selectivity and cardinality. If the indexed column has evenly distributed values, the index will be highly selective. If the indexed column is highly skewed, then in order for the index not to become a real bottleneck you should gather histograms, so selectivity can be calculated at execution time: when a query retrieves a highly selective data range the index won't slow performance, and otherwise a full table scan will be considered the better data access path.
Rebuilding an index is an operation performed when the index becomes invalid, or when migrating the index to a new tablespace, but not when you suspect the index has become 'fragmented'; in that case you should use the COALESCE command. Oracle provides efficient algorithms to keep the index balanced.
~ Madrid
http://hrivera99.blogspot.com/ -
Troubleshoot Error - Unable to perform table-based value assignment config
After creating the class, characteristics, and value assignment type, the system is unable to perform table-based value assignment configuration. The following error is displayed:
Hi Mr. SAP,
Based on the diagnosis, you can figure out that most likely someone is already editing the customizing table, or you are not authorized.
If you have access to transaction code SM12, kindly check whether an entry is present there. If yes, it means the table is locked and you can't proceed; you have to reach out to the person who holds the lock to release it, or ask BASIS to delete that entry.
Regarding authorization for locking the table, please check SU53 after executing the T-code to see whether you are missing any role. If you find anything there, reach out to your Security team to get the roles assigned to your profile.
Regards,
Abhi -
EHS: Set up Table-based Value assignment Error.
Hi all,
We are customizing Basic Data & Tools, and when trying to set up table-based value assignments (table TCG11_VAI; program RC1TCG11_02) we get no entries in the table. The message shown is always "0 unchanged entries, 0 new entries, 0 entries deleted", independent of the entry criteria.
The problem is that we can create those entries manually, but that would be endless.
Has this happened to anyone before? Any ideas?
Many thanks and regards,
Alberto

Hi all,
We have just found the solution.
Just for your information, the problem was that the IMG activity "Adopt Standard Specification Database" was executed but not working properly because no data could be copied from client 000. Then, when executing "Set Up Table-Based Value Assignment", no entries were made in the table. We have just changed the client, executed "Adopt Standard Specification Database" and then "Set Up Table-Based Value Assignment", and now it is working properly.
Alberto -
Creation of a table-type value set with 'ALL' as one of the values
Gurus,
My requirement is to create a table-type value set which would show the LOV values in a parameter of a concurrent program.
So far we have three such values to choose from: 'Frozen', 'Pending' and 'Testing'. I have achieved this.
My question is ,
If the user wants to choose all three values, how shall I accommodate that in this table-type value set?
Could I give a fourth option, ALL, which would eventually select all three values 'Frozen', 'Pending' and 'Testing'?
thanks in advance.
-sDJ

You can't have a UNION in the value set.
Try creating a view which has the UNION with 'ALL'.
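A sketch of what such a view might look like, with invented table and column names (SQLite here purely to show the shape; in Oracle it would be an ordinary CREATE VIEW):

```python
import sqlite3

# A view that UNIONs a literal 'ALL' row with the real statuses, so a
# table-validated value set can select from the view.  All names are
# made up for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE xx_statuses (status_code TEXT);
INSERT INTO xx_statuses VALUES ('Frozen'), ('Pending'), ('Testing');
CREATE VIEW xx_statuses_v AS
    SELECT status_code FROM xx_statuses
    UNION
    SELECT 'ALL';
""")
lov = [r[0] for r in con.execute(
    "SELECT status_code FROM xx_statuses_v ORDER BY status_code")]
print(lov)
```

The concurrent program then has to treat the 'ALL' value as "no filter", e.g. with a predicate along the lines of `(:p_status = 'ALL' OR status_code = :p_status)`.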
Check the following links.
Table Value Set.
ORA-00907 Missing Right Parenthesis in Value Set
By
Vamsi -
Validate a value against table validation value set within PL/SQL
Hi,
I am trying to import price list lines along with Pricing attribute values.
I have to validate the uploaded values against the pricing attribute value set, before I import them into base tables.
Value set defined is of type table validation.
I wanted to know if there are any public APIs that can be used to validate the value against the Value set values within my PL/SQL procedure
Also please point me to documentation that lists various public PL/SQL APIs
Regards,
Mrutyunjay

You can find functions and procedures for value sets in the packages FND_FLEX_VAL_API and FND_FLEX_VAL_UTIL.
Example: get_table_vset_select gives you the SELECT statement of your value set. Executing this statement will allow you to validate your values. -
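In outline, the validation then becomes: fetch the value set's SELECT, wrap it, and probe it with the candidate value. A hypothetical Python/SQLite sketch of the pattern (in EBS this would be PL/SQL, with the statement returned by get_table_vset_select executed dynamically; the table and column here are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fnd_lookup (code TEXT);
INSERT INTO fnd_lookup VALUES ('A'), ('B');
""")

# Pretend this SELECT came back from the value set definition.
vset_select = "SELECT code FROM fnd_lookup"

def is_valid(value):
    # Wrap the value set's query and probe it with the candidate value.
    probe = f"SELECT 1 FROM ({vset_select}) WHERE code = ? LIMIT 1"
    return con.execute(probe, (value,)).fetchone() is not None

print(is_valid("A"), is_valid("Z"))
```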
Performance issues involving tables S031 and S032
Hello gurus,
I am having some performance issues. The program involves accessing data from S031 and S032. I have pasted the SELECT statements below. I have read through the forums for past postings regarding performance, but I wanted to know if there is anything that stands out as being the culprit of very poor performance, and how it can be corrected. I am fairly new to SAP, so I apologize if I've missed an obvious error. From debugging the program, it seems the 2nd select statement is taking a very long time to process.
GT_S032: approx. 40,000 entries
S031: approx. 90,000 entries
MSEG: approx. 115,000 entries
MKPF: approx. 100,000 entries
MARA: approx. 90,000 entries
SELECT
vrsio "Version
werks "Plant
lgort "Storage Location
matnr "Material
ssour "Statistic(s) origin
FROM s032
INTO TABLE gt_s032
WHERE ssour = space AND vrsio = c_000 AND werks = gw_werks.
IF sy-subrc = 0.
SELECT
vrsio "Version
werks "Plant
spmon "Period to analyze - month
matnr "Material
lgort "Storage Location
wzubb "Valuated stock receipts value
wagbb "Value of valuated stock being issued
FROM s031
INTO TABLE gt_s031
FOR ALL ENTRIES IN gt_s032
WHERE ssour = gt_s032-ssour
AND vrsio = gt_s032-vrsio
AND spmon IN r_spmon
AND sptag = '00000000'
AND spwoc = '000000'
AND spbup = '000000'
AND werks = gt_s032-werks
AND matnr = gt_s032-matnr
AND lgort = gt_s032-lgort
AND ( wzubb <> 0 OR wagbb <> 0 ).
ELSE.
WRITE: 'No data selected'(m01).
EXIT.
ENDIF.
SORT gt_s032 BY vrsio werks lgort matnr.
SORT gt_s031 BY vrsio werks spmon matnr lgort.
SELECT
p~werks "Plant
p~matnr "Material
p~mblnr "Document Number
p~mjahr "Document Year
p~bwart "Movement type
p~dmbtr "Amount in local currency
t~shkzg "Debit/Credit indicator
INTO TABLE gt_scrap
FROM mkpf AS h
INNER JOIN mseg AS p
ON h~mblnr = p~mblnr
AND h~mjahr = p~mjahr
INNER JOIN mara AS m
ON p~matnr = m~matnr
INNER JOIN t156 AS t
ON p~bwart = t~bwart
WHERE h~budat >= gw_duepr-begda
AND h~budat <= gw_duepr-endda
AND p~werks = gw_werks.
Thanks so much for your help,
Jayesh

Issue with table S031 and with FOR ALL ENTRIES.
Hi,
I have the following code, in which the select statement on S031 takes a long time and then produces a dump. What should I do to avoid exceeding the time limit of execution of an ABAP program?
TYPES:
BEGIN OF TY_MTL, " Material Master
MATNR TYPE MATNR, " Material Code
MTART TYPE MTART, " Material Type
MATKL TYPE MATKL, " Material Group
MEINS TYPE MEINS, " Base unit of Measure
WERKS TYPE WERKS_D, " Plant
MAKTX TYPE MAKTX, " Material description (Short Text)
LIFNR TYPE LIFNR, " vendor code
NAME1 TYPE NAME1_GP, " vendor name
CITY TYPE ORT01_GP, " City of Vendor
Y_RPT TYPE P DECIMALS 3, "Yearly receipt
Y_ISS TYPE P DECIMALS 3, "Yearly Consumption
M_OPG TYPE P DECIMALS 3, "Month opg
M_OPG1 TYPE P DECIMALS 3,
M_RPT TYPE P DECIMALS 3, "Month receipt
M_ISS TYPE P DECIMALS 3, "Month issue
M_CLG TYPE P DECIMALS 3, "Month Closing
D_BLK TYPE P DECIMALS 3, "Block Stock,
D_RPT TYPE P DECIMALS 3, "Today receipt
D_ISS TYPE P DECIMALS 3, "Day issues
TL_FL(2) TYPE C,
STATUS(4) TYPE C,
END OF TY_MTL,
BEGIN OF TY_OPG , " Opening File
SPMON TYPE SPMON, " Period to analyze - month
WERKS TYPE WERKS_D, " Plant
MATNR TYPE MATNR, " Material No
BASME TYPE MEINS,
MZUBB TYPE MZUBB, " Receipt Quantity
WZUBB TYPE WZUBB,
MAGBB TYPE MAGBB, " Issues Quantity
WAGBB TYPE WAGBB,
END OF TY_OPG.
DATA :
T_M TYPE STANDARD TABLE OF TY_MTL INITIAL SIZE 0,
WA_M TYPE TY_MTL,
T_O TYPE STANDARD TABLE OF TY_OPG INITIAL SIZE 0,
WA_O TYPE TY_OPG.
DATA: smonth1 TYPE spmon.
SELECT
a~matnr
a~mtart
a~matkl
a~meins
b~werks
INTO TABLE t_m FROM mara AS a
INNER JOIN marc AS b
ON a~matnr = b~matnr
* WHERE a~mtart EQ s_mtart
WHERE a~matkl IN s_matkl
AND b~werks IN s_werks
AND b~matnr IN s_matnr .
endif.
SELECT spmon
werks
matnr
basme
mzubb
WZUBB
magbb
wagbb
FROM s031 INTO TABLE t_o
FOR ALL ENTRIES IN t_m
WHERE matnr = t_m-matnr
AND werks IN s_werks
AND spmon le smonth1
AND basme = t_m-meins. -
Oracle CPU Jan 2009 cause performance issue
I installed Oracle CPU Jan 2009 on an HP-UX machine. But once the installation was completed, users complained that it takes more than 1 minute to open a new ticket in the application. As a result, the backlog of processes increased tremendously.
Will the CPU Jan 2009 patch cause any network performance issue?
Server: HP-UX Itanium 64 bit
Database: Oracle 10.2.0.3.0
Instances: 2 instances running on this server.
Edited by: user3858134 on Oct 26, 2009 9:30 PM

I believe the latest CPU patch for Oracle 10.2.0.3 on HP is CPU Jan 2009 only. Don't you think your database should be on 10.2.0.4?
Anyway, do you have any baseline Statspack/AWR report? Can you compare it with the latest one? Do you see any difference?
Regards,
S.K. -
MDX calculated measure causing performance issue
The calculated measure below against all product members is causing the excel pivot table to hang indefinitely. Any help on how to optimize the query for better performance?
SCOPE ([MEASURES].[DIDaysInMonth]);
THIS = CASE WHEN [Measures].[MonthDifference] < 0 THEN 0
WHEN [MEASURES].[MonthDifference] >= 0 AND ProjectedEnd > 0 THEN [MEASURES].[DaysRemainingInMonth]
WHEN [MEASURES].[MonthDifference] = 0 AND ProjectedEnd < 0 THEN
[Measures].[Ordered Cases] / (([Measures].[Forecasted Sales]-[Measures].[Cases])/[measures].[DaysRemainingInMonth])
WHEN [MEASURES].[MonthDifference] >= 0 AND ([Time Monthly].[Time Monthly].CurrentMember.PrevMember,[MEASURES].[ProjectedEnd]) <= 0 THEN 0
WHEN [MEASURES].[MonthDifference] > 0 AND ([Time Monthly].[Time Monthly].CurrentMember.PrevMember,[MEASURES].[ProjectedEnd]) > 0 THEN
([Time Monthly].[Time Monthly].CurrentMember.PrevMember,[MEASURES].[ProjectedEnd]) /
([Forecasted Sales] / [daysInMonth]) END;
END SCOPE;
BI Developer

Hi Abioye,
According to your description, you created a calculated measure against all products in your AS cube, and now performance is poor when using this calculated measure in an Excel pivot table, right? In this case, here are some links which describe tips about performance tuning in SSAS; please see:
http://technet.microsoft.com/en-us/library/cc966527.aspx
http://sqlmag.com/t-sql/top-9-analysis-services-tips
Hope this helps.
Regards,
Charlie Liao
TechNet Community Support -
Performance issue Create table as select BLOB
Hi!
I have a performance issue when moving BLOBs between tables (the image files range from 2MB to 10MB).
I'm using follwing statement for example,
"Create table tmp_blob as select * from table_blob
where blob_id = 333;"
Are there any hints I can give when moving data like this, or is Oracle 10g better with BLOBs?

Did you find a resolution to this issue?
We are also having the same issue and are wondering if there is a faster mechanism to copy LOBs between two tables. -
WEBUTIL - Does adding it to all forms cause performance issues?
If I add the WebUtil library and object library to all forms in the system (as part of a standard template), despite the fact that most won't use it, will this cause any performance issues?
Thanks in advance...

The WebUtil user guide has a chapter on performance considerations. Have you looked at that?
The number one point from that chapter is:
1. Only WebUtil-enable forms that actually need the functionality. Each form that is WebUtil-enabled will generate a certain amount of network traffic and memory usage simply to instantiate the utility, even if you don't use any WebUtil functionality. -
Performance issue in linux while using set with URL object
Hi,
I am facing a performance issue while using a Set (HashSet) with URL objects on Linux, but it runs perfectly on Windows.
I am using
set.contains(urlObject)
The above statement takes 40 seconds on Linux, and only a fraction of a millisecond on Windows.
I have checked the JRE version on both OSes; it is the same version (JRE 6) on both.
Could anyone please tell me the exact reason why the same statement takes more time on Linux than on Windows?
Thanks & Regards
Naveen

jtahlborn wrote:
I believe the URL hashCode/equals implementations have some tricky behavior which involves network access in order to run (doing hostname lookups and the like). You may want to either use simple Strings, or possibly the URI class (I think it fixed some of this behavior, although I could be wrong).

The second new thing I have learned today. I was wrong in reply #1: looking at the URL code for 1.6, I see that the hash code is generated from the IP address and has lazy evaluation. Each URL placed in a HashMap (or other hash-based collection) requires a DNS lookup the first time its hash code is used.
P.S. 40 seconds does seem a long time for a DNS lookup!
Edited by: sabre150 on Feb 13, 2008 3:40 PM -
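The general failure mode sabre150 describes is easy to reproduce in any language: when a key type computes its hash lazily and expensively, every first hash of a key pays that cost, which is what a HashSet full of URLs does with DNS. A Python sketch of the idea (the sleep is only a stand-in for a DNS lookup; this is an analogy, not the Java code):

```python
import time

class SlowKey:
    """Key whose hash is computed lazily and expensively, mimicking
    java.net.URL resolving a hostname the first time hashCode() runs."""
    def __init__(self, name):
        self.name = name
        self._hash = None
    def __hash__(self):
        if self._hash is None:
            time.sleep(0.02)        # stand-in for a DNS lookup
            self._hash = hash(self.name)
        return self._hash
    def __eq__(self, other):
        return isinstance(other, SlowKey) and self.name == other.name

keys = [SlowKey("host%d" % i) for i in range(20)]

t0 = time.perf_counter()
s = set(keys)                       # 20 first-time "lookups"
slow = time.perf_counter() - t0

t0 = time.perf_counter()
found = keys[0] in s                # hash already cached, so fast
fast = time.perf_counter() - t0
print(f"build: {slow:.2f}s, cached lookup: {fast:.5f}s, found={found}")
```

Using plain strings (or java.net.URI, whose hash is computed from the string form rather than a resolved address) avoids paying the lookup at all.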
Performance Issue with Selection Screen Values
Hi,
I am facing what seems like a performance issue in my project.
I have a query with some RKFs and sales area in filters (single value variable which is optional).
Query is by default restricted by current month.
The Cube on which the query operates has around 400,000 records for a month.
The Cube gets loaded every three hours
When I run the query with no filters I get the output within 10~15 secs.
The issue I am facing is that when I enter a sales area in my selection screen, the query gets stuck in the data selection step. In fact, we face the same problem if we use one or two other characteristics in our selection screen.
We have aggregates/indexes etc on our cube.
Has any one faced a similar situation?
Does any one have any comments on this ?
Your help will be appreciated. Thanks

Hi A R,
Go to RSRT, give your query name, and choose Execute + Debug. A pop-up will appear with many checkboxes; select the "Display Aggregates Found" option, then give your selections in the variable screen. It will first show the names of the already existing aggregates; continue, and after displaying all the aggregates it will display the list of objects related to each cube. Copy these objects into Notepad. Now repeat with your drill-downs: you will get the already existing aggregates for each drill-down and the related list of objects; copy them to Notepad as well. Sort all the objects related to one cube, deleting duplicate objects in Notepad. Then go to that InfoCube, open the context menu, choose Maintain Aggregates, and create an aggregate on the objects you copied into Notepad.
Now try to execute the report; it should work properly, without delays, for those selections.
I hope it helps you...
Regards,
Ramki. -
How to use the :$PROFILE$ token in a table validation value set
Hi Community,
Let me explain the scenario.
We have a flex value set (CLIENTES SERVICIO DIRECTO) with table validation, included in the PO Headers DFF, which shows a LOV with the ship-to addresses of the customer we want to ship the goods to; i.e., we are acting as commissionists and the supplier puts the goods in the ship-to address of the customer.
This Flex Value Set, as I mentioned, have a table validation, with the following validation table information:
Table Application: Oracle Receivables
Table Name: RA_ADDRESSES_ALL a, RA_CUSTOMERS b
Table Columnns: b.CUSTOMER_NAME Varchar2(20), a.PARTY_LOCATION_ID Varchar2(20)
Where/Order By: a.CUSTOMER_ID = b.CUSTOMER_ID
Additional Columns: a.ADDRESS1 "Dirección"(20), a.CITY "Ciudad"(10)
If we translate this to a SQL code:
select a.address1,
a.city,
a.party_location_id,
b.customer_name
from ra_addresses_all a,
ra_customers b
where a.customer_id = b.customer_id
and b.customer_id = 6283 <--- This last condition narrows the result to the customer of interest.
This select retrieves two records for the same customer: one for one organization_id (let's say 85) and one for the other organization_id (84).
What we are looking for is that the LOV, which currently displays the two records, shows only the ship-to site of the customer that belongs to the organization_id of the user who queries or creates the purchase order.
I.e., suppose we enter Purchasing using the responsibility assigned to operating unit (organization_id) 84. We want the LOV to show only the ship-to site of the customer that belongs to operating unit 84.
I believe that we can achieve this using the :$PROFILE$ token, but we do not know how.
Any ideas?
Thanks a lot for your answer.

Hi Rcana,
We have just tested your suggestion and it works. We believed the correct expression was fnd_profile.get('profile_name'), but the VALUE feature solved the problem. Thanks for your help.
Regards.