BP3 Performance Issue Adding AD Groups
In our Pre-BP3 QA environment it took 1-2 seconds for OIM to populate the popup list of available AD groups to assign a user. Post-BP3 this takes 35 seconds. It does not affect the population of the popup list for assigning iPlanet (LDAP) groups. It would seem possible this is related to the bug fix for OIM not being able to add AD groups which have a '/' in the common name.
Has anyone else noted this dramatic degradation in performance? Or maybe no one else uses the feature?
Hi Buddha,
Have you checked the permissions of the user who belongs to your security group on the SharePoint site? Go to your site -> Site Settings -> Site Permissions and check the permissions.
As far as I know, user permissions set via AD groups are not updated to the SharePoint site immediately. The AD group information is converted into claims and packed into the security token issued by the STS (Security Token Service).
For troubleshooting your issue, you can configure the token cache to a smaller value:
https://sergeluca.wordpress.com/2013/07/06/sharepoint-2013-use-ag-groups-yes-butdont-forget-the-security-token-caching-logontokencacheexpirationwindow-and-windowstokenlifetime/
http://www.shillier.com/archive/2010/10/25/authorization-failures-with-claims-based-authentication-in-sharepoint-2010.aspx
http://sharepoint.stackexchange.com/questions/56741/authorization-fails-when-using-active-directory-group-membership
http://sharepoint.stackexchange.com/questions/14649/why-are-user-permissions-set-in-ad-not-updated-immediately-to-sharepoint
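As an illustration of the caching behaviour described above (group claims baked into a token that is then reused until it expires), here is a minimal, hypothetical sketch in plain Python; none of these names are SharePoint APIs:

```python
import time

class TokenCache:
    """Hypothetical model of an STS token cache (not a SharePoint API)."""

    def __init__(self, expiration_window_secs, clock=time.time):
        self.expiration_window = expiration_window_secs
        self.clock = clock
        self._cache = {}  # user -> (claims, issued_at)

    def get_token(self, user, lookup_ad_groups):
        entry = self._cache.get(user)
        if entry and self.clock() - entry[1] < self.expiration_window:
            return entry[0]                         # stale claims served from cache
        claims = frozenset(lookup_ad_groups(user))  # AD re-read only on expiry
        self._cache[user] = (claims, self.clock())
        return claims

# Simulated clock so the expiry is deterministic.
now = [0.0]
cache = TokenCache(expiration_window_secs=600, clock=lambda: now[0])

groups = {"buddha": {"Team-A"}}
assert cache.get_token("buddha", lambda u: groups[u]) == frozenset({"Team-A"})

groups["buddha"].add("Site-Owners")   # membership changed in AD...
assert cache.get_token("buddha", lambda u: groups[u]) == frozenset({"Team-A"})  # ...not visible yet

now[0] += 601                         # cache window elapses
assert cache.get_token("buddha", lambda u: groups[u]) == frozenset({"Team-A", "Site-Owners"})
```

Shrinking the expiration window (as the linked posts describe for the real LogonTokenCacheExpirationWindow setting) trades cache hits for faster pickup of AD group changes.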
Thanks,
Eric
Forum Support
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
[email protected]
Eric Tao
TechNet Community Support
Similar Messages
-
Hi All,
I need some assistance with my query below...
If a quote already has some product/quote lines and we try to add another new product/quote line, it takes more time to add the product. As I understand it, the pricing engine is being called for the existing lines as well. How can we avoid the pricing-engine call for the existing lines?
We are setting the following parameters:
l_control_rec.header_pricing_event := 'BATCH'; -- what does it mean when we set this to BATCH?
l_control_rec.price_mode := 'ENTIRE_QUOTE'; -- (possible values could be CHANGE_LINES , QUOTE_LINE)
l_header_rec.pricing_status_indicator := 'C';
l_control_rec.calculate_freight_charge_flag := 'Y';
l_control_rec.calculate_tax_flag := 'Y';
l_header_rec.tax_status_indicator := 'C';
Question: Could someone please help us with whether these parameters can be altered or changed to some other value (for example, PRICE_MODE appears to accept values such as CHANGE_LINES and QUOTE_LINE besides ENTIRE_QUOTE)? That is, can we make the pricing-engine call only for the newly added quote line, instead of repricing the entire quote again and again?
The other question is how we then sync the line-level price values for all the quote lines up to the quote header as totals (TOTAL_LIST_PRICE, TOTAL_TAX, TOTAL_SHIPPING_CHARGE, SURCHARGE, TOTAL_QUOTE_PRICE in the aso_quote_headers_all table).
Also, is there a way to skip the freight-charge and tax calculations completely while adding products to the quote, and do them later, during the Submit to Order functionality?
Could someone please help with the pricing-related parameters and modes to use in order to get around this performance issue?
Thanks
Mithun

Dear Expert,
Activate your controlling area as usual, along with cost centers and profit centers. You can assign an internal order to the particular product line you are seeing and collect the costs of that product line exclusively.
Regards,
Shankar K B -
Performance issue adding a new product line to existing Quote pricing issue
Hi All,
Good morning. I need some assistance, as we are currently stuck on this...
Using the seeded API mentioned here, aso_quote_pub.update_quote, we are trying to add new product/item lines to an existing quote in the Sales Online module, but it is taking a lot of time (there is a performance issue).
Also, if a quote already has some product/quote lines and we try to add another new product/quote line, it takes more and more time.
We are setting the following parameters:
l_control_rec.header_pricing_event := 'BATCH'; -- what does it mean when we set this to BATCH?
l_control_rec.price_mode := 'ENTIRE_QUOTE'; -- (possible values could be CHANGE_LINES , QUOTE_LINE)
l_header_rec.pricing_status_indicator := 'C';
l_control_rec.calculate_freight_charge_flag := 'Y';
l_control_rec.calculate_tax_flag := 'Y';
l_header_rec.tax_status_indicator := 'C';
Question 1: Could someone please help us with whether these parameters can be altered or changed to some other value (for example, PRICE_MODE appears to accept values such as CHANGE_LINES and QUOTE_LINE besides ENTIRE_QUOTE)? That is, can we make the pricing-engine call only for the newly added quote line, instead of repricing the entire quote again and again?
Question 2: How do we then sync the line-level price values for all the quote lines up to the quote header as totals (TOTAL_LIST_PRICE, TOTAL_TAX, TOTAL_SHIPPING_CHARGE, SURCHARGE, TOTAL_QUOTE_PRICE in the aso_quote_headers_all table)?
Question 3: Is there a way to skip the freight-charge and tax calculations completely while adding products to the quote, and do them later, during the Submit to Order functionality?
Could someone please help with the pricing-related parameters and modes to use in order to get around this performance issue?
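Setting the ASO API itself aside (its internals are the open question here), the shape of the desired behaviour, pricing only the new line and then rolling line amounts up to the header totals, can be sketched generically. All names below are illustrative, not Oracle APIs:

```python
# Hypothetical sketch: call the "pricing engine" only for the newly added
# line, then recompute header totals by cheap aggregation (no repricing).

calls = {"pricing_engine": 0}

def price_line(line):
    # Stand-in for a pricing-engine call on ONE line.
    calls["pricing_engine"] += 1
    line["total_price"] = line["qty"] * line["list_price"]
    return line

def sync_header_totals(quote):
    # Analogue of maintaining TOTAL_LIST_PRICE / TOTAL_QUOTE_PRICE on the
    # header: pure aggregation over already-priced lines.
    lines = quote["lines"]
    quote["total_list_price"] = sum(l["qty"] * l["list_price"] for l in lines)
    quote["total_quote_price"] = sum(l["total_price"] for l in lines)

def add_line(quote, new_line):
    price_line(new_line)          # the engine touches only the new line
    quote["lines"].append(new_line)
    sync_header_totals(quote)
    return quote

quote = {"lines": [], "total_list_price": 0.0, "total_quote_price": 0.0}
add_line(quote, {"qty": 2, "list_price": 10.0})
add_line(quote, {"qty": 1, "list_price": 5.0})
assert calls["pricing_engine"] == 2   # one call per line, never the whole quote
assert quote["total_quote_price"] == 25.0
```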
Thanks

Dear Expert,
Activate your controlling area as usual, along with cost centers and profit centers. You can assign an internal order to the particular product line you are seeing and collect the costs of that product line exclusively.
Regards,
Shankar K B -
PERFORMANCE ISSUE IN LOV(ORACLE FORMS)
I have a requirement to populate an LOV in a form, and it is taking a lot of time (a performance issue).
The record group query is:
select segment1 INVENTORY_ITEM ,
inventory_item_id,
description,
primary_uom_code,
decode(service_item_flag, 'Y', service_duration, NULL) service_duration,
service_duration_period_code,
shippable_item_flag,
Decode(bom_item_type ,
1,'MDL',2,'OPT',3,'PLN',4,
Decode( service_item_flag,'Y','SRV',
Decode( serviceable_product_flag,'Y','SVA','STD'))) item_type_code
from mtl_system_items_b --table name
where organization_id = :QOTLNDET_LINES.ORGANIZATION_ID
AND (bom_item_type = 1 or bom_item_type = 4)
AND vendor_warranty_flag = 'N'
AND primary_uom_code <> 'ENR'
AND ((:QOTLNDET_LINES.LINE_CATEGORY_CODE = 'ORDER' and customer_order_enabled_flag = 'Y') OR
(:LINE_CATEGORY_CODE = 'RETURN' and NVL(returnable_flag, 'Y') = 'Y'))
AND segment1 like :QOTLNDET_LINES.INVENTORY_ITEM || '%'
Whenever I enter :QOTLNDET_LINES.INVENTORY_ITEM from the front end, this LOV needs to be displayed.
It is taking more than 3 minutes, depending on the item given.
Please suggest how to reduce this time.
Thanks,
Durga Srinivas
Edited by: DurgaSrinivas_886836 on May 31, 2012 5:14 PM

I had an idea:
record_group1=
select segment1 INVENTORY_ITEM ,
inventory_item_id,
description,
primary_uom_code,
decode(service_item_flag, 'Y', service_duration, NULL) service_duration,
service_duration_period_code,
shippable_item_flag,
Decode(bom_item_type ,
1,'MDL',2,'OPT',3,'PLN',4,
Decode( service_item_flag,'Y','SRV',
Decode( serviceable_product_flag,'Y','SVA','STD'))) item_type_code
from mtl_system_items_b --table name
where organization_id = :QOTLNDET_LINES.ORGANIZATION_ID
AND (bom_item_type = 1 or bom_item_type = 4)
AND vendor_warranty_flag = 'N'
AND primary_uom_code <> 'ENR'
AND ((:QOTLNDET_LINES.LINE_CATEGORY_CODE = 'ORDER' and customer_order_enabled_flag = 'Y') OR
(:LINE_CATEGORY_CODE = 'RETURN' and NVL(returnable_flag, 'Y') = 'Y'))
AND segment1 like :QOTLNDET_LINES.INVENTORY_ITEM
Record_group2 =
select segment1 INVENTORY_ITEM ,
inventory_item_id,
description,
primary_uom_code,
decode(service_item_flag, 'Y', service_duration, NULL) service_duration,
service_duration_period_code,
shippable_item_flag,
Decode(bom_item_type ,
1,'MDL',2,'OPT',3,'PLN',4,
Decode( service_item_flag,'Y','SRV',
Decode( serviceable_product_flag,'Y','SVA','STD'))) item_type_code
from mtl_system_items_b --table name
where organization_id = :QOTLNDET_LINES.ORGANIZATION_ID
AND (bom_item_type = 1 or bom_item_type = 4)
AND vendor_warranty_flag = 'N'
AND primary_uom_code <> 'ENR'
AND ((:QOTLNDET_LINES.LINE_CATEGORY_CODE = 'ORDER' and customer_order_enabled_flag = 'Y') OR
(:LINE_CATEGORY_CODE = 'RETURN' and NVL(returnable_flag, 'Y') = 'Y'))
AND segment1 like :QOTLNDET_LINES.INVENTORY_ITEM || '%'
If the user gives the full item name, I will dynamically assign Record_group1; otherwise I will assign Record_group2, using Set_LOV_Property(), so that when the full item name is given the LOV populates quickly.
Please suggest which triggers I should use.
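The dispatch the poster describes, picking the exact-match record group when a complete item name is supplied and falling back to the prefix query otherwise, can be sketched language-neutrally (all names below are illustrative; in Forms this decision would be made before showing the LOV, swapping the group with Set_LOV_Property):

```python
# Illustrative query shapes; record_group1's wildcard-free LIKE behaves
# like an equality match, record_group2 is the prefix search.
EXACT_QUERY  = "... AND segment1 = :item"            # record_group1
PREFIX_QUERY = "... AND segment1 LIKE :item || '%'"  # record_group2

def pick_record_group(user_input, known_items):
    """Return (record group name, query shape) for the typed input."""
    if user_input in known_items:        # a complete item name was typed
        return "record_group1", EXACT_QUERY
    return "record_group2", PREFIX_QUERY

items = {"AS54888", "CM13139"}
assert pick_record_group("AS54888", items)[0] == "record_group1"
assert pick_record_group("AS5", items)[0] == "record_group2"
```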
Edited by: DurgaSrinivas_886836 on May 31, 2012 6:49 PM -
SQL Performance issue: Using user defined function with group by
Hi Everyone,
I'm new here and could really use some help with a weird performance issue. I hope this is the right topic for SQL performance issues.
OK, so I created a function for converting a date from the GMT timezone to a specified timezone.
CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
IS
tz_name VARCHAR2(100);
date_out date;
BEGIN
SELECT
to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
INTO date_out
FROM dual;
RETURN date_out;
END fnc_user_rep_date_to_local;

The following statement is just an example; the real statement is much more complex. I select some date values from a table and aggregate a little.
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp;

This statement selects ~70,000 rows and takes ~70 ms.
If i use the function it selects the same number of rows ;-) and takes ~ 4 sec ...
select
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin');

I understand that the DB has to execute the function for each row.
But if I execute the following statement, it takes only ~90ms ...
select
fnc_user_rep_date_to_gmt(stp_end_stamp,'Europe/Berlin','ny21654'),
noi
from (
select
stp_end_stamp,
count(*) noi
from step
where
stp_end_stamp
BETWEEN
to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
AND
to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
group by
stp_end_stamp
);

The execution plan for all three statements is EXACTLY the same!!!
Usually I would say to just use the third statement and the world is in order. BUT I'm working on a BI project with a tool called Business Objects, which generates the SQL, so my hands are tied and I can't make the tool generate the SQL as a subselect.
My questions are:
Why is the second statement sooo much slower than the third?
and
How can I force the optimizer to do whatever it is doing to make the third statement so fast?
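For intuition (a model, not Oracle internals): when the function appears in the GROUP BY it must be evaluated once per input row, whereas applied outside an already-grouped inline view it runs once per distinct value, which is consistent with the 4 s vs ~90 ms timings. Counting calls makes the difference visible:

```python
# Trivial stand-in for the (expensive) timezone-conversion function.
calls = {"n": 0}

def f(x):
    calls["n"] += 1
    return x

rows = [i % 10 for i in range(70_000)]   # 70,000 rows, 10 distinct stamps

# Statement-2 shape: GROUP BY f(col) -- f evaluated per row.
calls["n"] = 0
grouped = {}
for r in rows:
    key = f(r)
    grouped[key] = grouped.get(key, 0) + 1
per_row_calls = calls["n"]

# Statement-3 shape: GROUP BY col first, then f applied per group.
calls["n"] = 0
pre = {}
for r in rows:
    pre[r] = pre.get(r, 0) + 1
result = {f(k): v for k, v in pre.items()}
per_group_calls = calls["n"]

assert per_row_calls == 70_000
assert per_group_calls == 10
assert grouped == result   # identical output, vastly fewer function calls
```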
I would really appreciate some help on this really weird issue.
Thanks in advance,
Andi

Hi,
"The execution plan for all three statements is EXACTLY the same!!!" - not exactly. The plans are the same, true, but they use slightly different approaches to call the function. See:
drop table t cascade constraints purge;
create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
exec dbms_stats.gather_table_stats(user, 't');
create or replace function test_fnc(p_int number) return number is
begin
return trunc(p_int);
end;
explain plan for select id from t group by id;
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from t group by test_fnc(id);
select * from table(dbms_xplan.display(null,null,'advanced'));
explain plan for select test_fnc(id) from (select id from t group by id);
select * from table(dbms_xplan.display(null,null,'advanced'));

Output:
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL>
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
2 - SEL$1 / T@SEL$1
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$1" "T"@"SEL$1")
OUTLINE_LEAF(@"SEL$1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "TEST_FNC"("ID")[22]
2 - "ID"[NUMBER,22]
34 rows selected.
SQL>
Explained.
SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
PLAN_TABLE_OUTPUT
Plan hash value: 47235625
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 30 | 162 (3)| 00:00:02 |
| 1 | HASH GROUP BY | | 10 | 30 | 162 (3)| 00:00:02 |
| 2 | TABLE ACCESS FULL| T | 10000 | 30000 | 159 (1)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$F5BB74E1
2 - SEL$F5BB74E1 / T@SEL$2
Outline Data
/*+
BEGIN_OUTLINE_DATA
FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
MERGE(@"SEL$2")
OUTLINE_LEAF(@"SEL$F5BB74E1")
ALL_ROWS
OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
Column Projection Information (identified by operation id):
1 - (#keys=1) "ID"[NUMBER,22]
2 - "ID"[NUMBER,22]
37 rows selected. -
Performance issue with grouping components
Hi Guys,
I am building a dashboard in Dashboards 4.1 using Live Office connections. The initial summary view contains multiple charts, labels, customized image components, etc. which have all been grouped into one component. The user needs to able to filter the dashboard based on "Dept Name", "Employee Type" and "Month".
Now, to filter on "Dept Name", there are 5 different check boxes provided for each department inside a pie chart. Based on the selection, all the data in the dashboard will change. The way I am thinking of achieving this is by creating 5 copies of the initial grouped component and then setting dynamic visibility on each based on the check box selection. I will also change the data mapping for each copy of the grouped component.
Similarly, I am thinking about doing the same for the filter for "Employee Type" & "Month"
My question is: is this a good method to achieve the task? Will it cause any performance issues?

Copying the same set of components 5 or 7 times will result in a model that is slower to load and may be slower to use. If possible, try to limit the number of components to one set and move the data around the spreadsheet instead. This can be hard in some cases, and depending on how you do it could also affect performance.
I have found that the more objects you copy on the canvas the more liable to corruption the file gets as well.
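The alternative the reply suggests, one set of components with the data moved underneath them when a filter changes, can be modelled roughly like this (plain Python, illustrative names only):

```python
# One component, many data slices: on a filter change, re-point the single
# component's binding instead of toggling visibility on five grouped copies.

datasets = {
    "Sales":   [10, 20, 30],
    "Support": [5, 6, 7],
}

binding = {"chart_range": None}   # the single component's data binding

def on_filter_change(dept):
    # Analogue of moving the selected slice into the range the chart reads.
    binding["chart_range"] = datasets[dept]

on_filter_change("Support")
assert binding["chart_range"] == [5, 6, 7]
```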
As always, designing a dashboard is a balance between complexity and usability. -
WEBUTIL - Does adding it to all forms cause performance issues?
If I add the WebUtil library and object library to all forms in the system (as part of a standard template), despite the fact that most won't use it, will this cause any performance issues?
Thanks in advance...

The WebUtil user guide has a chapter on performance considerations. Have you looked at that?
The number one point from that chapter is:
1. Only WebUtil-enable forms that actually need the functionality. Each form that is WebUtil-enabled will generate a certain amount of network traffic and memory usage simply to instantiate the utility, even if you don't use any WebUtil functionality. -
Performance Issue in Oracle EBS
Hi Group,
I am working in a performance issue at customer site, let me explain the behaviour.
There is one node for the database and other for the application.
Application server is running all the services.
EBS version is 12.1.3 and database version is: 11.1.0.7 with AIX both servers..
Customer has added memory to both servers (database and application) initially they had 32 Gbytes, now they have 128 Gbytes.
Today, I have increased memory parameters for the database and also I have increased JVM's proceesses from 1 to 2 for Forms and OAcore, both JVM's are 1024M.
The behaviour is that when users navigate inside a form and press the down key quickly, the form hangs (reloading and waiting 1 or 2 minutes to respond). It is not particular to a specific form; it happens in several forms.
Gathering statistics job is scheduled every weekend, I am not sure what can be the problem, I have collected a trace of the form and uploaded it to Oracle Support with no success or advice.
I have just run a ping command, and the response time between the servers is below 5 ms.
I have several activities in mind like:
- OATM conversion.
- ASM implementation.
- Upgrade to 11.2.0.4.
Has anybody had this behaviour?, any advice about this problem will be really appreciated.
Thanks in advance.
Kind regards,
Francisco Mtz.

Hi Bashar, thank you very much for your quick response.
If both servers are on the same network then the ping should not exceed 2 ms.
If I remember, I did a ping last Wednesday, and there were some peaks over 5 ms.
Have you checked the network performance between the clients and the application server?
Also, I did a ping from the PC to the application and database, and it was responding in less than 1 ms.
What is the status of the CPU usage on both servers?
There is no overhead on the CPU side; I tested it (the scrolling getting frozen) with no users in the application.
Did this happen after you performed the hardware upgrade?
Yes, it happened after changing some memory parameters in the JVM and the database.
Oracle has suggested to apply the latest Forms patches according to this Note: Doc ID 437878.1
Thanks in advance.
Kind regards,
Francisco Mtz. -
How do I handle large resultsets in CRXI without a performance issue?
Hello -
Problem Definition
I have a performance problem displaying large/huge resultset of data on a crystal report. The report takes about 4 minutes or more depending on the resultset size.
How do you handle large resultsets in Crystal Reports without a performance issue?
Environment
Crystal Reports XI
Apache WebSvr 2.X, Jboss 4.2.3, Struts
Java Reporting Component (JRC),Crystal Report Viewer (CRV)
Firefox
DETAILS
I use the CRXI thick client to build my report (.rpt) and then use it in my webapplication (webapp) under Jboss.
User specifies the filter criteria to generate a report (date range etc) and submits the request to the webapp. Webapp queries the database, gets a "resultset".
I initialize the JRC and CRV according to all the specifications and finally call the "processHttpRequest" method of Crystal Report Viewer to display the report on browser.
So.....
- Request received to generate a report with a filter criteria
- Query DB to get resultset
- Initialize JRC and CRV
- finally display the report by calling
reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
The performance problem is within the last step. I put logs everywhere and noticed that the database query doesn't take too long to return the resultset. Everything processes pretty quickly until I call processHttpRequest of CRV. This method just hangs for a long time before displaying the report in the browser.
CRV runs pretty fast when the resultset is smaller, but for large resultset it takes a long long time.
I do have subreports and use Crystal report formulas on the reports. Some of them are used for grouping also. But I dont think Subreports is the real culprit here. Because I have some other reports that dont have any subreports, and they too get really slow displaying large resultsets.
Solutions?
So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
I have thought of some half baked ideas.
A) Use external pagination and fetch data only for the page currently displayed. But for this, CRXI must allow me to create my own buttons (previous, next, last) so I can control the click event and fetch data accordingly. I tried capturing events by registering the CRV event handler "addToolbarCommandEventListener", but my listener gets invoked after the processHttpRequest method completes, which doesn't help.
Some how I need to be able to control the UI by adding my own previous page, next page, last page buttons and controlling it's click events.
B) Automagically have CRXI use a javascript functionality, to allow browser side page navigation. So maybe the first time it'll take 5 mins to display the report, but once it's displayed, user can go to any page without sending the request back to server.
C) Try using Crystal Reports 2008. I'm open to using this version, but I couldn't figure out if it has any features that can help me do external pagination or anything else that can handle large resultsets.
D) Will using the Crystal Reports Servers like cache server/application server etc help in any way? I read a little on the Crystal Page Viewer, Interactive Viewer, Part Viewer etc....but I'm not sure if any of these things are going to solve the issue.
I'd appreciate it if someone can point me in the right direction.

Essentially, the answer is to use smaller resultsets, or to pull from the database directly instead of using resultsets.
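Idea (A), external pagination, can be sketched independently of CRXI: fetch only the rows for the page being viewed and push the windowing into the query itself. The database driver below is simulated and all names are illustrative:

```python
# External pagination sketch: the report is handed one page of rows at a time;
# in real code run_query would push the window into SQL (OFFSET/FETCH, or a
# ROWNUM window on older Oracle versions).

PAGE_SIZE = 50

def fetch_page(run_query, page_number, page_size=PAGE_SIZE):
    """run_query(offset, limit) returns at most `limit` rows from `offset`."""
    offset = (page_number - 1) * page_size
    return run_query(offset, page_size)

# Simulated table of 10,000 rows:
data = list(range(10_000))

def run_query(offset, limit):
    return data[offset:offset + limit]

page_3 = fetch_page(run_query, 3)
assert page_3[0] == 100 and len(page_3) == 50   # rows 100..149
```

The report engine then only ever formats PAGE_SIZE rows, regardless of how large the full resultset is; the previous/next buttons just re-issue the query with a different offset.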
-
Hi,
I'm new to this forum and to javafx, so hi everybody :-)
I'm currently evaluating possible client technologies for a customer that is planning to create an application to edit big annotated graphs with a fixed layout.
Basically, those graphs consist of lines between nodes and "glyphs" denoting various annotations arranged on those lines.
Graphs are sized in the range of about 10,000 to 1,000,000 displayable objects, but no more than 1,000 are visible at the same time.
I was able to build a prototype using javafx and encountered some performance issues with scrolling, panning, rotating and zooming a test graph that consists of about 40.000 javafx.Nodes. Those nodes are partially arranged hierarchically according to their position.
The hardware my customer uses does not support hardware accelerated rendering and the customer is not willing to upgrade just for that application, so I had to optimize.
When I tried to enable caching for areas in my graph, the performance for scrolling and panning was OK, but for rotating and zooming it wasn't. In addition I ran into problems when zooming in too far - probably because of the size of the cached images involved.
During my experiments I found out that it was possible to markedly improve performance for all operations mentioned above by selectively turning subareas of the graph invisible.
Consequently, my next try was to add a ChangeListener to the hvalueProperty of the enclosing ScrollPane that selectively toggles the visibility of subgraphs to show only those near the current viewport (aided by a data structure to give me quick access to subregions by position).
This works well except for one thing:
Toggling the visibility of a subgraph via setVisible() invokes a layout pass in the scene graph that makes my viewport jump around during scrolling and panning.
I already added a background rectangle to the ScrollPane's content Group to avoid a change of the bounds of the content. Still, when the visibility of one subgraph is toggled, the scene jumps to an incorrect position for one mouse event and jumps back after that.
I also already tried to declare the subregions unmanaged - this didn't change anything.
h3. So finally, my questions:
1) Are there other means of performance optimizations built into the framework I'm not aware of?
2) Is it possible to subclass Group in a way that allows me to set its visibility without triggering the property's dependents? (Since the invisibility is a technical optimization and not a real property of the graph, a layout pass is neither necessary nor desired.)
3) Are any optimizations of this kind planned for those of us who cannot use hardware acceleration (e.g. arrange the children of a Group in a quad-tree and process only the visible parts)?
4) Would hardware acceleration even help here?
5) Is anyone aware of other means to performance tweak very large scene graphs in javafx?
6) Is the jumping of the viewport a bug I should enter into JIRA ?
Thank you for following my ramblings up to this point, and thank you for your help (if you do help ;-) ).

Cool, I didn't realize there was an OpenGL pipeline for Java2D in the 1.5 API =)
And i who got the LWJGL library to play with, seems like overkill, gonna try it out someday!
Thx for the replies, the game progresses good and i got some cool features that is gonna be implemented.
At this point i got a full working editor with easy file loading,monster/waypoint system, able to choose between 3 different towers and placing them on the playfield, upgrading 4 different aspects of every tower or sell them, kill monsters -> gain money/kills, monsters can finish the map and thus reducing number of lives of the player until game over. So basically i just need a lot of graphics and create more types of monsters/turrets with cool effects. (got homing missiles/slowing shots and normal bullets atm) and figure out how to get a nice background working on my current playground, and then probably some sweet menu to start the game from =) Ofc some balancing needs to be done, with respect to monster HP and towers damage etc. Anyone interested in creating graphics for monsters/towers/projectiles for this project are free to contact me and get credit for their work if this shit ever comes out to be any good :P -
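The subgraph-visibility scheme described in the question above, bucketing nodes into a coarse grid and showing only the buckets near the viewport, can be modelled outside JavaFX like this (in a real scene graph the values would be Nodes and the output would drive setVisible() calls; the cell size is an arbitrary assumption):

```python
from collections import defaultdict

CELL = 100.0  # grid cell size in scene units (illustrative)

def build_index(nodes):
    """nodes: iterable of (node_id, x, y) -> dict mapping grid cell -> ids."""
    index = defaultdict(list)
    for node_id, x, y in nodes:
        index[(int(x // CELL), int(y // CELL))].append(node_id)
    return index

def visible_nodes(index, vx, vy, vw, vh):
    """Ids of nodes whose grid cell intersects viewport (vx, vy, vw, vh)."""
    out = []
    for cx in range(int(vx // CELL), int((vx + vw) // CELL) + 1):
        for cy in range(int(vy // CELL), int((vy + vh) // CELL) + 1):
            out.extend(index.get((cx, cy), []))
    return out

idx = build_index([("a", 10, 10), ("b", 250, 40), ("c", 5000, 5000)])
assert set(visible_nodes(idx, 0, 0, 300, 100)) == {"a", "b"}   # "c" is culled
```

A scroll listener would diff the previous visible set against the new one and toggle visibility only for nodes that crossed the boundary, which keeps the per-event work proportional to the viewport, not the graph.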
Can't access root share sometimes and some strange performance issues
Hi :)
I'm sometimes getting error 0x80070043 "The network name cannot be found" when accessing \\dc01 (the root), but can access shares via \\dc01\share.
When I get that error I also didn't get the network drive hosted on that server set via Group Policy, it fails with this error:
The user 'W:' preference item in the 'GPO Name' Group Policy Object did not apply because it failed with error code '0x80070008 Not enough storage is available to process this command.' This error was suppressed.
The client is Windows Server 2012 Remote Desktop and file server is 2012 too. On a VMware host.
Then I log off and back on, and no issues.
Maybe related, and maybe where the problem is: when I have the issue above, and sometimes when I don't (the network drive is added fine), I see some strange performance issues on the share/network drive: Word, Excel and PDF files open very slowly. Office says "Contacting \\dc01\share..." for 20-30 seconds and then opens the file. Text files don't have that problem.
I have a DC02 server, also 2012, with no issues like this.
Any tips on how to troubleshoot?

Hi,
Based on your description, you can access shares on the DC via \\dc01\share, but you cannot access them via \\dc01.
Please check the Network Path in the Properties of the shared folders first. If the network path is \\dc01\share, you should access the shared folder by using \\dc01\share.
And when you configure Drive Maps via domain group policy, you should also type the Network Path of the shared folders in the Location edit box.
About Office files opening very slowly, there are some possible reasons. File validation can slow down the opening of files; this problem is caused by the issue mentioned above.
Here is a similar thread about slow opening of Office files from a network share:
http://answers.microsoft.com/en-us/office/forum/office_2010-word/office-2010-slow-opening-files-from-network-share/d69e8942-b773-4aea-a6fc-8577def6b06a
For File Validation, please refer to the article below,
Office 2010 File Validation
http://blogs.technet.com/b/office2010/archive/2009/12/16/office-2010-file-validation.aspx
Best Regards,
Tina -
Returning multiple values from a called tabular form(performance issue)
I hope someone can help with this.
I have a form that calls another form to display a multiple-column tabular list of values (it needs to allow user sorting, so I could not use an LOV).
The user selects one or more records from the list using check boxes. To detect the selected records, I loop through the block looking for checked boxes and return those records to the calling form via a PL/SQL table.
The form displaying the tabular list loads quickly (about 5000 records in the base table). However, when I select one or more values and return to the calling form, it takes a while (about 3-4 minutes) to get back with the selected values.
I guess it is looping through the block (all 5000 records) looking for checked boxes, and that is what causes the noticeable pause.
Is this normal given the data volumes I have, or are there better techniques or tricks I could use to improve performance? I am using Forms 6i.
Sorry for being so long-winded, and thanks in advance for any help.

Try writing to your PL/SQL table when the user selects (or removing when they deselect) by using a WHEN-CHECKBOX-CHANGED trigger. This will eliminate the need to loop through a block with 5000 records and should improve your performance.
I am not aware of any performance issues with PL/SQL tables in forms, but if you still have slow performance try using a shared record-group instead. I have used these in the past for exactly the same thing and had no performance problems.
Hope this helps,
Candace Stover
Forms Product Management -
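The suggested trigger-based approach, maintaining the selection incrementally as each checkbox is toggled instead of scanning the whole block at the end, amounts to keeping a set in step with the UI; a rough, language-neutral model:

```python
# Selection maintained as toggles happen (the WHEN-CHECKBOX-CHANGED analogue),
# so returning to the caller needs no O(n) scan of all 5,000 records.

selected = set()

def on_checkbox_changed(record_id, checked):
    """Called once per toggle; O(1) instead of a full block scan later."""
    if checked:
        selected.add(record_id)
    else:
        selected.discard(record_id)

def return_selection():
    # What the called form hands back (the PL/SQL table in the original
    # design): already complete, no loop needed.
    return sorted(selected)

on_checkbox_changed(42, True)
on_checkbox_changed(7, True)
on_checkbox_changed(42, False)   # user changed their mind
assert return_selection() == [7]
```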
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving performance with pipelined table functions" (http://www.oracle-developer.net/display.php?id=429).
Edit: the underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
Many thanks in advance.
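As a side note before the code: the batch-and-pipe loop in the processor function below (FETCH ... BULK COLLECT ... LIMIT, then PIPE ROW) is the PL/SQL analogue of a generator that consumes a cursor in chunks. A minimal Python model of just that mechanic (it says nothing about why the pipelined version is slower here):

```python
# Generator model of the PIPELINED processor: buffer up to limit_size rows
# from the source cursor, then emit them one at a time.

def processor(source, limit_size=100):
    """source: an iterator of rows (stands in for the REF CURSOR)."""
    batch = []
    for row in source:
        batch.append(row)
        if len(batch) == limit_size:    # FETCH ... BULK COLLECT ... LIMIT
            for r in batch:             # PIPE ROW for each buffered row
                yield r
            batch = []
    for r in batch:                     # final partial batch
        yield r

rows = iter(range(250))
assert list(processor(rows, limit_size=100)) == list(range(250))
```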
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;  -- no initialization needed; BULK COLLECT populates it
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / colE END) de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / colD END) ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN 0 END) de_zero
FROM TABLE (processor (base_query (argA, argB), 100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / colE END) de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / colD END) ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN 0 END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on, then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
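As a concrete starting point, here is a minimal sketch of enabling extended SQL trace for one test session (this assumes you have EXECUTE on DBMS_MONITOR and access to the server trace directory; adjust to your environment):

```sql
-- Tag the trace file so it is easy to find in the trace directory.
ALTER SESSION SET tracefile_identifier = 'pipeline_test';

-- Enable extended SQL trace for the current session,
-- including wait events and bind values.
BEGIN
  DBMS_MONITOR.session_trace_enable(waits => TRUE, binds => TRUE);
END;
/

-- ... run the with_pipeline and no_pipeline tests here ...

BEGIN
  DBMS_MONITOR.session_trace_disable;
END;
/
```

Format the resulting raw trace with tkprof and compare the wait-event profiles of the two runs; that will show whether the extra time is spent in the pipelined fetch loop or elsewhere.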
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance Issues with Photoshop CS6 64-Bit
Hello -
Issue at hand: over the course of the last few weeks, I have noticed significant performance issues since the last update to PS CS6 via the Adobe Application Manager, ranging from unexpected shutdowns to bringing my workstation to a crawl (literally, my cursor seems to crawl across my displays). I'm curious whether anyone else is experiencing these issues, or if there is a solution I have not yet tried. Here is a list of actions that result in these performance issues; there are likely more that I have either not experienced due to my frustration or have not documented as occurring multiple times:
Opening files - results in hanging process, takes 3-10 seconds to resolve
Pasting from clipboard - results in hanging process, takes 3-10 seconds to resolve
Saving files - takes 3-10 seconds to open the dialog, another 3-10 seconds to return to normal window (saving a compressed PNG)
Eyedropper tool - will either crash Photoshop to desktop, or take 5-15 seconds to load
Attempting to navigate any menu - will either crash Photoshop to desktop, or take 5-15 seconds to load
Attempts I've taken to resolve this matter, which have failed:
Uninstalled all fonts that I have added since the last update (this was a pain in the ***; thank you, Windows Explorer, for being glitchy)
Uninstalled and reinstalled the application
Used the 32-bit edition
Changed the process priority to Above Normal
Confirmed process affinity to all available CPU cores
Changed the configuration of Photoshop performance options:
61% of memory is available to Photoshop to use (8969 MB)
History states: 20; Cache levels: 6; Cache tile size: 1024K
Scratch disks: active on production SSD, ~10GB space available
Dedicated graphics processor is selected (2x nVidia cards in SLI)
System Information:
Intel i7 2600K @ 3.40GHz
16GB DDR3, Dual Channel RAM
2x nVidia GeForce GTS 450 cards, 1GB each
Windows 7 Professional 64-bit
Adobe Creative Cloud
This issue is costing me time I could be working every day, and I'm about ready to begin searching for alternatives and cancel my membership if I can't get this resolved.
Adobe Photoshop Version: 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00) x64
Operating System: Windows 7 64-bit
Version: 6.1 Service Pack 1
System architecture: Intel CPU Family:6, Model:10, Stepping:7 with MMX, SSE Integer, SSE FP, SSE2, SSE3, SSE4.1, SSE4.2, HyperThreading
Physical processor count: 4
Logical processor count: 8
Processor speed: 3392 MHz
Built-in memory: 16350 MB
Free memory: 12070 MB
Memory available to Photoshop: 14688 MB
Memory used by Photoshop: 61 %
Image tile size: 1024K
Image cache levels: 6
OpenGL Drawing: Enabled.
OpenGL Drawing Mode: Basic
OpenGL Allow Normal Mode: True.
OpenGL Allow Advanced Mode: True.
OpenGL Allow Old GPUs: Not Detected.
OpenCL Version: 1.1 CUDA 4.2.1
OpenGL Version: 3.0
Video Rect Texture Size: 16384
OpenGL Memory: 1024 MB
Video Card Vendor: NVIDIA Corporation
Video Card Renderer: GeForce GTS 450/PCIe/SSE2
Display: 2
Display Bounds: top=0, left=1920, bottom=1080, right=3840
Display: 1
Display Bounds: top=0, left=0, bottom=1080, right=1920
Video Card Number: 3
Video Card: NVIDIA GeForce GTS 450
Driver Version: 9.18.13.1106
Driver Date: 20130118000000.000000-000
Video Card Driver: nvd3dumx.dll,nvwgf2umx.dll,nvwgf2umx.dll,nvd3dum,nvwgf2um,nvwgf2um
Video Mode:
Video Card Caption: NVIDIA GeForce GTS 450
Video Card Memory: 1024 MB
Video Card Number: 2
Video Card: LogMeIn Mirror Driver
Driver Version: 7.1.542.0
Driver Date: 20060522000000.000000-000
Video Card Driver:
Video Mode: 1920 x 1080 x 4294967296 colors
Video Card Caption: LogMeIn Mirror Driver
Video Card Memory: 0 MB
Video Card Number: 1
Video Card: NVIDIA GeForce GTS 450
Driver Version: 9.18.13.1106
Driver Date: 20130118000000.000000-000
Video Card Driver: nvd3dumx.dll,nvwgf2umx.dll,nvwgf2umx.dll,nvd3dum,nvwgf2um,nvwgf2um
Video Mode: 1920 x 1080 x 4294967296 colors
Video Card Caption: NVIDIA GeForce GTS 450
Video Card Memory: 1024 MB
Serial number: 90970233273769828003
Application folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\
Temporary file path: C:\Users\ANDREW~1\AppData\Local\Temp\
Photoshop scratch has async I/O enabled
Scratch volume(s):
C:\, 111.8G, 7.68G free
Required Plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\Required\
Primary Plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\Plug-ins\
Additional Plug-ins folder: not set
Installed components:
ACE.dll ACE 2012/06/05-15:16:32 66.507768 66.507768
adbeape.dll Adobe APE 2012/01/25-10:04:55 66.1025012 66.1025012
AdobeLinguistic.dll Adobe Linguisitc Library 6.0.0
AdobeOwl.dll Adobe Owl 2012/09/10-12:31:21 5.0.4 79.517869
AdobePDFL.dll PDFL 2011/12/12-16:12:37 66.419471 66.419471
AdobePIP.dll Adobe Product Improvement Program 7.0.0.1686
AdobeXMP.dll Adobe XMP Core 2012/02/06-14:56:27 66.145661 66.145661
AdobeXMPFiles.dll Adobe XMP Files 2012/02/06-14:56:27 66.145661 66.145661
AdobeXMPScript.dll Adobe XMP Script 2012/02/06-14:56:27 66.145661 66.145661
adobe_caps.dll Adobe CAPS 6,0,29,0
AGM.dll AGM 2012/06/05-15:16:32 66.507768 66.507768
ahclient.dll AdobeHelp Dynamic Link Library 1,7,0,56
aif_core.dll AIF 3.0 62.490293
aif_ocl.dll AIF 3.0 62.490293
aif_ogl.dll AIF 3.0 62.490293
amtlib.dll AMTLib (64 Bit) 6.0.0.75 (BuildVersion: 6.0; BuildDate: Mon Jan 16 2012 18:00:00) 1.000000
ARE.dll ARE 2012/06/05-15:16:32 66.507768 66.507768
AXE8SharedExpat.dll AXE8SharedExpat 2011/12/16-15:10:49 66.26830 66.26830
AXEDOMCore.dll AXEDOMCore 2011/12/16-15:10:49 66.26830 66.26830
Bib.dll BIB 2012/06/05-15:16:32 66.507768 66.507768
BIBUtils.dll BIBUtils 2012/06/05-15:16:32 66.507768 66.507768
boost_date_time.dll DVA Product 6.0.0
boost_signals.dll DVA Product 6.0.0
boost_system.dll DVA Product 6.0.0
boost_threads.dll DVA Product 6.0.0
cg.dll NVIDIA Cg Runtime 3.0.00007
cgGL.dll NVIDIA Cg Runtime 3.0.00007
CIT.dll Adobe CIT 2.1.0.20577 2.1.0.20577
CoolType.dll CoolType 2012/06/05-15:16:32 66.507768 66.507768
data_flow.dll AIF 3.0 62.490293
dvaaudiodevice.dll DVA Product 6.0.0
dvacore.dll DVA Product 6.0.0
dvamarshal.dll DVA Product 6.0.0
dvamediatypes.dll DVA Product 6.0.0
dvaplayer.dll DVA Product 6.0.0
dvatransport.dll DVA Product 6.0.0
dvaunittesting.dll DVA Product 6.0.0
dynamiclink.dll DVA Product 6.0.0
ExtendScript.dll ExtendScript 2011/12/14-15:08:46 66.490082 66.490082
FileInfo.dll Adobe XMP FileInfo 2012/01/17-15:11:19 66.145433 66.145433
filter_graph.dll AIF 3.0 62.490293
hydra_filters.dll AIF 3.0 62.490293
icucnv40.dll International Components for Unicode 2011/11/15-16:30:22 Build gtlib_3.0.16615
icudt40.dll International Components for Unicode 2011/11/15-16:30:22 Build gtlib_3.0.16615
image_compiler.dll AIF 3.0 62.490293
image_flow.dll AIF 3.0 62.490293
image_runtime.dll AIF 3.0 62.490293
JP2KLib.dll JP2KLib 2011/12/12-16:12:37 66.236923 66.236923
libifcoremd.dll Intel(r) Visual Fortran Compiler 10.0 (Update A)
libmmd.dll Intel(r) C Compiler, Intel(r) C++ Compiler, Intel(r) Fortran Compiler 12.0
LogSession.dll LogSession 2.1.2.1681
mediacoreif.dll DVA Product 6.0.0
MPS.dll MPS 2012/02/03-10:33:13 66.495174 66.495174
msvcm80.dll Microsoft® Visual Studio® 2005 8.00.50727.6195
msvcm90.dll Microsoft® Visual Studio® 2008 9.00.30729.1
msvcp100.dll Microsoft® Visual Studio® 2010 10.00.40219.1
msvcp80.dll Microsoft® Visual Studio® 2005 8.00.50727.6195
msvcp90.dll Microsoft® Visual Studio® 2008 9.00.30729.1
msvcr100.dll Microsoft® Visual Studio® 2010 10.00.40219.1
msvcr80.dll Microsoft® Visual Studio® 2005 8.00.50727.6195
msvcr90.dll Microsoft® Visual Studio® 2008 9.00.30729.1
pdfsettings.dll Adobe PDFSettings 1.04
Photoshop.dll Adobe Photoshop CS6 CS6
Plugin.dll Adobe Photoshop CS6 CS6
PlugPlug.dll Adobe(R) CSXS PlugPlug Standard Dll (64 bit) 3.0.0.383
PSArt.dll Adobe Photoshop CS6 CS6
PSViews.dll Adobe Photoshop CS6 CS6
SCCore.dll ScCore 2011/12/14-15:08:46 66.490082 66.490082
ScriptUIFlex.dll ScriptUIFlex 2011/12/14-15:08:46 66.490082 66.490082
svml_dispmd.dll Intel(r) C Compiler, Intel(r) C++ Compiler, Intel(r) Fortran Compiler 12.0
tbb.dll Intel(R) Threading Building Blocks for Windows 3, 0, 2010, 0406
tbbmalloc.dll Intel(R) Threading Building Blocks for Windows 3, 0, 2010, 0406
updaternotifications.dll Adobe Updater Notifications Library 6.0.0.24 (BuildVersion: 1.0; BuildDate: BUILDDATETIME) 6.0.0.24
WRServices.dll WRServices Friday January 27 2012 13:22:12 Build 0.17112 0.17112
Required plug-ins:
3D Studio 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Accented Edges 13.0
Adaptive Wide Angle 13.0
Angled Strokes 13.0
Average 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Bas Relief 13.0
BMP 13.0
Camera Raw 8.1
Camera Raw Filter 8.1
Chalk & Charcoal 13.0
Charcoal 13.0
Chrome 13.0
Cineon 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Clouds 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Collada 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Color Halftone 13.0
Colored Pencil 13.0
CompuServe GIF 13.0
Conté Crayon 13.0
Craquelure 13.0
Crop and Straighten Photos 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Crop and Straighten Photos Filter 13.0
Crosshatch 13.0
Crystallize 13.0
Cutout 13.0
Dark Strokes 13.0
De-Interlace 13.0
Dicom 13.0
Difference Clouds 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Diffuse Glow 13.0
Displace 13.0
Dry Brush 13.0
Eazel Acquire 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Embed Watermark 4.0
Entropy 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Extrude 13.0
FastCore Routines 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Fibers 13.0
Film Grain 13.0
Filter Gallery 13.0
Flash 3D 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Fresco 13.0
Glass 13.0
Glowing Edges 13.0
Google Earth 4 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Grain 13.0
Graphic Pen 13.0
Halftone Pattern 13.0
HDRMergeUI 13.0
IFF Format 13.0
Ink Outlines 13.0
JPEG 2000 13.0
Kurtosis 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Lens Blur 13.0
Lens Correction 13.0
Lens Flare 13.0
Liquify 13.0
Matlab Operation 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Maximum 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Mean 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Measurement Core 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Median 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Mezzotint 13.0
Minimum 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
MMXCore Routines 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Mosaic Tiles 13.0
Multiprocessor Support 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Neon Glow 13.0
Note Paper 13.0
NTSC Colors 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Ocean Ripple 13.0
Oil Paint 13.0
OpenEXR 13.0
Paint Daubs 13.0
Palette Knife 13.0
Patchwork 13.0
Paths to Illustrator 13.0
PCX 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Photocopy 13.0
Photoshop 3D Engine 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Picture Package Filter 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Pinch 13.0
Pixar 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Plaster 13.0
Plastic Wrap 13.0
PNG 13.0
Pointillize 13.0
Polar Coordinates 13.0
Portable Bit Map 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Poster Edges 13.0
Radial Blur 13.0
Radiance 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Range 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Read Watermark 4.0
Reticulation 13.0
Ripple 13.0
Rough Pastels 13.0
Save for Web 13.0
ScriptingSupport 13.1.2
Shear 13.0
Skewness 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Smart Blur 13.0
Smudge Stick 13.0
Solarize 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Spatter 13.0
Spherize 13.0
Sponge 13.0
Sprayed Strokes 13.0
Stained Glass 13.0
Stamp 13.0
Standard Deviation 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
STL 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Sumi-e 13.0
Summation 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Targa 13.0
Texturizer 13.0
Tiles 13.0
Torn Edges 13.0
Twirl 13.0
Underpainting 13.0
Vanishing Point 13.0
Variance 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Variations 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Water Paper 13.0
Watercolor 13.0
Wave 13.0
Wavefront|OBJ 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
WIA Support 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
Wind 13.0
Wireless Bitmap 13.1.2 (13.1.2 20130105.r.224 2013/01/05:23:00:00)
ZigZag 13.0
Optional and third party plug-ins: NONE
Plug-ins that failed to load: NONE
Flash:
Mini Bridge
Kuler
Installed TWAIN devices: NONE -
Performance issues with version-enabled partitioned tables?
Hi all,
Are there any known performance issues with version-enabled partitioned tables?
I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the OCB optimiser is choosing very expensive plans during merge operations.
Thanks in advance,
Vitor
Example:
Object Name Rows Bytes Cost Object Node In/Out PStart PStop
UPDATE STATEMENT Optimizer Mode=CHOOSE 1 249
UPDATE SIG.SIG_QUA_IMG_LT
NESTED LOOPS SEMI 1 266 249
PARTITION RANGE ALL 1 9
TABLE ACCESS FULL SIG.SIG_QUA_IMG_LT 1 259 2 1 9
VIEW SYS.VW_NSO_1 1 7 247
NESTED LOOPS 1 739 247
NESTED LOOPS 1 677 247
NESTED LOOPS 1 412 246
NESTED LOOPS 1 114 244
INDEX RANGE SCAN WMSYS.MODIFIED_TABLES_PK 1 62 2
INDEX RANGE SCAN SIG.QIM_PK 1 52 243
TABLE ACCESS BY GLOBAL INDEX ROWID SIG.SIG_QUA_IMG_LT 1 298 2 ROWID ROW L
INDEX RANGE SCAN SIG.SIG_QUA_IMG_PKI$ 1 1
INDEX RANGE SCAN WMSYS.WM$NEXTVER_TABLE_NV_INDX 1 265 1
INDEX UNIQUE SCAN WMSYS.MODIFIED_TABLES_PK 1 62
/* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
UPDATE /*+ USE_NL(Z1) ROWID(Z1) */ sig.sig_qua_img_lt z1
SET z1.nextver =
SYS.ltutil.subsversion
(z1.nextver,
SYS.ltutil.getcontainedverinrange (z1.nextver,
'SIG.SIG_QUA_IMG',
'NpCyPCX3dkOAHSuBMjGioQ==',
4574,
4575),
4574)
WHERE z1.ROWID IN (
(SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
t2.ROWID
FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j1,
sig.sig_qua_img_lt t1,
sig.sig_qua_img_lt t2,
wmsys.wm$nextver_table j2,
(SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j3
WHERE t1.VERSION = j1.VERSION
AND t1.ima_id = t2.ima_id
AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
AND t2.nextver != '-1'
AND t2.nextver = j2.next_vers
AND j2.VERSION = j3.VERSION))
Hello Vitor,
There are currently no known issues with version-enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table, depending on the data that needs to be moved or copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
One suggestion would be to make sure that the table has been recently analyzed, so that the optimizer has the most current data about the table.
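For the analyze step, a minimal sketch using DBMS_STATS; the schema and table names are taken from the plan in the question, and the parameters should be adjusted to your environment:

```sql
-- Gather fresh optimizer statistics for the versioned table,
-- cascading to its indexes so the merge plan is costed on current data.
BEGIN
  DBMS_STATS.gather_table_stats(
    ownname     => 'SIG',
    tabname     => 'SIG_QUA_IMG_LT',
    granularity => 'ALL',     -- table, partition, and subpartition stats
    cascade     => TRUE);     -- include index statistics
END;
/
```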
Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the MergeWorkspace operation.
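If you do file one, a sketch of capturing the actual execution plan to attach to it; this assumes Oracle 10g or later (on earlier releases, use EXPLAIN PLAN with DBMS_XPLAN.display instead):

```sql
-- Collect row-source statistics so the plan shows where time is spent.
ALTER SESSION SET statistics_level = ALL;

-- ... run the merge (e.g. DBMS_WM.MergeWorkspace) here ...

-- Display the last plan executed in this session with actual row counts.
SELECT *
FROM TABLE (DBMS_XPLAN.display_cursor (NULL, NULL, 'ALLSTATS LAST'));
```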
Thank You,
Ben