Degradation
Hi Experts,
Could anyone clarify the terms below, which are used on the BW side?
1. Service degradation on BW for reports based on Infocube 'ICXXX'.
2. Degradation time.
3. Outage and outage time.
Thanks,
MBA
Hi Manfred,
Thanks, rewarded points.
If the degradation finished successfully, does that mean end users should expect good or bad performance for their reports? Please clarify.
Thanks,
MBA
Similar Messages
-
Report Performance degradation
hi,
We are using around 16 entities in CRM On Demand R16, which include both default and custom entities.
Since custom entities are not visible in the historical subject area, we decided to stick to real-time reporting.
Now the issue is, we have 45 lakh (4.5 million) records in these entities as a whole. We have reports where we need to retrieve data across all the entities in one report. Initially we tested the reports with a smaller number of records and the performance was not that bad, but it has gradually degraded as we loaded more and more data over a period of time. The reports now take approx. 5-10 minutes and then finally display an error message. In fact, after creating a report structure in Step 1 - Define Criteria and moving to Step 2 - Create Layout, it takes an abnormal amount of time to display. As far as the reports are concerned, we have built them using best practices, except for the "Historical Subject Area" issue.
Ideally, for best performance, how many records should there be in one entity?
What could be the other reasons for such poor performance?
We are working in a multi-tenant environment.
Edited by: Rita Negi on Dec 13, 2009 5:50 AM
Rita,
Any report built over the real-time subject areas will timeout after a period of 10 minutes. Real-time subject areas are really not suited for large reports and you'll find running them also degrades the application performance.
Things that will degrade performance are:
* Joins to other dimensions
* Custom calculations
* Number of records
* Number of fields returned
There are some things that just can't be done in real time. I would look to remove joins to other dimensions, e.g. Accounts/Contacts/Opportunities all in the same report. Apply more restrictive filters, e.g. current week/month, to reduce the number of records required. Alternatively, keep the report very simple, extract to Excel and modify from there. Hopefully in R17 this will be added as a feature, but it seems like you're stuck till then.
Thanks
Oli @ Innoveer -
Sensor Mapping Express VI's performance degrades over time
I was attempting to do a 3d visualization of some sensor data. I made a model and managed to use it with the 3d Picture Tool Sensor Mapping Express VI. Initially, it appeared to work flawlessly and I began to augment the scene with further objects to enhance the user experience. Unfortunately, I believe I am doing something wrong at this stage. When I add the sensor map object to the other objects, something like a memory leak occurs. I begin to encounter performance degradation almost immediately.
I am not sure how best to add the Sensor Map object reference to the scene as an object. Normally I establish these child relationships first, before doing anything to the objects beyond creating, moving, and anchoring them, but the Sensor Map output reference is only available AFTER the Express VI runs. My compromise solution, presently, is to have a case structure controlled by the "First Call?" constant. So far, performance seems to be much better.
Does anyone have a better solution? Am I even handling these objects the way the community does it?
EDIT: Included the vi and the stl files.
Message Edited by Sean-user on 10-28-2009 04:12 PM
Message Edited by Sean-user on 10-28-2009 04:16 PM
Solved!
Go to Solution.
Attachments:
test for forum.vi 105 KB
chamber.zip 97 KB
I agree with Hunter, your current solution is simple and effective, and I can't really visualize a much better way to accomplish the same task.
Just as a side note, the easiest and simplest way to force execution order is to use the error terminals on the functions and VIs in your block diagram. Here's a VI snippet with an example of that, based on the VI you posted. (If you paste the image into your block diagram, you can make edits to the code.)
Since you expressed some interest in documentation related to 3D picture controls, I did some searching and found a few articles you might be interested in. There's nothing terribly complex, but these should be a good starting point. The first link is a URL to the search results, so you can get an idea of where/what I'm searching. You'll get more hits if you search from ni.com rather than ni.com/support.
http://search.ni.com/nisearch/app/main/p/q/3d%20picture/
Creating a 3D Scene with the 3D Picture Control
Configuring a 3D Scene Window
Using the 3D Picture Control 'Create Height Field VI' to convert a 2D image into a 3D textured heigh...
Using Lighting and Fog Effects in 3d Picture Control
3D Picture Control - Create a Moving Texture Using a Series of Images
Changing Set Rotation and Background of 3D Picture Control
Caleb Harris
National Instruments | Mechanical Engineer | http://www.ni.com/support -
CC 2014 - another noticeable degradation in performance
Hi all, just posting to get a consensus on the current performance of CC 2014. For me, it's been another very noticeable downgrade, to the point where it's not usable in my workflow. I've since deleted 2014 and am hoping it doesn't replace Illustrator CC any time soon.
While I love the new pen tool and the general accuracy of selecting anchor points and closing paths, it seems to have brought with it a very noticeable lag in selecting and laying out points. I'd say it performs like previous versions do with Smart Guides enabled on a complex file, only in CC 2014 the lagginess remains with or without Smart Guides enabled.
The file above is used as an example of the problem I'm having; it's a small, relatively basic file. In Illustrator CC, when selecting the paving on the left, I get minimal to zero selection lag. When selecting that same area in Illustrator CC 2014, I get a 0.5 to 1 second delay, with or without Smart Guides enabled.
I have a late 2012 iMac, fully upgraded with best videocard, fastest processor and 32gb of ram. I run my apps off a small local SSD and the artwork files of a Thunderbolt RAID array. It doesn't matter where I put the artwork, local, or RAID, the performance issues are the same.
Anyone having a similar experience? It seems to me that with every release or update Adobe says it comes with "improved performance", but I've yet to see any release that performed better than the previous one.
Hello Crispe,
Could you please share with us the test file @ "[email protected]" so that we can reproduce the problem at our end? Please be assured that your test file would solely be used for reproducing the issue and won't be shared elsewhere.
Please include the link (CC 2014 - another noticeable degradation in performance) in the mail you send.
Regards,
Dhirendra -
Help needed! Raid degraded again!
Hi!
Help needed! I have made a bootable RAID with two SATA II 250 GB HDDs and it's not working! Every now and then at boot-up I get the message RAID -> DEGRADED... This must be the seventh time! The rebuild takes its own time!
What am I doing wrong?
T: Ekku
K8N Neo4 Ultra
AMD 64 4200+
2 Gb RAM
2 x 250 Gb HDD (Maxtor)
nVidia RAID (in mb)
P.S. I'm very sorry for my poor language!
I'm going to blame the nVRAID, because I've seen issues with it in the past. If your motherboard has another non-nVidia RAID solution, use that instead. Using the nVidia SATA ports as BASE or JBOD is fine and dandy, but RAIDing has always had issues. I don't think it's even a driver issue, just instability. The latest drivers and even the boxed drivers never helped. Granted, some will report success with their rig, but on a professional level I've seen nForce issues on different motherboards and different hard drives that ended in RAID disaster stories.
Good luck and if you don't have another RAID solution, my suggestion would be to buy a dedicated RAID controller card.
LPB -
Performance degradation with 11.0.2 CS5 update
Hi!
Has anyone else run into a problem with performance reduction/degradation in GPU mode after updating to 11.0.2 of Flash Pro CS5? I've been working on breakout style game (unBrix), which I carefully built up to run at very close to 60 fps, and it has been fine - until I updated Flash to 11.0.2 (to get Android publish to work, and fix a few other issues).
I have confirmed that downgrading back to the release version of Flash CS5 fixes the performance stuttering that I have noticed since updating to 11.0.2.
I'd love to know if anyone else has noticed a similar problem - or even better isolated the cause - or if anyone has tried downgrading back to see if there is an improvement (I test this by installing the CS5 trial on a VMWare image if that helps).
Thanks,
Kevin N.
I found the same problem with the new update and already wrote about it in this forum.
You can find my post here: http://forums.adobe.com/message/3214594#3214594
But, like you, I don't have any answers.
I also tried some benchmark tests and found that the FPS result is the same for the previous and the updated packager.
So I think the problem is only visual: the new packager drops a lot of frames and looks very slow. ((( -
Performance Degradation of HR Disco reports after upgrade from 1158 - 11510
Hi there
Has anyone else seen a degradation in performance between 4i and 10.1.2 on apps HR modules? Or indeed not against HR modules.
Following an upgrade from 1158 to 11510 we have seen a significant downturn in the performance of long-standing HR reports. We are using Discoverer Desktop in both cases and have a query that is scheduled overnight and usually completed in 2 hours using 4i against 1158. Now it's taking more like 15 hours or more using Discoverer 10.1.2 against the 11510 environment. We have tested using 4i against the 11510 environment and this performs fine, in fact better than it used to in 1158, so it looks like Discoverer 10.1.2 is causing the downturn in performance.
Anyone got anything to share on this??
Cheers, Kate
Message was edited by: kaubon
Kaubon,
There are differences between the Oracle 9i and 10g databases.
If you are using optimizer_features_enable="9.2.0.8" to revert to the old behavior, then you are not utilizing the updated features of 10g. You are making 10g follow 9i behavior.
It is just like buying a new car and damaging it to get the feel of your old car.
We had a similar problem, but we fine-tuned all our queries for 10g. The main problem was the use of rules; you need to either remove or modify those rules.
There are a few good documents on 10g performance on Metalink. You can refer to those.
All the best.
Soham khot
http://oracleappshr.blogspot.com -
Performance Degradation on HR module 4i to 10.1.2
Hi there
Has anyone else seen a degradation in performance between 4i and 10.1.2 on apps HR modules? Or indeed not against HR modules.
Following an upgrade from 1158 to 11510 we have seen a significant downturn in the performance of long-standing HR reports. We are using Discoverer Desktop in both cases and have a query that is scheduled overnight and usually completed in 2 hours using 4i against 1158. Now it's taking more like 15 hours or more using Discoverer 10.1.2 against the 11510 environment. We have tested using 4i against 11510 and this performs fine, in fact better than it used to in 1158, so it looks like Discoverer 10.1.2 is causing the downturn in performance.
Anyone got anything to share on this??
Cheers, Katewe've looked at this more...
The explain plans were exactly the same between 4i and 10.1.2, so it wasn't to do with the aggregation strategy. It seems it was due to the 10g database upgrade (we set a DB parameter called optimizer_features_enable="9.2.0.8" to revert to the old behaviour).
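For reference, a minimal sketch of how that parameter is typically applied - this assumes standard Oracle syntax and uses the 9.2.0.8 value mentioned above; testing per session before changing the instance is the usual approach:

```sql
-- Try the old optimizer behaviour for one session first:
ALTER SESSION SET optimizer_features_enable = '9.2.0.8';

-- If that confirms the regression, set it instance-wide
-- (assumes the instance is started from an spfile):
ALTER SYSTEM SET optimizer_features_enable = '9.2.0.8' SCOPE = BOTH;
```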
Also, a number of required E-Business database initialisation parameters were missing.
kate -
Performance degradation with addition of unicasting option
We have been using the multicast protocol for setting up the data grid between the application nodes, with the VM arguments
*-Dtangosol.coherence.clusteraddress=${Broadcast Address} -Dtangosol.coherence.clusterport=${Broadcast port}*
As a certain node in the application was expected to be in a different subnet and multicasting was not feasible, we opted for well-known addressing, with the following additional VM arguments set up on the server nodes (all in the same subnet):
*-Dtangosol.coherence.machine=${server_name} -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.localport=${server_port}*
and the following on the remote client node, pointing to one of the server nodes:
*-Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.wka.port=${server_port}*
But this deteriorated performance drastically, both in pushing data into the cache and in getting events via a map listener.
From the Coherence logging statements it doesn't seem that multicasting is being used, at least within the server nodes (which are in the same subnet).
Is it feasible to have unicasting and multicasting coexist? How can I verify whether it is set up already?
Is performance degradation with well-known addressing a limitation, and expected?
Hi Mahesh,
From your description it sounds as if you've configured each node with a WKA list including just itself. This would result in N clusters rather than one. Your client would then be serviced by the resources of just a single cache server rather than an entire cluster. If this is the case, you will see that all nodes are identified as member 1. To set up WKA I would suggest using the override file rather than system properties, and placing perhaps 10% of your nodes on that list. Then use this exact same file for all nodes. If I've misinterpreted your configuration, please provide additional details.
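As a rough illustration of the override-file suggestion, the sketch below uses hypothetical host names and a placeholder port; the element layout follows the Coherence 3.x operational override format, so check it against the documentation for your version:

```xml
<!-- tangosol-coherence-override.xml: deploy this SAME file to ALL nodes.
     server1/server2 stand in for roughly 10% of your cluster's nodes. -->
<coherence>
  <cluster-config>
    <unicast-listener>
      <well-known-addresses>
        <socket-address id="1">
          <address>server1.example.com</address>
          <port>8088</port>
        </socket-address>
        <socket-address id="2">
          <address>server2.example.com</address>
          <port>8088</port>
        </socket-address>
      </well-known-addresses>
    </unicast-listener>
  </cluster-config>
</coherence>
```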
Thanks,
Mark
Oracle Coherence -
Performance degradation with -g compiler option
Hello
Our measurement of a simple program compiled with and without the -g option shows a big performance difference.
Machine:
SunOS xxxxx 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-V250
Compiler:
CC: Sun C++ 5.9 SunOS_sparc Patch 124863-08 2008/10/16
#include <ctime>
#include <iostream>

int main(int argc, char **argv)
{
    for (int i = 0; i < 60000; i++) {
        int *mass = new int[60000];
        for (int j = 0; j < 10000; j++) {
            mass[j] = j;
        }
        delete [] mass;
    }
    return 0;
}

Compilation and execution with -g:
CC -g -o test_malloc_deb.x test_malloc.c
ptime test_malloc_deb.x
real 10.682
user 10.388
sys 0.023
Without -g:
CC -o test_malloc.x test_malloc.c
ptime test_malloc.x
real 2.446
user 2.378
sys 0.018
As you can see, the performance degradation from "-g" is about 4x.
Our product is compiled with the -g option, and before shipment it is stripped using the 'strip' utility.
This gives us the possibility of opening customer core files using the non-stripped executable.
But our tests show that stripping does not restore the performance of an executable compiled without '-g'.
So we are losing performance by using this compilation method.
Is this expected behavior of the compiler?
Is there any way to have the -g option "on" and not lose performance?
In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation - you're basically telling the compiler that you are not interested in performance. Adding -g to this requests maximal debug, so the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
If you are using C++, then in SS12 -g will switch off front-end inlining, so again you'll get some performance hit. Use -g0 to get both inlining and debug.
HTH,
Darryl. -
Performance degradation factor 1000 on failover???
Hi,
we are gaining first experience with WLS 5.1 EBF 8 clustering on
NT4 SP 6 workstation.
We have two servers in the cluster, both on the same machine but with different IP addresses (as it has to be)!
In general it seems to work: we have a test client connecting to
one of the servers and
uses a stateless test EJB which does nothing but writing into weblogic.log.
When this server fails, the other server resumes to work the client
requests, BUT VERY VERY VERY SLOW!!!
- I should repeat VERY a thousand times, because a normal client
request takes about 10-30 ms
and after failure/failover it takes 10-15 SECONDS!!!
As naive as I am I want to know: IS THIS NORMAL?
After the server is back, the performance is also back to normal,
but we were expecting a much smaller
performance degradation.
So I think we are doing something totally wrong!
Do we need some Network solution to make failover performance better?
Or is there a chance to look closer at deployment descriptors or
weblogic.system.executeThreadCount
or weblogic.system.percentSocketReaders settings?
Thanks in advance for any help!
Fleming
See http://www.weblogic.com/docs51/cluster/setup.html#680201
Basically, the rule of thumb is to set the number of execute threads ON THE CLIENT to 2 times the number of servers in the cluster, and the percent socket readers to 50%. In your case with 2 WLS instances in the cluster, add the following to the java command line used to start your client:
-Dweblogic.system.executeThreadCount=4
-Dweblogic.system.percentSocketReaders=50
Hope this helps,
Robert
Fleming Frese wrote:
> Hi Mike,
>
> thanks for your reply.
>
> We do not have HTTP clients or Servlets, just EJBs and clients
> in the same LAN,
> and the failover should be handled by the replica-aware stubs.
> So we thought we need no Proxy solution for failover. Maybe we
> need a DNS to serve failover if this
> increases our performance?
>
> The timeout clue sounds reasonable, but I would expect that the
> stub times out once and than switches
> to the other server for subsequent requests. There should be a
> refresh (after 3 Minutes?) when the stub
> gets new information about the servers in the cluster, so he could
> check then if the server is back.
> This works perfectly with load balancing: if a new server joins
> the cluster, it automatically receives requests after a while.
>
> Fleming
>
> "Mike Reiche" <[email protected]> wrote:
> >
> >It sounds like every request is first timing out it's
> >connection
> >attempt (10 seconds, perhaps?) on the 'down' instance
> >before
> >trying the second instance. How do requests 'failover'?
> >Do you
> >have Netscape, Apache, or IIS with a wlproxy module? Or
> >do
> >you simply have a DNS that takes care of that?
> >
> >Mike
-
Hi All,
I am facing some serious application pool crashes on one of my customer's production-site SharePoint servers. The Application Error log in the Event Viewer says:
Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7afa2
Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7c8f9
Exception code: 0xc0000374
Fault offset: 0x00000000000c40f2
Faulting process id: 0x1414
Faulting application start time: 0x01ce5edada76109d
Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
Report Id: 5a69ec1e-cace-11e2-9be2-441ea13bf8be
At the same time, the SharePoint ULS log says:
1)
06/13/2013 03:44:29.53 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General 8e2s
Medium Unknown SPRequest error occurred. More information: 0x80070005 8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:35.03 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General
8e25 Medium Failed to look up string with key "FSAdmin_SiteSettings_UserContextManagement_ToolTip", keyfile Microsoft.Office.Server.Search.
8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:35.03 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General 8l3c
Medium Localized resource for token 'FSAdmin_SiteSettings_UserContextManagement_ToolTip' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web
Server Extensions\14\Template\Features\SearchExtensions\ExtendedSearchAdminLinks.xml". 8b343224-4aa6-490c-8a2a-ce06ac160773
2)
06/13/2013 03:44:29.01 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
Web Parts
emt4 High Error initializing Safe control - Assembly:Microsoft.Office.SharePoint.ClientExtensions, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c TypeName: Microsoft.Office.SharePoint.ClientExtensions.Publishing.TakeListOfflineRibbonControl
Error: Could not load type 'Microsoft.Office.SharePoint.ClientExtensions.Publishing.TakeListOfflineRibbonControl' from assembly 'Microsoft.Office.SharePoint.ClientExtensions, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.
8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:29.50 w3wp.exe (0x0808)
0x2DF0 SharePoint Foundation Logging Correlation Data
xmnv Medium Site=/ 8b343224-4aa6-490c-8a2a-ce06ac160773
3)
06/13/2013 03:43:59.67 w3wp.exe (0x263C) 0x24D8 SharePoint Foundation
Performance 9fx9
Medium Performance degradation: unfetched field [PublishingPageContent] caused extra roundtrip. at Microsoft.SharePoint.SPListItem.GetValue(SPField fld,
Int32 columnNumber, Boolean bRaw, Boolean bThrowException) at Microsoft.SharePoint.SPListItem.GetValue(String strName, Boolean bThrowException) at Microsoft.SharePoint.SPListItem.get_Item(String fieldName)
at Microsoft.SharePoint.WebControls.BaseFieldControl.get_ItemFieldValue() at Microsoft.SharePoint.Publishing.WebControls.RichHtmlField.RenderFieldForDisplay(HtmlTextWriter output) at Microsoft.SharePoint.WebControls.BaseFieldControl.Render(HtmlTextWriter
output) at Microsoft.SharePoint.Publishing.WebControls.BaseRichField.Render(HtmlTextWriter output) at Microsoft.SharePoint.Publishing.WebControls.RichHtmlField.R...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...ender(HtmlTextWriter output) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection
children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
at System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) at System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter
writer) at System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) at System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWrit...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...er writer, ICollection children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer,
ICollection children) at System.Web.UI.Page.Render(HtmlTextWriter writer) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context)
at Microsoft.SharePoint.Publishing.TemplateRedirectionPage.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionSte...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...p step, Boolean& completedSynchronously) at System.Web.HttpApplication.PipelineStepManager.ResumeSteps(Exception
error) at System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) at System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context)
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext,
IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr module...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...Data, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext,
IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67 w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance g4zd
High Performance degradation: note field [PublishingPageContent] was not in demoted fields. b8d0b8ca-8386-441f-8fce-d79fe72556e1
Does anybody have any idea what's going on? I need to fix this ASAP, as we are supposed to go live in the next few days.
Soumalya
Hello Soumalya,
Do you have an update on your issue? We are actually experiencing a similar issue at a new customer.
- Dennis | Netherlands | Blog |
Twitter -
Performance Degradation with EJBs
I have a small J2EE application that consists of a Session EJB calling 3 Entity EJBs that access the database. It is a simple Order capture application. The 3 Entity beans are called Orders, OrderItems and Inventory.
A transaction consists of inserting a record into the order table, inserting 5 records into the orderitems table and updating the quantity field in the inventory table for each order item in an order. With this transaction I observe performance degradation as the transactions per second decreases dramatically within 5 minutes of running.
When I modify the transaction to insert a single record into the orderitems table I do not observe performance degradation. The only difference in this transaction is we go through the for loop 1 time as opposed to 5 times. The code is exactly the same as in the previous case with 5 items per order.
Therefore I believe the problem is a performance degradation on Entity EJBs that
get invoked in a loop.
I am using OC4J 10.1.3.3.
I am using CMP (Container Managed Persistence) and CMT (Container Managed Transactions). The Entity EJBs were all generated by Oracle JDeveloper.
The EJB version being used is 2.1.
One thing to consider is downloading and using the Oracle AD4J utility to see if it can help you identify any possible bottlenecks, on the application server or in the database.
AD4J can be used to monitor/profile/trace applications in real time with no instrumentation required on the application. Just install it into the container and go. It can even trace a request from the app server down into the database and show you the situation down there (it needs a DB agent installed to do that).
Overview:
http://www.oracle.com/technology/products/oem/pdf/wp_productionappdiagnostics.pdf
Download:
http://www.oracle.com/technology/software/products/oem/htdocs/jade.html
Install/Config Guide:
http://download.oracle.com/docs/cd/B16240_01/doc/install.102/e11085/toc.htm
Usage Scenarios:
http://www.oracle.com/technology/products/oem/pdf/oraclead4j_usagescenarios.pdf -
Performance Degradation from SSL
I have read articles which are showing that performance could go down to
1/10 with certain servers (Reference
http://isglabs.rainbow.com/isglabs/shperformance/SHPerformance.html) when
using SSL.
I am currently using WebLogic 4.5. Can anybody tell me what kind of performance degradation I would see if I switched all my transactions from normal unsecure HTTP transactions to secure ones (SSL v3)?
Any help appreciated.
best regards, Andreas
Andreas,
Internal benchmarks (unofficial) have shown SSL to be 65-80% slower than
typical connections. So, anywhere between 3 to 5 times slower. This is the
same across all http servers.
In Denali (the next release), we're adding a performance pack enhancement
that includes native code impls of some of our crypto code. This should
show large speedups when it's released in March.
Thanks!
Michael Girdley
Sr. Product Manager
WebLogic Server
BEA Systems
ph. 415.364.4556
[email protected]
-
Performance Degradation - High Fetches and Parses
Hello,
My analysis on a particular job trace file drew my attention towards:
1) A high rate of parses instead of bind variable usage.
2) High fetches and poor number/ low number of rows being processed
Please let me know how this performance degradation can be minimised. Perhaps the high number of SQL*Net client wait events is due to multiple fetches and round trips with the client.
EXPLAIN PLAN FOR SELECT /*+ FIRST_ROWS (1) */ * FROM SAPNXP.INOB
WHERE MANDT = :A0
AND KLART = :A1
AND OBTAB = :A2
AND OBJEK LIKE :A3 AND ROWNUM <= :A4;
call count cpu elapsed disk query current rows
Parse 119 0.00 0.00 0 0 0 0
Execute 239 0.16 0.13 0 0 0 0
Fetch 239 2069.31 2127.88 0 13738804 0 0
total 597 2069.47 2128.01 0 13738804 0 0
PLAN_TABLE_OUTPUT
Plan hash value: 1235313998
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 268 | 1 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
|* 2 | TABLE ACCESS BY INDEX ROWID| INOB | 2 | 268 | 1 (0)| 00:00:01 |
|* 3 | INDEX SKIP SCAN | INOB~2 | 7514 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=TO_NUMBER(:A4))
2 - filter("OBJEK" LIKE :A3 AND "KLART"=:A1)
3 - access("MANDT"=:A0 AND "OBTAB"=:A2)
filter("OBTAB"=:A2)
18 rows selected.
SQL> SELECT INDEX_NAME,TABLE_NAME,COLUMN_NAME FROM DBA_IND_COLUMNS WHERE INDEX_OWNER='SAPNXP' AND INDEX_NAME='INOB~2';
INDEX_NAME TABLE_NAME COLUMN_NAME
INOB~2 INOB MANDT
INOB~2 INOB CLINT
INOB~2 INOB OBTAB
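The index listing above shows why the skip scan appears: CLINT sits between the two bound columns (MANDT and OBTAB), so only the MANDT prefix of the key is directly usable. A hedged sketch of the column-order effect using stdlib sqlite3 — SQLite is not Oracle and has no comparable skip scan, but the planner's preference for an index whose leading columns match the bound predicates is analogous:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inob (mandt TEXT, clint TEXT, obtab TEXT, objek TEXT)")

# Mirror of INOB~2: clint sits between the two bound columns, so only
# the mandt prefix of this key can be used to seek.
conn.execute("CREATE INDEX idx_gap ON inob (mandt, clint, obtab)")
# A (hypothetical) index whose leading columns match the bound columns.
conn.execute("CREATE INDEX idx_fit ON inob (mandt, obtab, objek)")

def plan(sql):
    """Flatten EXPLAIN QUERY PLAN output into one string."""
    return " | ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

free = plan("SELECT * FROM inob WHERE mandt = '100' AND obtab = 'MARA'")
forced = plan("SELECT * FROM inob INDEXED BY idx_gap "
              "WHERE mandt = '100' AND obtab = 'MARA'")
print(free)    # planner picks idx_fit and seeks on both equality columns
print(forced)  # idx_gap can only seek on mandt; obtab becomes a filter
```

The idx_fit definition is an illustration of the idea, not a recommendation to add that exact index on INOB.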
Is it possible to maximise the rows per fetch?
call      count     cpu  elapsed  disk    query  current     rows
Parse       163    0.03     0.00     0        0        0        0
Execute     163    0.01     0.03     0        0        0        0
Fetch    174899   55.26    59.14     0  1387649        0  4718932
total    175225   55.30    59.19     0  1387649        0  4718932
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 27
Rows Row Source Operation
28952 TABLE ACCESS BY INDEX ROWID EDIDC (cr=8505 pr=0 pw=0 time=202797 us)
28952 INDEX RANGE SCAN EDIDC~1 (cr=1457 pr=0 pw=0 time=29112 us)(object id 202995)
Elapsed times include waiting on following events:
Event waited on                            Times   Max. Wait  Total Waited
----------------------------------------  Waited  ----------  ------------
SQL*Net message to client                 174899        0.00          0.16
SQL*Net more data to client               155767        0.01          5.69
SQL*Net message from client               174899        0.11        208.21
latch: cache buffers chains                    2        0.00          0.00
latch free                                     4        0.00          0.00
********************************************************************************
user4566776 wrote:
My analysis on a particular job trace file drew my attention towards:
1) High rate of Parses instead of Bind variables usage.
But if you look at the text you are using bind variables.
The first query is executed 239 times - which matches the 239 fetches. You cut off some of the useful information from the tkprof output, but the figures show that you're executing more than once per parse call. The time is CPU time spent using a bad execution plan to find no data -- this looks like a bad choice of index, possibly a side effect of the first_rows(1) hint.
2) High fetches and poor number/ low number of rows being processed
The second query is doing a lot of fetches because in 163 executions it is fetching 4.7 million rows at roughly 25 rows per fetch. You might improve performance a little by increasing the array fetch size - but probably not by more than a factor of 2.
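The array-fetch point can be sketched in miniature. A hedged illustration with stdlib sqlite3 (an in-process library, so there are no real network round trips — in a client/server database like Oracle, each batch fetch would be a round trip): a larger arraysize moves the same rows in far fewer fetch calls.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])

def count_fetch_calls(arraysize):
    """Return how many fetchmany() calls it takes to drain 10,000 rows."""
    cur = conn.cursor()
    cur.arraysize = arraysize      # fetchmany() defaults to this batch size
    cur.execute("SELECT n FROM t")
    calls = 0
    while True:
        batch = cur.fetchmany()    # fetches cur.arraysize rows per call
        if not batch:
            break
        calls += 1
    return calls

print(count_fetch_calls(25))    # ~25 rows/fetch, like the trace: 400 calls
print(count_fetch_calls(500))   # bigger batches: 20 calls
```

As the reply notes, batching reduces round-trip overhead but not the underlying row-processing work, which is why the gain is bounded.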
You'll notice that even though you record 163 parse calls for the second statement the number of " Misses in library cache during parse" is zero - so the parse calls are pretty irrelevant, the cursor is being re-used.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
Recently we upgraded from PeopleSoft 7 to PeopleSoft 8.1.2.
PeopleSoft 8.1.2 is bundled with PeopleTools (web-based front end) for the first
time, and WebLogic 5.1 SP6.
There is performance degradation of WebLogic 5.1 SP6 (on Windows 2000) when
the number of users increases to 80. WebLogic becomes 100% CPU bound. Besides
that, WebLogic won't even shut down completely when we try to shut it down.
PeopleSoft customer support advised upgrading to WebLogic 5.1 SP9, but SP9 won't
support the 128-bit encryption that the PeopleSoft 8.1.2 application needs. PeopleSoft
8.1.3 will support 128-bit encryption in some 3 months. We have to get
along with the above-mentioned configuration (PeopleSoft 8.1.2 with WebLogic 5.1
SP9) in the meantime.
Has any of you had such an experience? Please let me know if there is a solution
or workaround.
Thanks in advance.
Mani
There shouldn't be any reason that 5.1 SP9 wouldn't support 128-bit
encryption. If that's the issue, you should post in the security
newsgroup or contact [email protected]
-- Rob