Severe performance degradation vs Mavericks
Hi, I have a MacBook Pro (Retina, 13-inch, Late 2012) with a 2.5 GHz Intel Core i5, 8 GB 1600 MHz DDR3 RAM and Intel HD Graphics 4000 (1024 MB), and after upgrading to Yosemite I noticed a significant performance degradation: overall the system is not as smooth as it used to be, and even resizing a Safari window is "choppy".
I am disappointed.
I can't believe Apple have made this software available to the masses when it's clearly not ready. Since "upgrading" my Mac mini has been pretty useless. Programs take a substantially increased time to open compared to Mavericks, and then performance is poor at best. Opening files in Photoshop, for example, is painfully slow and I get the colour wheel almost all of the time - something which definitely didn't happen with Mavericks.
Aside from this, I've experienced a number of frustrating bugs today. When I open a new program on my left screen the right screen is changed to a fresh desktop. Why? When I turn my Mac on I get the login screen on the left monitor (as it always used to) but then the primary desktop is set to the right and I've been unable to keep the correct setting so far (it forgets that I've changed it following a reboot). The background of the top bar keeps disappearing so all of the icons, time, etc. just sit on top of the desktop background.
I hope Apple can release a fix, and quickly.
Similar Messages
-
This was discussed here, with no resolution
http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/bb073c59-b88f-471b-a209-d7b5d9e5aa28?prof=required
I have the same issue. This is a single-purpose physical mailbox server with 320 users and 72GB of RAM. That should be plenty. I've checked and there are no manual settings for the database cache. There are no other problems with
the server, nothing reported in the logs, except for the aforementioned error (see below).
The server is sluggish. A reboot will clear up the problem temporarily. The only processes using any significant amount of memory are store.exe (using 53 GB), regsvc (using 5 GB), and W3 and Monitoringhost.exe using 1 GB each. Does anyone have
any ideas on this?
Warning ESE Event ID 906.
Information Store (1497076) A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation. See help link for complete details of possible causes. Resident cache has fallen by 213107 buffers (or 11%) in the last 207168 seconds. Current Total Percent Resident: 79% (1574197 of 1969409 buffers)
Brian,
We had this event log entry as well which SCOM picked up on, and 10 seconds before it the Forefront Protection 2010 for Exchange updated all of its engines.
We are running Exchange 2010 SP2 RU3 with no file system antivirus (the boxes are restricted and have UAC turned on as mitigations). We are running the servers primarily as Hub Transport servers with 16GB of RAM, but they do have the mailbox role installed
for the sole purpose of serving as our public folder servers.
So we theorized the STORE process was just grabbing a ton of RAM, and occasionally it was told to dump the memory so the other processes could grab some - thus generating the alert. Up until last night we thought nothing of it, but ~25 seconds after the
cache flush to paging file, we got the following alert:
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:08:14 AM
Event ID: 17012
Task Category: Storage
Level: Error
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: The database could not allocate memory. Please close some applications to make sure you have enough memory for Exchange Server. The exception is Microsoft.Exchange.Isam.IsamOutOfMemoryException: Out of Memory (-1011)
at Microsoft.Exchange.Isam.JetInterop.CallW(Int32 errFn)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, String connect, MJET_GRBIT grbit, MJET_WRN& wrn)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, MJET_GRBIT grbit)
at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file)
at Microsoft.Exchange.Isam.Interop.MJetOpenDatabase(MJET_SESID sesid, String file)
at Microsoft.Exchange.Transport.Storage.DataConnection..ctor(MJET_INSTANCE instance, DataSource source).
Followed by:
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:08:15 AM
Event ID: 17106
Task Category: Storage
Level: Information
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: MSExchangeTransport has detected a critical storage error, updated the registry key (SOFTWARE\Microsoft\ExchangeServer\v14\Transport\QueueDatabase) and as a result, will attempt self-healing after process restart.
Log Name: Application
Source: MSExchangeTransport
Date: 8/2/2012 2:13:50 AM
Event ID: 17102
Task Category: Storage
Level: Warning
Keywords: Classic
User: N/A
Computer: HTS1.company.com
Description:
Transport Mail Database: MSExchangeTransport has detected a critical storage error and has taken an automated recovery action. This recovery action will not be repeated until the target folders are renamed or deleted. Directory path:E:\EXCHSRVR\TransportRoles\Data\Queue
is moved to directory path:E:\EXCHSRVR\TransportRoles\Data\Queue\Queue.old.
So it seems as if Forefront Protection 2010 for Exchange inadvertently triggered the cache flush, which didn't appear to happen quickly or thoroughly enough for the transport service to do what it needed to do, so it freaked out and performed the subsequent actions.
Do you have any ideas on how to prevent this 906 warning, which cascaded into a transport service outage?
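Incidentally, the percentages in the 906 event quoted above can be reproduced from the raw buffer counts - assuming (the event text doesn't say) that the drop percentage is measured against the previous resident count rather than the total. A quick check of the arithmetic:

```python
# Figures taken from the ESE 906 event quoted above.
dropped = 213107        # buffers written out to the paging file
resident_now = 1574197  # currently resident buffers
total = 1969409         # total buffers in the cache

resident_before = resident_now + dropped    # 1787304 buffers
drop_pct = dropped * 100 // resident_before # 11, as in the event
resident_pct = resident_now * 100 // total  # 79, as in the event

print(drop_pct, resident_pct)  # 11 79
```

So the cache lost roughly one buffer in nine over the reported ~2.4 days; a sudden drop right after an engine update is what makes the Forefront correlation plausible.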
Thanks! -
Severe performance degradation after update/clean install of 10.10 Yosemite
Hello, Apple Community.
I am submitting this general complaint on behalf of myself and a friend. We both have fairly recent MacBooks (15" Early 2014 with 2.5 GHz Core i7 and Late 2013 with 2.3 GHz Core i7, both with a GeForce 750M as the dedicated GPU).
I decided to upgrade my (flawlessly running) Mavericks while my friend performed the clean install.
Now we both experience serious performance degradation (in comparison to 10.9), visible especially in very low framerate of most of system UI animations (the worst are: showing/hiding mission control, switching spaces, displaying/hiding folder stacks - but the simple ones like magnifying the dock tend to stutter as well).
Beside that, there are many fades (like the one between the login screen and the desktop or the one between the main screen of system preferences overview and particular settings panes) that often get distorted - it seems to be the issue connected with retina scaling (I am using the 1920x1200 mode), since both login screen and settings seem to be showing some parts of UI improperly magnified or zoomed that suddenly "skip" to their adequate proportions.
Summing up - I am really unhappy with my initial 10.10 experience, especially since I used to consider my hardware rather high-end - the worst part being that Yosemite runs flawlessly on my friend's 2012 Mac mini.
Do any of you guys experience the same issues with 10.10 and your rMBPs? Maybe just those of us with dedicated graphics?
Hello Richard:
As a side note, there is no such thing as a +"clean install."+ Apple changed that terminology to +"erase and install"+ several years ago.
I am not sure if what you describe would work. Unless the new owner does not have broadband service, there is no reason why they cannot install your version of OS X 10.5 and then run software update when they get the system. I vaguely recall that when one does an erase and install the system prompts for passwords, etc at that time.
I think I would erase the HD and then let the new owner go from there.
Barry -
Performance degradation using Jolt ASP Connectivity for TUXEDO
We have a customer that uses Jolt ASP Connectivity for TUXEDO and is suffering
from a severe performance degradation over time.
Initial response times are fine (1 s.), but they tend to increase to 3 minutes
after some time (well, eh, a day or so).
Data:
- TUXEDO 7.1
- Jolt 1.2.1
- Relatively recent rolling patch installed (so there are probably no JSH performance
issues or memory leaks as fixed in earlier patches)
The ULOG shows that during the night the JSH instances notice a timeout on behalf
of the client connection and do a forced shutdown of the client:
040911.csu013.cs.kadaster.nl!JSH.234333.1.-2: JOLT_CAT:1185: "INFO: Userid:
[ZZ_Webpol], Clientid: [AP_WEBSRV3] timed out due to inactivity"
040911.csu013.cs.kadaster.nl!JSH.234333.1.-2: JOLT_CAT:1198: "WARN: Forced
shutdown of client; user name 'ZZ_Webpol'; client name 'AP_WEBSRV3'"
This happens every 10 minutes as per configuration of the JSL (-T flag).
The customer "solved" the problem for the time being by increasing the connection
pool size on the IIS web server.
However, they didn't find a "smoking gun" - no definite cause for the problem.
So, it is debatable whether their "solution" suffices.
It is my suspicion the problem might be located in the Jolt ASP classes running
on the IIS.
Maybe the connection pool somehow loses connections over time, causing subsequent
users having to queue before they get served (although an exception should be
raised if no connections are available).
However, there's no documentation on the functioning of the connection pool for
Jolt ASP.
My questions:
1) What's the algorithm used for managing connections with Jolt ASP for TUXEDO?
2) If connections are terminated by a JSH, will a new connection be established
from the web server automatically? (this is especially interesting, because the
connection policy can be configured in the JSL CLOPT, but there's no info on how
this should be handled/configured by Jolt ASP connectivity for TUXEDO)
Regards,
Winfried Scheulderman
Hi,
For ASP connectivity I would suggest looking at the .Net client facility provided in Tuxedo 9.1 and later.
Regards,
Todd Little
Oracle Tuxedo Chief Architect -
SQL Performance Degrades Severely in WAN
The Oracle server is located in the central LAN and the client is located in a remote LAN. The two LANs are connected via a 10 Mbps wide-area link; each LAN runs at 100 Mbps internally. If the SQL commands are issued in the same LAN as the Oracle server, the speed is fast. However, if the same commands are issued from the remote LAN, the speed degrades severely - almost 10 times slower - even though these SQL commands return only a few rows. My questions are: what is the reason for this performance degradation, and how can performance be improved for the remote client?
The server is Oracle817 and OPS, and the SQL commands are issued in PB programs in the remote client.
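A 10x slowdown for statements that return only a few rows usually points at per-round-trip latency rather than bandwidth: each parse/execute/fetch is a network round trip, and client applications often issue many of them. A rough model (all numbers hypothetical, for illustration only):

```python
def query_time(rows, arraysize, rtt_s, base_s=0.05):
    """Estimate elapsed time for a fetch dominated by network round trips."""
    round_trips = -(-rows // arraysize)  # ceiling division
    return base_s + round_trips * rtt_s

# 1000 rows fetched 10 at a time: 1 ms LAN round trip vs 20 ms WAN round trip.
lan = query_time(1000, 10, 0.001)  # ~0.15 s
wan = query_time(1000, 10, 0.020)  # ~2.05 s, on the order of the 10x reported

# Cutting round trips (a larger fetch array size) helps far more than bandwidth:
wan_batched = query_time(1000, 500, 0.020)  # ~0.09 s
```

If the application cannot be changed, reducing the number of statements or fetch round trips issued per screen is usually the only effective WAN-side fix.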
Thanks very much.
Thank you very much.
I found another point which might lead to the performance problem. The server's Listener.ora is configured as following:
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
And the client's TNSNAMES.ORA is configured as following:
EMIS02.HZJYJ.COM.CN =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.26.17.18)(PORT = 1521))
(CONNECT_DATA =
(SERVICE_NAME = emis)
It shows that the listener protocol is set as IPC. However, the client is set to the protocol of TCP. Would there be a network latency for the protocol conversion between IPC and TCP?
Thanks a lot. -
Performance degradation in pl/sql parsing
We are trying to use xml pl/sql parser and noticed performance degradation as we run multiple times. We zeroed into the following clause:
doc := xmlparser.getDocument(p);
The first time the procedure is run the elapsed time at sqlplus is something like .45sec, but as we run repeatedly in the same session the elapsed time keeps on increasing by .02 seconds. If we log out and start fresh, we start again from .45sec.
We noticed similar degradation with
p := xmlparser.newParser;
but we got around by making the 'p' variable as package variable, initializing it once and using the same for all invocations.
Any suggestions?
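For reference, if the 0.02 s per-call increment reported above holds, the per-call cost grows linearly and the total session time grows quadratically - the classic signature of a per-call resource (most likely the parsed documents) never being freed within the session. A quick model of the quoted figures (illustrative only):

```python
def elapsed(n, first=0.45, step=0.02):
    """Per-call elapsed time for call n, if each call leaks a fixed cost."""
    return first + (n - 1) * step

print(elapsed(1))    # 0.45 s, the fresh-session figure
print(elapsed(100))  # 2.43 s by the 100th call in the same session

# Total time for 100 calls grows quadratically with the call count:
total = sum(elapsed(n) for n in range(1, 101))  # 144.0 s
```

If the parser package provides a call to free each document after use, freeing it per invocation should flatten this curve, just as caching 'p' did for the parser handle.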
Thank you.
Can I enhance the PL/SQL code for better performance? Probably you can enhance it.
Or is it OK for it to take so long to process these many rows? It should take a few minutes, not several hours.
But please provide some more details, like your database version etc.
I suggest you TRACE the session that executes the PL/SQL code, with WAIT events, so you'll see where and on what the time is spent; you'll identify your problem statements very quickly (after you or your DBA have TKPROF'ed the trace file).
SQL> alter session set events '10046 trace name context forever, level 12';
SQL> execute your PL/SQL code here
SQL> exit
This will give you a .trc file in your udump directory on the server.
http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
Also this informative thread can give you more ideas:
HOW TO: Post a SQL statement tuning request - template posting
as well as doing a search on 10046 at AskTom, http://asktom.oracle.com will give you more examples.
and reading Oracle's Performance Tuning Guide: http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29 -
Performance degradation with Oracle EJB
Wonder if someone has done any benchmark on the performance degradation as the number of connection into EJB based application increases. We are experiencing rather severe degradation in one such implementation. Will appreciate if you could share your experience with regard to this.
Try to see whether there is any contention on the MTS configuration. Try increasing the number of MTS servers if the number of users is very high.
-
Performance degradation after upgrading to yosemite
I'm experiencing performance degradation on my MacBook Pro 15 after upgrading to Yosemite, including:
frequent spinning wheels,
instances of a dark screen,
overheating,
diminished battery life.
Is Yosemite the cause of these and other issues?
How to investigate DB performance degradation.
We use Oracle11gr2 on win2008R2.
I heard that DB performance degradation is happening and I would like to know how to improve DB performance.
How could I investigate the reason of DB performance degradation ?Hi,
the first thing to establish is the scope of the problem -- whether it's the entire database, a single query, or a group of queries which have something in common. You cannot rely on users for that.
Then depending on the scope of the problem, you pick a performance tool that matches the scope of the problem, and use it to obtain the diagnostic information.
If you can confirm that the issue is global (almost everything is slow, not just one query), then AWR and ASH may be helpful. For local (i.e. one or several queries) issues, you can use SQL trace, dbms_xplan and ASH. Keep in mind that ASH and AWR require a Diagnostic and Tuning Pack license.
Best regards,
Nikolay -
Performance degradation with addition of unicasting option
We have been using the multi-casting protocol for setting up the data grid between the application nodes with the vm arguments as
*-Dtangosol.coherence.clusteraddress=${Broadcast Address} -Dtangosol.coherence.clusterport=${Broadcast port}*
As a certain node in the application was expected to be in a different subnet and multicasting was not feasible, we opted for well-known addressing with the following additional VM arguments set up in the server nodes (all in the same subnet)
*-Dtangosol.coherence.machine=${server_name} -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.localport=${server_port}*
and the following in the remote client node, pointing to one of the server nodes, like this
*-Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.wka.port=${server_port}*
But this deteriorated performance drastically, both in pushing data into the cache and in getting events via the map listener.
From the Coherence logging statements it doesn't seem that multicasting is being used, at least within the server nodes (which are in the same subnet).
Is it feasible to have both unicasting and multicasting coexist? How can I verify whether this is already set up?
Is performance degradation with well-known addressing a limitation and expected?
Hi Mahesh,
From your description it sounds as if you've configured each node with a WKA list including only itself. This would result in N clusters rather than 1. Your client would then be serviced by the resources of just a single cache server rather than an entire cluster. If this is the case you will see that all nodes are identified as member 1. To set up WKA I would suggest using the override file rather than system properties, and placing perhaps 10% of your nodes on that list. Then use this exact same file for all nodes. If I've misinterpreted your configuration please provide additional details.
Thanks,
Mark
Oracle Coherence -
Performance degradation with -g compiler option
Hello
Our measurements of a simple program compiled with and without the -g option show a big performance difference.
Machine:
SunOS xxxxx 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-V250
Compiler:
CC: Sun C++ 5.9 SunOS_sparc Patch 124863-08 2008/10/16
#include <ctime>
#include <iostream>

int main(int argc, char **argv)
{
    for (int i = 0; i < 60000; i++) {
        int *mass = new int[60000];
        for (int j = 0; j < 10000; j++) {
            mass[j] = j;
        }
        delete [] mass;
    }
    return 0;
}

Compilation and execution with -g:
CC -g -o test_malloc_deb.x test_malloc.c
ptime test_malloc_deb.x
real 10.682
user 10.388
sys 0.023
Without -g:
CC -o test_malloc.x test_malloc.c
ptime test_malloc.x
real 2.446
user 2.378
sys 0.018
As you can see, the performance degradation from "-g" is about 4x.
Our product is compiled with the -g option and before shipment it is stripped using the 'strip' utility.
This gives us the ability to open customer core files using the non-stripped executable.
But our tests show that stripping does not restore the performance of an executable compiled without '-g'.
So we are losing performance by using this compilation method.
Is it expected behavior of compiler?
Is there any way to have -g option "on" and not lose performance?In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation - you're basically telling the compiler that you are not interested in performance. Adding -g to this requests that you want maximal debug. So the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
If you are using C++, then -g will in SS12 switch off front-end inlining, so again you'll get some performance hit. So use -g0 to get inlining and debug.
HTH,
Darryl. -
Performance degradation factor 1000 on failover???
Hi,
we are gaining first experience with WLS 5.1 EBF 8 clustering on
NT4 SP 6 workstation.
We have two servers in the cluster, both on same machine but with
different IP addresses (as it has to be)!
In general it seems to work: we have a test client connecting to
one of the servers and
uses a stateless test EJB which does nothing but writing into weblogic.log.
When this server fails, the other server resumes to work the client
requests, BUT VERY VERY VERY SLOW!!!
- I should repeat VERY a thousand times, because a normal client
request takes about 10-30 ms
and after failure/failover it takes 10-15 SECONDS!!!
As naive as I am I want to know: IS THIS NORMAL?
After the server is back, the performance is also back to normal,
but we were expecting a much smaller
performance degradation.
So I think we are doing something totally wrong!
Do we need some Network solution to make failover performance better?
Or is there a chance to look closer at deployment descriptors or
weblogic.system.executeThreadCount
or weblogic.system.percentSocketReaders settings?
Thanks in advance for any help!
Fleming
See http://www.weblogic.com/docs51/cluster/setup.html#680201
Basically, the rule of thumb is to set the number of execute threads ON
THE CLIENT to 2 times the number of servers in the cluster and the
percent socket readers to 50%. In your case with 8 WLS instances in the
cluster, add the following to the java command line used to start your
client:
-Dweblogic.system.executeThreadCount=16
-Dweblogic.system.percentSocketReaders=50
Hope this helps,
Robert
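The rule of thumb above can be written as a tiny helper (a sketch only; the property names are the ones quoted in the reply):

```python
def client_tuning(cluster_size):
    """WebLogic 5.1 client-side rule of thumb quoted above: two execute
    threads per server in the cluster, half of them as socket readers."""
    return {
        "weblogic.system.executeThreadCount": 2 * cluster_size,
        "weblogic.system.percentSocketReaders": 50,
    }

print(client_tuning(8))  # yields the -D values suggested above for 8 instances
```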
Fleming Frese wrote:
> Hi Mike,
>
> thanks for your reply.
>
> We do not have HTTP clients or Servlets, just EJBs and clients
> in the same LAN,
> and the failover should be handled by the replica-aware stubs.
> So we thought we need no Proxy solution for failover. Maybe we
> need a DNS to serve failover if this
> increases our performance?
>
> The timeout clue sounds reasonable, but I would expect that the
> stub times out once and than switches
> to the other server for subsequent requests. There should be a
> refresh (after 3 Minutes?) when the stub
> gets new information about the servers in the cluster, so he could
> check then if the server is back.
> This works perfectly with load balancing: If a new server joins
> the cluster, I automatically receives
> requests after a while.
>
> Fleming
>
> "Mike Reiche" <[email protected]> wrote:
> >
> >It sounds like every request is first timing out it's
> >connection
> >attempt (10 seconds, perhaps?) on the 'down' instance
> >before
> >trying the second instance. How do requests 'failover'?
> >Do you
> >have Netscape, Apache, or IIS with a wlproxy module? Or
> >do
> >you simply have a DNS that takes care of that?
> >
> >Mike
> >
> >
> >
-
Hi All,
I am facing serious application pool crashes on one of my customer's production SharePoint servers. The Application Error log in the Event Viewer says -
Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7afa2
Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7c8f9
Exception code: 0xc0000374
Fault offset: 0x00000000000c40f2
Faulting process id: 0x1414
Faulting application start time: 0x01ce5edada76109d
Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
Report Id: 5a69ec1e-cace-11e2-9be2-441ea13bf8be
At the same time the SharePoint ULS logs says -
1)
06/13/2013 03:44:29.53 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General 8e2s
Medium Unknown SPRequest error occurred. More information: 0x80070005 8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:35.03 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General
8e25 Medium Failed to look up string with key "FSAdmin_SiteSettings_UserContextManagement_ToolTip", keyfile Microsoft.Office.Server.Search.
8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:35.03 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General 8l3c
Medium Localized resource for token 'FSAdmin_SiteSettings_UserContextManagement_ToolTip' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web
Server Extensions\14\Template\Features\SearchExtensions\ExtendedSearchAdminLinks.xml". 8b343224-4aa6-490c-8a2a-ce06ac160773
2)
06/13/2013 03:44:29.01 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
Web Parts
emt4 High Error initializing Safe control - Assembly:Microsoft.Office.SharePoint.ClientExtensions, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c TypeName: Microsoft.Office.SharePoint.ClientExtensions.Publishing.TakeListOfflineRibbonControl
Error: Could not load type 'Microsoft.Office.SharePoint.ClientExtensions.Publishing.TakeListOfflineRibbonControl' from assembly 'Microsoft.Office.SharePoint.ClientExtensions, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.
8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:29.50 w3wp.exe (0x0808)
0x2DF0 SharePoint Foundation Logging Correlation Data
xmnv Medium Site=/ 8b343224-4aa6-490c-8a2a-ce06ac160773
3)
06/13/2013 03:43:59.67 w3wp.exe (0x263C) 0x24D8 SharePoint Foundation
Performance 9fx9
Medium Performance degradation: unfetched field [PublishingPageContent] caused extra roundtrip. at Microsoft.SharePoint.SPListItem.GetValue(SPField fld,
Int32 columnNumber, Boolean bRaw, Boolean bThrowException) at Microsoft.SharePoint.SPListItem.GetValue(String strName, Boolean bThrowException) at Microsoft.SharePoint.SPListItem.get_Item(String fieldName)
at Microsoft.SharePoint.WebControls.BaseFieldControl.get_ItemFieldValue() at Microsoft.SharePoint.Publishing.WebControls.RichHtmlField.RenderFieldForDisplay(HtmlTextWriter output) at Microsoft.SharePoint.WebControls.BaseFieldControl.Render(HtmlTextWriter
output) at Microsoft.SharePoint.Publishing.WebControls.BaseRichField.Render(HtmlTextWriter output) at Microsoft.SharePoint.Publishing.WebControls.RichHtmlField.R...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...ender(HtmlTextWriter output) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection
children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
at System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) at System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter
writer) at System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) at System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWrit...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...er writer, ICollection children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer,
ICollection children) at System.Web.UI.Page.Render(HtmlTextWriter writer) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context)
at Microsoft.SharePoint.Publishing.TemplateRedirectionPage.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionSte...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...p step, Boolean& completedSynchronously) at System.Web.HttpApplication.PipelineStepManager.ResumeSteps(Exception
error) at System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) at System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context)
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext,
IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr module...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...Data, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext,
IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67 w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance g4zd
High Performance degradation: note field [PublishingPageContent] was not in demoted fields. b8d0b8ca-8386-441f-8fce-d79fe72556e1
Anybody have any idea what's going on? I need to fix this ASAP as we are supposed to go live in the next few days.
Soumalya
Hello Soumalya,
Do you have an update on your issue? We are actually experiencing a similar issue at a new customer.
- Dennis | Netherlands | Blog |
Twitter -
Performance Degradation with EJBs
I have a small J2EE application that consists of a Session EJB calling 3 Entity EJBs that access the database. It is a simple Order capture application. The 3 Entity beans are called Orders, OrderItems and Inventory.
A transaction consists of inserting a record into the order table, inserting 5 records into the orderitems table and updating the quantity field in the inventory table for each order item in an order. With this transaction I observe performance degradation as the transactions per second decreases dramatically within 5 minutes of running.
When I modify the transaction to insert a single record into the orderitems table I do not observe performance degradation. The only difference in this transaction is we go through the for loop 1 time as opposed to 5 times. The code is exactly the same as in the previous case with 5 items per order.
Therefore I believe the problem is performance degradation in Entity EJBs that get invoked in a loop.
I am using OC4J 10.1.3.3.
I am using CMP (Container Managed Persistence) and CMT (Container Managed Transactions). The Entity EJBs were all generated by Oracle JDeveloper.
The EJB version being used is 2.1.
One thing to consider is downloading and using the Oracle AD4J utility to see if it can help you identify any possible bottlenecks on the application server or the database.
AD4J can be used to monitor/profile/trace applications in real time with no instrumentation required on the application. Just install it into the container and go. It can even trace a request from the app server down into the database and show you what the situation is down there (it needs a DB agent installed to do that).
Overview:
http://www.oracle.com/technology/products/oem/pdf/wp_productionappdiagnostics.pdf
Download:
http://www.oracle.com/technology/software/products/oem/htdocs/jade.html
Install/Config Guide:
http://download.oracle.com/docs/cd/B16240_01/doc/install.102/e11085/toc.htm
Usage Scenarios:
http://www.oracle.com/technology/products/oem/pdf/oraclead4j_usagescenarios.pdf -
Performance Degradation from SSL
I have read articles which are showing that performance could go down to
1/10 with certain servers (Reference
http://isglabs.rainbow.com/isglabs/shperformance/SHPerformance.html) when
using SSL.
I am currently using WebLogic 4.5. Can anybody tell me what kind of
performance degradation would I see if I switch all my transactions from
normal unsecure http transactions to secure ones (SSL V3)?
Any help appreciated.
best regards, Andreas
Andreas,
Internal benchmarks (unofficial) have shown SSL to be 65-80% slower than
typical connections. So, anywhere between 3 to 5 times slower. This is the
same across all http servers.
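The two figures quoted ("65-80% slower" and "3 to 5 times slower") are consistent if the percentage is read as lost throughput; the slowdown factor is then 1/(1 - loss):

```python
# Converting "X% slower" (read as lost throughput) into a slowdown factor.
for loss in (0.65, 0.80):
    factor = 1 / (1 - loss)
    print(round(factor, 1))  # 2.9, then 5.0 - i.e. "3 to 5 times slower"
```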
In Denali (the next release), we're adding a performance pack enhancement
that includes native code impls of some of our crypto code. This should
show large speedups when it's released in March.
Thanks!
Michael Girdley
Sr. Product Manager
WebLogic Server
BEA Systems
ph. 415.364.4556
[email protected]