KNOWN ISSUES 3513544: Performance degradation...
I see that version 3.1 of the Microsoft Drivers for PHP for SQL Server was published on 12/12/2014 and is available on
http://www.microsoft.com/en-us/download/details.aspx?id=20098. Thank you Microsoft...
One thing that bothered me though is the "Known issue" described at the end of the release.txt file included in the SQLSRV31.EXE package:
KNOWN ISSUES
"3513544: Performance degradation when using Microsoft Drivers 3.1 for PHP for SQL Server with Windows 7/Windows Server 2008 R2 and previous versions. Clients connecting to supported versions of Microsoft SQL Server may notice decreased performance when
opening and closing connections in a Windows 7/Windows Server 2008 R2 environment. The recommended course of action is to upgrade to Windows 8/Windows Server 2012 or later."
Has anybody experienced that "decreased performance when opening and closing connections" problem on Windows Server 2008 R2? If you have, how bad is it?
And are there any solutions - other than the "recommended course of action" ("upgrade to Windows 8/Windows Server 2012 or later")? In a corporate environment, upgrading an OS isn't always a simple thing that you do in a few minutes. It can take weeks or months of planning and testing...
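If you want to quantify the overhead before committing to an OS upgrade, a simple timing harness helps. This sketch uses Python's built-in sqlite3 purely as a self-contained stand-in; for a real measurement you would replace the connect call with your actual driver (e.g. sqlsrv_connect in PHP, or pyodbc with your SQL Server connection string):

```python
import sqlite3
import time

def time_connect_cycles(n=100):
    """Return the average seconds per connection open/close cycle."""
    start = time.perf_counter()
    for _ in range(n):
        # Stand-in for the real driver call; swap in your SQL Server
        # connection here to measure the overhead described above.
        conn = sqlite3.connect(":memory:")
        conn.close()
    return (time.perf_counter() - start) / n

print(f"average open/close: {time_connect_cycles() * 1000:.3f} ms")
```

Running this on the affected server and on a Windows 8/2012 box would tell you how bad the per-connection penalty actually is for your workload.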
From what I can find by googling, very few articles mention this issue. The cynical reading is that Microsoft is simply nudging you to upgrade :P
I can't find a specific document on this; it may be a better question for a dedicated PHP forum.
Similar Messages
-
Are there any known issues with Adobe Edge Animate and Yosemite? I have been experiencing performance issues since upgrading the OS. An animation I was working on, which had been performing fine in the browser, suddenly stopped working, unrelated to any action I had taken at that point. I was also working in it today and the program stopped responding to keyboard shortcut commands.
I am having a whole slew of odd interface problems with a fresh 2014.1.1 install on a fresh MacBook Pro with the latest Yosemite. The program locks up, cursor selections don't show, things disappear. I also have a Mac mini, and the program runs fine on it. Is there possibly something related to the solid-state drive in the new Macs?
-
Migrate SQL Server 7.0 to Oracle 8i - Any known issues??
Hi,
I am in the process of migrating a SQL Server database to Oracle 8i for testing purposes. When I migrate the SQL Server database to Oracle 8i, am I doing any harm to the existing SQL Server database? Would the users be able to use the SQL Server database as usual? Are there any known issues in this regard?
Please reply.
Thanks.
Ramesh
The Migration Workbench copies the information it requires from the source database and stores it in the Migration Workbench Repository, which is separate from the source database. So, to answer your question, yes, the users can continue to use the SQL Server database. The data move may cause some system performance degradation. It may also be an idea to replicate the SQL Server database, in case any complication should arise.
Hope this helps
Dan -
Excel Pivot Table with Date Hierarchies - query performance degradation
For the sake of this explanation, I’m going to try and keep it simple. Slicing the data by additional dimensions only makes the issue worse. I’ll keep this description to one fact table and three dimensions. Also, I’m fairly new to SSAS Tabular; I’ve worked
with SSAS Multidimensional in the past.
We’ve got a fact table that keeps track of bill pay payments made over time. Currently, we only have about six months of data, with the fact row count at just under 900,000 rows. The grain is daily.
There is an Account dimension (approx. 460,000 rows), with details about the individual making a payment.
There is a Payment Category dimension (approx. 35,000 rows), which essentially groups various Payees into groups which we like to report on: Automobile Loan, Mortgage, Insurance, etc.
There is the requisite Date dimension (exactly 62,093 rows; more days than we need?), which allows visibility as to what is being paid when.
Using this DW model, I’ve created an SSAS BISM Tabular model, from which Excel 2010 is ultimately used to perform some analysis, using Pivot Tables. In the tabular model, for easier navigation (doing what I’ve always done in SSAS Multidimensional), I’ve created several Date Hierarchies: Year-Month, Year-Quarter-Month, etc.
There are currently only two measures defined in the Tabular model: one for the “Sum of PaymentAmount”; one for the “PaymentsProcessed”.
OK, in Excel 2010, using a Pivot Table, drag the “Sum of PaymentAmount” measure to the Values section, next to/under the PivotTable Field List. Not too exciting, just the grand total of all Payments, for all time.
Drag the “YearMonth” hierarchy (from the Date dimension) to the “Column Labels” section. After expanding the year hierarchy to see the months, the totals are now per month for the data we have, June through November 2013.
Drag the “PaymentCategory” (from the Payment Categories dimension) to the “Report Filter” section. Filter accordingly: We just want to see the monthly totals for “Automobile Loans”.
Now, some details. Drag the “AccountSK” (hiding the actual account numbers) to the “Row Labels” section. This shows all accounts that have made Automobile Loan payments over the last six months, showing the actual payment amounts.
So far, so good. Remember, I’m using a Date Hierarchy here, in this case “YearMonth”
Now, if any of the other attributes on the Account dimension table, say “CreditScore” or “LongName”, are subsequently dragged over to the “Row Labels” section, under the “AccountSK”, the results never come back; it either times out or I give up and press Escape!
If this exact scenario is repeated with the Date Hierarchy “YearMonth” removed from the “Column Labels” and replaced with the “Year” and “MonthName” attributes from the Date dimension (these fields not being in any sort of hierarchy), adding the additional Account attribute does not cause any substantial delay.
What I’m trying to find out is why is this happening? Is there anything I can do as a work around, other than what I’ve done by not using a Date Hierarchy? Is this a known issue with DAX and the query conversion to MDX? Something else?
I’ve done a SQL Profiler trace, but I’m not sure at this point what it all means. In the MDX query there is a CrossJoin involved. There are also numerous VertiPaq scans which seem to be going through each and every AccountSK in the Account dimension, not just the filtered ones, to get the additional attribute (about 3,600 accounts are “Automobile Loan” payments).
Any thoughts?
Thanks! Happy Holidays!
AAO
Thanks for your reply Marco. I've been reading your book, too, getting into Tabular.
I've set up the Excel Pivot Table using either the Year/MonthName levels or the YearMonth hierarchy, and then added the additional attribute for the CreditScore.
Incidentally, when using the YearMonth hierarchy and adding the CreditScore, all is well as long as the Year has not been "opened". Once it is, I suspect the same thing is going on.
Below is each of the individual MDX queries from SQL Profiler (formatted a bit for readability).
Thanks!
// MDX query using separate Year and MonthName levels, NO hierarchy.
SELECT
  NON EMPTY
    Hierarchize(
      DrilldownMember(
        CrossJoin(
          {[Date].[Year].[All],[Date].[Year].[Year].AllMembers},
          {([Date].[MonthName].[All])}
        ),
        [Date].[Year].[Year].AllMembers, [Date].[MonthName]
      )
    )
    DIMENSION PROPERTIES PARENT_UNIQUE_NAME,HIERARCHY_UNIQUE_NAME
  ON COLUMNS,
  NON EMPTY
    Hierarchize(
      DrilldownMember(
        CrossJoin(
          {[Accounts].[AccountSK].[All],[Accounts].[AccountSK].[AccountSK].AllMembers},
          {([Accounts].[CreditScore].[All])}
        ),
        [Accounts].[AccountSK].[AccountSK].AllMembers, [Accounts].[CreditScore]
      )
    )
    DIMENSION PROPERTIES PARENT_UNIQUE_NAME,HIERARCHY_UNIQUE_NAME
  ON ROWS
FROM [PscuPrototype]
WHERE ([PaymentCategories].[PaymentCategory].&[Automobile Loan],[Measures].[Sum of PaymentAmount])
CELL PROPERTIES VALUE, FORMAT_STRING, LANGUAGE, BACK_COLOR, FORE_COLOR, FONT_FLAGS
// MDX query using the YearMonth hierarchy (Year, MonthName).
SELECT
  NON EMPTY
    Hierarchize(
      DrilldownMember(
        {{DrilldownLevel({[Date].[YearMonth].[All]},,,INCLUDE_CALC_MEMBERS)}},
        {[Date].[YearMonth].[Year].&[2013]},,,INCLUDE_CALC_MEMBERS
      )
    )
    DIMENSION PROPERTIES PARENT_UNIQUE_NAME,HIERARCHY_UNIQUE_NAME
  ON COLUMNS,
  NON EMPTY
    Hierarchize(
      DrilldownMember(
        CrossJoin(
          {[Accounts].[AccountSK].[All],[Accounts].[AccountSK].[AccountSK].AllMembers},
          {([Accounts].[CreditScore].[All])}
        ),
        [Accounts].[AccountSK].[AccountSK].AllMembers, [Accounts].[CreditScore]
      )
    )
    DIMENSION PROPERTIES PARENT_UNIQUE_NAME,HIERARCHY_UNIQUE_NAME
  ON ROWS
FROM [PscuPrototype]
WHERE ([PaymentCategories].[PaymentCategory].&[Automobile Loan],[Measures].[Sum of PaymentAmount])
CELL PROPERTIES VALUE, FORMAT_STRING, LANGUAGE, BACK_COLOR, FORE_COLOR, FONT_FLAGS
AAO -
Performance degradation with addition of unicasting option
We have been using the multi-casting protocol for setting up the data grid between the application nodes with the vm arguments as
*-Dtangosol.coherence.clusteraddress=${Broadcast Address} -Dtangosol.coherence.clusterport=${Broadcast port}*
As a certain node in the application was expected to be in a different subnet and multi-casting was not feasible, we opted for well-known addressing, with the following additional VM arguments set up in the server nodes (all in the same subnet)
*-Dtangosol.coherence.machine=${server_name} -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.localport=${server_port}*
and the following in the remote client node, pointing to one of the server nodes:
*-Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.wka.port=${server_port}*
But this deteriorated the performance drastically, both in pushing data into the cache and in getting events via a map listener.
From the Coherence logging statements it doesn't seem that multi-casting is being used, at least within the server nodes (which are in the same subnet).
Is it feasible to have both uni-casting and multi-casting coexist? How can I verify whether it is set up already?
Is performance degradation with well-known addressing a limitation, and expected?
Hi Mahesh,
From your description it sounds as if you've configured each node with a WKA list including just itself. This would result in N clusters rather than 1. Your client would then be serviced by the resources of just a single cache server rather than an entire cluster. If this is the case, you will see that all nodes are identified as member 1. To set up WKA I would suggest using the override file rather than system properties, and placing perhaps 10% of your nodes on that list. Then use this exact same file for all nodes. If I've misinterpreted your configuration, please provide additional details.
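For what it's worth, a minimal sketch of the override file Mark describes; the addresses and ports below are placeholders, and every node would load this same file:

```xml
<!-- tangosol-coherence-override.xml: shared by ALL cluster nodes.
     List roughly 10% of the nodes here; addresses/ports are placeholders. -->
<coherence>
  <cluster-config>
    <unicast-listener>
      <well-known-addresses>
        <socket-address id="1">
          <address>10.0.0.11</address>
          <port>8088</port>
        </socket-address>
        <socket-address id="2">
          <address>10.0.0.12</address>
          <port>8088</port>
        </socket-address>
      </well-known-addresses>
    </unicast-listener>
  </cluster-config>
</coherence>
```

Because the same file is used everywhere, every node agrees on the same WKA list and joins one cluster instead of N.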
Thanks,
Mark
Oracle Coherence -
Hi All,
I am facing some serious application pool crashes on one of my customer's production SharePoint servers. The Application Error log in the Event Viewer says:
Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7afa2
Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7c8f9
Exception code: 0xc0000374
Fault offset: 0x00000000000c40f2
Faulting process id: 0x1414
Faulting application start time: 0x01ce5edada76109d
Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
Report Id: 5a69ec1e-cace-11e2-9be2-441ea13bf8be
At the same time the SharePoint ULS logs says -
1)
06/13/2013 03:44:29.53 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General 8e2s
Medium Unknown SPRequest error occurred. More information: 0x80070005 8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:35.03 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General
8e25 Medium Failed to look up string with key "FSAdmin_SiteSettings_UserContextManagement_ToolTip", keyfile Microsoft.Office.Server.Search.
8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:35.03 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General 8l3c
Medium Localized resource for token 'FSAdmin_SiteSettings_UserContextManagement_ToolTip' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web
Server Extensions\14\Template\Features\SearchExtensions\ExtendedSearchAdminLinks.xml". 8b343224-4aa6-490c-8a2a-ce06ac160773
2)
06/13/2013 03:44:29.01 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
Web Parts
emt4 High Error initializing Safe control - Assembly:Microsoft.Office.SharePoint.ClientExtensions, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c TypeName: Microsoft.Office.SharePoint.ClientExtensions.Publishing.TakeListOfflineRibbonControl
Error: Could not load type 'Microsoft.Office.SharePoint.ClientExtensions.Publishing.TakeListOfflineRibbonControl' from assembly 'Microsoft.Office.SharePoint.ClientExtensions, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.
8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:29.50 w3wp.exe (0x0808)
0x2DF0 SharePoint Foundation Logging Correlation Data
xmnv Medium Site=/ 8b343224-4aa6-490c-8a2a-ce06ac160773
3)
06/13/2013 03:43:59.67 w3wp.exe (0x263C) 0x24D8 SharePoint Foundation
Performance 9fx9
Medium Performance degradation: unfetched field [PublishingPageContent] caused extra roundtrip. at Microsoft.SharePoint.SPListItem.GetValue(SPField fld,
Int32 columnNumber, Boolean bRaw, Boolean bThrowException) at Microsoft.SharePoint.SPListItem.GetValue(String strName, Boolean bThrowException) at Microsoft.SharePoint.SPListItem.get_Item(String fieldName)
at Microsoft.SharePoint.WebControls.BaseFieldControl.get_ItemFieldValue() at Microsoft.SharePoint.Publishing.WebControls.RichHtmlField.RenderFieldForDisplay(HtmlTextWriter output) at Microsoft.SharePoint.WebControls.BaseFieldControl.Render(HtmlTextWriter
output) at Microsoft.SharePoint.Publishing.WebControls.BaseRichField.Render(HtmlTextWriter output) at Microsoft.SharePoint.Publishing.WebControls.RichHtmlField.R...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...ender(HtmlTextWriter output) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection
children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
at System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) at System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter
writer) at System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) at System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWrit...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...er writer, ICollection children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer,
ICollection children) at System.Web.UI.Page.Render(HtmlTextWriter writer) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context)
at Microsoft.SharePoint.Publishing.TemplateRedirectionPage.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionSte...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...p step, Boolean& completedSynchronously) at System.Web.HttpApplication.PipelineStepManager.ResumeSteps(Exception
error) at System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) at System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context)
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext,
IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr module...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...Data, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext,
IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67 w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance g4zd
High Performance degradation: note field [PublishingPageContent] was not in demoted fields. b8d0b8ca-8386-441f-8fce-d79fe72556e1
Does anybody have any idea what's going on? I need to fix this ASAP as we are supposed to go live in the next few days.
Soumalya
Hello Soumalya,
Do you have an update on your issue? We are actually experiencing a similar issue at a new customer.
- Dennis | Netherlands | Blog |
Twitter -
Recently we upgraded from Peoplesoft 7 to Peoplesoft 8.1.2.
Peoplesoft 8.1.2 is bundled with Peopletools (web-based front end) for the first time, and with Weblogic 5.1 sp6.
There is performance degradation of Weblogic 5.1 sp6 (on Windows 2000) when the number of users increases to 80. Weblogic becomes 100% CPU bound. Besides that, Weblogic won't even shut down completely when we try to shut it down.
Peoplesoft customer support advised upgrading to Weblogic 5.1 sp9, but sp9 won't support the 128-bit encryption which the Peoplesoft 8.1.2 application needs. Peoplesoft 8.1.3 will support 128-bit encryption in some 3 months. We have to get along with the above mentioned configuration (Peoplesoft 8.1.2 with Weblogic 5.1 sp9) in the meantime.
Any of you had such an experience ? Please let me know if there is a solution
or workaround.
Thanks in advance.
Mani
There shouldn't be any reason that 5.1 SP9 wouldn't support 128-bit encryption. If that's the issue, you should post in the security newsgroup or contact [email protected]
-- Rob
-
Performance degradation encountered while running BOE in clustered set up
Problem Statement:
We have a clustered BOE set up in Production with 2 CMS servers (named boe01 and boe02). The Mantenix application (a standard J2EE application in a clustered set up) points to these BOE services, hosted on virtual machines, to generate reports. As soon as the BOE services on both boe01 and boe02 are up and running, performance degradation is observed, i.e. response times vary from 7 sec to 30 sec.
The same set up works fine when the BOE services on boe02 are turned off, i.e. only boe01 is up and running. No drastic variation is noticed.
BOE Details: SAP BusinessObjects environment XI R2 SP3 running on Windows 2003 Servers (virtual machines).
Possible Problem Areas as per our analysis
1) Node 2 Virtual Machine Issue:
This, being part of the Production infrastructure, cannot currently be subjected to any problem assessment testing.
2) BOE Configuration Issue
A comparison report was run to check the build between BOE 01 and BOE 02 - the support team has confirmed no major installation differences, apart from a minor operating system setting difference. The question being: is there some configuration/setting that we are missing?
3) Possible BOE Cluster Issue:
Tests in a staging environment (with a similar clustered BOE setup) have proved inconclusive.
We require your help in
- Root cause Analysis for this problem.
- Any troubleshooting action henceforth.
Another observation from our Weblogic support engineers for the above set up, which may or may not be related to the problem, is mentioned below.
When the services on BOE_2 are shut down and we try to fetch a particular report from BOE_1 (which is running), the following WARNING/ERROR comes up:
07/09/2011 10:22:26 AM EST> <WARN> <com.crystaldecisions.celib.trace.d.if(Unknown Source)> - getUnmanagedService(): svc=BlockingReportSourceRepository,spec=aps<BOE_1> ,cluster:@BOE_OLTP, kind:cacheserver, name:<BOE_2>.cacheserver.cacheserver, queryString:null, m_replaceable:true,uri=osca:iiop://<BOE_1>;SI_SESSIONID=299466JqxiPSPUTef8huXO
com.crystaldecisions.thirdparty.org.omg.CORBA.TRANSIENT: attempt to establish connection failed: java.net.ConnectException: Connection timed out: connect minor code: 0x4f4f0001 completed: No
at com.crystaldecisions.thirdparty.com.ooc.OCI.IIOP.Connector_impl.connect(Connector_impl.java:150)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.createTransport(GIOPClient.java:233)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClientWorkersPool.next(GIOPClientWorkersPool.java:122)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.getWorker(GIOPClient.java:105)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.startDowncall(GIOPClient.java:409)
at com.crystaldecisions.thirdparty.com.ooc.OB.Downcall.preMarshalBase(Downcall.java:181)
at com.crystaldecisions.thirdparty.com.ooc.OB.Downcall.preMarshal(Downcall.java:298)
at com.crystaldecisions.thirdparty.com.ooc.OB.DowncallStub.preMarshal(DowncallStub.java:250)
at com.crystaldecisions.thirdparty.com.ooc.OB.DowncallStub.setupRequest(DowncallStub.java:530)
at com.crystaldecisions.thirdparty.com.ooc.CORBA.Delegate.request(Delegate.java:556)
at com.crystaldecisions.thirdparty.org.omg.CORBA.portable.ObjectImpl._request(ObjectImpl.java:118)
at com.crystaldecisions.enterprise.ocaframework.idl.ImplServ._OSCAFactoryStub.getServices(_OSCAFactoryStub.java:806)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.do(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getUnmanagedService(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.AbstractStubHelper.getService(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.e.do(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.try(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.p.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getManagedService(Unknown Source)
at com.crystaldecisions.sdk.occa.managedreports.ps.internal.a$a.getService(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.e.do(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.try(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.p.a(Unknown Source)
We see the above warning 2 or 3 times before the request is processed, and then we see the report. We have checked our configs for the cluster but didn't find anything concrete.
Is this normal behavior of the software, or can we optimize it?
Any assistance that you can provide would be great.
Rahul,
I have exactly the same problem running BO 3.1 SP3 in a 2 machine cluster on AIX. Exact same full install on both machines. When I take down one of the machines the performance is much better.
An example of the problem now is that when I run the command ./ccm.sh -display -username administrator -password xxx on either box while they are both up and running, I sometimes receive a timeout error (over 15 mins).
If I run SQL*Plus directly on the boxes to the CMS DB then the response is instant. Tnsping of course shows no problems.
When I bring down one of the machines and run the command ./ccm.sh -display again, it brings back results in less than a minute...
I am baffled as to the problem so was wondering if you found anything from your end
Cheers
Chris -
Performance degradation using Jolt ASP Connectivity for TUXEDO
We have a customer that uses Jolt ASP Connectivity for TUXEDO and is suffering
from a severe performance degradation over time.
Initial response times are fine (1 s.), but they tend to increase to 3 minutes
after some time (well, eh, a day or so).
Data:
- TUXEDO 7.1
- Jolt 1.2.1
- Relatively recent rolling patch installed (so there are probably none of the JSH performance issues and memory leaks fixed in earlier patches)
The ULOG shows that during the night the JSH instances notice a timeout on behalf
of the client connection and do a forced shutdown of the client:
040911.csu013.cs.kadaster.nl!JSH.234333.1.-2: JOLT_CAT:1185: "INFO: Userid:
[ZZ_Webpol], Clientid: [AP_WEBSRV3] timed out due to inactivity"
040911.csu013.cs.kadaster.nl!JSH.234333.1.-2: JOLT_CAT:1198: "WARN: Forced
shutdown of client; user name 'ZZ_Webpol'; client name 'AP_WEBSRV3'"
This happens every 10 minutes as per configuration of the JSL (-T flag).
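For reference, that inactivity timeout comes from the -T option on the JSL's CLOPT in the UBBCONFIG *SERVERS section. A minimal sketch; the host, port, and handler counts are placeholders, and only -T 10 reflects the 10-minute behaviour described above:

```
JSL   SRVGRP=JSLGRP  SRVID=1
      CLOPT="-A -- -n //apphost:8000 -m 2 -M 10 -x 20 -T 10"
```

With -T 10, any Jolt client idle for 10 minutes is disconnected by its JSH, which matches the forced shutdowns seen in the ULOG.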
The customer "solved" the problem for the time being by increasing the connection
pool size on the IIS web server.
However, they didn't find a "smoking gun" - no definite cause for the problem.
So, it is debatable whether their "solution" suffices.
It is my suspicion the problem might be located in the Jolt ASP classes running
on the IIS.
Maybe the connection pool somehow loses connections over time, causing subsequent users to queue before they get served (although an exception should be raised if no connections are available).
However, there's no documentation on the functioning of the connection pool for
Jolt ASP.
My questions:
1) What's the algorithm used for managing connections with Jolt ASP for TUXEDO?
2) If connections are terminated by a JSH, will a new connection be established
from the web server automatically? (this is especially interesting, because the
connection policy can be configured in the JSL CLOPT, but there's no info on how
this should be handled/configured by Jolt ASP connectivity for TUXEDO)
Regards,
Winfried Scheulderman
Hi,
For ASP connectivity I would suggest looking at the .Net client facility provided in Tuxedo 9.1 and later.
Regards,
Todd Little
Oracle Tuxedo Chief Architect -
Performance degradation of Weblogic 5.1 sp 6 when used with Peoplesoft 8
Recently we upgraded from Peoplesoft 7 to Peoplesoft 8.
There is performance degradation of Weblogic 5.1 sp6 (on Windows 2000) when the number of users increases to 2000. Besides that, Weblogic won't even shut down completely when we try to shut it down.
Weblogic customer support advised upgrading to sp8, but sp8 won't support the 128-bit encryption which Peoplesoft 8 needs.
Any of you had such an experience? Please let me know if there is a solution or workaround.
Thanks in advance.
Mani
There shouldn't be any reason that 5.1 SP9 wouldn't support 128-bit encryption. If that's the issue, you should post in the security newsgroup or contact [email protected]
-- Rob
-
Performance Degradation of new Servers
Hi All,
We are experiencing massive performance degradation in production when the system is under heavy use. It seems to be at its worst around month end. The worst affected transactions are the Cost / Profit Centre 'line item' reports using RCOPCA02 (and other similar programs).
We have the message server running on the database instance, as well as 2x app servers. The message server and one of the app servers are very similar builds:
4x AMD Opteron 875
16gb RAM (10gb Pagefile)
Both running Win Server 2003
We're using MS SQL 2000
The other message server is a bit weaker but has been around for some time (a few years) and hasn't caused any issues.
We have recently moved the message server from an old (much weaker) server to the new build, and since then we seem to have performance issues. Initially after the move we had issues with the number of Page Table Entries (down at about 6,000-8,000). Using the /3GB /USERVA=2900 switches we have this up to about 49,000.
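For reference, those switches go on the OS boot entry in boot.ini; a sketch, with the ARC path as a placeholder for whatever the server already has:

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB /USERVA=2900
```

/3GB shifts the user/kernel split to 3GB/1GB, and /USERVA=2900 gives a little of that back to the kernel so more Page Table Entries are available.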
If anyone has had a similar experience or could offer some assistance it would be much appreciated!!!
Cheers,
Kye
While investigating a different issue we've found that we have 3 servers that share the same set of disks on the SAN. Two of these were message servers (R/3 and BW); when both of these were running, the 'Average Disk Queue Length' was at times reaching 400! (It should be around 1-3.)
We've moved one of these instances back to DR which has relieved some of the pressure from the disks.
We have also added another index to the GLPCA table (which was causing most of the problems). -
Performance degradation after setting filesystemio_options=setall from none.
Hi All,
We have been facing performance degradation after setting filesystemio_options=setall (from none) on our two servers, as mentioned below:
Red Hat Enterprise Linux AS release 4 (Nahant Update 7), 2.6.9-55.ELhugemem (32-bit)
Red Hat Enterprise Linux Server release 5.2 (Tikanga), 2.6.18-92.1.10.el5 (64-bit)
We are seeing lots of disk I/O happening. We expected *filesystemio_options=setall* to improve performance, but it is degrading, and we are getting slowness complaints.
Please let me know whether we need to set something else along with this, like any optimizer parameter (e.g. optimizer_index_cost_adj, optimizer_index_caching).
Please help.
Hi Suraj,
<speculation>
You switched filesystemio_options to setall from none, so the most likely reason for performance degradation after switching to setall is the implementation of direct I/O. Direct I/O will skip the filesystem buffer cache and allow Oracle to read directly from disk into the database buffer cache. However, on a system where direct I/O is not implemented, which is what you had until you recently changed that parameter, it's likely that you had an undersized database buffer cache; but that was OK, because many (most) of the physical I/Os your database was doing were actually being serviced by the O/S filesystem buffer cache. But you introduced direct I/O, and wiped out the ability of the O/S to service any physical I/Os from the filesystem buffer cache. This means that every cache miss on the database buffer cache turns into a real, physical, spin-the-disk, move-the-drive-head I/O. And you are suffering the performance consequences.
</speculation>
Ok, end of speculation. Now, assuming that what I've outlined above is actually going on, what to do? Why is direct I/O lower-performing than buffered, non-direct I/O? Shouldn't its performance be superior?
Well, when you have an established system that's using buffered I/O and you switch to direct I/O, you almost always have to increase the size of the database buffer cache. The problem is that you took a huge chunk of memory away from the O/S that it was using to buffer your I/Os and avoid physical I/O. So now you need to make up for it by increasing the size of the database buffer cache. You can do this without buying more memory for the box, because the O/S no longer needs to use so much memory for filesystem buffers.
So, what to do? Is it worth switching? Well, on balance, it makes sense to use direct I/O, and give Oracle a larger database buffer cache, for the simple fact that (particularly on a server that's dedicated to being an Oracle database server), Oracle has far more sophisticated caching algorithms, and a better understanding of the various types of data being cached, and so should be able to make more efficient use of the memory, than the (relatively) brain dead caching algorithms of the kernel and filesystem mechanisms.
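The trade-off described above can be made concrete with a toy two-level cache model. All hit rates and latencies below are assumed, illustrative numbers, not measurements from any real system:

```python
# Toy model: average read latency with/without the OS filesystem cache.
# All hit rates and latencies are assumed, illustrative numbers only.

RAM_HIT = 0.0001      # ms, read served from memory (either cache)
DISK = 8.0            # ms, physical disk read

def avg_latency(db_hit, fs_hit, direct_io):
    """Average latency per logical read.
    db_hit: hit rate of the database buffer cache.
    fs_hit: hit rate of the OS filesystem cache on DB-cache misses
            (ignored when direct_io is True, since direct I/O bypasses it).
    """
    miss = 1.0 - db_hit
    if direct_io:
        return db_hit * RAM_HIT + miss * DISK
    return db_hit * RAM_HIT + miss * (fs_hit * RAM_HIT + (1 - fs_hit) * DISK)

# Undersized DB cache, but the OS cache absorbs most misses:
buffered = avg_latency(db_hit=0.90, fs_hit=0.80, direct_io=False)
# Same undersized DB cache after switching to setall (direct I/O):
direct_small = avg_latency(db_hit=0.90, fs_hit=0.0, direct_io=True)
# Direct I/O after growing the DB cache with the memory the OS no longer needs:
direct_big = avg_latency(db_hit=0.98, fs_hit=0.0, direct_io=True)

print(f"buffered:      {buffered:.3f} ms")
print(f"direct, small: {direct_small:.3f} ms")
print(f"direct, big:   {direct_big:.3f} ms")
```

With these assumed numbers, direct I/O with an unchanged, undersized buffer cache comes out about five times slower per read, and returns to parity once the buffer cache is enlarged to absorb the misses the OS cache used to catch.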
But, once again, it all comes down to this:
What problem are you trying to solve? Did you have any I/O related issues? Do you have any compelling reason to implement direct I/O? Rule #1 is "if it ain't broke, don't fix it." Did you just violate rule #1? :-)
Finally, since you're on Linux, you can use the 'free' command to see how much memory is on the box, how much is free, and how much is dedicated to filesystem cache buffers. This response is already pretty long, so I won't get into details; however, if you're not familiar with the command, the results can be misleading. Read the man page and make sure you understand the output before drawing any conclusions from it.
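On Linux, the numbers `free` reports come from /proc/meminfo; a minimal sketch for reading the relevant fields directly (Linux-only, for illustration):

```python
# Read total, free, and filesystem-cache memory from /proc/meminfo (Linux).
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key.strip()] = int(rest.split()[0])  # values are in kB
    return info

m = meminfo()
print(f"Total : {m['MemTotal'] // 1024} MB")
print(f"Free  : {m['MemFree'] // 1024} MB")
print(f"Cached: {m['Cached'] // 1024} MB  # page cache the kernel can reclaim")
```

The 'Cached' figure is the memory the kernel is using for the filesystem cache, which it can give back under pressure; that, rather than 'MemFree' alone, is what Mark's caveat about misleading output refers to.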
Hope that helps,
-Mark -
Performance degraded with VirtualListView control
Hi,
We are using the VirtualListView control to retrieve LDAP entries from SunOne Directory Server. We observed that with the VirtualListView control, search performance degraded considerably (almost 95% slower) compared to retrieving the same results without any paging mechanism.
We have configured the directory server for better performance and added indexes on the attributes we retrieve in the search operation, but performance is still very bad. Has anyone faced this issue before? Are there any settings we can use to improve performance?
We do not want to retrieve all records without paging, to avoid memory issues.
Thanks,
Kiran
"Do I need to make some setting adjustments?" Probably not.
"The performance degraded drastically." Could you elaborate a bit more, please? Could you give an example?
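One possible cause worth checking (an assumption about how the server handles VLV, not a diagnosis of this system): if the VLV request is not backed by a matching browsing (vlv) index, the server may have to sort the full candidate set again for every page fetched. A toy operation-count model of that effect:

```python
# Toy cost model: server-side sort work for paged retrieval of n entries,
# page_size entries per request. Numbers are illustrative, not measurements.
import math

def sort_cost(n):
    return n * math.log2(n)          # comparisons for one sort of n entries

def unindexed_vlv_cost(n, page_size):
    pages = math.ceil(n / page_size)
    return pages * sort_cost(n)      # full re-sort on every page request

def single_pass_cost(n):
    return sort_cost(n)              # one sort, stream all results

n, page = 100_000, 100
ratio = unindexed_vlv_cost(n, page) / single_pass_cost(n)
print(f"~{ratio:.0f}x more sort work")
```

If this is the cause, the usual fix in Sun/SunOne Directory Server is to define a vlv search/index pair whose base, scope, filter, and sort order match the query, so the server can serve pages from the precomputed browsing index instead of re-sorting.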
/r -
Hello all,
i am planning to apply the patchset 10.2.0.3 on my 2 node 10.2.0.2 RAC on Solaris 10.
In the readme.html, under the known issues its mentioned
11.8 Memory Access Mode not Supported in Oracle RAC
In an Oracle RAC set up, if database instances are not running on the same node as Oracle Enterprise Manager Management Service, then monitoring in memory access mode does not display performance chart data.
This issue is tracked with Oracle bug 5559618.
Now how do I know whether Oracle has a fix for bug 5559618?
I searched Metalink but could not find any patch or related note.
Is this bug fixed in this patchset, or will Oracle provide a one-off patch for it?
any idea ??
TIA,
JJ
Somebody replied to my query on Metalink telling me that applying the patchset on the client is the same as on the server, and to follow the installation manual instructions :) Oracle's patchset installation document could be clearer about the client installation (they did include explicit sections for RACs, etc. in the same doc).
Edited by: zaferaktan on Jul 2, 2009 5:31 PM -
SQL Performance Degrades Severely in WAN
The Oracle server is located in the central LAN and the client in a remote LAN. The two LANs are connected by a 10 Mbps wide-area link; inside each LAN the network runs at 100 Mbps. If the SQL commands are issued in the same LAN as the Oracle server, they are fast. However, if the same commands are issued from the remote LAN, they run almost 10 times slower. These SQL commands return only a few rows. My questions are: what is the reason for this performance degradation, and how can performance be improved for the remote client?
The server is Oracle 8.1.7 with OPS, and the SQL commands are issued from PB programs on the remote client.
Thanks very much.
Thank you very much.
I found another point which might lead to the performance problem. The server's Listener.ora is configured as following:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )
And the client's TNSNAMES.ORA is configured as following:
EMIS02.HZJYJ.COM.CN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.26.17.18)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = emis)
    )
  )
This shows the listener protocol set to IPC, while the client is configured for TCP. Could there be network latency from a protocol conversion between IPC and TCP?
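For comparison, a listener.ora commonly defines both an IPC endpoint (used for local connections such as external procedures) and a TCP endpoint; the excerpt above may simply be truncated after the IPC entry. A generic sketch (hostname and port are placeholders, not taken from this system):

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
      )
    )
  )
```

In general, a client connecting over TCP talks to the listener's TCP endpoint directly, so no IPC-to-TCP conversion should occur on the client's path.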
Thanks a lot.
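A roughly 10x slowdown for queries returning few rows is characteristic of round-trip latency rather than bandwidth: each SQL*Net round trip pays the WAN's latency in full. A back-of-the-envelope model (all numbers assumed for illustration):

```python
# Toy model: elapsed time per query = server time + round_trips * RTT.
# RTTs and round-trip counts are assumed, illustrative values.

def elapsed_ms(server_ms, round_trips, rtt_ms):
    return server_ms + round_trips * rtt_ms

server_ms   = 5.0     # time spent executing on the server
round_trips = 20      # parse + execute + fetches for a small result set
lan = elapsed_ms(server_ms, round_trips, rtt_ms=0.5)   # same-LAN RTT
wan = elapsed_ms(server_ms, round_trips, rtt_ms=25.0)  # remote-LAN RTT

print(f"LAN: {lan:.0f} ms, WAN: {wan:.0f} ms, ratio: {wan / lan:.1f}x")
```

If this is what's happening, reducing the number of round trips (larger client-side fetch/array size, fewer separate statements) will help far more over the WAN than any server-side tuning.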