Max Degree of Parallelism, Timeouts and Deadlocks

We have been trying to track down some timeout issues on our SQL Server, which hosts an AccPac accounting system, an internal intranet site, and some SharePoint content DBs.
We know that some of the queries are complex and take time, but the timeouts seem to happen randomly rather than always on the same pages/queries. The IIS and SQL timeouts are set to 60 and 120 seconds respectively.
Looking at some of the SQL Server settings, I noticed that MAXDOP was set to 1. Before doing extensive research on it and looking through the BOL, another DBA and I changed it to 0. 
Server is a Hyper-V VM with:
Processors: 4
NUMA nodes: 2
Sockets: 2
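For reference, this is the instance-wide knob we changed (a sketch mirroring what we did; the setting is advanced, so it has to be exposed first):

```sql
-- View and change the instance-wide MAXDOP setting.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';      -- show the current value
EXEC sp_configure 'max degree of parallelism', 0;   -- 0 = let SQL Server use all schedulers
RECONFIGURE;
```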
An interesting thing happened: our timeouts have all but disappeared, going from several a day to about one every few days. The new problem is that our deadlocks have gone through the roof, from one or two every few days to 8+ a day!
We have been changing our SELECT statements to include WITH (NOLOCK) so they do not compete with the UPDATE statements they usually fall victim to. The deadlocks and timeouts do not seem to involve any of the SharePoint content DBs. All of the deadlocks are in our intranet site, either when it communicates with the AccPac DB or on its own.
Any suggestions on where I should focus my energy when benchmarking and tuning the server?
Thank you,
  Scott

Thank you all for your replies.
The server had 30GB of RAM; we bumped it to 40GB at the same time we changed MAXDOP to 0.
It was set to 1 because otherwise MS won't support your SharePoint installation. The setup guide for SharePoint on SQL Server states that MAXDOP = 1 is a must for official support. It forces serial plans for everything because, to be honest, the SharePoint queries are extremely terrible.
I understand this, though I would guess that the SharePoint installer didn't actually set MAXDOP = 1 during the install? We have two SharePoint sites on the server: one has about 10 users and the other maybe 20, and neither is used very much, so I didn't think there would be much impact.
Though now the issue we are having is that our Deadlocks have gone through the roof.
You probably didn't get these before (though the contention was still there) because executions were forced serial, and you dodged many a bullet thanks to that artificial speed bump. Deadlocks are application-based, pure and simple.
The accounting system is one whose DB contents we typically do not alter directly; we are only peering into the database to present information to the user. We looked at READ_COMMITTED_SNAPSHOT, but since that is a database-level setting rather than one you apply to individual queries, we do not want to alter the accounting DB, as we could not know the potential ramifications.
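For anyone following along, this is the database-level switch we decided not to flip (shown only as a sketch; we have not run it against the accounting DB):

```sql
-- Database-level setting; WITH ROLLBACK IMMEDIATE kicks out open
-- transactions so the ALTER can take the exclusive lock it needs.
ALTER DATABASE PRODATA01
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
```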
A typical deadlock occurs when the accounting system is creating or modifying an order's master record so that no one else can modify it; instead of a row lock, however, it takes a table lock. That is out of our control. When we then SELECT against the same table from the intranet site, we deadlock unless we use WITH (NOLOCK). The data we read is not super critical; the worst case is that an uncommitted transaction from the accounting system is adding multiple rows to an order, and our SELECT misses a line item or two.
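As an illustration, the hint goes on each contended table reference (a simplified sketch of one of our queries; it accepts the dirty-read risk described above):

```sql
-- Dirty reads on the contended AccPac tables only; may miss or
-- double-count uncommitted line items.
SELECT SUM(t.price * t.qtyord) AS On_Order_Total
FROM PRODATA01..somast m WITH (NOLOCK)
INNER JOIN PRODATA01..sotran t WITH (NOLOCK)
        ON m.sono = t.sono
WHERE m.sostat NOT IN ('V','X');
```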
We have been changing our Select statements to include WITH (NOLOCK) so they do not compete with the UPDATE statements they usually fall victim to.
This really isn't going to get you very far, to be honest. Are you deadlocking on the same rows? It looks like an order-of-operations problem in either the queries or the logic behind them. Without deadlock information there is really nothing to diagnose, but it definitely sounds like the same resource being touched in multiple places when it probably doesn't need to be.
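If you aren't already collecting deadlock graphs, the built-in system_health Extended Events session has been capturing them since SQL Server 2008 (a sketch; the ring buffer only retains recent events):

```sql
-- Pull recent deadlock graphs out of the system_health session.
SELECT CAST(xet.target_data AS xml)
           .query('//RingBufferTarget/event[@name="xml_deadlock_report"]') AS deadlocks
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xes
     ON xes.address = xet.event_session_address
WHERE xes.name = 'system_health'
  AND xet.target_name = 'ring_buffer';
```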
This is one of the typical deadlocks we get: the intranet site is totaling orders while the accounting system is in the process of setting its internal record lock on an order.
<EVENT_INSTANCE>
<EventType>DEADLOCK_GRAPH</EventType>
<PostTime>2014-05-12T15:26:09.447</PostTime>
<SPID>23</SPID>
<TextData>
<deadlock-list>
<deadlock victim="process2f848b048">
<process-list>
<process id="process2f848b048" taskpriority="0" logused="0" waitresource="OBJECT: 12:644249400:0 " waittime="1295" ownerId="247639995" transactionname="SELECT" lasttranstarted="2014-05-12T15:26:08.150" XDES="0x69d1d3620" lockMode="IS" schedulerid="2" kpid="2856" status="suspended" spid="184" sbid="0" ecid="0" priority="0" trancount="0" lastbatchstarted="2014-05-12T15:26:08.150" lastbatchcompleted="2014-05-12T15:26:08.150" lastattention="2014-05-12T14:50:52.280" clientapp=".Net SqlClient Data Provider" hostname="VSVR-WWW-INT12" hostpid="15060" loginname="SFA" isolationlevel="read committed (2)" xactid="247639995" currentdb="7" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
<executionStack>
<frame procname="SFA.dbo.SAGE_SO_order_total_no_history_credit" line="20" stmtstart="1190" stmtend="5542" sqlhandle="0x030007004c642a7a7c17d000e9a100000100000000000000">
SELECT SUM(t.price * t.qtyord) AS On_Order_Total
FROM PRODATA01..somast m
INNER JOIN PRODATA01..sotran t ON m.sono = t.sono
INNER JOIN SFA..item i ON i.our_part_number COLLATE DATABASE_DEFAULT = t.item COLLATE DATABASE_DEFAULT
INNER JOIN SFA..supplier s ON s.supplier_key = i.supplier_key
INNER JOIN SFA..customer c ON c.abbreviation COLLATE DATABASE_DEFAULT = m.custno COLLATE DATABASE_DEFAULT
INNER JOIN SFA..sales_order_ownership soo ON soo.so_id_col = m.id_col
LEFT JOIN PRODATA01..potran p ON p.id_col = t.po_id_col
LEFT JOIN SFA..alloc_inv a ON a.sono COLLATE DATABASE_DEFAULT = t.sono COLLATE DATABASE_DEFAULT AND a.tranlineno = t.tranlineno
WHERE c.is_visible = 1 AND m.sostat NOT IN ('V','X') AND m.sotype IN ('C','','O')
AND t.sostat NOT IN ('V','X') AND t.sotype IN ('C','','O')
--AND t.rqdate BETWEEN @start_ordate AND @end_ordate
AND UPPER(LEFT(t.item,4)) &lt;&gt; 'SHIP' AND t.item NOT LIKE '[_]%'
AND ((SUBSTRING(m.ornum,2,1) = 'A' AND p.expdate &lt;= @end_ordate) OR (t.rqdate &lt;= @en </frame>
</executionStack>
<inputbuf>
Proc [Database Id = 7 Object Id = 2049598540] </inputbuf>
</process>
<process id="process51df0c2c8" taskpriority="0" logused="28364" waitresource="OBJECT: 12:1369823992:0 " waittime="1032" ownerId="247639856" transactionname="user_transaction" lasttranstarted="2014-05-12T15:26:07.940" XDES="0xf8b5b620" lockMode="X" schedulerid="1" kpid="7640" status="suspended" spid="292" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2014-05-12T15:26:08.410" lastbatchcompleted="2014-05-12T15:26:08.410" clientapp="Sage Pro ERP version 7.5" hostname="VSVR-DESKTOP" hostpid="15892" loginname="AISSCH" isolationlevel="read uncommitted (1)" xactid="247639856" currentdb="12" lockTimeout="4294967295" clientoption1="536870944" clientoption2="128056">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="22" sqlhandle="0x02000000304ac5350b86da8b9422b389413bf23015ac25d0">
UPDATE PRODATA01..SOTRAN WITH (TABLOCK HOLDLOCK) SET lckuser = lckuser WHERE id_col = @P1 </frame>
<frame procname="unknown" line="1" sqlhandle="0x000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@P1 float)UPDATE PRODATA01..SOTRAN WITH (TABLOCK HOLDLOCK) SET lckuser = lckuser WHERE id_col = @P1 </inputbuf>
</process>
</process-list>
<resource-list>
<objectlock lockPartition="0" objid="644249400" subresource="FULL" dbid="12" objectname="PRODATA01.dbo.somast" id="lock18b5e3900" mode="X" associatedObjectId="644249400">
<owner-list>
<owner id="process51df0c2c8" mode="X" />
</owner-list>
<waiter-list>
<waiter id="process2f848b048" mode="IS" requestType="wait" />
</waiter-list>
</objectlock>
<objectlock lockPartition="0" objid="1369823992" subresource="FULL" dbid="12" objectname="PRODATA01.dbo.sotran" id="lock2ce1c4680" mode="SIX" associatedObjectId="1369823992">
<owner-list>
<owner id="process2f848b048" mode="IS" />
</owner-list>
<waiter-list>
<waiter id="process51df0c2c8" mode="X" requestType="convert" />
</waiter-list>
</objectlock>
</resource-list>
</deadlock>
</deadlock-list>
</TextData>
<TransactionID />
<LoginName>sa</LoginName>
<StartTime>2014-05-12T15:26:09.447</StartTime>
<ServerName>VSVR-SQL</ServerName>
<LoginSid>AQ==</LoginSid>
<EventSequence>2335848</EventSequence>
<IsSystem>1</IsSystem>
<SessionLoginName />
</EVENT_INSTANCE>
I'd (in parallel) look at why parallel plans are being chosen. Not that parallel plans are a bad thing, but is execution cost so high that parallelism is chosen all of the time?
How can I determine the cost of different statements? The current cost threshold for parallelism is 5.
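One way to see which cached plans went parallel and what the optimizer estimated them to cost (a sketch against the plan-cache DMVs; the StatementSubTreeCost in the showplan XML is the figure compared against the cost threshold of 5):

```sql
-- Estimated subtree cost of cached plans that use parallelism.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS est_cost,
       qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS statement_start
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)  AS st
WHERE qp.query_plan.exist('//RelOp[@Parallel="1"]') = 1
ORDER BY est_cost DESC;
```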
The last place I would spend effort is with the dev team. Internal intranet queries should not take 60 to 120 seconds; that is just asking for the problems you already have. If some larger piece of functionality needs that long, do it on the back end as aggregation over a time period and serve the resulting static data, recomputing as needed. This is especially true if your deadlocks are happening on these resources (chances are, they are).
We are working on the long queries, trying to break them up. We considered back-end processing of the data so it is ready when users need it, but some of the slow pages are not accessed very often; if we gathered the data every 10 minutes in the background, the job would run far more times per day than the pages are actually requested.
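A compromise worth sketching is computing on first request and caching with a staleness check, rather than on a fixed schedule (the cache table and column names below are hypothetical):

```sql
-- Refresh the aggregate only when a request finds the cache stale
-- (older than 10 minutes); otherwise serve the cached rows directly.
IF NOT EXISTS (SELECT 1 FROM dbo.order_totals_cache
               WHERE refreshed_at > DATEADD(MINUTE, -10, GETDATE()))
BEGIN
    TRUNCATE TABLE dbo.order_totals_cache;
    INSERT INTO dbo.order_totals_cache (sono, on_order_total, refreshed_at)
    SELECT t.sono, SUM(t.price * t.qtyord), GETDATE()
    FROM PRODATA01..sotran t WITH (NOLOCK)
    GROUP BY t.sono;
END;

SELECT sono, on_order_total FROM dbo.order_totals_cache;
```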
Thank you all again!

Similar Messages

  • Degree of parallelism and number of files in tablespace

    Hi,
    I am trying to find out the relationship between the number of files in a tablespace and the degree of parallelism. Does the number of files in a tablespace affect the DOP of a parallel query? If there are more files in the tablespace, the I/O capacity of the system is higher, making the system more favorable for parallel query.
    However, when I looked at the formulas for calculating DOP, I didn't find any parameter that accounts for how many files are in a tablespace. Please give me the formula for calculating DOP in Oracle.
    regards
    Nick

    Maurice Muller wrote:
    Did you run this test on an Exadata Storage Server? How much IO throughput did you have per process?
    No. That one is from a RAC cluster of DL580 G5s. There were 7 in that query, but I had 8 total; one was down at the time due to hardware failure. The >500 DOP is from a RAC cluster of 16 DL580s (so DOP=512) to be exact.
    The amount of I/O you get per slave (or in aggregate) depends on the query execution plan. Simple things like select count(*) are very I/O intensive but not very CPU/memory intensive: the blocks are read, counted and then discarded. A group by or hash join will be more CPU intensive and less I/O intensive. Over the course of a given query execution the use of resources will generally alternate: heavy on I/O at first, then more CPU heavy for a hash join, etc.
    How much IO throughput do you recommend per CPU (core) for an Oracle DWH server?
    As a rule of thumb I size systems to have around 100MB/s of physical I/O throughput per CPU core. So for a four-socket quad-core DL580 G5 (16 cores), target 1600MB/s, which works out nicely to four 4Gbps FCP ports. Note that if you are using compression, you will be delivering a logical I/O rate higher than the physical one, by the compression ratio of the data.
    With Exadata, things are a little different. Since the CPUs that do the physical I/O are not at the DB layer, no wait I/O shows up in the metrics. The DB blocks are transferred to the host using the iDB protocol, not FCP like most external storage.
    For instance, here is a screen capture of an HP Oracle Database Machine running a single query that uses PQ:
    http://structureddata.org/files/life_without_waitio.png
    The problem is that most storage systems are, from my point of view, much too slow compared to the available CPU performance. In most cases people don't realise that the I/O throughput required for large parallel queries is not comparable with that required for an OLTP system with a >90% cache hit ratio.
    Very true. Also discussed here:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/hardware.htm
    Regards,
    Greg Rahn
    http://structureddata.org

  • SMQS in 2004s - recommended degree of parallel processing for BW

    Hi
    Regarding max connections for the outbound scheduler.
    With BI SPS 08 the standard was changed and a degree of parallelism of '2' was introduced. We are now running BI 7.0 SPS 12 and we still have a degree of parallelism of '2' for the BW system when we look at SMQS.
    What is the recommended number of max parallel connections, and what is your experience with changing it (problems, performance, etc.)?
    Thanks in advance, kind regards,
    Torben
    Message was edited by:
            Torben Pedersen

    Well, we have set it to 10 and it really seems to increase performance! More Oracle 060 dumps (deadlocks) have occurred since this change, but by choosing the right processing type in the InfoPackage and by deleting the index before a load, the Oracle 060 errors have decreased significantly.

  • Traceroute timeouts and lots of packet loss when a...

    I host various sites via the above, and since late last night I am having connection timeout issues on all of them (sites like bbc, bt etc. are fine). I contacted them and performed a traceroute to my default site southee.co.uk, which timed out. Below are the results:
    traceroute to southee.co.uk (37.61.236.12), 64 hops max, 52 byte packets
    1 bthomehub (192.168.1.254) 2.733 ms 2.414 ms 2.415 ms
    2 esr5.manchester5.broadband.bt.net (217.47.67.144) 72.412 ms 29.705 ms 131.735 ms
    3 217.47.67.13 (217.47.67.13) 31.390 ms 29.680 ms 103.936 ms
    4 213.1.69.226 (213.1.69.226) 41.172 ms 32.700 ms 129.323 ms
    5 31.55.165.103 (31.55.165.103) 30.791 ms 31.639 ms 130.306 ms
    6 213.120.162.69 (213.120.162.69) 31.248 ms 59.138 ms 30.657 ms
    7 31.55.165.109 (31.55.165.109) 32.159 ms 31.507 ms 31.513 ms
    8 acc2-10gige-9-2-0.mr.21cn-ipp.bt.net (109.159.250.228) 31.499 ms 31.325 ms
    acc2-10gige-0-2-0.mr.21cn-ipp.bt.net (109.159.250.194) 31.197 ms
    9 core2-te0-12-0-1.ealing.ukcore.bt.net (109.159.250.147) 41.744 ms
    core2-te0-13-0-0.ealing.ukcore.bt.net (109.159.250.139) 41.346 ms
    core2-te0-5-0-1.ealing.ukcore.bt.net (109.159.250.145) 41.744 ms
    10 peer1-xe3-3-1.telehouse.ukcore.bt.net (109.159.254.211) 39.527 ms
    peer1-xe10-0-0.telehouse.ukcore.bt.net (109.159.254.122) 38.791 ms 38.910 ms
    11 te2-3.sov-edge1.uk.timico.net (195.66.224.111) 54.032 ms 37.941 ms 38.642 ms
    12 78-25-201-30.static.dsl.as8607.net (78.25.201.30) 45.830 ms 46.413 ms 42.448 ms
    13 * * *
     They then performed a traceroute from the server and got the following, again with timeouts and packet loss. See below:
    1. 37.61.236.1 0.0% 10 0.5 0.7 0.4 2.9 0.8
    2. ae0-2061.ndc-core1.uk.timico 0.0% 10 0.3 0.3 0.2 0.5 0.1
    3. te2-3.sov-edge1.uk.timico.ne 0.0% 10 10.5 9.7 4.2 30.2 8.7
    4. linx1.ukcore.bt.net 0.0% 10 4.1 4.3 4.1 5.9 0.6
    5. host213-121-193-153.ukcore.b 0.0% 10 5.5 8.0 4.9 12.7 2.3
    6. acc2-10GigE-4-3-1.mr.21cn-ip 0.0% 10 11.4 11.4 11.4 11.6 0.1
    7. ??? 100.0 10 0.0 0.0 0.0 0.0 0.0
    8. 31.55.165.108 0.0% 10 12.1 12.1 11.8 12.4 0.2
    9. 213.120.162.68 0.0% 10 12.0 12.1 12.0 12.3 0.1
    10. ??? 100.0 10 0.0 0.0 0.0 0.0 0.0
     I've just spent a frustrating 15 minutes with BT Support chat, who just seemed to want to pass me on to the BT Business team, so I thought I'd post here for a more informed response.

    Hi Jane, thanks for the reply. I have now purchased an AEBS(n) to try to overcome this problem. The Apple site says it is compatible with all versions of AirPort card, so I thought it would solve the problem. My new problem is to be found here: http://discussions.apple.com/thread.jspa?threadID=1087292&tstart=0
    To answer your questions: the OS is 10.4.10 and I have run every updater I can find for all the Macs concerned. Hope this helps.

  • Creating a primary key with the parallel option and the tablespace option

    I know I can create a unique index with these options and then make the primary key with the "using index" clause. Is there any way to skip the create unique index step and just create the primary key directly?

    SQL> ALTER TABLE t
      2  ADD CONSTRAINT pk_t
      3  PRIMARY KEY (testcol)
      4  USING INDEX
      5  TABLESPACE example
      6  PARALLEL (DEGREE 2);
    PARALLEL (DEGREE 2)
    ERROR at line 6:
    ORA-03001: unimplemented feature

    You can name the tablespace, but you must do an ALTER INDEX thereafter.

  • Wlc webauth devices timeout and have to reauth

    We are in the process of setting up a new guest wireless network using our current WLC 4402 (code 7.0.98).  Guests will use web auth to gain access to our network. We will also use this same network for our company (personal) handheld devices to gain access ie allow iphones, ipads, androids to access Internet , etc. 
    The problem right now is that users fire open their mobile browser and authenticate to gain access. Then for whatever reason, maybe they step out of the office or reboot their phone, their device has deauthenticated and to gain access again they have to reauth via their browser.  I've seen similar posts on the boards, but haven't found an exact answer.
    Could someone tell me where/how to change the setting to allow users to remain authenticated once they authenticate? Thanks

    To elaborate a little more, there are primarily two causes for the scenario described.
    1) The Session Timeout has expired. This is 1800 seconds by default (30 minutes) and is located on the Advanced tab of the WLAN configuration. It is a per-WLAN setting, and your client will be removed at the session timeout, meaning they will have to reauthenticate. For PSK or EAP security this may be seamless in the background, but you see the impact with web auth since it requires manual input.
    2) The Idle Timeout has expired. This is 300 seconds by default (5 minutes) and is located on the Controller tab of the WLC GUI. It is a global setting, and your client will be removed from the WLC after it has received no traffic from the client. This can happen with smartphones, which may shut the radio off as soon as you stop using them, and will definitely trigger if you reboot a device and don't associate within 5 minutes.
    With that said, I believe there is a bug in 7.0.98.0 where the Idle Timeout is hit earlier than 5 minutes due to max retries, with smartphones going in and out of power save as the trigger.
    So... my suggestion is to figure out why your clients are being removed (debug client); if it says Idle-Timeout and you know the device isn't idle, upgrade to 7.0.116.0.

  • How to know the optimal Degree of Parallelism for my database?

    I have an important application on my database (Oracle 10.2.0) and the box has 4 CPUs. None of the tables are partitioned. Should I set the parallel degree myself?
    How to know the optimal Degree of Parallelism for my database?

    As far as I am concerned there is no optimal degree of parallelism at the database level. The optimal value varies by query, based on the plan in use, and it may change over time.
    It is not that difficult to overuse PQO and end up harming overall database performance. PQO is a brute-force methodology and should be applied carefully; otherwise you end up with inconsistent results.
    You can let Oracle manage it, or you can manage it at the statement level via hints. I do not like specifying degrees of parallelism at the object level; as I said, no two queries are exactly alike, and what is right for one query against a table may not be right for another query against the same table.
    If in doubt, set the system up to let Oracle manage it. If what you are really asking is how many PQO sessions to allocate, look at your Statspack or AWR reports and judge your system load. Monitor v$px_session and v$pq_slave to see how much activity these views show.
    IMHO -- Mark D Powell --
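    To see that activity concretely, a sketch against v$px_session (standard Oracle dynamic view; exact columns vary slightly by version):

    ```sql
    -- Active parallel-query slaves, grouped under their query coordinator,
    -- showing requested vs. granted degree of parallelism.
    SELECT qcsid, sid, server_group, server_set, degree, req_degree
    FROM   v$px_session
    ORDER  BY qcsid, server_group, server_set;
    ```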

  • Database connection timeouts and datasource errors

    Connections in the pool randomly die overnight. Stack traces show that for some reason, the evermind driver is being used even though the MySql connection pool is specified.
    Also, the evermind connection pool is saying connections aren't being closed, and the stack trace shows they're being allocated by entity beans that are definitely not left hanging around.
    Sometimes we get non-serializable errors when trying to retrieve the datasource (this is only after the other errors start). Some connections returned from the pool are still good, so the application limps along.
    EJBs and DAOs both use jdbc/SQLServerDSCore.
    Has anyone seen this problem?
    <data-sources>
         <data-source
              class="com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource"
              name="SQLServerDSCore"
              location="jdbc/SQLServerDSCore"
              xa-location="jdbc/xa/SQLServerXACore"
              ejb-location="jdbc/SQLServerDSCore"
              connection-driver="com.mysql.jdbc.Driver"
              min-connections="5"
              username="xxx"
              password="xxx"
              staleness-timeout="3600"
              alive-poll-query="SELECT 1 FROM medispan"
              url="jdbc:mysql://1.2.3.4:3306/dbo?autoReconnect=true&autoReconnectForPools=true&cachePrepStmts=true&is-connection-validation-required=true"
              inactivity-timeout="30"
         >
              <property name="autoReconnect" value="true"/>
              <property name="autoReconnectForPools" value="true"/>
              <property name="is-connection-validation-required" value="true"/>
              <property name="cachePrepStmts" value="true"/>
         </data-source>
    </data-sources>

    Rick,
    OC4J 9.0.4.0.0 - BTW, do you know of any patches?
    As far as I know, there are no patches for the 9.0.4 production version of OC4J stand-alone.
    I'm using container managed persistence.
    It was not clear to me, from your previous post, that you are using CMP entity beans.
    I found staleness-timeout and alive-poll-query somewhere on a website when trying to track this down. Here's four sources:
    Those sources refer to OrionServer -- and an older version, too, it seems.
    Like all other Oracle products that started out as somebody else's -- including, for example, JBuilder (which became "JDeveloper"), Apache Web Server (which became "Oracle HTTP Server") and TopLink -- their development paths diverge until, eventually, there is no similarity between them at all. Hence, the latest versions of OC4J and OrionServer are so different that you cannot be sure something that works for OrionServer will work for OC4J.
    I recall reading something, somewhere, sometime about configuring OC4J to use different databases (other than Oracle), but I really don't remember any details (since it was not relevant to me, because we only use Oracle database). In any case, it is possible to use a non-Oracle database with OC4J.
    Good Luck,
    Avi.

  • Re: timeouts and ExternalConnection

    Tim,
    Use a Timer class of Framework library to have an explicit control over
    the connection. If the end-of-stream marker is not received within a
    specified interval of time, handle the situation.
    ExternalConnection class does not have any timeout feature of its own,
    but it would be a good idea to have one in future.
    Braja.
    BRAJA KISHORE CHATTARAJ
    Consultant, Analysts International Corporation.
    Work : Sphinx Pharmaceuticals (A division of Eli Lilly & Co.)
    (919) 419-5798
    Home : 1801, Williamsburg Rd., #41H, Durham, NC 27707.
    (919) 403-7296
    E-mail : [email protected]
    Please respond to Tim Kannenberg <[email protected]>
    Subject: timeouts and ExternalConnection
    I am working on a Forte service that uses a socket listener (implemented
    using the ExternalConnection class) to handle incoming messages. The
    messages vary in length, so the code for processing each connection
    loops until it finds an end-of-stream marker. If some client is
    erroneously sending messages without the marker, it'll loop forever. I
    would like the connection to time out if it doesn't receive any valid
    messages within a fixed length of time. Is there some functionality
    I've overlooked in ExternalConnection that would handle this? If not,
    does anybody have an example of a good way to implement it?
    Thanks in advance,
    Tim
    Tim Kannenberg
    Strong Capital Management

    I've re-run the speed test. Here are the results. Download speed achieved during the test was 19915 Kbps; for your connection, the acceptable range of speeds is 12000-29036 Kbps. Additional information: the IP profile for your line is 29036 Kbps. Upload speed achieved during the test was 3819 Kbps. Additional information: the upstream rate IP profile on your line is 10000 Kbps. Although the speed has dropped markedly since Saturday, it's the dropouts, the VPN being unable to connect and the slow responses when trying to connect to websites (DNS issue?) that are bothering me. It's not just one website that has the problem; it's any of them.

  • Frequent timeouts and poor performance

    Hello,
    Starting this morning I'm experiencing a lot of timeouts and performance issues with my SQL Azure DB (located in West US), causing several Azure-hosted services to fail. I've checked everything I can think of and nothing shows a clear problem with our databases. I'm not seeing anything on the dashboard, but can't help wondering if there is an issue going on that hasn't been detected.
    Any suggestions on how to troubleshoot this issue?
    Thanks,
    -Fabio

    Hi Fabio,
    I work on the Cotega monitoring service for SQL Azure, and I can confirm that we saw a few customers whom we notified of performance issues with SQL Azure as well.
    Now that I write this, I wonder if it might be worth adding to the service a page that lets anyone see the historical performance of SQL Azure, based on aggregated data from all of the customer databases we currently monitor. Would that be useful for situations like this?

  • SQL Timeouts and Blocking Locks

    SQL Timeouts and Blocking Locks
    Just wanted to check in and see if anyone here has feedback on application settings, ColdFusion settings, JBOSS settings or other settings that could help to limit or remove SQL Timeouts and blocking locks on SID's.
    We're using MS SQL 2000 with JBOSS and IIS5.
    We've been seeing the following error in our logs that starts blocking locks in SQL:
    java.sql.SQLException: [newScale] [SQLServer JDBC Driver] [SQLServer] Lock request time out period exceeded.
    Once this happens, we're hosed until we remove the blocking SID in SQL.  These are the connections to the application.
    Any feedback would be great.  Thanks!

    Hi
    This is your exact solution:
    SELECT a.username, a.sid, a.serial#, b.id1, c.sql_text
    FROM v$session a, v$lock b, v$sqltext c
    WHERE b.id1 IN (SELECT DISTINCT e.id1
                    FROM v$session d, v$lock e
                    WHERE d.lockwait = e.kaddr)
      AND a.sid = b.sid
      AND c.hash_value = a.sql_hash_value
      AND b.request = 0;
    Thanks
    Sarju
    Oracle DBA
    Originally posted by I'm clueless:
    Can someone give me the SQL statement to show if there are any blocking database locks and, if so, which user is locking the database?
    Thanks in advance

  • Session timeout and custom sso

    Hi,
    Can anyone tell me how the session and idle timeout features in Apex work exactly?
    I built several applications in a workspace and do SSO by setting a common cookie name. In addition, I set values for session length and idle timeout, assuming that session length would be synchronized across all applications, but this doesn't seem to work. For instance, I set the idle timeout to 10 minutes in all applications; if I work for 15 minutes continuously in application A and then switch to application B (using the same session id!), the session is already expired in B.
    Is this behavior correct? And if so, how can I set up synchronization across all applications?
    Jens

    Anyone?

  • Is there a risk of setting a console connection timeout and what is the recommended setting?

    Is there a risk of setting a console connection timeout, and what is the recommended setting? Please suggest any best-practice documentation that can be referred to.

    Hi Henrik
    It depends on what you need and what your security policy says. For my lab gear I use 60 minutes, because I know who can access it. If you have gear out in an insecure space, set it to a minimum or disable the console: anybody who can physically access your gear can break in with a simple restart and a boot without the config.
    It really depends on how secure your space is and how much security you need, and the settings have to match the policy. What security do you gain from securing console login and logout if, on a restart, someone can simply break in by booting without the config and then loading it?
    HTH
    Patrick

  • What is the difference between Session timeout and Short Session timeout Under Excel Service Application -- session management?

    Under Excel Service Application --> session management; what is the difference between Session timeout and Short Session timeout?

    Any call made from the API will automatically be given the "Session Timeout" period, no matter what. Calls made from EWA (Excel Web Access) are initially assigned the "Short Session Timeout" period.
    Short Session Timeout and Session Timeout in Excel Services
    Short Session Timeout and Session Timeout in Excel Services - Part 2
    Sessions and session time-outs in Excel Services
    The links above are from an old version but still apply.
    Please remember to mark your question as answered and vote helpful if this solves your problem. Thanks -WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • BPEL - Handling invocation timeouts and Modifying Partner Link endpoints

    Hi,
    We've built the basic functionality that we need in our BPEL process but are facing 2 specific questions that we are a bit stuck with and would really appreciate some help on..
    1. Our BPEL process calls an external synchronous web service. We have a requirement that if this external web service is unable to respond to our BPEL process within a fixed timespan (say 1 minute), we need to treat this as a timeout and move on. Can anyone suggest what settings are required for this?
    2. The second query is with regards to a likely situation we will face after go-live. If the URL of the external service changes (let's say the service moves from one server to another), ideally we would want to be able to configure this URL change rather than have to modify the WSDL and rebuild the BPEL project in JDev with the new WSDL. Does the BPEL Admin Console provide any such feature? As far as I recall from a project a couple of years ago, WebSphere Process Server did provide such a feature, and I'm looking for something similar here but have not found it yet. I am not looking to use dynamic endpoints within our flow - just an admin feature that would allow me to modify the URL externally via the console.
    Would really appreciate any suggestions on these 2 points..
    Thanks and Regards,
    TB

    In response to your second query:
    a) You don't need to rebuild the BPEL project in JDev in order to change the WSDL file. If you update the WSDL file with new values for your endpoint, simply clear the WSDL cache and the process will pick up the new values in new instances created from then on.
    b) If you don't want to update the WSDL manually, you can write a piece of Java code to change the endpoint URLs for the deployed BPEL processes using the code given here.
    hth
