Large Number of Lync Conference Session Failures Every 3 Minutes

I'm wondering if anyone has ever seen this issue before.  It's new to me.
A client of mine is experiencing an extremely high number of conferencing session failures in Lync. Looking into the conferences in question, they all appear to belong to one specific user. Conference sessions for this user's Lync
meetings are being joined EVERY 3 minutes, 24 hours a day, with a response code of 486 and a diagnostic ID of 34007. Here is a screenshot of 10 minutes' worth of these attempts:
This user is not experiencing any problems joining meetings himself, and he is the only user listed as having unexpected conference failures.
My thought was to restart the AV Conferencing service or reboot the Front End servers, but I wanted to run this past some others first.
Any thoughts?
John K. Boslooper | Lync Technical Specialist | MCITP
Project Leadership Associates
2000 Town Center, Suite 1900, Southfield, MI 48075
Phone: 312.448.2269 | Fax: 435.304.3335
www.projectleadership.net

UPDATE:
It seems as though it was a hung process in the AVMCU agent on one of the Front End servers. A simple reboot over the weekend took care of the problem.
John K. Boslooper | Lync Technical Specialist | Project Leadership Associates Phone: 312.448.2269 | www.projectleadership.net

Similar Messages

  • RMAN backup failure that is generating a large number of files

    I would appreciate some pointers on this if possible, as I'm a bit of an RMAN novice.
    Our RMAN backup logs indicated a failure, and in the directory where it puts its files a large number of files appeared for the 18th, which was the date of the failure. Previous days' backups generated 5 files of moderate size. When it failed, it generated between 30 and 40 GB of files (it looks like one for each database file).
    The full backup runs early Monday morning, and the rest are incremental:
    I have placed the RMAN log, the script and the full directory listing here: http://www.tinshed.plus.com/rman/
    Thanks in advance - George
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055071_s244_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055096_s245_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734573008_s281_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055045_s243_s1
    -rw-r----- 1 oracle dba 524296192 Jan 18 00:03 database_f734055121_s246_s1
    -rw-r----- 1 oracle dba 1073750016 Jan 18 00:03 database_f734055020_s242_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054454_s233_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054519_s234_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054595_s235_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054660_s236_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054725_s237_s1
    -rw-r----- 1 oracle dba 4294975488 Jan 18 00:02 database_f734054790_s238_s1
    -rw-r----- 1 oracle dba 209723392 Jan 18 00:02 database_f734055136_s247_s1
    -rw-r----- 1 oracle dba 73408512 Jan 18 00:02 database_f734055143_s248_s1
    -rw-r----- 1 oracle dba 67117056 Jan 18 00:02 database_f734055146_s249_s1
    -rw-r----- 1 oracle dba 4194312192 Jan 18 00:02 database_f734054855_s239_s1
    -rw-r----- 1 oracle dba 2147491840 Jan 18 00:02 database_f734054975_s241_s1
    -rw-r----- 1 oracle dba 3221233664 Jan 18 00:02 database_f734054920_s240_s1
    drwxr-xr-x 2 oracle dba 4096 Jan 18 00:00 logs
    -rw-r----- 1 oracle dba 18710528 Jan 17 00:15 controlfile_c-1911789030-20110117-00
    -rw-r----- 1 oracle dba 1343488 Jan 17 00:15 database_f740621746_s624_s1
    -rw-r----- 1 oracle dba 2958848 Jan 17 00:15 database_f740621745_s623_s1
    -rw-r----- 1 oracle dba 6415990784 Jan 17 00:15 database_f740620829_s622_s1
    -rw-r----- 1 oracle dba 172391424 Jan 17 00:00 database_f740620814_s621_s1

    george3 wrote:
    OK, perhaps it's my understanding of RMAN that is at fault. From the logs:
    Starting recover at 18-JAN-11
    channel m1: starting incremental datafile backup set restore
    channel m1: specifying datafile copies to recover
    recovering datafile copy file number=00001
    name=/exlibris1/rmanbackup/database_f734055020_s242_s1
    recovering datafile copy file number=00002
    name=/exlibris1/rmanbackup/database_f734055045_s243_s1
    it seems to make backup copies of the datafiles every night, so the creation of these large files is normal?
    The results above indicate that you have a full backup (image copies of all datafiles, i.e. incremental level 0) and that an update/recover step (applying the incremental level 1 backup) took place. In other words, the incremental backup was applied to the datafile copy */exlibris1/rmanbackup/database_f734055045_s243_s1*, so the sizes are normal.
    Why is it making copies of the datafiles even on days of incrementals?
    Because after a level 1 backup is taken it needs to be applied, and every day one incremental backup is applied to the existing datafile copies.
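    A minimal sketch of that nightly "incrementally updated backup" pattern, assuming the rman client is on the PATH and connects as SYSDBA through OS authentication; the 'incr_update' tag is a placeholder, not the actual tag from this site's script:
    import subprocess

    RMAN_CMDS = """
    RUN {
      # Roll the existing datafile copies forward with the previous level 1 backup...
      RECOVER COPY OF DATABASE WITH TAG 'incr_update';
      # ...then take tonight's level 1 (the very first run creates the level 0 image copies).
      BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_update' DATABASE;
    }
    """

    # Feed the commands to RMAN on stdin; check=True raises if RMAN exits with an error.
    subprocess.run(["rman", "target", "/"], input=RMAN_CMDS, text=True, check=True)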

  • DBA Reports large number of inactive sessions with 11.1.1.1

    All,
    We have installed System 11.1.1.1 on some 32-bit Windows test machines running Windows Server 2003. Everything seems to be working fine, but recently the DBA has been reporting a large number of inactive sessions, throwing alarms that we are reaching the maximum allowed processes on the Oracle database server. We are running Oracle 10.2.0.4 on AIX.
    We also have some System 9.3.1 development servers that point at separate schemas in this environment, and we don't see the same high number of inactive connections there.
    Most of the inactive connections are coming from Shared Services and Workspace. Anyone else see this or have any ideas?
    Thanks for any responses.
    Keith
    Just a quick update. Originally I said this was only with 11.1.1.1, but we see the same high number of inactive sessions in 9.3. They show up in Oracle as JDBC_Connect_Client. Do Shared Services, Planning, Workspace, etc. use persistent connections, or do they just abandon sessions when the Windows service associated with an application is shut down? Any information or thoughts are appreciated.
    Edited by: Keith A on Oct 6, 2009 9:06 AM

    Hi,
    Not the answer you are looking for, but have you logged it with Oracle? You might not get many answers to this question on here.
    Cheers
    John
    http://john-goodwin.blogspot.com/
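    If it helps to confirm what the DBA is seeing, here is a minimal sketch of a session count by program, assuming the python-oracledb driver and SELECT access to V$SESSION; the connection details are placeholders:
    import oracledb

    conn = oracledb.connect(user="system", password="***", dsn="dbhost/ORCL")
    cur = conn.cursor()
    cur.execute("""
        SELECT username, machine, program, COUNT(*) AS sessions
        FROM   v$session
        WHERE  status = 'INACTIVE'
          AND  program LIKE 'JDBC%'   -- the sessions show up as JDBC clients, per the post
        GROUP  BY username, machine, program
        ORDER  BY sessions DESC""")
    for row in cur:
        print(row)
    conn.close()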

  • Large number of concurrent sessions

    What optimizations are used to provide a large number of concurrent sessions?

    Generally:
    1) Design so that clustering is easy - e.g. cache only read-only data, and
    cache it aggressively
    2) Keep replication requirements down - e.g. keep HTTP sessions small and
    turn off replication on stateful session beans
    3) Always load test with db shared = true so that you don't get a nasty
    surprise when clustering
    4) Don't hit the database more than necessary - generally the db scales the
    poorest
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    Clustering Weblogic? You're either using Coherence, or you should be!
    Download a Tangosol Coherence eval today at http://www.tangosol.com/
    "Priya Shinde" <[email protected]> wrote in message
    news:3c6fb3bd$[email protected]..
    >
    What optimizations are used to provide a large number of concurrent sessions?
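    As a generic illustration of points 1 and 4 above (the environment here is WebLogic/Java, but the idea is language-neutral), a small Python sketch of caching read-only reference data so the database is hit once per key rather than once per request; the lookup function and data are hypothetical:
    from functools import lru_cache

    def db_lookup(code):
        # Stand-in for the real (expensive) database round trip.
        print(f"hitting the database for {code}")
        return {"DK": "Denmark", "US": "United States"}.get(code, "unknown")

    @lru_cache(maxsize=None)   # read-only data: safe to cache aggressively
    def country_name(code):
        return db_lookup(code)

    country_name("DK")   # first call goes to the "database"
    country_name("DK")   # repeat calls are served from the cache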

  • Large number of http posts navigating between forms

    Hi,
    i'm not really a forms person (well not since v3/4 running character mode on a mainframe!), so please be patient if I'm not providing the most useful information.
    An Oracle Forms 10 system that I have fallen into supporting has what seems to me very poor performance when doing simple things like navigating between forms/tabs.
    Looking at the Java console (running Sun JRE 1.6.0_17) and turning on network tracing, I can see a much larger number of POST requests than I would expect (I looked here first as initially we had an issue with every request going via a proxy server, and I wondered if we had lost the bypass-proxy setting). Only a normal number of GETs, though.
    Moving to one particular detail form from a master record is generating over 300 POST requests - I've confirmed this by looking at the Apache logs on the server. This is the worst one I have found, but in general the application appears to be extremely 'chatty'.
    The only other system I work with which uses forms doesn't generate anything like these numbers of requests, which makes me think this isn't normal (As well as the fact this particular form is very slow to open)
    This is a third party application, so i don't have access to the source unfortunately.
    Is there anything we should look at in our setup, or is this likely to be an application coding issue? This app is a recent conversion from a Forms 6 client/server application (which itself ran OK; at least this part of the application did, with no delays navigating between screens).
    I'm happy to go back to the supplier, but it might help if I can point them into some specific directions, plus i'd like to know what's going on too!
    Regards,
    Carl

    Sounds odd. 300 requests is by far too much. As it was a C/S application: did they do anything else except the recompile on 10g? Moving from C/S to 10g webforms seems easy, as you just need to recompile, but in fact it isn't. There are many things which didn't matter in a C/S environment but have disastrous effects once the form is deployed over the web - the SYNCHRONIZE built-in, for example. In C/S, calls to SYNCHRONIZE weren't that bad, but when you are using web-deployed forms each call to SYNCHRONIZE is a round trip. The usage of timers is also best kept to a minimum in webforms, for example.
    A good starting point for the whole do's and dont's when moving forms to the web is the forms upgrade center:
    http://www.oracle.com/technetwork/developer-tools/forms/index-095046.html
    If you don't have the source code available, that's unfortunate; but if you want to know what's happening behind the scenes, there is the possibility to trace a forms session:
    http://download.oracle.com/docs/cd/B14099_19/web.1012/b14032/tracing002.htm#i1035515
    maybe this sheds some light upon what's going on.
    cheers

  • Fastest way to handle and store a large number of posts in a very short time?

    I need to handle a very large number of HTTP posts in a very short period of time. The handling will consist of nothing more than storing the data posted and returning a redirect. The data will be quite small (email, postal code). I don't know exactly how
    many posts, but somewhere between 50,000 and 500,000 over the course of a minute.
    My plan is to use the traffic manager to distribute the load across several data centers, and to have a website scaled to 10-instances per data center. For storage, I thought that Azure table storage would be the ideal way to handle this, but I'm not sure
    if the latency would prevent my app from handling this much data.
    Has anyone done anything similar to this and have a suggestion for storing the data? Perhaps buffering everything into memory would be ideal and then batching from there to table storage. I'm starting to load-test the direct to table-storage solution and
    am not encouraged.

    You are talking about a website with 500,000 posts per minute with redirection, so you are talking about designing a system that can handle at least 500,000 users. Assuming that not all users post within a one-minute timeframe, you are really designing a system that can handle millions of users at any one time.
    Event Hub architecture is completely different from the HTTP post architecture: every device/user/session writes directly to the hub. I was just wondering if that would actually work better for you in your situation.
    Frank
    The site has no session or page displaying. It literally will record a few form values posted from another site and issue a redirect back to that originating site. It is purely for data collection. I'll see if it is possible to write directly to the event hub/service
    bus system from a web page. If so, that might work well.
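    A minimal sketch of the buffer-then-batch idea from the question, assuming the azure-data-tables package and an already-created table; the connection string, table name and partitioning scheme are placeholders (a single partition key would itself become a throughput bottleneck at this scale):
    from azure.data.tables import TableClient

    client = TableClient.from_connection_string("<connection-string>", table_name="Signups")

    def flush(buffered, partition_key):
        # buffered: list of dicts like {"email": ..., "postal_code": ...} collected in memory.
        ops = []
        for i, item in enumerate(buffered):
            entity = {"PartitionKey": partition_key, "RowKey": f"{i}-{item['email']}", **item}
            ops.append(("upsert", entity))
        # A table transaction accepts at most 100 operations, all in the same partition,
        # so this costs one round trip per 100 posts instead of one per post.
        for start in range(0, len(ops), 100):
            client.submit_transaction(ops[start:start + 100])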

  • Large number of federated contacts causes problems with client

    Hi,
    We have added a large number of federated contacts to our Lync clients (using Vytru Contact Manager). When we do this the behavior of the client is poor: presence information is not updated for internal or external contacts, and messaging is sporadic - sometimes messages go through, sometimes they don't.
    Is there any limit / recommendation for the number of federated contacts? We have added around 230 contacts.
    Andrew.

    Solved the problem of syncing an iPad with a desktop containing a large photo library (20,000 photos). In summary, the solution was to rename the iPhoto Library, copy it to an external hard drive, and reduce the size of the library on the home (desktop) drive.
    In the home directory, rename "iPhoto Library" to "iPhoto Global" (or any other name you want), and copy it onto an external hard drive via a simple drag and drop.
    Then go back to the newly renamed iPhoto Global and rename it again, to iPhoto 2010. Now open iPhoto by holding down the Alt/Option key while launching it. This provides the option to choose the library iPhoto 2010. Open the library and eliminate every photo before 2010. This got us down to a few thousand photos and a much smaller library. Sync this smaller library with the iPad.
    Finally, I suggest downloading the program "iPhoto Library Manager" and using it to organize and access the two libraries you now have (which could be 2, 10 or however many different libraries you want to use). The iPhoto program doesn't naturally want you to have more than one library, but iPhoto Library Manager lets the user segregate libraries for different purposes (e.g. personal, work, 2008, 2009, etc.). Google iPhoto Library Manager, download it, and look for links to the video tutorials, which were very helpful.
    I did not experience any problems w/ iPhoto sequencing so can't address that concern.
    Good luck!

  • How to calculate the area of a large number of polygons in a single query

    Hi forum
    Is it possible to calculate the area of a large number of polygons in a single query using a combination of SDO_AGGR_UNION and SDO_AREA? So far, I have tried doing something similar to this:
    select sdo_geom.sdo_area((
        select sdo_aggr_union(sdoaggrtype(mg.geoloc, 0.005))
        from mapv_gravsted_00182 mg
        where mg.dblink = 521 or mg.dblink = 94 or mg.dblink = 38 <many many more....>
    ), 0.0005) calc_area
    from dual
    The table MAPV_GRAVSTED_00182 contains 2 fields - geoloc (SDO_GEOMETRY) and dblink (an id field) needed for querying specific polygons.
    As far as I can see, I need to first somehow get a single SDO_GEOMETRY object and use this as input for the SDO_AREA function. But I'm not 100% sure that I'm doing this the right way. This query is very inefficient, and sometimes fails with strange errors like "No more data to read from socket" when executed from SQL Developer. I even tried the latest JDBC driver from Oracle without much difference.
    Would a better approach be to write some kind of stored procedure that adds up the areas by calling SDO_AREA on each single geometry object - or what is the best approach?
    Any advice would be appreciated.
    Thanks in advance,
    Jacob
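    A minimal sketch of that "add up the single areas" idea, assuming the python-oracledb driver; the connection details are placeholders and the id list is shortened. Summing SDO_AREA per geometry avoids building one huge aggregate geometry with SDO_AGGR_UNION just to measure it (the totals only differ if the polygons overlap):
    import oracledb

    ids = [521, 94, 38]   # the full dblink list goes here

    conn = oracledb.connect(user="scott", password="***", dsn="dbhost/ORCL")
    cur = conn.cursor()
    binds = ",".join(f":{i + 1}" for i in range(len(ids)))
    cur.execute(
        f"""SELECT SUM(sdo_geom.sdo_area(mg.geoloc, 0.005)) AS calc_area
            FROM   mapv_gravsted_00182 mg
            WHERE  mg.dblink IN ({binds})""",
        ids,
    )
    print(cur.fetchone()[0])
    conn.close()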

    Hi
    I am now trying to update all my spatial tables with SRIDs. To do this, I try to drop the spatial index first and recreate it after the update. But for a lot of tables I can't drop the spatial index. Whenever I try to DROP INDEX <spatial index name>, I get the error below - anyone know what this means?
    Thanks,
    Jacob
    Error starting at line 2 in command:
    drop index BSSYS.STIER_00182_SX
    Error report:
    SQL Error: ORA-29856: error occurred in the execution of ODCIINDEXDROP routine
    ORA-13249: Error in Spatial index: cannot drop sequence BSSYS.MDRS_1424B$
    ORA-13249: Stmt-Execute Failure: DROP SEQUENCE BSSYS.MDRS_1424B$
    ORA-29400: data cartridge error
    ORA-02289: sequence does not exist
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 27
    29856. 00000 - "error occurred in the execution of ODCIINDEXDROP routine"
    *Cause:    Failed to successfully execute the ODCIIndexDrop routine.
    *Action:   Check to see if the routine has been coded correctly.
    Edit - just found the answer for this in MetaLink note 241003.1. Apparently there is some internal problem when dropping spatial indexes: some objects get dropped that shouldn't be. The solution is to manually create the sequence it complains it can't drop; then the drop works. Weird error.

  • Best way to delete large number of records but not interfere with tlog backups on a schedule

    I've inherited a system with multiple databases, and there are database and tlog backups that run on schedules. There is a list of tables that need a lot of records purged from them. What would be a good approach for deleting the old records?
    I've been digging through old posts and reading best practices, but I'm still not sure of the best way to attack it.
    Approach #1
    A one-time delete that did everything.  Delete all the old records, in batches of say 50,000 at a time.
    After each run through all the tables for that DB, execute a tlog backup.
    Approach #2
    Create a job that does a similar process to the above, except don't loop - only do the batch once. Have the job scheduled to start, say, on the half hour, assuming the tlog backups run every hour.
    Note:
    Some of these (well, most) are going to have relations on them.

    Hi shiftbit,
    According to your description, I have changed the type of this question to a discussion; that way more experts will focus on this issue and assist you. When deleting a large number of records from tables, you can use bulk (batched) deletions so that the transaction log does not grow and run out of disk space. If you can take the table offline for maintenance, a complete reorganization is always best, because it does the delete and places the table back into a pristine state.
    For more information about deleting a large number of records without affecting the transaction log, see:
    http://www.virtualobjectives.com.au/sqlserver/deleting_records_from_a_large_table.htm
    Hope it can help.
    Regards,
    Sofiya Li
    Sofiya Li
    TechNet Community Support
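    A minimal sketch of the batched-delete approach discussed above, assuming the pyodbc driver; the server, table, column and cutoff date are hypothetical. Each batch is committed separately so the scheduled tlog backups can truncate the log between batches instead of one giant transaction holding it all (and remember to delete from child tables first where foreign keys exist):
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
    )
    cur = conn.cursor()

    while True:
        # Small batches keep each transaction (and the active portion of the log) small.
        cur.execute("DELETE TOP (50000) FROM dbo.AuditHistory WHERE CreatedDate < ?", "2014-01-01")
        deleted = cur.rowcount
        conn.commit()
        if deleted == 0:
            break

    conn.close()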

  • Large number of entries in Queue BW0010EC_PCA_1

    Dear BW experts,
    Our BW system (2004s) is extracting data from R3 700. I am a Basis guy and I am observing a large number of entries in SMQ1 in the R3 system under queue BW0010EC_PCA_1. I observe a similar number of entries in RSA7 for 0EC_PCA_1.
    The number of entries for this queue in SMQ1 every day is 50,000+. The extraction job in BW runs every day at 5:00 AM, but it only clears data that is two days old (for example, on 10.09.2010 it extracts the data of 08.09.2010).
    My questions
    1. Is it OK that such a large number of entries are lying in the queue in SMQ1 and are extracted only once a day by a batch job? Then there is no point in the scheduler pushing these entries.
    2. Any idea why the extraction job only fetches data from two days before? Is some setting missing somewhere?
    Many thanks in advance for your valuable comments

    Hi,
    The entries lying in RSA7 and SMQ1 are one and the same. In SMQ1, the BW0010EC_PCA_1 entry means that this data is waiting to be sent across to your BW001 client system, whereas in RSA7 the same data is displayed as 0EC_PCA_1.
    1. Is it OK that such a large number of entries are lying in the queue in SMQ1 and are extracted only once a day by a batch job? Then there is no point in the scheduler pushing these entries.
    From the data that is lying in your R/3 system in SMQ1, I can see that this DataSource uses the Direct Delta update mode. SAP recommends that if the number of postings for a particular application is greater than 100,000 per day, you should use Queued Delta. Since in your system it is only in the thousands, the BI guys would have kept it as Direct Delta. So these entries lying in SMQ1 are not a problem at all. As for the scheduler from BI, it will pick up these entries every morning to clear the queue of the previous data.
    2. Any idea why the extraction job only fetches data from two days before? Is some setting missing somewhere?
    I don't think that it is only fetching the data from two days before. The delta concept works in such a manner that once you have pulled the delta load from RSA7, this data will still be lying there under the Repeat Delta section until the next delta load has finished successfully.
    Since in your system data is pulled only once a day, even though today's data load has pulled yesterday's data, it will still be lying in the system until tomorrow's delta load from RSA7 is successful.
    Hope this was helpful.

  • Problem fetch large number of records

    Hi
    I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100,000 records and a query fetches 10,000 records from it. I use the secondary database as an index and iterate over it until I have fetched all of the information that matches my condition, but the performance of this loop is terrible.
    I know that when I use DB_MULTIPLE it fetches all of the information at once and performance improves, but
    I read that I cannot use this flag when I use a secondary database as an index.
    Please tell me the flag or approach that fetches all of the information together, so that I can handle this data in my language.
    thanks alot
    regards
    saeed

    Hi Saeed,
    Could you post here your source code, that is compiled and ready to be executed, so we can take a look at the loop section ?
    You won't be able to do a bulk fetch, that is, retrieval with DB_MULTIPLE, given the fact that the records in the primary are unordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation would be to position a cursor in the secondary on the first record with the secondary key 'master1', retrieve all the duplicate data (primary keys into the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
    Though, there may be another option that should be taken into consideration, if you are willing to handle more work in your source code, that is, having a database that acts as a secondary, in which you'll update the records manually, with regard to the modifications performed in the primary db, without ever associating it with the primary database. This "secondary" would have <master> as key, and <std_id>, <name> (and other fields if you want to) as data. Note that for every modification that your perform on the std_info database you'll have to perform the corresponding modification on this database as well. You'll then be able to do the DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
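    A minimal sketch of that cursor-on-the-secondary approach, assuming the Python berkeleydb (formerly bsddb3) binding; the file names, record layout and key-extractor function are hypothetical, and the databases are opened here without an environment to keep the example short:
    from berkeleydb import db   # "from bsddb3 import db" on older installs

    primary = db.DB()
    primary.open("std_info.db", None, db.DB_BTREE, db.DB_CREATE)

    secondary = db.DB()
    secondary.set_flags(db.DB_DUPSORT)                 # many records per master value
    secondary.open("by_master.db", None, db.DB_BTREE, db.DB_CREATE)

    def master_of(pkey, pdata):
        # Secondary key extractor: assumes a b"master|name|..." record layout.
        return pdata.split(b"|")[0]

    # DB_CREATE makes associate() index any records already in the primary.
    primary.associate(secondary, master_of, db.DB_CREATE)

    # Position on the first duplicate for 'master1', then walk only its duplicates.
    cur = secondary.cursor()
    rec = cur.pget(b"master1", db.DB_SET)              # (secondary key, primary key, data)
    while rec is not None:
        skey, pkey, pdata = rec
        # ... process pdata (the full primary record) here ...
        rec = cur.pget(db.DB_NEXT_DUP)
    cur.close()
    secondary.close()
    primary.close()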
    I have another question: is there any way to fetch information by record number? For example, fetch the information located at the third record of my database.
    I guess you're referring to logical record numbers, like a relational database's ROWID. Since your databases are organized as BTrees (without the DB_RECNUM flag specified) this is not possible directly. You could do it if you use a cursor and iterate through the records, stopping on the record whose number is the one you want (using an incrementing counter to keep track of the position). If your databases had been configured to use logical record numbers (BTree with DB_RECNUM, Queue or Recno) this would have been possible directly:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
    Regards,
    Andrei

  • Large number of event Log entries: connection open...

    Hi,
    I am seeing a large number of entries in the event log of the type:
    21:49:17, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/TIME_WAIT ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:41820] ppp0 NAPT)
    Are these anything I should be concerned about? I have tried a couple of forum and Google searches, but I don't quite know where to start beyond pasting the first bit of the message. I haven't found anything obvious from those searches.
    DHCP table lists 192.168.1.78 as the desktop PC on which I'm writing this.
    Please could you point me in the direction of any resources that will help me to work out if I should be worried about this?
    A slightly longer extract is shown below:
    21:49:17, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/TIME_WAIT ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:41820] ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/SYN_SENT ppp0 NAPT)
    21:49:11, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [213.205.231.156:51027] TIME_WAIT/CLOSED ppp0 NAPT)
    21:49:03, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [178.190.63.75:55535] CLOSED/SYN_SENT ppp0 NAPT)
    21:49:00, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [2.96.4.85:23939] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:59, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.144.143.222:21617] CLOSED/TIME_WAIT ppp0 NAPT)
    21:48:58, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [41.218.222.34:28188] ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [41.218.222.34:28288] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.132.123.255:18048] ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.132.123.255:54199] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:55, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.144.91.49:60704] ppp0 NAPT)
    21:48:55, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [80.3.100.12:50875] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:45, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.150.251.216:57656] ppp0 NAPT)
    21:48:39, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.150.251.216:56975] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:29, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [79.99.145.46:8368] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:27, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [90.192.249.173:45250] ppp0 NAPT)
    21:48:16, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [212.17.96.246:62447] ppp0 NAPT)
    21:48:10, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [82.16.198.117:49942] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:08, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [213.205.231.156:51027] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:04, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [89.153.251.9:53729] TIME_WAIT/CLOSED ppp0 NAPT)
    21:47:54, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [80.3.100.12:37150] ppp0 NAPT)

    Hi,
    Thank you for the response. I think, but can't remember for sure, that UPnP was already switched off when I captured that log. Anyway, even if it wasn't, it is now. So I will see what gets captured in my logs.
    I've just had to restart my Home Hub because of other connection issues and I notice that the first few entries are also odd:
    19:35:16, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:34:45, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:34:31, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:34:31, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:34:04, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:46, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:33:46, 12 Mar.
    IN: BLOCK [12] Spoofing protection (IGMP 86.164.178.188->224.0.0.22 on ppp0)
    19:33:45, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:39, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:33, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:33:29, 12 Mar.
    IN: BLOCK [15] Default policy (UDP 111.252.36.217:26328->86.164.178.188:12708 on ppp0)
    19:33:16, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 193.113.4.153:80->86.164.178.188:49572 on ppp0)
    19:33:14, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:14, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 66.193.112.93:443->86.164.178.188:44266 on ppp0)
    19:33:14, 12 Mar.
    ( 164.240000) CWMP: session completed successfully
    19:33:13, 12 Mar.
    ( 163.700000) CWMP: HTTP authentication success from https://pbthdm.bt.mo
    19:33:05, 12 Mar.
    BLOCKED 106 more packets (because of Default policy)
    19:33:05, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:05, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 213.1.72.209:80->86.164.178.188:49547 on ppp0)
    19:33:05, 12 Mar.
    BLOCKED 94 more packets (because of Default policy)
    19:33:05, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:33:05, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 199.59.148.87:443->86.164.178.188:49531 on ppp0)
    19:33:05, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:33:04, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:04, 12 Mar.
    ( 155.110000) CWMP: Server URL: https://pbthdm.bt.mo; Connecting as user: ACS username
    19:33:04, 12 Mar.
    ( 155.090000) CWMP: Session start now. Event code(s): '1 BOOT,4 VALUE CHANGE'
    19:32:59, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:54, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:32:53, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:52, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:32:51, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:48, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:47, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:46, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:46, 12 Mar.
    BLOCKED 4 more packets (because of First packet is Invalid)
    19:32:45, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49461->199.59.149.232:443 on ppp0)
    19:32:44, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:44, 12 Mar.
    BLOCKED 1 more packets (because of First packet is Invalid)
    19:32:43, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49398->193.113.4.153:80 on ppp0)
    19:32:42, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:42, 12 Mar.
    BLOCKED 3 more packets (because of First packet is Invalid)
    19:32:42, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49277->119.254.30.32:443 on ppp0)
    19:32:41, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:41, 12 Mar.
    BLOCKED 1 more packets (because of First packet is Invalid)
    19:32:41, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:38, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49280->119.254.30.32:443 on ppp0)
    19:32:36, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:34, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:32:30, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 66.193.112.93:443->86.164.178.188:47022 on ppp0)
    19:32:30, 12 Mar.
    ( 120.790000) CWMP: session closed due to error: WGET TLS error
    19:32:30, 12 Mar.
    ( 120.140000) NTP synchronization success!
    19:32:30, 12 Mar.
    BLOCKED 1 more packets (because of Default policy)
    19:32:29, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49458->217.41.223.234:80 on ppp0)
    19:32:28, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49280->119.254.30.32:443 on ppp0)
    19:32:26, 12 Mar.
    ( 116.030000) NTP synchronization start
    19:32:25, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49442->74.125.141.91:443 on ppp0)
    19:32:25, 12 Mar.
    OUT: BLOCK [15] Default policy (TCP 192.168.1.78:49310->204.154.94.81:443 on ppp0)
    19:32:25, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 88.221.94.116:80->86.164.178.188:49863 on ppp0)

  • RE: Tab Groups. 1. What will erase saved tab groups unintentionally? E.g.: Clearing Cache, running CCleaner? 2. Does keeping a large number of tab groups active degrade my computer's performance? 3. Are tab groups saved during back-ups?

    RE: Tab Groups.
    1. What will erase saved tab groups unintentionally? E.g. : Clearing Cache, running CCleaner, other actions?
    2. Does keeping a large number of tab groups active degrade my computer's performance?
    3. Are tab groups saved during back-ups?
    Running Win 7 Pro, browsing Firefox 7.0.1

    App (pinned) tabs and Tab Groups (Panorama) are stored as part of the session data in the file sessionstore.js in the Firefox profile folder.
    Make sure that you do not use "Clear Recent History" to clear the "Browsing History" when Firefox is closed, because that takes precedence and prevents Firefox from opening tabs from the previous session.
    * https://support.mozilla.com/kb/Clear+Recent+History
    If you use cleanup software like CCleaner then make sure that Session is unchecked in the settings for the Firefox application.

  • How to design Storage Spaces with a large number of drives

    I am wondering how one might go about designing a storage space for a large number of drives. Specifically, I've got 45 x 4 TB drives. As I am not extremely familiar with Storage Spaces, I'm a bit confused as to how I should go about designing this. Here is how I would do it with hardware RAID, and I'd like to know how to best match this setup in Storage Spaces. I've been burned twice now by poorly designed storage spaces and I don't want to get burned again. I want to make sure that if a drive fails, I'm able to properly replace it without Storage Spaces tossing its cookies.
    In the hardware RAID world, I would divide these 45 x 4 TB drives into three separate 15-disk RAID 6s (thus losing 6 drives to parity). Each RAID 6 would show up as a separate volume/drive to the parent OS. If any disk failed in any of the three arrays, I would simply pull it out, put a new disk in, and the array would rebuild itself.
    Here is my best guess for Storage Spaces. I would create 3 separate storage pools, each containing 15 disks. I would then create a separate dual-parity virtual disk for each pool (also losing 6 drives to parity). Each virtual disk would appear as a separate volume/disk to the parent OS. Did I miss anything?
    Additionally, is there any benefit to breaking up my 45 disks into 3 separate pools? Would it be better to create one giant pool with all 45 disks and then create 3 (or however many) virtual disks on top of that one pool?

    1) Try to avoid parity and especially double-parity RAID with a typical VM workload. It's dominated by small reads (OK) and small writes (not OK, as the whole parity stripe gets updated with every "read-modify-write" sequence). As a result, writes would be DOG slow. Another nasty parity RAID characteristic is very long rebuild times... It's pretty easy to get a second (third with double parity) drive failure during the rebuild process, and that would render the whole RAID set useless. The solution would be to use RAID10: much safer and faster to run and rebuild compared to RAID5/6, but it wastes half of the raw capacity...
    2) Creating "islands" of storage is an extremely effective way of stealing IOPS away from your config. A typical modern RAID set will run out of IOPS long before running out of capacity, so unless you're planning a file dump of ice-cold data or CCTV storage, you'll absolutely need all the IOPS from all spindles at the same time. This again means One Big RAID10, OBR10.
    Hope this helped a bit :) Good luck!
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
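    For reference, a quick sketch of the capacity arithmetic behind the trade-off discussed above - three 15-disk dual-parity sets versus one big two-way mirror over the same 45 x 4 TB drives:
    DRIVES, SIZE_TB = 45, 4

    sets = 3
    usable_dual_parity = sets * (DRIVES // sets - 2) * SIZE_TB   # 2 drives' worth of parity per set
    usable_mirror = DRIVES * SIZE_TB // 2                        # two-way mirror keeps a second copy of everything

    print(f"dual parity (3 x 15 disks): {usable_dual_parity} TB usable")   # 156 TB
    print(f"two-way mirror (RAID10):    {usable_mirror} TB usable")        # 90 TB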

  • Business Connector scheduler hangs and the Next run value is a large number

    Hi All,
    I see that the scheduler in Business Connector hangs and the Next run value shows a huge number like 9223370831700598.0 sec. Can anyone please suggest what can be done?
    The current BC version is 4.7. The problem is resolved every time I restart the server.

    Hi,
    I am not aware of the reason, but I guess you must be using simple scheduled tasks.
    Try using complex repeating tasks, where you specify days, minutes, etc.
    It should work fine.
    Hope it helps. Reward if useful.
    Regards,
    Siddhesh S.Tawate
