Backup rserver with sticky configured

Hi,
I would like to ask about the configuration of a backup rserver when stickiness is configured.
This is not documented in the Cisco guides.
Suppose the real server1 fails and connections are diverted to server2. Then server1 resumes service. What happens to existing connections on server2 and the new connections?
serverfarm SFARM1
rserver SERVER1
  backup-rserver SERVER2
  inservice
rserver SERVER2
  inservice standby

- Existing connections keep accessing server2.
- If a new client request (connection) matches a sticky entry for server2, ACE forwards this request to server2.
ACE looks up the sticky entries and uses server2, since the standby state is treated as UP.
http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/vA2_3_0/configuration/slb/guide/rsfarms.html#wp1000385
- If a new client request (connection) doesn't match any sticky entry for server2, ACE forwards this request to server1.
If you want connections to return to server1 once it comes back OPERATIONAL, I recommend using a backup serverfarm without the sticky option, as shown below.
http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/vA2_3_0/configuration/slb/guide/sticky.html#wp1137791
serverfarm SFARM1
rserver SERVER1
  inservice
serverfarm SFARM2
rserver SERVER2
  inservice
sticky ip-netmask 255.255.255.255 address both sticky_ip
  serverfarm SFARM1 backup SFARM2
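For completeness, the sticky group still has to be referenced from the load-balancing policy. A minimal sketch (the policy-map name SLB-POLICY is an assumption, not from the original post):
policy-map type loadbalance http first-match SLB-POLICY
  class class-default
    sticky-serverfarm sticky_ip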
The following is a test result showing the standby rserver and sticky IP behavior.
ACE20a/Admin# sh rserver
rserver              : sv1, type: HOST
state                : OPERATIONAL (verified by arp response)
                                                ----------connections-----------
       real                  weight state        current    total
   ---+---------------------+------+------------+----------+--------------------
   serverfarm: sf
       192.168.72.11:0       8      PROBE-FAILED 0          2
rserver              : sv2, type: HOST
state                : OPERATIONAL (verified by arp response)
                                                ----------connections-----------
       real                  weight state        current    total
   ---+---------------------+------+------------+----------+--------------------
   serverfarm: sf
       192.168.72.12:0       8      OPERATIONAL  0          8
ACE20a/Admin#
!___ access from client to ACE vip
ACE20a/Admin# sh sticky database
sticky group : sticky_ip
type         : IP
timeout      : 1440          timeout-activeconns : FALSE
  sticky-entry          rserver-instance                 time-to-expire flags
  ---------------------+--------------------------------+--------------+-------+
  13882423967172020068  sv2:0                            86384          -
!___ ACE learns client address and registers the entry
ACE20a/Admin#
ACE20a/Admin# sh rserver
rserver              : sv1, type: HOST
state                : OPERATIONAL (verified by arp response)
                                                ----------connections-----------
       real                  weight state        current    total
   ---+---------------------+------+------------+----------+--------------------
   serverfarm: sf
       192.168.72.11:0       8      OPERATIONAL  0          2
!___ return OPERATIONAL
rserver              : sv2, type: HOST
state                : OPERATIONAL (verified by arp response)
                                                ----------connections-----------
       real                  weight state        current    total
   ---+---------------------+------+------------+----------+--------------------
   serverfarm: sf
       192.168.72.12:0       8      STANDBY      0          9
!___ return STANDBY
ACE20a/Admin# sh sticky database
sticky group : sticky_ip
type         : IP
timeout      : 1440          timeout-activeconns : FALSE
  sticky-entry          rserver-instance                 time-to-expire flags
  ---------------------+--------------------------------+--------------+-------+
  13882423967172020068  sv2:0                            86356          -
!___ ACE keeps sticky entry to server2.
ACE20a/Admin#
!___ access from client with new syn packet
ACE20a/Admin# sh sticky database
sticky group : sticky_ip
type         : IP
timeout      : 1440          timeout-activeconns : FALSE
  sticky-entry          rserver-instance                 time-to-expire flags
  ---------------------+--------------------------------+--------------+-------+
  13882423967172020068  sv2:0                            86389          -
!___ ACE uses this sticky entry (the time-to-expire timer is reset) and sends packets to server2
ACE20a/Admin#
ACE20a/Admin# sh ver | i image
  system image file: [LCP] disk0:c6ace-t1k9-mz.A2_3_1.bin

Similar Messages

  • ACE backup-server and sticky

    Hi all,
    a question:
     if I configure a serverfarm with a backup rserver
    serverfarm host S_Das
      rserver DAS1
        backup-rserver DAS1_1
        inservice
      rserver DAS1_1
        inservice standby
      rserver DAS2
        backup-rserver DAS2_1
        inservice
      rserver DAS2_1
        inservice standby
    sticky ip-netmask 255.255.255.255 address both SF_DAS
      timeout 10
      replicate sticky
      serverfarm S_Das
    and rserver DAS1 goes down, what will the sticky and balancing behaviour be?
    Will new connections go towards DAS2, or will a tricky and clever sticky take precedence (I mean persistence on DAS1_1, which is my backup server)?
    Thanks
    Das

    Hi Danilo,
    If your primary rserver goes down, the sticky entries associated with that server will be automatically flushed from the sticky table, so that all new incoming connections will be diverted to your backup rserver.
    If the primary rserver then comes back:
    - Existing connections on the backup keep using the backup.
    - For new connection requests, ACE looks up the sticky entries; if there is already an entry for the backup server, the connection is sent to the standby rserver.
    - If a new client request (connection) doesn't match any sticky entry for the backup rserver, ACE forwards the request to the primary.
    If you want the primary rserver to take all connections after it comes back to the operational state, configure the backup at the serverfarm level instead, like this:
    rserver Primary
      ip address 10.10.10.2
      inservice
    rserver Standby
      ip address 10.10.10.3
      inservice
    serverfarm host Primary
      rserver Primary
        inservice
    serverfarm host Standby
      rserver Standby
        inservice
    policy-map type loadbalance http first-match slb
      class class-default
        serverfarm Primary backup Standby
    HTH

  • ACE 4710 and load balancing with sticky cookie

    Configuring load balancing with SSL termination and stickiness for a couple of citrix xenapp servers.  I'm doing a source-NAT as the ACE resides in the DMZ and these particular servers reside on the inside arm of the firewall.  The ACE is in bridged mode to load balance web servers that reside in the DMZ.  Everything seems to work just fine, but the cookie stickiness does not seem to be working.

    Hi David,
    As you may know, using Wireshark to look at an HTTPS capture is only useful if you've installed the server SSL key. This is why I find it easier to use something like LiveHTTPHeaders or HTTPWatch.
    When using cookie-insert, the ACE will not create any dynamic cookie entries.  It will simply create one static entry for each rserver with a cookie value, such as R3911631338, and any client that gets load balanced to that rserver will receive a cookie with that value.  So what you see there is what is expected.
    You are correct in that when using location cookies that the server supplies, the ACE will create a dynamic entry when it sees the server response with the cookie. The cookie is included in the server's response, and the ACE will look for the value as configured. The cookie will also be sent to the client. If the cookie is not in the server's first response, you will need to enable persistence-rebalance so that it will look in subsequent server responses. If the browser opens new connections with that cookie, then the ACE will stick to the same server.
    My suggestion would be to get sticky working with cookie-insert first.  Then if that meets your needs, go with that permanently.  If you need to use server cookies, then once cookie insert is working, migrate your sticky to cookie location.
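    For reference, a minimal cookie-insert sticky sketch along these lines (the cookie, group, serverfarm, and policy names are assumptions, not taken from the thread):
    sticky http-cookie ACE-COOKIE XENAPP-STICKY
      cookie insert browser-expire
      serverfarm XENAPP-FARM
    policy-map type loadbalance http first-match XENAPP-SLB
      class class-default
        sticky-serverfarm XENAPP-STICKY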
    Sean

  • Backup fail with VSSwriter error

    Hello,
    we are currently using Symantec Backup to tape and Windows Server Backup to make two different backups on an SBS 2008 server.
    We are getting failed backups randomly during the week for both; sometimes one succeeds and not the other.
    We have found several post about the VSS issues and how to fix it but nothing has changed.
    What we have done:
    - checked the VSS writers; all are in the "no error" state
    - checked the settings of the backups
    - restarted the VSS services
    - configured Shadow Copy on the drives
    - rebooted the server
    - tried several solutions from forums
    What we can see in the Event Viewer:
    ID 12289
    "Volume Shadow Copy Service error: Unexpected error VSS_E_WRITER_STATUS_NOT_AVAILABLE. An older active writer session state is being overwritten by a newer session. The most common cause is that the number of parallel backups has exceeded the maximum
    supported limit.  hr = 0x80042409. 
    Operation:
       Aborting Writer
    ID 7001
    VssAdmin: Unable to create a shadow copy: The shadow copy provider had an error. Please see the system and application event logs for more information. 
    Command-line: 'C:\Windows\system32\vssadmin.exe Create Shadow /AutoRetry=15 /For=C:\'. "
    ID 12341
    Volume Shadow Copy Warning: VSS spent 75 seconds trying to flush and hold the volume \\?\Volume{c22d5d46-0540-11e0-9496-806e6f6e6963}\.  This might cause problems when other volumes in the shadow-copy set timeout waiting for the release-writes phase,
    and it can cause  the shadow-copy creation to fail.  Trying again when disk activity is lower may solve this problem. 
    Operation:
       Executing Asynchronous Operation
    Context:
       Current State: flush-and-hold writes
       Volume Name: \\?\Volume{c22d5d46-0540-11e0-9496-806e6f6e6963}\"
    Some will post the link to "how to troubleshoot VSS writer issues": we have already used it.
    Does someone have a fresh idea about this issue?
    Thanks in advance

    Hi Nerea,
    Before going further, could you please confirm whether, if you temporarily disable either of the two backups, the other backup operation runs smoothly? In other words, if you temporarily disable Symantec Backup, will Windows Server Backup always run successfully? Or if you disable Windows Server Backup, will Symantec Backup always run successfully? Please check whether Windows Server Backup conflicts with Symantec Backup.
    Based on the error message, please run the chkdsk command and check whether there is any issue with the disk. Then use 'vssadmin resize shadowstorage' to resize the maximum amount of storage space that can be used for shadow copy storage, and check whether the issue still persists.
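    For example, a sketch of the resize command (the volume letters and size here are assumptions; adjust them to your environment):
    vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10GB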
    In addition, have you already re-registered the VSS components? If the issue still persists, please re-register them and monitor the result.
    From command prompt:
    cd windows\system32
    Net stop vss
    Net stop swprv
    regsvr32 ole32.dll
    regsvr32 vss_ps.dll
    Vssvc /Register
    regsvr32 /i swprv.dll
    regsvr32 /i eventcls.dll
    regsvr32 es.dll
    regsvr32 stdprov.dll
    regsvr32 vssui.dll
    regsvr32 msxml.dll
    regsvr32 msxml3.dll
    regsvr32 msxml4.dll
    net start swprv
    net start vss
    By the way, did the Symantec Backup and Windows Server Backup run at the same time? Or before one backup finished,
    had another backup started?
    Hope this helps.
    Best regards,
    Justin Gu

  • Backup Fails with Invalid RECID Error

    Hi All,
    Please help me to understand the Caution section.
    The text below is from
    [http://download.oracle.com/docs/cd/B10501_01/server.920/a96566/rcmtroub.htm#447765]
    Backup Fails with Invalid RECID Error: Solution 2
    This solution is more difficult than solution 1:
    To create the control file with SQL*Plus:
       1. Connect to the target database with SQL*Plus. For example, enter:
          % sqlplus 'SYS/oracle@trgt AS SYSDBA'
       2. Mount the database if it is not already mounted:
          SQL> ALTER DATABASE MOUNT;
       3. Back up the control file to a trace file:
          SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
       4. Edit the trace file as necessary. The relevant section of the trace file looks something like the following:
          # The following commands will create a new control file and use it
          # to open the database.
          # Data used by the recovery manager will be lost. Additional logs may
          # be required for media recovery of offline data files. Use this
          # only if the current version of all online logs are available.
          STARTUP NOMOUNT
          CREATE CONTROLFILE REUSE DATABASE "TRGT" NORESETLOGS  ARCHIVELOG
          --  STANDBY DATABASE CLUSTER CONSISTENT AND UNPROTECTED
              MAXLOGFILES 32
              MAXLOGMEMBERS 2
              MAXDATAFILES 32
              MAXINSTANCES 1
              MAXLOGHISTORY 226
          LOGFILE
            GROUP 1 '/oracle/oradata/trgt/redo01.log'  SIZE 25M,
            GROUP 2 '/oracle/oradata/trgt/redo02.log'  SIZE 25M,
            GROUP 3 '/oracle/oradata/trgt/redo03.log'  SIZE 500K
          -- STANDBY LOGFILE
          DATAFILE
            '/oracle/oradata/trgt/system01.dbf',
            '/oracle/oradata/trgt/undotbs01.dbf',
            '/oracle/oradata/trgt/cwmlite01.dbf',
            '/oracle/oradata/trgt/drsys01.dbf',
            '/oracle/oradata/trgt/example01.dbf',
            '/oracle/oradata/trgt/indx01.dbf',
            '/oracle/oradata/trgt/tools01.dbf',
            '/oracle/oradata/trgt/users01.dbf'
          CHARACTER SET WE8DEC
          # Take files offline to match current control file.
          ALTER DATABASE DATAFILE '/oracle/oradata/trgt/tools01.dbf' OFFLINE;
          ALTER DATABASE DATAFILE '/oracle/oradata/trgt/users01.dbf' OFFLINE;
          # Configure RMAN configuration record 1
          VARIABLE RECNO NUMBER;
          EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE DISK
          DEBUG 255');
          # Recovery is required if any of the datafiles are restored backups,
          # or if the last shutdown was not normal or immediate.
          RECOVER DATABASE
          # All logs need archiving and a log switch is needed.
          ALTER SYSTEM ARCHIVE LOG ALL;
          # Database can now be opened normally.
          ALTER DATABASE OPEN;
          # Commands to add tempfiles to temporary tablespaces.
          # Online tempfiles have complete space information.
          # Other tempfiles may require adjustment.
          ALTER TABLESPACE TEMP ADD TEMPFILE '/oracle/oradata/trgt/temp01.dbf' REUSE;
          # End of tempfile additions.
       5. Shut down the database:
          SHUTDOWN IMMEDIATE
       6. Execute the script to create the control file, recover (if necessary), archive the logs, and open the database:
          STARTUP NOMOUNT
          CREATE CONTROLFILE ...;
          EXECUTE ...;
          RECOVER DATABASE
          ALTER SYSTEM ARCHIVE LOG CURRENT;
          ALTER DATABASE OPEN ...;
    Caution:
          If you do not open with the RESETLOGS option,
    then two copies of an archived redo log for a given log sequence number may
    exist--even though these two copies have completely different contents.
    For example, one log may have been created on the original host and the other on the new host.
    If you accidentally confuse the logs during a media recovery,
    then the database will be corrupted but Oracle and RMAN cannot detect the problem.

    Please help me to understand the Caution section:
    Caution:
    If you do not open with the RESETLOGS option,
    then two copies of an archived redo log for a given log sequence number may
    exist--even though these two copies have completely different contents.
    For example, one log may have been created on the original host and the other on the new host.
    If you accidentally confuse the logs during a media recovery,
    then the database will be corrupted but Oracle and RMAN cannot detect the problem.
    As per my understanding, it says: if you don't open the database with the RESETLOGS option, there may be two archived redo logs with the same log sequence number, one already archived on the source host and one generated on the new host. This may happen due to the difference in RECIDs. When the database needs media recovery for that log sequence, you may provide either of them; RMAN and Oracle will not be able to differentiate the two files and can accept either archived log during recovery. Since the contents of the two archived logs are different, because they were generated at different times and contain different transactions, using the wrong one silently corrupts your database.
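    If you do want to avoid that ambiguity, a minimal sketch of the RESETLOGS variant (this assumes the trace script is edited to use RESETLOGS instead of NORESETLOGS; it is not part of the procedure quoted above):
    RECOVER DATABASE USING BACKUP CONTROLFILE
    ALTER DATABASE OPEN RESETLOGS;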
    Rgds.

  • Issue with sticky sessions

    My application has the following architecture:
    1.) a load balanced Flex frontend with sticky sessions which queries
    2.) a load balanced REST service also with sticky sessions
    The flex frontend queries the service using a Flex HTTPService object.  However, although sticky sessions are enabled on both the flex frontend and
    rest service, we are seeing queries go to different instances. For example
    user will request Flex App1 which will then call RestService1
    then the user will request Flex App1 again, which will call RestService2 (instead of RestService1).
    Has anyone seen this issue before in a load balanced environment?  I need this to work because the REST service does not have a distributed cache, so subsequent requests must hit the same box to use the cache.
    thanks

    NW6 SP5 needs nw6nss5c in order for NSS to work properly; once applied
    then do
    nss /poolrebuild /purge
    on all pools. Make sure you have tested backups first, just in case.
    Also Load Monitor - Server Parameters - NCP. Set Level 2 OpLocks Enabled
    = Off, and Client File Caching Enabled = Off.
    What lan driver, date and version, on the server?
    Andrew C Taubman
    Novell Support Forums Volunteer SysOp
    http://support.novell.com/forums
    (Sorry, support is not provided via e-mail)
    Opinions expressed above are not
    necessarily those of Novell Inc.

  • Backup Deletion with DB13?

    Hey there,
    I have scheduled some AllOnline+RedoLog backups to a disk, but there is only enough space for ONE backup. So, after the backup on the disk has been saved to tape, it should be deleted; not directly after the tape backup, but shortly before the new AllOnline+RedoLog backup.
    Example:
    01.01.2008 - 3:00am - AllOnline + RedoLog Backup (to Disk)
    01.01.2008 - 8:00pm - Backup to Tape
    08.01.2008 - 1:00am - Deletion of old Backup
    08.01.2008 - 3:00am - AllOnline.... and so on.
    Is this possible with DB13 alone? How could I achieve this?
    Thanks in advance!

    Hi there,
    If you are using DB13 you are probably familiar with the profile file init<SID>.sap. In that file you can configure the disk path; if you didn't, the backups go to the default path, which is %SAP_DATA%\sapbackup. In the same file you can configure the retention period of those files and of all the logs created during backup activities.
    There is an action pattern in DB13 called "Clean up logs"; that pattern can be configured to delete old backup files.
    We implemented it exactly the way you want, and in the init<SID>.sap located in %ORACLE_HOME%\database\ we set the parameters as follows:
    # retention period in days for archive log files saved on disk (default: 30)
    cleanup_disk_archive = 14
    This means that offline redo logs will be deleted once they are 14 days old.
    # retention period in days for database files backed up on disk (default: 30)
    cleanup_disk_backup = 1
    This means that the data file backups will only be kept for 1 day; the next day, when you run the "Clean up logs" action, the previous backup will be deleted.
    You can configure it so that, for example, the clean-up runs every day one hour before the backup; that way you first delete the old backup and then make a new one.
    You can use a scheduled task if you want, but if you use DB13 you don't need to log on to your server every day to review the logs. We configured a scheduled task for our Portal because it has no ABAP instance.
    The instruction in the .bat file could be "brconnect -u / -c force -p init<SID>.sap -f cleanup", which is actually the same pattern DB13 uses.
    You can review the documentation of brconnect for more details.
    http://help.sap.com/saphelp_sm40/helpdata/EN/50/7dd41742210144aee3fdee21c553eb/content.htm
    Regards.
    Gustavo Balboa

  • Backups failing with error 19: the backup disk could not be resolved, or there was a problem mounting it

    Hello all,
    I doubt this issue relates to a recent and ugly issue, but I'm sharing it anyway (that one was resolved thanks to a user here: LaPastenague):
    Time Capsule won't backup if Modem (Motorola SB6121) attached
    That was resolved on Feb 10th. A few days later I upgraded to Yosemite. Backups had been working fine since.
    Yesterday I happened to notice my last backup was 5 days ago. I looked in the console logs and saw:
    2/25/15 3:55:06.127 PM com.apple.backupd[2777]: Attempting to mount network destination URL: afp://Paul;AUTH=SRP@Time%20Capsule._afpovertcp._tcp.local./Data
    2/25/15 3:55:17.778 PM com.apple.backupd[2777]: NAConnectToServerSync failed with error: 2 (No such file or directory) for url: afp://Paul;AUTH=SRP@Time%20Capsule._afpovertcp._tcp.local./Data
    2/25/15 3:55:17.792 PM com.apple.backupd[2777]: Backup failed with error 19: The backup disk could not be resolved, or there was a problem mounting it.
    2/25/15 3:57:18.808 PM com.apple.prefs.backup.remoteservice[2021]: Attempt to use XPC with a MachService that has HideUntilCheckIn set. This will result in unpredictable behavior: com.apple.backupd.status.xpc
    I called Apple. Their front line was clueless. He suggested resetting the TC (latest "tower" model, btw), after which a backup kicked off, YEAH.
    But the next hour it failed.
    I called Apple again, this time reaching a Sr Advisor. He suggested a factory reset of the TC and setting it up again. Again the first backup worked, then the rest failed.
    The third call to Apple was useless as well. He was asking questions I didn't feel were relevant. He also asked me to reset the TC, which I told him had been done 2 hours prior. He asked me to reboot, which I could NOT do at that time.
    I begged him to collect data to submit to engineering, which will take days to get a reply.
    So I thought I would throw this out to the community....
    The TC is NEW (replaced after the last issue). The Mac is 8 months old (iMac 27"), the OS is Yosemite, and the TC is the latest model.
    Thanks...

    Yosemite is problematic on two fronts.. Networking and Time Machine.. so the combo of doing both with a Time Capsule is like bugs on bugs.
    Here is my standard list.. but your problem maybe difficult to resolve.. and my suggestion is simple.. until Apple get their act together and fix TM.. buy Carbon Copy Cloner and use that for your backup. It is solid and reliable.. even better if you use USB drive plugged into the computer and do a bootable clone.. because then you have a backup that is able to be tested for full functionality 2min after the end of the backup.
    This also starts from a factory reset.. but the reason for it is to change the configuration which is much more easily handled with factory reset to begin.. the instructions are there.. because this is my standard reply.. this is not uncommon!!
    Factory reset universal
    Power off the TC.. ie pull the power cord or power off at the wall.. wait 10sec.. hold in the reset button.. be gentle.. power on again still holding in reset.. and keep holding it in for another 10sec. You may need some help as it is hard to both hold in reset and apply power. It will show success by rapidly blinking the front led. Release the reset.. and wait a couple of min for the TC to reset and come back with factory settings. If the front LED doesn’t blink rapidly you missed it and simply try again. The reset is fairly fragile in these.. press it so you feel it just click and no more.. I have seen people bend the lever or even break it. I use a toothpick as tool.
    N.B. None of your files on the hard disk of the TC are deleted.. this simply clears out the router settings of the TC.
    Setup the TC again.
    ie Start from a factory reset. No files are lost on the hard disk doing this.
    Then redo the setup from the computer with Yosemite.
    1. Use very short names.. NOT APPLE RECOMMENDED names. No spaces and pure alphanumerics.
    eg TCgen5 and TCwifi for basestation and wireless respectively.
    Even better if the issue is more wireless use TC24ghz and TC5ghz with fixed channels as this also seems to help stop the nonsense. But this can be tried in the second round.
    2. Use all passwords that also comply but can be a bit longer. ie 8-20 characters mixed case and numbers.. no non-alphanumerics.
    3. Ensure the TC always takes the same IP address.. you will need to do this on the main router using dhcp reservation.. or a bit more complex setup using static IP in the TC. But this is important.. having IP drift all over the place when Yosemite cannot remember its own name for 5 min after a reboot makes for poor networking. If the TC is main router it will not be an issue.
    4. Check your share name on the computer is not changing.. make sure it also complies with the above.. short no spaces and pure alphanumeric.. but this change will mess up your TM backup.. so be prepared to do a new full backup. Sorry.. keep this one for second round if you want to avoid a new backup.
    5. Mount the TC disk in the computer manually.
    In Finder, Go, Connect to server from the top menu,
    Type in SMB://192.168.0.254 (or whatever the TC ip is which you have now made static. As a router by default it is 10.0.1.1 and I encourage people to stick with that unless you know what you are doing).
    You can use name.. SMB://TCgen5.local where you replace TCgen5 with your TC name.. local is the default domain of the TC and doesn't change.
    However names are not so easy as IP address.. nor as reliable. At least not in Yosemite they aren't. The domain can also be an issue if you are not plugged or wireless directly to the TC.
    6. Make sure IPv6 is set to link-local only in the computer. For example wireless open the network preferences, wireless and advanced / TCP/IP.. and fix the IPv6. to link-local only.
    There is a lot more jiggery pokery you can try but the above is a good start.. if you find it still unreliable.. don't be surprised.
    You might need to do some more work on the Mac itself. eg Reset the PRAM.. has helped some people. Clean install of the OS is also helpful if you upgrade installed.
    Tell us how you go.
    Someone posted a solution.. See this thread.
    Macbook can't find Time Capsule anymore
    Start from the bottom and work up.. What I list here is good network practice changes but I have avoided Yosemites bug heaven.
    This user has had success and a few others as well.
    RáNdÓm GéÉzÁ
    Yosemite has serious DNS bug in the networking application.. here is the lets say more arcane method of fixing it by doing a network transplant from mavericks.
    http://arstechnica.com/apple/2015/01/why-dns-in-os-x-10-10-is-broken-and-what-you-can-do-to-fix-it/

  • ACE Graceful Server Shutdown with Sticky

    I would like a way to gracefully shutdown a server without killing the sessions of the current users on that server.
    I know the "no inservice" command will allow the server to finish servicing existing TCP connections, but what happens to the users that are 'stuck' to that server?
    What happens with sticky sessions when you reduce the connection limit for a server below the current connection count? How about reducing the weight of the server in the farm? Will the 'stuck' sessions continue to go to the correct server in the farm?

    switch/Admin(config)# serverfarm linux1
    switch/Admin(config-sfarm-host)# rserver linux1
    switch/Admin(config-sfarm-host-rs)# inservice ?
    standby Only allow connections reassigned from failed servers
    Carriage return.
    switch/Admin(config-sfarm-host-rs)# do sho ver
    Cisco Application Control Software (ACSW)
    TAC support: http://www.cisco.com/tac
    Copyright (c) 2002-2008, Cisco Systems, Inc. All rights reserved.
    The copyrights to certain works contained herein are owned by
    other third parties and are used and distributed under license.
    Some parts of this software are covered under the GNU Public
    License. A copy of the license is available at
    http://www.gnu.org/licenses/gpl.html.
    Software
    loader: Version 12.2[121]
    system: Version A2(1.0a) [build 3.0(0)A2(1.0a) adbuild_04:14:49-2008/04/18_
    As you can see I run A2(1.0a) and the command is there.
    G.

  • Ace Sticky Configuration

    Hi Guys,
    I'm trying to set up a sticky configuration on an ACE module in a 6500.
    I've got the load balancing working happily but need to amend the config to add stickiness.
    As far as I know the first command is something along the lines of...
    sticky http-cookie COOKIENAME STICKYGROUP
    however when I put this in I get the following error.
    Error: Sticy resource not available
    I suspect that I'm missing something obvious.
    Any assistance is greatly appreciated.
    Regards
    Steve

    By default all resources are available to ACE contexts except the sticky resource.
    You need a resource class with the sticky resource defined, and that class applied to the context.
    for example
    resource-class GOLD
    limit-resource sticky minimum 1 maximum equal-to-min
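    Then apply the class to your context from the Admin context; a minimal sketch (the context name C1 is an assumption):
    context C1
      member GOLD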
    Thanks
    Syed Iftekhar Ahmed

  • Time Machine. Backup failed with error code: 21

    Hi,
    I recently upgraded to Leopard 10.5.4 mainly for the time machine backup. I'm using a DNS-323 in a RAID1 configuration as the TM backup drive. Also I have the DNS-323 connected to my Airport Extreme through the ethernet to give NAS ability to my home network.
    After much hunting around the forums I got the sparsebundle created and copied to the DNS-323 share volume. See http://www.flokru.org/2008/02/29/time-machine-backups-on-network-shares-in-leopard/ My thanks to flokru.
    However, TM is still not working. The console messages have the following:
    Network mountpoint /Volume/Volume_1 not owned by backupd... remounting
    Network volume mounted at /Volume/Volume11
    Failed to mount disk image /Volume/Volume1_1/MacBook001b6334c307.sparebundle
    Backup failed with error: 21
    It would appear this error lies in the way TM is trying to mount the volume, specifically that Volume_1, the DNS-323 volume name, is not owned by TM. TM then tries to create a new volume, Volume11 and stick the sparebundle on it.
    Is there a way of making TM "own" the volume, Volume_1?
    Has anybody else seen this error or better yet, got a fix for it?

    James,
    It would appear this error lies in the way TM is trying to mount the volume, specifically that Volume_1, the DNS-323 volume name, is not owned by TM. TM then tries to create a new volume, Volume11 and stick the sparebundle on it.
    Is there a way of making TM "own" the volume, Volume_1?
    Actually, it is not TM's failure to mount the volume that is causing your failure to back up. I have received the same Console message using a Time Capsule. However, Time Machine's attempt to 'remount' the volume always succeeds and the backup goes through. That appears not to be your experience, but it is NOT because the volume is not being mounted. It is.
    *Mount Point Conflict (TMDisk-1)*
    NOTE: the following distinguishes “network drive” (the physical hard disk) from “disk image” (the sparsebundle file TM uses to backup to).
    First of all, I am convinced that the second mount point that is created (in your case Volume11) is due to the fact that you already had the network drive mounted on your desktop at the time of the attempted backup. Was that the case?
    When you manually mount a network drive, the system creates a mount point in the /Volumes folder (let's call it TMDisk). While that exists, any attempt by Time Machine to mount the same disk will create a second mount point (TMDisk-1).
    Nevertheless, as I have observed, this doesn't seem to prevent backups from taking place successfully, at least on the Time Capsule. And according to your Console logs it appears that your volume is being ‘remounted’ successfully.
    *”The Backup Disk Image Could Not Be Mounted”*
    The real issue is Time Machines' failure to mount the disk image (sparsebundle) contained on your network drive.
    +Failed to mount disk image /Volume/Volume1_1/MacBook001b6334c307.sparebundle+
    +Backup failed with error: 21+
    More often than not this is due to having the Time Machine backup disk image (sparsebundle) mounted on your desktop during a backup attempt.
    Eject the backup disk image (sparsebundle) by either clicking the little Eject icon to the right of the disk image in the Finders’ Sidebar, or Ctrl-Click the drive icon on the desktop and select “Eject” from the contextual menu. Now try backing up again or launching Time Machine to view previous backups.
    For more information, see an article on this here:
    [http://discussions.apple.com/thread.jspa?threadID=1715977]
    Alternatively, a backup disk image can fail to mount if there is a problem with your computer name.
    *Proper Computer Name*
    Make sure your computer has a proper name. Go to System Preferences --> Sharing. Time Machine needs to differentiate your computer from others on your network (i.e. "Bills MacBook" or "Office iMac"). If the "Computer Name" field is blank, create a name. Realize that if this step is necessary, you will likely have to start the Time Machine backup process over again and do another full initial backup.
    According to THIS article [http://support.apple.com/kb/TS1760], Time Machine may experience problems if your computer name includes certain characters. Make sure the computer name only includes ASCII characters from the following set.
    (0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ)
    Once a change in the computer name occurs, you should stop and restart Sharing on the affected computer. Uncheck and then recheck Sharing in the Services list on the left.
    Let me know if any of this helps.
    Cheers!

  • Client installation with Net Configuration-Assistant : ora-00604 / ora-0224

    Hi forum,
    I am trying to install a client on a remote computer with the Net Configuration Assistant. When testing the connection as user SYSTEM I always get Oracle errors
    ORA-00604 / ORA-00224.
    When I try the connection with SYS, the reaction is: it should be done with the SYSDBA or SYSOPER privilege.
    On command line level I can login with : sqlplus sys/...@srvtstora:1527/oeg
    where oeg is the ORACLE_SID in question and srvtstora is the PC on which the
    database resides.
    what can i do ?
    Kind regards
    Samplitude

    error ORA-00604: error occurred at recursive SQL level
    ORA-00224, 00000, "controlfile resize attempted with illegal record type (%s)"
    Is that what it shows?
    Then there are problems with the control file. Make sure you have a valid backup before proceeding any further.
    On the other hand, the error when connecting with SYS is because you have to specify the SYSDBA/SYSOPER role in the connect string (sqlplus sys/SysPasswd as sysdba).

  • Problems with Apple Configurator

    I've been having trouble with Apple Configurator lately. For reference, the devices are not being supervised. They are just being restored from a backup file with four profiles and two free applications. The backup file is fairly large, coming in at 12.9 GB. I'm using a Mac Mini with around 150 GB of free HD space. I'm using a Bretford tray, so I usually try to plug in 10 devices at once. This has worked in the past; I have only recently run into problems since the update to Configurator 1.4.2 and/or iOS 7. So, I'm able to run Configurator with 10 devices plugged in (I've since started only plugging in 6), and no matter what I try I cannot get more than 4 devices to complete successfully; about half the time fewer than 4 complete. The error I'm getting is Code 4 and the description just says "unable to restore backup." It takes about 45 minutes to complete one device and about double that to complete 10, so getting 10 done at once is a huge time saver. When I'm only getting 4 devices completed, and many times fewer than that, at almost an hour each run, it is absolutely maddening. I've tried every combination of restarting and reinstalling Apple Configurator and still the problem exists.
    I have a constant stockpile of iPads, Minis, and iPod touches coming in, so I would be extremely grateful if anyone had any tips or advice.
    Additional questions regarding Apple Configurator:
    1. Is there any way to install multiple profiles on multiple devices on the newest version (1.4.2)? It seems like the new version only allows you to install 1 profile on multiple devices or multiple profiles on 1 device.
    2. Every time a backup fails, Apple Configurator saves that backup in the MobileSync folder buried in the Containers folder. Each of these backups is almost 13 GB, so I'm having to constantly delete them to conserve HD space in order to run so many devices at the same time. Is there any way to prevent this from happening, or to auto-delete failed backup restores?
    I know this is very long-winded, but I wanted to get as much of my info out there as I could. Anyway, thanks for any help that anyone can provide!

    I recommend posting in the iPhone or iPad for the Enterprise forums

  • Lync Backup/Restore with SQL backup

    Is it possible to back up and restore the XDS database (Lync topology and configuration information) using SQL database tools? TechNet is very confusing: it describes in great detail how to use cmdlets to export the configuration and then back it up with the regular backups. But then, at the link for backup/restore best practices, it states: "The simplest and most commonly used backup type and rotation schedule is a full, nightly backup of the entire SQL Server database. Then, if restoration is necessary, the restoration process requires only one backup and no more than a day's data should be lost."
    http://technet.microsoft.com/en-us/library/hh202184.aspx
    Best Practices for Backup and Restoration
    To facilitate your back up and restoration process, apply the following best practices when you back up or restore your data:
    Perform regular backups at appropriate intervals. The simplest and most commonly used backup type and rotation schedule is a full, nightly backup of the entire SQL Server database. Then, if restoration is necessary, the restoration process requires only
    one backup and no more than a day’s data should be lost.
    If you use cmdlets or the Lync Server Control Panel to make configuration changes, use the
    Export-CsConfiguration cmdlet to take a snapshot backup of the topology configuration file (Xds.mdf) after you make the changes so that you won't lose the changes if you need to restore your databases.
    Ensure that the shared folder you plan to use for backing up Lync Server 2010 has sufficient disk space to hold all the backed up data.
    Schedule backups when Lync Server usage typically is low to improve server performance and the user experience.
    Ensure that the location where you back up data is secure.
    Keep the backup files where they are available in case you need to restore the data.
    Plan for and schedule periodic testing of the restoration processes supported by your organization.
    Validate your backup and restoration processes in advance to ensure that they work as expected.
    Thanks for your help,
    Paul MacLean

    Take a look if this helps you out:
    http://designinglync.blogspot.com/2011/04/lync-backup-and-recovery.html
    and in detail:
    http://blogs.technet.com/b/uc_mess/archive/2011/03/17/lync_2d00_server_2d00_2010_2d00_backup_2d00_instructions.aspx
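    As a minimal sketch of the Export-CsConfiguration snapshot mentioned in the question (the file path is an assumption):
    Export-CsConfiguration -FileName "C:\Backup\CsConfiguration.zip"
    # and to bring it back later:
    Import-CsConfiguration -FileName "C:\Backup\CsConfiguration.zip"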
    Say thanks and observe basic forum courtesy:
    If this post answered your question, Mark As Answer.
    If this post was helpful, Vote as Helpful.
    windowspbx blog: my thots/howtos
    see/submit Lync suggestions here: simple and public

  • VSS snapshots for DPM 2010 Hyper-V backup conflict with SQL backup on a virtual SQL server

    We currently use DPM 2010 to back up our virtual servers, which reside on a 5-node Hyper-V cluster shared volume. DPM uses the hardware VSS writer to back up the Hyper-V guests. Several of these Hyper-V guests are SQL servers (SQL 2008) and they are all configured to run point-in-time SQL backups using SQL Management Plans.
    The SQL backups are scheduled to run a full database backup on a Friday and differential backups on the other days of the week.  Transaction backups are scheduled to run several times throughout the day.
    However we have recently discovered that there is a conflict between these two methods as it seems as though when a restore is required using a differential SQL backup, it fails as the snapshot created by DPM forces SQL to believe it has had a full backup
    carried out externally from the Management Plan and is therefore unable to perform the restore.
    DPM backs up the Hyper-V guests on a daily basis from 8pm.
    Can anyone provide any advice or guidance on this as we need both types of backup to run successfully.  We are required to backup SQL with point in time backups and we also need to protect the Hyper-V guests in their entirety.

    Thanks Mike,
    I have tried this but unfortunately it has no effect.  The VM has Oracle installed (although not the Oracle VSS Writer).  It is the Oracle application server, not the database server, and the customer has a script that is used to stop and start
    the Oracle application when required.  Through troubleshooting this with them I have noticed that after the WLS_Reports service/process is stopped the backups run successfully but when it is running the backups fail.
    I have also noticed that when I stop the Hyper-V Volume Shadow Copy Requestor the backup runs successfully, which I guess is as expected.
    When the backups fail I get 2 errors in the application log:
    Event Id 12293, VSS - Error calling a routine on a shadow copy provider {GUID for the Hyper-V IC Software Shadow Copy Provider}.  Routine details PreFinalCommitSnapshots ({GUID}, 5) [hr = 0x800705b4, This operation returned because
    the timeout period expired.]
    Event Id 19, vmicvss - Not all the shadow volumes arrived in the guest operating system.
    This is also part of the same problem I have posted here: Backup fails for a Hyper-V guest with VSS Writer failures using DPM 2012 R2 - Hyper-V guest has Oracle application installed
    Regards
    Chris
