My high availability drops dead when replication quiesces

I have a Linux HA cluster running Oracle 8.0.5 with replication. I am generating replication support now, and as each table becomes part of the replication group, i.e. has its generate_replication_support job processed, it becomes unavailable for access. When the repgroup resumes activity, the tables become available again. This is a real problem because we will be adding tables on the fly as new customers sign up. If each new customer marks a point where the cluster goes down, it just won't work. There must be a way around this; anyone have any ideas?
Tim Doyle
System Administrator: Easyware Software
[email protected]

This is standard Advanced Replication behavior.
When you add a table to a master group, that master group is suspended (quiesced). While a master group is quiesced, all update access to all objects in that group is prevented. This is the way it has always worked with multi-master replication.
Depending on how many customers you plan on adding, you could add a new master group for each customer.
Or you could create a temporary master group, add the tables to that master group, and then move the tables to the main master group during scheduled downtime.
Or look into using the Advanced Queuing option. This can provide similar features, but requires much more application development to be successful.
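For illustration, here is a minimal PL/SQL sketch of the per-customer master group approach, run as the replication administrator (e.g. REPADMIN). The group, schema, and table names are hypothetical, and this is only a sketch, not the full procedure (e.g. adding the other master sites with DBMS_REPCAT.ADD_MASTER_DATABASE is omitted):

BEGIN
  -- A newly created master group starts out quiesced, so the main
  -- group keeps running while this one is set up.
  DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'CUST_042_GRP');
  -- Add the new customer's table to the new group only.
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
    sname => 'APP',
    oname => 'CUST_042_ORDERS',
    type  => 'TABLE',
    gname => 'CUST_042_GRP');
  -- Generate replication support; only CUST_042_GRP is affected.
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
    sname => 'APP',
    oname => 'CUST_042_ORDERS',
    type  => 'TABLE');
  -- Resume activity for the new group. The main master group was never
  -- quiesced, so existing customers keep working throughout.
  DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'CUST_042_GRP');
END;
/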
Mark Greenhalgh
TUSC The Ultimate Software Consultants www.tusc.com
[email protected]

Similar Messages

  • Question about using DB_NOSYNC when using replication (high availability)

    Hi,
    I am using Berkeley DB with replication (high availability).
    I want to ask whether using the DB_NOSYNC flag would cause any problem with the durability of the system.
    I know that if there is no sync, data is not flushed from memory to disk, and I will lose uncommitted transactions on system failure. Right?
    But I read in a thread that if you use replication, all transactions are passed to the clients, so there is no loss if the master server fails?
    I would appreciate your thoughts on this issue.

    Hi,
    It sounds as if you're talking about the DB_TXN_NOSYNC flag, rather than DB_NOSYNC.
    You mention that in general, you lose uncommitted transactions on system failure. I think what you mean is that you may lose some committed transactions on system failure. This is correct.
    It is also correct that if you use replication you can arrange to have clients have a copy of all committed transactions, so that if the master fails (and enough clients do not fail, of course) then the clients still have the transaction data, even when using DB_TXN_NOSYNC.
    This is a very common usage scenario for Berkeley DB replication/HA, used to achieve high throughput. You will want to pay attention to the configured ack policy, the group size setting, and the setting of the 2SITE_STRICT option (if the group size is 2).

  • Question on replication/high availability designs

    We're currently trying to work out a design for a high-availability system using Oracle 9i release 2. Having gone through some of the Oracle whitepapers, it appears that the ideal architecture involves setting up 2 RAC sites using Dataguard to synchronize the data. However, due to time and financial constraints, we are only allowed 2 servers for hosting the databases, which are geographically separated from each other as protection against natural disasters. Our app servers will use JDBC pools to connect to the databases.
    Our goal is to have both databases be the mirror image of each other at any given time, and the database must be working 24/7. We do have a primary and a secondary distinction between the two, so if the primary fails, we would like the secondary database to take over the tasks as needed.
    The ability to query existing data is mission critical. The ability to write/update the database is less important, however we do need the secondary to be able to process data input/updates when primary is down for a prolonged period of time, and have the ability to synchronize back with the primary site when it is back up again.
    My question now is which replication technology should we try to implement? I've looked into both Oracle Advanced Replication and Dataguard, each seems to have its own advantages and drawbacks:
    Replication - can easily switch between the two databases using a multimaster implementation; however, data recovery/synchronization may be difficult in case of failure, and data may be lost (depending on the implementation). There have been a few posts in this forum suggesting that replication should not really be considered as an option for high availability; why is that?
    Dataguard - zero data loss in failover/switchover; however, manual intervention is required to initiate failover/switchover. Once the primary site fails over to the standby, the standby becomes the primary until a DBA manually goes back in and switches the roles. In Oracle 10g release 2, it seems that automatic failover is achieved through the use of an extra observer piece. There does not seem to be any way to do this in Oracle 9i release 2.
    Being new to the implementation of high-availability systems, I am at somewhat of a loss at this point. Both implementations seem to be possible candidates, but each would require sacrifices as well. Would anyone shed some light on this, maybe point out my misconceptions about Advanced Replication and Dataguard, and/or suggest a better architecture/technology to use? Any input is greatly appreciated; thanks in advance.
    Sincerely,
    Peter Tung


  • NAC Manager High Availability Peer CAM DEAD

    Hi,
    I have two NAC Managers in High Availability, and I have used the eth1 interface on both sides as the heartbeat link.
    I have done the following steps for High Availability:
    1) Synchronized the time between the two CAMs.
    2) Generated a temporary SSL certificate on both CAMs and performed the export/import procedure on each.
    3) Made one CAM the primary and the other the secondary.
    But after all this configuration is done, the status in Monitoring > Reports on both servers shows that the Primary CAM is up and the Redundant CAM is down.
    Also, in the Failover tab I can see: Local CAM - OK [Active] and Peer CAM - DEAD.
    I have also attached some screenshots so you can see the same.
    Your help will be highly appreciated.
    Thanks 

    Go through the following document and verify that all the steps were followed:
    http://www.cisco.com/c/en/us/support/docs/security/nac-appliance-clean-access/99945-NAC-CAM-HA.html

  • HT1476 My iPhone dropped dead all of a sudden. I could use it in the afternoon, but when I tried to charge it in the evening it would not charge. Please help me.

    My iPhone (3G) was given to me by my daughter, who used it for 3 years. She charged it every night, and when she gave me this iPhone at Christmas it was working properly; the battery lasted at least 5 to 6 hours when I used it continuously. Last Monday I used it as usual in the afternoon and it worked fine, but when I tried to charge it in the evening it would not charge as usual. Is it possible for it to drop dead all of a sudden?
    Lucy

    What to do if your iPhone will not turn on

  • High availability when using Oracle Service Bus as Middleware?

    Hi there,
    We are designing a new solution here which will be composed of SAP WM, Middleware and a Java application.
    We have been using OSB when sending data from Java to SAP, and we can get many "channels" working, so we get higher throughput and also HA.
    Now, in this new project we will need to send data from SAP to a Java application, but some people told me we can't have more than one channel when sending data from SAP to OSB. In other words, we won't be able to have high availability unless we use SAP PI. The only option we would have is to set the gateway to broadcast the message to all available channels, which would duplicate the messages received by the Java application.
    Is that true? Is there any alternative for using OSB with SAP that still gets HA and high throughput (SAP -> OSB)?
    What do you guys think?
    Regards

    anyone?
    Please

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached at the server side). So if you have really decided to use dedicated hardware for storage (maybe you do have a reason I don't know...), and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least does cache I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache
    for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching
    Updated
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/Os. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there would be only an active-passive I/O model (no real use from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was usable when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is useless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster.
    - Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    - Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    - iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
    There are other guys doing this, say DataCore (more aimed at Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
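    For the "(mirrored/HA/AlwaysOn) SQL Server 2012" piece mentioned in the question, here is a hedged T-SQL sketch of an availability group between the two SQL Server guest VMs. All names are hypothetical, and note one assumption worth checking: in SQL Server 2012 an availability group itself requires a Windows Server Failover Cluster among the guest VMs, even if the Hyper-V hosts themselves are not clustered.

    -- Run on the primary instance. Assumes HADR is enabled on both
    -- instances, AppDb is in FULL recovery with a full backup taken,
    -- and database mirroring endpoints already listen on port 5022.
    CREATE AVAILABILITY GROUP AppAG
    FOR DATABASE AppDb
    REPLICA ON
        N'SQLVM1' WITH (
            ENDPOINT_URL      = N'TCP://sqlvm1.example.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC),
        N'SQLVM2' WITH (
            ENDPOINT_URL      = N'TCP://sqlvm2.example.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC);

    -- Then, on the secondary instance:
    -- ALTER AVAILABILITY GROUP AppAG JOIN;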

  • Gradients dropping out when opening PDFs in Illustrator

    Hi everyone. I work for a large format printing company. We receive thousands of files every year from many different designers. We started to notice that gradients would drop out of some PDFs when opening them in Illustrator but they would preview fine when opened in Acrobat. I searched high and low for an explanation for this glitch. The only response I received is "Stop opening exported PDFs in Illustrator." This is an unsatisfactory answer for someone that needs to preflight directly in the Illustrator program. I've done a ton of research and have been racking my brain for the last 2 years and have finally found sufficient workarounds for this issue. The downfall with these workarounds: all spot colors located in the gradients will convert to CMYK mixes. I'm still researching to see if there is a way to keep spot colors intact in these areas.
    A little back story: InDesign is converting the gradients into something called NChannel. It enables more accurate handling of color blending by including additional dot gain and color mixing info. Both InDesign and Acrobat have the ability to display these elements whereas Illustrator, from what I’m finding, does not. Which is why we’re seeing gradient elements drop out when opened in Illustrator.
    Workaround for CMYK Gradients
    Open PDF exported from INDD in Acrobat
    Go to Tools > Print Production > Convert Colors
    Object Type: Smooth Shade (this tells Acrobat to hone in on Gradients only)
    Color Type: DeviceCMYK
    Check Embed next to Conversion Profile (should be SWOP)
    Expand Document Colors on the right and Select DeviceCMYK in Color Spaces then click ok
    Save the PDF
    Workaround for 1 Spot Color in Gradients (Converts Spot Color located in Gradient to CMYK – all other Spots stay intact)
    Open PDF exported from INDD in Acrobat
    Go to Tools > Print Production > Convert Colors
    Object Type: Smooth Shade (this tells Acrobat to hone in on Gradients only)
    Color Type: Spot Color
    Check Embed next to Conversion Profile (should be SWOP)
    Expand Document Colors on the right and Select the Spot that is located in the Gradient in Color Spaces then click ok
    Save the PDF
    Workaround for 2 Spot Colors in Gradients (Converts Spot Colors located in Gradient to CMYK – all other Spots stay intact)
    Open PDF exported from INDD in Acrobat
    Go to Tools > Print Production > Convert Colors
    Object Type: Smooth Shade (this tells Acrobat to hone in on Gradients only)
    Color Type: Spot Color
    Check Embed next to Conversion Profile (should be SWOP)
    Expand Document Colors on the right and Select the DeviceN spot color that is located in the Gradient in Color Spaces then click ok
    Save the PDF
    Please test it and let me know if you run into any issues with these workarounds.  I'm always looking for new problems to solve!
    - Jenny

    you said--"...needs to preflight directly in the Illustrator program. I've done a ton of research and have been racking my brain for the last 2 years and have finally found sufficient workarounds for this issue"
    Your workarounds seem time consuming and (possibly) unnecessary for the following reasons:
    1. large format printers use 4-8 colors to achieve color (so spot colors shouldn't be an issue)
    2. Acrobat Pro has its own 'preflight' function that works fairly well
    Additionally (don't know if you know this), there is an add-on to Acrobat called 'PitStop' (Google it) that allows just about any change/correction/alteration to be made to a PDF.
    ...and... opening a PDF in Illustrator (that wasn't created in Illustrator; do you know how to determine this?) can lead to many problems, i.e. gradients dropping out, text reflow, etc.
    As far as testing your workarounds: seriously, who has the time, when there are tools available that make them unnecessary? (This assumes that you created these 'workarounds' to match your workflow, but a nearly pure PDF workflow should work in this day and age, depending on your RIP and software.)

  • HTTP Server High Availability

    Hello All.
    I have a question regarding OC4J and HTTP server High Availability.
    I want to do something like Figure 3-1 of the Oracle Application Server High Availability Guide 10.1.2; see this link:
    http://download-east.oracle.com/docs/cd/B14099_11/core.1012/b14003/midtierdesc.htm#CIHCEDFC
    What I have now is the following:
    Three hosts
    Two of them are an OAS 10.1.2 which we already configured the Cluster and deployed our applications (used this tutorial: http://www.oracle.com/technology/obe/obe_as_1012/j2ee/deploy/j2eecluster/farmcluster.htm)
    Let's say these nodes are:
    - host1
    - host2
    The other one is a standalone Oracle WebCache (which will act as the load balancer). We will call this
    - hostwc3
    We have already configured WebCache as the load balancer and it is working just fine. We also configured session replication successfully and it works great with our applications.
    What we are not clear about is the following:
    When a client tries to visit http://hostwc3/application/, the LOAD BALANCER routes him to, let's say, http://host1/application/, and the browser's URL no longer shows the virtual server (the WebCache server) but instead shows the actual Apache address (host1) that is serving him. If we "kill" the ENTIRE host1 (Apache, OC4J, etc.), the clients WILL perceive the outage, and if they press F5 they will try to access an Apache that is not up and running. The expected behavior is that the browser NEVER shows the actual Apache URL, so that when an Apache goes down the client does not disconnect (as happens with an OC4J failure) and always works with the "virtual web server".
    I came up with some ideas but I want you Guys to give me an advice:
    - In WebCache, do not route to Apache for load balancing, but route to the OC4J directly (is this possible?)
    - Configure an HTTP Server cluster; this means we would have to have a "virtual name" for the Apaches (two of them). Is this possible? How?
    - Use the rewrite mode of Apache. Is this a good idea?
    - Any other idea how to fix the Apache "single point of failure"?
    According to figure 3-1 (link above), we can have HTTP Server in a cluster. But I have no idea how to manage or configure it.
    Thanks in advance for any help!

    You cannot point Outlook Anywhere to your DAG cluster IP address. It must be pointed to the actual IP address of either server.
    For no extra cost, DNS round robin is the best you will get, but it does have some drawbacks: it may hand out the IP address of a server you have taken down for maintenance, or of a server that has an issue.
    You could look to implement a load balancer, but again, if you are doing this for high availability then you want more than one load balancer in the cluster; otherwise you've just moved your single point of failure.
    Having your existing NAT and just remembering to update it to point to the other server during maintenance may suit your needs for now.
    If you can go into more detail about the high availability your business is looking to achieve and the budget, we can suggest the best method to meet those needs at that price point.
    Have a great day
    Oliver
    Oliver Moazzezi | Exchange MVP, MCSA:M, MCITP:Exchange 2010,Exchange 2013, BA (Hons) Anim | http://www.exchange2010.com | http://www.cobweb.com | http://twitter.com/OliverMoazzezi

  • Topology.svc - Endpoints - Web Services High Availability

    Hi,
    I was recently performing some simple DRP tests before going to production and I faced some issues I had never encountered before.
    (I followed
    http://blogs.msdn.com/b/besidethepoint/archive/2011/02/19/how-i-learned-to-stop-worrying-and-love-the-sharepoint-topology-service.aspx for useful commands related to endpoints.)
    My farm:
    SP 2013 - CU August 2013
    2 WFE (WFE1, WFE2)
    2 App (App1, App2) : Most services started on both servers (UPSS on APP1, UPS on both) - Central Admin on Both.
    SQL Cluster
    At normal state the command : (Get-SPTopologyServiceApplicationProxy).ApplicationProxies | Format-List *
    returns > ServiceEndpointUri :
    https://app1:32844/Topology/topology.svc
    (If I'm not wrong, this topology.svc can run on only one server at a time.)
    I stopped WFE1; no problem, the NLB (appliance) is doing its job.
    Then I stopped APP1 and started to have some issues (most endpoints were not balanced over to APP2).
    I ran the job "Application Addresses Refresh Job",
    or you can launch the PS command: Start-SPTimerJob job-spconnectedserviceapplication-addressesrefresh
    Waited 20 sec.
    A few endpoints are now on APP2 (MMS, Search); it seems to work, and I reached my web page.
    I asked a mate to try and he got the "sorry we encountered an error..." > Can't load user profile.
    I refreshed my browser and got the same error.
    Reviewing the ULS logs, I can see that some svc requests (mostly about user profile, ProfileDBCacheService) still go to APP1!
    A failure was reported when trying to invoke a service application: EndpointFailure Process Name: w3wp Process ID: 6784 AppDomain Name: /LM/W3SVC/93617642/ROOT-1-130445830237578923 AppDomain ID: 2 Service Application Uri: urn:schemas-microsoft-com:sharepoint:service:e8315f8e5d7d4b1b90876e3b0043a4ae#authority=urn:uuid:164efb17f28c4d2d9702ce3e86f0c0e8&authority=https://app1:32844/Topology/topology.svc
    Active Endpoints: 1 Failed Endpoints:1 Affected Endpoint:
    http://app1:32843/e8315f8e5d7d4b1b90876e3b0043a4ae/ProfileService.svc
    The command (Get-SPTopologyServiceApplicationProxy).ApplicationProxies | Format-List *
    still returns that my topology.svc is on
    https://app1:32844/Topology/topology.svc
    But APP1 is down!
    If my understanding is correct: normally, the internal round-robin load balancer (Application Discovery and Load Balancer Service, started on all servers, not configurable) should manage this.
    The Application Addresses Refresh Job runs every 15 min. and refreshes the available endpoints, using the topology.svc.
    But the topology.svc being called is always on APP1, which is down!
    At this point, I haven't found out why SharePoint is not detecting that APP1 is down and is not
    automatically recreating a topology service on another available server.
    If you have any idea...your help is welcome :)
    Regards,
    O.P

    Hi,
    For achieving web services high availability, you need to make sure service applications have more than one server servicing them, to increase back-end resiliency when a SharePoint server drops off the network.
    You can refer to the blog:
    http://blogs.msdn.com/b/sambetts/archive/2013/12/05/increasing-service-application-redundancy-high-availability-sharepoint.aspx
    Thanks,
    Eric
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support,
    contact [email protected]
    Eric Tao
    TechNet Community Support

  • High Availability Options Without Enterprise Edition

    I think I've found the answer to this already, but I just wanted to confirm that I'm getting this right. In SQL Server 2012, is there no way to implement a high-availability solution which doesn't require shared storage or the use of a deprecated feature without having Enterprise Edition?
    At the moment we have databases hosted on a primary server running Windows Server 2012 Standard and SQL Server 2012 Standard.  We have a second, identical server, and the databases are mirrored between the two servers using Database Mirroring.
    This works perfectly and meets all of our requirements.  The only possible problem is that we're looking at storing documents using FILESTREAM, which isn't available in mirrored databases.
    From what I've read, Database Mirroring is deprecated in SQL Server 2012.  AlwaysOn Availability Groups, which sound great, are only available in Enterprise Edition.  This feels like a real kick in the teeth from Microsoft.  We're stuck with either using a deprecated feature (Mirroring), which only postpones the problem until the end of 2012's life cycle, or laying out a significant amount of money (in the tens of thousands of pounds GBP) to upgrade to Enterprise Edition, which we just don't have the budget for.
    Why couldn't Microsoft continue to offer a high availability option for businesses without deep pockets?  Do I have any options I'm not thinking of?  Shared storage is not an option (too costly, single point of failure, geographically separated
    datacentres).

    Thanks for all the feedback.
    I was forgetting that even data stored as FILESTREAM would need to be backed up using SQL backups, so I guess that's an issue either way; thanks for reminding me, Sebastian.
    Geoff, I agree that replication isn't a viable HA solution.  FCI has lots going for it, but from what I can make out (and I am just reading up on this now, so I could be wrong) it requires either shared storage or a third-party tool to move the data from the primary server's local storage to the local storage of the secondaries.  It all seems overly complicated when Mirroring does exactly what I need it to do, merrily writing transactions to the mirrors at the same time as to the primary databases.
    Additionally, whilst you might be right about making the case for Enterprise Edition, it's difficult to make a clear case to non-technical people when we already have a working HA solution and document management.  Trying to explain the nuanced advantages of SQL Server full-text indexing (at present we use a third-party indexing product which stores the index discretely) and of simplified querying when the documents are in the same database as their associated data, as justification for spending tens of thousands of pounds, is a challenge!
    David, that's useful to know, about the disadvantages of using FILESTREAM and how just storing the documents in a VARBINARY(MAX) column might actually give better performance, thank you.
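    As a hedged illustration of the VARBINARY(MAX) suggestion above (table and column names are hypothetical): documents stored this way live inside the database, so they are covered by database mirroring and ordinary SQL backups, unlike FILESTREAM data, which mirroring does not support.

    -- Hypothetical document table; VARBINARY(MAX) content participates
    -- in mirroring and normal backups.
    CREATE TABLE dbo.Documents (
        DocId    INT IDENTITY(1,1) PRIMARY KEY,
        FileName NVARCHAR(260)  NOT NULL,
        Content  VARBINARY(MAX) NOT NULL
    );

    INSERT INTO dbo.Documents (FileName, Content)
    VALUES (N'contract.pdf', 0x255044462D312E34);  -- sample leading bytes of a PDF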

  • Many VLFs in high availability database

    Hi,
    I have a database in high-availability mode with a log file of 16GB. Running DBCC SQLPERF (LOGSPACE) reveals that only 0.03% of the file is used, so I'd like to shrink the file. I performed full and transaction log backups and tried to shrink, but nothing happens.
    I executed DBCC OPENTRAN, but no transaction is open on the DB. Executing SELECT name,log_reuse_wait_desc FROM sys.databases; returns "NOTHING". But if I run DBCC LOGINFO I see 320 virtual log files, with about 200 marked with STATUS 2 (not reusable).
    Looking at the AlwaysOn availability dashboard shows that replication is fine.
    Does somebody know why these VLFs are marked as such?
    Thanks

    with about 200 being marked with STATUS 2 (not reusable).
    No need to worry about this; you can read the link below:
    http://blog.moserit.com/virtual-log-file-monitoring-with-dbcc-loginfo-in-alwayson
    Though we mark the log records as available for cleanup, the actual process of cleaning up is deferred.  Since the newly available space is tracked but the VLFs themselves are not yet marked as inactive, it is not reported as such by DBCC LOGINFO directly.  Other commands, such as DBCC SQLPERF('LOGSPACE'), accurately report the free space, since they include the VLFs marked for deferred cleanup when accounting for space.
    There is, unfortunately, no equivalent of DBCC LOGINFO currently that can track VLFs marked for deferred cleanup.
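    For reference, a minimal T-SQL sequence reproducing the checks discussed in this thread; the log file name is hypothetical and the shrink target is in MB:

    -- How full is the log, and what is blocking reuse?
    DBCC SQLPERF(LOGSPACE);
    SELECT name, log_reuse_wait_desc FROM sys.databases;

    -- Per-VLF view: STATUS 2 marks VLFs not yet reusable, including
    -- those awaiting the deferred cleanup described above.
    DBCC LOGINFO;

    -- Attempt the shrink once the VLFs at the end of the file go inactive.
    DBCC SHRINKFILE (N'MyDb_log', 1024);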

  • JSPM in high availability

    Dear Friends,
    I have installed PI 7.1 in high availability using a Microsoft Windows MNS-based cluster. It started at initial patch level 04 of NetWeaver 7.10.
    I have installed the SCS, ASCS, Enqueue Replication Services etc. in cluster mode. I have installed the primary application server on Node 2 and an additional application server on Node 1.
    Now I have installed SP07 using JSPM from my primary application server, i.e. Node 1. It went successfully and my Support Package level was upgraded to SP07. But now when I open the Java stack on Node 1, it shows only the home page using the URL http://10.6.4.178:50000
    After that, if I open any link, e.g. UME or NWA, it does not open and throws the following error:
    Service cannot be reached
    What has happened?
    URL http://10.6.4.178:50000/wsnavigator call was terminated because the corresponding service is not available.
    Note
    The termination occurred in system PIP with error code 404 and for the reason Not found.
    The selected virtual host was 0 .
    What can I do?
    Please select a valid URL.
    If it is a valid URL, check whether service /wsnavigator is active in transaction SICF.
    If you do not yet have a user ID, contact your system administrator.
    ErrorCode:ICF-NF-http-c:000-u:SAPSYS-l:E-i:sappicl1_PIP_00-v:0-s:404-r:Notfound
    HTTP 404 - Not found
    Your SAP Internet Communication Framework Team
    Java is started properly on both nodes without any error, and it is working fine on Node 2, i.e. 10.6.4.179.
    Please suggest! Do we need to do some special step for installing patches through JSPM in a high-availability setup?
    Jitendra Tayal

    Hi Jitendra,
    While doing a patch upgrade with JSPM in an HA scenario, you need to switch off the automatic switchover option, because once the components are deployed, the J2EE engine gets restarted. If automatic failover is active, the service gets switched over to the other node.
    In this case, shut down all the instances and stop the cluster.
    Restart the systems. Start the cluster on the original servers.
    Restart JSPM.
    It should work fine then.
    Let me know if it is helpful.
    Best Regards
    Raghu

  • UCCX stand alone to high availability

    I have a production UCCX single-server deployment on 7.0(1). I am going to upgrade this to high availability and just needed to check the procedure for upgrading the existing production server to the first node in HA. My 2nd server is already built, with UCCX and SQL 2000, so it is ready to go. Upon reading the upgrade guide, it seems all I need to do is upgrade SQL and then apply the HA license (it is sort of vague, I feel, for something this crucial). So you do not need to re-run the install to define the first node?
    Any opinion (or link to a more detailed document) will be appreciated.

    I know this is an old post, but for the sake of the community the answer is yes. If it was set up as a
    stand-alone node to begin with, it can be changed to first node by using the cet.bat tool to change the startup behavior of the appadmin page, as described here:
    http://www.cisco.com/en/US/products/sw/custcosw/ps1846/products_tech_note09186a00805a7acc.shtml
    1. On the CRS server, go to C:\Program Files\wfavvid\, and double-click the cet.bat file.
    2. Click No when the warning appears.
    3. Right-click the AppAdminSetupConfig object in the left pane, and select the option Create.
    4. Click OK.
    5. In the new window, click the com.cisco.crs.cluster.config.AppAdminSetupConfig tab.
    6. Choose Fresh Install from the drop-down list in order to change the value for Setup State.
    7. Click OK.
    8. After you create the AppAdminSetupConfig object, log in with the user name  Administrator and password ciscocisco, and then run the setup again.
    This will allow you to change the node from stand-alone to first node by re-running the initial UCCX setup wizard.
    Don Mynes

  • Timesten high availability question

    I have a case presented here and wanted to know if it is actually possible to implement.
    Let us consider four nodes with TimesTen (11.2) installed on all of them. A datastore with the same name is created on each of the four servers. Two of these servers are in location A and the other two in location B. The servers in location A have replication defined between them, and similarly the servers in B have replication defined between them. But note that there is no replication defined between any server in location A and any server in location B.
    The basic idea of this entire setup is to maintain high availability of TimesTen at any point in time (in case of natural disasters, etc.), irrespective of the location of the servers.
    Now, we have Oracle software installed on four other systems. Two of these servers are in location A and the other two are in location B. Note that they are not installed on the same boxes as TimesTen.
    Scenario 1:
    Question: TimesTen in location A goes down; how is high availability taken care of?
    Answer: TimesTen on the other server in location A should come up, and because of the replication process, this solves the problem.
    Is this correct? I think it is.
    Scenario 2:
    Question: TimesTen installed on both the nodes at location A goes down; how is high availability taken care of?
    Answer: ?
    Please remember from above that TimesTen does not have a replication policy defined between any server in location A and any server in location B. The requirement says that we should be able to recover all the latest data that the nodes at location A had by pulling it from the Oracle DB at location A and putting it into the TimesTen server in location B. I would like to know if it is possible to do this.

    Hello,
    Your approach is correct in designing a disaster recovery architecture for TimesTen and the Oracle Database. TimesTen supports an active-standby pair topology that integrates well with the Oracle database within a particular site. However, as for any geographically based replication, it is recommended to configure replication across the WAN using the Oracle Database GoldenGate or Streams technologies in ASYNC mode for better throughput and efficiency. It is also recommended to compress replication traffic across the WAN between the Oracle databases.
    So while using the Oracle Database to replicate transactions across the WAN is the right thing to do (using Streams replication or GoldenGate between the two Oracle databases, assuming an Oracle RAC 2-node cluster in each site), you will not be able to guarantee that every transaction in site A has made it to site B. The GoldenGate and Streams technologies have the ability to replicate the data bi-directionally. What this means is that when site A recovers, transactions that had been trapped there (either between TimesTen and the Oracle DB or in the Oracle DB transaction logs) will attempt to replicate again to site B, so it is important to set up a conflict detection/resolution approach, which is possible to do in either GoldenGate or Streams.
    Note that Oracle Data Guard replication is not supported with TimesTen in such a configuration across the WAN where TimesTen datastores need to be maintained in both sites.
    To fully answer the question, however, we should get into the details of the type of cache group tables that you intend to use in TimesTen. If you are using TimesTen as just a read-only cache while all inserts/updates happen in the Oracle database, then the Oracle DB would be regarded as the database of record; it would handle all changes, while data changes get auto-refreshed from the Oracle databases in each site into the respective TimesTen tables.
    If the application will be looking to take advantage of fast writes into TimesTen using AWT (asynchronous writethrough) cache group tables, then it is recommended to configure those tables as DYNAMIC AWT tables, so that if a failover to site B takes place and the data is not in TimesTen (but it is in the Oracle Database), it will be automatically loaded on demand as needed from the Oracle database in site B. Note, however, that there are restrictions on DYNAMIC load-on-demand cache groups that you need to look into to find out whether they would work in your application's case (in particular, load on demand works only if the WHERE clause includes an equality predicate on the primary keys, foreign keys, or unique indexes, etc.).
    To fully answer your question on avoiding data loss across geographies, you'd have to use synchronous replication between TimesTen and Oracle using Synchronous Writethrough cache groups, and SYNC replication in Streams, for example, between the two geographies. Neither of those configurations is used in the field to my knowledge, because they are very far from optimal and carry a huge response time expense, which slows down replication considerably and affects application response times.
    My assumption also is that the need for the Oracle database is because all the data does not fit into memory. If the data does fit into memory, then you could also consider pure TimesTen replication across the two sites, using an active-standby pair in site A and a read-only subscriber in site B that would be made ACTIVE only in the case of a disaster in site A. Once site B takes over, you can also create an active-standby pair in site B based on the newly elected ACTIVE datastore in that site. In all these cases, it is recommended to use SYNC 2-SAFE replication between TimesTen datastores in the same site and ASYNC replication between the two sites.
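    As a hedged sketch of the dynamic AWT recommendation above (schema, table, and column names are hypothetical; cache admin setup and the matching Oracle-side table are assumed to exist already):

    -- Dynamic AWT cache group: writes land in TimesTen and are pushed
    -- asynchronously to Oracle; after a failover to site B, missing rows
    -- are loaded on demand, but only for statements with an equality
    -- predicate on the primary key (or a foreign key / unique index).
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH CACHE GROUP awt_orders
    FROM app.orders (
        order_id  NUMBER NOT NULL,
        cust_id   NUMBER,
        amount    NUMBER,
        PRIMARY KEY (order_id)
    );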
