Database high availability (best solution in this scenario)
Dear All,
We have 2 database servers on 2 different continents. The operating system is Linux and the database is Oracle 11gR1. Connectivity between the servers is over the internet.
There are many possible ways to provide high availability of data to applications connected to both databases. Obviously, users will access data from their nearest server to improve application performance, since the applications connecting to the database will also be hosted near their geographical location.
However, inserts, updates and deletes must happen online, and the changes must be visible to users accessing either database server.
There are several ways to achieve this objective, such as Oracle Streams and Oracle Data Guard, and there are several options available within these main categories.
I want to find out the best possible way to implement this. Please guide and help me with this.
Feel free to ask any questions that help you understand the scenario. The main concern is the performance of the database servers.
Thanks, Imran
As I read the whole thread, the requirement is that you need absolutely current data at both ends; neither side can be far behind the other. The replication solution you are proposing itself requires more bandwidth between the two nodes, and not just more bandwidth but constant traffic in both directions. If you go with Data Guard, the traffic is one-way only; it does not flow back and forth. As for multi-master replication, ask yourself what the data change rate is, so you can estimate the network overhead between the nodes. What would you do if the network between the two nodes goes down? You would be pulling your hair out if any complication arises between the two nodes, such as the same data being updated constantly at both ends.
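To put that "data change rate" question in rough numbers, here is a minimal back-of-the-envelope sketch. The redo figure and overhead factor are illustrative assumptions, not measurements; in practice you would take the redo generation rate from AWR/Statspack and tune the overhead factor from your own tests.

```python
# Rough estimate of the sustained WAN bandwidth needed to ship changes
# continuously, starting from the redo generation rate (illustrative).

def replication_bandwidth_mbps(redo_mb_per_hour, overhead_factor=1.3):
    """Approximate sustained link bandwidth, in megabits per second.

    overhead_factor is a guess covering protocol overhead; tune it
    from your own testing.
    """
    mb_per_sec = redo_mb_per_hour * overhead_factor / 3600.0
    return mb_per_sec * 8  # megabytes/s -> megabits/s

# Example: each master generates 500 MB of redo per hour; multi-master
# replication means this traffic flows in both directions.
per_site = replication_bandwidth_mbps(500)
total_mbps = 2 * per_site
```

Even a rough number like this makes it easier to compare multi-master replication (two-way, constant traffic) against one-way Data Guard redo shipping.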
As you said
First: all the entries are stored in one database, i.e. the primary database, and using some method these records are also transferred to the other database. However, queries and reporting should be done from the local database according to the user's/application's geographical location.
Second: entries are recorded locally and immediately replicated to the other database. The employee table we have uses employee_id as the primary key, and this has to be always unique; there are many other tables like this. Reporting and queries will obviously come from the local database.
Of the above two points, the confusion is with your first point. You cannot ignore current-data availability at both ends because of application integrity constraints (primary keys, foreign keys, etc.); otherwise, alter your application to embed a unique site key in all of its primary and foreign keys. I am not very familiar with replication, but I am sure you cannot get truly real-time data availability at both ends; there is always some delay between them.
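The "unique site key" idea above can be sketched as follows. The numbers and function name are purely illustrative; in Oracle you would typically achieve the same thing with per-site sequences (e.g. different START WITH / INCREMENT BY values) or by concatenating a site code into the key.

```python
# Illustrative sketch of site-prefixed surrogate keys: reserve the top
# digits of every generated key for a site identifier, so rows inserted
# independently at the two sites can never collide on the primary key.

SITE_A, SITE_B = 1, 2  # one value configured per database

def next_employee_id(site_id, local_sequence):
    # top digit(s) = site, remaining digits = local sequence number
    return site_id * 10_000_000 + local_sequence

# Both sites insert their "first" employee without a PK clash:
a = next_employee_id(SITE_A, 1)  # 10000001
b = next_employee_id(SITE_B, 1)  # 20000001
```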
If you ask my opinion, I would go with one centralized location for your scenario, irrespective of the geographical dispersion, because there are already applications on the net that access a centralized location from very far away, e.g. Yahoo's servers, Google's servers and many more. Various dedicated submarine optical-fibre links exist; you would need to contact your regional vendors, such as Wateen or PTCL, for such a fibre link.
Khurram
Similar Messages
-
Zenworks Database - High Availability
Hello,
we just use Zenworks 10 with an Oracle Database. We have two primary Zenworks Servers at two different Locations (Other Town - Linked via VPN).
Is it possible to configure the database for high availability, so that the external primary Zenworks server is still available/manageable after a VPN connection breakdown?
Best regards,
Alex Sommer

This would need to be done with Oracle.
ZCM talks to a single Database.
If you can configure Oracle so that ZCM is unaware that there are
multiple back-end databases and Oracle can somehow figure out how to
resolve changes when the different DBs are not talking, I presume this
would work. This would all need to be handled by Oracle, however.
Normally, All Primaries would be in the Data Center with the Database.
Satellite Servers would be in the remote offices.
If the VPN connection was down, users would authenticate with Cached
Credentials and have their cached bundles/policies.
On 5/4/2011 11:06 AM, alexsommer wrote the message quoted above.
Craig Wilson - MCNE, MCSE, CCNA
Novell Knowledge Partner
Novell does not officially monitor these forums.
Suggestions/Opinions/Statements made by me are solely my own.
These thoughts may not be shared by either Novell or any rational human. -
My 500 GB drive can't be verified or repaired. I have Photoshop work that I need to recover. I would like to know which option would be the best solution for this problem.
You appear to have two issues: 1) a hard drive that is not working properly and 2) files you wish to recover.
Re 1) you need to answer Kappy's questions.
Re 2) does the drive load and can you see your photo files? If so can you copy them to another drive?
Do you not have a backup of the photo files? -
Help Me.
What is the best solution for this problem?

Encore is activated when you activate Premiere Pro... so, as Stan asked, how did you install P-Pro?
Ask for serial number http://forums.adobe.com/thread/1234635 has a FAQ link
-and a fix for Encore http://forums.adobe.com/thread/1421765?tstart=0 in reply #7
-plus more Encore http://helpx.adobe.com/encore/kb/cant-write-image-fie-larger1.html -
I migrated an Aperture 3 upgrade from an old MacBook Pro to a new MacBook Pro. I can't open Aperture on the new machine because I don't have the original serial number for the Aperture 2 program originally installed on the old machine. What is the best solution to this situation?
Call Apple and make an appointment.
You have 3 months of care and up to 3 years with paid AppleCare; let them handle it and bring everything in.
Good Luck -
I have an iPhone 5; after upgrading it to iOS 7, the front camera is working fine but unfortunately the rear camera became blurred. What is the best way to fix this? Looking forward to the best solution to this problem.
WORKAROUND FOUND! Download and install the "Awesome Camera" app and take a picture with that app. After 1-2 seconds of standby, it will work. Then you can go back to the default Camera app, which should work again. Please let me know.
-
What is the best solution to this problem?
I have many solutions in mind right now, but I am looking for the best one if possible. I have the following query:
SELECT one_query.date_required as Month_id,
nvl(one_query.amount_used, 0) as overalluserhours_A,
nvl(second_query.amount_used_b, 0) as overalluserhours_B,
nvl((trunc(((second_query.amount_used_b/one_query.amount_used) * 100), 2)), 0) as p_change
from
(select to_char(b1.needed_date,'YYYY-MM') as date_required,
SUM(b1.amount_used) as amount_used,
b1.type_id as type_id
from table_one b1
where b1.zone_type like 'NEWYORK%'
and b1.type_id = 'CARS'
and trunc(b1.needed_date) between to_date('2009-01-01', 'YYYY-MM-DD') and to_date('2009-12-31', 'YYYY-MM-DD')
group by to_char(b1.needed_date,'YYYY-MM'), b1.type_id) one_query,
(select to_char(b2.needed_date, 'YYYY-MM') as date_required,
SUM(b2.amount_used) as amount_used_b,
b2.type_id as type_id
from table_one b2
where b2.zone_type like
'CHICAGO%'
and b2.type_id = 'BIKES'
and trunc(b2.needed_date) between to_date('2009-01-01', 'YYYY-MM-DD') and to_date('2009-12-31', 'YYYY-MM-DD')
group by to_char(b2.needed_date, 'YYYY-MM'), b2.type_id)second_query
where one_query.date_required = second_query.date_required(+);

The above query runs against table_one. The current problem I am having is that table_one might sometimes contain data only for Chicago and not for New York. In that case, table_one would look like this:
identification_id needed_date zone_type type_id
2 3/22/2006 12:00:00 CHICAGO BIKES
3 2/12/2006 12:00:00 CHICAGO BIKES

However, in other cases it could be the other way around; then table_one would look like this:
identification_id needed_date zone_type type_id
4 4/21/2007 12:00:00 NEWYORK CARS
5 1/12/2007 12:00:00 NEWYORK CARS
And finally, table_one could contain information for both cases; hence, we could have the following situation:
identification_id needed_date zone_type type_id
6 6/21/2008 12:00:00 NEWYORK BIKES
7 8/12/2008 12:00:00 CHICAGO CARS

Kindly note, my query above is currently used inside a function. I know I could write many if statements to handle this, but the main issue is that I also use that query in another query which performs many UNION ALLs.

I'm not sure how you're going to parameterize it, or how those filters change from call to call, but an idea would be something like this:
select date_required month_id,
max(amt_used_chicago_bikes) overalluserhours_A,
max(amt_used_newyork_cars) overalluserhours_B,
(max(amt_used_chicago_bikes) / max(amt_used_newyork_cars)) * 100 p_change
from (select zone_type,
to_char(b1.needed_date, 'YYYY-MM') as date_required,
b1.type_id as type_id,
SUM(case when zone_type like 'CHICAGO%' and type_id = 'BIKES'
then b1.amount_used end) as amt_used_chicago_bikes,
SUM(case when zone_type like 'NEWYORK%' and type_id = 'CARS'
then b1.amount_used end) as amt_used_newyork_cars
from table_one b1
where trunc(b1.needed_date) between
to_date('2009-01-01', 'YYYY-MM-DD') and
to_date('2009-12-31', 'YYYY-MM-DD')
group by b1.zone_type,
to_char(b1.needed_date, 'YYYY-MM'),
b1.type_id)
where amt_used_chicago_bikes is not null or amt_used_newyork_cars is not null
group by date_required;

Again, this is not the biggest concern regarding performance, and certainly not the only way of doing it. Cracking open those 250 lines of SQL and optimizing them would probably be the best way to approach the issue here. -
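One subtlety the conditional-aggregation rewrite above has to handle is a month where one zone has no rows at all (the case the original outer join and NVL calls covered). Here is a minimal Python sketch of the same idea with the division guarded; the sample rows are made up for illustration.

```python
# Made-up sample rows: (month, zone_type, type_id, amount_used)
rows = [
    ("2009-01", "CHICAGO", "BIKES", 40),
    ("2009-01", "NEWYORK", "CARS",  80),
    ("2009-02", "CHICAGO", "BIKES", 25),  # no NEWYORK/CARS row this month
]

def monthly_change(rows):
    # month -> [chicago_bikes_total, newyork_cars_total]
    sums = {}
    for month, zone, type_id, amt in rows:
        bucket = sums.setdefault(month, [0, 0])
        if zone.startswith("CHICAGO") and type_id == "BIKES":
            bucket[0] += amt
        elif zone.startswith("NEWYORK") and type_id == "CARS":
            bucket[1] += amt
    # NVL-style guard: report 0 instead of dividing by zero
    return {m: (chi / ny) * 100 if ny else 0 for m, (chi, ny) in sums.items()}
```

With these rows, monthly_change(rows) gives 50.0 for 2009-01 and 0 for 2009-02, mirroring the NVL defaults in the SQL.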
2 Unity nodes in High availability... is this possible?
Hi,
I have two Unity Express nodes running the same version. Is it possible to have them in high availability, so that if one dies the other takes over?
Thanks,
BR

Hi Brent,
Most certainly possible, my friend.
About a Cisco Unity Connection Cluster
http://www.cisco.com/en/US/docs/voice_ip_comm/connection/7x/cluster_administration/guide/7xcuccag020.html
Task List for Installing a Cisco Unity Connection 7.x System with a Connection Cluster Configured
http://www.cisco.com/en/US/docs/voice_ip_comm/connection/7x/installation/guide/7xcucig010.html#wp1150289
Configuring a Cisco Unity Connection Cluster
http://www.cisco.com/en/US/docs/voice_ip_comm/connection/7x/cluster_administration/guide/7xcuccag005.html
Configuring a Subsequent Node
http://www.cisco.com/en/US/docs/voice_ip_comm/connection/7x/installation/guide/7xcucig020.html#wp461506
Active Active Redundancy - Check out the Power point slides and Video
http://www.ciscounitytools.com/TOI_CUC701.htm
Cheers!
Rob -
Coalesce or compress this index? What is the best solution in this case?
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit

I executed the following query on a specific index that I suspected to be smashed, and got the following result:
select
keys_per_leaf, count(*) blocks
from (
select sys_op_lbid (154813, 'L', jus.rowid) block_id,
count (*) keys_per_leaf
from xxx_table jus
where jus.id is not null
or jus.dat is not null
group by sys_op_lbid (154813, 'L', jus.rowid))
group by keys_per_leaf
order by keys_per_leaf;
keys_per_leaf blocks
1 80
2 1108
3 2816
4 3444
5 3512
6 2891
7 2579
8 2154
9 1943
10 1287
11 1222
12 1011
13 822
14 711
15 544
16 508
17 414
18 455
19 425
20 417
21 338
22 337
23 327
24 288
25 267
26 295
27 281
28 266
29 249
30 255
31 237
32 259
33 257
34 232
35 211
36 209
37 204
38 216
39 189
40 194
41 187
42 200
43 183
44 167
45 186
46 179
47 179
48 179
49 171
50 164
51 174
52 157
53 181
54 192
55 178
56 162
57 155
58 160
59 153
60 151
61 133
62 177
63 156
64 167
65 162
66 171
67 154
68 162
69 163
70 153
71 189
72 166
73 164
74 142
75 177
76 148
77 161
78 164
79 133
80 158
81 176
82 189
83 347
84 369
85 239
86 239
87 224
88 227
89 214
90 190
91 230
92 229
93 377
94 276
95 196
96 218
97 217
98 227
99 230
100 251
101 266
102 298
103 276
104 288
105 638
106 1134
107 1152
229 1
230 1

This is a 5-column unique key index on (id number, dat date, id2 number, dat2 date, type number).
Furthermore, a space analysis of this index using dbms_space.space_usage gives the following picture
Number of blocks with at least 0 to 25% free space = 0 -------> total bytes = 0
Number of blocks with at least 25-50% free space = 75 -------> total bytes = 0,5859375
Number of Blocks with at least 50 to 75% free space = 0 -------> Total Bytes = 0
number of blocks with at least 75 to 100% free space = 0 -------> total bytes = 0
Number of full blocks with no free space = 99848 -------> total bytes = 780,0625
Total blocks ______________________________
99923
Total size MB______________________________
799,384

It seems to me that this index needs to be either coalesced or compressed.
Then, what would be the best option in your opinion?
Thanks in advance
Mohamed Houri
Edited by: Mohamed Houri on 12-janv.-2011 1:18

So let me continue my case.
I first compressed the index as follows
alter index my_index rebuild compress 2;

which immediately presents two new situations:
(a) index space
Number of blocks with at least 0 to 25% free space = 0 -------> total bytes = 0
Number of blocks with at least 25-50% free space = 40 -------> total bytes = 0,3125
Number of Blocks with at least 50 to 75% free space = 0 -------> total Bytes = 0
Number of blocks with at least 75 to 100% free space = 0 -------> total bytes = 0
Number of full blocks with no free space = 32361 -------> total bytes = 252,8203125
Total blocks ______________________________
32401
Total size Mb______________________________
259,208

meaning that the compress command freed up 67487 leaf blocks and reduced the size of the index from 799,384 MB to 259,208 MB.
It also shows a relatively nice picture of the number of keys per leaf block (when compared to the previous situation).
(b) on the number of key per leaf block
KEYS_PER_LEAF BLOCKS
4 1
6 1
13 1
15 1
25 1
62 1
63 1
88 1
97 1
122 1
123 3
124 6
125 4
126 2
289 4489
290 3887
291 3129
292 2273
293 1528
294 913
295 442
296 152
297 50
298 7
299 1

In a second step, I coalesced the index as follows:
alter index my_index coalesce;

which produces the new figures:
Number of blocks with at least 0 to 25% free space = 0 -------> total bytes = 0
Number of blocks with at least 25-50% free space = 298 -------> total bytes = 2,328125
Number of Blocks with at least 50 to 75% free space = 0 -------> Total Bytes = 0
Number of blocks with at least 75 to 100% free space = 0 -------> total bytes = 0
Number of full blocks with no free space = 32375 -------> total bytes = 252,9296875
Total blocks ______________________________
32673
Total size MB______________________________
261,384

meaning that the coalesce command has made:
(a) 298-40 = 258 new blocks with 25-50% of free space
(b) 32375-32361 = 14 new additional blocks which have been made full
(c) The size of the index increased by 2,176MB (261,384-259,208)
while the distribution of keys per leaf block stays much the same:
KEYS_PER_LEAF BLOCKS
4 2
5 3
9 1
10 2
12 1
13 1
19 1
31 1
37 1
61 1
63 1
73 1
85 1
88 1
122 1
123 4
124 4
125 3
126 1
289 4492
290 3887
291 3125
292 2273
293 1525
294 913
295 441
296 152
297 50
298 7
299 1

Could you please throw some light on the difference between compress and coalesce, regarding the effect each has had on:
(a) the number of keys per leaf blocks within my index
(b) the space and size of my index?
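As one way to frame the comparison, the two keys_per_leaf histograms can be reduced to a block-weighted average keys per leaf. The sketch below uses only the first few rows of each output above, so the averages are illustrative rather than exact.

```python
def avg_keys_per_leaf(histogram):
    """histogram: list of (keys_per_leaf, block_count) pairs."""
    total_keys = sum(k * b for k, b in histogram)
    total_blocks = sum(b for _, b in histogram)
    return total_keys / total_blocks

# First rows of the outputs above (truncated, illustrative):
before = [(1, 80), (2, 1108), (3, 2816)]         # original index
after = [(289, 4489), (290, 3887), (291, 3129)]  # after rebuild compress 2

# For these samples: roughly 2.7 keys per leaf before vs roughly 290 after.
```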
Best regards
Mohamed Houri -
RSV 400 best solution for this scenario?
Hi
As shown in the diagram below, I have a central office and two branch offices. These offices are connected by a private routing service that has no connection to the internet. In each office the telecommunications operator installs a router with a LAN and a WAN IP; the configuration of these devices cannot be changed except for the LAN IP. Only the central office network, 192.168.0.0, has a router with internet access. The remote offices have no internet access; what is needed is for the remote offices to reach the internet via the ADSL router 192.168.0.254 at the central office. There are small devices in each remote office that must connect to the internet and support no configuration other than IP, mask and gateway; for example, you cannot add a static route. Currently the PCs at the remote offices have IP communication with the server at the central office via a static route.
Would the solution be to put VPN routers between each LAN and the operator's routers (where the yellow RT star appears in the diagram) and put the hosts of the two branch offices in the same IP range as the central office network?
I had thought of using RSV400 routers. Is this the most appropriate equipment for what we want to do?
Thank you very much for the help.

Originally Posted by kjhurni:
This is just my opinion, of course, but:
If you don't want to have to migrate your NSS data and keep the same server names/IP/s and cluster load scripts, then I believe a Rolling Cluster Upgrade is a good way to go.
If you look in my signature, there's a link to my OES2 guides. Somewhere there is one that I did for our Rolling Cluster Upgrade.
If all you have is NSS and iPrint, then you only need to use the miggui (migration utility) for iPrint--or so I think (I do have to followup on this one as I vaguely recall I asked this a while back and there may have been a way to do this without migrating stuff again).
But your NSS data will simply "re-mount" on the OES11 nodes and you'll be fine (so that's why I like the rolling cluster upgrades).
Let me double-check on the OES2 -> OES11 cluster option with iPrint.
--Kevin
Thank you, Kevin, for your answer.
Finally, I think I'm going to proceed using Transfer ID on the servers where I'm only using NSS over NCS (I only have two machines with one NSS volume), because it seems like a good option. I would like to keep the old IPs of all the servers, the cluster and the resources if possible. Testing this migration in my test environment, it seems to work fine:
- I use miggui for Transfer ID between all the machines: physical -> physical and virtual -> virtual. eDirectory assumes the new hostname, IP, etc. The only task "out of the box" is that I have to delete the cluster and regenerate it (reconfigure NCS on the new servers), but it's pretty easy. This way I keep the two old IPs from the older machines and all the IPs of the cluster and the cluster resources. I think that it's the best plan.
For the other two machines that have 4 NSS volumes and iPrint I must think about a plan, but with these, I'm going to proceed this way. I hope I have chosen a good plan!
Thank you so much, everyone, for your answers and advice.
Is an array the best solution for this problem?
Hi there,
I'm working up a demo where a couple of little games would show up in a panel. There is a main menu that you bounce around to select things (the games as well as other apps.)
When a single game is running, it takes up the whole panel. When two or more are running, they run in a "mini" format. Also, when a game is running, a special "return to game" button appears in the main menu.
This is a click through dog and pony show demo. It's not a production app, but it has to work well enough to be played around with by internal clients. So it has to work well.
Right now I have some variables set up like so:
var gameActive:Boolean = false;
var gameDual:Boolean = false;
In my launch game and menu function, I am checking these (and one or two other) variables to see if I should do things like show a mini version of the game or show the return to game button. As I add features though, this is becoming slightly unwieldy.
The key issue is the games. Let's say I have only two. I could make an array, and then load in the game name when a game is launched. I could check this array in my functions to see not only if games are launched, but which game is launched so I can use the full or mini games as appropriate.
Is this a good approach, or is there a better way? I'm rusty with my coding and not super comfortable making objects right now, but I could go that way if it were best.

There's not much to it. Here are the only 3 things you're likely to need to do with your associative array:
var yourAA:Object = {};

function addAAItem(aa:Object, o:DisplayObject):void {
    aa[o.name] = o;
}

function removeAAItem(aa:Object, o:DisplayObject):void {
    delete aa[o.name]; // use delete so the key no longer counts in aaLength
}

function aaLength(aa:Object):int {
    var i:int = 0;
    for (var s:String in aa) {
        i++;
    }
    return i;
}
-
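For comparison, the same associative-array bookkeeping from the snippet above can be sketched in Python, including the "two or more games run in mini format" rule from the question; the game names and flag layout are illustrative.

```python
# Track running games in a dict keyed by name; recompute the "mini"
# flag whenever a game launches or closes, per the rule that two or
# more concurrent games run in mini format.

active_games = {}

def _refresh_mini():
    mini = len(active_games) > 1
    for state in active_games.values():
        state["mini"] = mini

def launch_game(name):
    active_games[name] = {"mini": False}
    _refresh_mini()

def close_game(name):
    active_games.pop(name, None)
    _refresh_mini()

def game_count():
    return len(active_games)

launch_game("puzzle")      # one game: full-size
launch_game("platformer")  # two games: both switch to mini
```

Checking game_count() and the per-game "mini" flags replaces the growing pile of booleans like gameActive and gameDual.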
Need your help guys... thanks and more power
Apple's servers are being hammered by everyone and his dog trying to get the update the minute it was released. I'd suggest he/she wait a few hours, or perhaps until tomorrow, and then try again.
Regards. -
ASM Instance high availability.
What would be the best practice to keep the ASM instance highly available?
Right now I have 3 DBs running on one ASM instance; I am worried that if this ASM instance has a problem, all 3 DBs will be down.

Have you looked at the Oracle® Database High Availability Best Practices Guide?
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b25159/toc.htm
Other areas to look would be clustering, either using Oracle clustering or clustering from another vendor such as Veritas or PollyServ. -
SQL Server 2012 - What Is the Best Solution for Creating a Read-Only Replicated/AlwaysOn Database?
Hi there, I was wondering if someone has a recommendation for the following requirement regarding setting up a third database server for reporting.
Current Setup
SQL Server 2012 Enterprise setup at two sites (Site A & Site B).
Configured to use AlwaysOn Availability groups for HA and DR.
Installed on Windows 2012 Servers.
This is all working and failover works fine and no issues. So…
Requirement
A third server needs to be added for reporting purposes, located at another site (Site C), possibly in another domain. This server needs to hold a replicated, read-only copy of the live database from Site A or Site B, whichever is in use. The Site C reporting database should be as up to date with the Site A or Site B database as possible, preferably within a few seconds.
Solution - What I believe are available to me
I believe I can use AlwaysOn and create a read-only replica for Site C. If so, do I assume Site C needs to have the Enterprise version of SQL Server, i.e. to match Site A and Site B?
Using log shipping, which, if I am correct, means Site C does not need to be an Enterprise version.
Any help on the best solution for this would be greatly appreciated.
Thanks, Steve

For AlwaysOn, all nodes should be part of one Windows cluster; if site C is in a different domain, I do not think it works.
Log shipping works as long as the SQL Server on site C is the same or a higher version (SQL 2012 or above); the copy can only be read-only.
IMHO, if you can put site C in the same domain, then AlwaysOn is the better solution; otherwise, log shipping.
Also, if your database uses Enterprise-level features such as partitioning or data compression, you cannot restore it on lower editions, so you would need Enterprise edition there too.
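The decision logic in this reply can be summarized as a small sketch; this is a simplification for illustration, not official guidance, and the function name and strings are made up.

```python
def reporting_replica_option(same_domain, uses_enterprise_features):
    """Pick a Site C reporting strategy, per the reasoning above."""
    if same_domain:
        # AlwaysOn readable secondaries require Enterprise edition
        return "AlwaysOn readable secondary (Enterprise edition)"
    if uses_enterprise_features:
        # partitioning/compression block restores onto lower editions
        return "Log shipping to an Enterprise edition secondary"
    return "Log shipping (a lower edition may suffice)"
```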
Hope it Helps!! -
SQL Server Analysis Services (SSAS) 2012 High Availability Solution in Azure VM
I have been testing an AlwaysOn high-availability failover solution in SQL Server Enterprise on an Azure VM, and this works pretty well as a failover for SQL Server databases, but I also need a high-availability solution for SQL Server Analysis Services, and so far I haven't found a way to do this. I can load-balance it between two machines, but this doesn't work as a failover, and because of the restriction against shared storage in a failover cluster in Azure VMs, I can't set it up as a cluster, which is required for AlwaysOn in Analysis Services.
Anyone else found a solution to use an AlwaysOn High Availability for SQL Analysis Services in Azure VM? As my databases are read-only, I would be satisfied with even just a solution that would sync the OLAP databases and switch
the data connection to the same server as the SQL databases.
Thanks!
Bill

Bill,
So, what you need is a model like SQL Server failover cluster instances (as before SQL Server 2012).
In SQL Server 2012, AlwaysOn replaces the SQL Server failover cluster, and it has been separated into
AlwaysOn Failover Cluster Instances (SQL Server) and
AlwaysOn Availability Groups (SQL Server).
Since your requirement is not at the database level, I think the best option is to use AlwaysOn Failover Cluster Instances (SQL Server).
As part of the SQL Server AlwaysOn offering, AlwaysOn Failover Cluster Instances leverages Windows Server Failover Clustering (WSFC) functionality to provide local high availability through redundancy at the server-instance level—a
failover cluster instance (FCI). An FCI is a single instance of SQL Server that is installed across Windows Server Failover Clustering (WSFC) nodes and, possibly, across multiple subnets. On the network, an FCI appears to be an instance of SQL
Server running on a single computer, but the FCI provides failover from one WSFC node to another if the current node becomes unavailable.
It is similar to SQL Server failover cluster in SQL 2008 R2 and before.
Please refer to these references:
Failover Clustering in Analysis Services
Installing a SQL Server 2008 R2 Failover Cluster
Iric Wen
TechNet Community Support