Solaris Cluster Interconnect over Layer 3 Link
Is it possible to connect two Solaris Cluster nodes over a pure layer 3 (TCP/UDP) network connection over a distance of about 10 km?
The problem with having just single-node clusters is that, effectively, any failure on the primary site will require invocation of the DR process rather than just a local fail-over. Remember that Sun Cluster and Sun Cluster Geographic Edition are aimed at different problems: high availability and disaster recovery, respectively. You seem to be trying to combine the two, and this is a bad thing IMHO. (See the Blueprint http://www.sun.com/blueprints/0406/819-5783.html)
I'm assuming that you have some leased bandwidth between these sites and hence the pure layer 3 networking.
What do you actually want to achieve: HA or DR? If it's both, you will probably have to make compromises either in cost or expectations.
Regards,
Tim
---
Similar Messages
-
We run Oracle databases on a Veritas VCS cluster over Layer 3 on non-Sun HW. The primary and secondary (server/SAN) are in different datacenters (1 mile apart) on different subnets. Veritas Storage Foundation writes the data to both storage arrays, and VCS has worked well for us. We are beginning to consider a move to SPARC T4 HW with a ZFS Appliance as storage and Oracle Solaris Cluster 4.0.
We found out that OSC 4.0 for Solaris 11 doesn't support interconnects over Layer 3 and that a campus cluster is the only option for that. We'd like to know what complexity a campus cluster adds compared to a standard cluster. Are the licensing costs much higher? Does OSC have something similar to Storage Foundation to write to both SANs?
Also, how does the Solaris Cluster community compare OSC with Veritas VCS? We really like the VCS HAGUI tool, which is fabulous to use. Does OSC have a similar graphical management tool?
These questions may sound very basic to some, but any comments and opinions are appreciated.
Thanks
Edited by: user4397602 on May 16, 2012 2:54 PM
"We found out that OSC 4.0 for Solaris 11 doesn't support Interconnects over Layer 3 and Campus cluster is the only option for that."
I'm not sure I understood that statement. You can run a campus cluster over layer 2; you just have to have one broadcast domain between the sites for the heartbeat networks. Many years ago (at Sun), my colleagues and I wrote some design papers for the Data Center Reference Implementation, which included patterns for LAN and SAN using dark fibre or DWDMs for stretched clusters. We had implementations for both Cisco and Nortel. If you set it up correctly, there shouldn't be any problem using this approach.
"We really like the VCS HAGUI tool that is fabulous to use. Does OSC have a similar Graphical management tool?"
OSC 4.0 does not currently have a GUI.
Tim
--- -
Aggregates, VLANs, jumbo frames and cluster interconnect opinions
Hi All,
I'm reviewing my options for a new cluster configuration and would like the opinions of people with more expertise than myself out there.
What I have in mind as follows:
2 x X4170 servers with 8 x NICs in each.
On each X4170 I was going to configure 2 aggregates with 3 NICs in each aggregate, as follows:
igb0 device in aggr1
igb1 device in aggr1
igb2 device in aggr1
igb3 stand-alone device for iSCSI network
e1000g0 device in aggr2
e1000g1 device in aggr2
e1000g2 device in aggr2
e1000g3 stand-alone device for iSCSI network
Now, on top of these aggregates, I was planning on creating VLAN interfaces which will allow me to connect to our two "public" network segments and to the cluster heartbeat network.
I was then going to configure the VLANs in an IPMP group for failover. I know there are some questions around that configuration, in the sense that IPMP will not detect a failure if a NIC goes offline in the aggregate, but I could monitor that in a different manner.
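For reference, a sketch of that layout in Solaris 11 dladm syntax (an assumption on my part - on Solaris 10 the aggregation and VLAN commands differ and the driver .conf route applies; all VLAN IDs and link names here are hypothetical):

```shell
# Aggregate 1: three igb ports
dladm create-aggr -l igb0 -l igb1 -l igb2 aggr1
# Aggregate 2: three e1000g ports
dladm create-aggr -l e1000g0 -l e1000g1 -l e1000g2 aggr2

# VLANs on top of the aggregates: two "public" segments plus
# the cluster heartbeat (VLAN IDs are made up)
dladm create-vlan -l aggr1 -v 10 pub0
dladm create-vlan -l aggr2 -v 20 pub1
dladm create-vlan -l aggr1 -v 30 hb0

# Stand-alone iSCSI links get jumbo frames; the aggregates and
# VLANs stay at the default 1500 MTU
dladm set-linkprop -p mtu=9000 igb3
dladm set-linkprop -p mtu=9000 e1000g3
```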
At this point, my questions are:
[1] Are VLANs, on top of aggregates, supported within Solaris Cluster? I've not seen anything in the documentation to mention that it is, or is not for that matter. I see that VLANs are supported, including support for cluster interconnects over VLANs.
Now, on the stand-alone interfaces I want to enable jumbo frames, but I've noticed that the igb.conf file has a global setting for all NIC ports, whereas I can enable it for a single NIC port in the e1000g.conf kernel driver file. My questions are as follows:
[2] What is the general feeling about mixing MTU sizes on the same LAN/VLAN? I've seen some comments that this is not a good idea, and some say that it doesn't cause a problem.
[3] If the underlying NICs, igb0-2 (aggr1) for example, have a 9k MTU enabled, I can force the MTU size (1500) for "normal" networks on the VLAN interfaces pointing to my "public" network and the cluster interconnect VLAN. Does anyone have experience of this causing any issues?
Thanks in advance for all comments/suggestions.
For 1) the question is really "Do I need to enable jumbo frames if I don't want to use them (on neither the public nor the private network)?" - the answer is no.
For 2) each cluster needs to have its own separate set of VLANs.
Greets
Thorsten -
IPFC (IP over FC) cluster interconnect
Hello!
Is it possible to create a cluster interconnect with the IPFC (IP over FC) driver (for example, as a reserve channel)?
What problems may arise?
Hi,
technically Sun Cluster works fine with only a single interconnect, but that used to be unsupported. The mandatory requirement for two dedicated interconnects was lifted a couple of months ago, although it is still a best practice and a recommendation to use two independent interconnects.
The possible consequences of only having one NIC port have been mentioned in the previous post.
Regards
Hartmut -
LDOM SUN Cluster Interconnect failure
I am building a test Sun Cluster on Solaris 10 with LDoms 1.3.
In my environment I have a T5120. I have set up two guest OSes with some configuration and installed the Sun Cluster software, but when I executed scinstall, it failed.
Node 2 came up, but node 1 throws the following messages:
Boot device: /virtual-devices@100/channel-devices@200/disk@0:a File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: test1
Configuring devices.
Loading smf(5) service descriptions: 37/37
/usr/cluster/bin/scdidadm: Could not load DID instance list.
/usr/cluster/bin/scdidadm: Cannot open /etc/cluster/ccr/did_instances.
Booting as part of a cluster
NOTICE: CMM: Node test2 (nodeid = 1) with votecount = 1 added.
NOTICE: CMM: Node test1 (nodeid = 2) with votecount = 0 added.
NOTICE: clcomm: Adapter vnet2 constructed
NOTICE: clcomm: Adapter vnet1 constructed
NOTICE: CMM: Node test1: attempting to join cluster.
NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
NOTICE: clcomm: Path test1:vnet1 - test2:vnet1 errors during initiation
NOTICE: clcomm: Path test1:vnet2 - test2:vnet2 errors during initiation
WARNING: Path test1:vnet1 - test2:vnet1 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
WARNING: Path test1:vnet2 - test2:vnet2 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
clcomm: Path test1:vnet2 - test2:vnet2 errors during initiation
Created virtual switches and vnets on the primary domain like this:
ldm add-vsw mode=sc cluster-vsw0 primary
ldm add-vsw mode=sc cluster-vsw1 primary
ldm add-vnet vnet2 cluster-vsw0 test1
ldm add-vnet vnet3 cluster-vsw1 test1
ldm add-vnet vnet2 cluster-vsw0 test2
ldm add-vnet vnet3 cluster-vsw1 test2
Primary domain:
bash-3.00# dladm show-dev
vsw0 link: up speed: 1000 Mbps duplex: full
vsw1 link: up speed: 0 Mbps duplex: unknown
vsw2 link: up speed: 0 Mbps duplex: unknown
e1000g0 link: up speed: 1000 Mbps duplex: full
e1000g1 link: down speed: 0 Mbps duplex: half
e1000g2 link: down speed: 0 Mbps duplex: half
e1000g3 link: up speed: 1000 Mbps duplex: full
bash-3.00# dladm show-link
vsw0 type: non-vlan mtu: 1500 device: vsw0
vsw1 type: non-vlan mtu: 1500 device: vsw1
vsw2 type: non-vlan mtu: 1500 device: vsw2
e1000g0 type: non-vlan mtu: 1500 device: e1000g0
e1000g1 type: non-vlan mtu: 1500 device: e1000g1
e1000g2 type: non-vlan mtu: 1500 device: e1000g2
e1000g3 type: non-vlan mtu: 1500 device: e1000g3
bash-3.00#
Node 1:
-bash-3.00# dladm show-link
vnet0 type: non-vlan mtu: 1500 device: vnet0
vnet1 type: non-vlan mtu: 1500 device: vnet1
vnet2 type: non-vlan mtu: 1500 device: vnet2
-bash-3.00# dladm show-dev
vnet0 link: unknown speed: 0 Mbps duplex: unknown
vnet1 link: unknown speed: 0 Mbps duplex: unknown
vnet2 link: unknown speed: 0 Mbps duplex: unknown
-bash-3.00#
Node 2:
-bash-3.00# dladm show-link
vnet0 type: non-vlan mtu: 1500 device: vnet0
vnet1 type: non-vlan mtu: 1500 device: vnet1
vnet2 type: non-vlan mtu: 1500 device: vnet2
-bash-3.00#
-bash-3.00#
-bash-3.00# dladm show-dev
vnet0 link: unknown speed: 0 Mbps duplex: unknown
vnet1 link: unknown speed: 0 Mbps duplex: unknown
vnet2 link: unknown speed: 0 Mbps duplex: unknown
-bash-3.00#
And this is the configuration I gave while setting up scinstall:
Cluster Transport Adapters and Cables <<<
You must identify the two cluster transport adapters which attach this node to the private cluster interconnect.
For node "test1",
What is the name of the first cluster transport adapter [vnet1]?
Will this be a dedicated cluster transport adapter (yes/no) [yes]?
All transport adapters support the "dlpi" transport type. Ethernet
and Infiniband adapters are supported only with the "dlpi" transport;
however, other adapter types may support other types of transport.
For node "test1",
Is "vnet1" an Ethernet adapter (yes/no) [yes]?
Is "vnet1" an Infiniband adapter (yes/no) [yes]? no
For node "test1",
What is the name of the second cluster transport adapter [vnet3]? vnet2
Will this be a dedicated cluster transport adapter (yes/no) [yes]?
For node "test1",
Name of the switch to which "vnet2" is connected [switch2]?
For node "test1",
Use the default port name for the "vnet2" connection (yes/no) [yes]?
For node "test2",
What is the name of the first cluster transport adapter [vnet1]?
Will this be a dedicated cluster transport adapter (yes/no) [yes]?
For node "test2",
Name of the switch to which "vnet1" is connected [switch1]?
For node "test2",
Use the default port name for the "vnet1" connection (yes/no) [yes]?
For node "test2",
What is the name of the second cluster transport adapter [vnet2]?
Will this be a dedicated cluster transport adapter (yes/no) [yes]?
For node "test2",
Name of the switch to which "vnet2" is connected [switch2]?
For node "test2",
Use the default port name for the "vnet2" connection (yes/no) [yes]?
I have set up the configuration like this:
ldm list -l nodename
Node 1:
NETWORK
NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
vnet1 primary-vsw0@primary 0 network@0 00:14:4f:f9:61:63 1 1500
vnet2 cluster-vsw0@primary 1 network@1 00:14:4f:f8:87:27 1 1500
vnet3 cluster-vsw1@primary 2 network@2 00:14:4f:f8:f0:db 1 1500
ldm list -l nodename
Node 2:
NETWORK
NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
vnet1 primary-vsw0@primary 0 network@0 00:14:4f:f9:a1:68 1 1500
vnet2 cluster-vsw0@primary 1 network@1 00:14:4f:f9:3e:3d 1 1500
vnet3 cluster-vsw1@primary 2 network@2 00:14:4f:fb:03:83 1 1500
ldm list-services
VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
primary-vsw0 primary 00:14:4f:f9:25:5e e1000g0 0 switch@0 1 1 1500 on
cluster-vsw0 primary 00:14:4f:fb:db:cb 1 switch@1 1 1 1500 sc on
cluster-vsw1 primary 00:14:4f:fa:c1:58 2 switch@2 1 1 1500 sc on
ldm list-bindings primary
VSW
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
primary-vsw0 00:14:4f:f9:25:5e e1000g0 0 switch@0 1 1 1500 on
PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
vnet1@gitserver 00:14:4f:f8:c0:5f 1 1500
vnet1@racc2 00:14:4f:f8:2e:37 1 1500
vnet1@test1 00:14:4f:f9:61:63 1 1500
vnet1@test2 00:14:4f:f9:a1:68 1 1500
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
cluster-vsw0 00:14:4f:fb:db:cb 1 switch@1 1 1 1500 sc on
PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
vnet2@test1 00:14:4f:f8:87:27 1 1500
vnet2@test2 00:14:4f:f9:3e:3d 1 1500
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
cluster-vsw1 00:14:4f:fa:c1:58 2 switch@2 1 1 1500 sc on
PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
vnet3@test1 00:14:4f:f8:f0:db 1 1500
vnet3@test2 00:14:4f:fb:03:83 1 1500
Any ideas, team? I believe the cluster interconnect adapters were not successful.
I need any guidance/any clue on how to correct the private interconnect for clustering in the two guest LDoms.
You don't have to stick to the default IPs or subnet. You can change to whatever IPs you need, whatever subnet mask you need, and even change the private names.
You can do all this during install or even after install.
Read the cluster install doc at docs.sun.com -
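One thing worth double-checking in a setup like this: the names given to ldm add-vnet (vnet2/vnet3) are not necessarily the instance names the guest sees (here the guests show vnet0/vnet1/vnet2), so the adapters chosen in scinstall may not be the ones attached to the mode=sc switches. A sketch of verifying the mapping by MAC address (assuming root access; output formats vary by release):

```shell
# Control domain: list each guest's vnets with their MACs and vsw bindings
ldm list -o network test1
ldm list -o network test2

# In the guest (as root, with the interface plumbed), ifconfig shows the
# 'ether' line; compare it against the ldm output to see which instance
# sits on cluster-vsw0 and cluster-vsw1
ifconfig vnet1
ifconfig vnet2
```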
What is the best way to submit a Concurrent Request over a DB Link?
Hi,
We have a requirement to submit a Concurrent Request over a DB Link. What is the best way to do this?
What I've done so far is I've created a function in the EBS instance that executes FND_GLOBAl.APPS_INITIALIZE and submits the Concurrent Request. I then call this function remotely from our NON-EBS database. It seems to work fine but I found out from metalink article id 466800.1 that this is not recommended.
Why are Concurrent Programs Calling FND_GLOBAL.APPS_INITIALIZE Using DBLinks Failing? [ID 466800.1]
https://support.oracle.com/epmos/faces/ui/km/SearchDocDisplay.jspx?_afrLoop=11129815723825&type=DOCUMENT&id=466800.1&displayIndex=1&_afrWindowMode=0&_adf.ctrl-state=17dodl8lyp_108
Can anyone suggest a better approach?
Thanks,
Allen
Please log an SR and ask Oracle Support for any better (alternative) approach. You can mention in the SR that your approach works properly and ask what the implications of using it would be (even though it is not recommended).
Thanks,
Hussein -
Can you cluster Coherence over data centers?
We're currently running two separate Coherence clusters in different data centers. One is prod, the other DR.
Would it be possible to cluster the nodes from each of these to create one cluster spanning both data centers? Then in a failover scenario the data would already be available.
I know Coherence nodes heartbeat to one another to retain cluster membership, and that there is a TTL setting to determine packet life. Would having nodes in different data centers result in heartbeats being missed or TTLs killing packets?
Has anyone had any success with this?
Coherence performance is tied to the latency between nodes. Having one cluster spread over two data centers could harm performance (some timeouts may have to be changed to prevent nodes in data center A from claiming that a node in data center B is out of reach/possibly dead).
When you lose network connectivity between the two data centers (note I'm not saying "if you lose connectivity" - it WILL happen), you're welcomed into the "split-brain world": each half of the grid believes the other is dead and claims to be the master. And thus, if you have data replicated on N nodes, the masters/backups are redistributed all over each data center, harming performance for a few minutes (the timing depending, of course, on many parameters...). And of course the data will no longer be synchronized between the two data centers. The quorum has to be thought of, and stuff like that...
I might be wrong, but AFAIK I'd rather have two separate clusters. I believe 12.1 has new features to replicate data from the master grid to the DR one; I have not been through all the new documentation. -
Insert, update and delete trigger over multiple Database Links
Hello guys,
first of all I'll explain my environment.
I've got a master DB and n slave databases. Insert, update and delete are only possible on the master DB (in my opinion this was the best way to avoid data inconsistencies due to locking problems) and should be passed to the slave databases with a trigger. All slave databases are attached via DB links. In addition, I'd like to create a job that merges the master DB into all slave DBs every x minutes to restore consistency if any error (e.g. a network crash) occurs.
What I want to do now, is to iterate over all DB-Links in my trigger, and issue the insert/update/delete for all attached databases.
This is possible with the command "execute immediate", but requires me to create textual strings with textually coded field values for the above mentioned commands.
What I would like to know now, is, if there are any better ways to provide these functions. Important to me is, that all DB-Links are read dynamically from a table and that I don't have to do unnecessary string generations, and maybe affect the performance.
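A minimal sketch of the dynamic approach described above, assuming a hypothetical table DB_LINKS(LINK_NAME) holding the link names. It still concatenates the link name (identifiers cannot be bound), but it binds the values, which avoids generating textual field literals:

```sql
CREATE OR REPLACE PROCEDURE propagate_insert (
  p_id   IN NUMBER,
  p_name IN VARCHAR2
) IS
BEGIN
  -- DB_LINKS is a hypothetical table listing all slave links
  FOR r IN (SELECT link_name FROM db_links) LOOP
    -- Only the link name is concatenated; the values are bound
    EXECUTE IMMEDIATE
      'insert into t1@' || r.link_name || ' (id, name) values (:1, :2)'
      USING p_id, p_name;
  END LOOP;
END;
/
```

Bind variables remove most of the string-generation overhead, though this is only a sketch of the mechanics, not a recommendation for trigger-based multi-master replication.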
I'm thankful for every Idea.
Thank you in advance,
best regards
Christoph
Well, I've been using MySQL for a long time, yes, but I thought this approach would be best for my requirements.
Materialized views don't work for me, because I need real-time updates of the slaves.
So, sorry for asking so generally, but what would be the best technology for the following problem:
I've got n globally spread systems. Each of them can update records in the database. The easiest way would be to provide one central DB, but that doesn't work for me, because when the WAN connection fails, the system isn't available any longer. So I need to provide core information locally at every system (connected via LAN).
Very important to me is, that Data remain consistent. That means, that it must not be that 2 systems update the same record on 2 different databases at the same time.
I hope you understand what I'd need.
Thank you very much for all your replies.
best regards
Christoph
PS: I forgot to mention that the databases won't be very large, just about 20k records and about 10 queries per second during peak times, and there's just the need to sync one table.
Edited by: 907142 on 10.01.2012 23:14 -
End-to-end layer-2 link with CPE administration
Dears
I am working on a scenario to monitor a CPE in a layer-2 setup. The CPE is connected to the local PE across a last mile with a single VLAN from the provider. The customer has purchased a layer-2 end-to-end connection from the local CPE to the remote CPE. Within the MPLS core, I have configured an x-connect between the local PE and the remote PE to set up the layer-2 link. Within the CPEs, I am bridging both interfaces of the router to hand over the end-to-end layer-2 link to the customer. I also need to manage and monitor the CPE. What I am thinking of doing is this: I have two PE routers within the local POP. On the primary PE router, I will extend the last-mile VLAN from the switch and configure an x-connect to the remote PE. On the backup PE router, I will extend the same last-mile VLAN from the switch and configure an IP address on the PE's VRF-enabled interface to be imported into the management network. On the CPE, the interface with the last-mile connection is concurrently configured with both bridge and IP configuration.
I need to know if is this a standard setup of management for this type of solution and what could be the possible technical limitations/complications within this overall solution keeping in mind that it is a layer-2 end-to-end connection and what impact it can have on my core network.
Regards
Hi All,
Can someone help me in this.
Regards -
How to test issue with accessing tables over a DB link?
Hey all,
Using 3.1.2 on XE, I have a little app. The database schema for this app only contains views to the actual tables, which happen to reside over a database link in a 10.1.0.5.0 DB.
I ran across an issue where a filter I made on a form refused to work (see [this ApEx thread| http://forums.oracle.com/forums/message.jspa?messageID=3178959] ). I verified that the issue only happens when the view points to a table across a DB link by recreating the table in the local DB and pointing the view to it. When I do this, the filter works fine. When I change the view back to use the remote table, it fails. And it only fails in the filter -- every other report and every other tool accessing the remote table via the view works fine.
Anyone know how I can troubleshoot this? For kicks, I also tried using a 10.2.0.3.0 DB for the remote link, but with the same results.
TIA,
Rich
Edited by: socpres on Mar 2, 2009 3:44 PM
Accidental save...
ittichai wrote:
Rich,
I searched metalink for your issue. This may be a bug in 3.1 which will be fixed in 4.0. Please see Doc ID 740581.1 Database Link Tables Do NoT Show Up In Table Drop Down List In Apex. There is a workaround mentioned in the document.
I'm not sure why I never thought of searching MetaLink, but thanks for the pointer! It doesn't match my circumstances, however. The Bug smells like a view not being queried in the APEX development tool itself -- i.e. the IDE's coding needs changing, not necessarily those apps created with the IDE.
I'm working on getting you access to my hosted app...
Thanks,
Rich -
TE over ospf virtual-link does not work
Hi Experts,
I want to practice TE over an OSPF virtual link. The topology is like this:
R1 R2
| |
R3---R4
| |
R5 R6
All links are in area 0 except the link between R3 and R4.
rt5_1#ro
router ospf 1
router-id 1.1.1.1
log-adjacency-changes
network 0.0.0.0 255.255.255.255 area 0
mpls traffic-eng router-id Loopback0
mpls traffic-eng area 0
interface Tunnel16
ip unnumbered Loopback0
tunnel destination 6.6.6.6
tunnel mode mpls traffic-eng
tunnel mpls traffic-eng autoroute announce
tunnel mpls traffic-eng priority 5 5
tunnel mpls traffic-eng bandwidth 5
tunnel mpls traffic-eng path-option 10 dynamic
rt5_2#ro
router ospf 1
router-id 2.2.2.2
log-adjacency-changes
network 0.0.0.0 255.255.255.255 area 0
mpls traffic-eng router-id Loopback0
mpls traffic-eng area 0
rt5_3#ro
router ospf 1
router-id 3.3.3.3
log-adjacency-changes
area 1 virtual-link 4.4.4.4
network 3.3.3.3 0.0.0.0 area 0
network 10.10.13.0 0.0.0.255 area 0
network 10.10.34.0 0.0.0.255 area 1
network 10.10.35.0 0.0.0.255 area 0
mpls traffic-eng router-id Loopback0
mpls traffic-eng area 0
mpls traffic-eng interface
Ethernet1/0/1 area 0
rt5_4#ro
router ospf 1
router-id 4.4.4.4
log-adjacency-changes
area 1 virtual-link 3.3.3.3
network 4.4.4.4 0.0.0.0 area 0
network 10.10.24.0 0.0.0.255 area 0
network 10.10.34.0 0.0.0.255 area 1
network 10.10.46.0 0.0.0.255 area 0
mpls traffic-eng router-id Loopback0
mpls traffic-eng area 0
mpls traffic-eng interface Ethernet1/1 area 0
rt5_5#ro
router ospf 1
router-id 5.5.5.5
log-adjacency-changes
network 5.5.5.5 0.0.0.0 area 0
network 10.10.35.0 0.0.0.255 area 0
mpls traffic-eng router-id Loopback0
mpls traffic-eng area 0
rt5_7#ro
router ospf 1
router-id 6.6.6.6
log-adjacency-changes
network 6.6.6.6 0.0.0.0 area 0
network 10.10.46.0 0.0.0.255 area 0
mpls traffic-eng router-id Loopback0
mpls traffic-eng area 0
Tunnel 16 on R1, with destination R6 (6.6.6.6), does not come up.
rt5_1#sh mpls traffic-eng tunnels t16
Name: rt5_1_t16 (Tunnel16) Destination: 6.6.6.6
Status:
Admin: up Oper: down Path: not valid Signalling: Down
path option 10, type dynamic
Config Parameters:
Bandwidth: 5 kbps (Global) Priority: 5 5 Affinity: 0x0/0xFFFF
Metric Type: TE (default)
AutoRoute: enabled LockDown: disabled Loadshare: 5 bw-based
auto-bw: disabled
Shortest Unconstrained Path Info:
Path Weight: UNKNOWN
Explicit Route: UNKNOWN
History:
Tunnel:
Time since created: 1 hours, 7 minutes
Path Option 10:
Last Error: PCALC:: No path to destination, 6.6.6.6
Does anyone know the cause of the problem?
thanks,
Hi,
There is one virtual link between R3 and R4. Note in the TE topology below that the 10.10.34.0/24 interfaces (the area 1 link underlying the virtual link) show "IGP metric:invalid" on both R3 and R4, which is why PCALC finds no path to 6.6.6.6.
rt5_1#sh mpls traffic-eng topology
IGP Id: 3.3.3.3, MPLS TE Id:3.3.3.3 Router Node (ospf 1 area 0)
link[0]: Broadcast, DR: 10.10.35.5, nbr_node_id:24, gen:224
frag_id 1, Intf Address:10.10.35.3
TE metric:10, IGP metric:10, attribute flags:0x0
link[1]: Broadcast, DR: 10.10.13.1, nbr_node_id:25, gen:224
frag_id 0, Intf Address:10.10.13.3
TE metric:10, IGP metric:10, attribute flags:0x0
link[2]: Broadcast, DR: 10.10.34.3, nbr_node_id:-1, gen:224
frag_id 3, Intf Address:10.10.34.3
TE metric:10, IGP metric:invalid, attribute flags:0x0
sh mpls traffic-eng topology brief
IGP Id: 3.3.3.3, MPLS TE Id:3.3.3.3 Router Node (ospf 1 area 0)
link[0]: Broadcast, DR: 10.10.35.5, nbr_node_id:31, gen:106
frag_id 1, Intf Address:10.10.35.3
TE metric:10, IGP metric:10, attribute flags:0x0
link[1]: Broadcast, DR: 10.10.13.1, nbr_node_id:32, gen:106
frag_id 0, Intf Address:10.10.13.3
TE metric:10, IGP metric:10, attribute flags:0x0
link[2]: Broadcast, DR: 10.10.34.3, nbr_node_id:-1, gen:106
frag_id 3, Intf Address:10.10.34.3
TE metric:10, IGP metric:invalid, attribute flags:0x0
IGP Id: 4.4.4.4, MPLS TE Id:4.4.4.4 Router Node (ospf 1 area 0)
link[0]: Broadcast, DR: 10.10.46.6, nbr_node_id:34, gen:104
frag_id 0, Intf Address:10.10.46.4
TE metric:10, IGP metric:invalid, attribute flags:0x0
link[1]: Broadcast, DR: 10.10.24.2, nbr_node_id:19, gen:104
frag_id 1, Intf Address:10.10.24.4
TE metric:10, IGP metric:10, attribute flags:0x0
link[2]: Broadcast, DR: 10.10.34.3, nbr_node_id:-1, gen:104
frag_id 2, Intf Address:10.10.34.4
TE metric:10, IGP metric:invalid, attribute flags:0x0
IGP Id: 5.5.5.5, MPLS TE Id:5.5.5.5 Router Node (ospf 1 area 0)
link[0]: Broadcast, DR: 10.10.35.5, nbr_node_id:31, gen:94
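For what it's worth, one way to avoid the virtual link entirely (a sketch of an alternative design, not taken from the output above) is to move the R3-R4 link into area 0 so the TE-enabled area is contiguous, e.g. on R3:

```
router ospf 1
 no area 1 virtual-link 4.4.4.4
 no network 10.10.34.0 0.0.0.255 area 1
 network 10.10.34.0 0.0.0.255 area 0
```

with the mirror-image change on R4. Whether that is acceptable depends, of course, on why area 1 exists in the design.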
thanks,
guoming -
Wb_rt_api_exec over a database link
When I call wb_rt_api_exec over a database link I receive:
ORA-20001: Task not found - Please check the Task Type, Name and Location are correct.
ORA-06512: at "OWB_10_2_0_1_31.WB_RT_API_EXEC", line 704.
When I run the command on the other database it works.
Is there a way to do this?
Thanks,
Joe
We had the same error when starting a package with WB_RT_API_EXEC.RUN_TASK from JBoss middleware. It ran fine when started directly from SQL Developer, but threw the error when started from JBoss. Adding the pragma solved the problem; we also had to put a commit; at the end of the procedure.
-
EEM - automatic shutdown or switchover of a WAN link in OSPF when packet drops increase
Hi,
Need help..
Can anyone tell me how EEM can help to automatically shut down or switch over a WAN link in OSPF when packet drops exceed a predefined level?
I have a setup of different branches connected together... OSPF is the routing protocol, and they need to communicate with two branches via hub locations.
I need to shut the link, or switch some percentage of the traffic from primary to backup, when packets drop on the link.
I am not sure EEM can do what you want.
Another option could be to use SLA tracking/monitoring. But you will fall back to the new route when you lose some percentage of pings; you can't switch only part of the traffic.
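A rough sketch of that SLA-tracking idea in IOS syntax (the probe target, interface names and timers are all hypothetical): probe a hub-side address across the primary link, track reachability, and let an EEM applet shut the primary interface when the track object goes down:

```
! Probe a hub-side address over the primary WAN link
ip sla 10
 icmp-echo 192.0.2.1 source-interface GigabitEthernet0/0
 frequency 5
ip sla schedule 10 life forever start-time now

! Track object follows the probe's reachability state
track 10 ip sla 10 reachability

! EEM applet shuts the primary interface when reachability is lost
event manager applet WAN-PRIMARY-DOWN
 event track 10 state down
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "interface GigabitEthernet0/0"
 action 4.0 cli command "shutdown"
 action 5.0 syslog msg "Primary WAN shut down by EEM after SLA loss"
```

As noted above, this is all-or-nothing: it moves all traffic to the backup path, not a percentage of it.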
I hope it helps.
PK -
Hi,
We want to set up Solaris Cluster in an LDom environment.
We have:
- Primary domain
- Alternate domain (Service Domain)
So we want to setup the cluster interconnect from primary domain and service domain, like below configuration:
example:
ldm add-vsw net-dev=net3 mode=sc private-vsw1 primary
ldm add-vsw net-dev=net7 mode=sc private-vsw2 alternate
ldm add-vnet private-net1 mode=hybrid private-vsw1 ldg1
ldm add-vnet private-net2 mode=hybrid private-vsw2 ldg1
Is the configuration above supported?
If there is any documentation about this, please point me to it.
Thanks,
Hi rachfebrianto,
yes, the commands look good. The minimum requirement for hybrid I/O is Solaris Cluster 3.2u3, but I guess you're running 3.3 or 4.1 anyway.
The mode=sc setting is a requirement on the vsw for the Solaris Cluster interconnect (private network).
And it is supported to add mode=hybrid to the guest LDom's vnets for the Solaris Cluster interconnect.
There is no special documentation for Solaris Cluster, because it uses what is available in the
Oracle VM Server for SPARC 3.1 Administration Guide:
Using NIU Hybrid I/O
How to Configure a Virtual Switch With an NIU Network Device
How to Enable or Disable Hybrid Mode
Hth,
Juergen -
Calling a Procedure and Function over a db link
I am experiencing some errors with the following and would greatly appreciate the advice of some experts here.
My use-case is to insert some records into a table via a database link. The records to be inserted will be queried from an identical table in the local data dictionary. Everything works, but occasionally I will get a unique constraint violation if I try to insert a duplicate record so I wrote a simple function to check for the scenario. My issue is that I can run my procedure using the db link and I can run my function using the db link, but I can't use them both together without getting errors.
My test case just uses the standard emp table:
create or replace procedure test_insert(p_instance varchar2)
IS
l_sql varchar2(4000);
begin
l_sql := 'insert into EMP@'||p_instance||' (EMPNO, ENAME, JOB, MGR, SAL, DEPTNO) (Select EMPNO, ENAME, JOB, MGR, SAL, DEPTNO from EMP)';
execute immediate l_sql;
END;
BEGIN
test_insert('myLink');
END;
This works fine and the insert occurs without any issues.
If I run the same procedure a second time I get:
ORA-00001: "unique constraint (%s.%s) violated", which is what I expect since EMPNO has a unique constraint. So far so good.
Now I create a function to test whether the record exists:
create or replace function record_exists(p_empno IN NUMBER, p_instance IN varchar2) return number
IS
l_sql varchar2(4000);
l_count number;
BEGIN
l_sql := 'select count(*) from EMP@'||p_instance||' where empno = '||p_empno;
execute immediate l_sql into l_count;
IF
l_count > 0
THEN return 1;
ELSE
return 0;
END IF;
END;
I test this as follows:
select record_exists(8020, 'myLink') from dual;
RECORD_EXISTS(8020,'myLink')
1
That works ok, so now I will add that function to my procedure:
create or replace procedure test_insert(p_instance varchar2)
IS
l_sql varchar2(4000);
begin
l_sql := 'insert into EMP@'||p_instance||' (EMPNO, ENAME, JOB, MGR, SAL, DEPTNO) (Select EMPNO, ENAME, JOB, MGR, SAL, DEPTNO from EMP WHERE record_exists( EMPNO, '''||p_instance||''') = 0)';
execute immediate l_sql;
END;
I test this as follows:
BEGIN
test_insert('myLink');
END;
Result is:
Error report:
ORA-02069: global_names parameter must be set to TRUE for this operation
ORA-06512: at "FUSION.TEST_INSERT", line 6
ORA-06512: at line 2
02069. 00000 - "global_names parameter must be set to TRUE for this operation"
*Cause: A remote mapping of the statement is required but cannot be achieved
because global_names should be set to TRUE for it to be achieved
*Action: Issue alter session set global_names = true if possible
I don't know why I am getting this. The function works, the procedure works, but when I combine them I get an error. If I set the global names parameter to true and then rerun this I get:
02085. 00000 - "database link %s connects to %s"
*Cause: a database link connected to a database with a different name.
The connection is rejected.
*Action: create a database link with the same name as the database it
connects to, or set global_names=false.
Any advice is much appreciated. I don't understand why I can run the procedure and the function each separately over the db link but they don't work together.
thank you,
john
The proper approach depends on how you define failure and what it should mean - error back to the caller, log and continue, or just continue. Constraints are created to ensure invalid data does not get into a table. Generally they provide the most efficient mechanism for checking for invalid data, and they return useful exceptions which the caller can handle. You are also doubling the workload of the uniqueness check by adding your own check on top.
In general I'd say use exceptions; they are your friend.
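Following that advice, a minimal sketch of dropping the record_exists() pre-check and letting the constraint do the work - catch DUP_VAL_ON_INDEX and decide there what a duplicate should mean:

```sql
CREATE OR REPLACE PROCEDURE test_insert (p_instance VARCHAR2) IS
  l_sql VARCHAR2(4000);
BEGIN
  l_sql := 'insert into EMP@' || p_instance ||
           ' (EMPNO, ENAME, JOB, MGR, SAL, DEPTNO)' ||
           ' (select EMPNO, ENAME, JOB, MGR, SAL, DEPTNO from EMP)';
  EXECUTE IMMEDIATE l_sql;
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    -- A duplicate EMPNO exists on the remote side. Log and continue,
    -- or re-raise, depending on what failure should mean here.
    NULL;
END;
/
```

Note that the whole INSERT ... SELECT fails as one statement, so on a duplicate nothing at all is inserted; row-by-row processing (or DML error logging on the remote table) would be needed to skip only the duplicates.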