Mapping to VMware Clusters
Is it possible to map volumes to more than one cluster without possibly corrupting data? We currently have 5 volumes that are mapped to our cluster of 5 ESXi hosts. We are adding 2 more hosts, so we added another cluster for those hosts. I was wondering if I could map those 5 volumes to both clusters, or do I need to create all new volumes for the new cluster?
Thanks for your help!
Hello, jyoung709.
So, basically, mapping a volume to 2 different clusters would be the same as mapping it to 2 hosts that are not in a cluster: it would result in corruption. The beauty of clustering is that there is a 'moderator' (if you will) that manages when a read or write hits a disk. The hosts in the cluster are managed by software that allows them to 'take turns,' so to say, so there is no simultaneous writing. (This is a crude explanation, but it serves the purpose.)
In your scenario, each cluster is unaware of the other and is not managed by said moderator. If there were a 'cluster management system' that coordinated across the clusters? ...sure. But I have my doubts that it's out there.
I hope this helps.
Let me know if you have any questions.
Similar Messages
-
Vmware clusters and L2 Spanning
Server virtualization and cloud computing are the driving forces behind the need to span a network's L2 domain. This is where TRILL comes into play. You get horizontal scale-out in a flattened network, with multiple L2 forwarding paths powered by TRILL.
The question I have is just how far across the data center does the L2 domain really need to span? Let's say this is a VMware environment. The latest version of vSphere allows up to 32 hosts per cluster. Furthermore, it is only within that 32-host cluster that a vMotion, DRS, or FT action can take place. Theoretically, you can configure the VMware cluster to span the entire server farm, end-to-end. But in reality, how are VMware clusters configured? I would think that the hosts in a cluster are largely placed in adjacent cabinets, not across the entire data center. So, do you REALLY need to span the L2 domain across the entire data center server farm?
Hi Ayodeji,
Is this type of deployment supported?
Also, an off-topic question: can I have a top-level domain (example.com) and a sub-domain (staging.example.com) form a peer relationship in IM & Presence servers?
As per Cisco, two sub-domains that are part of the same top-level domain can form a peer relationship.
Thanks. -
Exchange 2013 DAG with VMWare Clustering
Hi team,
I'm looking at a scenario where there are 3 hosts with a SAN for storage, clustered for failover along with many other servers/applications. The requirement is to have a highly available environment with Exchange 2013. Now I want to know the possibility of having 2x CAS (WNLB) and 2x Mailbox servers with a DAG.
1) Want to know if this is a supported scenario, where DAG is implemented in a VMWare cluster.
2) The DAG on the cluster is to ensure that even if no host failure occurs I have VM-level failover, and if a host-level failure occurs I have cluster-based failover.
Any guidance is much appreciated.
Thank you.
Cheers!
Hello,
Combining the VMware HA solution with the Exchange application-aware high-availability solution is not a supported configuration.
Here are some articles for your reference.
http://blogs.technet.com/b/keithmayer/archive/2012/08/28/best-practices-for-virtualizing-microsoft-exchange-2010-msexchange-itpro-hyperv-vmware.aspx#.Uxfd2P7xuM8
http://exchangeserverpro.com/microsoft-vs-vmware-on-exchange-virtualization-and-ha-best-practices/
(Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. Please make sure that you completely understand the risk before retrieving any suggestions from the above link.)
If you have any feedback on our support, please click
here
Cara Chen
TechNet Community Support -
Deployment Guide....anything changed?
There hasn't been a deployment guide published in a while. I'm trying to decide on a layout for a new environment (all virtual so I have a lot of options...)
I've been told by my hardware guy that VMware clusters work best when all virtual machines in them have the same number of CPUs. Anyhow, that basically translates to our portal servers having 1 "CPU" (virtual CPU?) assigned per virtual machine. Is this going to be an issue? Currently our production portals have 2 (virtual) CPUs each. Is this going to be a big performance hit?
this is windows / iis...
thanks!
Hi,
Thanks for your post.
Windows Server 2012 is compatible with Windows 7.
But BranchCache in Windows Server 2012 and Windows 8 provides substantial performance, manageability, scalability, and availability improvements. And there are some group policies that apply only to client computers running Windows 8, not Windows 7.
So I would suggest choosing client computers that run Windows 8.
For more detailed information, please refer to:
http://technet.microsoft.com/en-us/library/jj127252.aspx
Regards.
Vivian Wang -
Hi,
Is there any documentation that outlines best practices when implementing VDI? The install guides are great, but they are just that; install guides... I'm looking for real-life examples, worked-out issues and such; case studies with instructions, if you will. I've done the Google searches, but as with most things like this there are bits and pieces of the puzzle, but none that I've found have start-to-finish examples.
For instance, where would I go to look for documentation on the best way to convert an environment of about 100 or so Windows desktops (all together around 4-6 different templates), so that I have a near-zero IT workload maintaining 4-6 different installs vs. 100 hardware desktops? Do I still need a WSUS server... etc.
My goal is near zero IT at least for the desktops. Has anyone gotten close?
As an example; Creating templates - install guides work fine; but what is the best way to configure a template so that each clone gets registered (assuming a flexible pool is the best way to go) with AD, or does one have to go to each cloned machine and add it to the MS domain?
What issues are there integrating all of the normal MS stuff with the VDI environment?
I'll contribute what I find as I go along, but if there is anyone out there with links to docs, I'm sure I'm not the only one that would appreciate it... thanks
ron
Hey Ron
Regarding the VDI servers, we could probably get by with less... only a few months ago, we had only 3 and we were getting by. Then one day I took one down for maintenance and things got a little dicey for the users... but we were eking by. Then we had some crazy network issue, which I am blaming on our switches, and so we lost one of the remaining two, leaving us with only one for 300+ thin clients. It didn't work out well... basically the system was unusable. So on that day I vowed to have enough servers to have one or two down for maintenance and still have failover capacity. Luckily we were blessed with extra servers, so I doubled the server count, and now I can take one down for maintenance or testing and we still have plenty of room for any random failures, and the users will still be able to work fine. Hard lessons learned.
We are an Oracle hardware shop as well, so all 6 of those servers are some of their older blades... Sun/Oracle x6250 with 2x quad core 2.5ghz procs, and maxed out on ram at 64GB. In the VDI admin/install guide, they do a pretty good job of helping you size your VDI servers for the number of users. Pay attention to the number of users and the number of available cores in your servers. RAM is plentiful and cheap these days, so that's generally not an issue on the core VDI (sun ray) servers. And Oracle's ALP protocol the thin clients use is pretty optimized so network traffic is relatively low compared to other VDI vendors out there.
We are a VMware shop, so instead of vbox virtualization, we are doing VMware. Those are older blades from HP - hp bl460 g1 I believe. They are 2x dual core 2.2ghz and only 32gb of ram each. Our virtual desktops are stored on our SAN. Honestly we probably have plenty of extra space in this arena too... according to what VMware tells me, we could turn off close to half the blades and still be fine. This cluster is dedicated to VDI -- all our other virtual servers for the rest of the business live on other vmware clusters on separate hardware.
We maintain basically two types of desktop images... one, the dynamic desktop I referred to in an earlier post. Users log in, use the machine, and they get reset back to a clean state when they log out. There is nothing persistent and we have preloaded all the apps we know they will use. We have about 400 of this type sitting around available for us. The apps our clinical people use are pretty lightweight, so we get by fine with 512mb of ram and 1vcpu on XP SP3. Our second desktop image is a "static" desktop... basically it's assigned to a user and it remains theirs permanently until we blow it away. These are reserved for special people who use special software that is not preloaded on the dynamic desktop image. The more we try to expand use of VDI, the more we end up handing these out... we just have too many types of software and don't want to clutter up our clean little clinical desktop image. That image is XP SP3 again with 1GB of ram and 1vcpu. They also get a bigger 25GB hard drive to give them plenty of space for their special crapplications.
Our biggest bottleneck on the virtual desktop side is SAN I/O. Unfortunately we're forced to use full clones of these desktops rather than "linked cloning" or what you get with vbox and zfs which make much better use of disk space and I/O. I think we currently have most of this squeezed onto about 10 600gb 15krpm fibre channel drives and this is a bare minimum. We recently had an assessment that said we need to probably triple the number of spindles to get the proper I/O. This seems to be a trend in virtualization lately... space is not a problem with modern drives. The problem is that you can squeeze 60 virtual desktops on the space of one hard drive, which is a bad idea when you consider the performance you're going to get. Oh, and the ONLY way we have made this work thus far is by fine tuning our antivirus settings on the virtual desktops to not scan anything coming off the disk (which is clean because it was clean when I built the template). Before we did that, things were crawling and the SAN was doing 3x the I/O.
Again, read the install/admin guide if you haven't yet... I'm pretty sure they give some basic guidelines for storage sizing and performance which we should have read closer early on.
If you have other questions you think you'd like to talk about offline, you can send me an email at my personal address and we'll set something up - dwhitener at gmail dot com. Otherwise, keep the questions coming here and I'll give out whatever info I can. -
Multiple VLAN traffic on one switchport
Good Morning all,
I would like some help with a switchport config on one of my VMware clusters.
Currently the live vDS sits with the below config on a Cisco 4500
switchport trunk encapsulation dot1q
switchport trunk native vlan 8
switchport mode trunk
spanning-tree portfast trunk
spanning-tree bpduguard enable
I require the hosts to be able to communicate on multiple VLANs; the port sits on VLAN 8 but needs to carry traffic for VLANs 8, 200, and 201.
Any help would be greatly appreciated.
Thanks,
Hassan.
Hassan
The switch port that you show us is correctly configured as a trunk. You have not shown us whether these three vlans are correctly configured on the switch and active on the interface. The output of show interface trunk would be helpful in determining this. If the switch appears to be correctly configured then the other part of the question is whether your VMware cluster is correctly configured to use the three vlans on that interface.
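If `show interface trunk` reveals that the VLANs are not in the allowed list on that port, something along these lines would add them. This is only a sketch: the interface name here is hypothetical, and it assumes VLANs 200 and 201 are already defined on the switch.

```
! Hypothetical sketch - allow the required VLANs on the trunk
interface GigabitEthernet1/1
 switchport trunk allowed vlan 8,200,201
!
! Then verify the active VLANs on the trunk:
! show interfaces trunk
```

Restricting the allowed list to just the VLANs the hosts need also keeps unnecessary broadcast traffic off the uplink.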
HTH
Rick -
Good day folks,
I'm currently in the process of putting together an Inmage POC and am having a little trouble setting up a protection plan using Offline Export/Import between two VMware clusters. My current setup is as follows:
Customer Site:
Process Server
Master Target Server
Service Provider Site:
CX-CS Server
Master Target Server/vContinuum Server
The following are the steps I've taken thus far:
Select Offline Sync, OfflineSync Export (within vContinuum)
Enter details of vCenter server in customer's site
Select VM to be protected
Enter details of vCenter server (again in customer's site)
Select Master Target server in customer's site
Select process server in customer's site, set retention information, datastore etc (chosen datastore holds VM to be protected and Master Target server)
Run readiness check...passing successfully
Protection status screen reports all OK and the VM protection status changes to differential sync (as expected)
Power down the Master target server on the customer's site and remove it from the inventory
Copy the Master Target and InMage_Offline_Sync_Folder from the customer's site datastore to a datastore on the VMware cluster in the Service Provider's site
Select Offline Sync, OfflineSync Import (within vContinuum)
Enter details of vCenter server in Service Provider's site
Select the datastore which holds the files copied above
It's at this point that I'm starting to have issues. The import job is failing with the following error:
"DRS cannot find a host to power on or migrate the virtual machine"
I'm sure it's something that I'm doing wrong but I'm at a loss as to what that might be, any help you guys can offer would be much appreciated.
Thanks in advance...Fixx
Hi there ILikeRecovery,
I resolved this issue by removing the NIC from the customer site Master Target server before exporting it. I assume it was failing to import (at least in my environment) because the port group didn't exist on the remote vSphere environment.
Being that you've already carried out the import, you should be able to work around this by browsing to the imported Master Target server (remote site), downloading its .vmx file, and deleting the lines corresponding to the NIC. It should look something like this:
ethernet0.present = "true"
ethernet0.virtualDev = "vmxnet"
ethernet0.networkName = "Virtual Machine Network"
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:00:00:00:00:00"
ethernet0.pciSlotNumber = "00"
Then upload the edited .vmx to the same location, overwriting the original. You should now be able to power up the imported Master Target server, add a NIC, and configure it.
I've repeated this process several times now and confirm it works perfectly :)
Good luck -
I want to use losetup to map a disk image file to a loopback device (dev/loop0) during boot.
I would also want to mount this device as my home partition.
The disk image is not located on the root partition, but on another separate partition.
I can do this manually by using following commands:
losetup --find --show /mnt/data/home-flat.vmdk   # attaches the image, prints the device (e.g. /dev/loop0)
losetup -o 32256 /dev/loop1 /dev/loop0           # maps the partition inside it at byte offset 32256
mount /dev/loop1 /home
The reason I need this, is that I have a dualboot setup (windows,archlinux) but also want access to an arch system via a virtual machine from within windows.
So I basically have two arch-installations, which share a home partition(the disk image above).
I cannot just map a real partition for the vm, because my disk has a gpt partition map and vmware currently does not support that.
I've looked at creating a custom hook for this, but I am at a loss, especially because the disk image is itself located on a (ntfs)partition,
which itself needs to be mounted before executing the losetup commands.
Any ideas?
Sorry for the delayed response. I've been away from the computer all weekend.
hungerfish wrote:
Thanks for the help.
I'm still using the init scripts, but will be migrating soon, so now I know where to look come the time
Your solution however doesn't quite work, as I had to add
mount /dev/loop1 /home/
to /etc/rc.local instead of /etc/fstab , because (I assume) losetup hadn't finished by the time fstab gets parsed.
This isn't ideal, but it certainly works, so again thank you for your post!
Hmmm. The "sysinit_premount" parameter to add_hook should have made this happen before
/etc/fstab is even parsed.
Can you show your /etc/fstab entry?
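As an aside, mount(8) can attach the loop device and apply the offset itself via the `loop` and `offset=` options, which would collapse the three manual commands into one fstab-style entry. This is only a hypothetical sketch reusing the paths from the original commands, and it still requires the NTFS partition holding the image to be mounted first:

```
# Sketch only: mount(8) sets up the loop device itself.
# /mnt/data (the NTFS partition) must already be mounted at this point.
/mnt/data/home-flat.vmdk  /home  auto  loop,offset=32256  0  0
```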
Ok, I found that hitting scroll-lock pauses during boot/shutdown, so I was able to read:
loop: Write error at byte offset xxx, length 4096
Buffer I/O error on device loop0
Buffer I/O error on device loop1
JBD2 Error -5 detected when updating journal superblock
This pattern repeats a few times, always at different 'offsets'.
Maybe it's because the file is on an NTFS partition (I assume ntfs-3g?). Perhaps shutdown killed off the fuse module that kept the NTFS volume mounted before unmounting /home or removing the loop devices. Just a guess on that one.
I have used the technique I described before to pre-set up loop devices for old loop-AES volumes, and it worked for me.
EDIT3:
So I just had a really bad crash (running as vm), after which I needed to manually fsck and repair the filesystem on the virtual disk. So I guess I'm missing something... sad
Is it possible you suspended the VM rather than shutdown before mounting on linux? That might cause this. -
Coherence Deserialization issue
Hi,
I am facing an issue whilst deserializing an attribute from the Coherence session. Please find the stack trace below:
Caused by: java.io.InvalidClassException: weblogic.servlet.internal.session.CoherenceWebSessionData; weblogic.servlet.internal.session.CoherenceWebSessionData; no valid constructor
at java.io.ObjectStreamClass.checkDeserialize(ObjectStreamClass.java:713)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1732)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
at java.util.HashMap.readObject(HashMap.java:1220)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:974)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1848)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2217)
at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2348)
at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2746)
at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:262)
... 52 more
Caused by: java.io.InvalidClassException: weblogic.servlet.internal.session.CoherenceWebSessionData; no valid constructor
at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:471)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:310)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1114)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1518)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1483)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1400)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1158)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1518)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1483)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1400)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1158)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:330)
at java.util.HashMap.writeObject(HashMap.java:1188)
at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1469)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1400)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1158)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:330)
at com.tangosol.util.ExternalizableHelper.writeSerializable(ExternalizableHelper.java:2253)
at com.tangosol.util.ExternalizableHelper.writeObjectInternal(ExternalizableHelper.java:2671)
at com.tangosol.util.ExternalizableHelper.serializeInternal(ExternalizableHelper.java:2601)
at com.tangosol.util.ExternalizableHelper.toBinary(ExternalizableHelper.java:211)
at com.tangosol.coherence.servlet.OptimizedHolder.serializeValue(OptimizedHolder.java:216)
at com.tangosol.coherence.servlet.OptimizedHolder.getBinary(OptimizedHolder.java:120)
at com.tangosol.coherence.servlet.OptimizedHolder.prepareWrite(OptimizedHolder.java:247)
at com.tangosol.coherence.servlet.SplittableHolder.prepareWrite(SplittableHolder.java:310)
at com.tangosol.coherence.servlet.AttributeHolder.flush(AttributeHolder.java:242)
at com.tangosol.coherence.servlet.SplittableHolder.flush(SplittableHolder.java:108)
at com.tangosol.coherence.servlet.AbstractHttpSessionModel.flush(AbstractHttpSessionModel.java:1620)
at com.tangosol.coherence.servlet.SplitHttpSessionModel.flush(SplitHttpSessionModel.java:104)
at com.tangosol.coherence.servlet.AbstractHttpSessionCollection.exit(AbstractHttpSessionCollection.java:733)
at com.tangosol.coherence.servlet.AbstractHttpSessionCollection.exit(AbstractHttpSessionCollection.java:696)
at weblogic.servlet.internal.session.CoherenceWebSessionContextImpl.exitSession(CoherenceWebSessionContextImpl.java:498)
at weblogic.servlet.internal.session.CoherenceWebSessionContextImpl.sync(CoherenceWebSessionContextImpl.java:530)
at weblogic.servlet.internal.ServletRequestImpl$SessionHelper.syncSession(ServletRequestImpl.java:2860)
at weblogic.servlet.internal.ServletRequestImpl$SessionHelper.syncSession(ServletRequestImpl.java:2835)
at weblogic.servlet.internal.ServletResponseImpl$1.run(ServletResponseImpl.java:1485)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
at weblogic.servlet.internal.ServletResponseImpl.send(ServletResponseImpl.java:1479)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1462)
I have been looking at the code to find the parent-child classes that may be written to the session, and any non-serializable class that might be missing a no-args constructor, but it all looks OK so far. I need help identifying whether this could be a Tangosol Coherence issue, or whether it could be avoided by tweaking some configuration.
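For context, the "no valid constructor" rule is easy to reproduce in isolation: Java serialization must invoke the no-arg constructor of the closest non-serializable ancestor when deserializing, so writing an object can succeed while reading it back fails. A minimal sketch (the class names here are made up for illustration):

```java
import java.io.*;

// Base is NOT Serializable and has no no-arg constructor. Deserialization
// must run the no-arg constructor of the first non-serializable ancestor,
// so reading a Child back fails with "no valid constructor".
class Base {
    Base(int unused) { }
}

class Child extends Base implements Serializable {
    private static final long serialVersionUID = 1L;
    Child() { super(0); }
}

public class NoValidCtorDemo {
    // Serializes a Child, then tries to read it back.
    // Returns the exception message, or null if deserialization worked.
    static String roundTrip() throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new Child());   // writing succeeds
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            in.readObject();                // reading throws
            return null;
        } catch (InvalidClassException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

If any class in the parent chain of CoherenceWebSessionData's attribute graph looks like `Base` above, that would produce exactly this stack trace.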
Further details on Coherence configurations:
Version: 3.7.1.6
Collection class: com.tangosol.coherence.servlet.SplitHttpSessionCollection
Session-cache-config:
<?xml version="1.0"?>
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!-- -->
<!-- Cache configuration descriptor for Coherence*Web -->
<!-- -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
<caching-scheme-mapping>
<!--
The clustered cache used to store Session management data.
-->
<cache-mapping>
<cache-name>session-management</cache-name>
<scheme-name>replicated</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store ServletContext attributes.
-->
<cache-mapping>
<cache-name>servletcontext-storage</cache-name>
<scheme-name>replicated</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store Session attributes.
-->
<cache-mapping>
<cache-name>session-storage</cache-name>
<scheme-name>session-near</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store the "overflowing" (split-out due to size)
Session attributes. Only used for the "Split" model.
-->
<cache-mapping>
<cache-name>session-overflow</cache-name>
<scheme-name>session-distributed</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store IDs of "recently departed" Sessions.
-->
<cache-mapping>
<cache-name>session-death-certificates</cache-name>
<scheme-name>session-certificate</scheme-name>
</cache-mapping>
<!--
The local cache used to store Sessions that are not yet distributed (if
there is a distribution controller).
-->
<cache-mapping>
<cache-name>local-session-storage</cache-name>
<scheme-name>unlimited-local</scheme-name>
</cache-mapping>
<!--
The local cache used to store Session attributes that are not distributed
(if there is a distribution controller or attributes are allowed to become
local when serialization fails).
-->
<cache-mapping>
<cache-name>local-attribute-storage</cache-name>
<scheme-name>unlimited-local</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<!--
Replicated caching scheme used by the Session management and ServletContext
attribute caches.
-->
<replicated-scheme>
<scheme-name>replicated</scheme-name>
<service-name>ReplicatedSessionsMisc</service-name>
<request-timeout>30s</request-timeout>
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-local</scheme-ref>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</replicated-scheme>
<!--
Near caching scheme used by the Session attribute cache. The front cache
uses a Local caching scheme and the back cache uses a Distributed caching
scheme.
-->
<near-scheme>
<scheme-name>session-near</scheme-name>
<front-scheme>
<local-scheme>
<scheme-ref>session-front</scheme-ref>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>session-distributed</scheme-ref>
</distributed-scheme>
</back-scheme>
<invalidation-strategy>all</invalidation-strategy>
</near-scheme>
<local-scheme>
<scheme-name>session-front</scheme-name>
<eviction-policy>HYBRID</eviction-policy>
<high-units>200000</high-units>
<low-units>150000</low-units>
</local-scheme>
<distributed-scheme>
<scheme-name>session-distributed</scheme-name>
<scheme-ref>session-base</scheme-ref>
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-local</scheme-ref>
</local-scheme>
<!-- for disk overflow use this backing scheme instead:
<overflow-scheme>
<scheme-ref>session-paging</scheme-ref>
</overflow-scheme>
-->
</backing-map-scheme>
</distributed-scheme>
<!--
Distributed caching scheme used by the "recently departed" Session cache.
-->
<distributed-scheme>
<scheme-name>session-certificate</scheme-name>
<scheme-ref>session-base</scheme-ref>
<backing-map-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>4000</high-units>
<low-units>3000</low-units>
<expiry-delay>86400</expiry-delay>
</local-scheme>
</backing-map-scheme>
</distributed-scheme>
<!--
"Base" Distributed caching scheme that defines common configuration.
-->
<distributed-scheme>
<scheme-name>session-base</scheme-name>
<service-name>DistributedSessions</service-name>
<thread-count>0</thread-count>
<lease-granularity>member</lease-granularity>
<local-storage system-property="tangosol.coherence.session.localstorage">true</local-storage>
<partition-count>257</partition-count>
<backup-count>1</backup-count>
<backup-storage>
<type>on-heap</type>
</backup-storage>
<request-timeout>30s</request-timeout>
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-local</scheme-ref>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<!--
Disk-based Session attribute overflow caching scheme.
-->
<overflow-scheme>
<scheme-name>session-paging</scheme-name>
<front-scheme>
<local-scheme>
<scheme-ref>session-front</scheme-ref>
</local-scheme>
</front-scheme>
<back-scheme>
<external-scheme>
<bdb-store-manager/>
</external-scheme>
</back-scheme>
</overflow-scheme>
<!--
Local caching scheme definition used by all caches that do not require an
eviction policy.
-->
<local-scheme>
<scheme-name>unlimited-local</scheme-name>
<service-name>LocalSessionCache</service-name>
</local-scheme>
<!--
Clustered invocation service that manages sticky session ownership.
-->
<invocation-scheme>
<service-name>SessionOwnership</service-name>
<request-timeout>30s</request-timeout>
</invocation-scheme>
</caching-schemes>
</cache-config>
Hi,
This appears to be a classloader issue. It is doubtful that something in your cache configuration could fix this. If you need further assistance I suggest filing a request with Oracle support.
Hope this helps,
Patrick -
Client system: iPad 4 running iOS 8.1.1
Client language/locale: US English
Client app: MRD 8.1.5
Server system: 64-bit Windows 8.1 Pro
Server language/locale: US English
Problem: client-side keyboard in MRD 8.1.5 does not work properly within VMware Player 6 console only
When interacting with VMs through the VMware Player 6 console app on the remote Windows 8.1 host, the client-side iPad keyboard for MRD 8.1.5 is not properly mapped. Letter keys and the top-row number keys send function keys or non-standard key scan codes
that aren't used by traditional PC keyboards. The spacebar sends the D key, and some keys send nothing at all. Many other keys work properly, including the client-side numeric keypad, and uppercase letters sent by using the Shift key (but not the iPad up-arrow
shift key). The only way to send the proper keystrokes to VMware Player from MRD 8.1.5 is to not use the client-side iPad keyboard and instead switch to the on-screen keyboard provided by the remote Windows server.
This problem only occurs within the VMware Player console app, and only with RDP connections that use the MRD 8.1.5 client on iOS. I do not encounter the problem with other iOS RD clients such as iFreeRDP by Thinstuff or Pocket Cloud Remote Desktop by Wyse.
Steps to reproduce:
Connect to Windows 8.1 Pro system from the MRD 8.1.5 client for iOS 8.1.1
Using the client-side iPad keyboard within MRD 8.1.5, bring up the Run dialog by typing Windows-R
Launch Notepad by typing notepad.exe in the Run dialog and pressing Enter on the client-side iPad keyboard
Type some sample text in Notepad until you're confident that the client-side iPad keyboard is functioning properly
Launch VMware Player 6 and start up a VM (mine was Windows Server 2008)
Open the sign-on prompt in the VM by sending Ctrl-Alt-Ins from the client-side keyboard or by pressing the Ctrl-Alt-Del icon in VMware Player
Touch or click in the password field in the VM to ensure it has keyboard focus
Using the client-side keyboard, try to type letters or numbers in the password field, and notice that dots generally do not appear for most keypresses
Switch to the server-side on-screen keyboard and delete the contents of the password field if it is not already empty
Use the server-side on-screen keyboard to sign on to the VM
Inside the VM, open Notepad or some other text editor
Enter text into the editor from both the client-side and server-side keyboards to verify that only the server-side keyboard is functioning properly within the VM
This issue is the only problem I'm having with MRD for iOS, and I hope it is resolved soon.
Thanks,
Fred
Client system: iPad 4 running iOS 8.1.1
Client language/locale: US English
Client app: MRD 8.1.5
Server system: 64-bit Windows 8.1 Pro
Server language/locale: US English
Problem: client-side keyboard in MRD 8.1.5 does not work properly within VMware Player 6 console only
I'm experiencing exactly the same problem. Is there a solution yet?
-
Removing clustering setup for SQL Server on VMware server
This is an unusual question that I'm hoping someone can offer advice.
I have a SQL server 2008 cluster setup between 2 VMware ESXi hosts (4.1) with shared FC LUNs for the data, tempdb and any required cluster devices (ie. quorum). The SQL server is part of our active directory infrastructure.
Unfortunately, we have been experiencing issues with our shared storage, so I would like to dismantle the cluster and make use of vmdks instead. Ideally, we remove the clustered nodes and go back to a single node.
However, when removing the cluster setup, the IP of the database changes. So the cluster IP will be removed and the single node IP will be used. Unfortunately, a lot of our apps point to the database via IP (yes, I realize this was
a mistake and will be fixed to use hostnames in the future).
To get around the IP problem, I'm thinking of doing the following:
1) remove the 2nd node from the cluster. Now I have single node in a cluster.
2) create the appropriate number of vmdk disks and move the databases (user, master, model, msdb) and any other cluster devices (quorum) to the vmdks
Since I'm keeping the single node in a cluster, I don't need shared storage anymore (since there will never be a failover to a second node), and I'm assuming the newly created "clustered" vmdks SHOULD work, since SQL Server cannot differentiate
between local vmdks and shared disks.
Now I have a cluster with a single node and no dependencies on shared storage. But most importantly, I can keep the cluster IP intact.
Does this approach work? Is there an easier solution? This is just temporary until we can correct all of our apps to replace the IPs with hostnames. Then the cluster will be completely uninstalled.
Even with a single node, a Failover Cluster Instance requires shared storage.
You need to perform a side-by-side migration to a new standalone instance (either on an existing cluster node or a new VM).
After you have tested the migration you can remove the IP address from the Client Access Point on the old Failover Cluster Instance and assign the IP to the VM hosting the new instance. By default SQL Server will listen on all the IP addresses on the
server, and you can optionally configure it to listen on selected IPs/ports with SQL Server Configuration Manager.
For Windows Integrated Auth, you may need to perform some additional security configuration after moving the IP address. See, eg
http://blogs.msdn.com/b/dbrowne/archive/2012/05/21/how-to-add-a-hostname-alias-for-a-sql-server-instance.aspx
David
David http://blogs.msdn.com/b/dbrowne/
-
Silly as it sounds, we have a SQL2008r2 Clustered SharePoint farm with only one node. We did intend to have 2 nodes but due to costs and other projects taking priority it was left as is.
We have decided to virtualise the Database server (to be SQL2008r2 un-clustered) to take advantage of VMware H/A etc.
Our current setup is
shareclus = cluster (1 node – sharedb (SharePoint database server))
shareapp1 = application server (LB)
shareapp2 = application server (LB)
shareweb1 = WFE (LB)
shareweb2 = WFE (LB)
and would like to go to
sharedb01vm = SharePoint Database server
shareapp1 = application server (LB)
shareapp2 = application server (LB)
shareweb1 = WFE (LB)
shareweb2 = WFE (LB)
So at the moment the database is referenced in Central Administration as shareclus. If I break the cluster, shareclus will not exist, so I don’t think I will be able to use aliases(?), but I’m not sure.
Can anyone help? Has anyone done this before? Any tips/advice to migrate or otherwise get the SQL DB virtualised would be greatly received.
I haven't done this specifically with SharePoint, but I don't think it will be any different.
Basically you build the new VM with the name sharedb01vm. When you do the cut-over, i.e. when you are moving all the databases, you rename the servers: the new VM is renamed to shareclus and the old cluster can be named anything you like.
At this point the sharepoint server should point to the new VM where you have already migrated the db's.
Another option is to create an alias on the SharePoint server so that "shareclus" points to sharedb01vm.
I have seen both of these in different environments, but I basically don't prefer the alias option, as it creates confusion for people who don't know about it.
Regards, Ashwin Menon My Blog - http:\\sqllearnings.com
-
Silly as it sounds, yes, we have a SQL2008r2 Clustered SharePoint farm with only one node. We did intend to have 2 nodes but due to costs and other projects taking priority it was left as is.
So our current setup is an SQL cluster (1 node), 2 App servers (VM’s) and 2 Web servers (VM’s)
We have decided to virtualise the Database server (to be SQL2008r2 un-clustered) to take advantage of VMware H/A etc.
I’ve had a look around and seen the option to use SQL aliases, but I’m not sure that’s the best option. I was thinking of rebuilding the DB server but was wondering if there are any other options.
Has anyone done this before? Any tips/advice to migrate or otherwise get the SQL DB virtualised would be greatly received.
Hi, yes that's correct, but my query is really about the SharePoint side and maybe using SQL aliases.
My current setup is
shareclus = cluster (1 node – sharedb (SharePoint database server))
shareapp1 = application server (LB)
shareapp2 = application server (LB)
shareweb1 = WFE (LB)
shareweb2 = WFE (LB)
and would like to go to
sharedb01vm = SharePoint Database server
shareapp1 = application server (LB)
shareapp2 = application server (LB)
shareweb1 = WFE (LB)
shareweb2 = WFE (LB)
So at the moment the database is referenced in CA as shareclus. If I break the cluster, shareclus will not exist so I don’t think I will be able to use aliases(?) but I’m not sure.
Can anyone help?
-
ACE (ANM) to VMWare rserver mapping
I've successfully imported a VMware vCenter v4 server into ANM v4.2 (to manage our ACE modules), and although the mappings have correctly matched VMs to rservers in ANM, in vCenter the mapping doesn't show in the Cisco ACE SLB tab. I noticed during the import that ANM logged info-level events in vCenter with a custom field called ANM_MAPPING_FIELD for each matched VM, but the value between <rserver> and </rserver> is missing. I don't see how to correct this. The VM names only use alphanumeric characters.
Hi Abijith
This is what would work for sure, but it is not feasible in my customer's environment. Since NAT is not really supported, I will have to find another solution with the customer. ANM does not really fit then, unless the security officer of my customer makes an exception to his policy...
Thanks and cheers
Andi
-
Google Marker clusters (BC Map App)
Hi guys
Hoping you can help me with a little problem.
We have been working with the BC Map App from Kiyuco (http://www.bcmapapp.com) for a couple of months now and it works great, but I am trying to add a new feature and it is causing me some problems.
At the moment I am working on a new client; they have almost 200 resellers in our little country (Denmark), and it is hard to get a good overview on the map app, so I was looking into the possibility of using the "MarkerCluster" function in Google Maps. I am almost there, but I was hoping you could help me with the finishing touches.
I am running a test here: http://kinnanlg.geniesite.net/forhandlere2013/forhandlere2
It works OK, but it seems to be running very slowly, and I am not sure if it is "looping" correctly when zooming in/out etc. I've been using this example (http://jsfiddle.net/Sas67/3/) and it is running a lot more smoothly even though there are more markers.
Any tips/advice will be greatly appreciated.
- Please let me know if you need more info.
I finally found the error.
The Marker Cluster "script" was firing multiple times, instead of just one time.
I moved it out of the "loop" and now it works!
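For anyone hitting the same slowdown, the shape of the fix can be sketched like this. The two factory functions are injected stand-ins, so none of the real Maps API is assumed here; the point is purely structural: markers are created inside the loop, but the clusterer is created and filled exactly once, after the loop.

```javascript
// Sketch of the fix: the clusterer is created once, after the loop,
// instead of being re-created on every iteration. makeMarker and
// makeClusterer are stand-ins for google.maps.Marker / MarkerClusterer.
function plotResellers(resellers, makeMarker, makeClusterer) {
  const markers = [];
  for (const r of resellers) {
    // One marker per reseller -- this part belongs in the loop.
    markers.push(makeMarker({ position: { lat: r.lat, lng: r.lng }, title: r.name }));
  }
  // One clusterer for the whole set -- this part does not.
  const clusterer = makeClusterer();
  clusterer.addMarkers(markers);
  return clusterer;
}
```

With the real API, makeMarker would wrap the marker constructor and makeClusterer would construct a single clusterer bound to the map.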