OES Clusters

I am looking for suggestions on the best way to move our current three pairs of NetWare 6.5 clusters to OES11. We are looking at doing this over the summer. We will need to support Macs and iPads in our environment next year. I am trying to figure out whether we can use the hardware we have, go virtual, or use a mixture of both.
All of the clustered servers are HP DL380 G4s (i386). Each clustered pair has its own MSA500 G2 connected. One of the clusters is used for ZENworks and the other two are used for DHCP & data.
Additional hardware we have available for use currently:
1 - HP DL380 G4 (i386)
1 - HP DL380 G5 (64-bit)
We also have a vSphere environment with ESXi servers. (Can I install OES11 clusters on ESXi?) I had read that many were having problems joining an OES11 cluster on ESXi: when they joined one VM server to the cluster everything seemed fine, but when a second server from inside the VM environment tried to connect it always failed. Is it possible to have more than one clustered OES11 VM server using vSphere and ESXi?
If I cannot install on ESXi (I believe I read that we have to use Xen), can I use the two servers listed above to run Xen and install the OES11 clusters on them? Then I could connect the MSAs to the Xen servers.
I am looking for what others have done in a hardware pinch.
Thank you for your suggestions in advance!
Tracy

On 07/05/2012 17:06, tmishler wrote:
> I am looking for suggestions on the best way to move our current 3 pair
> of Netware 6.5 clusters to OES11. We are looking at doing this over the
> summer. We will need to support macs and ipads in our environment next
> year. I am trying to figure out if we can use the hardware we have/can
> or go virtual or a mixture of both.
I think you need to explain what you mean by "need to support macs and
ipads", as NetWare 6.5 will certainly support Mac access via AFP, though
it has issues with OS X Lion (10.7) and later due to the user
authentication method in use.
> All of the clustered servers are HP DL380's G4's (i386). Each
> clustered pair have their own MSA500 G2's connected. 1 of the clusters
> is used for Zenworks and the other 2 are used for DHCP & Data.
What does the "i386" reference mean? That your servers have 32-bit CPUs
installed? HP's product support page suggests they have 64-bit Xeon CPUs.
> Additional hardware we have available for use currently:
>
> 1 - HP 380DL G4 (i386)
> 1 - HP 380DL G5 (64bit)
>
> We also have an Vsphere with ESXi servers. (Can I install OES11
> clusters on ESXi)? I had read that many were having problems joining a
> OES11 cluster on ESXi. When they joined 1 vmserver to the cluster
> everything seemed fine, when a second server from inside the vm tried to
> connect it always failed. Is it possible to have more than one
> clustered OES11 vmserver using vsphere and ESXi?
You can certainly install OES11 servers, standalone or clustered, as
virtual guests using either ESXi or Xen as the host.
> If I cannot install on ESXi (I believe I read that we have to use Xen),
> can I use the two servers listed above to run Xen and install the OES11
> clusters on them? Then I could connect the MSA's to the Xen Servers.
>
> I am looking for what others have done in a hardware pinch.
First things first: establish what CPUs you have in your servers. If
they're 64-bit then you have several options:
1) install OES11 (on SLES11 SP1) directly
2) install ESXi or SLES11 SPn + Xen and virtualise OES11
but if they're 32-bit then, unless you can replace the CPUs, you're
going to need to replace the servers themselves (or at least remove them
from the equation).
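To check the first point from a Linux rescue or live system: the 'lm' (long mode) flag in /proc/cpuinfo indicates a 64-bit capable CPU. A minimal sketch:

```shell
# Quick sanity check before planning the upgrade: the 'lm' (long mode)
# CPU flag means the processor can run a 64-bit OS such as OES11/SLES11.
if grep -qw lm /proc/cpuinfo; then
    echo "64-bit capable"
else
    echo "32-bit only"
fi
```

Run it once per box; anything reporting "32-bit only" is out of the picture for OES11.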
> Thank you for your suggestions in advance!
HTH.
Simon
Novell/SUSE/NetIQ Knowledge Partner
Do you work with Novell technologies at a university, college or school?
If so, your campus could benefit from joining the Novell Technology
Transfer Partner (TTP) program. See novell.com/ttp for more details.

Similar Messages

  • CIFS clustering

    I have a 6 node OES 2 sp3 cluster that is generally working well. We are moving to Windows 7 workstations without the Novell client. So far, our two main NSS volumes seem to be working fine with that but are accessed via IP address. I need to get the workstations seeing the CIFS shares with the same name as the Novell client sees. For example:
    Novell NSS path:
    \\name-lc-share1\share1\share\ is mapped to S:\
    My Win 7 systems see:
    \\10.1.1.1\share1\share\ just fine and in fact also map that to S:\. The problem is only accessing the CIFS via name.
    But in Win 7, if there is a link with the NSS path it doesn't work; it can't find the share name. Doing a novcifs -sln share1, I see the server name is "name-node1_w". So I created a CNAME in DNS pointing that to name-lc-share1 and it still fails.
    What am I doing wrong with the config? There are 2 shares on the cluster I need users to access and they may run on any of the 6 nodes. They generally don't ever get migrated but sometimes they do. And it's going to be a slow rollout of updated Win 7 clients, so I need to make sure our XP machines with the Novell client still work as well.
    Any ideas?
    Todd Bowman
    Senior Network Analyst
    University of Minnesota Physicians
    612-884-0744
    [email protected]

    When you "cluster" CIFS, you define the server names, WINS, Oplocks, DFS settings on the physical node(s) themselves, not on the virtual nodes.
    Example:
    Server1 = physical node
    vserver1 = NCS Virtual clustered server object.
    So let's say:
    \\vserver1\vol1 can reside on server1, server2, and server3
    You need to install and configure CIFS on ALL three Server1-3 nodes
    The NCS Virtual Server object can have a corresponding CIFS "virtual server" name as well, but I do not believe it can be the SAME name as the NCP one.
    Example:
    CIFS virtual cluster server would be:
    vserver1-cifs
    for example
    Just use DNS names accordingly.
    Also, note that underscores in DNS names are not normally allowed by most BIND-compliant DNS servers, so we usually use dashes instead of underscores.
    Also, CIFS requires the use of NETBIOS, so you need to make sure your Win7 has NetBIOS enabled as well.
    Here's an example (I changed the IP's of course)
    We have a virtual cluster server:
    CS1-DATA1
    The NCP "share" is:
    \\cs1-data1\data1
    For CIFS, we share/access as:
    \\cs1-data1-w\data1
    (although technically you can still get to it via \\cs1-data1\data1 if you have NO Novell Client on the machine, because the cluster resources are on the same server and the same IP, and CIFS just shares out the volumes locally).
    Keep in mind that the CIFS "name" is on the eDir object when you created it with NSSMU (or later edited via iManager).
    ## Load script (with modifications) ##
    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs                # NCS helper functions (exit_on_error etc.)
    exit_on_error nss /poolact=DATA1POOL          # activate the NSS pool
    exit_on_error ncpcon mount DATA1=250          # mount the volume (volume ID 250)
    exit_on_error add_secondary_ipaddress 10.10.10.135
    exit_on_error ncpcon bind --ncpservername=CS1-DATA1 --ipaddress=10.10.10.135
    exit_on_error novcifs --add '--vserver=".cn=CS1-DATA1.ou=SVCS.o=ABC.t=ACME."' --ip-addr=10.10.10.135   # advertise the CIFS virtual server
    exit 0
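    For completeness, a matching unload script would typically reverse these steps using ignore_error so the resource can always be taken offline. This is a sketch based on the names in the load script above, not taken from the original post:

```shell
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
# Tear down in reverse order; ignore_error lets the offline continue
# even if an individual step fails.
ignore_error novcifs --remove '--vserver=".cn=CS1-DATA1.ou=SVCS.o=ABC.t=ACME."' --ip-addr=10.10.10.135
ignore_error ncpcon unbind --ncpservername=CS1-DATA1 --ipaddress=10.10.10.135
ignore_error del_secondary_ipaddress 10.10.10.135
ignore_error nss /pooldeact=DATA1POOL
exit 0
```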

  • Moving a NetWare iPrint installation to an OES/Linux cluster

    Hi folks,
    I have an iPrint installation on my NetWare 6.5 cluster that i want to
    move to my OES/Linux cluster. The two clusters are completely
    independent at present, and need to remain that way. Is there any way i
    can move printers and drivers without recreating them and manually
    duplicating the settings?
    Thanks,
    Paul

    Niclas Ekstedt wrote:
    > On Wed, 02 Aug 2006 01:34:43 +0000, Paul Gear wrote:
    >
    > The Consolidation Utility available from the Download site
    > is capable of moving PA's between Managers.
    Thanks, Niclas - i'll check it out.
    Paul

  • Clustered volume question

    I've recently taken over the administration of a network that has the core file shares on 3 physical, clustered NetWare 6.5 machines. They are extremely old servers and the Fibre Channel SAN on which the file shares reside is at end of life. The combined size of these file shares is around 1 TB. I need to migrate these volumes to some other platform.
    I'm much more familiar with SLES and OES2 than I am with NetWare. I need to move these file shares to new EqualLogic SAN volumes. I'd like to create a clustered or at least an HA environment. I'd like the hosts to be VMs on ESX 3.5 (soon to be upgraded to ESXi 4.0).
    What I'm looking for are some suggestions from the community as to how to proceed. Reliability is an issue as is speed. The Netware/Clustered volume environment has been extremely reliable.
    We are a private university with approximately 600-800 concurrent users. Any suggestions would be helpful.

    Novell does provide migration tools. Personally I like to do the work
    manually.
    Moving the data shouldn't be a problem: you can export the volumes on
    NetWare via NFS and rsync the data directly to an OES server (you
    could use CIFS or NCP as well; I just find NFS faster).
    If you use the rsync -a option it preserves most file information
    (there are other options).
    Back up the trustees on NetWare (using trustee.nlm); you will then
    need to convert this file to the OES format. Again, I do this via a
    script that turns each line into a RIGHTS import command (rights is
    the utility used on OES Linux).
    If you have moved home drives you will need to re-apply file
    permissions, either via the trustee export or, as I do, by running an
    'ls' in a for loop over the users' folders on Linux and applying the
    results with the rights command.
    You will also need to re-add the homeDirectory attribute. Again I use
    a tool that a colleague wrote; it is called ldapdo and is on Cool
    Solutions.
    We run a 5-node cluster on physical hardware; I am not sure how well
    supported clustering is on ESX. Personally I would use physical
    hardware, but then I am not a fan of ESX.
    Personally, I don't find OES Linux as good as NetWare; we had the same
    number of resources on fewer servers. That said, with the latest
    updates stability has vastly improved and so have the cluster migrate
    speeds.
    We currently have up to 2000 concurrent users on a busy day (with up
    to 80-90 concurrent logins for labs). We provide access via NCP, CIFS
    and NFS for Linux labs, although we are looking to remove NFS as with
    NSS you can only export the file system with NO_ROOT_SQUASH.
    Our data usage is about 14 TB across 30 resources. Our university is
    trying to centralise all IT onto Windows but is failing to provide a
    solution that is stable and able to support multi-protocol access. In
    the meantime, our system limps on, on old hardware.
    If you want any of the scripts let me know; they really were thrown
    together but they do the job. I have to migrate data quite often onto
    new (bigger) disks as our data requirements grow. We use some pretty
    reliable but cheap SATA SANs called Infortrend, as our budget is
    minimal.
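    The trustee-to-rights conversion described above can be sketched like this. The CSV field layout ("path","FDN","mask") and the /media/nss mount point are assumptions for illustration, not the actual TRUSTEE.NLM export format:

```shell
# Hypothetical sketch: turn one exported trustee line into an OES Linux
# 'rights' command.  Field layout and paths are assumed.
echo '"DATA:users/jsmith","jsmith.users.acme","RWCEMF"' |
awk -F'","' '{
    gsub(/^"|"$/, "")    # strip the outer quotes; awk then re-splits $0
    gsub(/:/, "/", $1)   # NetWare VOL:path -> VOL/path
    printf "rights -f /media/nss/%s -r %s trustee %s\n", $1, $3, $2
}'
# prints: rights -f /media/nss/DATA/users/jsmith -r RWCEMF trustee jsmith.users.acme
```

Piping a whole export through the awk body would emit one rights command per trustee, ready to run on the OES node.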

  • SLP.CFG where one DA is clustered, 2nd is not?

    I am running SLPDA on a 4-node cluster, OES-NW65sp5 in eDir 8.81. This
    is replacing our old DA which ran on a single box.
    However, I would like to still (at least for a while) run a secondary
    DA on the non-clustered box. For a normal two-DA setup the practice
    would be to have SLP.CFG on each of the DA boxes point to each other.
    On clustered SLP the SLP.CFG entry points to the secondary address
    being used by the clustered SLP Resource. Perhaps the numbers explain
    it best:
    Old DA: XXX.XXX.XXX.249
    This box currently has no entry in SLP.CFG; I assume I would need
    to add XXX.XXX.XXX.172 to point to the clustered SLP resource.
    NEW Clustered DA: Running on XXX.XXX.XXX.172. Each node in the
    cluster has an entry in SYS:ETC\SLP.CFG pointing to XXX.XXX.XXX.172.
    Do I need to add an entry here to point to XXX.XXX.XXX.249? Everything
    I've read indicates this would be the case, but I don't want to chance
    disrupting access for everyone if I get this wrong.
    All our clients receive SLP information via DHCP.
    Thanks,
    DeVern

    On 02/05/07 DeVern Gerber wrote:
    > ...you point the other DA to the cluster resource IP address,
    > not the node's own IP address.
    To clarify, this is on the assumption that you have cluster-enabled the
    SLPDA. If not then simply use the node's own IP address as normal.
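    For illustration, assuming the standard SLP.CFG DA directive syntax, the two files would then point at each other something like this (a sketch; the XXX placeholders from the post are kept as-is):

```text
; SYS:ETC\SLP.CFG on the old stand-alone DA (XXX.XXX.XXX.249)
DA IPV4, XXX.XXX.XXX.172   ; clustered SLP resource address

; SYS:ETC\SLP.CFG on each cluster node (clustered DA, XXX.XXX.XXX.172)
DA IPV4, XXX.XXX.XXX.249   ; secondary, non-clustered DA
```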
    Andrew C Taubman
    Novell Support Forums Volunteer SysOp
    http://support.novell.com/forums
    (Sorry, support is not provided via e-mail)
    Opinions expressed above are not
    necessarily those of Novell Inc.

  • Issue in Synchronous File Read in clustered environment

    Hi,
    We are using a clustered environment (4 managed servers) on Unix. In an OSB 11gR3 proxy service we are using Synchronous File Read. Randomly we get the error below. Let us know what could cause this issue; the same code works fine in a single stand-alone server configuration.
    Error Code : BEA-380002 , Error Reason : Invoke JCA outbound service failed with connection error, exception: com.bea.wli.sb.transports.jca.JCATransportException: oracle.tip.adapter.sa.api.JCABindingException: oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException: servicebus:/WSDL/wsdlPathAndName [ SynchRead_ptt::SynchRead(Empty,body) ] - WSIF JCA Execute of operation 'SynchRead' failed due to: No Data to process.
    No Data to process.
    File /root/oracle/domains/osb/11.1.1.4/cluster/data/osb2/FolderName/Filename.txt to be processed was not found or not available or has no content ; nested exception is:
    BINDING.JCA-11007
    No Data to process.
    No Data to process.
    File /root/oracle/domains/osb/11.1.1.4/cluster/data/osb2/FolderNamer/Filename.txt to be processed was not found or not available or has no content Please make sure that the file exists in the specified directory.
    com.bea.wli.sb.transports.jca.JCATransportException: oracle.tip.adapter.sa.api.JCABindingException: oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException: servicebus:/WSDL/wsdlPathAndName [ SynchRead_ptt::SynchRead(Empty,body) ] - WSIF JCA Execute of operation 'SynchRead' failed due to: No Data to process.
    No Data to process.
    File /root/oracle/domains/osb/11.1.1.4/cluster/data/osb2/FolderName/Filename.txt to be processed was not found or not available or has no content ; nested exception is:
    BINDING.JCA-11007
    No Data to process.
    No Data to process.
    File /root/oracle/domains/osb/11.1.1.4/cluster/data/osb2/FolderName/Filename.txt to be processed was not found or not available or has no content Please make sure that the file exists in the specified directory.
    at com.bea.wli.sb.transports.jca.binding.JCATransportOutboundOperationBindingServiceImpl.invoke(JCATransportOutboundOperationBindingServiceImpl.java:153)
    at com.bea.wli.sb.transports.jca.JCATransportEndpoint.sendRequestResponse(JCATransportEndpoint.java:209)
    at com.bea.wli.sb.transports.jca.JCATransportEndpoint.send(JCATransportEndpoint.java:170)
    at com.bea.wli.sb.transports.jca.JCATransportProvider.sendMessageAsync(JCATransportProvider.java:598)
    at sun.reflect.GeneratedMethodAccessor1115.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.bea.wli.sb.transports.Util$1.invoke(Util.java:83)
    at $Proxy142.sendMessageAsync(Unknown Source)
    at com.bea.wli.sb.transports.LoadBalanceFailoverListener.sendMessageAsync(LoadBalanceFailoverListener.java:148)
    at com.bea.wli.sb.transports.LoadBalanceFailoverListener.sendMessageToServiceAsync(LoadBalanceFailoverListener.java:603)
    at com.bea.wli.sb.transports.LoadBalanceFailoverListener.sendMessageToService(LoadBalanceFailoverListener.java:538)
    at com.bea.wli.sb.transports.TransportManagerImpl.sendMessageToService(TransportManagerImpl.java:558)
    at com.bea.wli.sb.transports.TransportManagerImpl.sendMessageAsync(TransportManagerImpl.java:426)
    at com.bea.wli.sb.pipeline.PipelineContextImpl.doDispatch(PipelineContextImpl.java:670)
    at com.bea.wli.sb.pipeline.PipelineContextImpl.dispatchSync(PipelineContextImpl.java:551)
    at stages.transform.runtime.WsCalloutRuntimeStep$WsCalloutDispatcher.dispatch(WsCalloutRuntimeStep.java:1391)
    at stages.transform.runtime.WsCalloutRuntimeStep.processMessage(WsCalloutRuntimeStep.java:236)
    at com.bea.wli.sb.stages.StageMetadataImpl$WrapperRuntimeStep.processMessage(StageMetadataImpl.java:346)
    at com.bea.wli.sb.stages.impl.SequenceRuntimeStep.processMessage(SequenceRuntimeStep.java:33)
    at com.bea.wli.sb.pipeline.PipelineStage.processMessage(PipelineStage.java:84)
    at com.bea.wli.sb.pipeline.PipelineContextImpl.execute(PipelineContextImpl.java:1055)
    at com.bea.wli.sb.pipeline.Pipeline.processMessage(Pipeline.java:141)
    at com.bea.wli.sb.pipeline.PipelineContextImpl.execute(PipelineContextImpl.java:1055)
    at com.bea.wli.sb.pipeline.PipelineNode.doRequest(PipelineNode.java:55)
    at com.bea.wli.sb.pipeline.Node.processMessage(Node.java:67)
    at com.bea.wli.sb.pipeline.PipelineContextImpl.execute(PipelineContextImpl.java:1055)
    at com.bea.wli.sb.pipeline.Router.processMessage(Router.java:214)
    at com.bea.wli.sb.pipeline.MessageProcessor.processRequest(MessageProcessor.java:96)
    at com.bea.wli.sb.pipeline.RouterManager$1.run(RouterManager.java:593)
    at com.bea.wli.sb.pipeline.RouterManager$1.run(RouterManager.java:591)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
    at com.bea.wli.sb.security.WLSSecurityContextService.runAs(WLSSecurityContextService.java:55)
    at com.bea.wli.sb.pipeline.RouterManager.processMessage(RouterManager.java:590)
    at com.bea.wli.sb.transports.TransportManagerImpl.receiveMessage(TransportManagerImpl.java:375)
    at com.bea.wli.sb.transports.jca.binding.JCATransportInboundOperationBindingServiceImpl$4.run(JCATransportInboundOperationBindingServiceImpl.java:415)
    at com.bea.wli.sb.transports.jca.binding.JCATransportInboundOperationBindingServiceImpl$4.run(JCATransportInboundOperationBindingServiceImpl.java:413)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
    at weblogic.security.Security.runAs(Security.java:61)
    at com.bea.wli.sb.transports.jca.binding.JCATransportInboundOperationBindingServiceImpl.sendMessage(JCATransportInboundOperationBindingServiceImpl.java:413)
    at com.bea.wli.sb.transports.jca.binding.JCATransportInboundOperationBindingServiceImpl.invokeOneWay(JCATransportInboundOperationBindingServiceImpl.java:126)
    at com.bea.wli.sb.transports.jca.binding.JCAInboundRequestListener.post(JCAInboundRequestListener.java:39)
    at oracle.tip.adapter.sa.impl.inbound.JCAInboundListenerImpl.onMessage(JCAInboundListenerImpl.java:170)
    at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.onMessage(MessageEndpointImpl.java:502)
    at oracle.tip.adapter.file.inbound.Publisher.onMessageDelegate(Publisher.java:493)
    at oracle.tip.adapter.file.inbound.Publisher.publishMessage(Publisher.java:419)
    at oracle.tip.adapter.file.inbound.InboundTranslatorDelegate.xlate(InboundTranslatorDelegate.java:484)
    at oracle.tip.adapter.file.inbound.InboundTranslatorDelegate.doXlate(InboundTranslatorDelegate.java:121)
    at oracle.tip.adapter.file.inbound.ProcessorDelegate.doXlate(ProcessorDelegate.java:388)
    at oracle.tip.adapter.file.inbound.ProcessorDelegate.process(ProcessorDelegate.java:174)
    at oracle.tip.adapter.file.inbound.ProcessWork.run(ProcessWork.java:349)
    at weblogic.work.ContextWrap.run(ContextWrap.java:41)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:207)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:176)
    Caused by: oracle.tip.adapter.sa.api.JCABindingException: oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException: servicebus:/WSDL/wsdlPathAndName [ SynchRead_ptt::SynchRead(Empty,body) ] - WSIF JCA Execute of operation 'SynchRead' failed due to: No Data to process.
    No Data to process.
    File /root/oracle/domains/osb/11.1.1.4/cluster/data/osb2/FolderName/Filename.txt to be processed was not found or not available or has no content ; nested exception is:
    BINDING.JCA-11007
    No Data to process.
    No Data to process.
    File /root/oracle/domains/osb/11.1.1.4/cluster/data/osb2/FolderName/Filename.txt to be processed was not found or not available or has no content Please make sure that the file exists in the specified directory.
    at oracle.tip.adapter.sa.impl.JCABindingReferenceImpl.request(JCABindingReferenceImpl.java:259)
    at com.bea.wli.sb.transports.jca.binding.JCATransportOutboundOperationBindingServiceImpl.invoke(JCATransportOutboundOperationBindingServiceImpl.java:150)
    ... 56 more
    Caused by: oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException: servicebus:/WSDL/wsdlPathAndName [ SynchRead_ptt::SynchRead(Empty,body) ] - WSIF JCA Execute of operation 'SynchRead' failed due to: No Data to process.
    No Data to process.
    File /root/oracle/domains/osb/11.1.1.4/cluster/data/osb2/FolderName/Filename.txt to be processed was not found or not available or has no content ; nested exception is:
    BINDING.JCA-11007
    No Data to process.
    No Data to process.
    File /root/oracle/domains/osb/11.1.1.4/cluster/data/osb2/FolderName/Filename.txt to be processed was not found or not available or has no content Please make sure that the file exists in the specified directory.
    at oracle.tip.adapter.sa.impl.fw.wsif.jca.WSIFOperation_JCA.performOperation(WSIFOperation_JCA.java:662)
    at oracle.tip.adapter.sa.impl.fw.wsif.jca.WSIFOperation_JCA.executeOperation(WSIFOperation_JCA.java:353)
    at oracle.tip.adapter.sa.impl.fw.wsif.jca.WSIFOperation_JCA.executeRequestResponseOperation(WSIFOperation_JCA.java:312)
    at oracle.tip.adapter.sa.impl.JCABindingReferenceImpl.invokeWsifProvider(JCABindingReferenceImpl.java:350)
    at oracle.tip.adapter.sa.impl.JCABindingReferenceImpl.request(JCABindingReferenceImpl.java:253)
    ... 57 more
    Caused by: BINDING.JCA-11007
    No Data to process.
    No Data to process.
    File /root/oracle/domains/osb/11.1.1.4/cluster/data/osb2/FolderName/Filename.txt to be processed was not found or not available or has no content Please make sure that the file exists in the specified directory.
    at oracle.tip.adapter.file.outbound.FileReader.readFile(FileReader.java:277)
    at oracle.tip.adapter.file.outbound.FileReader.executeFileRead(FileReader.java:181)
    at oracle.tip.adapter.file.outbound.FileInteraction.executeFileRead(FileInteraction.java:331)
    at oracle.tip.adapter.file.outbound.FileInteraction.execute(FileInteraction.java:395)
    at oracle.tip.adapter.sa.impl.fw.wsif.jca.WSIFOperation_JCA.performOperation(WSIFOperation_JCA.java:529)
    ... 61 more
    Edited by: 842347 on Jul 6, 2011 3:11 AM

    I face the same issue, and I have given all permissions on the folder to the OS user.
    Because of this error my server is not starting up. Is there any way I can undeploy this composite to get my server running?
    I can't do this from EM because the SOA server is failing to start up.
    I have tried removing it from $DOMAIN_HOME/deployed-composites, but when I try restarting the SOA server the composite still comes up there. Do we need to delete the entry somewhere else too? Kindly help.
    Thanks,
    Sri.

  • What is RID in non clustered index and its use

    Hi All,
    I need help with the following questions on SQL Server:
    1) What is the RID in a non-clustered index, and what is its use?
    2) What are physical and virtual address space, and what is the difference between 32-bit and 64-bit virtual address space?
    Regards
    Rahul

    Next time, please ask a single question per thread; you will get better responses.
    1. A RID is the location of a row in a heap. When you create a non-clustered index on a heap and
    a lookup happens to fetch extra columns, the RID is used to locate the rows. RID is basically Row ID. That is the basic definition; please read
    this thread for more details.
    2. I have not heard of "physical address space", but I know virtual address space (VAS).
    VAS, in simple terms, is the amount of (virtual) memory 'visible' to a process; the process can be the SQL Server process or a Windows process. It depends on the architecture of the operating system: a process running on a 32-bit OS can address at most 2^32 locations,
    so a 32-bit OS has a maximum VAS of 4 GB. Similarly, for 64-bit the theoretical maximum is 2^64, which is practically unlimited; to keep things feasible, the maximum VAS for a 64-bit system is capped at 8 TB.
    VAS acts as a layer of abstraction: instead of all requests mapping directly to physical memory, they first map to VAS and are then mapped to physical memory, so that requests for memory can be managed in a more coordinated fashion than if each process did it itself;
    otherwise memory would soon run short. Any process created on Windows sees virtual memory according to its VAS limit.
    Please read
    this article for detailed information.
    Please mark this reply as the answer if it solved your issue, or vote it as helpful, so that other forum members can benefit from it.
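    The 32-bit arithmetic above can be checked directly in a shell:

```shell
# 2^32 addressable locations = 4 GB of virtual address space for a
# 32-bit process (64-bit shell arithmetic assumed).
bytes=$(( 1 << 32 ))
echo "$bytes"                                  # 4294967296
echo "$(( bytes / 1024 / 1024 / 1024 )) GB"    # 4 GB
```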
    My Technet Wiki Article
    MVP

  • Space occupied by clustered index Vs non-clustered index

    I am trying to understand indexes. Does a clustered index occupy more space than a non-clustered index because it also carries the information for the rest of the columns? Could you please help me understand this? Thanks in advance.
    svk

    Hi czarvk,
    A clustered index arranges the way records are stored in a table, putting them in order by key; all the data is sorted on the values of the index key.
    A non-clustered index is a completely different object in a table, containing only a subset of columns and a row locator pointing to the table's rows or to the clustered index's key.
    Because the clustered index carries the full rows while a non-clustered index carries only a subset, the clustered index in SQL Server takes up more space.
    If you have any question, please feel free to let me know.
    Regards,
    Donghui Li

  • Clustered Index Vs Reverse Index

    I am searching against a column of a table (4 million rows) which has datatype varchar(19). Which index is recommended: clustered or reverse?
    Any thoughts on this would help.
    Thanks

    Reverse indexes are used for write performance.
    If you insert a lot of rows quickly and you use an incrementing value as the index key, typically a sequence, then your insert performance will benefit.
    An index is simply ordered data, so if you need to insert into the index
    9567843
    9567844
    9567845
    9567846
    9567847
    9567848
    They will all write to the same block.
    Whereas with
    3487659
    4487659
    5487659
    6487659
    7487659
    8487659
    they will not as there are big gaps in the index values.
    If anything flipping the value back may have some overhead, so read performance could be very slightly degraded, but I must emphasize that this is untested speculation.
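    The scattering effect described above is easy to see by reversing the sample keys with the standard rev utility:

```shell
# Consecutive index keys would land in the same leaf block; their
# reversed forms differ in the leading digit and scatter across the index.
for k in 9567843 9567844 9567845; do
    printf '%s -> %s\n' "$k" "$(printf '%s' "$k" | rev)"
done
# prints:
#   9567843 -> 3487659
#   9567844 -> 4487659
#   9567845 -> 5487659
```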

  • How to upgrade SAP Kernel in clustered environment

    I have SAP R/3 4.6C installed in a cluster environment, such that if one server goes down, the second clustered server takes charge of all the resources from the failing server. I also have 6 application servers.
    For this reason, I have to maintain the SAP exe files in /exe/run on the 6 application servers separately. My question is: if I have a shared directory for the exe files on the central instance and it goes down, the 6 application servers will lose their connection to those exe files. How should I then connect these 6 app servers to the exe files on the 2nd failover clustered server?
    Please let me know how to resolve this problem.

    Hi Matt,
    Thanks for the update. The cluster is installed with a virtual host, but unfortunately we have all the exe files separately on the 6 different app servers. Should we NFS-mount the exe directory on all app servers? Is NFS mounting sufficient, or are there additional steps? I heard that we have to activate sapcpe on the app servers.
    Also, after the kernel upgrade, do we have to distribute it to all the app servers? Can you kindly guide me on this issue?
    Thanks,
    Arun

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it does no caching on the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know...), and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache
    for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching
    Updated
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there is only an active-passive I/O model (no real benefit from MPIO across multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was only useful when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is pointless: just export your existing shared storage without
    any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest
    VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster.
    • Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    • Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    • iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure, you can use third-party software to replicate the 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
    There are other vendors doing this, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Need some help with 2012 R2 Clustering Permissions (with Hyper-V)

    Could someone please point me in the direction of  some sensible documentation regarding how 2012 cluster permissions work?
    I have a hosted Powershell application that sits in a C# web service wrapper. It automates everything to do with Hyper-V and works 100% of the time.
    I want the account it is running under (via impersonation) to be able to automate the creation of cluster shared volumes and the addition of VMs as clustered roles.
    I've written the code and I am confident it works, but I cannot get my head around how the permissions are supposed to be set up.
    Ideally, I want a single account per hyper-v node, that can:
    a.) Administer Hyper-V locally (there seems to be a group for this bit - this is working OK).
    b.) Create and add clustered shared volumes within the cluster.
    c.) Can add and remove cluster (VM) roles.
    d.) Can add and remove data (VHDs) on the CSV themselves.
    I'd ideally like this account not to be a domain admin, and to pare down its rights to those listed above.
    I can't even begin to explain how lost I've gotten trying to get this to work. Even if I do add a new domain admin, for instance, it doesn't even seem to be capable of adding cluster roles or deleting files from a CSV. I think I really need to take a step back.
    Is what I want to do even  possible? 
    Thank you. 

    As far as storage: Your script would create the VHDX files on C:\ClusterStorage\Volume#
    There is no special permission required for this other than perhaps local admin rights. I can create, delete, and mount VHDX files via Windows Explorer on any system in the domain within the CSV folder set.
    UNC: \\NODE\C$\ClusterStorage\Volume#
    As far as creating and deleting VMs on the cluster, use the PowerShell commands as normal.
    One option would be to use Group Policy Preferences to deliver a domain user account to the Local Administrators Group on the Hyper-V Nodes. That user account would be the one that would be used to run your required PS commands in a local admin context.
    Another option would be to provision a local admin account with the same UN/Pwd across all nodes and use that for your PowerShell needs.
    Philip Elder Microsoft Cluster MVP Blog: http://blog.mpecsinc.ca

  • JDBC Connection pools and clusters (is max connection for entire cluster?)

    Hi,
    Quick question.
    When using JDBC connection pools in WAS 6.40 (SP13) in a clustered environment, is the max connections setting the number that
    a) each application server can use, or
    b) the entire cluster can use?
    I would believe a), but I'd like it confirmed by someone else.

    Hi Dagfinn,
    your assumption is correct. Therefore, in a cluster environment you'd need to make sure the DB can open <i>Number of nodes X max connections</i>.
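Put concretely: since each node holds its own pool, the database must accept the product of the two settings. A quick sanity check, where the node count and pool maximum are made-up numbers for illustration:

```python
# Each cluster node maintains an independent JDBC pool, so the DB-side
# session limit must cover: number of nodes * max connections per pool.
nodes = 6              # hypothetical number of application server nodes
max_connections = 50   # hypothetical per-node pool maximum

required_db_sessions = nodes * max_connections
print(required_db_sessions)  # 300
```

If the database's own session limit is below that product, nodes will hit connection errors under load even though each individual pool looks correctly sized.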

  • QMASTER hints 4 usual trouble (QM NOT running/CLUSTEREd nodes/Networks etc

    All, I just posted this with some hints & workarounds for very common issues people have on this forum and keep asking about concerning the use of APPLE QMASTER with FCP, SHAKE, COMPRESSOR and MOTION. I've hit many over the last 2 years and see them coming up frequently.
    Perhaps these symptoms are fixed in FCS2 as at MAY 2007 (now). However, if not, here are some rules of thumb that I used for FCP to Compressor via a QMASTER cluster, for example. In NO special order, but they might help someone get around the issues with QMASTER V2.3, FCP V5.1.4, compressor.app V2.3.
    I saw the latest QMASTER UI and usage at NAB2007 and it looked a little more solid with some "EASY SETUP" stuff. I hope it has been reworked underneath. I guess I will know soon if it has.
    For most FCP, SHAKE, MOTION and COMPRESSOR work:
    • provide access from ALL nodes to ALL the source and target objects (files) on their VOLUMES. Simply MOUNT those volumes through the APPLE file system (via NFS) using cmd+k or finder/go/connect to server, OR use an SSAFS such as XSAN™ where the file systems are all shared over FC, not the network. You will notice the CPUs going very busy for a short while. This is the APPLE FILE SYSTEM task... I guess it's doing "spotlight stuff". This goes away after a few minutes.
    • set the COMPRESSOR preferences for "CLUSTER OPTIONS" to "Never copy source to Cluster". This means that all nodes can access your source and target objects (files) over NFS (as above). Failure to do this means LENGTHY times to COPY material back and forth, in some cases undermining the benefit gained from using clustering in the first place (reduced job times).
    • DON'T mix the PHYSICAL or LOGICAL networks in your local cluster. I don't know why, but I could never get this to work. Physical means: stick with either ETHERNET or FIREWIRE or your other option (airport etc, which will generally be way too slow to be useful). Logical means: keep all nodes on the SAME subnet. You can set this up simply in system preferences/QMASTER/advanced tab under "Use Network Interfaces". In my current QUAD I set this to use BUILT IN ETHERNET1, and on the MBPs I set it to their BUILT IN ETHERNET.
    • LOGICAL NETWORKS (Subnet): simply HARDCODE an IP address on the ETHERNET interface (for example) for your cluster nodes and the service controller. For example 3.1.1.x .... it will all connect fine.
    • Physical Networks: As above, (1) DON'T MIX firewire (IPoFW) and Ethernet (IPoE). (2) if you have more than one extra service node, USE A HUB or SWITCH. I went and bought a 10 port GbE HUB for about $HK400 (€40) and it worked fine. I was NEVER able to get a stable QMASTER system mixing FW and ETHERNET. (3) fwiw, using IP over FW caused me a LOAD of DISK errors and timeouts (I/O errors) on those DISKs that were FW400 (all gone now), but it showed this was not stable overall.
    • for the cluster controller node, MAKE SURE you set the CLUSTER STORAGE (system preferences/QMASTER/shared cluster storage) for the CLUSTER CONTROLLER NODE to be ON A SHARED volume (see above). This seems essential for SHAKE to work (if not, check the Qmaster errors in the console.app [see below]). IF you have an SSAFS like XSAN™ then just add this cluster storage on a shared file path. Note that QMASTER does not permit the cluster storage to be on a NETWORK NODE for some reason. So in short, just MOUNT the volume where the SHARED CLUSTER file is maintained for the CLUSTER controller.
    • FCP - avoid EXPORT to COMPRESSOR from the TIMELINE - it never seems to work properly (see later). Instead, EXPORT FROM THE SEQUENCE in the BROWSER - consistent results.
    • FCP - "media missing" messages on EXPORT to COMPRESSOR: seems to be a defect in FCP 5.1 when you EXPORT using a sequence that is NOT in the "root" or primary tree of the FCP PROJECT BROWSER. Simply put, if your browser has bin A (contains bin B (contains bin C (contains sequence X))), then "EXPORT TO COMPRESSOR" will FAIL (won't work) if you use it in an FCP browser PANE that is separately OPEN. To get around this, simply OPEN/EXPOSE the triangles/trees in the BROWSER PANE for the PROJECT, select the SEQUENCE you want, and "EXPORT to COMPRESSOR" from there. This has been documented in a few places in this forum, I think.
    • FCP -> COMPRESSOR -> .M2V (for DVDSP3): a few things here. EXPORTING from an FCP SEQUENCE with CHAPTER MARKERS to an MPEG2 .M2V encoding USING A CLUSTER causes errors in the placement of the chapter markers when it is imported into DVDSP3. In fact, CONSISTENTLY, ALL the chapter markers are PLACED AT THE END of the TRACK in DVDSP3 - somewhat useless. This seems to happen ALSO when the source is an FCP reference movie, although inconsistently. A simple workaround, if you have the machines, is to TURN OFF SEGMENTING in the COMPRESSOR ENCODER inspector and let each .M2V transcode run on a single service node. For the jobs at hand, just set up a CLUSTER and controller for each machine and then SELECT the cluster (myclusterA, hisclusterB, herclusterC) for each transcode job. Anyway, for me, in the time spent resolving all this I could have TRANSCODED it all on my QUAD and it would all have been done sooner! (LOL)
    • CONSOLE logs: IF QMASTER fails, I would suggest your first port of diagnosis should be /Library/Logs/Qmaster. In there you will see (on the controller node) compressor.log, jobcontroller.com.apple.qmaster.cluster.admin.log, and lots of others, including service controller.com.apple.qmaster.executorX.log (for each cpu/core and node) and qmasterca.log. All of these are worth a look; for me they helped solve 90% of my qmaster errors and failures.
    • MOTION 3 - fwiw, EXPORT USING COMPRESSOR to a CLUSTER seems to fail EVERY TIME. It seems MOTION is writing stuff out to /var/spool/qmaster
    TROUBLESHOOTING QMASTER: IF QMASTER seems buggered up (hosed), then follow these steps PRIOR to restarting your machines.
    Go read the TROUBLESHOOTING sections in the published APPLE docs for COMPRESSOR, SHAKE and "SET UP FOR DISTRIBUTED PROCESSING", and search these forums CAREFULLY... the answer is usually there somewhere.
    ELSE, try these steps....
    You'll know that QMASTER is in trouble when you
    • see that the QMASTER ICON at the top of the screen says "NO SERVICES" even though that node is started, and
    • see that the APPLE QMASTER ADMINISTRATOR is VERY SLOW after an "APPLY" (like minutes with a SPINNING BEACHBALL), or it WON'T LET YOU DELETE a cluster, or you see 'undefined' nodes in your cluster (meaning that one was shut down or had a network failure)..... all this means it's going to get worse and worse. SO DON'T submit any more work to QMASTER... best count your gains and follow this list next.
    (a) in COMPRESSOR.app, RESET BACKGROUND PROCESSES (it's under the COMPRESSOR name list box) and see if things get kick-started, but you will lose all the work that has been done up to that point in COMPRESSOR.app
    (b) if not OK, then on EACH node in that cluster, STOP QMASTER (system preferences/QMASTER/setup [set 0 minutes in the prompt and OK]). Then, when STOPPED, RESET the shared services by OPTION+CLICKING on the "START" button to reveal "RESET SERVICES". Then click "START" on each node to start the services. This has the action of REMOVING or, in the case where the CLUSTER CONTROLLER node is "RESET", of terminating the cluster that's under its control. IF so, simply go to APPLE QMASTER ADMINISTRATOR and REDEFINE it. Go restart your cluster.
    (c) if step (b) is no help, consult the QMASTER logs in /Library/Logs/Qmaster (using the console.app) for any FILE MISSING or FILE not found or FILE ERROR messages. Look carefully for the NODENAME (the machine_name.local) where the error may have occurred. Sometimes it's very chatty; other times it is not. Also look in the BATCH MONITOR OUTPUT for error messages. Often these are NEVER written (or I can't find them) in /var/logs... try to resolve any issues you can see (mostly VOLUME or FILE path issues, in my experience)
    (d) if still no joy, then try removing all the 'dead' cluster files from /var/tmp/qmaster and /var/spool/qmaster, and also the file directory that you specified above for the controller to share the clustering. For SHAKE issues, go do the same (note also where the shake shared cluster file path is - it can also be specified in the RENDER FILEOUT node's prompt).
    (e) if all this WON'T help you, it's time to get the BIG hammer out. Simply STOP all nodes if not stopped (if the status/mode is "STOPPING" then it [QMASTER] is truly buggered). DISMOUNT the network volumes you had mounted, and RESTART ALL YOUR NODES. This has the effect of RESTARTING all the QMASTERD tasks. Yes, sure, you can go in and SUDO restart them, but it is dodgy at best because they never seem to terminate cleanly; (kill -9 etc) or FORCE QUIT is what one ends up doing, and then STILL having to restart.
    (f) after restart, perform the steps from (b) again and it will usually (but not always) be right after that
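Step (c) above, grepping the Qmaster logs for missing-file and error messages, can be partly automated. A minimal sketch: the log directory is the one named in the post, but the search keywords are my own guess at what is worth flagging:

```python
import os
import re

QMASTER_LOG_DIR = "/Library/Logs/Qmaster"  # controller-node log location from the post
ERROR_PATTERN = re.compile(r"file missing|file not found|error", re.IGNORECASE)

def scan_qmaster_logs(log_dir: str = QMASTER_LOG_DIR) -> list[str]:
    """Return '<logname>:<lineno>: <line>' entries that look like errors."""
    hits = []
    for name in sorted(os.listdir(log_dir)):
        path = os.path.join(log_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, errors="replace") as fh:
            for lineno, line in enumerate(fh, 1):
                if ERROR_PATTERN.search(line):
                    hits.append(f"{name}:{lineno}: {line.rstrip()}")
    return hits

if __name__ == "__main__":
    for hit in scan_qmaster_logs():
        print(hit)
```

This only narrows down which log and node to look at; you would still read the surrounding lines in console.app to see the actual VOLUME or FILE path problem.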
    Lastly, here are some posts I have made that may help others with QMASTER 2.3 - and not the NEW QMASTER as at MAY 2007...
    Topic "qmasterd not running" - how this happened and what we did to fix it. - http://discussions.apple.com/message.jspa?messageID=4168064#4168064
    Topic: IP over Firewire AND Ethernet connected cluster? http://discussions.apple.com/message.jspa?messageID=4171772#4171772
    Lastly, spend some DEDICATED time using OBJECTIVE keywords to search the FINAL CUT PRO, SHAKE, COMPRESSOR, MOTION and QMASTER forums.
    Hope that helps.
    G5 QUAD 8GB ram w/3.5TB + 2 x 15in MBPCore   Mac OS X (10.4.9)   FCS1, SHAKE 4.1

    Warwick,
    Thanks for joining the forum and for doing all this work and posting your results for our benefit.
    As FCP2 arrives in our shop, we will try once again to make sense of it and to see if we can boost our efficiencies in rendering big projects and getting Compressor to embrace five or six idle Macs.
    Nonetheless, I am still in "Major Disbelief Mode" that Apple has done so little to make this software actually useful.
    bogiesan

  • Cisco Jabber Client for Windows 9.7 Can't Connect to Other IPSec VPN Clients Over Clustered ASAs

    Environment:
    2 x ASA 5540s (at two different data centers) configured as a VPN Load Balancing Cluster
    Both ASAs are at version 8.4(5)6
    IPSec VPN Client version: 5.0.07.440 (64-bit)
    Jabber for Windows v9.7.0 build 18474
    Issue:
      If I am an IPSec VPN user…
       I can use Jabber with another IPSec VPN user who is connected to the same ASA appliance.
       I can't use Jabber with another IPSec VPN user who is connected to a different ASA appliance than the one I am connected to.
    In the hub-and-spoke design, where the VPN ASA is a hub, and the VPN client is a spoke; if you have two hubs clustered together, how does one spoke communicate with another spoke on the other hub in the cluster? (How to allow hairpinning to the other ASA)

    Portu,
    Thanks for your quick reply.
    Unfortunately, I do not have access to the ASA logs, nor would I be permitted to turn on the debug settings asked for above. I might be able to get the logs, but it will take a while, and I suspect they wouldn't be helpful, as this ASA supports thousands of clients; separating out my connection attempts from other clients' would be difficult.
    I can, though, do whatever you want on the Linux router.  Looking over the firewall logs at the time of this problem, I don't see anything that looks suspicious such as dropped packets destined for the Windows client.
    As I said in my original post, I'm not a networking expert - by any means - but I am willing to try anything to resolve this. (But I might need a bit of handholding if I need to set up wireshark and/or tcpdump.)
    Thanks again.
