High Availability File Adapter in OSB

If you use the JCA FileAdapter in OSB, you must use the eis/HAFileAdapter connection factory to ensure that only one adapter instance picks up a given file; you must then configure a coordinator by setting the
controlDir, inboundDataSource, outboundDataSource, outboundDataSourceLocal, outboundLockTypeForWrite
parameters.
controlDir refers to the filesystem; the others refer to the DB.
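For illustration, the coordinator setup ends up looking something like this in the HAFileAdapter connection instance (weblogic-ra.xml / deployment plan); the paths and datasource names below are only placeholders:

<connection-instance>
  <jndi-name>eis/HAFileAdapter</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>controlDir</name>
        <!-- shared filesystem visible to all cluster nodes -->
        <value>/mnt/shared/fileadapter/control</value>
      </property>
      <property>
        <name>inboundDataSource</name>
        <value>jdbc/SOADataSource</value>
      </property>
      <property>
        <name>outboundDataSource</name>
        <value>jdbc/SOADataSource</value>
      </property>
      <property>
        <name>outboundDataSourceLocal</name>
        <value>jdbc/SOALocalTxDataSource</value>
      </property>
      <property>
        <name>outboundLockTypeForWrite</name>
        <value>oracle</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>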
This document http://www.oracle.com/technetwork/database/features/availability/maa-soa-assesment-194432.pdf says
"Database-based mutex and locks are used to coordinate these operations in a File Adapter clustered topology. Other coordinators are available but Oracle recommends using the Oracle Database."
Using an Oracle Database as the coordinator means using RAC; otherwise the database itself is a single point of failure and you have no real HA.
I wonder if anybody has been successful setting up HAFileAdapter without using a DB?
If a DB is required, I am considering using the good old "native" OSB File Poller instead, since it doesn't require a complicated setup to run in a cluster... but I don't want to use MFL; I would rather use the XSD-based Native Format. Hence the second question:
Is it possible to use the nXSD translator using the OSB Native File Poller - instead of the JCA Adapter?
Thank you so much for your help - it will be rewarded with "helpful/answered" points.
pierre

I wonder if anybody has been successful setting up HAFileAdapter without using a DB?
I have not tried it, but I think there are several options available, including writing your own custom mutex. Please find the details in the "Oracle File and FTP Adapters High Availability Configuration" section at this link:
http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/ha_soa.htm#sthref434
Is it possible to use the nXSD translator using the OSB Native File Poller - instead of the JCA Adapter?
When you create a JCA Adapter based Proxy Service to read the files, the nXSD translation happens before the proxy service is invoked. The JCA engine first reads the data, translates it using nXSD, and then invokes the Proxy with the translated content. (You can verify this easily by creating a JCA based file read service and opening its test console in sbconsole; it will show you the XML request instead of the native content.)
So you cannot read the text content using the File Transport of OSB and then call nXSD directly, or call an nXSD based Proxy Service.
HOWEVER, you certainly can use File Transport and nXSD in combination if that's what you want:
1. Create a Synchronous Read File Adapter with an nXSD created for it
2. Create a Business Service for that Synchronous Read JCA in OSB
3. Create a File Transport based service in OSB which will read the content of the file and then call the Business Service to read the content again (this second read applies the nXSD translation defined in step 1 and converts the content to XML).
So basically you will need to read the file twice! Once using the File Transport proxy service (which will take care of polling in a cluster) and then using the Sync Read JCA based business service (which will do the nXSD translation). To reduce the impact of reading the file twice you can use trigger files: the File proxy reads the trigger file and invokes the JCA business service to read the actual file for that trigger. A sketch of such a Synchronous Read .jca follows.
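For illustration, a minimal Synchronous Read .jca for steps 1 and 2 would look roughly like this (untested; directory, file name and port names are placeholders, and the nXSD is referenced from the adapter's WSDL):

<adapter-config name="syncReadFile" adapter="File Adapter" wsdlLocation="syncReadFile.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/FileAdapter"/>
  <endpoint-interaction portType="SynchRead_ptt" operation="SynchRead">
    <interaction-spec className="oracle.tip.adapter.file.inbound.FileReadInteractionSpec">
      <!-- placeholder values; the real file name is usually set at runtime -->
      <property name="PhysicalDirectory" value="/data/in"/>
      <property name="FileName" value="placeholder.txt"/>
      <property name="DeleteFile" value="true"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>

At runtime the file name is typically supplied per request through the jca.file.FileName transport header on the business service.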
Another alternative can be to create a class similar to the one presented here (http://blogs.oracle.com/adapters/entry/command_line_tool_for_testing), but instead of writing a file it would just return the translated content. Call this class with the native content from the File Transport proxy using a Java Callout to do the translation.

Similar Messages

  • File adapter in OSB

Hi,
I am trying to use the file adapter in OSB. My intention is that OSB should write a file to a specified location on my system. I have configured the Business Service with Messaging Service (general tab), request message as text and response message as none (messaging tab), protocol as file (transport tab), and prefix "AddUser", suffix ".txt" (file transport tab).
Whenever I give input to the proxy service it writes the file at the specified location, but the issue is that it writes the file name as AddUser3225734920456246193--12c3763f.132dd1769a3.-7f5b.txt. I don't need this part (AddUser3225734920456246193--12c3763f.132dd1769a3.-7f5b) in the file name that is written. Can anyone tell me how I can solve this?
    Any help is appreciated.
    thanks

Hi,
You can use the JCA File transport for writing the file into the specified location.
Create a file adapter in JDeveloper with all the required settings (file name, physical path location, and schema). This generates the .jca and WSDL files; copy them to Eclipse (OEPE) and then create a business service based on the .jca file there. Alternatively, import the related .jca and WSDL files in the OSB sbconsole and create the business service from them. This process will create the file in the configured location with the name specified in the .jca file.
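For illustration, with the JCA route the generated name is controlled by the FileNamingConvention property of the outbound interaction spec; a sketch (directory and names are placeholders):

<adapter-config name="writeUserFile" adapter="File Adapter" wsdlLocation="writeUserFile.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/FileAdapter"/>
  <endpoint-interaction portType="Write_ptt" operation="Write">
    <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
      <property name="PhysicalDirectory" value="/data/out"/>
      <!-- %SEQ% is replaced by a sequence number, so files come out as AddUser_1.txt, AddUser_2.txt, ... -->
      <property name="FileNamingConvention" value="AddUser_%SEQ%.txt"/>
      <property name="Append" value="false"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>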

  • How to use Oracle File Adapter in OSB

We are trying to use the File Adapter in OSB 11g. Is it supported in this version? If yes, can someone point to a document which contains the File Adapter - OSB integration steps?

According to the OSB JCA transport documentation:
http://download.oracle.com/docs/cd/E14571_01/doc.1111/e15866/jca.htm#i1106345
only these adapters are supported:
25.2.1 Adapter Support
The Oracle Service Bus JCA transport lets you interact with the following JCA-compliant adapters:
    Oracle Adapter for Oracle Applications
    Oracle JCA Adapter for AQ
    Oracle JCA Adapter for Database
    Oracle JCA Adapter for Files
    Oracle BAM Adapter (Business Activity Monitoring)
    PeopleSoft (Oracle Application Adapters 10g)
    SAP R/3 (Oracle Application Adapters 10g)
    Siebel (Oracle Application Adapters 10g)
    J.D. Edwards (Oracle Application Adapters 10g)
    See the following guides for more information on Oracle adapters:
I want to use the FTP Adapter and MQ Adapter in Oracle ESB 11.1.1.3 - can we make that work?

  • File Adapter in OSB 11g

    Hi
I am using a File Adapter created to read a file using OSB 11.1.1.3.0. But after I publish that OSB project, I get the warning below and the file is not polled by the proxy.
    ####<Feb 23, 2011 10:08:30 AM IST> <Warning> <JCA_FRAMEWORK_AND_ADAPTER> <wipro-e16351a68> <AdminServer> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <dcc2d8a908de630a:59887477:12e50cc23d3:-7ff2-0000000000000084> <1298435910015> <BEA-000000> <PollWork::run exiting, Worker thread will die>
After 600 seconds I could see a StuckThread error on the server:
    ####<Feb 23, 2011 10:15:01 AM IST> <Error> <WebLogicServer> <wipro-e16351a68> <AdminServer> <[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <dcc2d8a908de630a:59887477:12e50cc23d3:-7ff2-0000000000000092> <1298436301203> <BEA-000337> <[STUCK] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "601" seconds working on the request "weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl@dd20cd", which is more than the configured time (StuckThreadMaxTime) of "600" seconds. Stack trace:
         java.lang.Object.wait(Native Method)
         oracle.tip.adapter.file.inbound.FilesToProcess.dequeueToProcess(FilesToProcess.java:101)
         oracle.tip.adapter.file.inbound.ProcessWork.run(ProcessWork.java:269)
         weblogic.work.ContextWrap.run(ContextWrap.java:41)
         weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
         weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
Please help me understand what is going wrong in my approach.
    Thanks,
    Sesha

I even tried creating a new Work Manager and assigning it to this proxy.
I chose the MaxThreadsConstraint option with a maximum of 5 concurrent threads.
I also enabled Ignore Stuck Threads in that Work Manager.
Now I don't see any stuck thread in the server log, but at the same time the file is still not picked up by the proxy, which means the error is merely suppressed from the logs.
    Thanks,
    Sesha

  • 2012 R2 - Clustering High Availability File Server

    Hi All
    What's the difference between creating a High Availability Virtual Machine to use as a File Server and creating a 'File Server for general use' in the High Availability Wizard?
    Thanks in advance.

What's your goal? If you want a file server with no service interruption then you need a generic SMB file server built on top of a guest VM cluster. An HA VM is not going to work for you (service interruption) and SoFS is not going to work for you (workloads other than SQL Server and/or Hyper-V are not supported). So... tell us what you want to do and not what you're doing now :)
For fault-tolerant file server scenarios see the following links (make sure you replace StarWind with your specific shared storage; I'd suggest using shared VHDX). See:
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    http://technet.microsoft.com/en-us/library/cc753969.aspx
    http://social.technet.microsoft.com/Forums/en-US/bc4a1d88-116c-4f2b-9fda-9470abe873fa/fail-over-clustering-file-servers
    http://www.starwindsoftware.com/configuring-ha-file-server-on-windows-server-2012-for-smb-nas
    http://www.starwindsoftware.com/configuring-ha-file-server-for-smb-nas
    Hope this helped :)
The goal is a file server that's as resilient as possible. I still need to use a DFS namespace though. It's on a Dell VRTX server with built-in shared storage.
I'm having issues getting the High Availability wizard for a General Purpose File Server to see local drives :(
So I'm just trying to understand the key differences between creating one of these and a Hyper-V server with file services installed.

  • Configuring File adapter for OSB

    Hi,
I wish to process invoice data that will be in a CSV file. The structure of the file is as follows:
    $H$<<Supplier Name>>,<<Supplier Site>>,<<Invoice Number>>
    $D$<<Line Number>> ,<<Business Unit>>,<<Location>>,<<Store Name>>
    $D$<<Line Number>> ,<<Business Unit>>,<<Location>>,<<Store Name>>
    $H$<<Supplier Name>>,<<Supplier Site>>,<<Invoice Number>>
    $D$<<Line Number>> ,<<Business Unit>>,<<Location>>,<<Store Name>>
    $D$<<Line Number>> ,<<Business Unit>>,<<Location>>,<<Store Name>>
The intent is to configure the file adapter in such a fashion that it rejects the entire invoice if there are any errors in any of the lines that belong to that invoice. The invoices that follow the errored invoice should be processed normally. How do I configure the adapter to achieve this? Any help is greatly appreciated.

    Hi,
Configuring the adapter to reject the invalid invoice rows is not possible. The adapter should be configured as usual to read all the rows. Once the rows are read, you can do all the validation in the proxy service. If you find any invalid rows, you can write them to a separate file so that they can be corrected and fed in again, and all the valid rows can be sent for further processing. A sketch of an nXSD that reads this record layout follows below.
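Something like the following (untested; element names and namespace are just placeholders) would model the layout in nXSD, using a choice keyed on the 3-character record marker:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            targetNamespace="http://example.com/invoice"
            xmlns:tns="http://example.com/invoice"
            elementFormDefault="qualified"
            nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
  <xsd:element name="Invoices">
    <xsd:complexType>
      <!-- each record starts with a 3-character marker: $H$ or $D$ -->
      <xsd:choice minOccurs="1" maxOccurs="unbounded"
                  nxsd:choiceCondition="fixedLength" nxsd:length="3">
        <xsd:element name="Header" nxsd:conditionValue="$H$">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="SupplierName" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="SupplierSite" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="InvoiceNumber" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
        <xsd:element name="Detail" nxsd:conditionValue="$D$">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="LineNumber" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="BusinessUnit" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="Location" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="StoreName" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:choice>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>

Validation and the rejection of whole invoices would still happen in the proxy, as described above.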
    Thanks,
    Vishwa

  • Problem with File Adapter in OSb

    Hi All,
I have a scenario where I need to read 2 data files based on a trigger file. Example: an ABC.TRG file should trigger polling of the ABC1.DATA and ABC2.DATA files from the same directory. If by any chance the second data file is not read, I need to roll back the first one also. I only need to pass the metadata of my .DATA files. I am able to handle one .TRG file and one .DATA file in OSB, but I am stuck with processing 2 different .DATA files and one .TRG file. I have tried the following: using the Read operation I created a proxy service, and in my message flow I tried to get the metadata of the .DATA files via a service callout (I used SynchRead for this JCA). But when I use the service callout, I am not able to get the metadata of my .DATA files.
    Any help would be appreciated.
    Thanks,
    PK.

Can you try this:
<getWidgetsResponse>
    <getWidgetsResult>
    {
        for $widget in $yourwidgetresponseelement/widgets/widget
        return
            <widget>
                <widgetnumber>{ data($widget/widgetnumber) }</widgetnumber>
                <widgetclass>{ data($widget/widgetclass) }</widgetclass>
            </widget>
    }
    </getWidgetsResult>
</getWidgetsResponse>
(Note the braces around the for expression - without them the FLWOR is treated as literal text inside the element constructor.)
Or change
{ data($widget/widgetnumber) } and { data($widget/widgetclass) }
into
$widget/widgetnumber and $widget/widgetclass
Good luck.
I haven't tested it, but it should be something like this.

  • Base64Binary OSB and File Adapter Issue

    Hi all,
I am converting XML to a flat file, and after that I want to write the flat file using the File Adapter in OSB.
Using a Java callout we converted the binary content to a base64 string, and we are able to write the data using the file adapter.
But the file contains an extra line between each line, which my legacy system won't accept.
I printed the base64 string to the output file, and when I decode that file using a website ("safe decode as text") I get the correct file.
So where is the problem? The File Adapter? Why am I getting an extra line between each line, and how can I write safely using the file adapter?
If I use the File transport of OSB I am able to write without any issues, but I want to write to a dynamic location, which is why I am looking at the File Adapter instead of the File transport of OSB.
    Thanks
    Phani

Hi Anju,
Thanks for the response. If I decode the base64 binary data using that website and open it in Notepad++, I can't see an extra line. It is a fixed-length file, positions 1 to 513 and then the next line, so for the File transport Notepad++ shows:
ISA ............................
1 to 513 CRLF
For the File Adapter it shows the following in the same Notepad++ editor:
ISA..........................
1 to 513 CR
CRLF
That is, an extra CR and therefore an extra line.
I used MFL to convert the XML to non-XML (a fixed-length flat file); for the File transport I created another proxy with message type Text, and it is working fine.
In addition to the above problem I have a few more questions on the File transport:
How can I append to an existing file using the File transport?
How can I change the file directory dynamically?
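By the way, if the goal is for the File Adapter to write the decoded base64 bytes untouched (no native translation, hence no added line terminators), one option is to generate the adapter without native format translation, i.e. with the opaque schema, which declares the payload as base64Binary:

<schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/opaque/"
        xmlns="http://www.w3.org/2001/XMLSchema">
  <!-- the adapter base64-decodes opaqueElement and writes the raw bytes as-is -->
  <element name="opaqueElement" type="base64Binary"/>
</schema>

This is a sketch of the standard opaque option, not something from the replies above, so please verify it against your adapter version.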

  • 2 BPEL Processes doing Read at same directory in File Adapter.

    Hi,
I am using the File Adapter for a Read operation. Normally we would use a logical directory for reading files at that location. There is a design scenario where 2 BPEL processes will need to read files from the same file location.
In case such a design is implemented, will it work? And if yes, what will be the precedence for reading the file among the BPEL processes? Also, will both of them read the files, or will only one read each file?
    OR
Is there any other way that the same design can be implemented?
I am using Oracle SOA 10.1.3.4 and JDeveloper 10.1.3.4.
I will really appreciate it if someone can help in this regard.
Thanks for the needful.
    Cheers,
    Varun

    Hi Varun
As per the best practice guide:
The file and FTP adapters support the high availability feature for the active-passive topology. Perform the following steps to configure the adapter for this feature:
1. Create a shared folder on a highly available file system. This folder must have write permissions and must be accessible from all systems running the file and FTP adapters.
2. Open the pc.properties file available in the SOA_ORACLE_HOME\bpel\system\service\config directory on each node.
3. Set oracle.tip.adapter.file.controldirpath to the shared folder name. This is the shared folder that stores the control files for the adapter.
4. Restart the servers.
Note that in the case of Oracle ESB, you must rename SOA_ORACLE_HOME\integration\esb\config\pc.
You can use a trigger file (.tg) to notify the second process that it can pick up the file which the first process has written into the folder.
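For illustration, step 3 boils down to a single line in pc.properties on every node (the shared path is a placeholder):

# pc.properties - point all nodes at the same shared control directory
oracle.tip.adapter.file.controldirpath=/mnt/shared/fileadapter/control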
    Regards
    A

  • File Adapter vs BPEL interaction issue on high availability environment

    Hi all,
I would really appreciate your help on a matter I'm facing with a composite (SCA) deployed in a clustered environment configured for high availability. To help you better understand the issue, I'll briefly describe what my composite does. Composite instances are started by an Inbound File Adapter which periodically polls a directory to check whether any file with a well-defined naming convention is available. The adapter is not meant to read the file content but only its properties. Furthermore, the adapter automatically makes a backup copy of the file and doesn't delete the file. The properties read by the adapter are provided to a BPEL process, which obtains them using the various "jca.file.xyz" properties (configurable in any BPEL receive activity) and stores them in some of its process variables. How the BPEL process uses these properties is irrelevant to the issue I'd like to bring to your attention.
The interaction just described between the File Adapter and the BPEL process has always worked in other, non-HA environments. The problem I'm facing is that this interaction stops working when I deploy the composite in a clustered environment configured for high availability: the File Adapter succeeds in reading the file, but no BPEL process instance gets started and the composite instance gets stuck (that is, it keeps running until you manually abort it!).
Interestingly, if I put a Mediator between the File Adapter and the BPEL process, the Mediator instance gets started, that is, the file's properties read by the adapter are passed to the Mediator, but then the composite gets stuck again because even the Mediator doesn't seem to be able to initiate the BPEL process instance.
I think the problem lies in the way I configured either the SOA infrastructure for HA, the File Adapter, or the BPEL process in my composite. To configure the adapter, I followed the instructions given here:
    http://docs.oracle.com/cd/E14571_01/integration.1111/e10231/adptr_file.htm#BABCBIAH
but maybe I missed something. On the other hand, I didn't find anything about BPEL configuration for HA with SOA Suite 11g (all the material I found refers to SOA Suite 10g).
I've also read in some posts that in order to use the DB as a coordinator between the file adapters deployed on the different nodes of the cluster, the DB must be a RAC! Is that true, or is it possible to use another type of Oracle DB?
Please let me know if any of you have already encountered (and solved :)) a problem like this!
    Thanks in advance,
    Bye!

    Hi,
thanks for your prompt reply. Anyway, I had already read through that documentation and tried all the settings suggested in it, without any luck! I'm thinking the problem could be related to the Oracle DB used in the clustered environment, which is not RAC, while all the documentation I read about high availability configuration always refers to a RAC DB. Does anyone know if a RAC Oracle DB is strictly needed for file adapter configuration in an HA cluster?
    Thanks, bye!
    Fabio

  • Synchronous File Read in a High Availability Scenario

    Hi All,
We have Oracle SOA Suite 10.1.3.4, which is a High Availability instance. As per Oracle's doc ID 730515.1 we have edited the bpel.xml file with the Adapter cluster property, but only in the cases where a BPEL process was polling for a file and instantiating the process, to avoid the race condition.
However, we have not made this change for BPEL processes which contain a synchronous read. As I understand it, such processes (which contain a synchronous read) are already instantiated before the file is read, and hence the race scenario does not exist.
Please correct me if my understanding is incorrect. Also, will I need to make any changes to the process in such a case?
    Thanks and regards.

    Hi Marcio,
The Oracle document I mentioned, available on Metalink, states:
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    Applies to:
    Oracle9i AS Integration Platform - Version: 10.1.3.3
    Information in this document applies to any platform.
    Goal
    You have installed BPEL cluster. However, you expect that adapters doing the same work on two different nodes might encounter a race condition. So you would like to configure a singleton adapter. This note describes how to do it.
    Solution
    1. In collaxa-config.xml change from
    <property id="clusterName">
    <name>Cluster Id</name>
    <value>machine_name:port</value>
    <comment>
    to
    <property id="clusterName">
    <name>Cluster Id</name>
    <value>bpelCluster</value>
    <comment>
    The value: 'bpelCluster'
    can be anything but has to be the same on both instances.
    2. Multicast host and port in jgroups-properties.xml have to be same on both instances.
    3. To enable a singleton adapter add the following property in bpel.xml of the process using the adapter in question:
    <property name="clusterGroupId">adapterCluster</property>
    where the actual value - 'adapterCluster' in this case can be anything but has to be different from the value used before (bpelCluster) when configuring BPEL cluster. The reason for that is that BPEL engine and an adapter will use different communication channels.
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I would like to know if this is applicable in case we have a BPEL process which contains a synchronous read.
    Thanks and Regards.

  • BPEL High Availability for various adapter configurations

    Hello,
I have created a BPEL process that uses the DBAdapter to poll for inserts into a table using a sequence file on the file system. This is one of the options for the DBAdapter. This adapter is called from a "receive" activity.
    My understanding is that a bpel internal process schedules the adapter to check the sequence file at intervals. When the sequence value is read a query is executed to determine if there are inserts/updates. If so, a new process instance is created.
Now, how would this work in an HA situation? (Assume a load balancer in front of two BPEL instances with Oracle HTTP Server (OHS).)
Are there two processes polling the sequence file, one in each BPEL instance?
Are the requests funnelled through the Load Balancer address? (In effect being made from the application server tier back out to the load balancer at the "front".)
    This is probably a specific example of a general question about event-driven processes and high availability.
Thanks in advance; this is important for a customer who has a large volume of transactions that require multiple nodes to manage the load and who will be using the FileAdapter and DBAdapter to receive the events.

    No answer

  • OSB Error in deploying a project with jca file adapter

    Hi,
I am facing an issue where I am getting an error when deploying a service from Eclipse. I am using an OSB/SOA 11.1.1.5 two-node cluster. I have an OSB service where I am writing to a file using the file adapter. I created a composite with a file adapter to write a file, then imported the .jca, .wsdl and composite files into Eclipse and generated a business service out of the .jca file. When I deploy the project from Eclipse I get the error below and am not able to deploy the project.
    Conflicts found during publish.
    Invalid JCA transport endpoint configuration, exception: javax.resource.ResourceException: Cannot locate Java class oracle.tip.adapter.file.outbound.FileInteractionSpec
    Cannot locate Java class oracle.tip.adapter.file.outbound.FileInteractionSpec
I have the FileAdapter deployed and targeted to the OSB managed-server cluster. Here is a snippet of config.xml:
    <app-deployment>
        <name>FileAdapter</name>
        <target>SOA_Cluster,OSB_Cluster</target>
        <module-type>rar</module-type>
        <source-path>/app/oracle/fmw/Oracle_SOA1/soa/connectors/FileAdapter.rar</source-path>
        <deployment-order>321</deployment-order>
        <plan-dir xsi:nil="true"></plan-dir>
        <plan-path>/app/oracle/shared/SOA_Cluster/dp/FileAdapterPlan.xml</plan-path>
        <security-dd-model>DDOnly</security-dd-model>
        <staging-mode>nostage</staging-mode>
      </app-deployment>
Also, in the console -> Deployments -> FileAdapter -> Targets, OSB_Cluster is checked as well.
Here is the .jca file:
    <adapter-config name="writeFile" adapter="File Adapter" wsdlLocation="writeFile.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
      <connection-factory location="eis/HAFileAdapter"/>
      <endpoint-interaction portType="Write_ptt" operation="Write">
        <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
          <property name="PhysicalDirectory" value="C:\ORACLE"/>
          <property name="Append" value="false"/>
          <property name="FileNamingConvention" value="a_%SEQ%.doc"/>
          <property name="NumberMessages" value="1"/>
        </interaction-spec>
      </endpoint-interaction>
    </adapter-config>
Any idea what might be wrong?
    Thanks.

The FileAdapter JNDI entry is already configured in the console. The error I am getting occurs during deployment:
    Conflicts found during publish.
    Invalid JCA transport endpoint configuration, exception: javax.resource.ResourceException: Cannot locate Java class oracle.tip.adapter.file.outbound.FileInteractionSpec
    Cannot locate Java class oracle.tip.adapter.file.outbound.FileInteractionSpec
    Thanks

  • Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage

    Hi Guys,
Need your expertise regarding Hyper-V high availability. We set up 2 Hyper-V 2012 hosts in our infrastructure for our domain consolidation project. Unfortunately, we don't have the hardware storage that is said to be a requirement for creating a failover cluster of Hyper-V hosts to implement HA. Here's the setup:
    Host1
    HP Proliant L380 G7
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Host2
    Dell PowerEdge 2950
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Storage
    Dell PowerEdge 6800
    Windows Server 2012 Std
    File and Storage Services installed
I'm able to configure the new Shared Nothing Live Migration feature - I'm able to move VMs back and forth between my hosts without shared storage. But this is a planned, proactive approach. My concern is to have my Hyper-V hosts become highly available in the event of a system failure. If host1 dies, the VMs should move to host2 and vice versa. In setting this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but upon validation it says "No disks were found on which to perform cluster validation tests." Is it possible to cluster them using just a regular Windows file server? I've read about SMB 3.0 and I've configured it as well; I'm able to save VMs on my file server, but I don't think that my Hyper-V hosts are already highly available.
    Any feedback and suggestions or recommendation is highly appreciated. Thanks in advance!

Your shared storage is a single point of failure in this scenario, so I would not consider the whole setup a production configuration... The setup is also both slow (all I/O travels down the wire to the storage server; running VMs from DAS is far faster) and expensive (a third server plus an extra Windows license). I would think twice about what you're doing and either deploy the built-in VM replication technology (Hyper-V Replica) and applications' built-in clustering features that do not require shared storage (SQL Server Database Mirroring, for example - BTW, what workload do you run?), or use third-party software that creates fault-tolerant shared storage from DAS, or invest in physical shared storage hardware (an HA one, of course).
    Hi VR38DETT,
Thanks for responding. The hosts will host a domain controller (one on each host), web filtering software (Websense), anti-virus (McAfee ePO), WSUS and an audit server at the moment. Does Hyper-V Replica give some sort of "high availability" to VMs or Hyper-V hosts? Also, is a cluster required in order to implement it? Haven't tried that, but it's worth a try.

  • File Adapter Transactional in SOA and OSb

    Hi All,
Is the file adapter transactional in SOA and OSB? I have a requirement where an inbound file adapter polls for a file and the file is consumed by a topic. If the topic is down, how can I make sure that the same file still goes to the topic when it is up again (i.e., how can I make it transactional)? I want to do this in both SOA and OSB.
    Any help would be appreciated.
    Thanks,
    Kumar.

    Hi Kumar,
    File Adapter itself is NON TRANSACTIONAL...
    4.2.9 Nontransactional
    The Oracle File Adapter picks up a file from an inbound directory, processes the file, and sends the processed file to an output directory. However, during this process if a failover occurs in the Oracle RAC back end or in an SOA managed server, then the file is processed twice because of the nontransactional nature of Oracle File Adapter. As a result, there can be duplicate files in the output directory.
    http://docs.oracle.com/cd/E28280_01/integration.1111/e10231/adptr_file.htm#BABIEBJF
    Cheers,
    Vlad
