Verity and Clustered ColdFusion Servers

Hi All,
I am currently looking at using Verity with our clustered
ColdFusion servers but have run into an issue. We currently have
three ColdFusion servers, but the version of Verity included with
ColdFusion only allows a connection from one server.
Has anyone else been in the situation of needing more than
one ColdFusion server talking to the same Verity server?
If so, how did you solve it? (We have looked at purchasing
the full version of Verity, but it is too expensive.)
Thanks
Paul

I'll restate this in simpler fashion.
Can you retrieve MX 7 Verity search results on an MX 6.1
server?
Can this be done via CFHTTP or any other similar method?
Thanks
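No reply in this thread addressed the cross-version question, but the CFHTTP idea amounts to having the MX 7 server run the Verity search and return the results over plain HTTP, which any other server (MX 6.1 or otherwise) can then fetch. A rough sketch of that fetch in Java; the host name, the search.cfm page, and its criteria parameter are made-up placeholders, not an actual ColdFusion API:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class RemoteVeritySearch {
    public static void main(String[] args) throws Exception {
        // search.cfm on the MX 7 box is assumed to run <cfsearch> and print the results
        // (e.g. as WDDX or plain text); this is exactly what CFHTTP would request in CFML.
        String criteria = URLEncoder.encode("clustered coldfusion", "UTF-8");
        URL url = new URL("http://mx7-server/search.cfm?criteria=" + criteria);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw response; parse/deserialize as needed
            }
        }
    }
}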

Similar Messages

  • Configuring Servers and Clusters

    Hi All,
    Whenever I am creating an HFM application, it asks for a server. I have tried "Configuring Servers and Clusters", but I am unable to configure it. Please assist me.
    Regards,
    Prabhu

    For your second question, I think it is E, Coherence Web Edition,
    because Coherence provides the ability to scale your application and also offers high availability by replicating the session objects to Coherence nodes/caches.
    Thanks,
    Vijaya
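    (Not part of the original reply, just an illustration: Coherence*Web plugs into the container and replicates HTTP sessions transparently, but the underlying mechanism is a shared named cache. A minimal sketch of that basic Coherence cache API, assuming coherence.jar is on the classpath; the cache name and keys are arbitrary:)

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class SessionCacheSketch {
        public static void main(String[] args) {
            // Joins (or starts) a Coherence cluster node and obtains a named cache;
            // with a replicated/distributed scheme, other nodes see the same entries,
            // which is what gives session data its high availability.
            NamedCache cache = CacheFactory.getCache("session-data");
            cache.put("user42:cart", "itemA,itemB");
            System.out.println(cache.get("user42:cart"));
            CacheFactory.shutdown();
        }
    }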

  • What is the Ideal Production Setup For One Admin and 4 Managed Servers

    Dear Experts
    I will be starting a production setup including one Admin Server and 4 Managed Servers in a single domain.
    I am thinking of creating a single-node environment (no clusters), as the machine has the following configuration:
    OS : Windows Server 2008 R2 Datacenter
    RAM : 48 GB
    System Type : 64 bit
    Processor : Intel Xeon, 4 processors [email protected]
    Can you please let me know if this configuration would suffice for the 4 Managed Servers if I assign Xmx and Xms as 4096 and Heap Space as 1024 to all the Managed Servers.
    It is very urgent and I need to convey to the Infrastructure team whether hardware procurement is required.
    We are looking at somewhere around 300 concurrent users (maximum load) and 100 (minimum load) at a given point of time.
    Please reply ASAP.
    Thanks in advance
    Edited by: Abhinav Mittal on Apr 23, 2013 7:58 PM
    Edited by: Abhinav Mittal on Apr 23, 2013 8:03 PM

    Heap size must be calculated according to the applications that are deployed on each JVM.
    With no deployments, you don't need more than about 256 MB of heap for a Managed Server and 512 MB for the Admin Server. The bigger your heap, the longer your garbage collections will take, so if you can avoid oversizing it, do so.
    Kind regards,
    Gabriel Abelha
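    As a sanity check on whatever sizes you settle on, each JVM can report the heap it actually received. A small sketch (run it inside the managed server, e.g. from a test JSP/servlet, or standalone with the same -Xms/-Xmx flags):

    public class HeapReport {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024L * 1024L;
            // maxMemory() reflects -Xmx; totalMemory() is what the JVM has claimed so far (>= -Xms).
            System.out.println("Max heap (-Xmx): " + rt.maxMemory() / mb + " MB");
            System.out.println("Total heap     : " + rt.totalMemory() / mb + " MB");
            System.out.println("Free heap      : " + rt.freeMemory() / mb + " MB");
            // For the sizing question above: 4 managed servers x 4096 MB heap, plus an admin
            // server and JVM/OS overhead, still fits comfortably in 48 GB of RAM.
        }
    }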

  • Could not connect to any JRun/ColdFusion servers on host localhost

    I am a new CF admin and I am trying to upgrade my Apache.  I am following the instructions in http://help.adobe.com/en_US/ColdFusion/9.0/Admin/WSc3ff6d0ea77859461172e0811cbf364104-7fd9.html but am getting the error "Could not connect to any JRun/ColdFusion servers on host localhost".
    I am running CF8 on SUSE Linux and trying to upgrade Apache to version 2.2.22.  I am running Apache on server-1 and ColdFusion on server-2.
    I tried running the following on server1:
    /data/jrun4/bin/wsconfig -server cf8-2 -ws Apache -bin /data/web3/apache-2.2.22-general-cf/bin/httpd -script /data/web3/apache-2.2.22-general-cf/bin/apachectl -dir /data/web3/apache-2.2.22-general-cf/conf -coldfusion -v
    but got the error.
    So then I tried installing apache on server-2 and running:
    /data/cf8/bin/wsconfig -server cf8-2 -ws Apache -bin /data/web3/apache-2.2.22-general-cf/bin/httpd -script /data/web3/apache-2.2.22-general-cf/bin/apachectl -dir /data/web3/apache-2.2.22-general-cf/conf -coldfusion -v
    I got the exact same error.
    CF is definitely up and running.
    What am I doing wrong?

    Hi Kiran,
    Yes, ColdFusion is running and I have root access. You need
    to be root just to get the installer to run and to execute the
    Apache connector to produce the error I pasted into my message. My
    firewall is disabled, as is SELinux. I'm not sure how to "Write
    small program to check socket creation..."
    I'm reading through some of the tortured things Steven Erat
    had to do to get CF7 running on FC6 here:
    http://www.talkingtree.com/blog/index.cfm/2006/12/6/Running-ColdFusion-MX-7-on-Fedora-Core-6-Linux
    I suspect I'm running into one of these snags. I was just
    wondering if anyone knew whether RHEL5 was officially supported yet, or
    more to the point, Apache 2.2?
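    For the "write a small program to check socket creation" suggestion, something like this is usually enough. The host and port below are placeholders; use the CF/JRun host and the JRun proxy port that wsconfig is trying to reach (check jrun.xml for the actual port):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class SocketCheck {
        public static void main(String[] args) throws Exception {
            String host = args.length > 0 ? args[0] : "server-2";           // placeholder host
            int port = args.length > 1 ? Integer.parseInt(args[1]) : 51800; // placeholder port
            try (Socket s = new Socket()) {
                // 5-second timeout; throws an exception if the port is unreachable
                // (wrong port, firewall, or ColdFusion/JRun not listening).
                s.connect(new InetSocketAddress(host, port), 5000);
                System.out.println("Connected OK to " + host + ":" + port);
            }
        }
    }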

  • Apache HTTP proxying for load balancing only to a group of non-clustered WL servers

    Hi,
    We're running WL Server 6.1 SP 2 on Solaris 2.8.
    For the Apache HTTP proxy plugin, if you use the WebLogicCluster httpd.conf option, do the WL servers you want to load balance across have to be part of a WebLogic cluster (if you are prepared to do without failover, as I know it would need to be a proper WL cluster to replicate session info for failover)? Can you load balance across a group of non-clustered WL servers, and maintain the user session to the one WL server so that it doesn't switch between servers on alternate requests for the same user session, or must the servers be configured as a WebLogic cluster?
    Paul
    We find that if you have a collection of WL servers that are not configured as a cluster, it will load balance alternate requests to each server, but it will not pin a user to a single machine according to their session, so for 2 servers, 2 different sessions get created, one on each machine.
    Is this because it doesn't normally do this, but sends the user alternately to a primary then secondary, which works in a cluster because the session is replicated? I thought the secondary was only used when the primary failed.

    > We're running WL Server 6.1 SP 2 on Solaris 2.8.
    > For the Apache HTTP proxy plugin, if you use the WebLogicCluster httpd.conf option,
    > do the WL servers you want to load balance across have to be part of a WebLogic
    > cluster (if you are prepared to do without failover, as I know it would need to be
    > a proper WL cluster to replicate session info for failover)? Can you load balance
    > across a group of non-clustered WL servers, and maintain the user session to the
    > one WL server so that it doesn't switch between servers on alternate requests for
    > the same user session, or must the servers be configured as a WebLogic cluster?

    You don't have to use the clustering option. To get failover, you'll have to
    use the JDBC persistence option of WL.

    > We find that if you have a collection of WL servers that are not configured as a
    > cluster, it will load balance alternate requests to each server, but it will
    > not pin a user to a single machine according to their session, so for 2 servers,
    > 2 different sessions get created, one on each machine.
    >
    > Is this because it doesn't normally do this, but sends the user alternately to a
    > primary then secondary, which works in a cluster because the session is replicated?
    > I thought the secondary was only used when the primary failed.

    The primary/secondary stuff requires clustering. If Apache continues to
    "load balance" after the first request, you need to either use JDBC session
    persistence or use a different load balancer (like mod_jk for Apache or a
    h/w load balancer with support for sticky sessions).
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "Paul Hammond" <[email protected]> wrote in message
    news:[email protected]...

  • Session Failover and Clustering

    Let's say that we have two WebServers (NES) with the WebLogic plugin (say WS1 and WS2) and a cluster with two WebLogicCommerce AppServers (say AS1 and AS2). Let's assume that each WebServer and AppServer runs on its own machine (total: 4 machines). Now, let's assume that the WebServer "obj.conf" files (on both WS1 and WS2) are set up so that they point to the servers in the cluster (WebLogicCluster="AS1:7601,AS2:7601").
    When a new request comes in to one of the WebServers (say WS1), the plugin will route it to one of the AppServers using round-robin (say AS1). A session will now be initiated in AS1 and it sends a response back to the client.
    Question 1: How does the other proxy in WS2 know that all future requests for this client need to be forwarded to AS1?
    Question 2: For failover, does the cluster automatically replicate the session state existing in AS1 onto AS2 before sending the response (does AS2 automatically become the secondary)?
    Now let's assume that AS1 crashes/dies. When the next request from the client comes to WS1 or WS2, they will forward it to AS1 (assuming that WS2 knows about the client session in AS1). Since AS1 has crashed, will the client eventually get a timeout error message?
    Question 3: To ensure that the session failover happens so that AS2 gets the request instead (becomes the primary), do we need to set up a WebLogic Proxy Server? If so, why can't the plugins for NES provide the failover themselves?
    Thank you very much for your help!
    Giri

    Thank you very much for your responses. It has been very helpful and I am clear on the session/clustering stuff. I have new questions on EJB and clustering which I will post as a separate thread.
    Giri

    "Jason Rosenberg" <[email protected]> wrote:
    > And also, if the browser has cookies disabled, it is important for the app server to embed the WebLogicSession info via URL rewriting, otherwise the proxy or NES will not be able to route the session properly.
    > So, in all http responses, be sure to pass the URL string through response.encodeURL(). This will do the right thing depending on whether cookies are enabled or not.
    > I've only just recently figured this out. Haven't actually tried it all out yet, so forgive me if it is not quite this simple, but this seems to be the gist of it...
    > Jason
    >
    > "Justin James" <[email protected]> wrote in message news:[email protected]...
    > > Giri,
    > > I'm not a WebLogic representative, but I tried to replicate this proxying service inside a load-balancing switch (BigIP) and I discovered a few things. The WebLogic server sets a cookie (WebLogicSession) that the web server plugin uses to manage the proxying. The cookie (found in the HTTP header information) contains encoded information about the primary and secondary application servers that the session is bound to. Any web server can read the cookie to determine how to dispatch the request to the primary server. If the primary server does not respond, the request is forwarded to the secondary server by the plugin. Regardless of cluster size, the session is replicated to only one other server.
    > >
    > > <[email protected]> wrote:
    > > > Giri Alwar wrote:
    > > > > I need a couple of clarifications. First, with regard to Question 1, I understand that plugins provide load balancing and failover, but what I was really asking is how the plugin in WS2 knows that a session for the client has been initiated in AS1 as a result of WS1 sending the initial request to AS1. If WS2 gets a future request from the client, it needs to know this to send the request to AS1. Does the plugin talk to the cluster to find out if there is a primary and who it is?
    > > > > I should have clarified that my other questions pertain to in-memory replication. If I do not persist the session in a database, does the client get an error message (timeout) when AS1 goes down (assuming we use NES with the WebLogic plugin)?
    > > > Plugins know how to route requests based on cookies. If it can't reach the primary server it will automatically try the secondary. In your case it doesn't matter if it reaches proxy 1 or proxy 2, it is still the same.
    > > > - Prasad
    > > > > To prevent this error message and achieve failover, do I need to use WebLogic as the proxy server? If so, why isn't the NES plugin doing this?
    > > > > Thanks.
    > > > > Giri
    > > > > Prasad Peddada <[email protected]> wrote:
    > > > > > Giri Alwar wrote:
    > > > > > > Let's say that we have two WebServers (NES) with the WebLogic plugin (say WS1 and WS2) and a cluster with two WebLogicCommerce AppServers (say AS1 and AS2). Let's assume that each WebServer and AppServer runs on its own machine (total: 4 machines). Now, let's assume that the WebServer "obj.conf" files (on both WS1 and WS2) are set up so that they point to the servers in the cluster (WebLogicCluster="AS1:7601,AS2:7601").
    > > > > > > When a new request comes in to one of the WebServers (say WS1), the plugin will route it to one of the AppServers using round-robin (say AS1). A session will now be initiated in AS1 and it sends a response back to the client.
    > > > > > > Question 1: How does the other proxy in WS2 know that all future requests for this client need to be forwarded to AS1?
    > > > > > The plugin takes care of load balancing and failover; it is all transparent to the client.
    > > > > > > Question 2: For failover, does the cluster automatically replicate the session state existing in AS1 onto AS2 before sending the response (does AS2 automatically become the secondary)?
    > > > > > If you have only two servers, yes, it automatically becomes your secondary. Yes, replication is synchronous.
    > > > > > > Now let's assume that AS1 crashes/dies. When the next request from the client comes to WS1 or WS2, they will forward it to AS1 (assuming that WS2 knows about the client session in AS1). Since AS1 has crashed, will the client eventually get a timeout error message?
    > > > > > If you are using some kind of persistence then you will be able to retrieve the session information, and since the server is not available the request will automatically fail over.
    > > > > > > Question 3: To ensure that the session failover happens so that AS2 gets the request instead (becomes the primary), do we need to set up a WebLogic Proxy Server? If so, why can't the plugins for NES provide the failover themselves?
    > > > > > No, you need only one: either NES or the WebLogic proxy.
    > > > > > > Thank you very much for your help!
    > > > > > > Giri
    > > > > > - Prasad
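    Jason's encodeURL() advice looks like this in servlet code (a generic sketch, not WebLogic-specific; the container only appends the session ID to the URL when the client isn't returning a session cookie):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class CartServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            req.getSession(true);               // make sure a session exists
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            // encodeURL() rewrites the link to carry the session ID when cookies are
            // disabled, so the proxy plugin can still route the request to the right server.
            out.println("<a href=\"" + resp.encodeURL("checkout.jsp") + "\">Checkout</a>");
        }
    }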

  • Clustered blade servers

    We're currently investigating a hardware refresh of our production landscape and we've come across a bit of a stumbling block. For the record, we're not particularly big, currently running ERP 2005 with a database instance and the central instance on dual quad-core x64 1.83GHz processors with 16GB RAM each and two application servers on four quad-core x64 pre-Core-architecture processors with 8GB RAM each. It's all hooked together on a gigabit Ethernet switch (two gigabit connections per server using teaming) and all storage is internal to each server except for the database instance which has an external SCSI RAID array.
    We're now upsizing and are finding ourselves once again on the very limits of what the x86/x64 systems available to us can offer, except this time, we're not sure it's enough. We're investigating blade servers as a potential way to move forward but the documentation of how a "clustered blade" server works is sketchy at best. Itanium, for us, doesn't really hold the answer; sure you can fit 128 processors into the same server but we don't have £1m to spend!
    So, my questions are:
    What actually is a clustered blade server?
    How does it work? Do I assign physical blades to a logical server and they all work together as one much faster server, or is it just for failover purposes?
    Is it still possible to use SAP's recommendations about keeping the operating system, SAP executables, the database, the database log file and the swap space (etc.) on physically separate drives if we move to Fibre Channel based storage, or is that no longer something to worry about? Does it even matter any more?
    Do SAP have any recommendations with regard to clustered blade servers?
    Has anyone ever had experience of using clustered blade servers to run ERP 2005? Anything I should know?
    The impression I get of blade servers is that they are to servers what RAID is to hard disk drives, but I'd just like a bit of advice from anyone who can give it before I start spending money!
    Many thanks,
    Rob Moss
    Mark Two
    Bolton, UK

    You can export a VM with snapshots and import those to a later version.
    If your snapshots were taken while the VM was running, then the running VM state files (the .bin / .vsv) need to be deleted, as the running memory state is never supported when upgrading.
    Now, the other issue.  2012 R2 cannot import a VM from 2008 R2.  Only 2012 can import any of the 2008 releases, and the 2012 R2 release. (Yes, there was a fundamental change made, and 2012 was the cross-over release.)
    The way to handle this is:
    1) Try copying the entire VM folder (without exporting) and import to 2012 R2 (test this please).
    2) Use 2012 to import the export from 2008 R2, then upgrade to 2012 R2.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.

  • WLI and JMS and clusters

    Hi,
    We are deploying an HA solution which includes two servers running WLPI and
    two JMS servers. Each JMS server is associated (target) with one WLPI
    server.
    I want to cluster these servers. However, JMS clustering limitations mean
    that a given JMS destination can only be defined on one server within the
    cluster. So I need to change the destinations used internally by WLPI in
    order to define the cluster. Is that possible? How? Am I missing something?
    Thanks,
    Philippe
    Philippe Fajeau [email protected]
    System Architect Tel: (604) 484-6675
    Centrata Fax: (604) 685-5252

    Philippe,
    See my response today in weblogic.interest.developers. Yes, you can cluster, but
    for high availability you will have to write your workflows a particular way,
    until the recovery code in WLI becomes cluster-safe.
    Richard
    "Philippe Fajeau" <[email protected]> wrote:
    > Hi,
    > We are deploying an HA solution which includes two servers running WLPI and two JMS servers. Each JMS server is associated (target) with one WLPI server.
    > I want to cluster these servers. However, JMS clustering limitations mean that a given JMS destination can only be defined on one server within the cluster. So I need to change the destinations used internally by WLPI in order to define the cluster. Is that possible? How? Am I missing something?
    > Thanks,
    > Philippe
    > Philippe Fajeau [email protected]
    > System Architect Tel: (604) 484-6675
    > Centrata Fax: (604) 685-5252

  • Clustered OAM Servers go only to one server

    The more we do clustering of our OAM Managed Servers... the more questions we seem to come across....
    With OAM 11g, DB 11.2.0.1, RHEL5.6, and WLS 10.3.5... we have clustered the managed servers and all that displays, starts, stops as expected. But when we test the availability of the OAM Servers... only one is ever accessed.
    With 2 hosts (H1 and H2) and the OAM Servers on both running, when we try to access a protected resource we are sent to H1's OAM server every time. To test the clustering of the servers, we shut down the OAM server on H1 in hopes Oracle would recognize it and send us to the running OAM server on H2 -- at least that's what we thought clustered servers "should" do.
    Much to our dismay, we were directed to the OAM server on H1, thereby receiving an "unable to connect" error... After a few more tries... we are actually able to get directly into the protected resource without having to enter the SSO login page.
    The only thing we can determine is that the first time we try to access the resource we are stopped because the system wants us to go to the SSO login, which it cannot find because it is only willing to look at H1 for the OAM server -- which is down. Continuing to try to access the page and then getting in without the SSO login... tells us that the system has determined that the OAM server is down and therefore there is nothing to stop us from accessing the protected resource.
    There are a few basic questions that come from this... but the one we need answered is... why is it not rolling over to the other running OAM server on the cluster? Where can this be defined?
    thanks for any assistance...

    See my comments below:
    Hosts -- 4 hosts total:
    Testing Host with protected resource app:
    Host: x.x.x.134
    App: servletproject
    OAM Host1: x.x.x.216
    OAM Host2: x.x.x.217
    HAProxy Host: x.x.x.190
    HAProxy conf file (just the LB)
    listen LB x.x.x.190:5575 This is not needed as all OAP (5575) traffic must go directly to each OAM Node and not through a LB. REPLACE THIS WITH x.x.x.190:14101 or x.x.x.190:443
    mode tcp
    balance roundrobin
    server host1 x.x.x.216:5575 REPLACE THIS WITH x.x.x.216:14101 (the listen port of your oam1 managed server)
    server host2 x.x.x.217:5575 REPLACE THIS WITH x.x.x.217:14101 (the listen port of your oam2 managed server)
    OAM Console:
    Server Instances:
    original settings before haproxy included: revert this change. You must use port 14101 and go through the LB for this. Using 5575 in this way will not work
    host IP: host1/host2
    port: 14101
    settings after haproxy included:
    host IP: x.x.x.190
    port: 5575
    OAM Proxy values -- did not change with haproxy:
    Port: 5575
    Proxy Server ID: AccessServerConfigProxy
    Mode: Open
    Access Manager Settings: This value must be using the HAProxy host on port 14101 - This explains why when you shutdown host1's OAM Server you see errors trying to connect to host1.
    Load Balancing:
    Host: host1
    OAMPort: 14101
    Protocol: HTTPS
    Mode: External
    Testing Host ObAccessClient.xml excerpt: Once you make the above change in the OAM Console adding back host1/host2 and port 14101 for your instances, this setting should be automatically updated but it's worth validating in the Webgate configuration
    ListName="primaryServer1">
    <NameValPair
    ParamName="host"
    Value="192.168.2.190"></NameValPair>
    <NameValPair
    ParamName="port"
    Value="5575"></NameValPair>
    <NameValPair
    ParamName="numOfConnections"
    Value="1"></NameValPair>
    This is a great way to communicate your configuration. Go ahead and re-post when you've made the changes I reference above.
    Thanks!
    Seth
    Edited by: SethMalmberg on Mar 9, 2012 9:06 AM

  • How can I configure Distributed Cache servers and front-end servers for a streamlined topology in SharePoint 2013?

    My question is regarding SharePoint 2013 farm topology. If I want to go with a streamlined topology and have 2 Distributed Cache and RM servers + 2 front-end servers + 2 batch-processing servers + a clustered SQL Server, then how will the Distributed Cache servers
    connect to the front-end servers? Can I use the Windows 2012 NLB feature? If I use NLB, do I need to install NLB on all Distributed Cache servers and front-end servers and split out services? What would the configuration be for my scenario?
    Thanks in advance!

    For the Distributed Cache servers, you simply make them farm members (like any other SharePoint servers) and turn on the Distributed Cache service (while making sure it is disabled on all other farm members). Then, validate that no other services (except for
    the Foundation Web service, due to ease of solution management) are enabled on the DC servers and that no end-user requests or crawl requests are being routed to the DC servers. You do not need/use NLB for DC.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • In Dreamweaver CS3, can I save to Test AND Production FTP servers when I save a document?

    Hi, and thank you for reading this.
    I've set up production and test FTP servers in Dreamweaver CS3. It either saves to the production or the test server when I press COMMAND+S (CONTROL on PC), but I'd like the file to be saved on both simultaneously. Pressing "Sync the Servers" won't work since the servers are not mine and they are a bit different; the files are not the same. I just would like to be able to save one file at a time to both the production and test servers with one click.
    I've been searching the Internet but could not find anything other than basic FTP setup. Any help would be appreciated. Thanks!
    Anton

    Hi John, first of all thank you for replying!
    The reason why I would like to have Dreamweaver save to both servers, Test and Production, at the same time is because it would save me a lot of time keeping both servers more or less in sync. I have to keep both Test and Production servers synchronized, but only within specific content. While I test my design I use the Test server, but then the hassle begins when I am asked to make changes to the content.
    For example, I have a content page called books.html. I first build it on the Test server. Then I switch to the tab called Remote, connect to the Remote server, and save it. Very simple.
    However, then I need to create 10 more pages, let's call them books1.html, books2.html, and so on. Again I sync them with the Remote server.
    All is great, but then my boss comes and says, hey, there is a mistake on these files: books3, books4, books7, and books8. Please change the text content on those. So, I know the pages are working fine, I just need to update the text on both the Test and Remote servers.
    So at this point I have to open these 4 files, change the text, then SAVE ALL to the Test server. Then I switch to the Remote tab, go back to the files, press any key and then 'Delete' so that Dreamweaver sees a change in the file and will save it. Then again 'SAVE ALL' to the Remote server.
    This is happening very OFTEN. It would save me so much time and brain power remembering which files I should sync if I am in a hurry and jump from one project to another, losing track of small changes in files when there are a lot of .html files.
    The CMS is only in development now, so I have to work with what I've got. As for backups, I use Time Machine, so I am set with that. Plus I use MAMP. So basically I first use MAMP for testing, then upload changes to the Test server with all the BIG databases, and only then to Remote (production) with the real database. However, this is only when I develop new pages; very often I just need to modify what has already been created. So that's why a feature like Save to BOTH servers (Test and Remote) would make a big difference to me.
    - Nookeen
    http://nookeen.com
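    Outside Dreamweaver, the "save once, push to both servers" step can also be scripted. A sketch using the Apache Commons Net FTP client; the hosts, credentials and paths are placeholders:

    import java.io.FileInputStream;
    import java.io.IOException;
    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    public class PushToBothServers {

        static void upload(String host, String user, String pass,
                           String localFile, String remotePath) throws IOException {
            FTPClient ftp = new FTPClient();
            try {
                ftp.connect(host);
                if (!ftp.login(user, pass)) {
                    throw new IOException("Login failed for " + host);
                }
                ftp.setFileType(FTP.BINARY_FILE_TYPE);
                ftp.enterLocalPassiveMode();
                try (FileInputStream in = new FileInputStream(localFile)) {
                    if (!ftp.storeFile(remotePath, in)) {
                        throw new IOException("Upload failed for " + host);
                    }
                }
                ftp.logout();
            } finally {
                if (ftp.isConnected()) {
                    ftp.disconnect();
                }
            }
        }

        public static void main(String[] args) throws IOException {
            String local = "books3.html"; // file just saved locally
            upload("test.example.com", "user", "pass", local, "/site/books3.html");        // test server
            upload("www.example.com",  "user", "pass", local, "/public_html/books3.html"); // production
        }
    }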

  • Using Oracle driver in JDBC for Unix and Linux based servers

    Please let me know how to specify the Oracle driver in the Class.forName(" "); statement in the JDBC servlet for Unix- and Linux-based servers.
    I'm using Windows for Java programming. Do I have to compile in the same environment (Unix/Linux with Oracle), or would just compiling on Windows and naming the driver in the Java program suffice?
    Please, Help me.
    Thank You.
    from,
    Silas eschole.
    email: [email protected]
    [email protected]

    I've used Oracle's thin driver like this:
    Class.forName("oracle.jdbc.driver.OracleDriver");
    Connection DBConnection = DriverManager.getConnection("jdbc:oracle:thin:USER/PASSWD@database");
    You need Oracle's client classes (e.g. classes111.zip) at run time, not during compilation.
    The thin client connects directly to the Oracle DB, so the database description is of the form machine:port:SID.
    The connection is made through Oracle's listener even when your DB is on the same machine your program is running on. The port number is probably 1521 or 1526, depending on your listener.ora definitions and the Oracle SQL*Net version used.
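    Putting the pieces together, a self-contained version might look like this (the host, port, SID and credentials are placeholders; newer setups use ojdbc*.jar instead of classes111.zip, but the code is the same):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class OracleThinDemo {
        public static void main(String[] args) throws Exception {
            // Explicit driver registration, as discussed above; recent drivers auto-register.
            Class.forName("oracle.jdbc.driver.OracleDriver");

            // Thin URL format: jdbc:oracle:thin:@host:port:SID
            String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";
            try (Connection conn = DriverManager.getConnection(url, "scott", "tiger");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT sysdate FROM dual")) {
                while (rs.next()) {
                    System.out.println("Connected, server time: " + rs.getString(1));
                }
            }
        }
    }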

  • Can I have both Oracle 8i and Oracle 9i servers on a single Solaris box

    Can I have both Oracle 8i and Oracle 9i servers installed on a single Solaris box? Or, say, any other flavour of Unix?
    Thanks in advance.
    ---Srini

    Yes! By all means, you can.
    You will just have to install Oracle under different directories for different versions and set the ORACLE_HOME accordingly.

  • False duplicate IP address error reported on our Windows 2008 and Windows 2012 servers

    We use Windows 2008 and Windows 2012 servers in our company. My access switches are Cisco Catalyst 3560s.
    A sample of a show version command from one of our access switches is as shown below.
    SW_01#show version
    Cisco IOS Software, C3560E Software (C3560E-UNIVERSALK9-M), Version 15.0(1)SE2, RELEASE SOFTWARE (fc3)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2011 by Cisco Systems, Inc.
    Compiled Thu 22-Dec-11 00:16 by prod_rel_team
    ROM: Bootstrap program is C3560E boot loader
    BOOTLDR: C3560E Boot Loader (C3560X-HBOOT-M) Version 12.2(53r)SE2, RELEASE SOFTWARE (fc1)
    SW_01 uptime is 2 weeks, 5 days, 16 hours, 15 minutes
    System returned to ROM by power-on
    System restarted at 17:31:47 UTC Fri Nov 14 2014
    System image file is "flash:/c3560e-universalk9-mz.150-1.SE2/c3560e-universalk9-mz.150-1.SE2.bin"
    I will be grateful if anyone can help with a solution.
    Thank you

    Can you post your switch config?
    How many switches do you have? Presumably you have more than one, this  one is connected to others, and those others have servers and clients?
    Try doing a 'show arp' on the switch and comparing the IPs and MACs to your windows server. Do it a few times as it may change as each device using the IP sends packets.

  • Hi, we need to create the test environment from our production for Oracle AP Imaging. We have SOA, IPM, UCM and Capture managed servers in our WebLogic. Can anyone tell me the best way to clone the environment? Can I just tar the WebLogic file system?

    Hi, we need to create the test environment from our production for Oracle AP Imaging. We have SOA, IPM, UCM and Capture managed servers in our WebLogic domain.
    Can anyone tell me the best way to clone the application to a different environment? The test and production are on different physical servers.
    Can I just tar the WebLogic file system and untar it on the new server and make the necessary changes?
    Can anyone share their experiences and how-to with me?
    Thanks in advance.
    Katherine

    Hi Katherine,
    Yes and no. You need the WebLogic + SOA files as well as the database schemas (soa_infra, mds, ...).
    Please refer to the AMIS blog: https://technology.amis.nl/2011/08/11/clone-your-oracle-fmw-soa-suite-11g/
    HTH
    Borys
