InterMedia on separate server?

Is it possible to run Oracle interMedia Text on a different server from the database? The idea is that we don't want the indexing of new documents to slow down the queries done by (web) users.
Is it possible to put the load of the indexing on another machine?

Hello Geert:
This is a discussion forum for Oracle Internet Directory, Oracle's LDAP server. Questions regarding interMedia Text should be referred to the interMedia or IFS discussion forums.
Thanks,
Jay

Similar Messages

  • Which database driver is required for weblogic 10.3 and Oracle DB 11g both on MS2008 separate server

    Hi,
    I am trying to configure JDBC with WebLogic. Can anyone tell me which driver needs to be selected for WebLogic 10.3 and Oracle DB 11g, both on separate MS2008 servers?
    If I use the BEA Oracle Driver (Type 4), version 9.0.1, 9.2.0, 10, or 11, I get this error (see snap:2):
    Connection test failed.
    [BEA][Oracle JDBC Driver]Error establishing socket. Unknown host: hdyhtc137540d
    weblogic.jdbc.base.BaseExceptions.createException(Unknown Source)
    weblogic.jdbc.base.BaseExceptions.getException(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.makeConnectionHelper(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.makeConnection(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.connectAndAuthenticate(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.open(Unknown Source)
    weblogic.jdbc.base.BaseConnection.connect(Unknown Source)
    weblogic.jdbc.base.BaseConnection.setupImplConnection(Unknown Source)
    weblogic.jdbc.base.BaseConnection.open(Unknown Source)
    weblogic.jdbc.base.BaseDriver.connect(Unknown Source)
    com.bea.console.utils.jdbc.JDBCUtils.testConnection(JDBCUtils.java:505)
    com.bea.console.actions.jdbc.datasources.createjdbcdatasource.CreateJDBCDataSource.testConnectionConfiguration(CreateJDBCDataSource.java:369)
    sun.reflect.GeneratedMethodAccessor826.invoke(Unknown Source)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:597)
    org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:870)
    org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
    org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
    org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
    org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
    ...
    and
    when I use Oracle's thin driver, version 9.0.1, 9.2.0, 10, or 11, I get this error:
    Connection test failed.
    Io exception: The Network Adapter could not establish the connection
    oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:101)
    oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:112)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:173)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:229)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:458)
    oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:411)
    oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:490)
    oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:202)
    oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
    oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:474)
    com.bea.console.utils.jdbc.JDBCUtils.testConnection(JDBCUtils.java:505)
    com.bea.console.actions.jdbc.datasources.createjdbcdatasource.CreateJDBCDataSource.testConnectionConfiguration(CreateJDBCDataSource.java:369)
    sun.reflect.GeneratedMethodAccessor826.invoke(Unknown Source)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:597)
    org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:870)
    org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
    org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
    org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
    org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
    ...

    I get this error when I click the Test Configuration button to test the connection to the Oracle DB.
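    Both stack traces point at connectivity rather than driver choice: the first cannot resolve the host name, and the second cannot reach the listener. One way to isolate WebLogic from the problem is to test the same thin-driver URL from a standalone class; in this sketch the host, port, SID, and credentials are placeholders, not values from this thread:

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ThinUrlCheck {
        // Builds an Oracle thin-driver JDBC URL of the form
        // jdbc:oracle:thin:@host:port:SID (all values are placeholders).
        static String thinUrl(String host, int port, String sid) {
            return "jdbc:oracle:thin:@" + host + ":" + port + ":" + sid;
        }

        public static void main(String[] args) {
            String url = thinUrl("dbhost.example.com", 1521, "ORCL");
            System.out.println(url);
            // Attempting the connection outside WebLogic reproduces the same
            // "Network Adapter" error if the host/listener is unreachable:
            // try (Connection c = DriverManager.getConnection(url, "user", "pw")) { }
        }
    }
    ```

    If this fails outside WebLogic too, the fix is name resolution (hosts file/DNS) or the listener address, not the data source configuration.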

  • RollbackException using UserTransaction when calling EJB in separate server

    I'm using WL 6.1 on Solaris and am calling a stateless session EJB that is running in a separate server. I'm looking up the remote EJB using JNDI and calling through its home interface. This works fine, but if the client code begins a UserTransaction, then calls the EJB that's in the separate server, and then calls commit on the transaction, I get the following:

    weblogic.transaction.RollbackException: Aborting prepare because some
    resources could not be assigned - with nested exception:
    [javax.transaction.SystemException: Aborting prepare because some
    resources could not be assigned]

    The code that works looks like:

    rgData = HomeHolder.ROUTING_GUIDE_MGR_HOME.create().getResourceOptions(qd);

    whereas the code that fails is:

    UserTransaction transaction = new UserTransaction();
    transaction.begin();
    rgData = HomeHolder.ROUTING_GUIDE_MGR_HOME.create().getResourceOptions(qd);
    transaction.commit();

    If I put the EJB in the same server as the client, I don't get the exception, so it seems to be related to running it in the separate server and using the UserTransaction. The deployment descriptor of the EJB states that it "Supports" transactions.
    Any ideas?
    Thanks,
    John

    Yes, actually we are using:

    AppServerTransaction transaction = new AppServerTransaction();

    which is a wrapper that does what you say:

    Context ctx = new InitialContext(...); // connect to another WLS
    UserTransaction tx = (UserTransaction)ctx.lookup("java:comp/UserTransaction");

    and our HomeHolder does what you say as well:

    Homexxx home = ctx.lookup(...);

    Any ideas why surrounding the EJB call with a UserTransaction causes a problem when committing?
    Thanks,
    John

    Dimitri Rakitine wrote:
    > John Hanna <[email protected]> wrote:
    > > UserTransaction transaction = new UserTransaction();
    >
    > It's an interface, how did this work? Assuming that you do not want
    > distributed tx, does this work:
    >
    > Context ctx = new InitialContext(...); // connect to another WLS
    > UserTransaction tx = (UserTransaction)ctx.lookup("java:comp/UserTransaction");
    > Homexxx home = ctx.lookup(...);
    > tx.begin();
    > home.create().getResourceOptions(qd);
    > tx.commit();
    >
    > ?
    >
    > --
    > Dimitri

  • SharePoint 2013 workflows for SPD - need a separate server to run workflow engine?

    Hi there,
    I have a single-machine dev installation of SharePoint 2013. How can we have SP 2013 workflows built in SharePoint Designer migrate from one environment to the other without having to make manual changes again?
    Does this need any special software installed? Can we install it on our APP server?
    Thanks.

    Hi Frob,
    If you want to be able to move the Workflow Manager to another environment, you need to install the Workflow Manager on a separate server.
    When you want to use the Workflow Manager in another environment, you need to have it leave the existing farm,
    then join it to the farm where you want to use it and re-register the workflow service.
    Here is a similar issue for you to take a look:
    http://sharepoint.stackexchange.com/questions/132524/move-workflow-manager-to-new-farm-in-a-new-domain
    More references:
    https://msdn.microsoft.com/en-us/library/jj193527(v=azure.10).aspx
    https://msdn.microsoft.com/en-us/library/jj193433(v=azure.10).aspx
    Best regards,
    TechNet Community Support

  • Is it possible to take the CDR data from a v4.2 Call Manager and copy it to a separate server where it would be made available for reporting?

    Is it possible to take the CDR data from a v4.2 Call Manager and copy it to a separate server where it would be made available for reporting? We are not interested in migrating the CDR data to v6 because of the concerns it introduces to the upgrade process. Is it possible to get the raw data and somehow serve it from a different machine? (knowing it would be 'old' data that stops as of a certain date). If so, what would be the complexity involved in doing so?
    It seems like the CDR data lives within MSSQL and the reporting interface is within the web server portion of the Call Manager... that's as far as we've dug so far.

    Hi
    It is absolutely possible to get the data - anyone you have in your org with basic SQL skills can move the data off to a standalone SQL server. This could be done most simply by backing up and restoring the DB using SQL Enterprise Manager.
    Moving the CAR/ART reporting tool would be more difficult. If you do actually use it for reporting (most people find it doesn't do what they need, use it only for basic troubleshooting, and get a third-party package), then the best option may be to keep your publisher (possibly assigning it a new IP) and leave it running for as long as you need reporting.
    You would then need a new server to run your upgraded V6 CCM; you may find you need this anyway.
    Regards
    Aaron

  • Best method for archiving .mpp files on a separate server or location?

    We want to be able to run a program or job on Project Server 2013 that will export all current published project .mpp files to a separate server or location on our network. What is the best, or suggested, method for something like this? Our managers want
    to have this job run on a weekly or bi-monthly basis in order to have backup files of each active project schedule. This would be beyond the Project Server archive database. This would be for Business Continuity purposes of having those schedules available
    should our servers ever crash. 
    Any help would be much appreciated. I am not a developer, but if there is code available for something like this we have developers in-house that can perform the work. 
    Thank you,
    Travis
    Travis Long IT Project Manager Entry Idaho Power Co. Project Server Admin

    Project Server already has an archiving mechanism which backs up project plans on a schedule and maintains versions that can be restored at any point - check administrative backup in Central Admin under PWA settings.
    However, I wouldn't say this is the best method; alternatively you can run a macro which exports all projects and saves them at a location (which could be a network file share). Something like this (I haven't tested it recently with 2013, but I believe it should work):
    Sub Archiving()
        ' Requires a reference to "Microsoft ActiveX Data Objects" (ADODB)
        Dim Conn As New ADODB.Connection
        Dim Cmd As New ADODB.Command
        Dim Recs As New ADODB.Recordset
        ' Connect to the Project Server Reporting DB and get the project names
        Conn.ConnectionString = "Provider=SQLOLEDB;Data Source=servername;Initial Catalog=ProjectServer_Reporting;User ID=;Password=;Trusted_Connection=yes"
        Conn.Open
        With Cmd
            .ActiveConnection = Conn
            .CommandText = "Select ProjectName From MSP_EpmProject_UserView"
            .CommandType = adCmdText
        End With
        With Recs
            .CursorType = adOpenStatic
            .CursorLocation = adUseClient
            .LockType = adLockOptimistic
            .Open Cmd
        End With
        Dim prjName As String
        Dim ArchivePrjName As String
        Dim x As Integer
        Application.Alerts False
        If Recs.EOF = False Then
            Recs.MoveFirst
            For x = 1 To CInt(Recs.RecordCount)
                ' "<>" is the Project Server URL placeholder
                prjName = "<>\" & Recs.Fields(0)
                FileOpenEx Name:=prjName, ReadOnly:=True
                ' Use the bare project name plus a timestamp for the archive
                ' file name (the original hard-coded a date here)
                ArchivePrjName = "C:\Temp\" & Recs.Fields(0) & "_" & Format(Now, "ddmmmyyyy_hhnn")
                FileSaveAs Name:=ArchivePrjName, FormatID:="MSProject.MPP"
                FileCloseEx
                prjName = ""
                Recs.MoveNext
            Next x
        End If
        Recs.Close
        Conn.Close
    End Sub
     Let us know if this helps
    Thanks | epmXperts | http://epmxperts.wordpress.com

  • Can GG SQL Server DataPump Process run on separate server to Extract

    It's clear from the GoldenGate SQL Server documentation that the Extract process for SQL Server has to be on the database server. Is it possible to split the Data Pump off onto a separate server in order to limit the load on the DB server? It seems like this would require a separate GoldenGate installation with just a data pump reading the trail files from the Extract installation.
    Appreciate any comments on possibility, worthwhileness etc.
    Stonemason

    An extract (or pump) process reads trail files locally, and writes to either local or remote file system. The pump itself -- if you configure it as "passthru" -- has minimal load.   But if you don't want the pump running on the DB server, then just have the primary extract write out remote trails instead of local trails.  This architecture presents more disadvantages than advantages, so in general it's not recommended.   For example, if the network goes down and the remote trails can't be written, then your primary extract stops processing database logs, and this is something that should continue running, even if just to create local trails until the network is back up.
    Alternatively, you can also have your trail files written to "local" trails (extTrail instead of rmtTrail) that is actually remote storage (NAS/SAN), and then you would have a pump running on a remote machine and reading these trails.  This is more common on linux/unix -- if you're running Windows, you might want to check with support for supported configurations. (Also, even if using linux/unix, check for KM's (support notes) on support.oracle.com for optimal and/or required configurations for the network attached storage/NFS.)
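    The passthru arrangement described above can be sketched as two parameter files; the process names, host, port, and trail paths below are illustrative placeholders, not values from this thread:

    ```
    -- Primary extract on the DB server: writes LOCAL trails only, so it
    -- keeps processing database logs even if the network to the target is down.
    EXTRACT ext1
    EXTTRAIL ./dirdat/lt

    -- Pump process: reads the local trails and ships them to the remote host.
    -- PASSTHRU skips data-definition lookups, keeping the pump's load minimal.
    EXTRACT pmp1
    PASSTHRU
    RMTHOST targethost, MGRPORT 7809
    RMTTRAIL ./dirdat/rt
    ```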

  • Installing APEX on separate server from production database

    I have a security request to install APEX on a separate server from the application content. I have searched the terms I know and have found only vague references, not meaningful assistance. Rather than trying to talk me into installing on the same server, can someone point me to documentation for achieving this without the use of database links, which are another security risk? The APEX Listener and APEX instance can be installed on one server, but the application content needs to be installed on a separate server.
    Edited by: user4552785 on Apr 16, 2012 12:18 PM

    You can easily set up an Oracle HTTP Server elsewhere to connect to your APEX instance (along with images directory and other static files). However, the application schema is tightly embedded in the same instance as the APEX schema, so that will have to stay there.
    What security concerns are you trying to alleviate? By default, APEX applications run as the user APEX_PUBLIC_USER and then this user masquerades as your application schema (and occasionally a little bit of APEX_0XXXXX). Even if you could manage to move your application schema to another instance, you haven't achieved a whole lot from a security standpoint.

  • IBR on separate server - Custom Conversions

    I have a custom conversion process that interrupts the IBR process to do some work on a document before the rendering step. It works well while IBR is installed on the same server as the Content Server, but I am struggling to get it going when I move IBR onto a separate server.
    When run on the same server, the "DocConverter.hda" contains the original location of the document in the vault, but when on a separate server, everything points to a local folder. It appears that a pre-process copies the document into a temp directory, and after the completion of the rendering, a post-process copies the PDF out to the weblayout folder. My process needs to know the location of the original document in the vault, but I cannot find any references to the pre/post processes in the documentation or even the logs.
    Anyone know how this works?
    I am running CS 7.5.1 and PDFConverter 7.6 on a Windows Server 2003 platform with a Sql Server 2005 back end.
    Cheers,
    Desmo

    David is correct. Depending on exactly what you are changing (it should always be in a component), you need to restart the Content Server first and then restart the refinery, because of the way the conversion information gets passed to the refinery from the Content Server. If you are changing the top-level file formats entry, you may need to check in a new version of the file rather than just resubmitting the file to the refinery using the Repository Manager applet, to make sure the right conversion gets called.
    If you don't want to create this yourself, Fishbowl Solutions has a product called Native File Updater (part of the Document Automation Suite) that will likely work for you.
    http://www.fishbowlsolutions.com/StellentSolutions/StellentComponents/fs_docauto_webcopy?WT.mc_id=fb_sig_to
    Hopefully that helps,
    Tim

  • Is it possible to move the wiki to a separate server running Linux?

    I'm wondering, is it possible to move a wiki that you've made (clean install) via Wiki Server 2 to a separate server that does not run Mac OS X?

    Hi Joseph,
    I am afraid the feature you are requesting is not available in Muse at the current stage. I recommend that you post this in the ideas section and let the dev team know about the requirement for this feature: Ideas for features in Adobe Muse
    - Abhishek Maurya

  • Does CCM need separate server?

    We just started implementing SRM 4.0 (and will go with 5.0 once it's released). We are wondering if we need to put CCM on a separate server from the SRM server. Do people normally put SRM/CAT on one server and CSE on another, or put SRM on one and CAT+CSE on a separate one? What are the pros and cons, and how do we decide?
    Thanks
    Jane

    Hi Jane,
    Yes, you can very well install both components, i.e. EBP and SRM, on a single server.
    Installation on a single server or on different servers depends upon the load of catalogs to be handled (this is one criterion; there may be others).
    Please read the CCM config guide for better insight into the installation.
    BR
    Dinesh

  • Remote desktop licensing on separate server

    I would like to deploy Remote Desktop Services on Windows Server 2012 R2.
    My idea is to isolate the Remote Desktop Licensing role from the other Remote Desktop Services roles and put the Remote Desktop Licensing role on a separate server, which is also the KMS host.
    Is that a good idea?
    Thanks !!

    Hi,
    Thanks for your comment. Sorry for late reply.
    Yeah, you can put the RD Licensing server on a separate server and then use the other server to add roles when needed. But note that for both Per Device and Per User CAL issuance to work, the RD Session Host and RD Licensing server must be in any one of the following three configurations:
    • Both in the same workgroup
    • Both in the same domain
    • Both in the trusted (Two-way trust) Active Directory Domains or Forest
    More information:
    Best practices for setting up Remote Desktop Licensing (Terminal Server Licensing) across Active Directory Domains/Forests or Workgroup
    http://support.microsoft.com/kb/2473823
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Does Deployment Optimizer need a separate server?

    We just finished the blueprint and came to the conclusion to use SNP heuristics and the Deployment Optimizer. Since we are not using the SNP Optimizer, my question is: do we need a separate server?
    I did not find any Quick Sizer document for the Deployment Optimizer; I could, however, see one for the SNP Optimizer.
    Any help would be appreciated.
    Thanks,
    C.A.

    Any optimiser (SNP / Deployment / PPDS / TPVS / CTM) will need an optimiser server connection, as the optimiser routines are .exe files requiring Windows-based hardware with sufficient main memory (RAM).
    The Deployment Optimiser will be part of the SNP Optimiser engine.
    Somnath

  • Running replicat on a separate server from the target DB

    The DBA in charge of the target database of OGG replication doesn't want OGG running on his database server. He is proposing that the replicat process run on a separate server and post to the target DB across the network.
    This would require Oracle on the intermediate server on which replicat is running.
    My question is, can this be done? Can you have replicat writing to a database that is on a different server?
    If it is possible, what are the potential issues with this arrangement?
    Thanks in advance for your help.

    Based on what you've provided, you need a new/better DBA. The overhead is negligible.
    Nonetheless, assuming all other financial restrictions/limitations related to licensing are not an issue, Replicat can run on an intermediate server. Just needs to collect the trails and apply the SQL.
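    As a sketch of what the intermediate-server arrangement might look like, the replicat connects to the remote target over SQL*Net via a TNS alias; all names below are placeholders, not values from this thread:

    ```
    -- Replicat parameter file on the intermediate server
    REPLICAT rep1
    -- "targetdb" is a TNS alias (in the intermediate server's Oracle client
    -- tnsnames.ora) that resolves to the remote target database
    USERID ogg@targetdb, PASSWORD oggpw
    ASSUMETARGETDEFS
    MAP src_schema.*, TARGET tgt_schema.*;
    ```

    The trade-off is that every applied statement now crosses the network, so latency between the intermediate server and the target DB directly limits apply throughput.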

  • Problem deploying bpel process to oc4j_soa container on separate server

    Hi - after searching for a solution both here and on Metalink for some time, I've reached the point where I think I need to post my problem:
    (This might be an application server/BPEL related issue and perhaps more relevant there than JDeveloper. If so, my apologies in advance.)
    On the client: Windows XP with JDeveloper 10.1.3.4.0; on the server: Windows Server 2003 with Oracle SOA Suite 10.1.3.
    I created a very simple BPEL process (FTP-get a remote file -> write locally) that I want to deploy on the dedicated SOA server.
    From JDeveloper, the "Application Server" and "ESB Server" connections check out fine, but the "BPEL Process Manager Server" check fails with the error:
    "BPEL Identity Service: Failed" - Details: "oracle.xml.parser.v2.XMLParseException: Whitespace required."
    I have not found any postings/info on the "Whitespace required" error anywhere... mighty strange...
    I'm therefore unable to use the BPEL Process Deployer; it states that "Server Mode: Server is unreachable".
    I've triple-checked all ports etc. The Integration Server Connection is using port 80.
    I'm able to access and open the BPELConsole (& EM & ESB) on the server from my JDeveloper workstation.
    A "opmnctl status -l" on the SOA-server gives:
    Processes in Instance: SOAProd.aresrv136.gard.local
    ---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
    ias-component | process-type | pid | status | uid | memused | uptime | ports
    ---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
    ASG | ASG | N/A | Down | N/A | N/A | N/A | N/A
    OC4JGroup:default_group | OC4J:OC4JUDDI | 364 | Alive | 707857842 | 153388 | 17:33:58 | jms:12603,ajp:12503,rmis:12703,rmi:12403
    OC4JGroup:default_group | OC4J:oc4j_soa | 500 | Alive | 707857841 | 266220 | 17:33:58 | jms:12601,ajp:12502,rmis:12701,rmi:12401
    OC4JGroup:default_group | OC4J:home | 804 | Alive | 707857840 | 153080 | 17:33:58 | jms:12602,ajp:12501,rmis:12702,rmi:12402
    HTTP_Server | HTTP_Server | 2020 | Alive | 707857839 | 67188 | 17:33:30 | https1:443,http2:7200,http1:80
    However, I cannot find the "deploy_service" as a separate service on my server. Only the hw_services are up and running, but they contain a "deploy" web module.
    Would be immensely grateful for any information that might help me further in solving this issue, as I'm rather stuck right now....
    Regards,
    -Haakon-

    Haakon,
    I think the BPEL forum is the better source to ask
    BPEL
    Frank
