DataSource for Replicated Database

Hi. First of all, I don't know if this forum is the right place to post my question, so sorry for bothering you!
I'm working on a project where I'm thinking about using two Oracle databases, one replicating the other. I read somewhere that Multimaster Replication is a good way to obtain higher availability. But I don't know how to create a DataSource in OC4J that can use both databases and choose the one that isn't down. If I were using WebLogic, I could create a connection pool for each database and then create a MultiPool that uses the already created pools.
I don't know if I missed something in the Multimaster Replication documentation, but I don't see how to create a single point of access for the replicated databases, nor how to create a DataSource for OC4J that can access more than one database. I'm totally new to the Oracle world! Perhaps this isn't the best way to obtain higher availability; perhaps I need a third component to provide a single point of access to the databases. I really don't know!
Thanks in advance,
RGB

Roberto,
Multi-master replication is for distributed databases. The best option for better database availability is 9i Real Application Clusters (RAC).
Please follow http://otn.oracle.com/products/oracle9i/content.html to read more about RAC.
regards
Debu
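For what it's worth, on the connection side the Oracle JDBC driver can already do connect-time failover when the connect descriptor lists more than one address. The sketch below is only an illustration, not an OC4J feature, and the host names, service name and credentials are placeholders: it builds an OracleDataSource whose URL contains an ADDRESS_LIST with FAILOVER=on, so a new connection falls through to the second listener when the first is unreachable.

    // Hedged sketch: connect-time failover with the Oracle JDBC thin driver.
    // Host names, ports, service name and credentials are placeholders.
    import java.sql.Connection;
    import java.sql.SQLException;
    import oracle.jdbc.pool.OracleDataSource;

    public class FailoverDataSourceExample {
        public static void main(String[] args) throws SQLException {
            OracleDataSource ds = new OracleDataSource();
            // The ADDRESS_LIST tells the driver to try dbhost1 first and,
            // if it cannot connect, to fall back to dbhost2.
            ds.setURL("jdbc:oracle:thin:@(DESCRIPTION="
                    + "(ADDRESS_LIST=(FAILOVER=on)(LOAD_BALANCE=off)"
                    + "(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost1)(PORT=1521))"
                    + "(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost2)(PORT=1521)))"
                    + "(CONNECT_DATA=(SERVICE_NAME=myservice)))");
            ds.setUser("scott");
            ds.setPassword("tiger");

            try (Connection con = ds.getConnection()) {
                System.out.println("Connected via: " + con.getMetaData().getURL());
            }
        }
    }

Note that this only helps when establishing new connections; sessions already open against a failed node still break, which is why RAC, as Debu suggests, is the usual answer for real high availability.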

Similar Messages

  • What's the best MS SQL tool(s) for managing (creation, updating, deletion, pausing, starting, etc) a bunch of replicated databases?

    I'm looking for recommendations on how best to program (scripts, procedures, anything else) the management of a set of databases across multiple datacenters. The tasks I would like to be able to do, database by database, are:
    copy a specified database from one server to another ("import") without the user access rights
    add replication to the "imported" database objects based on an extended property
    update the replicating "imported" database, including changes to the data values and changes in schema, without bringing down the database (rolling updates across the various datacenters are permitted)
    adding/modifying a standard set of maintenance plans for each Database
    removing replication from a selected database
    I'm familiar with the standard SQL product but have no exposure to the other SQL components like SSIS. I have, to date, several SQL batch scripts that do parts of the above, but it seems there must be a better way. Assume that there is one Enterprise Edition SQL instance at each datacenter. Replication is transactional peer-to-peer. We are also considering whether we want to add DB mirroring but have decided not to do so at this time.
    BTW, I'm one of those people who can't get the "copy database" menu option in SSMS to work. I'm not a DBA, but it looks like I'm going to need to learn how to act like one.
    Developer Frog Haven Enterprises

    I would recommend involving a DBA for any such tasks. The tasks you list are not going to be simple. If you don't have prior experience with high-availability configuration, it's really difficult to manage even with a third-party tool,
    although you can rely on third-party tools for backup and restore.
    To manage databases and to configure and maintain HA, you need SQL Server HA knowledge.
    My testing so far seems to show that it's quite easy to break the replication with changes to the databases. Using the SSMS "canned" wizards and procedures to delete and recreate replication leaves many vestiges behind, cluttering up the database instance.
    You can create a script to clean up the replication and re-create it from scratch:
    http://www.mssqltips.com/sqlservertip/1808/sql-server-replication-scripts-to-get-replication-configuration-information/
    http://support.microsoft.com/default.aspx?scid=kb;en-us;324401
    --Prashanth
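    As a very rough illustration only (server name, database name and credentials below are placeholders, and whether you drive this from Java, T-SQL or SSIS is a matter of taste), the kind of cleanup discussed above can be scripted programmatically, for example by calling sp_removedbreplication against the database being cleaned:

    // Hedged sketch: strip replication metadata from one database programmatically.
    // Server, database and credentials are placeholders; run only against a database
    // you really intend to clean up.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RemoveReplicationFromDb {
        public static void main(String[] args) throws Exception {
            Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
            String url = "jdbc:sqlserver://datacenter1-sql:1433;databaseName=master";
            try (Connection con = DriverManager.getConnection(url, "sa", "secret");
                 Statement stmt = con.createStatement()) {
                // Removes publications, subscriptions and replication system objects
                // from the named database.
                stmt.execute("EXEC sp_removedbreplication N'ImportedDb'");
            }
        }
    }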

  • Datasource: ORA-02019: connection description for remote database not found

    Hi,
    I recently changed the datasource to point to a new host, and testing the connectivity of the datasource from the EM console was successful.
    The Java code that refers to the DS is the same code that was working before, as only the connection string has changed, but now accessing the web application shows the following error in the logs:
    ==============
    Exception::java.sql.SQLException: ORA-02019: connection description for remote database not found
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:138)
    oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:316)
    oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:282)
    oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:639)
    oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:113)
    oracle.jdbc.driver.T4CStatement.execute_for_describe(T4CStatement.java:431)
    oracle.jdbc.driver.OracleStatement.execute_maybe_describe(OracleStatement.java:1029)
    oracle.jdbc.driver.T4CStatement.execute_maybe_describe(T4CStatement.java:463)
    oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1126)
    oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1274)
    oracle_jdbc_driver_T4CStatement_Proxy.executeQuery()
    ==============
    What could be the reason? Is there any other app-server datasource-related setting that needs to be done, or is it some other issue? As I said, testing the connection from the EM console succeeds.
    Thanks,
    Rommel.

    The issue is resolved now.
    One of the queries used a db link that was missing on the new database, hence the error from the Java code.
    Since the EM console's connectivity test for the DS does not exercise any db links (it uses a default test query), the connectivity test was successful.
    Thanks a lot,
    Rommel.
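    For anyone hitting the same thing: the default test query does not touch db links, so a quick standalone check that actually crosses the link will expose this right away. A rough sketch, where the connect string, the credentials and the link name MYLINK are all placeholders:

    // Hedged sketch: exercise a db link explicitly instead of relying on the
    // default "test connection" query. MYLINK, URL and credentials are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DbLinkCheck {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.OracleDriver");
            try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@newhost:1521:ORCL", "appuser", "secret");
                 Statement stmt = con.createStatement();
                 // A query over the link; fails with ORA-02019 if the link is missing.
                 ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual@MYLINK")) {
                rs.next();
                System.out.println("db link MYLINK is reachable");
            }
        }
    }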

  • Shall I use one datasource for multiple connection pool?

    Hi,
    I need to clarify something: should I use one DataSource for multiple connection pools in a distributed transaction?
    Thanks with regards
    Suresh

    No. If the transactions span multiple databases, you should use different datasources.
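    As a rough illustration of that answer (the JNDI names jdbc/OrdersXADS and jdbc/InventoryXADS, the table names and the lookup paths are placeholders and depend on how the container is configured), a distributed transaction usually means one XA-capable datasource per database, both enlisted under a single JTA UserTransaction:

    // Hedged sketch: two separate XA datasources enlisted in one JTA transaction.
    // The JNDI names and SQL statements are placeholders.
    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class TwoDatabaseTransfer {
        public void transfer() throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource ordersDs = (DataSource) ctx.lookup("jdbc/OrdersXADS");
            DataSource inventoryDs = (DataSource) ctx.lookup("jdbc/InventoryXADS");

            utx.begin();
            try (Connection c1 = ordersDs.getConnection();
                 Connection c2 = inventoryDs.getConnection()) {
                c1.createStatement().executeUpdate(
                    "UPDATE orders SET status = 'SHIPPED' WHERE id = 42");
                c2.createStatement().executeUpdate(
                    "UPDATE stock SET qty = qty - 1 WHERE item_id = 42");
                utx.commit();   // both branches commit via two-phase commit
            } catch (Exception e) {
                utx.rollback(); // both branches roll back together
                throw e;
            }
        }
    }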

  • "Error while accessing porting layer for ORACLE database via getSessionId()

    Hi,
    My EJB 3.0 entity is created from the Emp table in the scott/tiger schema of an Oracle 10g database. I am guessing I made some mistake creating the datasource or uploading the driver, because when I run my application, I get a long exception stack trace. The bottom-most entry in the stack trace is:
    Caused by: com.sap.sql.log.OpenSQLException: Error while accessing porting layer for ORACLE database via getSessionId().
         at com.sap.sql.log.Syslog.createAndLogOpenSQLException(Syslog.java:148)
         at com.sap.sql.jdbc.direct.DirectConnectionFactory.createPooledConnection(DirectConnectionFactory.java:527)
         at com.sap.sql.jdbc.direct.DirectConnectionFactory.createDirectPooledConnection(DirectConnectionFactory.java:158)
         at com.sap.sql.jdbc.direct.DirectConnectionFactory.createDirectPooledConnection(DirectConnectionFactory.java:118)
         at com.sap.sql.connect.factory.PooledConnectionFactory.createPooledConnection(PooledConnectionFactory.java:119)
         at com.sap.sql.connect.factory.DriverPooledConnectionFactory.getPooledConnection(DriverPooledConnectionFactory.java:38)
         at com.sap.sql.connect.datasource.DBDataSourceImpl.createPooledConnection(DBDataSourceImpl.java:685)
         at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.matchPool(DBDataSourcePoolImpl.java:1081)
         at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.matchPooledConnection(DBDataSourcePoolImpl.java:919)
         at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.getConnection(DBDataSourcePoolImpl.java:67)
         at com.sap.engine.core.database.impl.DatabaseDataSourceImpl.getConnection(DatabaseDataSourceImpl.java:36)
         at com.sap.engine.services.dbpool.spi.ManagedConnectionFactoryImpl.createManagedConnection(ManagedConnectionFactoryImpl.java:123)
         ... 90 more

    Actually, now (after the GRANT described in my reply before) the Exception has changed to:
    Caused by: com.sap.sql.log.OpenSQLException: Error while accessing porting layer for ORACLE database via getDatabaseHost().
         at com.sap.sql.log.Syslog.createAndLogOpenSQLException(Syslog.java:148)
         at com.sap.sql.jdbc.direct.DirectConnectionFactory.createPooledConnection(DirectConnectionFactory.java:527)
         at com.sap.sql.jdbc.direct.DirectConnectionFactory.createDirectPooledConnection(DirectConnectionFactory.java:158)
         at com.sap.sql.jdbc.direct.DirectConnectionFactory.createDirectPooledConnection(DirectConnectionFactory.java:118)
         at com.sap.sql.connect.factory.PooledConnectionFactory.createPooledConnection(PooledConnectionFactory.java:119)
         at com.sap.sql.connect.factory.DriverPooledConnectionFactory.getPooledConnection(DriverPooledConnectionFactory.java:38)
         at com.sap.sql.connect.datasource.DBDataSourceImpl.createPooledConnection(DBDataSourceImpl.java:685)
         at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.matchPool(DBDataSourcePoolImpl.java:1081)
         at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.matchPooledConnection(DBDataSourcePoolImpl.java:919)
         at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.getConnection(DBDataSourcePoolImpl.java:67)
         at com.sap.engine.core.database.impl.DatabaseDataSourceImpl.getConnection(DatabaseDataSourceImpl.java:36)
         at com.sap.engine.services.dbpool.spi.ManagedConnectionFactoryImpl.createManagedConnection(ManagedConnectionFactoryImpl.java:123)
         ... 90 more

  • Best practice for replicating Partitioned table

    Hi SQL Gurus,
    Requesting your help on the design considerations for replicating a partitioned table.
    1. 4 partitioned tables (1 master table with foreign key constraints to 3 tables), partitioned monthly on YYYYMM
    2. 1 table has an XML column in it
    3. A monthly partition switch removes old data; since there is a foreign key constraint, it is disabled until the switch is complete
    4. 1 month of partitioned data is 60 GB
    Having said the above, I want to create a copy of the same tables on a different server. I can think of:
    1. Transactional replication, but then I am worried about the XML column, the snapshot size, and whether the ALTER ... SWITCH will do the same thing on the subscriber or a row-by-row delete.
    2. Log shipping with standby every 15 minutes, but then it will be for the entire database, because I have another monthly partitioned table which is about 250 GB.
    3. Replicating the partitioned table as non-partitioned; in that case, how would the ALTER ... SWITCH work? Is it possible to ignore deletes when setting up the replication?
    4. SSIS or a stored procedure to move data on a daily basis.
    5. Backup and restore on a daily basis, but this will not work once the source partition is removed.
    Ganesh

    Please refer to
    http://msdn.microsoft.com/en-us/library/cc280940.aspx

  • Setup MySQL datasource for Sun ONE Studio

    Hello all,
    I tried to post this message on the Sun ONE Studio forum; however, that forum's JSP pages were giving compilation errors. As I needed an answer urgently, I decided to get some help here.
    I successfully made a connection to the MySQL database during CMP development. However, when I tried to run it, it said the JNDI DataSource name can't be blank and to provide a username & password if necessary. I went back, gave it the name jdbc/MySQL, and also provided the username & password. I then reran the app and got the following error:
    java.rmi.ServerException: RemoteException occurred in server thread; nested exception is: java.rmi.RemoteException: Unable to get JDBC DataSource for CMP ....
    I've mounted mysql.jar and hacked some other things, with no success. Please help.
    Thanks & regards,
    Thinh

    Hi,
    Try validating your data source using
    http://developer.iplanet.com/tech/tools/dbping_overview.jsp
    Get back in case of any issues
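    If dbping is not handy, a standalone JDBC check outside the server can at least tell whether the driver, URL and credentials are fine, leaving the JNDI registration (jdbc/MySQL) as the remaining suspect. A rough sketch; the host, database name and credentials are placeholders, and the driver class assumed here is the old Connector/J one:

    // Hedged sketch: verify MySQL connectivity outside the app server.
    // Host, database, user and password are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class MySqlPing {
        public static void main(String[] args) throws Exception {
            // Older Connector/J driver class; newer drivers use com.mysql.cj.jdbc.Driver.
            Class.forName("com.mysql.jdbc.Driver");
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/testdb", "dbuser", "dbpass")) {
                System.out.println("Driver and credentials are fine: "
                        + con.getMetaData().getDatabaseProductVersion());
            }
            // If this works but the CMP bean still fails, the problem is most likely
            // the JNDI datasource registration in the server, not MySQL itself.
        }
    }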

  • Multiple processes accessing a replicated database

    Hi
    I am after some help with multiple processes and replicated databases.
    I have a primary and secondary database replicated across a pair of servers and this seems to be working well. I'm trying to run another process on one of the machines that opens the environment and databases to view and/or modify the data.
    The problem is that when I run this process it causes some sort of corruption such that the server process on the same box gets a DB_EVENT_PANIC the next time it accesses the database. I would like to understand what I am doing wrong.
    The servers and standalone process all use the same code to open and close the environment and databases (see below). Just calling
         open_env();
         open_databases();
         close_databases();
         close_env();
    in the utility process causes DB_EVENT_PANIC in the server process.
    Can anybody spot what I am doing wrong? I am using DB Version 4.7
    Thanks
    Ashley
    open_env() {
        db_env_create(&dbenv, 0);
        dbenv->app_private = &my_app_data;
        dbenv->set_event_notify(dbenv, event_callback);
        dbenv->rep_set_limit(dbenv, 0, REPLIMIT);
        dbenv->set_flags(dbenv, DB_AUTO_COMMIT | DB_TXN_NOSYNC, 1);
        dbenv->set_lk_detect(dbenv, DB_LOCK_DEFAULT);

        /* Note: DB_RECOVER is set unconditionally, i.e. in every process that
         * opens the environment -- see the reply below. */
        int flags = DB_CREATE | DB_INIT_LOCK |
                    DB_INIT_LOG | DB_INIT_MPOOL |
                    DB_INIT_TXN | DB_RECOVER | DB_THREAD;
        flags |= DB_INIT_REP;

        dbenv->repmgr_set_local_site(dbenv, listen_host, port, 0);
        dbenv->rep_set_priority(dbenv, 100);
        dbenv->repmgr_set_ack_policy(dbenv, DB_REPMGR_ACKS_ONE);
        for (x = 0; x < num_peers; x++) {
            dbenv->repmgr_add_remote_site(dbenv, peers[x].name, peers[x].port,
                                          &peers[x].eid, 0);
        }
        dbenv->rep_set_nsites(dbenv, num_peers + 1);
        dbenv->open(dbenv, ".", flags, S_IRUSR | S_IWUSR);
        dbenv->repmgr_start(dbenv, 3, DB_REP_ELECTION);
        sleep(SLEEPTIME);
    }

    close_env() {
        dbenv_p->txn_checkpoint(dbenv_p, 0, 0, 0);
        dbenv_p->close(dbenv_p, 0);
    }

    open_databases() {
        db_create(&dbp, dbenv_p, 0);
        flags = 0;
        if (app_data->is_master)
            flags |= DB_CREATE;
        dbp->open(dbp, NULL, "primary", NULL, DB_HASH, flags, 0);
        /* ... Wait for db if slave and ENOENT ... */
        primary = dbp;

        db_create(&dbp, dbenv_p, 0);    /* fresh handle for the second database */
        dbp->open(dbp, NULL, "secondary", NULL, DB_BTREE, flags, 0);
        /* ... Wait for db if slave and ENOENT ... */
        secondary = dbp;

        while (app_data->client_sync) {
            sleep(SLEEPTIME);
        }
    }

    close_databases() {
        secondary->close(secondary, 0);
        primary->close(primary, 0);
        dbenv_p->txn_checkpoint(dbenv_p, 0, 0, 0);
    }

    Running recovery (DB_RECOVER flag to env->open()) must be done only in the first process to open the environment.
    This is a general rule of Berkeley DB, not specific to replication. You can read more about it in the Reference Guide, on the page entitled "Architecting Transactional Data Store applications".

  • Change a migrated DataSource for DB Connect Source System

    Hi,
    I have created a DataSource for a DB connect Source system with transactioncode RSDBC as follows:
    1. select database TableView
    2. choose Edit DataSource
    3. fill in the field Application Component (default stated NODESNOTCONNECTED)
    4. generate datasource
    Result: the datasource has been created in the right application component. As I want to migrate to BW 7 I have migrated the datasource (with export) and afterwards activated it. Now we need to add more fields to the database TableView. After the fields have been added I want to make these fields also available in the DataSource created before.
    How can I do this ?
    When using transaction code RSDBC I found that the application component is again shown as NODESNOTCONNECTED instead of the application component I filled in before. Is it the case that changes to the TableView will always require creating the DataSource all over again (new InfoPackage, new Transformation rules, new DTP)?
    Thank you in advance for your help. Kind regards, Minouschka

    Hello,
    We have basically the same problem: an existing DataSource from a DB Connect source system, and we now need to add a new field, but BW's edit mode does not provide the ability to add or delete fields.
    Were you able to complete this and if so how did you do it?
    Thank you,
    Rick

  • Moving Exchange 2010 Mailbox replicated databases path in DAG environments.

    Hi there,
    I'm trying to get some feedback on the topic of moving the path of replicated Exchange 2010 mailbox databases in DAG environments.
    Here is the situation: I currently have a 3-node DAG (Node 1 and Node 2 are in my main datacenter, and Node 3 is in my Disaster Recovery (DR) site in a remote location).
    I have DB copies on Node 2 and Node 3. The thing is that the DB copies on Node 2 are on an older storage box, and since we got a new storage box, I need to move the DBs and related logs of Node 2 to the new storage box.
    I have found some information about how to deal with this (below I'm listing a KB link), but I would like to reconfirm a couple of things to make sure I'm understanding this correctly.
    Move the Mailbox Database Path for a Mailbox Database Copy:
    https://technet.microsoft.com/en-us/library/dd979782%28v=exchg.141%29.aspx
    According to the KB: “If the mailbox database being moved is replicated to one or more mailbox database copies, you must follow the procedure in this topic to move the mailbox database path.”
    Would this apply to my case even when I'm moving the DB copies and logs on Node 2, as opposed to Node 1 where the source DBs are?
    In step #3 of the procedure, you are supposed to “Remove all mailbox database copies for the database being moved. After all copies are removed, preserve the database and transaction log files from each server from which the database copy is being removed by moving them to another location. These files are being preserved so the database copies do not require re-seeding after they have been re-added.”
    Then in step #7, you are supposed to “Add all of the database copies that were removed in Step #3”.
    As far as I know, when you add a copy of a database, Exchange creates the copy DB and starts to seed the replica servers with an up-to-date copy of the DB and all the current transaction logs at that point. According to the instructions above, you are supposed to re-add the DB copies we preserved... does that mean we need to wait for the DB seeding process to finish after “adding the DB copy” and then replace the new DB copies and logs created by the “Add database copy” function with the DB and logs preserved in step #3?
    Thanks in advance for your feedback!
    FT

    Hi there,
    What the article is stating is that once you have removed the copies, you can keep the existing transaction log files and the database .edb file so that you do not have to do a full seed. You can do this by using the -SeedingPostponed parameter of
    Add-MailboxDatabaseCopy.
    However, quite honestly, if your database isn't that big and you are not worried about performing a full copy of the database again to the other DAG members once you have moved your database to its preferred new location, just add the copy in the normal
    way and remove the legacy files afterwards.
    Oliver Moazzezi | Exchange MVP, MCSA:M, MCITP:Exchange 2010,Exchange 2013, BA (Hons) Anim | http://www.exchange2010.com | http://www.cobweb.com | http://twitter.com/OliverMoazzezi

  • LDAP as datasource for SAP EP7.0

    Hi All,
    I want to configure LDAP as the datasource for SAP EP 7.0 (which has both the J2EE and ABAP stacks).
    I followed many of the documents like: Note 777640 - Using an LDAP Directory as UME Data Source and SAP library.
    But when I want to change the datasource for the UME
    (System configuration > System administration > UME configuration > Data source tab),
    I can only see the ABAP datasource. But when I followed the same path for EP 6.0 (J2EE only),
    there were many data sources, such as database only, AD, etc.
    So how can I proceed with the SAP EP 7.0 configuration with LDAP?
    Any help will be appreciated.
    HAPPY NEW YEAR TO ALL.
    Regards
    Manisha

    Hi Manisha,
    For a dual-stack installation it is NOT possible to change the UME datasource away from ABAP.
    Regards
    Deb

  • Standard Datasource for Payment & Proposal List (S_P99_41000099)

    Hello,
    Looking for standard datasource in SAP R/3 for Payment List and Proposal List.
    The R/3 Query is S_P99_41000099. The fields for this query are coming from PAYR and REGUH tables.
    If there is a standard datasource available, then we can use that; otherwise I have to go for a database view and
    create a generic datasource on the view.
    I have another issue too. I'm also working on a Check Register query, and we are on ECC 6 (NetWeaver 4); the standard datasource for the check register is only available from ECC 6 SP3 onwards. Do we have any other alternative for this? I have installed the BI Content for the BW objects, as 0FIAP_C50 is available, but I am wondering how to handle the back-end datasource. In the Check Register query, all the fields come from the PAYR table.
    Can someone please guide me on this?
    Thanks
    Vandana

    Hi,
    There is no standard datasource for the PAYR table.
    Check the below thread -
    PAYR Table Data source
    Regards,
    Geeta

  • Changing the password for OIM Database User

    We need to change the password of the database user that was created and used to run prepare_xl_db.sh. I changed <password encrypted="true"> to "false" and modified the password in xlconfig.xml, then restarted the app server, but I can't log in. I get the error below. What else is needed?
    ERROR,30 Oct 2008 09:31:56,265,[XELLERATE.SERVER],Class/Method: XLJobStoreCTM/initialize encounter some problems: Error while connecting to Database. Please check if DirectDB settings are correct in Xellerate configuration file.
    FATAL,30 Oct 2008 09:31:56,265,[XELLERATE.SCHEDULER],QuartzSchedulerImpl constructor Exception
    org.quartz.SchedulerConfigException: Failure occured during job recovery. [See nested exception: org.quartz.JobPersistenceException: Failed to obtain DB connection from data source 'noTXDS': org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (ORA-01017: invalid username/password; logon denied
    ) [See nested exception: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (ORA-01017: invalid username/password; logon denied
         at org.quartz.impl.jdbcjobstore.JobStoreSupport.initialize(JobStoreSupport.java:429)
         at org.quartz.impl.jdbcjobstore.JobStoreCMT.initialize(JobStoreCMT.java:131)
         at com.thortech.xl.scheduler.core.quartz.XLJobStoreCTM.initialize(Unknown Source)
         at org.quartz.impl.StdSchedulerFactory.instantiate(StdSchedulerFactory.java:753)
         at org.quartz.impl.StdSchedulerFactory.getScheduler(StdSchedulerFactory.java:885)
         at com.thortech.xl.scheduler.core.quartz.QuartzSchedulerImpl.initialize(Unknown Source)
         at com.thortech.xl.scheduler.core.quartz.QuartzSchedulerImpl.<init>(Unknown Source)
         at com.thortech.xl.scheduler.core.quartz.QuartzSchedulerImpl.getSchedulerInstance(Unknown Source)
         at com.thortech.xl.scheduler.core.SchedulerFactory.getScheduler(Unknown Source)
         at com.thortech.xl.scheduler.deployment.webapp.SchedulerInitServlet.startScheduler(Unknown Source)
         at com.thortech.xl.scheduler.deployment.webapp.SchedulerInitServlet.init(Unknown Source)
         at com.evermind.server.http.HttpApplication.loadServlet(HttpApplication.java:2371)
         at com.evermind.server.http.HttpApplication.findServlet(HttpApplication.java:4824)
         at com.evermind.server.http.HttpApplication.findServlet(HttpApplication.java:4748)
         at com.evermind.server.http.HttpApplication.initPreloadServlets(HttpApplication.java:4936)
         at com.evermind.server.http.HttpApplication.initDynamic(HttpApplication.java:1145)
         at com.evermind.server.http.HttpApplication.<init>(HttpApplication.java:741)
         at com.evermind.server.ApplicationStateRunning.getHttpApplication(ApplicationStateRunning.java:414)
         at com.evermind.server.Application.getHttpApplication(Application.java:570)
         at com.evermind.server.http.HttpSite$HttpApplicationRunTimeReference.createHttpApplicationFromReference(HttpSite.java:1987)
         at com.evermind.server.http.HttpSite$HttpApplicationRunTimeReference.<init>(HttpSite.java:1906)
         at com.evermind.server.http.HttpSite.initApplications(HttpSite.java:643)
         at com.evermind.server.http.HttpSite.setConfig(HttpSite.java:290)
         at com.evermind.server.http.HttpServer.setSites(HttpServer.java:270)
         at com.evermind.server.http.HttpServer.setConfig(HttpServer.java:177)
         at com.evermind.server.ApplicationServer.initializeHttp(ApplicationServer.java:2493)
         at com.evermind.server.ApplicationServer.setConfig(ApplicationServer.java:1042)
         at com.evermind.server.ApplicationServerLauncher.run(ApplicationServerLauncher.java:131)
         at java.lang.Thread.run(Thread.java:595)
    * Nested Exception (Underlying Cause) ---------------
    org.quartz.JobPersistenceException: Failed to obtain DB connection from data source 'noTXDS': org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (ORA-01017: invalid username/password; logon denied
    ) [See nested exception: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (ORA-01017: invalid username/password; logon denied
    )]

    During OIM installation, datasources are created to access the database.
    So when you change the password for the database user, you have to adjust the password in those datasources as well.

  • Replicated database deadlocks.

    We have a production database that sits on server1.
    This database has transactional replication. There is a replicated database that sits on server2. On server2 there are other databases that READ from the replicated database.
    Our admins have noticed that deadlocks are occurring in the replicated database, so the developers have requested a second replicated instance. This request has been turned down with a note: another replicated instance would put a load
    on the production database that sits on sqlserver1.
    How do you collect stats to prove or disprove this statement?
    This is SQL Server 2008 (not R2)

    First, you should not have deadlocks on read.  This would be a result of your snapshot isolation level.  Before doing anything, you should look at that.
    Second, having a 2nd subscriber to a database has no impact at all on the source server.  It simply delivers the data to 2 destinations instead of 1.  In addition, this will in no way fix your problem.
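    Since the reply points at the snapshot isolation setting, a first step is simply to look at how the subscriber database is configured. A rough sketch that only reads the flags; the server, database name and credentials are placeholders:

    // Hedged sketch: check whether READ_COMMITTED_SNAPSHOT is on for the subscriber DB.
    // Server, database and credentials are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CheckSnapshotSettings {
        public static void main(String[] args) throws Exception {
            Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
            String url = "jdbc:sqlserver://server2:1433;databaseName=master";
            try (Connection con = DriverManager.getConnection(url, "sa", "secret");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc "
                     + "FROM sys.databases WHERE name = ?")) {
                ps.setString(1, "ReplicatedDb");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%s RCSI=%d snapshot_isolation=%s%n",
                            rs.getString(1), rs.getInt(2), rs.getString(3));
                    }
                }
            }
        }
    }

    Turning READ_COMMITTED_SNAPSHOT on is a separate, more intrusive change (ALTER DATABASE ... SET READ_COMMITTED_SNAPSHOT ON) and is something to evaluate with the DBAs rather than apply blindly.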

  • Moving of EUL from Prod to Replicated Database

    Hi All,
    I have a question: I have to move the EUL from the PROD database to the replicated database for good. Can I do that in R12? The E-Business Suite version is 12.1.3 and the Discoverer EUL has to be moved; I know it is quite tightly integrated. Is there a way to perform this task? Your comments and views are appreciated.
    Regards
    Younus

    Ah, that old "dynamically". Which means "Some mysterious process which is too complicated for me to understand", at least that's how it's commonly used in Java language forums.
    But it isn't really that mysterious. Given a table, run the query "select * from table". Then use the ResultSetMetaData to find out how many columns you selected. Then create an insert command with that many parameters and build a PreparedStatement from it. For each row in the ResultSet, copy its columns to those parameters and execute the insert.
    Or you could use the database's own features to do the copying. Probably more practical than writing "dynamic" Java code anyway.
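    For completeness, here is a minimal sketch of the "dynamic" copy described above, assuming two already-open JDBC connections and a table that exists with the same shape on both sides (the table name and the connections are placeholders):

    // Hedged sketch of the "dynamic" copy described above: SELECT * from the source,
    // use ResultSetMetaData for the column count, and insert row by row at the target.
    // Table name and the two connections are placeholders.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    public class TableCopier {
        public static void copyTable(Connection source, Connection target, String table)
                throws Exception {
            try (Statement sel = source.createStatement();
                 ResultSet rs = sel.executeQuery("SELECT * FROM " + table)) {
                ResultSetMetaData md = rs.getMetaData();
                int cols = md.getColumnCount();

                // Build "INSERT INTO table VALUES (?, ?, ..., ?)" with one marker per column.
                StringBuilder sql = new StringBuilder("INSERT INTO " + table + " VALUES (");
                for (int i = 1; i <= cols; i++) {
                    sql.append(i == 1 ? "?" : ", ?");
                }
                sql.append(")");

                try (PreparedStatement ins = target.prepareStatement(sql.toString())) {
                    while (rs.next()) {
                        for (int i = 1; i <= cols; i++) {
                            ins.setObject(i, rs.getObject(i));   // copy the column value as-is
                        }
                        ins.executeUpdate();
                    }
                }
            }
        }
    }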
