Using BerkeleyDB 12c on HPUX?

Has anyone been able to successfully use BDB-SQLite on HPUX?
I am trying to create a database called t.db and create a table t1 in it. I get the same behavior when I use my own program to do the same thing.
$ ../db-6.1.19.NC/rls/bin/dbsql t.db
Berkeley DB 12c Release 1, library version 12.1.6.1.19: (June 10, 2014)
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
dbsql> create table t1(col1 int; col2 int);
mpool hash bucket latch id 407  (alloc, shared) waiting to share  [0/2 0%  rd 0/4 0% !Own], mpool hash bucket (alloc, shared)
mpool buffer latch id 1150  (alloc, shared) waiting to share  [0/3 0%  rd 0/1 0% !Own], mpool buffer (alloc, shared)
mpool hash bucket latch id 152  (alloc, shared) waiting to share  [0/2 0%  rd 0/2 0% !Own], mpool hash bucket (alloc, shared)
mpool buffer latch id 1151  (alloc, shared) waiting to share  [0/4 0%  rd 0/1 0% !Own], mpool buffer (alloc, shared)
mpool hash bucket latch id 407  (alloc, shared) waiting to share  [0/2 0%  rd 0/4 0% !Own], mpool hash bucket (alloc, shared)
mpool hash bucket latch id 921  (alloc, shared) waiting to share  [0/2 0%  rd 0/2 0% !Own], mpool hash bucket (alloc, shared)
mpool hash bucket latch id 152  (alloc, shared) waiting to share  [0/2 0%  rd 0/2 0% !Own], mpool hash bucket (alloc, shared)
mpool hash bucket latch id 921  (alloc, shared) waiting to share  [0/2 0%  rd 0/2 0% !Own], mpool hash bucket (alloc, shared)
mpool hash bucket latch id 407  (alloc, shared) waiting to share  [0/2 0%  rd 0/4 0% !Own], mpool hash bucket (alloc, shared)
mpool hash bucket latch id 666  (alloc, shared) waiting to share  [0/2 0%  rd 0/1 0% !Own], mpool hash bucket (alloc, shared)
Error: database disk image is malformed
dbsql>

The compile options are:
CC=aCC ../dist/configure --disable-largefile --enable-sql --enable-sql_compat --enable-cxx CFLAGS="-g -DNDEBUG  -DSQLITE_WITHOUT_MSIZE -DSQLITE_HOMEGROWN_RECURSIVE_MUTEX"
I have also tried with the very simple:
CC=aCC ../dist/configure  --enable-sql --enable-sql_compat --enable-cxx
and
CC=aCC ../dist/configure --enable-sql --enable-cxx --enable-debug
Yes, I am trying to create the database t.db for the first time and then create the tables within it. While tracing the code, the first call to open the database seems to go through without an error:
  sqlite3 *db;
  int rc = sqlite3_open("./t", &db);
  if (rc != SQLITE_OK) {
    /* sqlite3_open() reports failure via its return code; db may be
       non-NULL even on error, so checking db == NULL is not enough. */
    fprintf(stderr, "Cannot open database: %s\n",
            db ? sqlite3_errmsg(db) : "out of memory");
    sqlite3_close(db);
    //exit(1);
    return NULL;
  }
However, the second call seems to be failing after a considerable wait:
  execdb(db, "create table 't1' "
         "(f0 text, f1 text, f2 text, f3 text, f4 text);");
where the execdb function is as follows:
void execdb(sqlite3 *db, const char *stmt)
{
  int ret;
  char *errmsg = NULL;
  bool completed = false;
  /* fails, success and nretries are file-scope counters kept by the
     stress driver. */
  for (int tries = 0; tries < 10; tries++) {
    if ((ret = sqlite3_exec(db, stmt, 0, 0, &errmsg)) == SQLITE_OK) {
      completed = true;
      break;
    }
    if (ret == SQLITE_BUSY || ret == SQLITE_LOCKED)
      fails++;
    else
      success++;
    nretries++;
    if (tries < 9) {  /* keep the last error message for the report below */
      sqlite3_free(errmsg);
      errmsg = NULL;
    }
    sleep(tries);
  }
  if (!completed) {
    fprintf(stderr, "Cannot exec stmt: %s: %s\n",
            stmt, errmsg ? errmsg : "(no message)");
    sqlite3_free(errmsg);
    sleep(1);
  }
}
The makefile for the driver is:
CFLAGS =  -I$(DBTOP)/include -g
LDFLAGS = -L$(DBTOP)/lib -ldb_sql-6.1 -ldb-6.1 -D_REENTRANT -mt -g
CXX = aCC
all:    stress
run:    stress
        SHLIB_PATH=$(DBTOP)/lib ./stress
gdb:    stress
        SHLIB_PATH=$(DBTOP)/lib gdb ./stress
clean:
        rm -rf ./t ./t-journal stress stress.o
stress: stress.o
        $(CXX) -o stress stress.o $(LDFLAGS)
stress.o: stress.cpp
        $(CXX) -c $(CFLAGS) stress.cpp
Please let me know if you need anything else.

Similar Messages

  • Help to design repository using BerkeleyDB

    Hi,
    I am trying to design a repository using BerkeleyDB but not sure what will be the best design.
    Here is the scenario:
    I have one Schema under which we can have multiple classes of Java Objects to be stored.
    Eg: Under Schema1, we can store any of the objects from three classes A, B, C.
    All have the same schema.
    I want to persist all the objects of any of the class A, B, C coming in the system.
    Key is the time at which they enter the system.
    So key is time and value is Instance of any of the object A, B,C.
    So we can have duplicates i.e. multiple instances coming at the same time.
    A,B,C are POJO Java Objects so I don't have control over their implementation.
    From my understanding
    Schema should map to the Environment.
    Within one Environment, we can have different databases for each A, B, C since they have different class definitions.
    What should I use, DPL or Base APIs?
    If using the Base API, how do I write MyTupleBinding, given that the class definitions only become available after the system is configured?
    Is there any way to write a generic TupleBinding class which can convert any Java Object to an Entry and vice versa?
    If using DPL, how do I define Entity, Persistent Entities, etc.?
    User can query on any column of a POJO object so we have no information about secondary keys.
    Please provide me your suggestions.

    Hi,
    > Key is the time at which they enter the system. So key is time and value is Instance of any of the object A, B, C. So we can have duplicates i.e. multiple instances coming at the same time.
    Instead of using the time only as the key, use a two-part key {time, sequence} to make the key unique. That will give you a unique primary key and also allow you to look up records by time using the first part of the key.
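    The two-part key idea is language-agnostic: as long as key bytes compare lexicographically (as BDB byte-array keys do by default), packing {time, sequence} big-endian preserves time ordering while keeping every key unique. A minimal sketch in C under that assumption; the helper name pack_key is illustrative, not part of any BDB API:

```c
#include <stdint.h>
#include <string.h>

/* Pack a {time, sequence} pair big-endian into a 12-byte key so that
 * memcmp() ordering of the keys matches numeric ordering on (time, seq). */
static void pack_key(uint8_t out[12], uint64_t time_ms, uint32_t seq) {
    for (int i = 0; i < 8; i++)            /* 8-byte timestamp, MSB first */
        out[i] = (uint8_t)(time_ms >> (8 * (7 - i)));
    for (int i = 0; i < 4; i++)            /* 4-byte sequence tiebreaker */
        out[8 + i] = (uint8_t)(seq >> (8 * (3 - i)));
}
```

    A range lookup by time then reduces to scanning all keys whose first 8 bytes match the packed timestamp.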
    > A, B, C are POJO Java Objects so I don't have control over their implementation.
    Can you add annotations to these classes? Are they Serializable?
    > From my understanding, Schema should map to the Environment. Within one Environment, we can have different databases for each A, B, C since they have different class definitions.
    Yes.
    > What should I use, DPL or Base APIs?
    If you can add annotations to the classes, use the DPL. If not, and the classes are Serializable, you could use SerialBinding. Otherwise, you'll have to write tuple bindings.
    > If using Base APIs, how to write MyTupleBinding because the definition of the class can be available after system gets configured. Is there any way to write a generic TupleBinding class which can convert any Java Object to an Entry and vice versa?
    The only way to do that is to use Java reflection to discover the fields in the classes. This is a lot of work, but it is how such things are done in Java.
    > If using DPL, how to define Entity, Persistent Entities etc.
    If you can't add annotations, you could write a custom subclass of EntityModel. But that's an advanced use case, and it won't work if there is no unique primary key field in each class.
    > User can query on any column of a POJO object so we have no information about secondary keys.
    You can do one of the following:
    1) Define no secondary keys and iterate over all records in a database when doing a query (queries are expensive if there are lots of records).
    2) Define every field as a secondary key (writes are expensive and lots of disk space is used if there are lots of fields).
    3) Try to dynamically determine when queries are frequently done on a field, and dynamically add a secondary key at that time (lots of work to implement).
    > Please provide me your suggestions.
    Please answer my questions above and then I'll suggest an approach. If the POJO classes are required to be Serializable, your task will be greatly simplified.
    --mark

  • How i use OEM 12c to monitor Microsoft Active directory.

    Hi,
    How do I use OEM 12c to monitor Microsoft Active Directory? Please assist me on this.
    Thanks,
    Sagar

    Hi,
    The fundamental problem with this scenario is that you have non-failover capable modules in a failover chassis - think of the ASA failover pair as one device and the IPS modules as two completely separate devices.
    Then, as already mentioned, add only the primary ASA. (The secondary will never be passing traffic in standby mode so it's not actually needed in MARS) Then, with the first IPS module you can add it as a module of the ASA or as a standalone device (MARS doesn't care). With the second IPS module the only option is to add it as a separate device anyway.
    In a failover scenario the ASAs swap IPs but the IPSs don't, so whereas you'll only ever get messages from the active ASA, you'll get messages from both IPS IPs, depending on which one happens to be in the active ASA at the time.
    Don't forget that you have to manually replicate all IPS configuration every time you make a change.
    HTH
    Andrew.

  • Is there a way to use berkeleydb in memory only mode?

    Hi, all,
    I want to use berkeleydb in memory-only mode; I don't want data to be stored in a db file.
    Is there such a way, and how do I implement it?
    Regards,
    -Bruce

    Hello,
    Please see the "Memory-only or Flash configurations" documentation at:
    http://docs.oracle.com/cd/E17076_02/html/programmer_reference/program_ram.html
    Thanks,
    Sandra

  • ADFBindingFilter error while deploying a war to WLS server using jdev 12c

    I've a OSB Server setup using XBUS_MAIN_GENERIC_120131.1402.S which is using JDEVADF_MAIN_GENERIC_120102.0032.6211.
    I launched the 12c JDev, created a simple adfc web application with a test.jspx page, and deployed it to the OSB Server. The web app deployed and I could launch the test page.
    I then added the page definition for that test page via the 'Go to Page Definition' option.
    Now, if I try to deploy this web app war to the OSB Server, I get the following exception in JDev (Error1), and in the server log I see the error below (Error2).
    Any idea how to resolve this issue?
    Error1 ( on Jdev )
    [03:15:08 AM] ---- Deployment started. ----
    [03:15:08 AM] Target platform is (Weblogic 10.3).
    [03:15:09 AM] Retrieving existing application information
    [03:15:09 AM] Running dependency analysis...
    [03:15:09 AM] Building...
    [03:15:15 AM] Deploying profile...
    [03:15:16 AM] Wrote Web Application Module to /scratch/sansrini/OSB_DEV/OSBMgmtTestApp/OSBMgmtTaskflowsTestApp/deploy/newosb2.war
    [03:15:16 AM] Deploying Application...
    [03:15:18 AM] [Deployer:149193]Operation "deploy" on application "newosb2" has failed on "AdminServer".
    [03:15:18 AM] [Deployer:149034]An exception occurred for task [Deployer:149026]deploy application newosb2 on AdminServer.: [HTTP:101371]There was a failure when processing annotations for application /scratch/sansrini/view_storage/sansrini_xbus2/xbus/build/MW_HOME/user_projects/domains/base_domain/servers/AdminServer/upload/newosb2/app/newosb2.war. Ensure that the annotations are valid. The error is oracle.adf.model.servlet.ADFBindingFilter.
    [03:15:18 AM] weblogic.application.ModuleException: [HTTP:101371]There was a failure when processing annotations for application /scratch/sansrini/view_storage/sansrini_xbus2/xbus/build/MW_HOME/user_projects/domains/base_domain/servers/AdminServer/upload/newosb2/app/newosb2.war. Ensure that the annotations are valid. The error is oracle.adf.model.servlet.ADFBindingFilter
    [03:15:18 AM] Deployment cancelled.
    [03:15:18 AM] ---- Deployment incomplete ----.
    [03:15:18 AM] Remote deployment failed (oracle.jdevimpl.deploy.common.Jsr88RemoteDeployer)
    Error2 ( on wls log )
    <Feb 3, 2012 3:15:18 AM PST> <Warning> <Deployer> <BEA-149078> <Stack trace for message 149004
    weblogic.application.ModuleException: [HTTP:101371]There was a failure when processing annotations for application /scratch/sansrini/view_storage/sansrini_xbus2/xbus/build/MW_HOME/user_projects/domains/base_domain/servers/AdminServer/upload/newosb2/app/newosb2.war. Ensure that the annotations are valid. The error is oracle.adf.model.servlet.ADFBindingFilter
    at weblogic.servlet.internal.WebAppModule.prepare(WebAppModule.java:732)
    at weblogic.application.internal.flow.ScopedModuleDriver.prepare(ScopedModuleDriver.java:188)
    at weblogic.application.internal.ExtensibleModuleWrapper.prepare(ExtensibleModuleWrapper.java:93)
    at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:100)
    at weblogic.application.internal.flow.ModuleStateDriver$1.next(ModuleStateDriver.java:172)
    Truncated. see log file for complete stacktrace
    Caused By: java.lang.ClassNotFoundException: oracle.adf.model.servlet.ADFBindingFilter
    at weblogic.utils.classloaders.GenericClassLoader.findLocalClass(GenericClassLoader.java:297)
    at weblogic.utils.classloaders.GenericClassLoader.findClass(GenericClassLoader.java:270)
    at weblogic.utils.classloaders.ChangeAwareClassLoader.findClass(ChangeAwareClassLoader.java:64)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:305)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:246)
    Truncated. see log file for complete stacktrace

    I guess you should be asking on an internal forum somewhere, because we, the unwashed masses, don't have access to JDev 12c.
    John

  • Bug report & possible patch: Wrong memory allocation when using BerkeleyDB in concurrent processes

    When using the BerkeleyDB shared environment in parallel processes, the processes get an "out of memory" error, even when there is plenty of free memory available. This can result in database corruption.
    Typical use case when this bug manifests is when BerkeleyDB is used by rpm, which is installing an rpm package into custom location, or calls another rpm instance during the installation process.
    The bug seems to originate in the env/env_region.c file: (version of the file from BDB 4.7.25, although the culprit code is the same in newer versions too):
    330     /*
    331      * Allocate room for REGION structures plus overhead.
    332      *
    333      * XXX
    334      * Overhead is so high because encryption passwds, replication vote
    335      * arrays and the thread control block table are all stored in the
    336      * base environment region.  This is a bug, at the least replication
    337      * should have its own region.
    338      *
    339      * Allocate space for thread info blocks.  Max is only advisory,
    340      * so we allocate 25% more.
    341      */
    342     memset(&tregion, 0, sizeof(tregion));
    343     nregions = __memp_max_regions(env) + 10;
    344     size = nregions * sizeof(REGION);
    345     size += dbenv->passwd_len;
    346     size += (dbenv->thr_max + dbenv->thr_max / 4) *
    347         __env_alloc_size(sizeof(DB_THREAD_INFO));
    348     size += env->thr_nbucket * __env_alloc_size(sizeof(DB_HASHTAB));
    349     size += 16 * 1024;
    350     tregion.size = size;
    Usage from the rpm's perspective:
    Line 346 calculates how much memory we need for the DB_THREAD_INFO structures. A DB_THREAD_INFO structure is allocated for every process calling the db4 library. These structures are never deallocated, but when the number of processes is greater than dbenv->thr_max we try to reuse a structure belonging to a process that is already dead (or no longer uses db4). However, the DB_THREAD_INFOs live in hash buckets, and a DB_THREAD_INFO can only be reused if it is in the same hash bucket as the new DB_THREAD_INFO. So line 346 should instead contain:
    346     size += env->thr_nbucket * (dbenv->thr_max + dbenv->thr_max / 4) *
    347         __env_alloc_size(sizeof(DB_THREAD_INFO));
    Why didn't we encounter this problem earlier? There are some magic reserves, as you can see on line 349, and some additional space is created by aligning to blocks. But if we have two processes running at the same time that end up in the same hash bucket, and we repeat this process many times until every hash bucket is filled with two DB_THREAD_INFOs, then we have 2 * env->thr_nbucket (37) = 74 DB_THREAD_INFOs, which is much more than dbenv->thr_max (8) + dbenv->thr_max (8) / 4 = 10; add the allocation from dbc_put, and we are out of memory.
    And how do we create two processes that end up in the same hash bucket? Start one process (rpm -i) and then, in a scriptlet, start many processes (rpm -q ...) in a loop; one of them will land in the same hash bucket as the first process (rpm -i).
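    Plugging the figures quoted above (thr_max = 8, thr_nbucket = 37) into the two formulas makes the shortfall concrete. This is just back-of-envelope arithmetic mirroring env/env_region.c, not library code:

```c
/* Thread-info slot counts under the original formula on line 346 and the
 * proposed per-bucket formula, using the figures from this report. */
static int original_slots(int thr_max) {
    return thr_max + thr_max / 4;                 /* 8 + 2  = 10 slots */
}
static int patched_slots(int thr_max, int thr_nbucket) {
    return thr_nbucket * (thr_max + thr_max / 4); /* 37 * 10 = 370 slots */
}
/* Worst case described above: two live DB_THREAD_INFOs in every bucket. */
static int worst_case_infos(int thr_nbucket) {
    return 2 * thr_nbucket;                       /* 2 * 37 = 74 entries */
}
```

    With 74 live entries against room for only 10 (plus the 16 KB slack on line 349), the region allocator runs dry exactly as described.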
    I would like to know your opinion on this issue, and if the proposed fix would be acceptable.
    Thanks in advance for answers.

    The attached patch for db-4.7 makes two changes:
      it allows enough for each bucket to have the configured number of threads, and
      it initializes env->thr_nbucket, which previously had not been initialized.
    Please let us know how it works for you.
    Regards,
    Charles

  • Using berkeleydb in Blackberry devices.

    Hi,
    I have some experience in blackberry for small application.
    Now I want to develop a database application for a Blackberry device. I am going to use device version 4.0, which does not have an embedded database like SQLite. I have a lot of doubts as I start the development of my application.
    If anyone can answer my following questions, it will help me a lot.
    1. Can I install berkeleydb on Blackberry version 4 devices? If it is possible, how can I install it, or how can I use it in my application? Please give me some links to read.
    2. How can we synchronize the database with a server? I need to store values in the database offline and synchronize them to the server at some point.
    Please give me some links where I can download documentation on how to install berkeleydb on Blackberry devices.
    Thanks.

    Hi,
    BDB JE does not work on Blackberry devices because they run Java ME, and BDB JE requires Java SE. BDB (C edition) does not currently work on Blackberry because it is a C-based product, and RIM would have to support it as part of the platform. It may be worth asking RIM to support BDB as part of their platform.
    --mark

  • IP Address Takeover when using BerkeleyDB

    Yo All,
    We are looking at using IP Address takeover for use with BerkeleyDB. Our server code sits on top of the BDB-HA library, and provides an API over TCP for clients to use.
    For more on how IP address takeover works, look at:
    http://www.ultramonkey.org/3/ip_address_takeover.html
    Has anyone wired in the BerkeleyDB Master Election system to trigger IP address takeovers?
    On a new master taking over, it would take over the VIP that clients use. At the same time, all the slaves would unbind from it.
    I was thinking of just wiring a configurable shell script to be run on completion of a master election. The master would run vip-up.sh, and all of the slaves would run vip-down.sh.
    Each node would also have a 'real' IP. They would all add-remote-site each other with those static IPs. Only our own clients would use the virtual IP for communication.
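    One way to trigger those scripts without polling is the environment's event callback: Berkeley DB delivers replication role changes (DB_EVENT_REP_MASTER, DB_EVENT_REP_CLIENT) through DB_ENV->set_event_notify(). A stubbed sketch of the handler shape, with integer constants standing in for the real db.h event codes and the system() call left as a comment:

```c
#include <stdio.h>
#include <string.h>

/* Stand-ins for the db.h event codes; real code would use
 * DB_EVENT_REP_MASTER / DB_EVENT_REP_CLIENT from <db.h>. */
enum { EV_REP_MASTER = 1, EV_REP_CLIENT = 2 };

static char last_cmd[64];

/* Callback shape used with DB_ENV->set_event_notify(): when this node
 * wins an election, bring the VIP up; when it loses, bring it down. */
static void rep_event_notify(void *dbenv, int event, void *event_info) {
    (void)dbenv; (void)event_info;
    if (event == EV_REP_MASTER)
        snprintf(last_cmd, sizeof last_cmd, "sh vip-up.sh");
    else if (event == EV_REP_CLIENT)
        snprintf(last_cmd, sizeof last_cmd, "sh vip-down.sh");
    else
        return;
    /* A real handler would run it here: system(last_cmd); */
}
```

    The callback fires in the library's own thread context, so keeping it short (record the command, run it elsewhere) is usually the safer design.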
    Thoughts?

    Are these two AirPort Express base station the only routers in your current network configuration? What are the model number for each of them? What is the make & model of your Internet modem?
    For reference, the following Apple Support article provides a step-by-step on how to configure an extended network with AirPort base stations. I would suggest that you review it to see if anything was missed when you configured yours.

  • Problem using BAM 12c with BPEL

    Hi All,
    I have created a simple BPEL process in 12c. In this process I have defined business indicators (1 dimension and 1 measure). When I deploy the process I can see the data objects <CompositeName> Project and <CompositeName> Activity created, with the right columns. However, when I run the BPEL process the entries are created in the data objects, but sometimes the dimension columns are not filled. The BAM logs are full of messages like:
    Internal Exception: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (DEV_SOAINFRA.BEAM_8_STRING_1_UC) violated
    Error Code: 1
    Call: INSERT INTO BEAM_VIEW_39 (BEAM_ID, BEAM_OPTLOCK, B_INPUTS, B_SAMPLE, CASEID, CASE_DEFINITION_ID, COMPONENT_INSTANCE_ID, COMPOSITE_DEFINITION_ID, COMPOSITE_INSTANCE_ID, DATAOBJECT_CREATED, DATAOBJECT_HIERARCHY, DATAOBJECT_MODIFIED, ECID, FLOW_ID, ORGUNITID, PROCESS_DEFINITION_ID, PROCESS_DUE_TIME, PROCESS_END_DATE, PROCESS_END_EVENT_TYPE, PROCESS_END_TIME, PROCESS_ESTIMATED_TIME, PROCESS_EXPIRATION_TIME, PROCESS_FAULT_INFO, PROCESS_FAULT_TYPE, PROCESS_INSTANCE_STATUS, PROCESS_INSTANCE_TITLE, PROCESS_LASTUPDATED_TIME, PROCESS_RUNNING_TIME, PROCESS_STARTTIME_MS, PROCESS_START_DATE, PROCESS_START_EVENT_TYPE, PROCESS_START_TIME, PROCESS_SUSPEND_TIME, PROC_SYS_NUM01, PROC_SYS_NUM02, PROC_SYS_NUM03, PROC_SYS_NUM04, PROC_SYS_NUM05, PROC_SYS_STR02, PROC_SYS_STR03, PROC_SYS_STR04, PROC_SYS_STRG01, PROC_SYS_STRG05, PROC_SYS_TS01, PROC_SYS_TS02, PROC_SYS_TS03, S_SAMPLE, TENANT_ID) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
      bind => [48 parameters bound]
    Query: InsertObjectQuery({BEAM_VIEW_39 2003})
      at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl$1.handleException(EntityManagerSetupImpl.java:696)
      at org.eclipse.persistence.transaction.AbstractSynchronizationListener.handleException(AbstractSynchronizationListener.java:275)
      at org.eclipse.persistence.transaction.AbstractSynchronizationListener.beforeCompletion(AbstractSynchronizationListener.java:170)
      at org.eclipse.persistence.transaction.JTASynchronizationListener.beforeCompletion(JTASynchronizationListener.java:68)
    The unique index is mapped to the composite instance id.
    Anyone any ideas?

    Marc,
    I was able to re-produce this issue. I have logged a bug and we will look into it and get back to you.
    Thanks,
    Lloyd

  • DB_LOCK_DEADLOCK: Using BerkeleyDB Xml in a threaded environment

    Hello,
    I'm having problems running a Berkeley DB application in a threaded environment. In summary, this is what I'm doing:
    I implemented the following class:
    ref class MyTestClass
    {
    private:
         DbEnv* env;
         XmlManager* man;
         unsigned int ctr;
    public:
         MyTestClass()
         {
              ctr = 0;
              env = new DbEnv(0);
              env->set_cachesize(0, 64*1024, 1);
              env->set_lk_max_lockers(1000);
              env->set_lk_max_locks(1000);
              env->set_lk_max_objects(1000);
              env->open("c:\\temp\\SampleDb",
                   DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
                   DB_INIT_MPOOL | DB_INIT_TXN | DB_THREAD | DB_PRIVATE, 0);
              man = new XmlManager(env, DBXML_ALLOW_AUTO_OPEN);
         }
         void MyTestWriter()
         {
              while (true)
              {
                   DbXml::XmlQueryContext *ctx;
                   DbXml::XmlTransaction *txn;
                   DbXml::XmlResults *res;
                   try
                   {
                        ctx = new XmlQueryContext(man->createQueryContext(XmlQueryContext::LiveValues, XmlQueryContext::Lazy));
                        txn = new XmlTransaction(man->createTransaction());
                        res = new XmlResults(man->query(*txn, "for $v in collection('test.dbxml')/sessions return insert nodes <A/> into $v", *ctx, DB_RMW));
                        txn->commit(DB_TXN_SYNC);
                   }
                   catch (XmlException& e)
                   {
                   }
                   finally
                   {
                        delete res;
                        delete txn;
                        delete ctx;
                   }
              }
         }
         void DeadlockUnblocker()
         {
              while (true)
              {
                   int ret;
                   env->lock_detect(0, DB_LOCK_DEFAULT, &ret);
                   Sleep(5000);
              }
         }
    };
    Basically I create a shared MyTestClass object and then spawn 4 threads: 3 of them execute the MyTestWriter method and 1 executes the DeadlockUnblocker method.
    What happens is that the 3 threads block even before completing the first write. After 5 seconds the DeadlockUnblocker is executed and 1 thread is unblocked and throws a "DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock" which is trapped by my catch block. However, the other threads are still hanging on the execution and the entire flow of the application is stopped.
    Can anybody tell me what I'm doing wrong?
    Thanks
    Matteo

    Matteo,
    First, the C++ API works best if you avoid new/delete of Xml* objects. There are relatively few cases where that's necessary. Using scoped objects ensures their destruction.
    As for your hang, the best tool is "db_stat -CA", run in the environment directory at the time of the hang, to find out what is going on. You'll have to drop DB_PRIVATE from the DbEnv::open() flags for it to work. While a deadlock-detection thread is reasonable if you expect a lot of deadlocks (and concurrent writes will produce them), it is best to use DbEnv::set_lk_detect() to get immediate detection.
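    Catching DB_LOCK_DEADLOCK is only half of the usual pattern: the losing thread should abort its transaction and retry the whole operation. A stubbed sketch of that retry loop; do_txn and the error value here are hypothetical stand-ins for a real begin/query/commit body and DB_LOCK_DEADLOCK from <db.h>:

```c
/* Hypothetical return codes; real code would use 0 and DB_LOCK_DEADLOCK. */
enum { OK = 0, ERR_DEADLOCK = -1 };

/* Fake transaction body: pretend the first two attempts lose a deadlock,
 * then the work commits. Stands in for begin/query/commit. */
static int attempts_left = 2;
static int do_txn(void) {
    if (attempts_left > 0) { attempts_left--; return ERR_DEADLOCK; }
    return OK;
}

/* Standard deadlock-retry loop: on ERR_DEADLOCK, abort the transaction
 * (implicit in the stub) and run the whole body again, up to max_retries. */
static int run_with_retry(int max_retries) {
    for (int i = 0; i < max_retries; i++) {
        int ret = do_txn();
        if (ret != ERR_DEADLOCK)
            return ret;   /* success, or a hard error: stop retrying */
    }
    return ERR_DEADLOCK;  /* still deadlocking after max_retries */
}
```

    In the C++ API the same shape applies: catch XmlException, check for the deadlock error, abort the XmlTransaction, and loop.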
    Regards,
    George

  • Why multiple  log files are created while using transaction in berkeley db

    we are using the berkeleydb java edition db base api. We have already read/written a CDR file of 9 lakh (900,000) rows, with transactions and
    without transactions, implementing the secondary database concept. The issues we are getting are as follows:
    with transactions----------the size of the database environment is 1.63gb, due to the number of log files created, each of 10mb.
    without transactions-------the size of the database environment is 588mb, and only one log file is created, of 10mb. So we want to know the concrete reason for this:
    how are log files created, what does it mean to use or not use transactions in a db environment, and what are these db files __db.001, __db.002, __db.003, __db.004, __db.005 and log files like log.0000000001? Please reply soon.

    > we are using berkeleydb java edition db base api
    If you are seeing __db.NNN files in your environment root directory, these are the environment's shared region files. And since you see these, you are using Berkeley DB Core (with the Java/JNI Base API), not Berkeley DB Java Edition.
    > with transaction ...
    > without transaction ...
    First of all, do you need transactions or not? Review the documentation section called "Why transactions?" in the Berkeley DB Programmer's Reference Guide.
    > without transaction-------size of database environment 588mb and here only one log file is created which is of 10mb.
    There should be no logs created when transactions are not used. That single log file has likely remained there from the previous transactional run.
    how log files are created and what is meant of using transaction and not using transaction in db environment and what are this db files db.001,db.002,_db.003,_db.004,__db.005 and log files like log.0000000001Have you reviewed the basic documentations references for Berkeley DB Core?
    - Berkeley DB Programmer's Reference Guide
    in particular sections: The Berkeley DB products, Shared memory regions, Chapter 11. Berkeley DB Transactional Data Store Applications, Chapter 17. The Logging Subsystem.
    - Getting Started with Berkeley DB (Java API Guide) and Getting Started with Berkeley DB Transaction Processing (Java API Guide).
    If so, you would have had the answers to these questions; the __db.NNN files are the environment shared region files needed by the environment's subsystems (transaction, locking, logging, memory pool buffer, mutexes), and the log.MMMMMMMMMM are the log files needed for recoverability and created when running with transactions.
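    To make the naming convention above concrete, here is a small, purely illustrative C++ sketch (not part of the Berkeley DB API; the classify helper is my own) that tells the two kinds of files apart by name:

    ```cpp
    #include <iostream>
    #include <regex>
    #include <string>
    #include <vector>

    // Berkeley DB environment files by naming convention:
    //   __db.NNN        -> shared region files (always present in an environment)
    //   log.MMMMMMMMMM  -> transaction log files (only when logging/transactions are on)
    static std::string classify(const std::string &name) {
        static const std::regex region("__db\\.\\d{3}");
        static const std::regex logfile("log\\.\\d{10}");
        if (std::regex_match(name, region))  return "region file";
        if (std::regex_match(name, logfile)) return "transaction log";
        return "other (database or application file)";
    }

    int main() {
        std::vector<std::string> names = {"__db.001", "__db.005",
                                          "log.0000000001", "mydb.db"};
        for (const auto &n : names)
            std::cout << n << " -> " << classify(n) << '\n';
        return 0;
    }
    ```

    Running a check like this against your environment directory after each run would confirm that the log.* files (and hence the extra 1 GB) only appear in the transactional run.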
    --Andrei

  • Transaction aborts after installing ODAC 12c Release 3

    I have .NET code that uses a transaction scope; it works fine using ODAC 11g Release 4 but fails with "Unable to enlist in a distributed transaction" using ODAC 12c Release 1, 2, or 3. The transaction is to a single database. I am at a loss as to what the issue could be.
    This issue occurs on both Windows 7 and Windows Server 2008 R2.
    I have reviewed the trace logs for both the Microsoft Distributed Transaction Coordinator and the Oracle Services for Microsoft Transaction Server. The MSDTC trace logs indicate that a transaction abort request was received from the calling application ("RECEIVED_ABORT_REQUEST_FROM_BEGINNER"). The ORAMTS trace logs indicate an OCI error and an attempt to begin a distributed transaction without logging on ("OCI_ERROR - 2048", "ORA-02048: attempt to begin distributed transaction without logging on").
    I can reproduce this error with a simple code example that just tries to insert records into a table. If I change the data provider to "System.Data.OracleClient", or uninstall 12c and install 11g, this code works fine.
    DataSet1TableAdapters.DataTable1TableAdapter da = new DataSet1TableAdapters.DataTable1TableAdapter();
    using (TransactionScope scope = new TransactionScope())
    {
        Transaction txn = Transaction.Current;
        try
        {
            da.Insert(0, "This is a title");
            scope.Complete();
            lblmessage.Text = "Transaction Succeeded.";
        }
        catch (Exception ex)
        {
            txn.Rollback();
            lblmessage.Text = "Transaction Failed.";
        }
    }
    Can anyone provide any ideas what is happening?  I really would like to use ODAC 12c.
    Thanks.

    Moving to the ODP.NET forum to get a wider audience.

  • Cannot use multiple databases in my application

    I have written a C++/CLI wrapper to use Berkeley DB without losing performance. The wrapper runs perfectly with one database, but when I open a second database (I create a new instance of the wrapper) the application crashes.
    I have also written a minimal pure C++ application that uses the pure C++ class of my wrapper: when I open and use only one database the code runs perfectly, but with two databases the application goes crazy.
    Info: compiler VC++ 2008, OS: Vista 32-bit
    This is the code of my Berkeley DB class:
    #pragma comment (lib, "libdb47.lib")
    #if defined(WIN32) || defined(WIN64)
    #include <windows.h>
    #include <list>
    #endif
    #ifndef ParamsStructCpp
    #include "ParamsStruct.h"
    #endif
    #include <db_cxx.h>
    #include "BerkeleyMethods.h"
    using namespace std;
    using namespace stdext;
    // Types
    //typedef list<ParamsStructCpp>::iterator it;
    typedef list<ParamsStructCpp> fetchbuffer;
    // Db objects -- NOTE: these are file-scope globals, so every
    // BerkeleyMethods instance shares the same Db handle and DbEnv
    Db * db; // Database object
    DbEnv env(0); // Environment for transactions
    u_int32_t oFlags = DB_CREATE|DB_AUTO_COMMIT|DB_READ_UNCOMMITTED; // Open flags
    u_int32_t env_oFlags = DB_CREATE |
                           DB_THREAD |
                           DB_INIT_LOCK |
                           DB_INIT_LOG |
                           DB_INIT_MPOOL |
                           DB_INIT_TXN |
                           DB_MULTIVERSION; // Environment flags
    // Constructors
    BerkeleyMethods::BerkeleyMethods()
    {
    }
    BerkeleyMethods::BerkeleyMethods(char * dbname, unsigned int db_cache_gbyte, unsigned int db_cache_size,
                        int db_cache_number, int db_type, char * dberr_file, char * envdir, unsigned int dbtxn_timeout,
                        unsigned int dbtxn_max)
    {
         strcpy_s(this->db_name, strlen(dbname)+1, dbname);
         this->db_cache_gbyte = db_cache_gbyte;
         this->db_cache_size = db_cache_size;
         this->db_cache_number = db_cache_number;
         this->db_type = db_type;
         this->db_txn_timeout = dbtxn_timeout;
         this->db_txn_max = dbtxn_max;
         strcpy_s(this->db_err_file, strlen(dberr_file)+1, dberr_file);
         strcpy_s(this->env_dir, strlen(envdir)+1, envdir);
         this->Set_restoremode(false);
    }
    // ==========
    // Functions
    // http://www.codeproject.com/KB/string/UtfConverter.aspx
    bool BerkeleyMethods::OpenDatabase()
    {
         try
         {
              std::cout << "Dbname " << this->db_name << std::endl;
              if (strlen(this->db_name) < 2) {
                   throw new std::exception("Database name is unset");
              }
              // Set database cache
              env.set_cachesize(this->db_cache_gbyte, this->db_cache_size, this->db_cache_number);
              // Set transaction timeout
              if (this->db_txn_timeout > 0) {
                   env.set_timeout(this->db_txn_timeout, DB_SET_TXN_TIMEOUT);
              }
              // Set max opened transactions
              if (this->db_txn_max > 0) {
                   env.set_tx_max(this->db_txn_max);
              }
              // Duplicate key support
              if (this->Get_dup_support()) {
                   env_oFlags = env_oFlags | DB_DUPSORT;
              }
              // Deadlock detection
              env.set_lk_detect(DB_LOCK_MINWRITE);
              // Set the error file
              env.set_errfile(fopen(this->db_err_file, "w+"));
              // Error prefix
              env.set_errpfx("Error > ");
              // Open environment
              env.open(this->env_dir, env_oFlags, 0);
              // Create database object
              db = new Db(&env, 0);
              // Open the database
              switch(this->db_type)
              {
              case 1:
                   db->open(NULL, this->db_name, NULL, DB_BTREE, oFlags, 0);
                   break;
              case 2:
                   db->open(NULL, this->db_name, NULL, DB_HASH, oFlags, 0);
                   break;
              case 3:
                   db->open(NULL, this->db_name, NULL, DB_QUEUE, oFlags, 0);
                   break;
              case 4:
                   db->open(NULL, this->db_name, NULL, DB_RECNO, oFlags, 0);
                   break;
              default:
                   throw new std::exception("Unknown database type");
              }
              u_int32_t gbcacheSize = 0;
              u_int32_t bytecacheSize = 0;
              int ncache = 0;
              env.get_cachesize(&gbcacheSize, &bytecacheSize, &ncache);
              std::cerr << "Cache size is: " << gbcacheSize << " GB plus " << bytecacheSize << " bytes." << std::endl;
              std::cerr << "Number of caches: " << ncache << std::endl;
              return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
         }
         catch(std::exception &e)
         {
              std::cout << e.what() << std::endl;
         }
         return false;
    }
    bool BerkeleyMethods::CloseDatabase()
    {
         try
         {
              db->close(0);
              env.close(0);
              return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
         }
         catch(std::exception &e)
         {
              std::cout << e.what() << std::endl;
         }
         return false;
    }
    bool BerkeleyMethods::AddData(char * key, unsigned long int value)
    {
         if (this->Get_restoremode())
              return false;
         DbTxn * txn = NULL;
         try
         {
              env.txn_begin(NULL, &txn, 0); // Begin transaction
              // Set data
              Dbt _key(key, strlen(key)+1);
              Dbt _value(&value, sizeof(unsigned long int));
              env.txn_checkpoint(512, 2, 0);
              int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
              if (exist == DB_KEYEXIST) {
                   std::cout << "This record already exists" << std::endl;
              }
              txn->commit(0);
              return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              txn->abort();
         }
         catch(...)
         {
              std::cout << "Error" << std::endl;
              txn->abort();
         }
         return false;
    }
    bool BerkeleyMethods::AddData(unsigned long int key, char * value)
    {
         if (this->Get_restoremode())
              return false;
         DbTxn * txn = NULL;
         try
         {
              env.txn_begin(NULL, &txn, 0); // Begin transaction
              Dbt _key(&key, sizeof(unsigned long int));
              Dbt _value(value, strlen(value)+1);
              env.txn_checkpoint(512, 2, 0);
              int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
              if (exist == DB_KEYEXIST) {
                   std::cout << "This record already exists" << std::endl;
              }
              txn->commit(0);
              return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              txn->abort();
         }
         catch(...)
         {
              txn->abort();
         }
         return false;
    }
    bool BerkeleyMethods::AddData(char * key, char * value)
    {
         if (this->Get_restoremode())
              return false;
         DbTxn * txn = NULL;
         try
         {
              env.txn_begin(NULL, &txn, 0); // Begin transaction
              Dbt _key(key, strlen(key)+1);
              Dbt _value(value, strlen(value)+1);
              env.txn_checkpoint(512, 2, 0);
              int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
              if (exist == DB_KEYEXIST) {
                   std::cout << "This record already exists" << std::endl;
              }
              txn->commit(0);
              return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              txn->abort();
         }
         catch(...)
         {
              txn->abort();
         }
         return false;
    }
    bool BerkeleyMethods::AddData(unsigned long int key, unsigned long int value)
    {
         if (this->Get_restoremode())
              return false;
         DbTxn * txn = NULL;
         try
         {
              env.txn_begin(NULL, &txn, 0); // Begin transaction
              Dbt _key(&key, sizeof(unsigned long int));
              Dbt _value(&value, sizeof(unsigned long int));
              env.txn_checkpoint(512, 2, 0);
              int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
              if (exist == DB_KEYEXIST) {
                   std::cout << "This record already exists" << std::endl;
              }
              txn->commit(0);
              return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              txn->abort();
         }
         catch(...)
         {
              txn->abort();
         }
         return false;
    }
    bool BerkeleyMethods::AddData(char * key, ParamsStructCpp value)
    {
         if (this->Get_restoremode())
              return false;
         DbTxn * txn = NULL;
         try
         {
              env.txn_begin(NULL, &txn, 0); // Begin transaction
              Dbt _key(key, strlen(key)+1);
              Dbt _value(&value, sizeof(ParamsStructCpp));
              env.txn_checkpoint(512, 2, 0);
              int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
              if (exist == DB_KEYEXIST) {
                   std::cout << "This record already exists" << std::endl;
              }
              txn->commit(0);
              return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              txn->abort();
         }
         catch(...)
         {
              txn->abort();
         }
         return false;
    }
    bool BerkeleyMethods::AddData(unsigned long int key, struct ParamsStructCpp value)
    {
         if (this->Get_restoremode())
              return false;
         DbTxn * txn = NULL;
         try
         {
              env.txn_begin(NULL, &txn, 0); // Begin transaction
              Dbt _key(&key, sizeof(unsigned long int));
              Dbt _value(&value, sizeof(ParamsStructCpp));
              env.txn_checkpoint(512, 2, 0);
              int exist = db->put(txn, &_key, &_value, DB_NOOVERWRITE);
              if (exist == DB_KEYEXIST) {
                   std::cout << "This record already exists" << std::endl;
              }
              txn->commit(0);
              return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              txn->abort();
         }
         catch(...)
         {
              txn->abort();
         }
         return false;
    }
    bool BerkeleyMethods::Exist(unsigned long int key)
    {
         if (this->Get_restoremode())
              return true;
         DbTxn * txn = NULL;
         try
         {
              env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
              Dbt _key(&key, sizeof(unsigned long int));
              int state = db->exists(txn, &_key, DB_READ_COMMITTED);
              txn->commit(0);
              if (state == 0)
                   return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              txn->abort();
         }
         catch(...)
         {
              txn->abort();
         }
         return false;
    }
    bool BerkeleyMethods::Exist(char * key)
    {
         if (this->Get_restoremode())
              return true;
         DbTxn * txn = NULL;
         try
         {
              env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
              Dbt _key(key, strlen(key)+1);
              int state = db->exists(txn, &_key, DB_READ_COMMITTED);
              txn->commit(0);
              if (state == 0)
                   return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              txn->abort();
         }
         catch(...)
         {
              txn->abort();
         }
         return false;
    }
    void BerkeleyMethods::GetData (char * pData, int nbr, unsigned long int key)
    {
         if (this->Get_restoremode())
              return;
         DbTxn * txn = NULL;
         Dbc *dbcp = NULL;
         try
         {
              env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
              db->cursor(txn, &dbcp, 0);
              Dbt _key;
              Dbt data;
              _key.set_data(&key);
              _key.set_size(sizeof(unsigned long int));
              dbcp->get(&_key, &data, DB_SET); // DB_SET looks up this key (DB_FIRST would ignore it)
              char * temp = (char *)data.get_data();
              strcpy_s(pData, strlen(temp)+1, temp);
              dbcp->close();
              txn->commit(0);
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              if (dbcp != NULL)
                   dbcp->close();
              if (txn != NULL)
                   txn->abort();
         }
         catch(...)
         {
              if (dbcp != NULL)
                   dbcp->close();
              if (txn != NULL)
                   txn->abort();
         }
    }
    unsigned long int BerkeleyMethods::GetData(char * key)
    {
         if (this->Get_restoremode())
              return 0;
         DbTxn * txn = NULL;
         Dbc *dbcp = NULL;
         try
         {
              env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
              db->cursor(txn, &dbcp, 0);
              Dbt _key;
              Dbt data;
              _key.set_data(key);
              _key.set_size(strlen(key)+1);
              dbcp->get(&_key, &data, DB_SET);
              unsigned long int xdata = *((unsigned long int *)data.get_data());
              dbcp->close();
              txn->commit(0);
              return xdata;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              dbcp->close();
              txn->abort();
         }
         catch(...)
         {
              dbcp->close();
              txn->abort();
         }
         return 0;
    }
    ParamsStructCpp * BerkeleyMethods::GetData(unsigned long int key, bool null)
    {
         if (this->Get_restoremode()) {
              return new ParamsStructCpp();
         }
         DbTxn * txn = NULL;
         Dbc *dbcp = NULL;
         try
         {
              env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
              db->cursor(txn, &dbcp, 0);
              Dbt _key;
              Dbt data;
              _key.set_data(&key);
              _key.set_size(sizeof(unsigned long int));
              dbcp->get(&_key, &data, DB_SET);
              // NOTE: this points into BDB-owned memory; copy it before the cursor is closed
              ParamsStructCpp * temp = (ParamsStructCpp *)data.get_data();
              dbcp->close();
              txn->commit(0);
              return temp;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              dbcp->close();
              txn->abort();
         }
         catch(...)
         {
              dbcp->close();
              txn->abort();
         }
         return new ParamsStructCpp();
    }
    ParamsStructCpp * BerkeleyMethods::GetData(char * key, bool null)
    {
         if (this->Get_restoremode()) {
              return new ParamsStructCpp();
         }
         DbTxn * txn = NULL;
         Dbc *dbcp = NULL;
         try
         {
              env.txn_begin(NULL, &txn, DB_TXN_SNAPSHOT); // Begin transaction
              db->cursor(txn, &dbcp, 0);
              Dbt _key;
              Dbt data;
              _key.set_data(key);
              _key.set_size(strlen(key)+1);
              dbcp->get(&_key, &data, DB_SET);
              ParamsStructCpp * xdata = (ParamsStructCpp *)data.get_data();
              dbcp->close();
              txn->commit(0);
              return xdata;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              dbcp->close();
              txn->abort();
         }
         catch(...)
         {
              dbcp->close();
              txn->abort();
         }
         return new ParamsStructCpp();
    }
    list<ParamsStruct> BerkeleyMethods::FetchAllDatabase ()
    {
         list<ParamsStruct> temp;
         Dbc *dbcp = NULL;
         try
         {
              db->cursor(NULL, &dbcp, 0);
              Dbt _key;
              Dbt data;
              while (dbcp->get(&_key, &data, DB_NEXT) == 0) // Dbc::get returns 0 on success
              {
                   unsigned long int key = *((unsigned long int *)_key.get_data());
                   char * datetime = (char *)data.get_data();
                   ParamsStruct p;
                   strcpy_s(p.lastaccess, strlen(datetime)+1, datetime);
                   // p.downloaded = ...; (incomplete in the original post)
                   temp.push_back(p);
                   //temp.insert(Tuple(datetime, key));
              }
              dbcp->close();
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
         }
         catch(...)
         {
         }
         return temp;
    }
    bool BerkeleyMethods::DeleteData(unsigned long int key)
    {
         if (this->Get_restoremode())
              return true;
         DbTxn * txn = NULL;
         try
         {
              env.txn_checkpoint(128, 1, 0);
              env.txn_begin(NULL, &txn, 0); // Begin transaction
              Dbt _key;
              _key.set_data(&key);
              _key.set_size(sizeof(unsigned long int));
              db->del(txn, &_key, 0);
              txn->commit(0);
              return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              txn->abort();
         }
         catch(...)
         {
              txn->abort();
         }
         return false;
    }
    bool BerkeleyMethods::DeleteData(char * key)
    {
         if (this->Get_restoremode())
              return true;
         DbTxn * txn = NULL;
         try
         {
              env.txn_begin(NULL, &txn, 0); // Begin transaction
              Dbt _key;
              _key.set_data(key);
              _key.set_size(strlen(key)+1);
              db->del(txn, &_key, 0);
              txn->commit(0);
              return true;
         }
         catch(DbException &e)
         {
              std::cout << e.what() << std::endl;
              txn->abort();
         }
         catch(...)
         {
              txn->abort();
         }
         return false;
    }
    int BerkeleyMethods::Sync()
    {
         if (this->Get_restoremode())
              return -1;
         try
         {
              return db->sync(0);
         }
         catch(...)
         {
              return -1;
         }
    }
    int BerkeleyMethods::Count()
    {
         if (this->Get_restoremode())
              return -1;
         Dbc *dbcp = NULL;
         int count = 0;
         try
         {
              Dbt key;
              Dbt data;
              db->cursor(NULL, &dbcp, 0);
              while (dbcp->get(&key, &data, DB_NEXT) == 0) {
                   count++;
              }
              dbcp->close();
              return count;
         }
         catch(...)
         {
              return -1;
         }
    }
    BerkeleyMethods::~BerkeleyMethods()
    {
         if (db) {
              db->sync(0);
              db->close(0);
              delete db;
              db = NULL;
         }
         env.close(0);
    }
    =====
    The code that uses this class:
         BerkeleyMethods db("test.db", 0, 524288000, 1, 1, "log.txt", "./Env_dir", 1000000 * 5, 600000);
         BerkeleyMethods db1("test2.db", 0, 524288000, 1, 1, "log2.txt", "./Env_dir2", 1000000 * 5, 600000);
         bool z = db.OpenDatabase();
         db1.OpenDatabase();
         if (z)
              std::cout << "Database opened" << std::endl;
         for (unsigned int i = 0; i < 1000; i++)
         {
              ParamsStructCpp p = { 10, "02/08/2008 14:46:23", 789 };
              bool a = db.AddData(i, p);
              db1.AddData(i, p);
              if (a)
                   std::cout << "Data added OK" << std::endl;
         }
         for (unsigned int i = 0; i < 1000; i++)
         {
              ParamsStructCpp * c = db.GetData(i, false);
              ParamsStructCpp * c1 = db1.GetData(i, false);
              std::cout << "Data retrieved " << c->downloaded << " : " << c->lastaccess << " : " << c->waittime << std::endl;
              std::cout << "Data retrieved " << c1->downloaded << " : " << c1->lastaccess << " : " << c1->waittime << std::endl;
         }
    // ====
    The application output shows that when using two databases the data is not set correctly. It seems that db and db1 are the same object :|.
    For example, I insert a key toto with value 4 in db; inserting the same key in db1 should normally cause no problem, but Berkeley DB says the key toto already exists in db1 when it does not.
    I don't understand.
    NB: sorry for my English
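    The symptom described above is consistent with the file-scope globals at the top of the wrapper: Db * db; and DbEnv env(0); are shared by every BerkeleyMethods instance, so the second OpenDatabase() call overwrites the handle the first instance is using. A minimal stand-alone C++ sketch of the same pitfall (no Berkeley DB involved; the Wrapper classes and std::map storage are my own stand-ins for the Db handle):

    ```cpp
    #include <cassert>
    #include <map>
    #include <string>

    // The bug: state that should be per-instance lives at file scope,
    // so every "instance" of the wrapper shares it.
    static std::map<std::string, int> shared_storage; // shared by ALL GlobalWrapper objects

    class GlobalWrapper {
    public:
        // mirrors db->put(..., DB_NOOVERWRITE): returns false if the key exists
        bool AddData(const std::string &key, int value) {
            return shared_storage.emplace(key, value).second;
        }
    };

    // The fix: make the handle a (non-static) member, one per instance.
    class MemberWrapper {
        std::map<std::string, int> storage_; // one store per wrapper instance
    public:
        bool AddData(const std::string &key, int value) {
            return storage_.emplace(key, value).second;
        }
    };

    int main() {
        GlobalWrapper g1, g2;
        assert(g1.AddData("toto", 4));
        assert(!g2.AddData("toto", 4)); // "already exists" -- g2 shares g1's state

        MemberWrapper m1, m2;
        assert(m1.AddData("toto", 4));
        assert(m2.AddData("toto", 4)); // independent instances both succeed
        return 0;
    }
    ```

    The equivalent fix in the wrapper would be to make db, env, and the flag variables members of BerkeleyMethods, so each instance opens its own environment directory.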

    Michael Cahill wrote:
    > As a side note, it is unlikely that you want both DB_READ_UNCOMMITTED and DB_MULTIVERSION to be set. This combination pays a price during updates for maintaining multiple versions, but still requires (short-term) locks to be held during reads.
    The BDB/XML Transaction Processing Guide states the following:
    [...]in addition to BDB XML's normal degrees of isolation, you can also use snapshot isolation. This allows you to avoid the read locks that serializable isolation requires.
    http://www.oracle.com/technology/documentation/berkeley-db/xml/gsg_xml_txn/cxx/isolation.html
    This seems to contradict what you're saying here.
    Is there a general guideline on whether or not to use MVCC together with a relaxed isolation degree like DB_READ_UNCOMMITTED? Should the statement in the BDB/XML TP Guide rather have "as an alternative to" instead of "in addition to"?
    Michael Ludwig

  • Oracle EBS 12.1.3 with 12c DB DG

    Hello Gurus,
    Can you kindly tell me the exact note to follow? I want to implement Data Guard for a DR site of an EBS database on 12.1.0.2.
    Regards;

    please see
    Business Continuity for Oracle E-Business Suite Release 12.1 Using Oracle 12c Physical Standby Database (Doc ID 1900663.1)
    ApPsMaStI
    sharing is Caring

  • ADF 12c and WLS 11g

    Hi,
    I use ADF 12c and I want to deploy my applications on WebLogic 11g.
    Is it possible? How can I install ADF 12c on WLS 11g?
    Thanks

    Hi,
    ADF 12c is not supported for installation with WebLogic 11g.
    The information at http://www.oracle.com/technetwork/developer-tools/jdev/documentation/1212-cert-1964670.html#Abrams-SupportInformation-ApplicationServers should be read as follows:
    for ADF 12c the only supported WebLogic Server is WebLogic 12c; that is stated in the 3rd column of the table in the above document. The 2nd column states that when you use JDeveloper 12c you can connect and deploy directly from JDeveloper to the mentioned application servers (10.3.5+ and 12.1.2). However, for ADF 12c the certification is only against 12c.
    So it is not possible to install FMW 12c onto WebLogic 10.3.6.
    Regards,
    Prakash.
