Multi Threaded Insert in Berkeley DB

Hi,
This is my requirement:
I have to read 9 million records from an Oracle DB.
I have to identify the KEY and instantiate a VO object as the VALUE.
I have to insert the 9 million records (key and value VOs) into Berkeley DB.
What is the best way to do this? Performance is a key criterion. Is it OK to use multiple threads?
Thanks
Selva.

Hi Selva,
I could not tell if you have reviewed the Getting Started Guides for Berkeley DB Java Edition. If you have not, they are a great place to start and will help you come up with a good design; they describe many performance tradeoffs. For example:
Running without transactions is faster; but if you are using transactions, running with setTxnNoSync() is a great way to improve performance. For more information:
http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/usingtxns.html#nodurabletxn
http://www.oracle.com/technology/documentation/berkeley-db/je/GettingStartedGuide/index.html
http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/index.html
Multithreaded operations are supported, but you have to consider whether you are going to introduce contention. JE has record-level locking, but there can be contention on shared nodes in the btree. You may want to start with a small number of threads and measure performance as you gradually increase the thread count.
You may want to consider using deferred write. For more information:
http://www.oracle.com/technology/documentation/berkeley-db/je/GettingStartedGuide/DB.html#dwdatabase
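As a concrete illustration, a minimal sketch of a non-transactional bulk load into a deferred-write database (paths, names, and the VO serialization are placeholders); the commented lines show the transactional variant with setTxnNoSync():
import java.io.File;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
public class BulkLoad {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        // Transactional alternative: drop setDeferredWrite() below and use
        //   envConfig.setTransactional(true);
        //   envConfig.setTxnNoSync(true);   // trade durability for throughput
        Environment env = new Environment(new File("/tmp/je-env"), envConfig);
        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        dbConfig.setDeferredWrite(true);     // buffer writes; no log sync per put
        Database db = env.openDatabase(null, "records", dbConfig);
        DatabaseEntry key = new DatabaseEntry("someKey".getBytes("UTF-8"));
        DatabaseEntry value = new DatabaseEntry(new byte[0] /* serialized VO bytes */);
        db.put(null, key, value);            // repeat for each of the 9 million records
        db.sync();                           // flush deferred writes before closing
        db.close();
        env.close();
    }
}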
Finally, there are many performance tips in the Berkeley DB Java Edition FAQ and in this forum, which already has several performance threads.
Ron Cohen

Similar Messages

  • Any general tips on getting better performance out of a multi-table insert?

    I have been struggling with coding a multi-table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general, so now that the query is built and works fine I am sad to see it's quite slow.
    I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
    First let me describe my scenario to see if you agree that my performance is slow...
    It's an INSERT ALL command, which ends up inserting into 5 separate tables, conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
    Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
    Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
    Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
    Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
    Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
    Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
    The parallelism is just about the only customization I have done myself. Why 4? I don't know; it's pretty arbitrary, to be honest.
    Indexes?
    Table 1 has 3 index + PK.
    Table 2 has 0 index + FK + PK.
    Table 3 has 4 index + FK + PK.
    Table 4 has 3 index + FK + PK.
    Table 5 has 4 index + FK + PK.
    None of the indexes are anything crazy; maybe 3 or 4 of them are on multiple columns, 2-3 max. The rest are on single columns.
    The query itself looks something like this:
    insert /*+ append */ all
    when 1=1 then
    into table1 (...) values (...)
    into table2 (...) values (...)
    when a=b then
    into table3 (...) values (...)
    when a=c then
    into table3 (...) values (...)
    when p=q then
    into table4(...) values (...)
    when x=y then
    into table5(...) values (...)
    select .... from source_table
    Hints I tried are with append, without append, and parallel (though adding parallel seemed to make the query run serially, according to my session browser).
    Now for the performance:
    It does about 8,000 rows per minute on table1, so it should have roughly that many in table2, table3 and table4 as well, and then a subset of that in table5.
    Does that seem normal, or am I expecting too much?
    I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
    If my performance does seem slow, what else do you think I should try? Is there any information I could gather to see if maybe the database is poorly configured for this?
    P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ORA-25402: transaction must roll back" after it had been running for 3.5 hours, so I lost all the progress it made and have to start over. Plus I wonder if the sheer amount of data being queued for commit/rollback is causing some of the problem?

    Looks like there are about 54 sessions on my database, 7 of which belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
    In v$session_event there are 546 rows. Filtered to the SIDs of my current sessions and ordered by TIME_WAITED_MICRO desc (columns: SID, EVENT, TOTAL_WAITS, TOTAL_TIMEOUTS, TIME_WAITED, AVERAGE_WAIT, MAX_WAIT, TIME_WAITED_MICRO, EVENT_ID, WAIT_CLASS_ID, WAIT_CLASS#, WAIT_CLASS):
    510     events in waitclass Other     30670     9161     329759     10.75     196     3297590639     1736664284     1893977003     0     Other
    512     events in waitclass Other     32428     10920     329728     10.17     196     3297276553     1736664284     1893977003     0     Other
    243     events in waitclass Other     21513     5     329594     15.32     196     3295935977     1736664284     1893977003     0     Other
    223     events in waitclass Other     21570     52     329590     15.28     196     3295898897     1736664284     1893977003     0     Other
    241     row cache lock     1273669     0     42137     0.03     267     421374408     1714089451     3875070507     4     Concurrency
    241     events in waitclass Other     614793     0     34266     0.06     12     342660764     1736664284     1893977003     0     Other
    241     db file sequential read     13323     0     3948     0.3     13     39475015     2652584166     1740759767     8     User I/O
    241     SQL*Net message from client     7     0     1608     229.65     1566     16075283     1421975091     2723168908     6     Idle
    241     log file switch completion     83     0     459     5.54     73     4594763     3834950329     3290255840     2     Configuration
    241     gc current grant 2-way     5023     0     159     0.03     0     1591377     2685450749     3871361733     11     Cluster
    241     os thread startup     4     0     55     13.82     26     552895     86156091     3875070507     4     Concurrency
    241     enq: HW - contention     574     0     38     0.07     0     378395     1645217925     3290255840     2     Configuration
    512     PX Deq: Execution Msg     3     0     28     9.45     28     283374     98582416     2723168908     6     Idle
    243     PX Deq: Execution Msg     3     0     27     9.1     27     272983     98582416     2723168908     6     Idle
    223     PX Deq: Execution Msg     3     0     25     8.26     24     247673     98582416     2723168908     6     Idle
    510     PX Deq: Execution Msg     3     0     24     7.86     23     235777     98582416     2723168908     6     Idle
    243     PX Deq Credit: need buffer     1     0     17     17.2     17     171964     2267953574     2723168908     6     Idle
    223     PX Deq Credit: need buffer     1     0     16     15.92     16     159230     2267953574     2723168908     6     Idle
    512     PX Deq Credit: need buffer     1     0     16     15.84     16     158420     2267953574     2723168908     6     Idle
    510     direct path read     360     0     15     0.04     4     153411     3926164927     1740759767     8     User I/O
    243     direct path read     352     0     13     0.04     6     134188     3926164927     1740759767     8     User I/O
    223     direct path read     359     0     13     0.04     5     129859     3926164927     1740759767     8     User I/O
    241     PX Deq: Execute Reply     6     0     13     2.12     10     127246     2599037852     2723168908     6     Idle
    510     PX Deq Credit: need buffer     1     0     12     12.28     12     122777     2267953574     2723168908     6     Idle
    512     direct path read     351     0     12     0.03     5     121579     3926164927     1740759767     8     User I/O
    241     PX Deq: Parse Reply     7     0     9     1.28     6     89348     4255662421     2723168908     6     Idle
    241     SQL*Net break/reset to client     2     0     6     2.91     6     58253     1963888671     4217450380     1     Application
    241     log file sync     1     0     5     5.14     5     51417     1328744198     3386400367     5     Commit
    510     cursor: pin S wait on X     3     2     2     0.83     1     24922     1729366244     3875070507     4     Concurrency
    512     cursor: pin S wait on X     2     2     2     1.07     1     21407     1729366244     3875070507     4     Concurrency
    243     cursor: pin S wait on X     2     2     2     1.06     1     21251     1729366244     3875070507     4     Concurrency
    241     library cache lock     29     0     1     0.05     0     13228     916468430     3875070507     4     Concurrency
    241     PX Deq: Join ACK     4     0     0     0.07     0     2789     4205438796     2723168908     6     Idle
    241     SQL*Net more data from client     6     0     0     0.04     0     2474     3530226808     2000153315     7     Network
    241     gc current block 2-way     5     0     0     0.04     0     2090     111015833     3871361733     11     Cluster
    241     enq: KO - fast object checkpoint     4     0     0     0.04     0     1735     4205197519     4217450380     1     Application
    241     gc current grant busy     4     0     0     0.03     0     1337     2277737081     3871361733     11     Cluster
    241     gc cr block 2-way     1     0     0     0.06     0     586     737661873     3871361733     11     Cluster
    223     db file sequential read     1     0     0     0.05     0     461     2652584166     1740759767     8     User I/O
    223     gc current block 2-way     1     0     0     0.05     0     452     111015833     3871361733     11     Cluster
    241     latch: row cache objects     2     0     0     0.02     0     434     1117386924     3875070507     4     Concurrency
    241     enq: TM - contention     1     0     0     0.04     0     379     668627480     4217450380     1     Application
    512     PX Deq: Msg Fragment     4     0     0     0.01     0     269     77145095     2723168908     6     Idle
    241     latch: library cache     3     0     0     0.01     0     243     589947255     3875070507     4     Concurrency
    510     PX Deq: Msg Fragment     3     0     0     0.01     0     215     77145095     2723168908     6     Idle
    223     PX Deq: Msg Fragment     4     0     0     0     0     145     77145095     2723168908     6     Idle
    241     buffer busy waits     1     0     0     0.01     0     142     2161531084     3875070507     4     Concurrency
    243     PX Deq: Msg Fragment     2     0     0     0     0     84     77145095     2723168908     6     Idle
    241     latch: cache buffers chains     4     0     0     0     0     73     2779959231     3875070507     4     Concurrency
    241     SQL*Net message to client     7     0     0     0     0     51     2067390145     2000153315     7     Network
    (yikes, is there a way to wrap that in the equivalent of other forums' code tags?)
    v$session_wait (columns: SID, SEQ#, EVENT, P1TEXT, P1, P1RAW, P2TEXT, P2, P2RAW, P3TEXT, P3, P3RAW, WAIT_CLASS_ID, WAIT_CLASS#, WAIT_CLASS, WAIT_TIME, SECONDS_IN_WAIT, STATE):
    223     835     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     10     WAITING
    241     22819     row cache lock     cache id     13     000000000000000D     mode     0     00     request     5     0000000000000005     3875070507     4     Concurrency     -1     0     WAITED SHORT TIME
    243     747     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     7     WAITING
    510     10729     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     2     WAITING
    512     12718     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     4     WAITING
    v$sess_io (columns: SID, BLOCK_GETS, CONSISTENT_GETS, PHYSICAL_READS, BLOCK_CHANGES, CONSISTENT_CHANGES):
    223     0     5779     5741     0     0
    241     38773810     2544298     15107     27274891     0
    243     0     5702     5688     0     0
    510     0     5729     5724     0     0
    512     0     5682     5678     0     0

  • Multi-thread failure - Error in assignment

    Hello
    I have a C++ processor program running under Windows XP with Oracle 9i. My program accesses Oracle through an ODBC driver, version 9.2.0.4.0. It can be launched multi-threaded to increase performance. When I launch it with one thread, everything is fine. When I use several threads, I have problems: the ODBC driver returns an "error in assignment ... General error" message and my update queries fail. Under SQL Server it works without problems. It seems to be a kind of deadlock. When I uncheck the "enable query timeout" box in my ODBC driver, my program encounters a problem and freezes...
    Could someone help me?

    user13335017 wrote:
    I had thought the above solutions would work; however, they do not. Some exhibited errors are:
    A. "Attempt to use database while environment is closed." This error applies to 2, 3 and 4 all the way;
    B. "Attempt to read / write database while database is closed." This error applies to 3 in particular;
    C. "Attempt to close environment while some database is still open." This error applies to 5.
    Please help me with designing a better strategy to solve the concurrent issue. Many thanks in advance.All these are expected errors. You should design the application so that you do not close an environment handle while database handles are still open, keep database handles open for as long as operations need to be performed on the underlying databases, open the database handles after opening the database handles, and close database handles before closing the environment handle.
    In short, in pseudo-code, you should have something like this:
    - open environment handle,
    - open database handles,
    - perform whatever operations are needed on the databases,
    - close database handles,
    - close environment handle.
    You can refer to the Getting Started with Data Storage and the Getting Started with Transaction Processing guides appropriate for the API you are using, from the Berkeley DB documentation page.
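    For example, with the Java Edition API that ordering looks roughly like this (a sketch; names are hypothetical):
    import java.io.File;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    public class HandleLifecycle {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            Environment env = new Environment(new File("envHome"), envConfig);  // 1. open the environment
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            Database db = env.openDatabase(null, "myDatabase", dbConfig);       // 2. open database handles
            try {
                // 3. perform whatever operations are needed on the databases
            } finally {
                db.close();   // 4. close database handles first
                env.close();  // 5. close the environment last
            }
        }
    }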
    Regards,
    Andrei

  • Is a Trigger Multi Thread?

    I'm trying to execute a SELECT statement WHERE status = X to perform some operations when I insert a row in my table.
    If there is another insert before the first insert's trigger has finished executing, is the trigger executed in another thread, or does it wait until the first one has ended?
    Is a trigger multi-threaded?
    If it is multi-threaded, is it possible to make it single-threaded, or to wait until the first one ends?

    Each session will execute its own copy of the trigger.
    To single-thread access to rows within a table, look at the SELECT FOR UPDATE statement.
    When designing transactions, give thought to Oracle's read-consistency model: readers will not wait on updaters to see the row contents.
    But by selecting for update you will cause other sessions that want to update the same row taken by the first session to wait for the first session to commit or roll back before they perform their updates.
    If you need to single-thread access to the entire table, you could be designing an application that will not scale to support multi-session access.
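    For illustration, a minimal JDBC sketch of that pattern (connection string, table and column names are hypothetical):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    public class RowLockDemo {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@db_host:1521:db_sid", "user", "password");
            con.setAutoCommit(false);
            PreparedStatement ps = con.prepareStatement(
                    "select status from my_table where id = ? for update");
            ps.setLong(1, 42L);
            ResultSet rs = ps.executeQuery();  // a second session's FOR UPDATE on this row now waits
            if (rs.next()) {
                // act on the status while holding the row lock
            }
            con.commit();  // releases the lock; the waiting session proceeds
            con.close();
        }
    }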
    HTH -- Mark D Powell --

  • Multi-thread application recompile object with make but runs with old value

    Hi. I am working on a simulation in Java, and there is some behavior going on that I cannot understand. Granted, I'm a mechanical engineer, not a CS student (a fact that is probably made obvious by the code I'm posting below). The simulation comprises three parts: a controller, a robot, and a 3-d sensor. Here's the code for the "Simulation" application.
    class Simulation {
        void go() {
            ThreeDSensor sensorRunner = new ThreeDSensor();
            VSController controllerRunner = new VSController();
            KukaKR15SL robotRunner = new KukaKR15SL();
            Thread sensorThread = new Thread(sensorRunner);
            Thread controllerThread = new Thread(controllerRunner);
            Thread robotThread = new Thread(robotRunner);
            sensorThread.start();
            try {
                Thread.sleep(1000);  // Give sensorThread time to start in order to catch first triggers
            } catch (InterruptedException ex) {
                ex.printStackTrace();
            }
            controllerThread.start();
            try {
                Thread.sleep(1000);  // Give controllerThread time to open the socket for communication with robotThread
            } catch (InterruptedException ex) {
                ex.printStackTrace();
            }
            robotThread.start();
        }
        public static void main(String[] args) {
            Simulation sim = new Simulation();
            sim.go();
        }
    }
    I guess the big reason I'm using multi-threading is that once this simulation is working I want to be able to just run VSController alone and have it interface with a robot controller and a PC performing image processing. So with multiple threads I'm sending TCP and UDP messages around just like the final system will.
    I have made an object for my VSController that just stores values used by the simulation. That way I could have them all in one place instead of hunting through methods to change them. One example is "double noiseThreshold". Quickly, here is the code for "ControllerSettings.java".
    class ControllerSettings {
        final double cameraXOffset = 0;  // If > 0 then the origin of the camera CS is not on the center line of the lens
        final double cameraYOffset = 0;  // If > 0 then the origin of the camera CS is not on the center line of the lens
        final double cameraZOffset = 700;  // The distance that must be kept between the camera and the target
        // Error magnitude less than this is disregarded (in centimeters if roundingData else in millimeters)
        final double noiseThreshold = 60;
        final boolean estimatingEndPoints = false;  // If the controller is using two images per cycle then true
        final boolean roundingData = false;
        final boolean using3DData = true;  // How double[] sent from Matlab image processing is used
        /*
         * If the robot controller uses the output of this controller to command
         * motions in cartesian space then true.  This is used in two places: 1) initial guess of Jacobian,
         * and 2) "commandType" element sent to robot controller.
         */
        final boolean useRobotControllerModel = false;
        final double thetaBumpValueForEstimationOfJacobian = .1;  // Distance each joint is jogged in estimation process
        final double bumpDistance = 50;  // Distance robot moves each time (magnitude of translation in mm)
        final double limitAngular = .5;  // Max amount robot joint will be commanded to rotate (in degrees)
    }
    And here is some pertinent code from "VSController.java".
    class VSController implements Runnable {
        ControllerSettings cSettings;  // Stores all the preferences used by the controller
        NGN controller;  // This is the controller algorithm.
        int dof;  // The degrees of freedom of the robot being controlled
        KukaSendData ksd;  // This provides communication to the robot.
        protected DatagramSocket socketForVisionSystem = null;
        ImageFeaturesData ifd;  // This parses and stores data from the image system.
        double[] errorVector;  // This is what's acted on by the algorithm
        PrintWriter errorTrackerOut = null;  // For analysis of error vector
        public void run() {
            VSController vsc = new VSController();
            vsc.go();
        }
        public void go() {
            initWriters();
            cSettings = new ControllerSettings();
            // ...
        }
        public boolean isNoise() {
            boolean ret = false;
            double magnitude = 0;
            for (int i = 0; i < errorVector.length; i++) {
                magnitude += errorVector[i] * errorVector[i];
            }
            magnitude = Math.sqrt(magnitude);
            if (magnitude <= cSettings.noiseThreshold) {
                ret = true;
                System.out.println("VSController: magnitude (of errorVector) = " + magnitude +
                        ", threshold = " + cSettings.noiseThreshold); // Debug
            }
            return ret;
        }
        // ...
    Now here's my issue: I change the value for "noiseThreshold" in "ControllerSettings.java", then run make from the terminal (makefile code posted below) and rerun "Simulation". However, despite my changes to "ControllerSettings.java", the value for "noiseThreshold" is not changed, as evidenced by the output on the terminal screen:
    VSController: magnitude (of errorVector) = 6.085046125925263, threshold = 10.0
    See, that value of 10.0 is what I used to have for noiseThreshold. I do not know why this value does not update even though I save the java file and execute make in between executions of Simulation. I would love it if someone could explain this problem to me.
    Here are the contents of the makefile.
    JFLAGS = -cp ../Jama\-1\.0\.2.jar:../utils/:/usr/share/java/vecmath\-1\.5\.2.jar:.
    JC = javac
    .SUFFIXES: .java .class
    .java.class:
         $(JC) $(JFLAGS) $*.java
    CLASSES = \
         ControllerSettings.java \
         ImageFeaturesData.java \
         KukaKR15SL.java \
         ../utils/KukaSendData.java \
         NGN.java \
         Puma560.java \
         Robot.java \
         RobotSettings.java \
         Simulation.java \
         SimulationSettings.java \
         SixRRobot.java \
         Targets.java \
         TargetsSettings.java \
         ThreeDData.java \
         ThreeDSensor.java \
         VSController.java
    default: classes
    classes: $(CLASSES:.java=.class)
    clean:
         $(RM) *.class

    I saw this explanation about what's causing my problem.
    "When the Java compiler sees a reference to a final static primitive or String, it inserts the actual value of that constant into the class that uses it. If you then change the constant value in the defining class but don't recompile the using class, it will continue to use the old value."
    I verified that the value updates if I also change something in VSController.java, forcing it to recompile. I think I will solve this problem by just making the variables in ControllerSettings no longer final (and then recompiling VSController to make sure it takes effect!). Is there another solution? I saw intern(), but that seems to apply only to Strings.
    Thanks.
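    For reference, a minimal illustration of the constant inlining described above (class and constant names are hypothetical):
    // Constants.java
    class Constants {
        // A compile-time constant: javac copies the value into every class that uses it.
        static final double NOISE_THRESHOLD = 10.0;
    }
    // Consumer.java
    class Consumer {
        void check() {
            // Compiled as if it read the literal 10.0; recompiling only Constants.java
            // leaves this class holding the old value until it is recompiled too.
            System.out.println(Constants.NOISE_THRESHOLD);
        }
    }
    Dropping final, or initializing the field with a non-constant expression, stops the compiler from treating it as a compile-time constant, so using classes read the value at run time instead.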

  • ODI - IKM Oracle Multi Table Insert

    Hi All,
    I am new to ODI. I tried to use "IKM Oracle Multi Table Insert"; one interface generates a query like:
    insert  all
    when 1=1  then
    into BI.EMP_TOTAL_SAL
    (EMPNO, ENAME, JOB, MGR, HIREDATE, TOTAL_SAL, DEPTNO)
    values
    (C1_EMPNO, C2_ENAME, C3_JOB, C4_MGR, C5_HIREDATE, C6_TOTAL_SAL, C7_DEPTNO)
    select
    C1_EMPNO EMPNO,
    C2_ENAME ENAME,
    C3_JOB JOB,
    C4_MGR MGR,
    C5_HIREDATE HIREDATE,
    C6_TOTAL_SAL TOTAL_SAL,
    C7_DEPTNO DEPTNO
    from BI.C$_0EMP_TOTAL_SAL
    where (1=1)
    Because of the aliases, this insert fails. Could anyone please explain what exactly happens and how to control the query generation?
    Thanks & Regards
    M Thiyagu

    What David is asking is for you to go to Operator and review the failed task, copy the SQL and paste it up here, then run the SQL in your SQL client (Toad / SQL Developer) and try to ascertain what objects you are missing that are causing your SQL error.
    Have you followed the link posted above? Have you placed the interfaces in a package in the right order? Are you running the package as a whole or the interfaces individually? I don't think the individual interfaces will work with this IKM, as it's designed for one to feed the other.
    Please detail the steps you've taken, how many interfaces you have and what options you have chosen in the IKM options for each interface. It's tricky to diagnose your problem, and when you say "I can't understand what to do and how to do... So please give the step wise solution to do that interface.. or please give with an example.." it means a lot of people will ignore your post, as we can't see any evidence of you trying!
    P.S. I see you have resurrected a thread from 2009. 1) I don't think the multi-table insert KM was available with ODI at that time (10g). 2) The thread is answered / closed, so not many people will look at it. 3) Procedures should only really be used when you can't do it with an interface; you lose all the lovely lineage between objects that you get with an interface.
    Hope this helps. Please post your setup, your error and how you have configured the interfaces and package so far.

  • DbEnv under mod_perl 2.0/Apache 2.2 worker MPM (multi-threaded)

    I'd like to draw your attention to a thread I've opened on the mod_perl mailing list. I'm having difficulties figuring out how to initialize the Berkeley database environment on this particular platform. It may be that mod_perl is not the most suitable choice of platform for developing an application working on Berkeley DB, but still I'm curious to know whether or not this is at all possible, and if not, what are the reasons for this mismatch.
    Note that I'm actually using the Perl interface to Berkeley DB XML instead of plain Berkeley DB. However, George Feinberg, the DB XML lead developer (I believe), pointed out that the issues I'm seeing pertain to the setup of the database environment, and that once this has been safely accomplished, the DB XML calls are safe, too. That's why he advised me to bring this issue to the attention of the Berkeley DB forum, which I'm now doing. Here we go:
    Initializing Sleepycat::DbXml (Berkeley, Oracle) objects in startup.pl | ModPerl
    http://www.gossamer-threads.com/lists/modperl/modperl/98849
    It may be that a precise answer would require a lot of knowledge of the very particular mod_perl 2.0 multi-threaded platform and its intricacies, which I don't have myself and I know is not commonly found, as Perl has ceded a lot of ground to PHP and Java. But maybe there is someone here who can help.
    The fundamental issue here is the setup of the environment. There has to be (1) a thread of control doing recovery on startup, and then (2) any number of threads of control which may or may not share environment handles. How could this be accomplished in this particular environment?
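    For what it's worth, here is that startup step sketched with the Berkeley DB base Java API (the Perl flag names differ; paths and names here are assumptions):
    import java.io.File;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    public class StartupRecovery {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig conf = new EnvironmentConfig();
            conf.setAllowCreate(true);
            conf.setInitializeCache(true);
            conf.setInitializeLocking(true);
            conf.setInitializeLogging(true);
            conf.setTransactional(true);
            conf.setThreaded(true);
            conf.setRunRecovery(true);  // exactly one thread of control does this, once, at startup
            Environment env = new Environment(new File("envHome"), conf);
            // After recovery completes, worker threads can share this handle, or open
            // their own handles on the same directory without the recovery flag.
            env.close();
        }
    }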
    If you're interested in this issue, please follow the link to the mod_perl mailing list and see what's there. Note you can switch between flat and threaded view.
    Thanks.
    Michael Ludwig

    It's been a long time since I messed with this (I ended up using Tomcat standalone), so hopefully someone with more experience will reply to you after this post!
    But as I remember, it was set up using a wildcard *.jsp to pass JSP files off to Tomcat for processing. Apache cannot process JSP, which is why Tomcat is used as a plugin.
    The page showing the examples is HTML, whereas the examples themselves are JSP.
    Have you told Apache to pass all *.jsp files to Tomcat for PROCESSING?
    You are not simply telling Apache to open these files; rather, Apache tells Tomcat to open and process the files.
    Hope this makes sense and is of some use.

  • Is JDBC multi threaded

    Newbie to JDBC and XML DB here.
    Is it possible to create a Java application that has multiple threads inserting XML documents into 10g using the JDBC driver that comes with 10g?
    (Is JDBC multi-threaded?)

    Yes....
    Search OTN for the SAX loader example...
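    The usual pattern is one Connection per thread, since operations on a single shared connection are effectively serialized. A minimal sketch (connection string and table are hypothetical):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    public class ParallelXmlInsert {
        public static void main(String[] args) {
            for (int t = 0; t < 4; t++) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            // each thread gets its own connection; sharing one would serialize the inserts
                            Connection con = DriverManager.getConnection(
                                    "jdbc:oracle:thin:@db_host:1521:db_sid", "user", "password");
                            con.setAutoCommit(false);
                            PreparedStatement ps = con.prepareStatement(
                                    "insert into xml_docs (doc) values (XMLType(?))");
                            ps.setString(1, "<doc/>");
                            ps.executeUpdate();
                            con.commit();
                            ps.close();
                            con.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }).start();
            }
        }
    }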

  • Multi-thread serialization

    In order to benchmark my database I made an application that just calls some procedures (select only, no update or insert).
    The application instantiates 15 threads, each with a connection to my database.
    My problem is that my calls seem to be serialized and take a lot of time to return.
    But if I launch multiple instances of my application (each with only 1 thread) I don't have this problem.
    How can I avoid this problem?
    Thanks, Bernard

    Here's the test code I've been using for ad-hoc multi-threaded testing; it's a bit sloppy, but it works and I no longer have time to finish the project up right and pretty. At the risk of derision from the better Java programmers here, I post it and hereby dedicate this mess to the public domain. Enjoy...
    3 Java files.
    File 1:
    import java.util.ArrayList;
    public class DBTest {
        public static void main(String args[]) {
            TestHarness testSuite1;
            ArrayList testSet1 = new ArrayList();
            for (int i = 0; i < 2; i++) {
                testSet1.add(new Check1(2));
            }
            testSuite1 = new TestHarness(testSet1);
            testSuite1.go();
            testSuite1.report(System.out);
        }
    }
    class Check1 extends Thread {
        private long runLength;
        private int myThreadNumber;
        private static int threadNumber = 0;
        private static Object lockObj = new Object();
        public Check1(int _runLength) {
            runLength = _runLength;
            synchronized (lockObj) {
                myThreadNumber = threadNumber++;
            }
        }
        public Check1() {
            this(10);
        }
        public void run() {
            try {
                BusyDB workload = new BusyDB();
                workload.beBusy(runLength, myThreadNumber);
            }
            catch (InterruptedException e) {
                System.err.println("Interrupted !");
            }
            catch (Exception e) {
                e.printStackTrace(System.err);
            }
        }
        public String toString() {
            return "thread " + myThreadNumber + ": iterated for " + runLength + " iterations\n";
        }
    }
    File 2:
    import java.sql.*;
    public class BusyDB {
        private static long iterModulus = 1000;
        public void beBusy(long iterations, int threadID) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection con = null;
            PreparedStatement ps1;
            ResultSet rs1;
            int i = 0;
            try {
                while (i < iterations) {
                    con = DriverManager.getConnection("jdbc:oracle:thin:user/password@db_host:1521:db_sid");
                    con.setAutoCommit(false);
                    ps1 = con.prepareStatement("select sysdate from dual");
                    rs1 = ps1.executeQuery();
                    i++;
                    if (i % iterModulus == 0) {
                        System.out.println("Thread number " + threadID + ": " + iterModulus
                                + " more iterations " + i + " total iterations");
                        con.commit();
                    }
                    rs1.close();
                    if (con != null) con.close();
                }
            }
            catch (Exception e) {
                e.printStackTrace(System.out);
                i = i + 1;
            } finally {
                if (con != null) con.close();
            }
        }
    }
    File 3:
    import java.io.PrintStream;
    import java.util.Collection;
    import java.util.Iterator;
    public class TestHarness {
        private Collection subTests;
        private boolean wasInterrupted = false;
        private final long created = System.currentTimeMillis();
        private long start, end;
        private String testSuiteName;
        public TestHarness(Collection subTests) {
            this.subTests = subTests;
            this.testSuiteName = "anonymous";
        }
        public void go() {
            start = System.currentTimeMillis();
            System.out.println("Test suite " + testSuiteName + " started: " + start);
            Iterator i = subTests.iterator();
            while (i.hasNext()) ((Thread) i.next()).start();
            System.out.println("All subtests finished starting at: " + System.currentTimeMillis());
            i = subTests.iterator();
            try {
                while (i.hasNext()) {
                    ((Thread) i.next()).join();
                }
            }
            catch (InterruptedException e) {
                wasInterrupted = true;
            }
            end = System.currentTimeMillis();
            System.out.println("All subtests finished running at: " + end);
        }
        public void report(PrintStream out) {
            out.println("\nTest Suite: " + testSuiteName);
            out.println("Runtime millis: " + (end - start));
            Iterator i = subTests.iterator();
            while (i.hasNext()) out.print(i.next().toString());
            if (wasInterrupted) {
                out.println("======== ERROR ========");
                out.println("Test thread interrupted");
            }
        }
    }

  • Multi thread and deadlocks

    I am running a dataload from a text file to an MS SQL Server database. The jobs run in multi-threaded mode, and I am getting deadlock errors with the threads.
    Each thread is supposed to delete records.
    In single-threaded mode the log file order is as below:
    1. get product to be deleted A
    2. delete A
    3. get product to be deleted B
    4. delete B
    5. get product to be deleted C
    6. delete C
    In multi-threaded mode:
    1. get product to be deleted A
    2. get product to be deleted B
    3. get product to be deleted C
    4. get product to be deleted D
    5. delete A
    6. get product to be deleted E
    7. get product to be deleted F
    8. deadlock while deleting product B
    Any clues what is going wrong here?
    Here is the code for the delete method:
    ================================
    PreparedStatement prepStatement = null;
    System.out.println("get product to be deleted " + product);
    prepStatement = conn.prepareStatement("delete from MYTABLE where product = ? ");
    prepStatement.setString(1, product);
    prepStatement.executeUpdate();
    ================================
    Please note that only unique data is sent to each thread.
    Any help is greatly appreciated.
    Thanks

    Thanks for the responses. I need to update/insert new records after the delete operation is completed. Later, at the end of the DB changes, I close the connection.
    From my understanding, deadlocks happen when two threads try to touch the same record at the same time. But I am sending unique records to each thread. Unless there is something wrong with the query, it should always return unique values.
    This is the query that sends unique records to each thread in SQL Server:
    select distinct top ? columnA, columnB from MyTable where processed = 'FALSE' and columnB = ?
    This should be equivalent to the Oracle query:
    select distinct columnA, columnB from MyTable where columnB = ? and rownum > ? and rownum <= ?
    If TOP in SQL Server is equivalent to ROWNUM in Oracle, then I always get distinct records and there should be no deadlocks.
    Any help?
    Thanks
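    One caveat on the Oracle query quoted above: a bare rownum > ? predicate matches nothing, because ROWNUM is assigned to rows only as they are returned, so the usual pagination idiom nests the query. A sketch (table, column and bind names are hypothetical):
    public class PageQuery {
        // Bound rows with an inner ROWNUM filter, then apply the
        // lower bound to the aliased rn column.
        static final String PAGE_SQL =
              "select columnA, columnB from ("
            + "  select t.*, rownum rn from ("
            + "    select distinct columnA, columnB from MyTable where columnB = ?"
            + "  ) t where rownum <= ?"   // upper bound
            + ") where rn > ?";           // lower bound
    }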

  • IKM oracle multi table insert

    Hi Experts,
    How can I load data from a single source table into multiple target tables using IKM Oracle Multi Table Insert?
    Please help me with an example.
    Regards

    What David is asking is for you to go to Operator and review the failed task, copy the SQL and paste it up here, then run the SQL in your SQL client (Toad / SQL Developer) and try to ascertain what objects you are missing that are causing your SQL error.
    Have you followed the link posted above? Have you placed the interfaces in a package in the right order? Are you running the package as a whole or the interfaces individually? I don't think the individual interfaces will work with this IKM, as it's designed for one to feed the other.
    Please detail the steps you've taken, how many interfaces you have and what options you have chosen in the IKM options for each interface. It's tricky to diagnose your problem, and when you say "I can't understand what to do and how to do... So please give the step wise solution to do that interface.. or please give with an example.." it means a lot of people will ignore your post, as we can't see any evidence of you trying!
    P.S. I see you have resurrected a thread from 2009. 1) I don't think the multi-table insert KM was available with ODI at that time (10g). 2) The thread is answered / closed, so not many people will look at it. 3) Procedures should only really be used when you can't do it with an interface; you lose all the lovely lineage between objects that you get with an interface.
    Hope this helps. Please post your setup, your error and how you have configured the interfaces and package so far.

  • Multiple Containers read/write access in multi-threaded environment

    I've been reading the Berkeley DB XML Transaction Processing document and have a few questions. I have a multi-threaded environment (not multi-process).
    1) In the "Summary and Examples" section, there is an example of several worker threads that perform many writes to a shared container. The one container is passed to all the worker threads, rather than each having its own instance of the container. In my application, I could potentially have multiple worker threads acting on the same container; is it better to provide some type of connection pool, so each thread has access to the same container? Is there a problem with each thread creating a new instance of the container and writing to that? Are there pros and cons to each approach?
    2) Some specific web applications only require read access and have no intention of writing to the database. If the environment was created with transactions, can these applications open the environment in read-only mode, so as to not suffer the performance penalty of using transactions? Or do they still require transaction support, since other threads in other applications might be writing to it? Is it better in this case to
    3) Is there any decent connection-pooling framework that works well with BDB XML? If transactions are involved, it seems pretty important to always close the container once you are finished with it, which kind of defeats the purpose of pooling. I see a container as somewhat analogous to a JDBC connection, but where in connection pooling the connections remain open, in BDB XML that doesn't seem like good practice. So clarification here would be appreciated.
    Thanks in advance...
    Chris

    Chris,
    I'd be careful about trying to draw too many analogies with other sorts of systems. In many ways BDB XML is simpler -- it's just a library, no server process.
    An open container is analogous to an open file handle/descriptor in your operating system. It can safely be shared by all threads within a given process. In fact, you should not create more than one -- while that works, it can just be confusing and a source of errors (for example, if you use a different path to open the container each time, you'll have problems). So by all means share them. Most Java applications just put them in a class that is accessible to all threads.
    Other objects that can, and should be shared among threads include:
    Environment
    XmlManager
    XmlQueryExpression
    Most others should not. You also mention closing your containers -- there is no reason to close your active container objects until your application needs to shut down. A long-running application will want to use db_checkpoint (or equivalent) to ensure that modified pages are flushed from the transaction log to the database files, but that's about it.
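    For instance, a minimal sketch of that sharing (paths, names, and the exact config calls are assumptions made for illustration):
    import java.io.File;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.dbxml.XmlContainer;
    import com.sleepycat.dbxml.XmlContainerConfig;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlManagerConfig;
    public class SharedHandles {
        // one environment, manager, and container per process, shared by all threads
        static Environment env;
        static XmlManager mgr;
        static XmlContainer container;
        public static void init() throws Exception {
            EnvironmentConfig conf = new EnvironmentConfig();
            conf.setAllowCreate(true);
            conf.setInitializeCache(true);
            conf.setInitializeLocking(true);
            conf.setInitializeLogging(true);
            conf.setTransactional(true);
            conf.setThreaded(true);
            env = new Environment(new File("envHome"), conf);
            mgr = new XmlManager(env, XmlManagerConfig.DEFAULT);
            XmlContainerConfig cconf = new XmlContainerConfig();
            cconf.setTransactional(true);
            container = mgr.openContainer("docs.dbxml", cconf);  // opened once, reused by every thread
        }
    }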
    As for concurrency, you obviously need transactions for concurrent write operations. If you want concurrent read access to the same containers, they should use transactions as well. Even if you don't explicitly use transactions, locks are always taken on pages in transactional containers. You cannot open the same container transactionally and not transactionally at the same time -- bad things could happen.
    Depending on your performance needs, you could try using snapshot concurrency and see how it works for you. While locking does incur some overhead, it may be acceptable.
    You mention creating your own "snapshot" of a container and running read-only on that copy. That's possible to do, but you have to be careful of how you create the snapshot (following hot backup procedures). Also, if you intend to open that "new" container in the same environment or application as the old one, you need to run a program on it to change its internal identification so Berkeley DB won't think it's the same file. That is, you can't just copy a container file to a new name and just open it like it's a new container if it's still in the vicinity of the original. See the -r option for the db_load program.
    Now that you are probably really confused, good luck!
    Regards,
    George

  • SSRS - Is there a multi thread safe way of displaying information from a DataSet in a Report Header?

    In order to dynamically display data in the report header based on the current record of the dataset, we started using shared variables. We initially used ReportItems!SomeTextbox.Value, but we noticed that when SomeTextbox was not rendered in the body (usually because a comment section grew to occupy most of the page, if not more than one page), the ReportItem printed a blank/null value.
    So, a method was defined in the Code section of the report that would set the value to the shared variable:
    Public Shared Params As String
    Public Shared Function SetValues(Param As String) As String
        Params = Param
        Return Params
    End Function
    Which would be called in the detail section of the tablix, then in the header a textbox would hold the following expression:
    =Code.Params
    This worked beautifully, since it no longer mattered that the body section didn't have the SetValues call; the variable persisted and the header displayed the correct value. Our problem now is that when the report is called in different threads with different data, the variable, being shared/static, gets modified by all the reports running at the same time.
    So far I've tried several things:
    - The variables need to be shared; otherwise the value set in the body can't be seen by the header.
    - Using Hashtables behaves exactly like the ReportItems option.
    - Using a C# DLL with non-static variables to take care of this didn't work, because apparently when the DLL is called from the body it gets a different instance of the DLL than when it's called from the header.
    So is there a way to deal with this issue in a multi-thread-safe way?
    Thanks in advance!
     

    Hi Angel,
    Per my understanding, you want to dynamically display the group data in the report header: you have set a page break based on the group, so when you click through to the next page the report header should change according to the value in the group, and when you use shared variables you hit the multi-thread safety problem, right?
    I have tested in my local environment and can reproduce the issue. Given the multi-thread safety problem, the better way is to use a hashtable in the custom code. You mentioned that you tried to use a hashtable but got the same result as using ReportItems!TextBox.Value; that can be caused by logic in the code that does not work correctly.
    Please refer to the custom code below, which works fine and gets all the expected values displayed on every page:
    Shared ht As System.Collections.Hashtable = New System.Collections.Hashtable
    Public Function SetGroupHeader(ByVal group As Object _
            , ByRef groupName As String _
            , ByRef userID As String) As String
        Dim key As String = groupName & userID
        If Not group Is Nothing Then
            Dim g As String = CType(group, String)
            If Not (ht.ContainsKey(key)) Then
                ' must be the first pass so set the current group to group
                ht.Add(key, g)
            Else
                If Not (ht(key).Equals(g)) Then
                    ht(key) = g
                End If
            End If
        End If
        Return ht(key)
    End Function
    Use this expression in the textbox of the report header:
    =Code.SetGroupHeader(ReportItems!Language.Value,"GroupName", User!UserID)
    Links below about the hashtable and the multi-thread safety problem, for your reference:
    http://stackoverflow.com/questions/2067537/ssrs-code-shared-variables-and-simultaneous-report-execution
    http://sqlserverbiblog.wordpress.com/2011/10/10/using-custom-code-functions-in-reporting-services-reports/
    If you still have any problem, please feel free to ask.
    Regards
    Vicky Liu

  • Memory leaks and multi threading issues in managed client.

    In our company we use a lot of Oracle, and after the release of the managed provider we migrated all applications to it. First the  things were very impressive : the new client was faster, but after some days applications that uses 100MB with old client goes to 1GB and up. The memory is not the only issue, we use a lot of multi threading, and we experience connection drops and not disposal, after 1 days working one of the application had over 100 sessions on the server. I think there is something wrong with connection pool and multi threading.
    Has anyone experienced the same problems?
    Yesterday we went back to the unmanaged provider. Now things are back to normal.

    Connection drops: did you try the "Validate Connection=true" parameter in your connection string?
    "The new client was faster": are you sure about that statement, even in a 64-bit environment? I ran into quite serious performance problems when running an application as a 64-bit process: https://forums.oracle.com/thread/2595323
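    For reference, the attribute goes straight into the ODP.NET connection string. A minimal sketch (the user, password, and data source are placeholders, not from the original post):

    User Id=scott;Password=tiger;Data Source=orcl;Validate Connection=true

    Validate Connection makes the pool verify each connection before handing it out; it costs an extra round trip per checkout, but it weeds out connections the server has already dropped.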

  • How to write a multi threaded Cache Event Listener

    I have a distributed data cache called tokenCache for my application. I have also added a MapListener to this cache to listen for a particular kind of event.
    tokenCache.addMapListener((MapListener) new TokenCacheListenerBean(), new MapEventFilter(tokenFilter), false);
    So basically, every time a token (the domain object of this cache) is updated, the entryUpdated() method in my TokenCacheListenerBean EJB is invoked.
    The issue, from what I observe when running my code, is that the cache listener is single-threaded. If two Token objects in my tokenCache are updated,
    let's say Token A and Token B one after the other, the entryUpdated() method in my EJB is invoked for Token A, and only once that invocation is complete
    is entryUpdated() invoked again for Token B. At any given point in time there is only one instance of the TokenCacheListenerBean EJB. Is there a way to
    make this happen in a multi-threaded manner?
    Is there a configuration setting somewhere that allows multiple cache listeners to be active at a given point in time?
    TokenCacheListenerBean EJB:
    package oracle.communications.activation.asap.ace;

    import java.util.Iterator;
    import java.util.Set;
    import java.util.logging.Logger;

    import javax.ejb.Stateless;

    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    import com.tangosol.util.ValueUpdater;
    import com.tangosol.util.extractor.PofExtractor;
    import com.tangosol.util.extractor.PofUpdater;
    import com.tangosol.util.filter.EqualsFilter;
    import com.tangosol.util.filter.LikeFilter;
    import com.tangosol.util.filter.LimitFilter;
    import com.tangosol.util.processor.UpdaterProcessor;

    /**
     * Session Bean implementation class TokenCacheListenerBean.
     */
    @Stateless
    public class TokenCacheListenerBean implements TokenCacheListenerBeanRemote, TokenCacheListenerBeanLocal, MapListener {

        NamedCache asdlCache;
        NamedCache tokenCache;

        private final int PAGE_SIZE = 1;

        private static Logger logger = Logger.getLogger(ConnectionManager.class.getName());

        /** An instance of the JCAModeler EJB; represents the JCA-JNEP. */
        JCAModeler jcaBean;

        /** Default constructor. */
        public TokenCacheListenerBean() {
        }

        public void entryDeleted(MapEvent event) {
        }

        public void entryInserted(MapEvent event) {
        }

        public void entryUpdated(MapEvent event) {
            Token newToken = (Token) event.getNewValue();
            Token oldToken = (Token) event.getOldValue();
            // React only to a RESERVED -> AVAILABLE state transition.
            if ((oldToken.getState() == Token.TOKEN_RESERVED)
                    && (newToken.getState() == Token.TOKEN_AVAILABLE)) {
                String networkID = newToken.getNeID();
                asdlCache = AceCacheFactory.getCache("asdlCache");
                tokenCache = AceCacheFactory.getCache("tokenCache");
                // Select at most PAGE_SIZE ASDLs that belong to this network element.
                EqualsFilter filterNE = new EqualsFilter(new PofExtractor(String.class, Asdl.NETWORKID), networkID);
                LimitFilter limitFilter = new LimitFilter(filterNE, PAGE_SIZE);
                Set removeASDL = asdlCache.keySet(limitFilter);
                Iterator asdlIterator = removeASDL.iterator();
                if (asdlIterator.hasNext()) {
                    logger.info(printASDLCache());
                    // Flip the token back to RESERVED before provisioning.
                    ValueUpdater updater = new PofUpdater(Token.STATE);
                    System.out.println("Token ID:" + newToken.getTokenID());
                    UpdaterProcessor updaterProcessor = new UpdaterProcessor(updater, Integer.toString(Token.TOKEN_RESERVED));
                    tokenCache.invoke(newToken.getTokenID(), updaterProcessor);
                    jcaBean = new JCAModeler(tokenCache);
                    Object asdlID = asdlIterator.next();
                    Asdl provisionAsdl = (Asdl) asdlCache.get(asdlID);
                    asdlCache.remove(asdlID);
                    jcaBean.provision(provisionAsdl, newToken.getTokenID());
                    logger.info(ConnectionManager.printTokenCache());
                    logger.info(printASDLCache());
                }
            }
        }
    }

    Here is what I am asking!
    I have added two listeners (Listener A and Listener B), each listening for changes made to a different token cache object (Token A and Token B):
    for (int i = 0; i < 2; i++) {
        Token tokenAdded = new Token(UUID.randomUUID().toString(), TOKEN_AVAILABLE, networkID);
        tokenCache.put(tokenAdded.getTokenID(), tokenAdded);
        // Register a key-based listener that fires only for this token's key.
        tokenCache.addMapListener((MapListener) new TokenCacheListener(), tokenAdded.getTokenID(), false);
    }
    Now assume that updates are made to Token A and Token B simultaneously.
    Why do I observe in my diagnostic messages that only one listener is invoked at a given point in time?
    That is, I see Listener A being invoked, and only once the invocation of Listener A is complete do I see Listener B being invoked.
    Ideally I would want both listeners to be invoked simultaneously rather than one after the other.
    Here is the code for my token cache listener:
    package oracle.communications.activation.asap.ace;

    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.AbstractMapListener;
    import com.tangosol.util.Filter;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    import com.tangosol.util.ObservableMap;
    import com.tangosol.util.ValueUpdater;
    import com.tangosol.util.extractor.PofExtractor;
    import com.tangosol.util.extractor.PofUpdater;
    import com.tangosol.util.filter.AndFilter;
    import com.tangosol.util.filter.EqualsFilter;
    import com.tangosol.util.filter.LikeFilter;
    import com.tangosol.util.filter.LimitFilter;
    import com.tangosol.util.processor.UpdaterProcessor;

    public class TokenCacheListener extends AbstractMapListener {

        NamedCache asdlCache;
        NamedCache tokenCache;

        AceCacheFactory cacheFactoryBean = new AceCacheFactory();

        private final int PAGE_SIZE = 1;

        private static Logger logger = Logger.getLogger(ConnectionManager.class.getName());

        /** An instance of the JCAModeler EJB; represents the JCA-JNEP. */
        JCAModeler jcaBean;

        /** Utility method that prints the token cache. */
        public String printTokenCache() {
            NamedCache tokenCache = cacheFactoryBean.getCache("tokenCache");
            LikeFilter tokenList = new LikeFilter(new PofExtractor(String.class, Token.STATE), "%", (char) 0, false);
            Set keySet = tokenCache.keySet(tokenList);
            StringBuffer cachedTokenList = new StringBuffer("\n################################## Token(s) Cache ##################################");
            int counter = 1;
            for (Object tokenInCache : keySet) {
                Token tokenObject = (Token) tokenCache.get(tokenInCache.toString());
                cachedTokenList.append("\nS.NO:" + (counter++)
                        + "\t ID:" + tokenInCache.toString()
                        + "\t State:" + Token.tokenToString(tokenObject.getState()));
            }
            cachedTokenList.append("\n####################################################################################");
            return cachedTokenList.toString();
        }

        /** Utility method that prints all the ASDL(s) currently present in the asdlCache. */
        private String printASDLCache() {
            NamedCache asdlCache = cacheFactoryBean.getCache("asdlCache");
            LikeFilter asdlList = new LikeFilter(new PofExtractor(String.class, Asdl.NETWORKID), "%", (char) 0, false);
            Set keySet = asdlCache.keySet(asdlList);
            StringBuffer cachedASDLList = new StringBuffer("\n################ ASDL Cache ######## ########");
            int counter = 1;
            for (Object asdlInCache : keySet) {
                cachedASDLList.append("\nS.NO:" + (counter++) + "\t ID:" + asdlInCache.toString());
            }
            cachedASDLList.append("\n################ ASDL Cache ######## ########\n");
            return cachedASDLList.toString();
        }

        public TokenCacheListener() {
        }

        public void checkASDLCache(MapEvent event) {
            // Not currently used.
        }

        public void entryUpdated(MapEvent event) {
            Token newToken = (Token) event.getNewValue();
            Token oldToken = (Token) event.getOldValue();
            logger.info("\n=============================================================================================="
                    + "\nTOKEN CACHE LISTENER"
                    + "\n=============================================================================================="
                    + printTokenCache()
                    + "\n==============================================================================================");
            // React only to a RESERVED -> AVAILABLE state transition.
            if ((oldToken.getState() == Token.TOKEN_RESERVED)
                    && (newToken.getState() == Token.TOKEN_AVAILABLE)) {
                String networkID = newToken.getNeID();
                asdlCache = cacheFactoryBean.getCache("asdlCache");
                tokenCache = cacheFactoryBean.getCache("tokenCache");
                EqualsFilter filterNE = new EqualsFilter(new PofExtractor(String.class, Asdl.NETWORKID), networkID);
                LimitFilter limitFilter = new LimitFilter(filterNE, PAGE_SIZE);
                Set removeASDL = asdlCache.keySet(limitFilter);
                Iterator asdlIterator = removeASDL.iterator();
                if (asdlIterator.hasNext()) {
                    logger.info(printASDLCache());
                    ValueUpdater updater = new PofUpdater(Token.STATE);
                    System.out.println("Token ID:" + newToken.getTokenID());
                    UpdaterProcessor updaterProcessor = new UpdaterProcessor(updater, Integer.toString(Token.TOKEN_RESERVED));
                    tokenCache.invoke(newToken.getTokenID(), updaterProcessor);
                    jcaBean = new JCAModeler(tokenCache);
                    Object asdlID = asdlIterator.next();
                    Asdl provisionAsdl = (Asdl) asdlCache.get(asdlID);
                    asdlCache.remove(asdlID);
                    jcaBean.provision(provisionAsdl, newToken.getTokenID());
                    logger.info(printTokenCache());
                    logger.info(printASDLCache());
                }
            }
        }
    }
    I only see one instance of this listener alive at any given point in time.
    Edited by: 807103 on Nov 3, 2011 1:00 PM
    Edited by: 807103 on Nov 3, 2011 1:12 PM
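    One pattern that may help here: Coherence delivers events to a listener serially on an event dispatcher thread, so any long-running work inside entryUpdated() delays the next event. Handing each event off to a worker pool frees the dispatcher immediately and lets two updates be processed in parallel. This is a minimal sketch, not the original code: the pool size of 4 and the handleUpdate() helper are illustrative, and the token provisioning logic from the listing above would move into the helper.

    package oracle.communications.activation.asap.ace;

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import com.tangosol.util.AbstractMapListener;
    import com.tangosol.util.MapEvent;

    public class AsyncTokenCacheListener extends AbstractMapListener {

        // Shared worker pool; 4 threads is an arbitrary starting point to tune.
        private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

        public void entryUpdated(final MapEvent event) {
            // Return to the event dispatcher immediately; the real work
            // (e.g. the TOKEN_RESERVED -> TOKEN_AVAILABLE provisioning)
            // runs on a pool thread.
            POOL.submit(new Runnable() {
                public void run() {
                    handleUpdate(event);
                }
            });
        }

        private void handleUpdate(MapEvent event) {
            // Move the body of the original entryUpdated() here.
        }
    }

    One trade-off to note: the serial delivery observed above is also an ordering guarantee. Once events are fanned out to a pool, two updates to the same token can be processed out of order, so the handler must be written to tolerate that.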
