Physical Oracle connections consume memory continuously

I am running Oracle Database 10.2.0.1 on a Solaris x86 64-bit system. We have recently stumbled upon an issue where the memory footprint of the physical Oracle DB connections continues to grow over time. If you do a 'ps' you'll see:
oracle 13074 1 0 Nov 27 ? 0:00 oracleAMS (LOCAL=NO)
oracle 13076 1 1 Nov 27 ? 146:20 oracleAMS (LOCAL=NO)
oracle 13459 1 1 Nov 27 ? 144:39 oracleAMS (LOCAL=NO)
oracle 13463 1 1 Nov 27 ? 144:22 oracleAMS (LOCAL=NO)
oracle 13457 1 0 Nov 27 ? 0:00 oracleAMS (LOCAL=NO)
oracle 19847 1 0 Nov 11 ? 0:00 oracleAMS (LOCAL=NO)
oracle 13088 1 1 Nov 27 ? 145:52 oracleAMS (LOCAL=NO)
oracle 19925 1 0 Nov 11 ? 0:19 oracleAMS (LOCAL=NO)
oracle 13461 1 1 Nov 27 ? 144:43 oracleAMS (LOCAL=NO)
The connections that seem to be taking up the most memory also have a large value in the 'TIME' column of the output from 'ps'.
Output from top:
13463 oracle 11 49 0 2646M 2561M sleep 144:22 0.46% oracle
13465 oracle 11 59 0 2646M 2561M sleep 144:35 0.27% oracle
13461 oracle 11 59 0 2645M 2560M sleep 144:43 0.74% oracle
13459 oracle 11 49 0 2645M 2560M sleep 144:40 0.59% oracle
13088 oracle 11 59 0 2645M 2560M cpu/0 145:52 0.40% oracle
13076 oracle 11 59 0 2645M 2560M sleep 146:20 0.28% oracle
13090 oracle 11 59 0 2645M 2559M sleep 143:47 0.53% oracle
Notice in the SIZE column that these connections are consuming ~2.5 GB of RAM each, even though they start out around 800 MB.
We've looked through our application code and have been unable to find any glaring coding issues (e.g. failure to close a PreparedStatement or ResultSet). Does anyone know if there is/was a bug in the version of Oracle we're using that would cause this to happen? Has anyone else seen this issue?
Any feedback is greatly appreciated!

Generally, an idle connection/session (a session can have more than one connection) uses about 0.5 MB of memory to hold session-related information; beyond that, memory consumption depends on what your sessions are doing (SQL queries). One memory parameter you might want to look at and investigate is pga_aggregate_target. The following query might give you some more insight.
SELECT vses.username || ':' || vsst.SID || ',' || vses.serial# username,
       vstt.NAME, MAX(vsst.VALUE) VALUE
  FROM v$sesstat vsst, v$statname vstt, v$session vses
 WHERE vstt.statistic# = vsst.statistic#
   AND vsst.SID = vses.SID
   AND vstt.NAME IN ('session pga memory', 'session pga memory max',
                     'session uga memory', 'session uga memory max',
                     'session cursor cache count', 'session cursor cache hits',
                     'session stored procedure space', 'opened cursors current',
                     'opened cursors cumulative')
   AND vses.username IS NOT NULL
 GROUP BY GROUPING SETS (vses.username), vsst.SID, vses.serial#, vstt.NAME
 ORDER BY vses.username, vsst.SID, vses.serial#, vstt.NAME;
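You can also tie these figures back to the OS PIDs from 'ps' by joining v$process (its SPID column is the OS process id) to v$session; a minimal sketch using the standard 10g dynamic views:
-- PGA memory per server process, keyed by OS PID
SELECT p.spid, s.username, s.sid, s.serial#,
       ROUND(p.pga_used_mem  / 1024 / 1024) pga_used_mb,   -- PGA currently in use
       ROUND(p.pga_alloc_mem / 1024 / 1024) pga_alloc_mb,  -- PGA currently allocated
       ROUND(p.pga_max_mem   / 1024 / 1024) pga_max_mb     -- high-water mark
  FROM v$process p, v$session s
 WHERE s.paddr = p.addr
 ORDER BY p.pga_max_mem DESC;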

Similar Messages

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load an Oracle Text index into memory: http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application, on Oracle 12c, we are indexing a big XML field (stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the $I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks from the TOKEN_INFO column are not in memory, performance can fall sharply).
    But after migrating to Oracle 12c I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the $I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory. Not sure if it is a bug or not.
    What I found as a work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the TOKEN_INFO column will be stored in a SecureFile. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it gets loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure that reads the LOB and loads it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the $S table) and an IOT on top of it. This is not documented in the white paper (which was written for Oracle 10g). In my case this $S table is heavily used, and the IOT too, but putting them in the keep cache is not as important as the TOKEN_INFO column of the $I table. A final note: setting SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example of how to reproduce the problem of the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
      (ID NUMBER(9,0) NOT NULL ENABLE,
       XML_DATA XMLTYPE)
      XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says:
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size; it says:
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576 KB to 8.44 MB. With a big index the relative difference is not as big, but it still went from 14 GB to 19 GB.
    6. Workaround: use the BIG_IO option, so that the TOKEN_INFO column of the $I table will be stored in a SecureFile and the size will stay relatively small. Then you can load this column into the cache using commands and a procedure similar to the following:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the LOB so that it will be loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);    -- dbms_lob.read on a BLOB needs a RAW buffer
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               -- touch a few bytes of every 4K chunk so the whole LOB
               -- ends up in the keep buffer cache
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               -- reading past the end of the LOB terminates the inner loop
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
    end;
    /
    Rgds, Pierre

    I have been working a lot on this issue recently, so I can give some more info.
    First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for it.
    What kind of performance do you get with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table in fact) is the key: if it is not, then performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do. MongoDB explicitly says that the index must be in memory. Elasticsearch uses JVMs that are also in memory. And effectively, if you look at an AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is executed continuously.
    I think that the algorithm used by Oracle to decide which blocks to keep in the cache is too complex. I just realized that 12.1.0.2 (released last week) finally has a "killer" feature, the In-Memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy, but this was closed as "not a bug" and I can't do anything about it. It is a bug in my opinion, because the CREATE INDEX command and the ALTER INDEX REBUILD command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
    And for that, the track I have been following is to put the index in a 16K-block tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because R. Ford's trick was not working anymore.
    What worked:
    First set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preferences to make sure that everything you want to cache (mostly the DR$I table) ends up in the tablespace with the non-standard block size of 16K.
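    A minimal sketch of that setup (the tablespace name TS_CTX_16K and the sizes are made up; the last step mirrors the I_TABLE_CLAUSE attribute used earlier):
    -- carve out a buffer cache for the non-standard 16K block size
    alter system set db_16k_cache_size = 4G scope=both;
    -- 16K-block tablespace for the Text index tables (assumes OMF, i.e. db_create_file_dest is set)
    create tablespace TS_CTX_16K datafile size 10G blocksize 16384;
    -- route the $I table of the Text index to that tablespace
    exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace TS_CTX_16K');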
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which, sorry for the missing credit).
    I ended up doing the following; setting event 10949 avoids the direct path read issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is very much exactly what is described in Metalink note 1645634.1, but for a non-partitioned index. The work-around given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a work-around, but I did not find it on Metalink.
    Other points of attention with text index creation (stuff that surprised me at first!):
    - if you use the dbms_pclxutil package, then ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_job.
    - this, in combination with the fact that on a RAC you may not see any activity on the box, can be very frightening: Oracle can choose to start the workers on the other node.
    I understand much better now how text indexing works; I think it is a great technology which can scale via partitioning. But as always the design of the application is crucial: most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...); a sketch of that alternative follows.
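    For illustration, a minimal sketch of such a section group (the group name TEST_XSG and the section/tag names are hypothetical):
    begin
      -- section group driven by the XML structure instead of paths
      ctx_ddl.create_section_group('TEST_XSG', 'XML_SECTION_GROUP');
      -- MDATA section for exact-match structured values inside the XML
      ctx_ddl.add_mdata_section(group_name   => 'TEST_XSG',
                                section_name => 'author',
                                tag          => 'Author');
    end;
    /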
    Regards, Pierre

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure was running on Windows Server OS and providing the database access via a Named ODBC connection (eg. "APP_DATA".)
    This made it easy to manage as all the Report Developers had a standard System DSN called "APP_DATA" which was the same as the System DSN name on all of our DEV, TEST/UAT, and PROD servers for Business Objects.
    When we wanted to move/promote a *.rpt file from DEV to PROD we did not have to change any "Database Connection" info, as it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now, that hardware is moving from Windows OS to Red Hat Linux and we are trying to determine the Best Practices (and Pros/Cons) of using one of the three methods below to access the Oracle database for our *.rpts....
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) Oracle Native connection should be the most efficient method of passing SQL-query to the DB with the fewest issues and best speed [PRO]
    1b.) Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle Native would require special-handling on the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    2a.) A 3rd-Party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to use a similar Developer / Admin overhead to what we are used to. [PRO]
    2b.) Adding a 3rd-party vendor into the mix may lead to support issues if we have problems with the results or speed of our queries. [CON]
    3a.) JDBC appears to be the "de facto standard" when running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with results or speeds of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit-IP of the Oracle server to be defined for each connection. This would require special-handling on the *.rpts at either the source-file (and NOT the CMC level) to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and easy-overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some information.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server for a specific name that will be the same across all environments.
    The database name will be resolved differently depending on the environment and will therefore point to a different database.
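    For example, a sketch of tnsnames.ora entries (host and service names are made up) where the same alias resolves to a different database in each environment:
    # on the DEV server:
    APP_DATA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dev-db.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = appdev)))
    # on the PROD server, same alias, different target:
    APP_DATA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = prod-db.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = appprod)))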
    The second option is to change the connection in .rpt files in an automated way with the Schedule Manager. This tool is an additional web application to deploy that can change the connection settings on thousands of rpt reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; for this purpose, a few lines of code can change all the reports in one go.
    After some implementations on Linux against Oracle databases, I would also prefer the native connection. ODBC and JDBC are deprecated ways to connect to the database. You can use DataDirect connectors, which are quite good, but with large volumes you will see the difference.

  • Oracle.exe consuming 100% CPU on Windows and database hang

    Hi all,
    every time the application runs, my Oracle database hangs: oracle.exe consumes 100% CPU (but not memory), the server hangs, and the database becomes inaccessible. We need to restart the Oracle instance service or the server to bring the database back to normal, but that is no permanent fix because the problem recurs as soon as the application is turned on.
    Checking the log file I found the below error every time:
    My database version is 9.2.0.7.0
    OS: Windows 2003 Server Standard Edition Service Pack 2
    RAM: 3.5 GB
    CPU: Intel Xeon 3.20 GHz
    ORA-00600: internal error code, arguments: [kghuclientasp_03], [0xBFEADCE0], [0], [0], [0], [], [], []
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-29400: data cartridge error
    KUP-04050: error while attempting to allocate 163500 bytes of memory
    ORA-06512: at "SYS.ORACLE_LOADER", line 14
    ORA-06512: at line 1
    Fri Mar 05 05:35:15 2010
    Errors in file e:\oracle\admin\optprod\udump\optprod_ora_5876.trc:
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-04030: out of process memory when trying to allocate 8389132 bytes (pga heap,redo read buffer)
    ORA-04030: out of process memory when trying to allocate 8389132 bytes (pga heap,redo read buffer)
    ORA-04030: out of process memory when trying to allocate 8180 bytes (callheap,kcbtmal allocation)
    Thank you
    Lucienot.

    Is this a new application on this database?
    Has it run well in the past?
    I have had this happen before on a 32-bit Windows server. Our problem was a poorly written procedure that kept pegging the CPU at 100%. You should be able to figure out what SQL is being used that is causing this problem; it will most likely be the top working SQL.
    I also had this problem on a Logical Standby server which was trying to apply SQL to the SYS.AUD$ table. As soon as SQL Apply was started, the CPU went to 100%. Once I truncated that table, CPU usage went back to normal. Not sure what you are using to monitor your database, but if you can, try to find out what SQL is running when your CPU goes to 100%; see the sketch below.
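    A minimal sketch of such a check, which should work on 9.2 (CPU_TIME in v$sqlarea is in microseconds; treat this as a starting point, not a definitive diagnostic):
    -- sessions and their current statements, heaviest CPU consumers first
    SELECT s.sid, s.username, q.cpu_time, q.executions, q.sql_text
      FROM v$session s, v$sqlarea q
     WHERE q.address    = s.sql_address
       AND q.hash_value = s.sql_hash_value
     ORDER BY q.cpu_time DESC;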

  • CSV to Oracle - (Integration) fails on the target ORACLE connection

    Hi,
    I am trying to load data from a csv file into Oracle db via ODI.
    Here are the steps I followed:
    1- Created the FILE physical schema
    2- Created the Oracle data server and physical schema in Physical Architecture
    3- Created and linked the logical schemas
    4- Created the corresponding data models and stores, and am able to view the data contents of the CSV and of the target table in ODI
    5- Created the interface, added the CSV as source and the Oracle table as target
    6- Used "LKM File to SQL" and "IKM SQL Control Append" (FLOW_CONTROL is false)
    When I execute the interface, the session starts, then I receive the following error in Operator:
    ODI-1228: Task ARIBA_G1 (Integration) fails on the target ORACLE connection DEV_DW.
    Caused By: java.sql.SQLException: Non supported SQL92 token at position: 116
    at oracle.jdbc.driver.OracleSql.handleODBC(OracleSql.java:1319)
    at oracle.jdbc.driver.OracleSql.parse(OracleSql.java:1190)
    at oracle.jdbc.driver.OracleSql.getSql(OracleSql.java:341)
    at oracle.jdbc.driver.OracleSql.getSqlBytes(OracleSql.java:649)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1079)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1466)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3752)
    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3937)
    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1535)
    at oracle.odi.runtime.agent.execution.sql.SQLCommand.execute(SQLCommand.java:163)
    at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:102)
    at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:1)
    at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2906)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2609)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:537)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:453)
    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1740)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:338)
    at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:214)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:272)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:263)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:822)
    at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:123)
    at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
    at java.lang.Thread.run(Thread.java:662)
    This is the Code section:
    BeanShell script error: Sourced file: inline evaluation of: ``if ( odiRef.getUserExit("FLOW_CONTROL").equals("1") ) { out.print(" \ninsert int . . . '' : Typed variable declaration : Error in method invocation: Method getDataSetMin() not found in class'com.sunopsis.dwg.snpreference.SnpReferenceInterne' : at Line: 25 : in file: inline evaluation of: ``if ( odiRef.getUserExit("FLOW_CONTROL").equals("1") ) { out.print(" \ninsert int . . . '' : odiRef .getDataSetMin ( )
    BSF info: Insert new rows at line: 0 column: columnNo
    if ( odiRef.getUserExit("FLOW_CONTROL").equals("1") ) { out.print(" \ninsert into\t") ;
    out.print(odiRef.getTable("L","TARG_NAME","A")) ;
    out.print(" \n( \n\t") ;
    out.print(odiRef.getColList("", "[COL_NAME]", ",\\n\\t", "", "((INS and !TRG) and REW)")) ;
    out.print(" \n\t") ;
    out.print(odiRef.getColList(",", "[COL_NAME]", ",\\n\\t", "", "((INS and TRG) and REW)")) ;
    out.print(" \n) \nselect\t") ;
    out.print(odiRef.getColList("", "[COL_NAME]", ",\\n\\t", "", "((INS and !TRG) and REW)")) ;
    out.print(" \n\t") ;
    out.print(odiRef.getColList(",", "[EXPRESSION]", ",\\n\\t", "", "((INS and TRG) and REW)")) ;
    out.print(" \nfrom\t") ;
    out.print(odiRef.getTable("L","INT_NAME","A")) ;
    out.print(" \n") ;
    } else { out.print(" \ninsert into\t") ;
    out.print(odiRef.getTable("L","TARG_NAME","A")) ;
    out.print(" \n( \n\t") ;
    out.print(odiRef.getColList("", "[COL_NAME]", ",\\n\\t", "", "((INS and !TRG) and REW)")) ;
    out.print(" \n\t") ;
    out.print(odiRef.getColList(",", "[COL_NAME]", ",\\n\\t", "", "((INS and TRG) and REW)")) ;
    out.print(" \n) \n\nselect\n    ") ;
    out.print(odiRef.getColList("", "[COL_NAME]", ",\\n\\t", "", "((INS and !TRG) and REW)")) ;
    out.print("   \n  ") ;
    out.print(odiRef.getColList(",", "[EXPRESSION]", ",\\n\\t", "", "((INS and TRG) and REW)")) ;
    out.print(" \nFROM (\t\n") ;
    for (int i=odiRef.getDataSetMin(); i <= odiRef.getDataSetMax(); i++){out.print("\n") ;
    out.print(odiRef.getDataSet(i, "Operator")) ;
    out.print("\nselect \t") ;
    out.print(odiRef.getPop("DISTINCT_ROWS")) ;
    out.print("\n\t") ;
    out.print(odiRef.getColList(i,"", "[EXPRESSION] [COL_NAME]", ",\\n\\t", "", "((INS and !TRG) and REW)")) ;
    out.print(" \nfrom\t") ;
    out.print(odiRef.getFrom(i)) ;
    out.print("\nwhere\t") ;
    if (odiRef.getDataSet(i, "HAS_JRN").equals("1")) { out.print("\n\tJRN_FLAG <> 'D'\t") ;
    } else {out.print("\t(1=1)\t") ;
    } out.print("\n") ;
    out.print(odiRef.getJoin(i)) ;
    out.print("\n") ;
    out.print(odiRef.getFilter(i)) ;
    out.print("\n") ;
    out.print(odiRef.getJrnFilter(i)) ;
    out.print("\n") ;
    out.print(odiRef.getGrpBy(i)) ;
    out.print("\n") ;
    out.print(odiRef.getHaving(i)) ;
    out.print("\n") ;
    }out.print("\n) ") ;
    out.print(odiRef.getInfo("DEST_TAB_ALIAS_WORD")) ;
    out.print(" ODI_GET_FROM\n\n") ;
    } out.print("\n") ;
    ****** ORIGINAL TEXT ******
    <%if ( odiRef.getUserExit("FLOW_CONTROL").equals("1") ) { %>
    insert into <%=odiRef.getTable("L","TARG_NAME","A")%>
    <%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "((INS and !TRG) and REW)")%>
    <%=odiRef.getColList(",", "[COL_NAME]", ",\n\t", "", "((INS and TRG) and REW)")%>
    select <%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "((INS and !TRG) and REW)")%>
    <%=odiRef.getColList(",", "[EXPRESSION]", ",\n\t", "", "((INS and TRG) and REW)")%>
    from <%=odiRef.getTable("L","INT_NAME","A")%>
    <% } else { %>
    insert into <%=odiRef.getTable("L","TARG_NAME","A")%>
    <%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "((INS and !TRG) and REW)")%>
    <%=odiRef.getColList(",", "[COL_NAME]", ",\n\t", "", "((INS and TRG) and REW)")%>
    select
        <%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "((INS and !TRG) and REW)")%>  
      <%=odiRef.getColList(",", "[EXPRESSION]", ",\n\t", "", "((INS and TRG) and REW)")%>
    FROM (
    <%for (int i=odiRef.getDataSetMin(); i <= odiRef.getDataSetMax(); i++){%>
    <%=odiRef.getDataSet(i, "Operator")%>
    select  <%=odiRef.getPop("DISTINCT_ROWS")%>
    <%=odiRef.getColList(i,"", "[EXPRESSION] [COL_NAME]", ",\n\t", "", "((INS and !TRG) and REW)")%>
    from <%=odiRef.getFrom(i)%>
    where <% if (odiRef.getDataSet(i, "HAS_JRN").equals("1")) { %>
    JRN_FLAG <> 'D' <%} else {%> (1=1) <% } %>
    <%=odiRef.getJoin(i)%>
    <%=odiRef.getFilter(i)%>
    <%=odiRef.getJrnFilter(i)%>
    <%=odiRef.getGrpBy(i)%>
    <%=odiRef.getHaving(i)%>
    <%}%>
    ) <%=odiRef.getInfo("DEST_TAB_ALIAS_WORD")%> ODI_GET_FROM
    <% } %>
    Any suggestions are highly appreciated.
    Mike

    Hi Santy,
    No, no error or bad files.
    Actually the data is moved correctly into the temp table created by ODI, but not from there to the final target table.
    I found out the reason though: the knowledge modules imported were from another ODI installation!
    I made sure to import the proper KM and it worked.
    Thanks!

  • Oracle TimesTen In-Memory Database VS Oracle In-Memory Database Cache

    Hi,
    What is the difference between Oracle TimesTen In-Memory Database and Oracle In-Memory Database Cache?
    For 32-bit on Windows OS I am not able to insert more than 500k rows with 150 columns (with combinations of CHAR, BINARY_DOUBLE, BINARY_FLOAT, TT_BIGINT, REAL, DECIMAL, NUMERIC, etc.).
    [TimesTen][TimesTen 11.2.2.2.0 ODBC Driver][TimesTen]TT0802: Database permanent space exhausted -- file "blk.c", lineno 3450, procedure "sbBlkAlloc"
    I have set PermSize to 700 MB and TempSize to 100 MB.
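    (For reference, these are first-connection attributes in the DSN definition: sys.odbc.ini on Unix, the ODBC Data Source Administrator on Windows. A sketch with made-up paths:)
    [my_ttdb]
    Driver=/opt/TimesTen/tt1122/lib/libtten.so
    DataStore=/var/ttdata/my_ttdb
    PermSize=700
    TempSize=100
    LogBufMB=64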
    What is the max size we can give for PermSize, TempSize, and LogBufMB for 32-bit on Windows OS?
    What is the max size we can give for PermSize, TempSize, and LogBufMB for 64-bit on Windows OS?
    What is the max configuration of TT for 32-bit: what can I set for PermSize and TempSize?
    Thanks!

    They are the same product but they are licensed differently and the license limits what functionality you can use.
    TimesTen In-Memory Database is a product in its own right; it allows you to use TimesTen as a standalone database and also allows replication.
    IMDB Cache is an Oracle DB Enterprise Edition option (i.e. it can only be licensed as an option to an Oracle DB EE license). This includes all the functionality of TimesTen In-Memory Database but adds cache functionality (cache groups, cache grid, etc.).
    A 32-bit O/S is in general a poor platform on which to try to create an in-memory database of any significant size (32-bit O/S are very limited in memory addressing capability), and 32-bit Windows is the worst example. The hard-coded limit for total datastore size on a 32-bit O/S is 2 GB, but in reality you probably can't achieve that. On Windows the largest you can get is 1.1 GB, and most often less than that. If you need something more than about 0.5 GB on Windows then you really need to use 64-bit Windows and 64-bit TimesTen. There is no hard-coded upper limit to database size on 64-bit TimesTen; the limit is the amount of free physical memory (not virtual memory) in the machine. I have easily created a 12 GB database on a Win64 machine with 16 GB RAM. On 64-bit Unix machines we have live databases of over 1 TB...
    Chris

  • Is there any command/query/etc. which would allow one to understand what database objects (for example tables) are consuming memory, and how much of it?

    TimesTen Release 11.2.1.9.6 (64 bit Linux/x86_64)
    Command> dssize;
    PERM_ALLOCATED_SIZE:      51200000
    PERM_IN_USE_SIZE:         45996153
    PERM_IN_USE_HIGH_WATER:   50033464
    TEMP_ALLOCATED_SIZE:      2457600
    TEMP_IN_USE_SIZE:         19680
    TEMP_IN_USE_HIGH_WATER:   26760
    Is there any command/query/etc. which would allow one to understand what database objects (for example tables) are consuming memory, and how much of it?
    I tried to use the ttSize function, but it gives senseless results – for example, for the biggest table, tokens, it produces the following output (saying that this table is 90 GB in size, which physically cannot be true):
    Command> call ttsize('tokens',null,null);
    < 90885669274.0000 >
    1 row found.

    Are you able to use the command-line version of ttSize instead? This splits out how much space is being used by indexes (in the Temp section of the TT memory segment), which I think is being combined into one whole figure in the procedure version of ttSize you're using. For example:
    ttSize -tbl ia my_ttdb
    Rows = 4
    Total in-line row bytes = 17524
    Total = 17524
    Command> create index i1 on ia(a);
    ttSize -tbl ia my_ttdb;
    Rows = 4
    Total in-line row bytes = 17524
    Indexes:
    Range index JSPALMER.I1 adds 5618 bytes
      Total index bytes = 5618
    Total = 23142
    Command> call ttsize ('ia',,);
    < 23142.0000000000 >
    1 row found.
    In 11.2.2 we added the procedure ttComputeTabSizes, which populates system tables with detailed table size data and was designed as an alternative to ttSize. Unfortunately it still doesn't calculate index usage, and it isn't in 11.2.1. A usage sketch follows.
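    A usage sketch, assuming TimesTen 11.2.2 (the result views, SYS.ALL_TAB_SIZES and friends, are quoted from memory; check the docs for the exact names):
    Command> call ttComputeTabSizes('tokens');
    Command> SELECT * FROM sys.all_tab_sizes;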

  • Number of Oracle connections increasing

    We added more web servers (IIS servers) which connect to the Oracle database. Each server has a predefined connection pool. As a result, I see the number of Oracle connections has almost doubled. Here is my question: could this also impact database performance somehow down the road, like memory or CPU utilization? Most of the new sessions sit idle.
    Oracle Database 9.2

    Each connection to the database will have a server process (shadow process) running for it on the DB server if using the dedicated connection method.
    These server processes take about 1 MB of memory initially and can grow depending on what's running. So make sure you have enough memory to accommodate them, and check your DB parameters processes and sessions; they govern how many concurrent connections can be opened to the database. A sketch for checking them is below.
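    A minimal sketch for checking those limits and the current usage against them (standard dynamic views):
    -- configured limits
    SELECT name, value FROM v$parameter WHERE name IN ('processes', 'sessions');
    -- current and high-water usage against those limits
    SELECT resource_name, current_utilization, max_utilization, limit_value
      FROM v$resource_limit
     WHERE resource_name IN ('processes', 'sessions');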

  • How are SWAP space and Oracle's Shared Memory related?

    Platform: RHEL 5.4
    Oracle Version: 11.2
    I was trying to increase MEMORY_TARGET to 15g. Then I encountered the following error
    SQL> alter system set memory_max_target=20g scope=spfile;
    System altered.
    SQL> alter system set memory_target=15g scope=spfile;
    System altered.
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL>
    SQL>
    SQL>
    SQL> startup
    ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
    ORA-00845: MEMORY_TARGET not supported on this system
    SQL>
    SQL>
    SQL> select name from v$database;
    select name from v$database
    ERROR at line 1:
    ORA-01034: ORACLE not available
    Process ID: 0
    Session ID: 189 Serial number: 9
    From the below post
    MEMORY_TARGET not supported on this system
    I gathered that in Linux, if you want to set MEMORY_TARGET / MEMORY_MAX_TARGET to n GB, then you should have a SWAP ( /dev/shm ) of n GB.
    My swap was only 16 GB and I was trying to set memory_max_target to 20 GB.
    $ df -h /dev/shm
    Filesystem            Size  Used Avail Use% Mounted on
    tmpfs                  16G  7.2G  8.6G  46% /dev/shm
    Now, I am wondering: how is Oracle's shared memory (SGA+PGA) related to SWAP space on a server? Shouldn't Oracle's shared memory be related to physical RAM rather than disk-based SWAP space?

    related question:
    In the above mentioned OTN article it says ,
    You could encounter ORA-00845 if your shared memory is not mapped to /dev/shm
    I think he meant
    You could encounter ORA-00845 if your SWAP space is not mapped to /dev/shm .
    Am I right ?

  • Media Server Consuming Memory

    My project forwards an RTMP stream to an RTMFP (multicast) stream using a server action script. While forwarding this stream, the AMSCore process keeps consuming memory. This memory consumption requires the system to be rebooted repeatedly. How can this be prevented?

    I did not think I could use the sample multicast application with my project's software. Instead, I compared the outbound NetConnection and NetStream code in the sample application with the outbound NetConnection and NetStream code in my project's multicast server ActionScript. One of the differences I discovered is that my project's application connected to the RTMFP group with the following call:
    nc_connect.connect("rtmfp:");
    while the sample called:
    nc.connect(resetUriProtocol(streamContext.client.uri, "rtmfp"));
    Therefore, in my project's ActionScript, I replaced the connection to the "rtmfp:" serverless group with a connection to the application URI:
    net_connect.connect("rtmfp://127.0.0.1:" + port + "/" + application.name)
    This change fixed the excessive memory consumption. So why does the connection to a serverless network endpoint consume memory on Windows Server 2008 R2, to the point that all of the virtual memory is used or the system crashes?
    Scott F. Wilson
    Principal Software Engineer
    Raytheon SAS
    Marlborough, MA  01752
    Phone: 508-490-3123
    Fax:     508-490-1366

  • Oracle Connection Pool failure in COM+

    I am having some trouble trying to get a specific database to work with an application that makes use of a COM+ application. When we point the application at the UAT database everything seems fine, but when we point it at the production database, after a few successful calls it ends up failing and the COM+ application recycles. The event viewer provides the following information:
    Event Type:     Error
    Event Source:     COM+
    Event Category:     Unknown
    Event ID:     4786
    Date:          8/5/2007
    Time:          12:54:46 PM
    User:          N/A
    Computer:     APPL_SERVER
    Description:
    The system has called a custom component and that component has failed and generated an exception. This indicates a problem with the custom component. Notify the developer of this component that a failure has occurred and provide them with the information below.
    Component Prog ID: Oracle Connection Pool - tnsnames_alias
    Method Name: IDispenserDriver::CreateResource
    Server Application ID: {30A93CB3-25EB-4258-8C88-5AE103B7B86F}
    Server Application Instance ID:
    {A57C513E-519F-45BD-B46D-DC54B285F534}
    Server Application Name: COM+ Application Name
    The serious nature of this error has caused the process to terminate.
    Exception: C0000005
    Address: 0x7C8327F9
    Call Stack:
    + 0x7c8327f9
    ntdll!RtlFindActivationContextSectionGuid + 0x7d2
    ntdll!RtlInitializeSListHead + 0x175
    ntdll!RtlFindActivationContextSectionGuid + 0x1b7
    msvcrt!malloc + 0x6c
    oracommon9!sktsfMalloc + 0x14
    orageneric9!kpummapg + 0x58
    orageneric9!kghalo + 0xabb
    orageneric9!kghalf + 0x102
    orageneric9!kopo2cpc + 0x61
    orageneric9!kopeini + 0x1d
    orageneric9!kopo2cpc + 0xd2
    orageneric9!kopopgi + 0x117
    OraClient9!koudpnp + 0x712
    OraClient9!koudpnp + 0x101
    OraClient9!kpuinit0 + 0xb19
    OraClient9!kpuinit + 0x38
    OraClient9!OCIEnvInit + 0x1c
    oramts!kpntsrvr::kpntsrvr(class kpntdbid *) + 0x80
    oramts!kpntdbid::allocNewSrvr(struct SIDAND_ATTRIBUTES *) + 0x138
    oramts!kpntdbid::GetSrvr(class kpntsvrl * *,unsigned long) + 0x7df
    oramts!kpntdisp::getNet8conn(class kpntsvrl * *,unsigned long) + 0x41
    oramts!kpntsess::initOCI(void) + 0xec
    oramts!kpntsess::sessionBegin(void) + 0x17b
    oramts!kpntdisp::CreateResource(unsigned long,unsigned long *,long *) + 0xc4
    COMSVCS!DispManGetContext + 0xa3d
    COMSVCS!DispManGetContext + 0x1fee
    oramts!kpntdisp::allocateConnection(class kpntsess * *,unsigned long,class kpntrtyp *) + 0x3c4
    oramts!_kpntsvcgetex + 0x183
    oramts!_kpntsvcget + 0x25
    oramts!kpntctra::getConnectionAndHandles(class kpntrtyp *,struct OCISvcCtx * *,struct OCITrans * *,struct OCIError * *) + 0x87
    oramts!kpntctra::abortBranch(struct xid_t &,class kpntbrnch *,struct BOID *,int,struct BOID *) + 0x15a
    oramts!kpntctra::doAbort(struct BOID *,int,struct BOID *) + 0x454
    oramts!kpntajob::doJob(void) + 0x27
    oramts!kpntjobq::serviceRequest(class kpntjob *) + 0x3e
    oramts!workerThread(void *) + 0xd0
    msvcrt!_endthreadex + 0xa3
    kernel32!GetModuleFileNameA + 0xeb
    Not being an expert in Oracle, I have been able to dig up a little bit of information that might be of use ...
    a) Our tnsnames.ora indicates that the connections are to be DEDICATED, and running Toad bears this out -- dllhost ends up with a single connection.
    b) Most of the databases' initial parameters seem to be very similar. The only difference I noticed was that archive log mode and db_cache_advice are ON for production.
    c) We have little control over how the connection strings are created internally in this COM+ application, but however they are created, it works for UAT and doesn't for PROD.
    d) When we go into our web application and hit a page that uses the COM+ component to render, it will work the first time, but when I do a simple browser refresh it will usually fail on the 2nd or 3rd try. It is almost as if it is trying to expand the connection pool size and the Oracle server is failing.
    If I didn't mention it earlier, the two database instances run on different servers but the application server is exactly the same. We only change the alias we use for the database (both are defined in tnsnames) and the password used to make the connection.
    Does anyone have any clues on this one? I am really spinning my wheels trying to figure out what could cause this type of situation. Anything at all would be very helpful.

    It appeared to us that the problem was with the Oracle server and that it might have been failing when we were trying to expand our application connection pool size or, basically, obtain more connections.
    The biggest indicator of this is that we run the same application code against two different databases and one works and one does not work. Having said this, I suppose the problem could be rooted in a data error instead of an oracle server error ...
    Is there a specific trace file on the oracle server that would help me point to any error that is truly an oracle server error? Sorry I am very new to Oracle.

  • Crystal Reports and Oracle connections

    hi to everyone
    I installed the Oracle client
    and after that Crystal Reports 2008 SP3.
    When I open a new (blank page) and look for an Oracle connection,
    it looks like it doesn't exist.
    Please, can someone tell me why I can't see the connection to Oracle?
    yossi bar

    Hello,
    CR looks for the Oracle \bin folder in the PATH statement. If it can't find it, then the option to select the Oracle native driver will not show.
    Thank you
    Don

  • My computer failed to install the compatible driver software when updating my iPhone, and now my iPhone is unusable as it is requesting me to connect it to iTunes but I physically cannot connect to iTunes... help me please?

    My computer failed to install the compatible driver software when updating my iPhone, and now my iPhone is unusable as it is requesting me to connect it to iTunes but I physically cannot connect to iTunes... help me please?

    Hi there croadstar,
    You may find the troubleshooting steps in the article below helpful.
    iOS: Device not recognized in iTunes for Windows
    http://support.apple.com/kb/ts1538
    Issues installing iTunes or QuickTime for Windows
    http://support.apple.com/kb/ht1926
    -Griff W. 

  • Imac 27" Intel Core 7 CPU. Screen goes black and will not respond except with a push of power button. Second monitor connected via displayport continues to display fine. Apply Store did full hardware scan and all is fine. Did clean wipe from Mavrick back

    Imac 27" Intel Core7  CPU 16 Gig RAM. Screen goes black and will not respond except with a push of power button. Second monitor connected via displayport continues to display fine. Apply Store did full hardware diagnostic and all is fine. Did clean wipe from Mavrick back to Mountain Lion but problem remains. Apple Store can do no more.

    I did some more digging; it appears to be a backlight problem only. I can see the screen very dimly if I use a bright flashlight in a very dark room. It also seems to run OK if the brightness is turned down a LOT.
    So I'm thinking this is an LED driver board issue or the display itself. I'll open it up, check the connection between the two, and see if I can get any more clues. At least I can use it somewhat now by dimming the display significantly...

  • JDBC-ORACLE CONNECTIVITY ISSUE WITH OCI8 DRIVER using Oracle 11g client

    JDBC-Oracle connectivity issue with the OCI8 driver using the Oracle 11g client.
    I am getting the below error when I'm trying to access the Oracle DB using the Oracle 11g client. It works with earlier Oracle client versions. How do I resolve this? Is there any issue with the version of ojdbc6.jar that I am using? I can't use the thin driver since it's an old application for which I don't have the source files.
    Apr 6, 2013 1:00:59 PM org.apache.catalina.core.StandardWrapperValve invoke
    SEVERE: Servlet.service() for servlet jsp threw exception
    java.lang.UnsatisfiedLinkError: no ocijdbc9 in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1682)
    at java.lang.Runtime.loadLibrary0(Runtime.java:822)
    at java.lang.System.loadLibrary(System.java:992)
    at oracle.jdbc.oci8.OCIDBAccess.logon(OCIDBAccess.java:262)
    at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java:346)
    at oracle.jdbc.driver.OracleDriver.getConnectionInstance(OracleDriver.java:468)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:314)
    at java.sql.DriverManager.getConnection(DriverManager.java:525)
    at java.sql.DriverManager.getConnection(DriverManager.java:171)
    at PettyCash.SysDate.getSysSubSys(SysDate.java:232)
    at org.apache.jsp.PettyCash.index_jsp._jspService(org.apache.jsp.PettyCash.index_jsp:186)
    at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:322)
    at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
    at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    The code is as follows, for reference:
    import oracle.jdbc.driver.*;
    DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
    conn = DriverManager.getConnection ("jdbc:oracle:oci8:@" + database,db_user, db_pass);
    Environment variables set are as follows:
    classpath
    C:\Program Files\apache-tomcat-5.5.12\common\lib\servlet-api.jar;C:\Program Files\apache-tomcat-5.5.12\webapps\ROOT\WEB-INF\lib\classes12.jar;C:\Program Files\apache-tomcat-5.5.12\webapps\ROOT\WEB-INF\lib\ojdbc6.jar;
    JAVA_HOME
    C:\Program Files\Java\jdk1.5.0_04
    PATH
    C:\Program Files\Java\jdk1.5.0_04\bin
    ORACLE_HOME
    D:\Oracle11\product\11.2.0\client_1\BIN

    Apr 8, 2013 5:24:06 PM org.apache.catalina.core.StandardWrapperValve invoke
    SEVERE: Servlet.service() for servlet jsp threw exception
    java.lang.NullPointerException
         at org.apache.jsp.abc.index_jsp._jspService(org.apache.jsp.abc.index_jsp:280)
         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:322)
         at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
         at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
         at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:432)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:868)
         at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:663)
         at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
         at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
         at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
         at java.lang.Thread.run(Thread.java:595)
