Problem with LONG: value truncated when size exceeds 32767

With ODP.NET 9.2.0.2.102 against Oracle 8.1.7 I can't read a LONG value larger than 32767 characters.
When the size exceeds this limit, the value is truncated at exactly 32767 characters, even if InitialLONGFetchSize is set to a larger value.
Example of my code (VB.NET 2003):
objCmd = New OracleCommand(sql, m_ObjOracleConn)
objCmd.CommandType = CommandType.Text
objCmd.InitialLONGFetchSize = 200000
value = objCmd.ExecuteScalar()

Thank you. Unfortunately the result is always the same.
I've tried this:
Dim objCmd As OracleCommand
Dim DA As OracleDataReader
dim sql as string = "SELECT ROWID,OPH FROM A_GA WHERE DICO = 'G' AND DOSSIER = '@' AND ID1 = 'TEST' AND NO=0 AND RUB = '0000'"
objCmd = New OracleCommand(sql, m_ObjOracleConn)
objCmd.InitialLONGFetchSize = 200000
m_ObjOracleConn.Open()
DA = objCmd.ExecuteReader()
If DA.Read() Then
Return DA.GetString(1)
Else
Return String.Empty
End If
The length of the result string is still 32767.
I've added objCmd.AddRowid = True -> result.Length = 32767
objCmd.ExecuteScalar() -> result.Length = 32767
A parameterized call -> result.Length = 32767
What's wrong with my code?
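One workaround to consider (a sketch only, assuming a staging table may be created on the 8.1.7 database; the staging table name is hypothetical, while the source table, columns and filter are taken from the query above): convert the LONG to a CLOB on the server with TO_LOB and read the CLOB through ODP.NET, since CLOB values can be read in full (for example via GetOracleClob or GetString) rather than being cut at 32767 characters.
-- Run once, or refresh as needed; TO_LOB converts the LONG column into a CLOB.
CREATE TABLE a_ga_clob_stage AS
  SELECT dico, dossier, id1, no, rub, TO_LOB(oph) AS oph
  FROM   a_ga;
-- Then read the full value from the CLOB column instead of the LONG:
SELECT oph
FROM   a_ga_clob_stage
WHERE  dico = 'G' AND dossier = '@' AND id1 = 'TEST' AND no = 0 AND rub = '0000';
In later ODP.NET releases, setting InitialLONGFetchSize to -1 is documented to fetch the entire LONG value, but that option may not exist in 9.2.0.2.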

Similar Messages

  • Issue with email body size exceeding 4000 characters in Apex

    Hi
    We are getting an "ORA-01403: no data found" error in our APEX application whenever the email body size exceeds 4000 characters. When the content of the email body is edited to reduce the size, it works fine.
    In our application, the item details are emailed as part of the email body. When the number of items is large, the email body exceeds 4000 characters, and when we try to send these details we get the "ORA-01403: no data found" error.
    Need your help to know if there is any way in APEX to handle this issue and send an email whose body exceeds 4000 characters from an APEX application.
    Please advise.
    Regards,
    Sri

    Update your forum profile with a real handle instead of "user13394362".
    ALWAYS include the following information with the initial question:
    - APEX version
    - DB version and edition
    - Web server architecture (EPG, OHS or APEX listener)
    - Browser(s)/version(s) used
    - Theme
    - Templates
    - Region type
    I am using the APEX_MAIL.SEND procedure from the APEX application to send the mail. The argument P_BODY_HTML which holds the email body content is of VARCHAR2 datatype and it has a limit of 4000 characters. So I am looking for a way to send the mail through APEX_MAIL.SEND even if the email body size exceeds 4000 characters.
    As (somewhat telegraphically) pointed out above, apex_mail.send is overloaded to accept either VARCHAR2 or CLOB p_body_html parameters. Use a CLOB p_body_html parameter.
    Please consult the documentation before posting questions here.
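    A minimal sketch of that CLOB overload (assuming APEX_MAIL is available as usual; the addresses and the my_items detail table are placeholders):
    DECLARE
      l_body      CLOB := 'Plain-text fallback for mail clients that cannot render HTML.';
      l_body_html CLOB := '<html><body><table>';
    BEGIN
      -- Hypothetical detail loop: append each item row to the CLOB body; the result may exceed 4000 characters.
      FOR r IN (SELECT item_no, description FROM my_items) LOOP
        l_body_html := l_body_html || '<tr><td>' || r.item_no || '</td><td>' || r.description || '</td></tr>';
      END LOOP;
      l_body_html := l_body_html || '</table></body></html>';
      apex_mail.send(
          p_to        => 'recipient@example.com',
          p_from      => 'sender@example.com',
          p_body      => l_body,
          p_body_html => l_body_html,   -- CLOB parameter, so the 4000-character VARCHAR2 limit does not apply
          p_subj      => 'Item details');
      apex_mail.push_queue;             -- optional: deliver the queued mail immediately
    END;
    /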

  • Java.lang.OutOfMemoryError: Requested array size exceeds VM limit

    Hi!
    I have this problem and I do not know how to resolve it:
    I've an Oracle 11gR2 database in which I installed the Italian network.
    When I try to execute a shortest path algorithm or a shortestPathAStar algorithm in a Java program I get this error.
    [ConfigManager::loadConfig, INFO] Load config from specified inputstream.
    [oracle.spatial.network.NetworkMetadataImpl, DEBUG] History metadata not found for ROUTING.ITALIA_SPAZIO
    [LODNetworkAdaptorSDO::readMaximumLinkLevel, DEBUG] Query String: SELECT MAX(LINK_LEVEL) FROM ROUTING.ITALIA_SPAZIO_LINK$ WHERE LINK_LEVEL > -1
    *****Begin: Shortest Path with Multiple Link Levels
    *****Shortest Path Using Dijkstra
    [oracle.spatial.network.lod.LabelSettingAlgorithm, DEBUG] User data categories:
    [LODNetworkAdaptorSDO::isNetworkPartitioned, DEBUG] Query String: SELECT p.PARTITION_ID FROM ROUTING.ITA_SPAZIO_P_TABLE p WHERE p.LINK_LEVEL = ? AND ROWNUM = 1 [1]
    [QueryUtility::prepareIDListStatement, DEBUG] Query String: SELECT NODE_ID, PARTITION_ID FROM ROUTING.ITA_SPAZIO_P_TABLE p WHERE p.NODE_ID IN ( SELECT column_value FROM table(:varray) ) AND LINK_LEVEL = ?
    [oracle.spatial.network.lod.util.QueryUtility, FINEST] ID Array: [2195814]
    [LODNetworkAdaptorSDO::readNodePartitionIds, DEBUG] Query linkLevel = 1
    [NetworkIOImpl::readLogicalPartition, DEBUG] Read partition from blob table: partition 1181, level 1
    [LODNetworkAdaptorSDO::readPartitionBlobEntry, DEBUG] Query String: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM ROUTING.ITA_SPAZIO_P_BLOBS_TABLE WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [1181,1]
    [oracle.spatial.network.lod.LabelSettingAlgorithm, WARN] Requested array size exceeds VM limit
    [NetworkIOImpl::readLogicalPartition, DEBUG] Read partition from blob table: partition 1181, level 1
    [LODNetworkAdaptorSDO::readPartitionBlobEntry, DEBUG] Query String: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM ROUTING.ITA_SPAZIO_P_BLOBS_TABLE WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [1181,1]
    Exception in thread "main" java.lang.OutOfMemoryError: Requested array size exceeds VM limit
    I use the sdoapi.jar, sdomn.jar and sdoutl.jar stored in the jlib directory of the oracle installation path.
    When I perform this query: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM ROUTING.ITA_SPAZIO_P_BLOBS_TABLE WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [1181,1]
    I got the following result
    BLOB NUM_INODES NUM_ENODES NUM_ILINKS NUM_ELINKS NUM_INLINKS NUM_OUTLINKS USER_DATA_INCLUDED
    (BLOB) 3408 116 3733 136 130 128 N
    The Java code I use is:
    package it.sistematica.oracle.spatial;
    import it.sistematica.oracle.network.data.Constant;
    import java.io.InputStream;
    import java.sql.Connection;
    import oracle.spatial.network.lod.DynamicLinkLevelSelector;
    import oracle.spatial.network.lod.GeodeticCostFunction;
    import oracle.spatial.network.lod.HeuristicCostFunction;
    import oracle.spatial.network.lod.LODNetworkManager;
    import oracle.spatial.network.lod.LinkLevelSelector;
    import oracle.spatial.network.lod.LogicalSubPath;
    import oracle.spatial.network.lod.NetworkAnalyst;
    import oracle.spatial.network.lod.NetworkIO;
    import oracle.spatial.network.lod.PointOnNet;
    import oracle.spatial.network.lod.config.LODConfig;
    import oracle.spatial.network.lod.util.PrintUtility;
    import oracle.spatial.util.Logger;
    public class SpWithMultiLinkLevel {

        private static NetworkAnalyst analyst;
        private static NetworkIO networkIO;

        private static void setLogLevel(String logLevel) {
            if ("FATAL".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_FATAL);
            else if ("ERROR".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_ERROR);
            else if ("WARN".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_WARN);
            else if ("INFO".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_INFO);
            else if ("DEBUG".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_DEBUG);
            else if ("FINEST".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_FINEST);
            else // default: set to ERROR
                Logger.setGlobalLevel(Logger.LEVEL_ERROR);
        }

        public static void main(String[] args) throws Exception {
            String configXmlFile = "LODConfigs.xml";
            String logLevel = "FINEST";
            String dbUrl = Constant.PARAM_DB_URL;
            String dbUser = Constant.PARAM_DB_USER;
            String dbPassword = Constant.PARAM_DB_PASS;
            String networkName = Constant.PARAM_NETWORK_NAME;
            long startNodeId = 2195814;
            long endNodeId = 3415235;
            int linkLevel = 1;
            double costThreshold = 1550;
            int numHighLevelNeighbors = 8;
            double costMultiplier = 1.5;
            Connection conn = null;

            // get input parameters
            for (int i = 0; i < args.length; i++) {
                if (args[i].equalsIgnoreCase("-dbUrl"))
                    dbUrl = args[i + 1];
                else if (args[i].equalsIgnoreCase("-dbUser"))
                    dbUser = args[i + 1];
                else if (args[i].equalsIgnoreCase("-dbPassword"))
                    dbPassword = args[i + 1];
                else if (args[i].equalsIgnoreCase("-networkName") && args[i + 1] != null)
                    networkName = args[i + 1].toUpperCase();
                else if (args[i].equalsIgnoreCase("-linkLevel"))
                    linkLevel = Integer.parseInt(args[i + 1]);
                else if (args[i].equalsIgnoreCase("-configXmlFile"))
                    configXmlFile = args[i + 1];
                else if (args[i].equalsIgnoreCase("-logLevel"))
                    logLevel = args[i + 1];
            }

            // opening connection
            System.out.println("Connecting to ......... " + Constant.PARAM_DB_URL);
            conn = LODNetworkManager.getConnection(dbUrl, dbUser, dbPassword);
            System.out.println("Network analysis for " + networkName);
            setLogLevel(logLevel);

            // load user specified LOD configuration (optional),
            // otherwise default configuration will be used
            InputStream config = (new Network()).readConfig(configXmlFile);
            LODNetworkManager.getConfigManager().loadConfig(config);
            LODConfig c = LODNetworkManager.getConfigManager().getConfig(networkName);

            // get network input/output object
            networkIO = LODNetworkManager.getCachedNetworkIO(conn, networkName, networkName, null);

            // get network analyst
            analyst = LODNetworkManager.getNetworkAnalyst(networkIO);

            double[] costThresholds = {costThreshold};
            LogicalSubPath subPath = null;

            try {
                System.out.println("*****Begin: Shortest Path with Multiple Link Levels");
                System.out.println("*****Shortest Path Using Dijkstra");
                String algorithm = "DIJKSTRA";
                linkLevel = 1;
                costThreshold = 5000;
                subPath = analyst.shortestPathDijkstra(new PointOnNet(startNodeId), new PointOnNet(endNodeId), linkLevel, null);
                PrintUtility.print(System.out, subPath, true, 10000, 0);
                System.out.println("*****End: Shortest path using Dijkstra");
            } catch (Exception e) {
                e.printStackTrace();
            }

            try {
                System.out.println("*****Shortest Path using Astar");
                HeuristicCostFunction costFunction = new GeodeticCostFunction(0, -1, 0, -2);
                LinkLevelSelector lls = new DynamicLinkLevelSelector(analyst, linkLevel, costFunction, costThresholds, numHighLevelNeighbors, costMultiplier, null);
                subPath = analyst.shortestPathAStar(new PointOnNet(startNodeId), new PointOnNet(endNodeId), null, costFunction, lls);
                PrintUtility.print(System.out, subPath, true, 10000, 0);
                System.out.println("*****End: Shortest Path Using Astar");
                System.out.println("*****End: Shortest Path with Multiple Link Levels");
            } catch (Exception e) {
                e.printStackTrace();
            }

            if (conn != null) {
                try { conn.close(); } catch (Exception ignore) { }
            }
        }
    }
    At first I created a two-link-level network with these commands:
    exec sdo_net.spatial_partition('ITALIA_SPAZIO', 'ITA_SPAZIO_P_TABLE', 5000, 'LOAD_DIR', 'sdlod_part.log', 'w', 1);
    exec sdo_net.spatial_partition('ITALIA_SPAZIO', 'ITA_SPAZIO_P_TABLE', 60000, 'LOAD_DIR', 'sdlod_part.log', 'w', 2);
    exec sdo_net.generate_partition_blobs('ITALIA_SPAZIO', 1, 'ITA_SPAZIO_P_BLOBS_TABLE', true, true, 'LOAD_DIR', 'sdlod_part_blob.log', 'w', false, true);
    exec sdo_net.generate_partition_blobs('ITALIA_SPAZIO', 2, 'ITA_SPAZIO_P_BLOBS_TABLE', true, true, 'LOAD_DIR', 'sdlod_part_blob.log', 'w', false, true);
    Then I tried with a single-level network but I got the same error.
    Can somebody please help me?

    I found the solution to this problem.
    In the LODConfig.xml file I had:
    <readPartitionFromBlob>true</readPartitionFromBlob>
    <partitionBlobTranslator>oracle.spatial.network.lod.PartitionBlobTranslator11g</partitionBlobTranslator>
    but when I change it to
    <readPartitionFromBlob>true</readPartitionFromBlob>
    <partitionBlobTranslator>oracle.spatial.network.lod.PartitionBlobTranslator11gR2</partitionBlobTranslator>
    the application starts without the above-mentioned error.

  • QM: Sample Size is 32767 if freely defined inspection point is used

    Hi,
    I am using an inspection plan with a freely defined inspection point (100).
    Inspection type: 04
    I want to do 100% inspection. The material master setting is done for early lot creation.
    If I use a freely defined inspection point in the plan and create an order of any quantity > 32767, I get a sample size of 32767. If I create an order of 100,000 qty the sample size is still 32767 and I can't even change it during results recording. If I don't use a freely defined inspection point I get a sample size equal to the order quantity.
    My requirement:
    I want to use a freely defined inspection point in the plan, and I also want a sample size equal to the production quantity (not always 32767 when the production quantity exceeds that value).
    Please let me know how to control this fixed sample size of 32767 when my plan uses a freely defined inspection point.
    Regards,
    Abir.

    Hi,
    The sample size under the 'Inspect' column in the results recording screen is always 32767 if the order quantity is > 32767.
    I use EARLY lot creation (through the material master setting) because I want one lot for one production order and do partial confirmations. The lot is created with the production order release, before the actual GR takes place.
    Say I have an order with qty 100,000 and if I open the lot after REL of order I see,
    inspection lot quantity: 100,000
    Actual Lot Qty:             0
    Sample Size:                100,000
    Even if I do confirmation and GR for say 100,000 qty and open the Lot I can see,
    inspection lot quantity: 100,000
    Actual Lot Qty:             100,000
    Sample Size:                100,000
    When I go to the results recording screen, against every MIC there is a column 'Inspect' (there I get 32767); there is another entry field 'Inspected' (here the system doesn't allow entering more than 32767).
    Sampling procedure I have used:
    Sample Type:     200    100% Inspection
    Valuation Mode: 500    Manual Valuation
    Free Inspection Point
    Maybe I made a mistake in describing the issue; the problem is with the 'Inspect' quantity in results recording, which always appears as 32767. For smaller-quantity orders it equals the order quantity. How do I change the 'Inspect' quantity?
    Also note that if I remove the inspection point from the plan I get an 'Inspect' quantity equal to the production order quantity.
    Best regards,
    Abir.

  • How to copy a table with LONG and CLOB datatype over a dblink?

    Hi All,
    I need to copy a table from an external database into a local one. Note that this table has both LONG and CLOB datatypes included.
    I have taken 2 approaches to do this:
    1. Use the CREATE TABLE AS....
    SQL> create table XXXX_TEST as select * from XXXX_INDV_DOCS@ext_db;
    create table XXXX_TEST as select * from XXXX_INDV_DOCS@ext_db
    ERROR at line 1:
    ORA-00997: illegal use of LONG datatype
    2. After reading some threads I tried to use the COPY command:
    SQL> COPY FROM xxxx/pass@ext_db TO xxxx/pass@target_db REPLACE XXXX_INDV_DOCS USING SELECT * FROM XXXX_INDV_DOCS;
    Array fetch/bind size is 15. (arraysize is 15)
    Will commit when done. (copycommit is 0)
    Maximum long size is 80. (long is 80)
    CPY-0012: Datatype cannot be copied
    If my understanding is correct, the 1st statement fails because there is a LONG datatype in the XXXX_INDV_DOCS table and the 2nd one fails because there is a CLOB datatype.
    Is there a way to copy the entire table (all columns, including both LONG and CLOB) over a dblink?
    Would greatly appreciate any workarounds or ideas!
    Regards,
    Pawel.

    Hi Nicolas,
    There is a reason I am not using export/import:
    - I would like to have a one-script solution for this problem (meaning execute one script on one machine).
    - I am not able to make an SSH connection from the target DB to the local one (although the other way around works fine), which means I cannot copy the dump file from the target server to the local one.
    - With export/import I would need an SSH connection on the target DB in order to issue the exp command.
    Therefore, I am looking for a solution (or a workaround) which will work over a DBLINK.
    Regards,
    Pawel.
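    For completeness, one dblink-only approach that might work (a sketch, assuming you can create a staging table on the external source database and that your Oracle version allows materializing LOBs over a database link in CREATE TABLE AS SELECT / INSERT ... SELECT, roughly 10.2 onward; doc_id, clob_col and long_col are placeholder column names):
    -- Step 1: run on the source database (the ext_db side), because TO_LOB only works on a LONG that is local to the statement.
    CREATE TABLE xxxx_indv_docs_stage AS
      SELECT doc_id,                        -- list every non-LONG column explicitly
             clob_col,                      -- existing CLOB columns copy over unchanged
             TO_LOB(long_col) AS long_col   -- the LONG column converted to a CLOB
      FROM   xxxx_indv_docs;
    -- Step 2: run on the local database; pull the now LONG-free staging table over the dblink.
    CREATE TABLE xxxx_test AS SELECT * FROM xxxx_indv_docs_stage@ext_db;
    On older releases that reject LOBs over a database link, export/import of the staging table may still be the only option.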

  • Essbase Error: Set is too large to be processed. Set size exceeds 2^64 tuples

    Hi,
    We are using OBIEE 11.1.1.6 with Essbase 9.3.3 as a data source. When I try to run a report in OBIEE, I am getting the below error:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 96002] Essbase Error: Internal error: Set is too large to be processed. Set size exceeds 2^64 tuples (HY000)
    But if I run the same query in the Excel add-in, I get just 20 records. Wondering why I get this error in OBIEE. Has anyone encountered the same issue?
    Thanks In advance,

    Well, if you want to export it I think you have to do it manually.
    The workaround is to open your Aperture library by right-clicking it and choosing show contents...
    Then go into your project, right-click, show contents...
    In here there are sub folders on dates that the pictures were added to those projects. If you open the sub folder and search for your pictures name it should be in that main folder.
    You can just copy it out as you would any normal file to any other location.
    Voila you have manually exported out your file.
    There is a very similar post that has been closed, but again you can't export the original file that you are working on - FYI http://discussions.apple.com/thread.jspa?threadID=2075419

  • Using DBMS_DATAPUMP with LONG data type

    I've got a procedure below that calls DBMS_DATAPUMP using a REMOTE_LINK to move a schema from one database to another. However, a couple of the tables within that schema have columns with the LONG data type, and when I run it I get an error saying that you cannot move data with the LONG data type over a REMOTE_LINK. So no data in those particular tables gets moved over.
    Has anyone else had this issue? If so, do you have a workaround? I tried adding a CLOB column to my table and setting the new CLOB equal to the LONG, but I couldn't get that to work either, even when I tried using TO_LOB. If I could get that to work, then I could just drop the LONG, move the schema, then recreate the LONG column on the other side.
    Here's my procedure....
    DECLARE
         /* EXPORT/IMPORT VARIABLES */
         v_dp_job_handle             NUMBER ;          -- Data Pump job handle
         v_count                     NUMBER ;          -- Loop index
         v_percent_done              NUMBER ;          -- Percentage of job complete
         v_job_state                 VARCHAR2(30) ;    -- To keep track of job state
         v_message                   KU$_LOGENTRY ;    -- For WIP and error messages
         v_job_status                KU$_JOBSTATUS ;   -- The job status from get_status
         v_status                    KU$_STATUS ;      -- The status object returned by get_status
         v_logfile                   NUMBER ;
         v_date                      VARCHAR2(13) ;
         v_project                   VARCHAR2(30) ;    -- Schema to import (missing from the original declarations)
         v_source_server_name        VARCHAR2(50) ;
         v_destination_server_name   VARCHAR2(50) ;
    BEGIN
         v_project := 'TEST' ;
         v_date := TO_CHAR(SYSDATE, 'MMDDYYYY_HHMI') ;
         v_source_server_name := 'TEST_DB' ;
         v_dp_job_handle := DBMS_DATAPUMP.OPEN(
              OPERATION     => 'IMPORT',
              JOB_MODE     => 'SCHEMA',
              REMOTE_LINK => v_source_server_name,
              JOB_NAME     => v_project||'_EXP_'||v_date,
              VERSION          => 'LATEST') ;
         v_logfile := DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE ;
         DBMS_DATAPUMP.ADD_FILE(
              HANDLE          => v_dp_job_handle,
              FILENAME     => v_project||'_EXP_'||v_date||'.LOG',
              DIRECTORY     => 'DATAPUMP',
              FILETYPE     => v_logfile) ;
         DBMS_DATAPUMP.METADATA_FILTER(
              HANDLE          => v_dp_job_handle,
              NAME          => 'SCHEMA_EXPR',
              VALUE          => '= '''||v_project||''' ') ;
         DBMS_DATAPUMP.START_JOB(v_dp_job_handle) ;
         v_percent_done := 0 ;
         v_job_state := 'UNDEFINED' ;
         WHILE (v_job_state != 'COMPLETED') AND (v_job_state != 'STOPPED')
         LOOP
              DBMS_DATAPUMP.GET_STATUS(
                   v_dp_job_handle,
                   DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR + DBMS_DATAPUMP.KU$_STATUS_JOB_STATUS + DBMS_DATAPUMP.KU$_STATUS_WIP,
                   -1,
                   v_job_state,
                   v_status) ;
                   v_job_status := v_status.JOB_STATUS ;
              IF v_job_status.PERCENT_DONE != v_percent_done THEN
                   DBMS_OUTPUT.PUT_LINE('*** Job percent done = '||TO_CHAR(v_job_status.PERCENT_DONE)) ;
                   v_percent_done := v_job_status.PERCENT_DONE ;
              END IF ;
              IF BITAND(v_status.MASK, DBMS_DATAPUMP.KU$_STATUS_WIP) != 0 THEN
                   v_message := v_status.WIP ;
              ELSIF BITAND(v_status.mask, DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR) != 0 THEN
                   v_message := v_status.ERROR ;
              ELSE
                   v_message := NULL ;
              END IF ;
              IF v_message IS NOT NULL THEN
                   v_count := v_message.FIRST ;
                   WHILE v_count IS NOT NULL
                   LOOP
                        DBMS_OUTPUT.PUT_LINE(v_message(v_count).LOGTEXT) ;
                        v_count := v_message.NEXT(v_count) ;
                   END LOOP ;
              END IF ;
         END LOOP ;
         DBMS_OUTPUT.PUT_LINE('Job has completed') ;
         DBMS_OUTPUT.PUT_LINE('Final job state = '||v_job_state) ;
         DBMS_DATAPUMP.DETACH(v_dp_job_handle) ;
    END ;

    But the application we have that uses the database cannot be changed to read from a CLOB.
    Why can't you change the application?
    Well, anyway you should point out to your superiors that Oracle documented years ago that LONGs should not be used anymore...
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/datatype.htm#sthref3806
    It clearly states:
    LONG Datatype
    Note:
    Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
    Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases.
    How do I go from CLOB to LONG?
    I'm sorry, I cannot help you on that one; I don't think you can do that at all (Oracle wants us to stop using LONGs, so it's a one-way conversion...):
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1037232794454#15512131314505
    So: NO built-in, you'll need to write a program. If the CLOB is ALWAYS LESS THAN 32K in size you can use PL/SQL, but is that the case in your case? Only you know that.
    I believe that question is still unanswered on this forum, but you might try searching for answers on this forum and
    the 'Database-General' forum: General Database Discussions.
    Perhaps you can google a Q&D workaround...
    (And consider convincing your colleagues to just convert your LONGs to LOBs.)
    Edited by: hoek on Apr 8, 2009 5:43 PM
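    For reference, a sketch of the LONG-to-LOB conversion recommended above, assuming you are allowed to alter the table (ALTER TABLE ... MODIFY from LONG to CLOB has been supported since Oracle 9i); the schema, table and column names are placeholders:
    -- Convert the LONG column to a CLOB in place.
    ALTER TABLE my_schema.my_table MODIFY (long_col CLOB);
    -- Optionally reorganise the table afterwards to reclaim space.
    ALTER TABLE my_schema.my_table MOVE;
    Once the columns are CLOBs, the network-mode (REMOTE_LINK) Data Pump job should no longer need to skip those tables.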

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications started performing badly last week.
    After some analysis, it was found this was due to a data increase in a table that was stored in the KEEP pool.
    After the data increase, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases the KEEP pool would still be used and the remaining data would be brought in as needed from the table.
    But I ran some tests and found that is not the case. If the table size exceeds db_keep_cache_size, then the KEEP pool is not used at all.
    Is my inference correct here ?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    Setup
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
    PL/SQL procedure successfully completed.
    DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Only with 20M db_keep_cache_size do I see no physical reads.
    Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
    Or am I missing something ?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
           992
    I think my inference was wrong and, as you said, I am indeed seeing the effect of the tail end of the scan flushing the start of the table.
    Rgds,
    Gokul
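    As a quick sizing check (a sketch, assuming access to the DBA views; HR.T1 as used in the test above), you can compare the space the table occupies below its high-water mark with the current KEEP pool size:
    -- Bytes below the table's high-water mark (blocks * tablespace block size; BLOCKS is populated once statistics have been gathered)
    SELECT t.blocks * ts.block_size AS table_bytes
    FROM   dba_tables t
    JOIN   dba_tablespaces ts ON ts.tablespace_name = t.tablespace_name
    WHERE  t.owner = 'HR' AND t.table_name = 'T1';
    -- Current KEEP pool size, in bytes
    SELECT value FROM v$parameter WHERE name = 'db_keep_cache_size';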

  • Can someone help with auto-size fields in forms?

    I created a form in Acrobat for my team members with auto-size text fields that allow the text to shrink when the field is not large enough to show the entire text. When I then open the same form file with Adobe Reader on my machine, with text that has shrunk in some field, I can see all the text and also have a scroll function when the text size has reached its lower limit and there is still more text to show.
    However, when I receive the filled-out forms back from other team members' iPads (they use my template created with Acrobat), the text doesn't shrink as in the PC version and the scroll function is disabled. I checked the template and the fields are correctly set to auto-size left, and they do work in my PC's and laptop's Adobe Reader. But the iPad version of Adobe Reader may cause some issues.
    Can anybody help with solutions?
    Thank you!
    Klaus

    The filled version of your PDF document (20140722 Daily Meeting Report...pdf) is no longer a PDF form because it has been flattened.
    Once an interactive PDF form (such as your template version) is flattened, all form fields are replaced with images of filled data.  You can no longer interact with form fields, edit form data, or tap/click any buttons in the flattened PDF document.  That is the reason why the text in auto-size text fields did not shrink.
    When you email a PDF document (including a PDF form) in Adobe Reader for iOS, the E-mail Document dialog is displayed.
    In this particular case, your team member must have selected "Share Flattened Copy".
    If you would like to keep the interactivity of a PDF form, you can select "Share Original Document".  Please advise your team members to select the "Share Original Document" option when emailing filled forms.
    Unfortunately, once flattened, a PDF document cannot be reverted back to the original "unflattened" state.  However, if your team members still have the original filled forms, they can resend the forms with the "Share Original Document" option.
    Please let us know if you have further questions.

  • Got error when sending message with big size

    Hello!
    I hope someone will be able to help. I am facing a size issue (I guess).
    The input file has a size of 58668 bytes. My program takes it and converts it to a TextMessage. When sending the message to the JMS queue, I get:
    javax.jms.JMSException: Failed to process message: Failed to add message=ID:f411d6c8-c1df-1004-8942-9b67f9629ee5, destination=QueueName3 (5)
            at org.exolab.jms.messagemgr.MessageMgr.add(MessageMgr.java:199)
            at org.exolab.jms.server.ServerSessionImpl.send(ServerSessionImpl.java:205)
            at org.exolab.jms.server.net.RemoteServerSession.send(RemoteServerSession.java:152)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at org.exolab.jms.net.orb.DefaultORB$Handler.invoke(DefaultORB.java:553)
            at org.exolab.jms.net.orb.DefaultORB$1.run(DefaultORB.java:511)
            at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(Unknown Source)
            at java.lang.Thread.run(Thread.java:595)
    This error is generated when doing:
    if (message instanceof TextMessage) {
        this.producer.send(message);
    }
    producer is a MessageProducer.
    I looked around and did not find any limitation on the send method :o(
    FYI I am using OpenJMS has JMS provider.
    Thx in advance

    Oh, I looked into the OpenJMS log (I could have done that from the beginning) and I saw the following message:
    17:00:11.005 ERROR [ORB-Worker-18] - Failed to process message
    javax.jms.JMSException: Failed to process message: Failed to add message=ID:f3e38ebc-c1df-1004-8ecc-3d4e4e499b90, destination=
    QueueName3 (5)
            at org.exolab.jms.messagemgr.MessageMgr.add(MessageMgr.java:199)
            at org.exolab.jms.server.ServerSessionImpl.send(ServerSessionImpl.java:205)
            at org.exolab.jms.server.net.RemoteServerSession.send(RemoteServerSession.java:152)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at org.exolab.jms.net.orb.DefaultORB$Handler.invoke(DefaultORB.java:553)
            at org.exolab.jms.net.orb.DefaultORB$1.run(DefaultORB.java:511)
            at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(Unknown Source)
            at java.lang.Thread.run(Thread.java:595)
    17:00:41.333 ERROR [ORB-Worker-18] - Failed to process message
    org.exolab.jms.persistence.PersistenceException:
    ERROR 22001: A truncation error was encountered trying to shrink LONG VARCHAR FOR BIT DATA 'XX-RESOLVE-XX' to length 32700.
            at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
            at org.apache.derby.iapi.types.SQLBinary.checkHostVariable(Unknown Source)
            at org.apache.derby.exe.ac601a400fx0110xbb05xf3e9xffff95b19c308.e0(Unknown Source)
            at org.apache.derby.impl.services.reflect.DirectCall.invoke(Unknown Source)
            at org.apache.derby.impl.sql.execute.RowResultSet.getNextRowCore(Unknown Source)
            at org.apache.derby.impl.sql.execute.NormalizeResultSet.getNextRowCore(Unknown Source)
            at org.apache.derby.impl.sql.execute.DMLWriteResultSet.getNextRowCore(Unknown Source)
            at org.apache.derby.impl.sql.execute.InsertResultSet.open(Unknown Source)
            at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
            ...
    From what I understand, my issue is related to OpenJMS. Can the length be increased?

  • How to increase instance size for ALBPM - Error: 'Max instance size exceeded'

    Hi,
    Can anybody help with how to increase the maximum instance size for:
    1. ALBPM Studio
    2. Runtime, i.e. the process administrator?
    Looking forward to your help.
    Cheers
    The exception in detail is:
    Error while persisting the transaction data: 'Max instance size exceeded.
    Current size is 33262, whereas the maximum size is 16384. This occurs with instance 'Process1' at activity 'StartExecution[Process1DownloadMessage]' of process '/Process1Download#Default-1.0''
    Details:
    Max instance size exceeded.
    Current size is 33262, whereas the maximum size is 16384. This occurs with instance 'Process1' at activity 'StartExecution[Process1DownloadMessage]' of process '/Process1Download#Default-1.0'
    fuego.server.exception.MaxInstanceSizeRuntimeException: Max instance size exceeded.
    Current size is 33262, whereas the maximum size is 16384. This occurs with instance 'Process1' at activity 'StartExecution[Process1DownloadMessage]' of process '/Process1Download#Default-1.0'      at fuego.server.ProcInst.getComponentData(ProcInst.java:792)      at fuego.server.ProcInst.mustStoreComponent(ProcInst.java:2777)      at fuego.server.persistence.jdbc.JdbcProcessInstancePersMgr.executeUpdateInstance(JdbcProcessInstancePersMgr.java:2870)      at fuego.server.persistence.jdbc.JdbcProcessInstancePersMgr.updateInstance(JdbcProcessInstancePersMgr.java:2272)      at fuego.server.persistence.Persistence.updateProcessInstance(Persistence.java:1008)      at fuego.server.execution.EngineExecutionContext.persistInstances(EngineExecutionContext.java:1819)      at fuego.server.execution.EngineExecutionContext.persist(EngineExecutionContext.java:1109)      at fuego.transaction.TransactionAction.beforeCompletion(TransactionAction.java:132)      at fuego.connector.ConnectorTransaction.beforeCompletion(ConnectorTransaction.java:685)      at fuego.connector.ConnectorTransaction.commit(ConnectorTransaction.java:368)      at fuego.transaction.TransactionAction.commit(TransactionAction.java:302)      at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:481)      at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551)      at fuego.transaction.TransactionAction.start(TransactionAction.java:212)      at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)      at fuego.server.execution.DefaultEngineExecution.executeAutomaticWork(DefaultEngineExecution.java:63)      at fuego.server.execution.EngineExecution.executeAutomaticWork(EngineExecution.java:42)      at fuego.server.execution.ToDoItem.executeAutomaticWork(ToDoItem.java:264)      at fuego.server.execution.ToDoItem.run(ToDoItem.java:559)      at fuego.component.ExecutionThread.processMessage(ExecutionThread.java:773)      at fuego.component.ExecutionThread.processBatch(ExecutionThread.java:753)      at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:142)      at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:134)      at fuego.fengine.ToDoQueueThread$PrincipalWrapper.processBatch(ToDoQueueThread.java:446)      at fuego.component.ExecutionThread.work(ExecutionThread.java:837)      at fuego.component.ExecutionThread.run(ExecutionThread.java:408)

    First take a look at your instance variables in your processes. Determine if some could be changed to be Separated instance variables. Once an instance variable's category changes from "Normal" to "Separated", it is not included in the instance size calculation.
    If you cannot mark variables as Separated, then in Studio's "Project Navigator" tab, right mouse click the name of your project -> click "Engine Preferences" -> with the "Engine" selected as the Category, click the "Advanced" tab on the upper right change the "Maximum Instance Size" to 64KB (4x the original 15kb value) and change the "Instances Cache" to 1250 (1/4th the original value).
    What version of Enterprise are you on (Standalone or WLS)? There is a similar setting on Enterprise, but it is slightly different between the two types of Enterprise Engines.
    Dan

  • C0111: Maximum script size exceeded.

    Not sure if this has anything to do with it, but I am working in the 12.6 PowerBuilder Classic 30-day demo.
    I just purchased 12.6 yesterday and am waiting for the email to install the live copy today.
    Is there any way to increase the maximum script size?
    This script has 1,783 lines of code.
    ---------- Compiler: Errors   (10:41:49 AM)
    reports.pbl(w_room_layout).pb_refresh.clicked.1780: Error       C0111: Maximum script size exceeded.
    reports.pbl(w_room_layout).pb_refresh.clicked.1782: Error       C0111: Maximum script size exceeded.
    reports.pbl(w_room_layout).pb_refresh.clicked.1784: Error       C0111: Maximum script size exceeded.
    reports.pbl(w_room_layout).pb_refresh.clicked.1784: Error       C0111: Maximum script size exceeded.
    ---------- Finished Errors   (10:41:49 AM)

    Actually, the size constraint is on the size of the compiled P-Code, which means you should probably watch for different things. Things that don't compile down to P-Code tokens (e.g. string constants, embedded SQL) should be reviewed carefully. If you have a lot of embedded SQL (a common trap for those new to PB), you might want to consider encapsulating that in a DataWindow, which will be more network-traffic-efficient at run time anyway.
    Good luck.

  • Mail size exceeded problem

    Hi,
    I have written an ABAP program to send a mail to Outlook using the FM SO_DOCUMENT_SEND_API1.
    As per the client requirement, I have to display the results in the body of the message in a tabular format,
    and I have used HTML tags in my ABAP program.
    Everything works fine, but whenever the output contains more data, i.e. more than 200 rows, the mail size exceeds 3.0 MB and the mail is not delivered to the user's mailbox because of its size.
    In our company, some users' mailboxes are restricted to 1 MB only.
    Is there any way to reduce the size and still send the same mail to them?
    Thanks in advance,
    Arunachalam S
    <removed by moderator>
    Edited by: Mike Pokraka on Jul 28, 2008 7:30 AM

    The only way will be to restrict the number of rows and then split the emails.
    Regards, IA

  • Trouble with Border Size when Printing

    I am having a great deal of difficulty with border size using PSE 9 and an Epson R800. I have the latest driver for Windows 7, and still all of my photos come out with a .25" border (or greater if a border is also checked in the Adobe settings) when "borderless" is left unchecked. I have tried all possible permutations of borderless with an added border, and changing the print size to trick the program, but cannot get the 3mm (.125") border I am accustomed to. Can anyone figure out what I am doing wrong? Is a quarter of an inch on a 4x6 print the standard border?
    Thanks in advance
    Gregg

    You must have been using iPhoto 6 the last time as that's not available any more. There are themes now with different border options but no continuously adjustable border width.
    TIP: For insurance against the iPhoto database corruption that many users have experienced I recommend making a backup copy of the Library6.iPhoto (iPhoto.Library for iPhoto 5 and earlier) database file and keep it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backup after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That insures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup and it's good insurance.
    I've created an Automator workflow application (requires Tiger or later), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. It's compatible with iPhoto 6 and 7 libraries and Tiger and Leopard. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.
    Note: There is now an Automator backup application for iPhoto 5 that will work with Tiger or Leopard.

  • Routeserver - java.lang.OutOfMemoryError: Requested array size exceeds VM limit

    Well,
    When I started trying to run the routeserver, I was using "false" in <param-name>long_ids</param-name> (web.xml). When I tried to use "true", an OutOfMemoryError was occurring. Now I know that "false" is wrong. So, I got a bit further...
    The error now is:
    09/02/03 15:25:29.547 web: Error initializing servlet
    java.lang.OutOfMemoryError: Requested array size exceeds VM limit
         at oracle.spatial.router.engine.NonBoundaryEdge.readNonBoundaryEdge(NonBoundaryEdge.java:74)
         at oracle.spatial.router.engine.Partition.readPartition(Partition.java:103)
         at oracle.spatial.router.engine.PartitionCache.loadPartitionFromDatabase(PartitionCache.java:286)
         at oracle.spatial.router.engine.PartitionCache.obtainPartitionReference(PartitionCache.java:244)
         at oracle.spatial.router.engine.Network.<init>(Network.java:77)
         at oracle.spatial.router.server.RouteServerImplementation.<init>(RouteServerImplementation.java:136)
         at oracle.spatial.router.server.RouteServerServlet.init(RouteServerServlet.java:299)
         at com.evermind.server.http.HttpApplication.loadServlet(HttpApplication.java:2379)
         at com.evermind.server.http.HttpApplication.findServlet(HttpApplication.java:4830)
         at com.evermind.server.http.HttpApplication.findServlet(HttpApplication.java:4754)
         at com.evermind.server.http.HttpApplication.initPreloadServlets(HttpApplication.java:4942)
         at com.evermind.server.http.HttpApplication.initDynamic(HttpApplication.java:1144)
         at com.evermind.server.http.HttpApplication.<init>(HttpApplication.java:741)
         at com.evermind.server.ApplicationStateRunning.getHttpApplication(ApplicationStateRunning.java:431)
         at com.evermind.server.Application.getHttpApplication(Application.java:586)
         at com.evermind.server.http.HttpSite$HttpApplicationRunTimeReference.createHttpApplicationFromReference(HttpSite.java:1987)
         at com.evermind.server.http.HttpSite$HttpApplicationRunTimeReference.<init>(HttpSite.java:1906)
         at com.evermind.server.http.HttpSite.initApplications(HttpSite.java:643)
         at com.evermind.server.http.HttpSite.setConfig(HttpSite.java:290)
         at com.evermind.server.http.HttpServer.setSites(HttpServer.java:270)
         at com.evermind.server.http.HttpServer.setConfig(HttpServer.java:177)
         at com.evermind.server.ApplicationServer.initializeHttp(ApplicationServer.java:2493)
         at com.evermind.server.ApplicationServer.setConfig(ApplicationServer.java:1042)
         at com.evermind.server.ApplicationServerLauncher.run(ApplicationServerLauncher.java:131)
         at java.lang.Thread.run(Thread.java:595)
    09/02/03 15:25:29.547 web: Error preloading servlet
    javax.servlet.ServletException: Error initializing servlet
         at com.evermind.server.http.HttpApplication.findServlet(HttpApplication.java:4857)
         at com.evermind.server.http.HttpApplication.findServlet(HttpApplication.java:4754)
         at com.evermind.server.http.HttpApplication.initPreloadServlets(HttpApplication.java:4942)
         at com.evermind.server.http.HttpApplication.initDynamic(HttpApplication.java:1144)
         at com.evermind.server.http.HttpApplication.<init>(HttpApplication.java:741)
         at com.evermind.server.ApplicationStateRunning.getHttpApplication(ApplicationStateRunning.java:431)
         at com.evermind.server.Application.getHttpApplication(Application.java:586)
         at com.evermind.server.http.HttpSite$HttpApplicationRunTimeReference.createHttpApplicationFromReference(HttpSite.java:1987)
         at com.evermind.server.http.HttpSite$HttpApplicationRunTimeReference.<init>(HttpSite.java:1906)
         at com.evermind.server.http.HttpSite.initApplications(HttpSite.java:643)
         at com.evermind.server.http.HttpSite.setConfig(HttpSite.java:290)
         at com.evermind.server.http.HttpServer.setSites(HttpServer.java:270)
         at com.evermind.server.http.HttpServer.setConfig(HttpServer.java:177)
         at com.evermind.server.ApplicationServer.initializeHttp(ApplicationServer.java:2493)
         at com.evermind.server.ApplicationServer.setConfig(ApplicationServer.java:1042)
         at com.evermind.server.ApplicationServerLauncher.run(ApplicationServerLauncher.java:131)
         at java.lang.Thread.run(Thread.java:595)
    09/02/03 15:25:29.547 web: 10.1.3.4.0 Started
    I start OC4J with:
    C:\Java\jdk1.5.0_16\bin>java -server -Xms1024m -Xmx1024m -XX:NewSize=512m -XX:MaxNewSize=512m -Dsun.rmi.dgc.server.gcInterval=3600000 -Dsun.rmi.dgc.client.gcInterval=3600000 -verbose:gc -jar c:\oc4j\j2eehome\oc4j.jar -config c:\oc4j\j2ee\home\config\server.xml
    My computer has 2 Gb of RAM, AMD Turion 64 Mobile 2.20 GHz
    Any ideas?
    Thanks a lot again!
    Regards,
    Daniel

    Well,
    I am using the Router from 11g. The web.xml from 10g does not have the long_ids parameter and servlet mapping... May I add the long_ids parameter in web.xml?
    Here we have a contract with Oracle, but I could NOT find anything about patches for the routeserver... I am downloading "Configuration Manager" to update my Metalink profile. Could you give me a tip about where the routeserver patches are?
    Thanks a lot,
    Daniel
    Edited by: user10788592 on 03/02/2009 12:09
