Sequence caching with Oracle

Hi,
In my application, I use TopLink as a JPA provider. For performance reasons, I defined an allocationSize on the sequence generator for some persistent objects. I read some documentation to understand how it works; the result of my research is that the allocationSize of the sequence generator must match the increment value of the sequence in the database.
So here is my question: what if my sequence defines a cache in the database? For a cache of 1000 values on the application (or server) side, I can create a sequence like this: create sequence myseq increment by 1000 cache 20.
Does it mean that 20 values will be cached, or 1000*20 values? Is the cache size defined per session or for all sessions of the database?
Thanks.
Will.

ok.... so if my sequence starts with 10000 and has a cache of 5, values from 11000 to 15000 will be cached, not values from 10000 to 100020?

"5" values will be cached. You will find the following example to be helpful.
SQL> create sequence myseq start with 10000 cache 5;
Sequence created.
SQL> select increment_by, cache_size, last_number from user_sequences where sequence_name = 'MYSEQ';
INCREMENT_BY CACHE_SIZE LAST_NUMBER
------------ ---------- -----------
           1          5       10000
SQL> select myseq.nextval from dual;
NEXTVAL
10000
SQL> /
NEXTVAL
10001
SQL> /
NEXTVAL
10002
SQL> /
NEXTVAL
10003
SQL> /
NEXTVAL
10004
SQL> select increment_by, cache_size, last_number from user_sequences where sequence_name = 'MYSEQ';
INCREMENT_BY CACHE_SIZE LAST_NUMBER
------------ ---------- -----------
           1          5       10005
SQL> select myseq.nextval from dual;
NEXTVAL
10005
SQL> select increment_by, cache_size, last_number from user_sequences where sequence_name = 'MYSEQ';
INCREMENT_BY CACHE_SIZE LAST_NUMBER
------------ ---------- -----------
           1          5       10010
SQL>
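To answer the increment-by-1000 variant directly, here is a hypothetical follow-on experiment (sequence name made up): CACHE counts values, not the numeric range, so INCREMENT BY 1000 CACHE 20 caches 20 NEXTVAL results spanning a range of 20000. The cache lives in the shared pool, so it is shared by all sessions of the instance rather than being defined per session.
SQL> create sequence myseq2 increment by 1000 cache 20;
SQL> select myseq2.nextval from dual;
SQL> select last_number from user_sequences where sequence_name = 'MYSEQ2';
After the first NEXTVAL (which returns 1), LAST_NUMBER should read 20001, i.e. 1 + 20*1000: twenty values were cached, not 1000*20.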
Asif Momen
http://momendba.blogspot.com

Similar Messages

  • Sequence caching with TopLink and Oracle caching

    Hi,
    In my application, I use TopLink as a JPA provider. For performance reasons, I defined an allocationSize on the sequence generator for some persistent objects. I read some documentation to understand how it works; the result of my research is that the allocationSize of the sequence generator must match the increment value of the sequence in the database.
    So if I want to cache 1000 values, I must define on my sequence generator an allocationSize of 1000 and create a sequence like this: create sequence myseq increment by 1000. Am I right?
    Second question: what if my sequence defines a cache in the database? For example, create sequence myseq increment by 1000 cache 20.
    Does it mean that 20 values will be cached, or 1000*20 values? Is the cache size defined per session or for all sessions of the database?
    Thanks.
    Will.

    Thank you for your response.
    I used the default parameter of TopLink (increment by 50) and I left a cache size of 20 on the database side.
    But I noticed that whenever the server restarts, I "lose" between 900 and 1000 ids (nearly 20*50). So even if the sequence cache on the Oracle side is non-transactional, I think there is a kind of prefetch somewhere on the server side. I should not lose more than 50 ids when the server restarts.
    If I remove the cache on the database side, then the behavior is correct (the expected one, to be more precise).
    Is there any kind of sequence caching in the Oracle JDBC drivers?
    Thanks.
    Will.
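    A minimal sketch of the database-side check and fix being discussed (sequence name hypothetical); with NOCACHE, a server restart can only lose the ids still unassigned from the current allocationSize block, i.e. at most 50 here:
    select sequence_name, cache_size from user_sequences where sequence_name = 'MYSEQ';
    alter sequence myseq nocache;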

  • Purge Cache with Oracle BI Event tables doesn't work

    Hi,
    I want to purge the BI Server cache using an Oracle BI event table.
    I created a table like:
    create table BISE_UPDATE_EVENTS (
    UPDATE_TYPE INTEGER default 1 not null,
    UPDATE_TIME DATE default SYSDATE not null,
    DB_NAME VARCHAR2(40 BYTE),
    CATALOG_NAME VARCHAR2(40 BYTE),
    SCHEMA_NAME VARCHAR2(40 BYTE),
    TABLE_NAME VARCHAR2(40 BYTE) not null,
    OTHER VARCHAR2(80 BYTE)
    );
    and defined it in the Administration Tool as an event table.
    I insert data into it:
    INSERT INTO BISE_update_events
    (db_name, catalog_name, schema_name, table_name)
    VALUES (NULL, NULL, 'PSLID_DT', 'DI_LI_MANDANT');
    COMMIT;
    But purging the cache doesn't work.
    I got error messages in NQServer.log:
    2010-02-19 11:49:56
    [55004] The prepare operation failed while polling from table BISE_UPDATE_EVENTS.
    2010-02-19 11:49:56
    [nQSError: 22006] Repository metadata: missing column object: ID=5111903:5046337.
    2010-02-19 11:49:56
    [55005] The cache polling delete statement failed for table BISE_UPDATE_EVENTS.
    Can somebody help?
    Regards Christian

    From the Manual :-
    SchemaName
    The name of the schema where the physical table that was updated resides.
    Populate the SchemaName column only if the event table does not reside in the same database as the physical tables being updated. Otherwise, set it to the null value.
    TableName
    The name of the physical table that was updated. The name has to match the name defined for the table in the Physical layer of the Administration Tool.
    Values cannot be null.
    Can you check whether you really need to populate the schema name, and also whether the table name defined in the RPD is the same as the one in the insert statement? Finally, the user used in the connection pool should have delete rights on your polling table.
    hope this helps
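    Following the manual excerpt above, a hypothetical corrected insert for this case, assuming the event table resides in the same database as the physical table (so SCHEMA_NAME is left null) and that 'DI_LI_MANDANT' exactly matches the Physical layer table name:
    INSERT INTO BISE_UPDATE_EVENTS
    (update_type, update_time, db_name, catalog_name, schema_name, table_name)
    VALUES (1, SYSDATE, NULL, NULL, NULL, 'DI_LI_MANDANT');
    COMMIT;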

  • Using Parallelism, Cache with Oracle Applications Core Tables

    Hi all,
    I want to know if I can put some tables in parallel and some tables in cache in an Oracle Applications environment without any problem. Is the procedure to change these tables the same as on a standalone database? Just using commands like "ALTER TABLE table_name PARALLEL......"?
    Do I need to change anything at the Applications level after the changes I make on these tables?
    Tks,
    Paulo
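    For reference, a minimal sketch of the DDL in question (table name hypothetical); both clauses are standard DDL and are issued exactly as on a standalone database:
    ALTER TABLE xx_some_table PARALLEL 4;
    ALTER TABLE xx_some_table CACHE;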

    You can cache read-only PL/SQL stored procedures in the DB Cache. I'm not sure about db built-in packages, but if they are read-only, they should be ok.
    All DB Cache management functionality is available from DBA Studio. You can also use the supplied dbms_icache PL/SQL package to manage the cache. Refer to the DB Cache Concepts & Admin Guide for details.
    DB Cache is strictly a cache for read-only queries. All updates are passed to the origin db.

  • Oracle 10g sequence cache aging

    I have been reading some about how sequences work and why you can end up with gaps in the sequence numbers. It is my understanding that you could 'lose' certain sequence numbers when the library cache ages/expires. What I can't find is where this cache is configured. Where do you define when the cache will expire and thus clear out? Seems like it's happening very quickly in one of our databases, but much more slowly in another. Don't know where to look for this setting. Can anyone help point me in the right direction? Thanks!

    - The size of the sequence cache is driven by the CACHE parameter (default 20) when you create the sequence
    - Oracle manages the library cache automatically-- there are no parameters to set here for the timeout. When a particular object is aged out is going to depend on, among other things, the size of the library cache, the frequency a particular object is used, and the number of other objects competing for space in the library cache. If one sequence is constantly being used, it will stay in the library cache much longer than the cache for a less frequently used sequence. If one database's library cache is under pressure because you're constantly loading new objects, cached objects will be aged out far more quickly than in a database where the library cache is not under pressure.
    Justin
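    To illustrate the one knob you do control, a small sketch (sequence name hypothetical); a larger CACHE means fewer data dictionary updates, at the cost of a bigger potential gap when the cached entry is aged out or the instance restarts:
    create sequence myseq cache 20;   -- 20 is the default
    alter sequence myseq cache 1000;  -- bigger cache, bigger possible gap
    select sequence_name, cache_size from user_sequences;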

  • ADF application integrating with Oracle Web Cache

    Hello,
    I am trying to integrate my ADF 11g application with Oracle Web Cache. I used this link http://andrejusb.blogspot.com/2010/06/oracle-webtier-11g-configuration-for.html for it.
    I am able to access my ADF application using the webcache port 7785.
    I created a few caching rules in Oracle Web Cache, and in the popular requests section of Oracle Web Cache I see jpg, png and other image files cached.
    But the issue is when the application accesses images like /testapp/test/images/abc.jpg?_adf.ctrl-state=5b0s7lzfo_29 . I created a caching rule with the regular expression ^/testapp/test/images/[A-Za-z0-9_]*\.(gif|jpeg|png|jpg)\?_adf\.ctrl-state=[A-Za-z0-9_]*$.
    But when I access the popular requests in EM I don't see the URL given above as cached. The caching reason it specifies is "URL contains query string".
    I am not sure if I need to do anything additional to cache these URLs as well.
    Thanks!
    Ram

    Yes, that works. But my question is how to cache the URLs which have a query string. I was trying to give a regular expression to match the URL, so that URLs which contain parameters like _afrLoop, which changes with each HTTP request, can also be cached.

  • Using EclipseLink or JPA with Oracle Coherence as shared cache

    Hi,
    I expect to use EclipseLink with Oracle Coherence as a shared cache, mainly with/for entity caching.
    * JPA)
    For JPA queries, are the JPA semantics fully preserved when using Oracle Coherence ?
    * Non-JPA)
    Do the non-JPA APIs also take advantage of Oracle Coherence ?
    For these non-JPA queries, is the semantics fully preserved when using Oracle Coherence ?
    Regards,
    Dominique
    PS: I hope this is the right forum to ask my questions.
    Otherwise, could you tell me what is the right forum ?
    Thanks.

    Yes, JPA semantics are fully preserved when using TopLink-Grid to cache entities within Coherence.
    Yes, the native EclipseLink APIs can also take advantage of TopLink-Grid and Coherence.
    I am not sure what you mean by "For these non-JPA queries, is the semantics fully preserved when using Oracle Coherence ?" Perhaps you can provide an example of what you are looking for?
    --Gordon

  • Oracle Portal with Oracle Web Cache

    How can I configure Oracle Portal and Oracle Web Cache to work together?
    I have installed and started both of them. Now Web Cache works fine with standard HTML pages (for example the Apache docs), but after trying to reach Portal, the URL is redirected to the standard port on which Portal runs (Web Cache works on 1100 and Portal on 7778).
    So it seems that portal pages are not cached.
    Web Cache:
    Application Web Servers:
    hpwin 7778 100 30 5 / 10
    Where can I find any information about that?
    best regards
    KJ

    Portal PM, I thought you said that this solution would not work. Everyone I have heard from so far has said that you cannot configure WebCache to work with Portal documents. Does this work or does it not?
    Read the messages below from the bottom up.
    It's not possible to do that today.
    In 9.0.2 (the next release) we will be using WebCache configured out of the box so you won't need to; however, till then you cannot use WebCache with Portal 3.0.x.
    Thanks
    Portal PM
    Title : re:Oracle Portal with Oracle Web Cache O/S : N/A POST: REPLY (W/QUOTE)
    Author : Agus Jaelani Type : Question
    Date : Feb 4, 2002 23:11 PT
    please try using this configuration in httpd.conf:
    Port <webcache port>
    Listen <apache listen port>
    ServerName <webcache hostname>
    You must run the ssodatan script again.
    Note:
    this configuration only makes Portal work together with WebCache.
    Title : re:Oracle Portal with Oracle Web Cache O/S : N/A POST: REPLY (W/QUOTE)
    Author : Krzysztof Jungowski Type : Question
    Date : Feb 5, 2002 06:58 PT
    Thanks a lot. It works.
    best regards
    Krzysztof Jungowski

  • Create Sequence Number with Select Query

    Hi All,
    I would like to create a sequence in Oracle, but instead of hard coding the "start with" I want to select the max value of the primary key of a table, add 1, and use that instead:
    So what I want is:
    CREATE SEQUENCE crg_mrec_seq
    MINVALUE 1
    MAXVALUE 999999999999999999999999999
    START WITH select max(primarykey)+1 from table1
    INCREMENT BY 1
    CACHE 20;
    I'm guessing I need to pass this max value as a variable into the create sequence statement, but I'm not sure what syntax to use.
    Thanks,
    Ed

    spalato76 wrote:
    Hi All,
    I would like to create a sequence in Oracle, but instead of hard coding the "start with" I want to select the max value of the primary key of a table, add 1, and use that instead:
    So what I want is:
    CREATE SEQUENCE crg_mrec_seq
    MINVALUE 1
    MAXVALUE 999999999999999999999999999
    START WITH select max(primarykey)+1 from table1
    INCREMENT BY 1
    CACHE 20;
    I'm guessing I need to pass this max value as a variable into the create sequence statement, but I'm not sure what syntax to use.
    Thanks,
    Ed
    Construct the SQL statement & then EXECUTE IMMEDIATE.
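    A minimal sketch of that approach, using the names from the question (NVL guards the empty-table case):
    DECLARE
      v_start NUMBER;
    BEGIN
      SELECT NVL(MAX(primarykey), 0) + 1 INTO v_start FROM table1;
      EXECUTE IMMEDIATE
        'CREATE SEQUENCE crg_mrec_seq MINVALUE 1 START WITH ' || v_start ||
        ' INCREMENT BY 1 CACHE 20';
    END;
    /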

  • OIM 9.1.0.1870.1 with Oracle database 11.1.0.7 - Hung threads

    Has anyone seen issues like hung threads using OIM 9.1.0 with Oracle Database 11.1.0.7?

    Thanks guys for the quick responses.
    But I have tried both options: putting the jar in ThirdParty and also uploading it using the OOTB utility UploadJar.sh.
    But it is giving the same error.
    I have tried bouncing the server and also purging the cache, but no success.
    Just to mention again, I have tried all of the last 3 Postgres JDBC4 drivers available on the site (you mentioned).
    So, any other clue?
    Thanks in advance.

  • Error while installing CRM 4.0 SR1 with Oracle 10.2 on Windows 2000 Server

    Hi.. I'm getting error messages in the Database Instance phase while installing <b>CRM 4.0 SR1 with Oracle 10.2</b> on Windows 2000 Server. Following are message snippets, with the last few lines showing the error message:
    <b>SAPSSEXC.log</b>
    [code](DB) INFO: REPOSRC~0 created
    (DB) INFO: REPOTEXT created
    DbSl Trace: ORA-4031 occured when executing SQL statement (parse error offset = 0)
    (IMP) ERROR: DbSlExeModify/DbSlLobPutPiece failed
      rc = 99, table "REPOTEXT"
      (SQL error 4031)
      error message returned by DbSl:
    ORA-04031: unable to allocate 84 bytes of shared memory ("shared pool","select name,intcol#,segcol#,...","Typecheck","opndef:qkexrAddMatching1")
    (DB) INFO: disconnected from DB
    C:\usr\sap\CRM\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    C:\usr\sap\CRM\SYS\exe\run/R3load.exe: END OF LOG: 20121102144208
    [/code]
    <b>SAPAPPL2.log</b>
    [code](DB) INFO: SMOT413~0 created
    (DB) INFO: SMOT413~R04 created
    DbSl Trace: ORA-604 occured when executing SQL statement (parse error offset = 31)
    (DB) ERROR: DDL statement failed
    (CREATE  INDEX "SMOT413~R05" ON "SMOT413" ( "MANDT" , "ERSKZ"  ) TABLESPACE PSAPCRM STORAGE (INITIAL 16384 NEXT 0000000040K MINEXTENTS 0000000001 MAXEXTENTS 2147483645 PCTINCREASE 0 ) )
    DbSlExecute: rc = 99
      (SQL error 604)
      error message returned by DbSl:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-04031: unable to allocate 4108 bytes of shared memory ("shared pool","select owner#,name,namespace...","Typecheck","seg:kggfaAllocSeg")
    (DB) INFO: disconnected from DB
    C:\usr\sap\CRM\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    C:\usr\sap\CRM\SYS\exe\run/R3load.exe: END OF LOG: 20121102144207
    [/code]
    <b>SAPSLEXC.log</b>
    [code]DbSl Trace: ORA-1403 when accessing table SAPUSER
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): WE8DEC
    (DB) INFO: CO2MAP created
    (IMP) INFO: import of CO2MAP completed (0 rows)
    (DB) INFO: CO2MAP~0 created
    (DB) INFO: CO2MAP~IDX created
    DbSl Trace: ORA-604 occured when executing SQL statement (parse error offset = 0)
    (DB) ERROR: DDL statement failed
    (CREATE TABLE "CO2MAPINF" ( "SRCID" VARCHAR2(30) DEFAULT ' ' NOT NULL , "METHID" VARCHAR2(1) DEFAULT ' ' NOT NULL , "APPLNAME" VARCHAR2(30) DEFAULT ' ' NOT NULL , "PAGEKEY" VARCHAR2(70) DEFAULT ' ' NOT NULL , "ID" NUMBER(10) DEFAULT 0 NOT NULL , "LANGUAGE" VARCHAR2(1) DEFAULT ' ' NOT NULL , "NAMESPACE" VARCHAR2(30) DEFAULT ' ' NOT NULL  ) TABLESPACE PSAPCRM620 STORAGE (INITIAL 16384 NEXT 0000000640K MINEXTENTS 0000000001 MAXEXTENTS 2147483645 PCTINCREASE 0 ) )
    DbSlExecute: rc = 99
      (SQL error 604)
      error message returned by DbSl:
    ORA-00604: error occurred at recursive SQL level 2
    ORA-04031: unable to allocate 84 bytes of shared memory ("shared pool","select ts#,file#,block#,nvl(...","Typecheck","opndef:qkexrAddMatching1")
    (DB) INFO: disconnected from DB
    C:\usr\sap\CRM\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    C:\usr\sap\CRM\SYS\exe\run/R3load.exe: END OF LOG: 20121102144742
    [/code]
    <b>SAPSDIC.log</b>
    [code](DB) INFO: TNMAP~0 created
    (DB) INFO: TODIR created
    DbSl Trace: ORA-604 occured when executing SQL statement (parse error offset = 0)
    (IMP) ERROR: DbSlExeModify/DbSlLobPutPiece failed
      rc = 99, table "TODIR"
      (SQL error 604)
      error message returned by DbSl:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-04031: unable to allocate 4108 bytes of shared memory ("shared pool","update seg$ set type#=:4,blo...","Typecheck","seg:kggfaAllocSeg")
    (DB) INFO: disconnected from DB
    C:\usr\sap\CRM\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    C:\usr\sap\CRM\SYS\exe\run/R3load.exe: END OF LOG: 20121102144811
    [/code]
    <b>SAPSDOCU.log</b>
    [code](DB) INFO: TLSY3~0 created
    (DB) INFO: TLSY5 created
    (IMP) INFO: import of TLSY5 completed (3 rows)
    DbSl Trace: ORA-604 occured when executing SQL statement (parse error offset = 33)
    (DB) ERROR: DDL statement failed
    (CREATE UNIQUE INDEX "TLSY5~0" ON "TLSY5" ( "LANGU", "REL", "OLD_REL", "IMP_DATE" ) TABLESPACE PSAPCRM STORAGE (INITIAL 16384 NEXT 0000000080K MINEXTENTS 0000000001 MAXEXTENTS 2147483645 PCTINCREASE 0 ) )
    DbSlExecute: rc = 99
      (SQL error 604)
      error message returned by DbSl:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-04031: unable to allocate 4108 bytes of shared memory ("shared pool","update seg$ set type#=:4,blo...","Typecheck","seg:kggfaAllocSeg")
    (DB) INFO: disconnected from DB
    C:\usr\sap\CRM\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    C:\usr\sap\CRM\SYS\exe\run/R3load.exe: END OF LOG: 20121102144811
    [/code]
    I am not sure but I think the problem is with the allocated shared memory size, which right now is:
    <b>init.ora</b>
    [code]shared_pool_size = 67108864[/code]
    <b>init<SID>.ora</b>
    [code]shared_pool_size = 90112000
    shared_pool_reserved_size = 9011200
    [/code]
    If this is the cause of the problem, then how much should I increase it to? And do I need to modify the size in both files or only in init<SID>.ora? Is there any SAP Note related to this problem? Thanks.
    Regards,

    Hi Vasu,
    ORA-4031... in 99% of cases this means one thing:
    the database instance is not configured correctly.
    To proceed, do the following:
    1. Get rid of init.ora (it's not used at all).
    2. Make sure that the init<sid>.ora file is really used.
       If there is a spfile in the %ORACLE_HOME%\database folder
       then that will be used instead.
    3. With your configured shared pool of 85MB, your memory should really not be filled up - anyhow, it's really too small for an SAP system!
    Thus I'd check how much memory the other SGA components (buffer cache, large pool, streams pool, java pool...) take up.
    Remember that on Windows 32-bit, everything runs in one single process that can only allocate 2GB (or 3GB if you've set that option).
    See note:
    <a href="https://websmp102.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=869006&_NLANG=E">#869006 - Composite SAP note: ORA-04031</a>
    for more on this.
    And when you're setting up parameters anyway, check note
    <a href="https://websmp102.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=830576&_NLANG=E">#830576 - Parameter recommendations for Oracle 10g</a>
    KR Lars
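    As a quick check along those lines, a small sketch for seeing where the SGA actually goes (standard dynamic views, available in Oracle 10g):
    SELECT pool, ROUND(SUM(bytes)/1024/1024) AS mb
    FROM v$sgastat
    GROUP BY pool;
    SHOW PARAMETER shared_pool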

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load an Oracle Text index into memory: http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application (Oracle 12c), we are indexing a big XML field (which is stored as XMLType with securefile storage) with the PATH_SECTION_GROUP. If I don't load the $I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks from the TOKEN_INFO column are not in memory, then performance can fall sharply).
    But after migrating to Oracle 12c, I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory. I am not sure whether this is a bug or not.
    What I found as a work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the token_info column will be stored in a secure file. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it will be loaded in the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and to load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (S table) and an IOT on top of it. This is not documented in the white paper (the white paper was written for Oracle 10g). In my case this DR$ S table is much used, and the IOT also, but putting it in the keep cache is not as important as the token_info column of the DR I table. A final note: doing SEPARATE_OFFSETS = 'YES' was very bad in my case, the combined size of the two columns is much bigger than having only the TOKEN_INFO column and both columns are read.
    Here is an example of how to reproduce the problem with the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
    (ID NUMBER(9,0) NOT NULL ENABLE,
    XML_DATA XMLTYPE)
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576KB to 8.44MB. With a big index the difference is not so big, but still from 14G to 19G.
    6. Workaround: use the BIG_IO option, so that the token_info column of the DR$ I table will be stored in a secure file and the size will stay relatively small. Then you can load this column in the cache using a procedure similar to the following:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the lob so that it will be loaded in the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);  -- DBMS_LOB.READ on a BLOB needs a RAW buffer, not varchar2
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               -- touch a few bytes of every 4K chunk so each block of the
               -- LOB is read into the keep buffer pool
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               -- DBMS_LOB.READ raises NO_DATA_FOUND past the end of the LOB
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
        close c2;
    end;
    /
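    Usage of the procedure above (serveroutput enabled so the chunk counts are visible):
    set serveroutput on
    exec loadTokenInfo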
    Rgds, Pierre

    I have been working a lot on that issue recently, I can give some more info.
    First I totally agree with you, I don't like to use the keep_pool and I would love to avoid it. On the other hand, we have a specific use case : 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer facing application that uses the text index to search the database : performance is critical for them.
    What kind of performance do you have with your application ?
    In my case, I have learned the hard way that having the index in memory (the DR$I table in fact) is the key: if it is not, then performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do. With MongoDB they explicitly say that the index must be in memory. With Elasticsearch, they use JVMs that are also in memory. And effectively, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is continuously done.
    I think that the algorithm used by Oracle to keep blocks in cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" functionality, the in-memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy that this was closed as not a bug, but it was, and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the guys who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
    And for that, the track I have been following is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is to pin the index in memory, because the trick of R. Ford was not working anymore.
    What worked:
    First set the keep_pool to zero and set the db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard block size of 16K.
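    A hypothetical sketch of that setup (names and sizes are placeholders; the CREATE TABLESPACE assumes Oracle Managed Files for the datafile):
    alter system set db_keep_cache_size = 0 scope=both;
    alter system set db_16k_cache_size = 4G scope=both;
    create tablespace ctx_16k datafile size 10g blocksize 16k;
    exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace ctx_16k storage (initial 64K)');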
    Then comes the tricky part: the pre-loading of the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA!!! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; event 10949 is set to avoid the direct path read issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. Much relieved, I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is pretty much exactly described in Metalink note 1645634.1, but for a non-partitioned index. The work-around given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (this enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a work-around, but I did not find one on Metalink.
    Other points of attention with text index creation (stuff that surprised me at first!):
    - If you use the dbms_pclxutil package, then the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
    - This, in combination with the fact that on a RAC you may see no activity at all on your box, can be very frightening: Oracle can choose to start the workers on the other node.
    I understand much better now how text indexing works; I think it is a great technology which can scale via partitioning. But as always, the design of the application is crucial: most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre

  • Integrate Business Activity Monitoring (BAM) with Oracle Forms Recognition

    Hi All,
    As per project requirements, I have to integrate Business Activity Monitoring (BAM) with Oracle Forms Recognition.
    Does anyone have an idea how this can be achieved from OFR Verifier?
    Thanks,
    Moumi.

    Hi All,
    Apart from my previous queries, I found that there is a sample reporting program: an Oracle Application Express application that has been developed and tested using version 4.1 of APEX.
    To access the OFR tables, I ran the scripts below, in this sequence, in my local IPM environment, as found in this link: http://workplacedba.blogspot.in/2012/11/ofr-odc-installation.html
    Seq 1 - 01-OFR-AP-Tables_Oracle.sql
    Seq 2 - 02-OFR-AP-Reporting_Oracle.sql
    Seq 3 - 03-XX_ROUND_IT.fnc
    Seq 4 - 04-XX_ROUNDDOWN.fnc
    Seq 5 - 05-XX_ROUNDUP.fnc
    Seq 6 - 06-OFR-AP-EBS-Views.sql
    Seq 7 - 07-INVOICE_NUMBER_FORMATS_INS.sql
    Seq 8 - 08-Insert Into Company.sql
    I couldn't find the below scripts in my installables:
    Seq 3 - 03-XX_ROUND_IT.fnc
    Seq 4 - 04-XX_ROUNDDOWN.fnc
    Seq 5 - 05-XX_ROUNDUP.fnc
    Seq 7 - 07-INVOICE_NUMBER_FORMATS_INS.sql
    Seq 8 - 08-Insert Into Company.sql
    Can anyone provide me with these SQL scripts?
    Thanks,
    Moumi

  • Correlation Problem with Oracle BPM 2.1.2

    I have developed a BPEL process that invokes Axis web services and then waits for an asynchronous message from the services, using pick with an onMessage tag. Previously I did this with Oracle BPM 2.0.10 and Designer 0.9.5, and it worked well. But now I am using Oracle BPM 2.1.2 and Designer 0.9.10, and my Axis web services cannot send any asynchronous message to the process. What's the problem? Must I set some configuration in BPM 2.1.2 or change my original BPEL code?
    This is a fragment of my BPEL code.
              <invoke name="invokeSubscriptionProxy" partnerLink="subcriptionProxy" operation="subscribeToProxy" inputVariable="input4SubscriptionProxy" outputVariable="output4SubscriptioProxy" portType="nsxml1:NotificationProxyPortType">
                   <correlations>
                        <correlation set="correlationInteger" initiate="no" pattern="out"/>
                   </correlations>
              </invoke>
              <pick name="pick4Subscribe2">
                   <onMessage partnerLink="client" portType="tns:TestOrchestratingGeneratedGramProxy" operation="deliverNotificationFromProxy" variable="deliveredNotificationMessage">
                        <correlations>
                             <correlation set="correlationInteger" initiate="no"/>
                        </correlations>
                        <sequence>
                             <empty name="empty-2"/>
                        </sequence>
                   </onMessage>
                   <onAlarm for="'PT1H'">
                        <sequence>
                             <empty name="empty-2"/>
                        </sequence>
                   </onAlarm>
              </pick>
    When I edit this code with Designer 0.9.10, it shows no errors. I tested my process by viewing the BPEL console.
    Please help me to correct this problem.
    PS. I am sure that my Axis web services work well.

    Hi,
    "my Axis web services can not send any asynchronous message to the process" --&gt; means when you try to invoke an Axis Web services from a BPEL process, there are no callback message from the web services, right? Did you see any exception from either the console or BPEL server DOS window?
    John

  • Some problems with Oracle and XA in WLS 6.1

    Hi,
    I am using WLS6.1 SP4 with Oracle Thin driver 8.1.7 and TxDataSources with ConnectionPools
    using XA.
    I am getting the following error:
    java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
    ORA-01000: maximum open cursors exceeded
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:168)
         at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:208)
         at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:543)
         at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1405)
         at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:822)
         at oracle.jdbc.driver.OracleStatement.doExecuteQuery(OracleStatement.java:1657)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1870)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:363)
         at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:314)
         at weblogic.jdbc.jta.PreparedStatement.executeQuery(PreparedStatement.java:69)
         at weblogic.jdbc.rmi.internal.PreparedStatementImpl.executeQuery(PreparedStatementImpl.java:56)
         at weblogic.jdbc.rmi.SerialPreparedStatement.executeQuery(SerialPreparedStatement.java:42)
         at delta.beans.common.DBQueriesRemesa.getSecuencia(DBQueriesRemesa.java:55)
         at delta.beans.remesas.ImportarRemesa2.remesaRATSB(ImportarRemesa2.java:858)
         at delta.beans.remesas.ImportarRemesa2.execute(ImportarRemesa2.java:122)
         at java.lang.reflect.Method.invoke(Native Method)
         at delta.servlet.contrl.service(contrl.java:161)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:262)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:21)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:27)
         at delta.servlet.VolverFiltro.doFilter(VolverFiltro.java:101)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:27)
         at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:2643)
         at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2359)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    I have read about this issue in the newsgroup and I know it seems to be a bug
    in the Oracle driver.
    So, as a workaround, I'm trying to use Oracle's prepared statement cache feature
    in order to avoid creating new statements for each query. I noticed that the
    WebLogic prepared statement cache does not work properly: I set a value
    of 10, but when inspecting with WLShell it doesn't show any hits in the statement cache.
    In order to use Oracle's prepared statement cache I have to enable it on the physical
    connection, but I can't find in weblogic.jar the class weblogic.jdbc.extensions.WLConnection,
    which has the method getVendorConnection. In the documentation available at http://edocs.beasys.com/wls/docs61/javadocs/index.html
    there is a reference to that class, but there is no notice of the service
    pack in which it became available.
    So, is there any way to obtain the physical connection?
    Can anybody help me with any of these problems?
    Can anybody help me about any of these problems??
    Thanks in advance.
    Jesús.

    The line of code that brings up this exception is:
    multiReq = new MultipartRequest((ServletRequest)request,
    "c:\\temp", 10485760);
    Try this:
    MultipartRequest multiReq = new MultipartRequest((ServletRequest)request,
    "c:\\temp", 10485760);
    (not sure if that is the problem or not..)
