Portal and Database Cache

All,
Any pointers on using Portal with Database Cache? Can we identify Portal tables as 'hot' so that Portal page and content area rendering will be faster? I was told that Portal and iCache have different approaches to caching. What if I turned off Portal caching and let iCache do it for me?
Would appreciate any response.
Thanks
Sanjay

The caching Portal provides isn't a product. It's an optimization of the architecture: basically, storing information wherever we can so it doesn't have to be regenerated.

Similar Messages

  • Install Portal and Database with modem turned off?

    I am about to install Oracle Portal 9iAS with an Oracle 8i database on a Windows 2000 PC. I have a cable internet connection that uses DHCP to dynamically assign IP and DNS addresses. Would DHCP be a problem? Should I perform the installation with the modem turned off?
    Thank you very much,
    Melissa

    I installed Portal and the database on my machine at home, which has both a public and a private IP, and there were NO problems doing it with the modem on and connected.

  • Is there a difference between Web and Database Cache

    What, if any, is the difference between Web Cache and Database Cache?

    There is good documentation in the form of PDF files at the following general URL:
    http://otn.oracle.com/products/ias
    and specifically at:
    http://technet.oracle.com/docs/products/ias/doc_index.htm
    I have read them all, and most things are documented in great detail.
    I'm trying to get the Web Cache working, but it core dumps immediately after I try to start it. No install errors were reported, and everything else I have tested seems to work. If you get it working, could you please send me a note about anything special you did? I really need the Web Cache to work.

  • Oracle Portal & 9iAS Database Cache

    Has anyone used the Oracle 9iAS Database Cache to speed up document retrieval from Portal? Can it be used to cache Portal documents remotely?
    Any help greatly appreciated

    From here:
    The In-Memory Database Cache option of Oracle Database Enterprise Edition is based on Oracle TimesTen In-Memory Database. TimesTen is also available for 10g.

  • Portal and Database Links

    We are running Portal 3.0.9.8.0. I am wondering if anyone has come across a way to disable the ability of users to create database links.

    Hi,
    You can do it only by revoking the "create database link" or "create public database link" privilege from the database schema to which the portal user is connected.
    For example, if there is a user "erin" mapped to the database schema portal30, go to the "Administer Database" tab and edit the schema using the "Schemas" portlet. In the Roles tab, remove the "create public database link" or "create database link" privilege.
    This might help, but be careful not to remove the privilege from the portal schema, unless you do not want to create any database links as the portal schema.
    Thanks,
    Sharmila
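
    For reference, a minimal sketch of the equivalent SQL revoke issued over JDBC (the UI steps above do the same thing through the Schemas portlet); the connect string, DBA credentials and the some_user_schema name are placeholders, not values from this thread:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RevokeDbLinkPrivilege {
        public static void main(String[] args) throws Exception {
            // Connect as a DBA-level user; requires the Oracle JDBC driver on the classpath.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbhost:1521:ORCL", "system", "change_me");
                 Statement stmt = conn.createStatement()) {
                // Same effect as removing the privileges in the Roles tab of the Schemas portlet.
                stmt.execute("REVOKE CREATE DATABASE LINK FROM some_user_schema");
                stmt.execute("REVOKE CREATE PUBLIC DATABASE LINK FROM some_user_schema");
            }
        }
    }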

  • Forms/Reports: Role of the Database cache and Web cache

    Hello oracle experts,
    I am running a purely Forms and Reports based environment (9iAS).
    My questions are:
    a. Is it possible to use features of the Web Cache and Database Cache to boost the performance of my applications?
    b. Are all components monitorable from OEM?
    Please guide me so that I can configure my OEM to monitor my Forms and Reports services.
    thanks in advance for your reply
    Kind regards
    Yogeeraj

    Hi BradW,
    The way this is supposed to be done in Web Cache is by keeping separate copies of a cached page for the different types of browsers, distinguished by the User-Agent header.
    In the case of a cache miss, Web Cache expects the origin server to return the appropriate version of the page based on the browser type, and the page from the origin server is simply forwarded back to the browser.
    If the page is cacheable, Web Cache then retains a separate copy for each User-Agent header value.
    When there is a hit on this cached page, Web Cache returns the version of the page whose User-Agent header matches the request.
    Check out the configuration screen titled "Header Association" for this feature.
    As for forwarding requests to different origin servers based on the User-Agent header value, Web Cache does not have that capability.

  • Database Cache and Gateways

    We are considering using Oracle Database Cache to speed up a web application that reads data from a mainframe database. Since Database Cache only works with Oracle databases, we plan to create a new Oracle database that acts as a gateway (using ODBC) to the mainframe.
    The question we have is whether this would be possible and would make sense from a performance perspective. It is possible that Database Cache makes clever use of Oracle metadata (table update timestamps, etc) which would not be available for tables linked through ODBC. In this case, Database Cache might not work properly.
    Is it possible to configure Database Cache in such a way that the application reads data from the mainframe only when the data in the cache is more than an hour old?

    moving up...
    Hi
    I have a question regarding the 9iAS database cache: when I select a table to be cached, does the cache also pull the index definitions from the origin database? Another way to put it: if I cache a table which has an index on the PK (on the origin database), is the index also created in the db cache, or does it perform a full scan for every query?
    Thanks
    Ramiro

  • Policy of Cache and Database

    Is it possible to search for a class (MyProject) each time in the cache (because all its directly mapped attributes are not volatile), and to search a one-to-many attribute toward another class (MyQuota) each time in the database (because there is a sort option)?
    For the moment:
    MyProject class uses a Full identity map
    MyQuota class uses a SoftCacheWeak identity map
    The _myQuota attribute of MyProject doesn't use indirection

    TopLink queries return objects, in this case an instance of MyProject. Queries do not return attributes of objects. When accessing an attribute from an object, TopLink does not re-issue a query; the data is simply returned from the Java reference. If a change is made to the one-to-many collection, that change should be made to MyProject's collection, and the cached versions will be updated. If the change to the collection is being made outside of TopLink, and those changes are required in the MyProject object, then the MyProject object will have to be refreshed from the database. If the collection is the only attribute that changes and you do not want to have to refresh the MyProject object to get the changes, then I would recommend not mapping the one-to-many but maintaining it manually in your code and always executing separate queries for the collection data.
    --Gordon
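
    A minimal sketch of the two options described above, assuming a TopLink Session has already been acquired and that the MyProject/MyQuota classes from the question are on the classpath (ordering for the quota query would be added on a query object; it is omitted here):

    import java.util.Vector;
    import oracle.toplink.sessions.Session;

    public class ProjectQuotaAccess {

        // Option 1: changes were made outside TopLink, so re-read MyProject from the
        // database (whether the collection is refreshed too depends on the mapping
        // and refresh settings).
        public static MyProject reloadProject(Session session, MyProject project) {
            return (MyProject) session.refreshObject(project);
        }

        // Option 2: do not map the one-to-many; query the MyQuota rows separately
        // whenever the (database-sorted) collection is needed.
        public static Vector readQuotas(Session session) {
            return session.readAllObjects(MyQuota.class);
        }
    }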

  • Database Change Notification and TopLink Cache Invalidation

    Has someone succeeded in implementing the how-to "Database Change Notification and TopLink Cache Invalidation"?
    I have corrected some document errata in the PL/SQL content, and I manage to get messages into the 'notify_queue'.
    I obtain the Topic in Java from this queue.
    But the TopicSubscriber instances do not receive any messages. Is there something to keep in mind to make it work?
    Regards.

    Reviving this thread again...
    I am using the DCN feature to build a middle-tier cache. I know Oracle has a problem sending the physical rowid in the case of an index-organized table; however, for a normal table it is also not able to send the proper rowid.
    For example, I have 2 records in table A with rowids AAARIUAAGAAAV/uABw and AAARIUAAGAAAV/pAAX.
    I have updated both records. Strangely, for the first record Oracle is sending an INVALID rowid, although for the second record it is sending the valid one.
    Following is the output:
    Row 1:  (Wrong rowid being sent, AAARIUAAGAAAV/uABw is replaced with AAARIUAAGAAAXDCAAr)
    Connection information  : local=localhost.localdomain/127.0.0.1:47633, remote=localhost.localdomain/127.0.0.1:2278
    Registration ID         : 2102
    Notification version    : 1
    Event type              : OBJCHANGE
    Database name           : <sid>
    Table Change Description (length=1)
        operation=[UPDATE], tableName=<table_name>, objectNumber=70164
        Row Change Description (length=1):
          ROW:  operation=UPDATE, ROWID=AAARIUAAGAAAXDCAAr
    Row 2:  (Right rowid being sent, AAARIUAAGAAAV/pAAX)
    Connection information  : local=localhost.localdomain/127.0.0.1:47633, remote=localhost.localdomain/127.0.0.1:2278
    Registration ID         : 2102
    Notification version    : 1
    Event type              : OBJCHANGE
    Database name           : <sid>
    Table Change Description (length=1)
        operation=[UPDATE], tableName=<table_name>, objectNumber=70164
        Row Change Description (length=1):
          ROW:  operation=UPDATE, ROWID=AAARIUAAGAAAV/pAAX
    Any ideas?
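
    For anyone trying to reproduce this, here is a minimal registration sketch using the Oracle JDBC DCN API (oracle.jdbc.dcn, ojdbc6); the connect string, credentials and the some_user.t table are placeholders, and the listener simply prints whatever ROWID the server reports, which is where the mismatch above shows up:

    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;
    import oracle.jdbc.OracleConnection;
    import oracle.jdbc.OracleDriver;
    import oracle.jdbc.OracleStatement;
    import oracle.jdbc.dcn.DatabaseChangeEvent;
    import oracle.jdbc.dcn.DatabaseChangeListener;
    import oracle.jdbc.dcn.DatabaseChangeRegistration;
    import oracle.jdbc.dcn.RowChangeDescription;
    import oracle.jdbc.dcn.TableChangeDescription;

    public class DcnRowidListener {
        public static void main(String[] args) throws Exception {
            Properties creds = new Properties();
            creds.setProperty("user", "some_user");
            creds.setProperty("password", "change_me");
            OracleConnection conn = (OracleConnection) new OracleDriver()
                    .connect("jdbc:oracle:thin:@localhost:1521:orcl", creds);

            // Ask the server to include ROWIDs in the change notifications.
            Properties options = new Properties();
            options.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
            DatabaseChangeRegistration dcr = conn.registerDatabaseChangeNotification(options);

            dcr.addListener(new DatabaseChangeListener() {
                public void onDatabaseChangeNotification(DatabaseChangeEvent event) {
                    for (TableChangeDescription tcd : event.getTableChangeDescription()) {
                        for (RowChangeDescription rcd : tcd.getRowChangeDescription()) {
                            // This is the ROWID being compared against the table above.
                            System.out.println(rcd.getRowOperation() + " " + rcd.getRowid());
                        }
                    }
                }
            });

            // Associate the registration with a query so the table gets registered.
            Statement stmt = conn.createStatement();
            ((OracleStatement) stmt).setDatabaseChangeRegistration(dcr);
            ResultSet rs = stmt.executeQuery("SELECT rowid FROM some_user.t");
            while (rs.next()) { /* consume the result set */ }
            rs.close();
            stmt.close();
            // Keep the JVM running to receive notifications; call
            // conn.unregisterDatabaseChangeNotification(dcr) when finished.
        }
    }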

  • Database cache hit ratio is very low (about 11%)

    Dear all,
    Our SAP ERP development server runs in an SAP distributed environment on VMware ESXi.
    Central instance information:
    AEOM1SAP08 (172.16.1.87)
    RAM : 4 GB
    CPU : 4-core processors, 2.93 GHz, 2.93 GHz
    SWAP : 25 GB
    Database server information:
    AEOM1SAP04 (172.16.1.83)
    RAM : 20 GB
    CPU : 8-core processors, 2.93 GHz, 2.93 GHz
    SWAP : 40 GB
    The database server hosts three database instances:
    AED - SAP ERP development server (database buffer pool memory 8 GB)
    EDE - SAP Enterprise Portal server (database buffer pool memory 8 GB)
    AES - SAP Solution Manager server (this database is currently stopped)
    RAM utilization: about 11 GB of the database server's 20 GB
    In transaction ST04, the memory parameters are:
    physical size of memory --> 20 GB
    current database memory --> 8 GB
    database cache memory --> 5 GB, hit ratio 11%
    proc cache memory --> 1.5 GB, hit ratio 98%
    SQL memory setting --> FIXED
    The following warnings are appearing:
    latch wait time per request (ms) exceeds 20 milliseconds
    wait time per log write (ms) exceeds 10 milliseconds
    check the I/O performance of the database server
    In SM12 a large number of tables are locked, the system is running very slowly, and users are not able to perform their work in the development server.
    The required information is attached below.
    thanks and regards
    sudheerk
    7382383262

    Hello,
    Please check
    Note #987961 FAQ: SQL Server I/O performance
    regards,
    John Feely

  • SSO between Portal and Nakisa... problem with SSO... library not found

    Hi SDNers and Nakisa technical experts,
    We have a Portal environment 7.02, a Nakisa environment 3.0 (CE), and an HR backend environment 701 (604).
    We are busy setting up SSO between Portal and Nakisa via the URL iView for the org chart (http://<host>:<port>/OrgChart/default.jsp).
    We have done as indicated in the wiki:
    http://wiki.sdn.sap.com/wiki/display/ERPHCM/SAPSSOAuthenticationwithverify.pseusingSAPSSOEXT
    We are, however, still having issues with SSO, and in the cds.log the following is being displayed:
    ++01 Aug 2011 13:11:42 ERROR com.nakisa.Logger  - com.mysap.sso.SSO2Ticket : Could not load library: sapsecu.dll - java.lang.Exception: MySapInitialize failed: rc= 14null++
    ++01 Aug 2011 13:11:42 ERROR com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : java.lang.Exception: MySapEvalLogonTicketEx failed: standard error= 9, ssf error= 0++
    ++01 Aug 2011 13:11:42 ERROR com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : Internal error (9) - No SSF error (0)++
    Can someone indicate what I am doing wrong?
    Regards Dries

    Hi Luke,
    thanks a lot for your help so far.
    I have created a root/XML folder under the directory, and the path is now as follows:
    K:\usr\sap\NKP\J14\j2ee\cluster\apps\Nakisa\OrgChart\servlet_jsp\OrgChart\root\.system\Admin_Config\__000__Sasol_DEV_LIVE\.delta\root\XML
    It seems like it finds the verify.pse, but not the library, sapsecu.dll.
    My credentials.xml file is as follows:
    <credentials>
    <assembly name="SapSso"/>
      <info>
        <item name="PseFilePath">XML\verify.pse</item>
        <item name="SsfLibFilePath">XML\sapsecu.dll</item>
        <item name="PsePassword"></item>
        <item name="WindowsPlatform">64</item>
        <item name="TicketFile"></item>
        <item name="Base64decode">true</item>
       </info>
    </credentials>
    I however still get the following in the cds.log:
    15 Aug 2011 13:59:53 INFO  com.nakisa.Logger  - Tenant ID: 000
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - LoginSettingsObject Load: 1719
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : Credential provider SapSso
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : Using cert: K:\usr\sap\NKP\J14\j2ee\cluster\apps\Nakisa\OrgChart\servlet_jsp\OrgChart\root\XML\verify.pse
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : Ticket is: AjExMDAgAA9wb3J0YWw6eXNzZWxhZ2OIABNiYXNpY2F1dGhlbnRpY2F0aW9uAQAIWVNTRUxBR0MCAAMwMDADAANEUDkEAAwyMDExMDgxNTExNDcFAAQAAAAICgAIWVNTRUxBR0P%2FAQQwggEABgkqhkiG9w0BBwKggfIwge8CAQExCzAJBgUrDgMCGgUAMAsGCSqGSIb3DQEHATGBzzCBzAIBATAiMB0xDDAKBgNVBAMTA0RQOTENMAsGA1UECxMESjJFRQIBADAJBgUrDgMCGgUAoF0wGAYJKoZIhvcNAQkDMQsGCSqGSIb3DQEHATAcBgkqhkiG9w0BCQUxDxcNMTEwODE1MTE0NzIwWjAjBgkqhkiG9w0BCQQxFgQUK13ubzFiQrY4H%2FLRk2ysyvPSvccwCQYHKoZIzjgEAwQuMCwCFF1W9d!tAjLvP8dnb1bs4XghaHSBAhQ9kd9N!bJubUWITtkzU!za96lxNg%3D%3D
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : Version of SAPSSOEXT: SAPSSOEXT 4
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : SCUE LIB base path is:
    15 Aug 2011 13:59:55 ERROR com.nakisa.Logger  - com.mysap.sso.SSO2Ticket : Could not load library: sapsecu.dll - java.lang.Exception: MySapInitialize failed: rc= 14null
    15 Aug 2011 13:59:55 ERROR com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : java.lang.Exception: MySapEvalLogonTicketEx failed: standard error= 9, ssf error= 0
    15 Aug 2011 13:59:55 ERROR com.nakisa.Logger  - com.nakisa.framework.login.Credentials_SapSso : Internal error (9) - No SSF error (0)
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : User to authenticate null
    15 Aug 2011 13:59:55 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : Authentication provider SapSso
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : User authenticated null
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : Authentication row is {SapSsoTicket=AjExMDAgAA9wb3J0YWw6eXNzZWxhZ2OIABNiYXNpY2F1dGhlbnRpY2F0aW9uAQAIWVNTRUxBR0MCAAMwMDADAANEUDkEAAwyMDExMDgxNTExNDcFAAQAAAAICgAIWVNTRUxBR0P%2FAQQwggEABgkqhkiG9w0BBwKggfIwge8CAQExCzAJBgUrDgMCGgUAMAsGCSqGSIb3DQEHATGBzzCBzAIBATAiMB0xDDAKBgNVBAMTA0RQOTENMAsGA1UECxMESjJFRQIBADAJBgUrDgMCGgUAoF0wGAYJKoZIhvcNAQkDMQsGCSqGSIb3DQEHATAcBgkqhkiG9w0BCQUxDxcNMTEwODE1MTE0NzIwWjAjBgkqhkiG9w0BCQQxFgQUK13ubzFiQrY4H%2FLRk2ysyvPSvccwCQYHKoZIzjgEAwQuMCwCFF1W9d!tAjLvP8dnb1bs4XghaHSBAhQ9kd9N!bJubUWITtkzU!za96lxNg%3D%3D}
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : User population provider is Database
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - FunctionRunner : ensurePool : Current pool size:0
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - FunctionRunner : ensurePool : Current pool size:0
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - FunctionRunner.executeFunctionDirect: /NAKISA/RFC_REPORT took: 266ms
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - BAPI_SAP_OTFProcessor_Report :  WhereClause : ( (Userid is null) or (Userid='') ); Table : (SAP_UserPopulation); Dataelement : (UserPopulationInfo)
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : User populated
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : Role mapping provider is: SAP
    15 Aug 2011 14:00:00 ERROR com.nakisa.Logger  - SAPRoleMapping_SAP.MapRoles() : while trying to invoke the method java.lang.String.toUpperCase() of an object loaded from local variable 'value'
    15 Aug 2011 14:00:00 INFO  com.nakisa.Logger  - com.nakisa.framework.login.Main : LogIn : Login process finished with errors
    Any ideas? Should I maybe hardcode the location in the credentials.xml?
    Kind regards
    Dries Yssel

  • How to Execute a Remote Procedure in Portal using Database Link

    Hi,
    I followed the instructions to create a Portal form for a remote procedure, but I am encountering the following error. Can someone advise what the cause may be?
    Failed to execute - Missing string(create_package_body) language(us) domain (wwv) sub_domain (wwv_builder) (WWV-04300)
    ORA-04020: deadlock detected while trying to lock object PUBLIC.PORTLET_SCHEMA (WWV-11230)
    Failed to parse as PORTAL - (WWV-08300)
    PURPOSE
    How to execute a remote procedure in Portal using a database link.
    DESCRIPTION
    This procedure assumes that you have two databases, one of which is remote, and Portal is configured in the other.
    Remote Database A:
    ==================
    1) Create a procedure as follows:
       Create or Replace PROCEDURE SCOTT.ADD_TWO_VALUES (v_one IN NUMBER, v_two IN NUMBER, v_result OUT NUMBER) as
       begin
         v_result := v_one + v_two;
       end;
    2) Grant execute privileges to PUBLIC on the procedure.
    Database B (where Portal is configured):
    ========================================
    1) Create a public database link and choose to connect as a specific user (say SYSTEM). By default, in an Oracle 8i database, the "global_names" parameter in the initSID.ora (or init.ora) file is set to "true". This global naming parameter enforces that a dblink has the same name as the database it connects to. Therefore, if the remote global database (A) name is "ora8.acme.com", then the database link should also be named "ora8.acme.com".
    2) Create a synonym for the procedure in Database A. Make sure you fully qualify the procedure name in the remote database (like SCOTT.ADD_TWO_VALUES).
    3) Create a dynamic page to execute the procedure. The ORACLE tags in the dynamic page will look similar to the following:
       <ORACLE>
       DECLARE
         v_total NUMBER;
       BEGIN
         ADD_TWO_VALUES(:v_one, :v_two, v_total);
         htp.p('The total is => ');
         htp.p('<input type="TEXT" VALUE='||v_total||'>');
         htp.para;
         htp.anchor('http://<machine.domain:port#>/pls/portal30/SCOTT.DYN_ADD_TWO_VALUES.show_parms', 'Re-Execute Procedure');
       END;
       </ORACLE>
    4) Portal does not have an option to create a form based on a synonym. Therefore, if you want to create a form instead of a dynamic page, create a wrapper procedure and then create a form based on this procedure. For example:
       Create or Replace PROCEDURE PORTAL30.ADD_TWO_VALUES_PR (v_one IN NUMBER, v_two IN NUMBER, v_total OUT NUMBER) as
       begin
         add_two_values(v_one, v_two, v_total);
       end;
    5) Grant execute privileges to PUBLIC on the procedure.
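
    Steps 1 and 2 on Database B do not show the actual DDL. A minimal sketch of those two statements, issued here from a small JDBC program for illustration; the connect string, password and the 'ora8' TNS alias are placeholders, not values taken from the note:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateRemoteProcLink {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@portalhost:1521:PORTDB", "system", "change_me");
                 Statement stmt = conn.createStatement()) {
                // Step 1: a public database link named after the remote global database
                // name, as required when global_names = true.
                stmt.execute("CREATE PUBLIC DATABASE LINK ora8.acme.com "
                           + "CONNECT TO system IDENTIFIED BY change_me USING 'ora8'");
                // Step 2: a synonym that fully qualifies the remote procedure.
                stmt.execute("CREATE SYNONYM add_two_values "
                           + "FOR scott.add_two_values@ora8.acme.com");
            }
        }
    }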

    hello...
    any input will be welcomed... Thanks.

  • "In-Memory Database Cache" option for Oracle 10g Enterprise Edition

    Hi,
    In one of our applications, we are using TimesTen 5.1.24 and Oracle 9i databases (platform - Solaris 9i).
    TimesTen holds application information which needs to be accessed quickly, and Oracle 9i is the master application database.
    Now we are looking at the option of migrating from Oracle 9i to the Oracle 10g database. While exploring Oracle 10g features, I came to know about the "In-Memory Database Cache" option for Oracle Enterprise Edition. This made me think about using Oracle 10g Enterprise Edition with the "In-Memory Database Cache" option for our application.
    Following are the advantages that I could visualize by adopting the above-mentioned approach:
    1. Data reconciliation between Oracle and TimesTen is not required (i.e. data can be maintained only in Oracle tables, and "In-Memory Database Cache" can be used for caching).
    2. Data maintenance is easy and gives a single view of the data.
    I have the following queries regarding the above-mentioned solution:
    1. What is the difference between "TimesTen In-Memory Database" and "In-Memory Database Cache" in terms of features and licensing model?
    2. Is the "In-Memory Database Cache" option integrated with the Oracle 10g installable, or is it a separate installable (i.e. a TimesTen installable with only the cache feature)?
    3. Is the "In-Memory Database Cache" option the same as the "TimesTen Cache Connect to Oracle" option in TimesTen In-Memory Database?
    4. After integrating the "In-Memory Database Cache" option with Oracle 10g, data access will happen only through Oracle SQL*Plus or OCI calls. Am I right in making this statement?
    5. Is it possible to cache the result set of a join query in "In-Memory Database Cache"?
    In the "Options and Packs" chapter of the Oracle documentation (http://download.oracle.com/docs/cd/B19306_01/license.102/b14199/options.htm#CIHJJBGA), I encountered the following statement:
    "For the purposes of licensing Oracle In-Memory Database Cache, only the processors on which the TimesTen In-Memory Database component of the In-Memory Database Cache software is installed and/or running are counted for the purpose of determining the number of licenses required."
    We have servers with the following configuration. Is there a way to get the count of processors on which the Cache software could be installed and/or running? Please assist.
    Production box with 12 Core 2 Duo processors (24 cores)
    Pre-production box with 8 Core 2 Duo processors (16 cores)
    Development and test box with 2 single-chip processors
    Development and test box with 4 single-chip processors
    Development and test box with 6 single-chip processors
    Thanks & Regards,
    Vijay

    Hi Vijay,
    regarding your questions:
    1. What is the difference between "TimesTen In-Memory Database" and "In-Memory Database Cache" in terms of features and licensing model?
    ==> The product has just been renamed and integrated better with the Oracle database: TimesTen == In-Memory Database Cache.
    2. Is the "In-Memory Database Cache" option integrated with the Oracle 10g installable, or is it a separate installable (i.e. a TimesTen installable with only the cache feature)?
    ==> Separate installation.
    3. Is the "In-Memory Database Cache" option the same as the "TimesTen Cache Connect to Oracle" option in TimesTen In-Memory Database?
    ==> Please have a look here: http://www.oracle.com/technology/products/timesten/quickstart/cc_qs_index.html
    This explains the differences.
    4. After integrating the "In-Memory Database Cache" option with Oracle 10g, data access will happen only through Oracle SQL*Plus or OCI calls. Am I right in making this statement?
    ==> Please see the above-mentioned papers.
    5. Is it possible to cache the result set of a join query in "In-Memory Database Cache"?
    ==> Again ... ;-)
    Kind regards
    Mike

  • Question about Portal and BI Beans

    Hi,
    I am trying to create a portlet that displays a Thin BI Beans crosstab. Using the "URL-Based Portlet (inline rendering)" approach, I could display the Thin BI Beans crosstab inside Portal. But when I try to drill down or change the page edge, it ends up with a "No Page Found" error. My questions are...
    1) Is it possible to embed a Thin BI Beans crosstab inside the Portal and manipulate it dynamically?
    2) If it is possible, how can I do that?
    I will attach my provider.xml and JSP file that creates the crosstab. Please let me know if you need more information. Thank you very much.
    Seiji Minabe
    Technical Director
    IAF Software, Inc.
    provider.xml
    <?xml version = '1.0' encoding = 'UTF-8'?>
    <?providerDefinition version="3.1"?>
    <provider class="oracle.portal.provider.v2.http.URLProviderDefinition">
    <providerInstanceClass>oracle.portal.provider.v2.http.URLProviderInstance</providerInstanceClass>
    <session>true</session>
         <portlet class="oracle.portal.provider.v2.DefaultPortletDefinition">
              <id>1</id>
              <name>Sample Portal</name>
              <title>SamplePortal</title>
              <shortTitle>SamplePortal</shortTitle>
              <description>SamplePortal.</description>
              <timeout>10000</timeout>
              <timeoutMessage>SamplePortal portlet timed out</timeoutMessage>
              <acceptContentType>text/html</acceptContentType>
              <renderer class="oracle.portal.provider.v2.render.RenderManager">
                   <contentType>text/html</contentType>
                   <charSet>UTF-8</charSet>
                   <showPage>/sampleView.jsp</showPage>
              </renderer>
         </portlet>
         <portlet class="oracle.portal.provider.v2.http.URLPortletDefinition">
              <id>2</id>
              <name>Sample URL Based Portlet</name>
              <title>Sample URL Based Portlet</title>
              <description>Display Sample as a portlet.</description>
              <timeout>100</timeout>
              <timeoutMessage>Timed out waiting for Sample Portlet.</timeoutMessage>
              <acceptContentType>text/html</acceptContentType>
              <showEdit>false</showEdit>
              <showEditToPublic>false</showEditToPublic>
              <showEditDefault>false</showEditDefault>
              <showPreview>false</showPreview>
              <showDetails>false</showDetails>
              <hasHelp>false</hasHelp>
              <hasAbout>false</hasAbout>
              <renderer class="oracle.portal.provider.v2.render.RenderManager">
              <contentType>text/html</contentType>
              <showPage class="oracle.portal.provider.v2.render.http.URLRenderer">
              <contentType>text/html</contentType>
              <charSet>ISO-8859-1</charSet>
              <pageUrl>http://iafsoft06.iafsoft.com:7779/SamplePortal/sampleView.jsp</pageUrl>
              <filter class="oracle.portal.provider.v2.render.HtmlFilter">
              <headerTrimTag>&lt;body</headerTrimTag>
              <footerTrimTag>&lt;/body></footerTrimTag>
              <inlineRendering>true</inlineRendering>
              </filter>
              </showPage>
              </renderer>
         </portlet>
    </provider>
    sampleView.jsp
    <%@ taglib uri="http://xmlns.oracle.com/bibeans" prefix="orabi" %>
    <%@ page contentType="text/html;charset=windows-1252"%>
    <%@ page import="oracle.portal.provider.v2.render.*"%>
    <%@ page import="oracle.portal.provider.v2.http.*"%>
    <%-- Start synchronization of the BI tags --%>
    <% synchronized(session){ %>
    <orabi:BIThinSession id="BIThinSession1" configuration="/Project1BIConfig1.xml" >
    <orabi:Presentation id="sampleView_Presentation1" location="sampleCrosstab" />
    </orabi:BIThinSession>
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
    <title>
    Sample View
    </title>
    </head>
    <body>
    <FORM name="BIForm">
    <!-- Insert your Business Intelligence tags here -->
    <orabi:Render targetId="sampleView_Presentation1" parentForm="BIForm" />
    <%-- The InsertHiddenFields tag adds state fields to the parent form tag --%>
    <orabi:InsertHiddenFields parentForm="BIForm" biThinSessionId="BIThinSession1" />
    </FORM>
    </body>
    </html>
    <% } %>
    <%-- End synchronization of the BI tags --%>

    The versions of products I use are...
    Oracle Database 9.2.0.2
    9iAS 9.0.2
    JDeveloper903
    bibeans903a

  • Coherence and database backend updates

    Hi
    I am new to Coherence; I liked the features of Coherence such as the replicated cache, cache-through, etc.
    My question is: if I am using Coherence with cache-through and partitioned caching, and I have a back-end update on the data through an Oracle database stored procedure, how does the Coherence cache get the latest data changed by the stored procedure? Is there any event-driven mechanism to invalidate the cache and reload the data, or is this not a good practice in this scenario?
    Rgds
    Anil

    Hi Anil,
    it really depends on what you need to achieve.
    There is a very good wiki which describes most of the things you can do with Coherence at the url: http://wiki.tangosol.com/display/COH33UG/Coherence+3.3+Home
    However, since you have an existing database model which you want to retain, because you want the data to still reside in the database, you might not be totally free in how you represent data in Coherence, depending on the consistency requirements.
    The best feature of Coherence for significantly reducing the load on the database is the write-behind cache.
    Write-behind functionality allows you to coalesce multiple updates to the same DB row into a single update, as data is written out only after a certain amount of time, thereby combining the changes from multiple updates into a single one.
    It also allows ripe updates to multiple cached entries whose primary copies reside in the same cache node to be written out in the same database operation (preferably in batch mode).
    Due to these behaviors, write-behind has a profound effect on write-heavy applications.
    However, that mode of operation requires that any logic that needs to query the data-set consistently, and all operations changing the data-set, go through the cache, because the database is not guaranteed to be consistent. Therefore it might not be good for you.
    Another approach is that, if you want to make your DB changes directly in the DB, you can simply cache data in whatever structures suit your access patterns in a read-through cache, and if there are any changes to the database you invalidate the entries which are stale.
    The cache structures can be whatever you choose as appropriate to your logic: you can cache single entries, you can cache entire top-down object hierarchies, you can cache query results keyed by the query parameters.
    The point is that you are free to choose the most appropriate structure for what to cache, as opposed to the caching features of other frameworks, which choose caching structures aligned to their classes and not to your needs.
    Just keep in mind that without doing serious locking (which adversely affects both read and write performance), between reading any two or more entries from the cache a change might have occurred to one or more of those entries. This means that when using multiple entries from the cache, there might not be any transaction-set in the database which contains all entries in the state in which you got them.
    So if you need any such guarantees, then the data you need such guarantees on must reside in a single cache entry, and that cache entry must have been retrieved from the database with a transaction which actually provides those guarantees (if you read data from the database with READ_COMMITTED isolation and with multiple queries, then you don't get that consistency even from the database, as some of the entries read by earlier operations in the transaction might have been overwritten when another transaction committed before subsequent read operations in your transaction).
    There can be other approaches as well.
    It really all depends on your access patterns and without knowing more about that it is hard to suggest the correct solution.
    Best regards,
    Robert
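
    As one concrete example of the read-through shape described above, here is a minimal CacheStore sketch; the PRODUCT(id, name) table, the DataSource wiring and the cache name are placeholders, not anything from this thread. It would be plugged into a cache scheme through a <cachestore-scheme> element in the cache configuration, and a <write-delay> on the read-write backing map turns the same store into write-behind:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.sql.DataSource;
    import com.tangosol.net.cache.AbstractCacheStore;

    // Read-through / write-through store for a hypothetical PRODUCT(id, name) table.
    public class ProductCacheStore extends AbstractCacheStore {
        private final DataSource dataSource;

        public ProductCacheStore(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Called on a cache miss: read the value through from the database.
        public Object load(Object key) {
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM product WHERE id = ?")) {
                ps.setObject(1, key);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        // Called on put (write-through) or when a write-behind entry becomes ripe.
        public void store(Object key, Object value) {
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "MERGE INTO product p USING dual ON (p.id = ?) "
                       + "WHEN MATCHED THEN UPDATE SET p.name = ? "
                       + "WHEN NOT MATCHED THEN INSERT (id, name) VALUES (?, ?)")) {
                ps.setObject(1, key);
                ps.setObject(2, value);
                ps.setObject(3, key);
                ps.setObject(4, value);
                ps.executeUpdate();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        // Called on remove.
        public void erase(Object key) {
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "DELETE FROM product WHERE id = ?")) {
                ps.setObject(1, key);
                ps.executeUpdate();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }

    When a stored procedure changes rows directly in the database, the corresponding entries still have to be invalidated from outside the store, for example with CacheFactory.getCache("products").remove(key), since write-behind cannot see those changes.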
