Data in cache

Hi,
Webdynpro session variables should not be used, as that would violate the rule that Webdynpro inner classes should not be used.
What is the way to put data in a cache for a user? I have 8 different projects deployed on the Portal and used as Webdynpro iViews.
Now, for the logged-in user, how can I maintain data in a cache so that the same queries do not have to be read from the database again?

Nav,
<a href="https://media.sdn.sap.com/javadocs/NW04s/SPS7/wd/com/sap/tc/webdynpro/services/sal/adapter/api/IWDWebContextAdapter.html">IWDWebContextAdapter</a> is deprecated. Instead, use <a href="https://media.sdn.sap.com/javadocs/NW04s/SPS7/wd/com/sap/tc/webdynpro/services/sal/adapter/api/IWDProtocolAdapter.html">IWDProtocolAdapter</a> to access the protocol abstraction of the current request-response cycle. Check whether <a href="https://media.sdn.sap.com/javadocs/NW04s/SPS7/wd/com/sap/tc/webdynpro/services/sal/adapter/api/IWDWebContextAdapter.html#getRequestParameter(java.lang.String)">getRequestParameter</a> helps you get the parameters.
Bala
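
One generic option, independent of any specific SAP or Portal API, is to keep query results in an in-memory map keyed by user and query, and to consult it before going back to the database. The sketch below is plain Java with illustrative names only; where such an object lives (application scope, session scope) and when it is invalidated still has to be decided per project.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal per-user query cache sketch (illustrative, not a WebDynpro API).
public class UserQueryCache {
    // one inner map per user, keyed by the query (or query + parameters)
    private final Map<String, Map<String, Object>> cache = new ConcurrentHashMap<>();

    public Object get(String userId, String query) {
        Map<String, Object> perUser = cache.get(userId);
        return perUser == null ? null : perUser.get(query);
    }

    public void put(String userId, String query, Object result) {
        cache.computeIfAbsent(userId, k -> new ConcurrentHashMap<>()).put(query, result);
    }

    public void invalidate(String userId) {
        cache.remove(userId); // call this on logoff so the data does not linger
    }
}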

Similar Messages

  • Data Buffer Cache Error Message

    I'm using a load rule that builds a dimension on the fly and I am getting the following error: "Not enough memory to allocate the Data Buffer Cache [adDatInitCacheParamsAborted]". I've got 4 other databases which are set up the same as this one and I'm not getting this error. I've checked all the settings and I think they're all the same. Does anyone have any idea what this error could mean? I can be reached at [email protected]

    Hi,
    Same issue, running Vista too.  This problem is recent.  It may be due to the last iTunes update (iTunes 11.2.23).

  • Clear POJO Data control cache

    Hi, I am using JDeveloper 11.1.1.7.
    I have a POJO-based data control which I am using for a tree structure.
    The tree sits inside a popup.
    When the popup is cancelled, or once its purpose is over,
    I want to clear the cache of the data control.
    I tried several ways to do that, such as having methods in my Java class that set the backing object to null and calling those methods on cancel.
    But nothing worked.
    How do I clear the POJO DC?

    Hi AlejandroTLanz,
    I tried the above method.
    I created a method in the Java class which sets the List used by the tree to null,
    and then I call findIterator.executeQuery().
    I can see that the list becomes null,
    but the Java sampler still shows that the objects are present.
    I guess the objects are still in the data control cache...
    I also tried to clear the DC cache by calling the DC reset method,
    but nothing worked.
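
    For reference, a minimal sketch of one commonly suggested approach from a managed bean: look up the iterator binding behind the tree, reset its data control, and re-execute the query. The iterator name "TreeIterator" and the bean itself are made up for illustration; whether resetState() is acceptable in your flow depends on what else the data control holds.

    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.adf.model.binding.DCIteratorBinding;

    public class TreePopupBean {
        public void clearTreeCache() {
            DCBindingContainer bindings =
                    (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
            DCIteratorBinding iter = bindings.findIteratorBinding("TreeIterator");
            // resetState() drops the data control's cached state; executeQuery()
            // then repopulates the iterator from the (now cleared) POJO source.
            iter.getDataControl().resetState();
            iter.executeQuery();
        }
    }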

  • Data File Cache / Data Cache

    I have a few questions regarding the data file cache and the data cache, based on the size of the application.
    All the settings are changed using a MaxL script.
    1. Data file cache - 50MB, data cache - 100MB.
    I am using buffered I/O; will memory still be allocated to the data file cache?
    2. The DBAG says that the data cache and index cache should be set as small as
    possible for both buffered and direct I/O. The size of one of my applications is
    around 11 GB: data files 11GB, index file 450MB.
    I have set my index cache to 450MB and my data cache to 700MB.
    Is that OK, or a. what should my data cache size be?
    b. How do I calculate the optimal data cache and index cache?
    3. The memory size of our AIX server is 6GB. If I use direct I/O, can the sum of
    all my caches be 4GB?
    4. If I use buffered I/O, according to (2), what should my cache sizes be?
    Thanks
    Amarnath

    In the DBAG it states that the data file cache is not used with buffered I/O, so the answer to 1) should be NO.
    For 2) there is a hint in the DBAG that you should check the hit ratio of the caches to verify the sizing; the only calculatory advice given is for the calculator cache :-( For 2b), look at the hit ratio: if it stays around 1.0, try to decrease the cache until the ratio drops slightly. Inspect the ratios from time to time.
    3) Don't know; 64-bit should be no problem. But why would you do this anyway?
    An example from our settings: pag total ~20GB, ind ~2GB.
    The outline has 11 dimensions with a block size of ~340KB, largest dense dimension ~400 members, largest sparse ~4000 members, existing blocks ~2.7 million.
    The data cache is set to 256MB and the index cache to 64MB; our hit ratios are 1.0 for the index cache and 0.77 for the data cache. So our data cache could be larger, but retrieval performance is around 3.0 seconds, which is fine for our users.
    4) Check your hit ratios and try to increase or decrease the caches in small steps (first I'd do the index cache, then, if that's fine, I'd tune the data cache).
    Hope it helped a bit.
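
    As a purely illustrative (non-Essbase) sketch of that rule of thumb, assuming you read the hit and request counts from the database statistics yourself:

    // Illustrative helper only - it just encodes the advice above:
    // a ratio near 1.0 means the cache could be smaller, a low ratio
    // (with slow retrievals) means it could be larger.
    public class CacheTuning {
        public static double hitRatio(long hits, long requests) {
            return requests == 0 ? 1.0 : (double) hits / requests;
        }

        public static long suggestedSize(long currentBytes, double ratio) {
            if (ratio >= 0.99) {
                return (long) (currentBytes * 0.9); // near 1.0: try shrinking ~10%
            }
            if (ratio < 0.9) {
                return (long) (currentBytes * 1.1); // low: try growing ~10%
            }
            return currentBytes;                    // otherwise leave it alone
        }
    }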

  • Data buffer cache has undo information...??

    Does the data buffer cache in the SGA hold any undo information?

    920273 wrote:
    In the SGA we have the data buffer cache. If I am updating a block in the buffer cache, is any undo information maintained in the buffer cache regarding the change I have made to the block? Let's take an example: I am updating a block's value from '2' to '4' and DBWR has not written it to the datafiles. In this case, is any undo information maintained in the buffer cache?

    Hmmm... When you update the value 2 to 4 and it is still uncommitted, the transaction is active and the change stays in the buffer cache. Of course, if the cache is full or a checkpoint is performed, this modified data is also written to the datafiles. So you should not assume that DBWR does not write to the datafiles; you cannot predict when the cache will fill up or when a checkpoint will occur.

  • Data Buffer Cache!

    Hi Guru(s),
    Can anyone please tell me how we can check the size of the data buffer cache?
    Please help.
    Regards,
    Rajeev K

    Hi,
    Here is what I would do:
    First, let's have a look what views we have available to us
    SQL> select table_name from dict where table_name like '%SGA%';
    TABLE_NAME
    DBA_HIST_SGA
    DBA_HIST_SGASTAT
    DBA_HIST_SGA_TARGET_ADVICE
    V$SGA
    V$SGAINFO
    V$SGASTAT
    V$SGA_CURRENT_RESIZE_OPS
    V$SGA_DYNAMIC_COMPONENTS
    V$SGA_DYNAMIC_FREE_MEMORY
    V$SGA_RESIZE_OPS
    V$SGA_TARGET_ADVICE
    GV$SGA
    GV$SGAINFO
    GV$SGASTAT
    GV$SGA_CURRENT_RESIZE_OPS
    GV$SGA_DYNAMIC_COMPONENTS
    GV$SGA_DYNAMIC_FREE_MEMORY
    GV$SGA_RESIZE_OPS
    GV$SGA_TARGET_ADVICE
    19 rows selected.

    v$sgainfo looks pretty promising, what does it contain?
    SQL> desc v$sgainfo
    Name                                                              Null?    Type
    NAME                                                                       VARCHAR2(32)
    BYTES                                                                      NUMBER
    RESIZEABLE                                                                 VARCHAR2(3)

    And how many rows are in it?
    SQL> select count(*) from v$sgainfo;
      COUNT(*)
            12

    Not too many, so let's look at them all, with a little formatting:
    SQL> select name, round(bytes/1024/1024,0) MB, resizeable from v$sgainfo order by name;

    Hope that helps,
    Rob

  • How much data can be cached in the SGA

    Hi everybody,
    my company is using an Oracle 11g 11.2.0.1.0 Production database, my OS is RHEL 5.5, and my server's physical memory is 30.9 GB (cat /proc/meminfo), i.e. an SGA size of (30.9 * 40/100) = 12.5 GB, so I use a 12.5 GB SGA for one instance. We need to cache some data for our application's performance. How much data can be cached in the SGA based on 12.5 GB, or how do I calculate how much data can be cached in the SGA without any performance degradation?
    Regards Benk

    Aman has it right. WHY?
    the keep pool, alter table cache, etc - I would not be doing ANY of these without a good reason.
    Often, people think that forcing a table to stay in the cache is a good idea. It rarely is. If you want things in the cache, use them. If they are used, their blocks will naturally be cached. If you don't use them, they can get flushed out. But if you don't use them, why do you want them using memory that could be better used by other things?
    My advice: don't try to second-guess Oracle's memory management and caching strategy. If you think you have a reason to, post it here and we can help you (or debunk your reason). Based on the fact that you've apparently calculated the SGA using that silly, meaningless "rule" that says give it 40% of your total RAM, I'd guess that you are looking around for a magic bullet, and you don't actually have a problem to solve.
    John

  • Data Target CACHE not known in APD

    Dear All,
    I have created a CRM extension through the EEWB and I have also created the necessary query on the BW side. Now when I try to create an Analysis Process through the APD, I am not able to add the Analytical Data Storage that I created in CRM through the EEWB.
    It is giving "CRM error: Data target CACHE;ZSBP not known", where ZSBP is my ADS created in CRM.
    Is this some authorization issue or something else? Please suggest.
    Regards,
    SS.

    The same question here: the data source does not appear in the InfoPackage.

  • Data File Cache Significance

    Hi everyone, I have been away from Essbase for a little while and in the meantime our client made the move to Essbase v6. Could somebody please explain the significance, and the correct way to set, the Data File Cache? Also, have the rules for setting the Data Cache changed as a result? Any advice would be appreciated - I understood Essbase v5 quite well but have been out of the loop as far as Essbase v6 goes.

    The data file cache has significance only if your client is using Direct I/O. Direct I/O was introduced beginning with v6 to allow DBAs to explicitly manage data file caching on a database-by-database basis rather than allowing the operating system to do it. The previous I/O management scheme (buffered I/O) was still supported, but Direct I/O was the default. Unfortunately, in installations with multiple applications per server, optimizing the data file cache for each database proved to be a headache, so Hyperion decided to revert to buffered I/O as the default beginning with v6.2. So in Essbase versions 6.0-6.1 Direct I/O is the default and you have to change it in essbase.cfg to use buffered I/O; in version 6.2 and later the reverse is true (please correct me if I'm wrong, everyone). If you choose to use Direct I/O, I believe the conventional wisdom is to make the data file cache big enough to hold all the .pag files in your database; otherwise, set it as large as possible. Good luck, Bruce

  • How to put data into a cache and distribute it to nodes using Oracle Coherence

    Hi Friends,
    I have some random-number data written into a file; I read the data from that file and want to put it into a cache, and partition this data across different nodes (machines) to calculate things like the standard deviation, variance, etc. (or: how can I implement Monte Carlo using Oracle Coherence?). If anyone knows, please suggest a flow.
    Thank you.
    regards
    chandra

    Hi Robert,
    I have some bulk data in an ArrayList or object form that I want to put into the cache,
    but I am not able to. I am using the put method, cache.put(Object key, Object value), but it is not letting me put the data into the cache.
    Can you please help me? I am sending my code; please go through it and tell me where I made the mistake.
    package lab3;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.NearCache;
    import java.io.File;
    import java.io.FileNotFoundException;
    import java.io.PrintWriter;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import java.util.Scanner;
    import javax.naming.Name;
    public class BlockScoleData {
        /*
         * s = the spot market price
         * x = the exercise price of the option
         * v = instantaneous standard deviation of s
         * r = risk-free instantaneous rate of interest
         * t = time to expiration of the option
         * n = number of MC simulations
         */
        private static String outputFile = "D:/cache1/sampledata2.txt";
        private static String inputFile = "D:/cache1/sampledata2.txt";
        NearCache cache;
        List<Credit> creditList = new ArrayList<Credit>(); // Credit is the poster's value class (not shown)

        public void writeToFile(int noofsamples) {
            Random rnd = new Random();
            PrintWriter writer = null;
            try {
                writer = new PrintWriter(outputFile);
                for (int i = 1; i <= noofsamples; i++) {
                    double s = rnd.nextInt(200) * rnd.nextDouble();
                    //double x = rnd.nextInt(250) * rnd.nextDouble();
                    int t = rnd.nextInt(5);
                    double v = rnd.nextDouble();
                    double r = rnd.nextDouble() / 10;
                    //int n = rnd.nextInt(90000);
                    writer.println(s + " " + t + " " + v + " " + r);
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } finally {
                if (writer != null) {
                    writer.close();
                    writer = null;
                }
            }
        }

        public List<Credit> readFromFile() {
            Scanner scanner = null;
            Credit credit = null;
            // List<Credit> creditList = new ArrayList<Credit>();
            try {
                scanner = new Scanner(new File(inputFile));
                while (scanner.hasNext()) {
                    credit = new Credit(scanner.nextDouble(), scanner.nextInt(),
                            scanner.nextDouble(), scanner.nextDouble());
                    creditList.add(credit);
                }
                System.out.println("read the list from file:" + creditList);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } finally {
                if (scanner != null) {
                    scanner.close();
                }
                credit = null;
                scanner = null;
            }
            return creditList;
        }

        // public void putCache(String cachename, List<Credit> list) {
        //     cache = CacheFactory.getCache("VirtualCache");
        //     List<Credit> rand = new ArrayList<Credit>();
        // }

        public Object put(Object key, Object value) {
            // the cast to NearCache only works if "mycache" maps to a near scheme
            // in the cache configuration
            cache = (NearCache) CacheFactory.getCache("mycache");
            String cachename = cache.getCacheName();
            List<Credit> cachelist = new ArrayList<Credit>();
            // Object key;
            // cachelist = (List<Credit>) cache.put(creditList, creditList);
            // NOTE: the whole list is used as both key and value here, which is the
            // likely problem - each entry should get its own small, serializable key.
            cache.put(creditList, creditList);
            System.out.println("read to the cache list from file:" + cache.get(creditList));
            return cachelist;
        }

        public static void main(String[] args) throws Exception {
            NearCache cache = (NearCache) CacheFactory.getCache("mycache");
            new BlockScoleData().writeToFile(20);
            //new BlockScoleData().putCache("Name",);
            System.out.println("New file \"myfile.csv\" has been created to the current directory");
            CacheFactory.ensureCluster();
            new BlockScoleData().readFromFile();
            System.out.println("data read from file successfully");
            List<Credit> creditList = new ArrayList<Credit>();
            new BlockScoleData().put(creditList, creditList);
            System.out.println("read to the cache list from file:" + cache.get(creditList));
            //cache = CacheFactory.getCache("mycache");
            //mycache.put("Name", new BlockScoleData());
            //System.out.println("name of cache is :" + mycache.getCacheName());
            //System.out.println("value in cache is :" + mycache.get("Name"));
            //System.out.println("cache services are :" + mycache.getCacheService());
        }
    }
    regards
    chandra
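
    For the original question (spreading the data across nodes and computing statistics on it), a minimal sketch using a plain NamedCache and one of Coherence's built-in parallel aggregators might look like the following. The cache name "credits", the accessor Credit.getSpot(), and the integer key scheme are assumptions, Credit itself is the poster's class (it would need to be serializable to be distributed), and the cache is assumed to map to a distributed/partitioned scheme in the cache configuration.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.aggregator.DoubleAverage;
    import com.tangosol.util.filter.AlwaysFilter;
    import java.util.List;

    public class CreditLoader {
        public static void load(List<Credit> credits) {
            // A distributed (partitioned) cache spreads entries across the cluster
            // members automatically; each entry gets its own small key.
            NamedCache cache = CacheFactory.getCache("credits");
            for (int i = 0; i < credits.size(); i++) {
                cache.put(Integer.valueOf(i), credits.get(i));
            }
            // Built-in aggregators run in parallel on the members that own the data,
            // e.g. the average of Credit.getSpot() over all entries:
            Object avg = cache.aggregate(AlwaysFilter.INSTANCE, new DoubleAverage("getSpot"));
            System.out.println("average spot = " + avg);
            // Variance / standard deviation would need a custom EntryAggregator.
        }
    }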

  • Help Needed in persisting data in cache from CEP on to a database

    Hi,
    We are trying to create an Oracle Complex Event Processing (CEP) application that persists the data stored in the cache (Oracle Coherence) to a back-end database.
    Let me provide you the steps that I have followed:
    1)     Created a CEP project with cache implementation to store the events.
    2)     Have the following configuration in the context file:
    <wlevs:cache id="Task_IN_Cache" value-type="Task" key-properties="TASKID" caching-system="CachingSystem1">
    <wlevs:cache-store ref="cacheFacade"></wlevs:cache-store>
    </wlevs:cache>
    <wlevs:caching-system id="CachingSystem1" provider="coherence">
    <bean id="cacheFacade" class="com.oracle.coherence.handson.DBCacheStore">
    </bean>
    3)     Have the following in the coherence cache config xml:
    <cache-mapping>
    <cache-name>Task_IN_Cache</cache-name>
    <scheme-name>local-db-backed</scheme-name>
    </cache-mapping>
    <local-scheme>
    <scheme-name>local-db-backed</scheme-name>
    <service-name>LocalCache</service-name>
    <backing-map-scheme>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <local-scheme/>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>com.oracle.coherence.handson.DBCacheStore</class-name>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    </local-scheme>
    4)     Have configured tangosol-coherence-override.xml to make use of coherence in my local machine.
    5)     Have written a class that implements com.tangosol.net.cache.CacheStore
    public class DBCacheStore implements CacheStore{
    But when I try to deploy the project onto the CEP server, I am getting the below error:
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'Task_IN_AD': Cannot resolve reference to bean 'wlevs_stage_proxy_forTask_IN_Cache' while setting bean property 'listeners' with key [0]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'wlevs_stage_proxy_forTask_IN_Cache': Cannot resolve reference to bean '&Task_IN_Cache' while setting bean property 'cacheFactoryBean'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'Task_IN_Cache': Invocation of init method failed; nested exception is java.lang.RuntimeException: Error while deploying application 'AIT_Caching'. Cache store specified for cache 'Task_IN_Cache' does not implement the required 'com.tangosol.net.cache.CacheStore' interface.
    Can you please let me know if I am missing any configuration. Appreciate your help.

    Hi JK,
    Yes my class com.oracle.coherence.handson.DBCacheStore implements the com.tangosol.net.cache.CacheStore interface. I am providing you with the DBCacheStore class.
    package com.oracle.coherence.handson;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.CacheStore;
    import java.sql.DriverManager;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Collection;
    import java.util.Iterator;
    import java.util.LinkedList;
    import java.util.List;
    import java.util.Map;
    import oracle.jdbc.pool.OracleDataSource;
    public class DBCacheStore implements CacheStore {
        protected Connection m_con;
        protected String m_sTableName;

        private static final String DB_DRIVER = "oracle.jdbc.OracleDriver";
        private static final String DB_URL = "jdbc:oracle:thin:@XXXX:1521:XXXX";
        private static final String DB_USERNAME = "XXXX";
        private static final String DB_PASSWORD = "XXXX";

        public DBCacheStore() {
            m_sTableName = "TASK";
            System.out.println("Inside constructor");
            init();
            //store("10002", "10002");
        }

        public void init() {
            try {
                OracleDataSource ods = new OracleDataSource();
                /* Class.forName(DB_DRIVER);
                m_con = DriverManager.getConnection(DB_URL, DB_USERNAME, DB_PASSWORD);
                m_con.setAutoCommit(true); */
                ods.setURL(DB_URL);
                ods.setUser(DB_USERNAME);
                ods.setPassword(DB_PASSWORD);
                m_con = ods.getConnection();
                System.out.println("Connection Successful");
            } catch (Exception e) {
                e.printStackTrace();
                //throw ensureRuntimeException(e, "Connection failed");
            }
        }

        public String getTableName() {
            return m_sTableName;
        }

        public Connection getConnection() {
            return m_con;
        }

        public Object load(Object oKey) {
            Object oValue = null;
            Connection con = getConnection();
            String sSQL = "SELECT TASKID, REQNUMBER FROM " + getTableName()
                    + " WHERE TASKID = ?";
            try {
                PreparedStatement stmt = con.prepareStatement(sSQL);
                stmt.setString(1, String.valueOf(oKey));
                ResultSet rslt = stmt.executeQuery();
                if (rslt.next()) {
                    oValue = rslt.getString(2);
                    if (rslt.next()) {
                        throw new SQLException("Not a unique key: " + oKey);
                    }
                }
                stmt.close();
            } catch (SQLException e) {
                //throw ensureRuntimeException(e, "Load failed: key=" + oKey);
            }
            return oValue;
        }

        public void store(Object oKey, Object oValue) {
            System.out.println("Inside Store method");
            NamedCache cache = CacheFactory.getCache("Task_IN_Cache");
            System.out.println("Cache Service:" + " " + cache.getCacheService());
            System.out.println("Cache Size:" + " " + cache.size());
            System.out.println("Is Active:" + " " + cache.isActive());
            System.out.println("Is Empty:" + " " + cache.isEmpty());
            //cache.put("10003", "10003");
            //System.out.println("Values:" + " " + cache.put("10003", "10003"));
            Connection con = getConnection();
            String sTable = getTableName();
            String sSQL;
            if (load(oKey) != null) {
                sSQL = "UPDATE " + sTable + " SET REQNUMBER = ? where TASKID = ?";
            } else {
                sSQL = "INSERT INTO " + sTable + " (TASKID, REQNUMBER) VALUES (?,?)";
            }
            try {
                PreparedStatement stmt = con.prepareStatement(sSQL);
                int i = 0;
                stmt.setString(++i, String.valueOf(oValue));
                stmt.setString(++i, String.valueOf(oKey));
                stmt.executeUpdate();
                stmt.close();
            } catch (SQLException e) {
                //throw ensureRuntimeException(e, "Store failed: key=" + oKey);
            }
        }

        public void erase(Object oKey) {
            Connection con = getConnection();
            String sSQL = "DELETE FROM " + getTableName() + " WHERE id=?";
            try {
                PreparedStatement stmt = con.prepareStatement(sSQL);
                stmt.setString(1, String.valueOf(oKey));
                stmt.executeUpdate();
                stmt.close();
            } catch (SQLException e) {
                // throw ensureRuntimeException(e, "Erase failed: key=" + oKey);
            }
        }

        public void eraseAll(Collection colKeys) {
            throw new UnsupportedOperationException();
        }

        public Map loadAll(Collection colKeys) {
            throw new UnsupportedOperationException();
        }

        public void storeAll(Map mapEntries) {
            System.out.println("Inside Store method");
            throw new UnsupportedOperationException();
        }

        public Iterator keys() {
            Connection con = getConnection();
            String sSQL = "SELECT id FROM " + getTableName();
            List list = new LinkedList();
            try {
                PreparedStatement stmt = con.prepareStatement(sSQL);
                ResultSet rslt = stmt.executeQuery();
                while (rslt.next()) {
                    Object oKey = rslt.getString(1);
                    list.add(oKey);
                }
                stmt.close();
            } catch (SQLException e) {
                //throw ensureRuntimeException(e, "Iterator failed");
            }
            return list.iterator();
        }
    }

  • Bad Data from Cache after infoprovider update

    Hello,
    We have queries running off a MultiProvider consisting of 4 cubes. Occasionally, when we run queries after deltas have been loaded into the cubes, we get incomplete or bad data. Deactivating the cache, or deleting the cache and running the query again, always seems to resolve the problem.
    Any thoughts on what the issue might be and how to resolve?
    Thanks
    Stan Pickford

    Stanley,
    After any data change in an InfoCube, table RSDINFOPROVDATA should contain the timestamp. This timestamp is used by the OLAP cache to determine whether new data has been loaded or whether the cache is still valid.
    I'm not aware of any problems in this area. If it is reproducible and you can't fix it, open a message with SAP Support.
    Regards,
    Marc
    SAP NetWeaver RIG

  • Data Buffer Cache Quality

    Hi All,
    Can somebody please tell me some ways in which I can improve the data buffer cache quality? Presently it is 51.2%. The DB is 10.2.0.2.0.
    I want to know what factors I need to keep in mind if I want to increase DB_CACHE_SIZE.
    Also, I want to know how I can find out the cache hit ratio.
    Further, I want to know which are the most frequently accessed objects in my DB.
    Thanks and Regards,
    Nick.

    Nick-- wud b DBA wrote:
    Hi Aman,
    Thanks. Can you please give the appropriate query for that?
    Moreover, when I'm running:
    SQL> desc V$SEGMENT-STATISTICS;
    it is giving the following error:
    SP2-0565: Illegal identifier.
    Regards,
    Nick.

    LOL dude, I put that in by mistake. It's a dash (-) sign, but we need an underscore (_) sign: V$SEGMENT_STATISTICS.
    About the query, it depends on what you really mean by "most used object". If you mean the object that is undergoing lots of reads and writes, this may help:
    SELECT Rownum AS Rank,
    Seg_Lio.*
    FROM (SELECT St.Owner,
    St.Obj#,
    St.Object_Type,
    St.Object_Name,
    St.VALUE,
    'LIO' AS Unit
    FROM V$segment_Statistics St
    WHERE St.Statistic_Name = 'logical reads'
    ORDER BY St.VALUE DESC) Seg_Lio
    WHERE Rownum <= 10
    UNION ALL
    SELECT Rownum AS Rank,
    Seq_Pio_r.*
    FROM (SELECT St.Owner,
    St.Obj#,
    St.Object_Type,
    St.Object_Name,
    St.VALUE,
    'PIO Reads' AS Unit
    FROM V$segment_Statistics St
    WHERE St.Statistic_Name = 'physical reads'
    ORDER BY St.VALUE DESC) Seq_Pio_r
    WHERE Rownum <= 10
    UNION ALL
    SELECT Rownum AS Rank,
    Seq_Pio_w.*
    FROM (SELECT St.Owner,
    St.Obj#,
    St.Object_Type,
    St.Object_Name,
    St.VALUE,
    'PIO Writes' AS Unit
    FROM V$segment_Statistics St
    WHERE St.Statistic_Name = 'physical writes'
    ORDER BY St.VALUE DESC) Seq_Pio_w
    WHERE Rownum <= 10;

    But if you are looking for the objects that see the most waits, this query may help:
    select * from
       (select
          DECODE
          (GROUPING(a.object_name), 1, 'All Objects', a.object_name)
       AS "Object",
    sum(case when
       a.statistic_name = 'ITL waits'
    then
       a.value else null end) "ITL Waits",
    sum(case when
       a.statistic_name = 'buffer busy waits'
    then
       a.value else null end) "Buffer Busy Waits",
    sum(case when
       a.statistic_name = 'row lock waits'
    then
       a.value else null end) "Row Lock Waits",
    sum(case when
       a.statistic_name = 'physical reads'
    then
       a.value else null end) "Physical Reads",
    sum(case when
       a.statistic_name = 'logical reads'
    then
       a.value else null end) "Logical Reads"
    from
       v$segment_statistics a
    where
       a.owner like upper('&owner')
    group by
       rollup(a.object_name)) b
    where (b."ITL Waits">0 or b."Buffer Busy Waits">0)This query's reference:http://www.dba-oracle.com/t_object_wait_v_segment_statistics.htm
    So it would depend upon that on what ground you want to get the objects.
    About the cache increase, are you seeing any wait events related to buffer cache or DBWR in the statspack report?
    HTH
    Aman....
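
    The hit-ratio question itself never gets a query in this thread. One common formula is hit ratio = 1 - physical_reads / (db_block_gets + consistent_gets); a small JDBC sketch against V$BUFFER_POOL_STATISTICS (the connection details are placeholders, and the Oracle JDBC driver is assumed to be on the classpath) could look like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class BufferCacheHitRatio {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - replace with your own.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@host:1521:SID", "user", "password");
            Statement stmt = con.createStatement();
            // Classic formula: 1 - physical_reads / (db_block_gets + consistent_gets)
            ResultSet rs = stmt.executeQuery(
                    "SELECT name, 1 - (physical_reads / NULLIF(db_block_gets + consistent_gets, 0)) AS hit_ratio "
                  + "FROM v$buffer_pool_statistics");
            while (rs.next()) {
                System.out.println(rs.getString("name") + " hit ratio: " + rs.getDouble("hit_ratio"));
            }
            con.close();
        }
    }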

  • 5 months rolling monthly report by fetching data from cache & Database

    Scenario: A monthly report is created at each month end, containing one summary tab and other detail tabs. The summary tab contains the summary data of all the months (the current month and the previous months).
    Problem:
    The current month's data has to come directly from the database, while the previous months' data comes from the cache, and the whole thing is cached again this time so that when we run next month's report, the data up to this month also comes from the cache.
    Can anyone please help me ASAP.

    What's wrong with F_GET_COMPANY_CODE ?  Below is similar code - try running this and seeing what you get:
    report zlocal_jc_t001w.
    tables:
      t001k,     "Valuation area
      t001w.     "Plants/Branches
    parameters:
      p_bukrs          like t001k-bukrs default '1000'.
    select-options:
      s_werks          for t001w-werks.
    start-of-selection.
      perform get_data.
    *&      Form  get_data
    form get_data.
      data:
        begin of gt_t001k occurs 10,
          bukrs             like t001k-bukrs,
          bwkey             like t001k-bwkey,
          werks             like t001w-werks,
        end of gt_t001k.
      select
        t001k~bukrs
        t001k~bwkey
        t001w~werks
        into corresponding fields of table gt_t001k
        from t001k as t001k
        inner join t001w as t001w on t001w~bwkey = t001k~bwkey
        where t001k~bukrs = p_bukrs
        and   t001w~werks in s_werks.
      loop at gt_t001k.
        write: /
          gt_t001k-bukrs,
          gt_t001k-bwkey,
          gt_t001k-werks.
      endloop.
    endform.                    "get_data
    As for links to locally defined database tables, that will depend on what the keys defined in SE11 are e.g. there will be a primary key plus relationships to other tables (for example "WERKS LIKE YSDA_EXP_PRTLOG-YY_PLANT" indicates YY_PLANT relates to T001W).
    Jonathan

  • Data Buffer Cache Limit

    Is there any way that I could signal the data buffer cache to write all data to the data files once the amount of dirty blocks reaches, say, 50 MB?
    I am processing BLOBs, one blob at a time, some of which exceed 100 MB in size, and the difficult thing is that I cannot write to disk until the whole blob is finished, as it is one transaction.
    Before anyone suggests open, process, close, commit... well, I tried that, but it also gives the error "no free buffer pool", and this happens once the file is about twice the buffer size: a 100 MB file when db_cache_size is 50 MB.
    Any ideas?

    Hello,
    I am using Oracle 9.0.1.3.1.
    I am getting error ORA-00379: no free buffers available in buffer pool DEFAULT for block size 8K.
    My Init.ora file is
    # Copyright (c) 1991, 2001 by Oracle Corporation
    # Cache and I/O
    db_block_size=8192
    db_cache_size=104857600
    # Cursors and Library Cache
    open_cursors=300
    # Diagnostics and Statistics
    background_dump_dest=C:\oracle\admin\iasdb\bdump
    core_dump_dest=C:\oracle\admin\iasdb\cdump
    timed_statistics=TRUE
    user_dump_dest=C:\oracle\admin\iasdb\udump
    # Distributed, Replication and Snapshot
    db_domain="removed"
    remote_login_passwordfile=EXCLUSIVE
    # File Configuration
    control_files=("C:\oracle\oradata\iasdb\CONTROL01.CTL", "C:\oracle\oradata\iasdb\CONTROL02.CTL", "C:\oracle\oradata\iasdb\CONTROL03.CTL")
    # Job Queues
    job_queue_processes=4
    # MTS
    dispatchers="(PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer)", "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)"
    # Miscellaneous
    aq_tm_processes=1
    compatible=9.0.0
    db_name=iasdb
    # Network Registration
    instance_name=iasdb
    # Pools
    java_pool_size=41943040
    shared_pool_size=33554432
    # Processes and Sessions
    processes=150
    # Redo Log and Recovery
    fast_start_mttr_target=300
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=33554432
    sort_area_size=524288
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS
