ICM Statistics - Suppressed Cache Entries

Well, I've done the usual search in SAP xSearch and Google, not to mention checking the application help, but I can't seem to find the answer.
We have a CRM ABAP 7.0 system (kernel 7.01).  When I display the ICM statistics for the HTTP Server Cache (txn SMICM), I see:
  Cache Statistics
   Cache Size (Bytes)                         52,428,800
   Occupied Cache Memory (Bytes)              52,422,656
   Maximum Number of Cache Entries                10,000
   Current Number of Cache Entries                 5,968
   Number of Suppressed Cache Entries             55,593
   Total Number of Cache Accesses             27,134,021
   Number of Cachable Accesses                28,224,213
   Number of Cache Hits                       23,749,222             84.14  %
   Number of Failed Cache Accesses             4,474,991             15.86  %
   Number of Cache Hits in Memory             23,749,222             84.14  %
   Maximum Number of UFO Entries                   2,000
   Current Number of UFO Entries                       0
   Number of Accesses to UFO List                    416
   Number of UFO Hits                                 44
What, pray tell, are "Suppressed Cache Entries" (highlighted above), and what are their effects on the server cache performance?  Do I need to 'care' about these suppressed entries?
Regards,
bryan

Thanks to Ashutosh and Clébio for responding to my query.
Ashutosh, the link you supplied was interesting, and it is possible that web error pages, such as "400 session timed out", could be the so-called suppressed cache pages.
Clébio, as for "the number of entries in the cache that have been deleted to release space to new ones," I think the message relevant to that action is the CCMS alert "EvictedEntries". I'm not certain, as that is another alert that isn't very well documented.
bryan
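
For reference, the statistics above show the occupied cache memory within a few kilobytes of the configured 50 MB cache size, so if the suppressed entries turn out to be cachable responses rejected for lack of space, the usual lever would be to enlarge the server cache in the instance profile. A minimal sketch, assuming the standard ICM parameter names (the values are illustrative only, not from this thread):

    # Instance profile: enlarge HTTP server cache 0 (defaults: 50 MB, 10,000 entries)
    icm/HTTP/server_cache_0/size_MB     = 100
    icm/HTTP/server_cache_0/max_entries = 20000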

Similar Messages

  • Expire all local cache entries at specific time of day

    Hi,
    We have a need for expiring all local cache entries at specific time(s) of the day (all days, like a crontab).
    Is it possible through the Coherence config?
    Thanx,

    Hi,
    AFAIK there is no out-of-the-box solution, but you can certainly use the Coherence API along with Quartz to develop a simple class that can be triggered to remove all the entries from the cache at a certain time (a sketch follows below). You can also define your own custom cache factory configuration; an example is available here: http://sites.google.com/site/miscellaneouscomponents/Home/time-service-for-oracle-coherence
    Hope this helps!
    Cheers,
    NJ
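
    To make that concrete, here is a minimal sketch of the Quartz-plus-Coherence approach described above. It assumes Quartz 2.x and the classic NamedCache API; the cache name, job identity, and 02:00 schedule are placeholders, not something from this thread:

        import org.quartz.CronScheduleBuilder;
        import org.quartz.Job;
        import org.quartz.JobBuilder;
        import org.quartz.JobDetail;
        import org.quartz.JobExecutionContext;
        import org.quartz.Scheduler;
        import org.quartz.Trigger;
        import org.quartz.TriggerBuilder;
        import org.quartz.impl.StdSchedulerFactory;
        import com.tangosol.net.CacheFactory;

        // Quartz job that empties one named cache ("local-entries" is a placeholder).
        public class ClearCacheJob implements Job {
            public void execute(JobExecutionContext ctx) {
                CacheFactory.getCache("local-entries").clear();
            }

            public static void main(String[] args) throws Exception {
                Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
                JobDetail job = JobBuilder.newJob(ClearCacheJob.class)
                        .withIdentity("clearLocalEntries").build();
                Trigger trigger = TriggerBuilder.newTrigger()
                        .withSchedule(CronScheduleBuilder.dailyAtHourAndMinute(2, 0)) // 02:00 every day
                        .build();
                scheduler.scheduleJob(job, trigger);
                scheduler.start(); // keeps running until shutdown() is called
            }
        }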

  • How to clear Local-Cache Entries for a Query in BW?

    Hi There,
    I'm a student and I need your help for my thesis!
    I execute the same query many times in BEx Web Analyzer and note the query response time under ST03N, using a different read mode each time while the cache mode is inactive (Query Monitor RSRT).
    The first time I execute the query it also reads from the database; the second time it uses the local cache, and that's okay!
    My problem is:
    When I change the read mode and execute the query again, the first run still uses the old entries from the cache, so I get a wrong response time for the first run!
    I know that while the cache mode is inactive the local cache will still be used, so how can I delete the local cache entries each time I change the read mode and execute the query? In the cache monitor (RSRCACHE) I find only entries for the global cache, etc.
    I've already tried closing the session and logging in to the system again, but it doesn't solve the problem!
    I don't have permission (access rights) to switch off the complete cache (local and global).
    Any ideas, please?
    Thanks and best regards,
    Rachidoo
    P.S.: Sorry for my bad English! I have to refresh it soon :)

    Hi Praba,
    the entries stored in RSRCACHE are for the global cache; there is no entry for my query in the cache monitor!
    I execute the query in RSRT using the Java web button with cache mode inactive, so the results are stored in the local cache.
    This is what I want to do for the performance tests in my thesis:
    1. Run a query for the first time with cache inactive and note its runtime.
    2. Run the query again with cache inactive and note its runtime.
    3. Clear the local cache (I don't know how to do this!).
    4. Change the read mode of the query in RSRT, then run the same query for the first time and note its runtime.
    5. Run the query again and note its runtime.
    I'm doing the same procedure for each read mode.
    The problem is in step 4: the OLAP processor gets the old results from the cache, so I get wrong runtimes for my tests.
    Generating the report doesn't help. Any ideas, please?

  • Performance for messaging queue - queue implemented as single cache entry.

    Hey Brian/All,
    Has there been any progress on addressing the long-standing performance issues with messaging?
    That is, messaging stores a queue within a single cache entry, which means it needs to deserialize, add the item, and reserialize every time we add an item to the queue (sketched below).
    For push replication, this means a burst of data can bring messaging to its knees and cause a cluster to fall over (e.g. a clear of a large cache, or a remote site that is unavailable causing queues to grow to a very large size).
    I have also noticed that when a queue gets very large, the JMX suspend/drain times out and throws an exception.
    Cheers,
    Neville.
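
    To illustrate the access pattern being described (this is a sketch of the pattern only, not the actual push-replication internals; the cache and key names are placeholders):

        import java.util.ArrayList;
        import java.util.List;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class SingleEntryQueue {
            // O(queue size) per enqueue: the full list is deserialized and
            // reserialized on every add, which is what hurts under bursts.
            // (Concurrency control omitted for brevity.)
            @SuppressWarnings("unchecked")
            public static void enqueue(String queueKey, Object item) {
                NamedCache cache = CacheFactory.getCache("messages");
                List<Object> queue = (List<Object>) cache.get(queueKey); // deserialize everything
                if (queue == null) {
                    queue = new ArrayList<Object>();
                }
                queue.add(item);            // the only real work
                cache.put(queueKey, queue); // reserialize everything
            }
        }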

    Hi Friends,
    Create a function that is called from ejbCreate.
    Inside this function make the connections as in the snippet below, and close them in ejbRemove() or on exception.
    import java.io.FileInputStream;
    import java.util.Properties;
    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;
    import com.ibm.mq.MQC;
    import com.ibm.mq.MQEnvironment;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;

    // Load the connection settings from the properties file.
    Properties props = new Properties();
    FileInputStream fis = new FileInputStream("D:/MessageRouter_UAT/AppConfig.Properties");
    props.load(fis);
    fis.close();

    // Route java.util.logging output to a file.
    String logPath = props.getProperty("Log_path").trim() + ".log";
    FileHandler logHandler = new FileHandler(logPath);
    logHandler.setFormatter(new SimpleFormatter());
    Logger logWriter = Logger.getLogger("MessageRouter");
    logWriter.addHandler(logHandler);
    logWriter.setLevel(Level.ALL);

    // Configure the MQ client connection; MQEnvironment fields are static,
    // so no MQEnvironment instance is needed.
    MQEnvironment.hostname = props.getProperty("MQ_HOST_NAME").trim();
    MQEnvironment.port = Integer.parseInt(props.getProperty("MQ_PORT").trim());
    MQEnvironment.channel = props.getProperty("CHANNEL_NAME").trim();
    MQEnvironment.properties.put(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_CLIENT);

    // Open the queue manager and the four output queues.
    MQQueueManager q_Mgr = new MQQueueManager(props.getProperty("QUEUE_MANAGER").trim());
    MQQueue queueID = q_Mgr.accessQueue(props.getProperty("ID_Queue").trim(), MQC.MQOO_OUTPUT);
    MQQueue queueSG = q_Mgr.accessQueue(props.getProperty("SG_Queue").trim(), MQC.MQOO_OUTPUT);
    MQQueue queueHK = q_Mgr.accessQueue(props.getProperty("HK_Queue").trim(), MQC.MQOO_OUTPUT);
    MQQueue queueCL = q_Mgr.accessQueue(props.getProperty("CL_Queue").trim(), MQC.MQOO_OUTPUT);
    Thanks,
    Arun Prithviraj

  • Cache entry created a long time after the report runs

    We have a report that returns more than 300,000 records. We get the results of the report quickly, but the cache entry only gets created some time later, say around 30 minutes or so. Any idea why this delay? Is it that the report caches 25 records at a time (the default number of rows per page), shows them quickly, and the rest of the records are cached in the background? Is there a way we can optimize this?

    Did you check how much time the entire report takes to execute (even though the first 25 rows come up quickly)? I suspect it is > 30 mins.
    OBIEE is not meant to be a data dump tool and there is little that can be done (except better hardware).

  • Cache entry not found error message in java console 1.6.0_05 - Citrix ICA

    Client information:
    Windows XP SP2
    Java Runtime version 1.6.0_05
    Application: Citrix ICA version 9.4.1868
    Slow Citrix ICA client connection, with repeated errors in the Java console stating "cache entry not found". However, when I downgrade to Java Runtime version 1.5.0_10, I do not see the "cache entry not found" errors and the Citrix ICA connection is much faster: it basically launches and connects in 10 seconds versus 2 minutes.
    Any ideas? Thanks!

    Hi,
    All your classes must be accessible through the web. The easiest solution is to put all your classes in the same folder as your web page.
    If your classes are in a different folder (which must be mapped as a virtual directory), try adding the codebase attribute to your applet.
    Regards,
    Kurt.

  • The specified cache entry could not be loaded during initialization

    The OBIEE server fails every few minutes of operation.
    I saw in the NQServer.log:
    The specified cache entry could not be loaded during initialization.
    Cache entry, '/home/oracleadmin/OracleBIData/cache/NQS_rprt_734086_39693_00000001.TBL', is invlaid due to either non-existing file name, size does not match, exceeding maximum space limit nor exceeding maximum entry limit, and it's been removed.
    I checked the directory and there is no space limitation.
    Can someone please clarify?
    The server/Config/NQSConfig.INI is as follows:
    [ CACHE ]
    ENABLE = YES;
    DATA_STORAGE_PATHS = "/home/oraceladmin/OracleBIData/cache" 2000 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;
    POPULATE_AGGREGATE_ROLLUP_HITS = NO;
    USE_ADVANCED_HIT_DETECTION = NO;
    MAX_SUBEXPR_SEARCH_DEPTH = 7;

    It's more than enough if you are using it for .tmp and .tbl file storage. Have you checked WORK_DIRECTORY_PATHS in the NQSConfig.ini file? Is it on a different mount point? Usually we will create a separate mount point and use that as the storage location.
    b. "Instead of using SAPurgeAllCache(), can I add a shell script which deletes the files from the cache directory and then executes run-sa.sh?" You need to understand the difference between the OBIEE RPD cache and the .tbl and .tmp files. SAPurgeAllCache() will purge the cache from the RPD; it won't purge .tmp or .tbl files. These files are stored in some physical location. When a report starts executing, a .tmp file starts growing, and after execution of the report it is purged by the OBIEE server automatically.
    see this blog for some insight into this topic
    Re: TMP Files uneder /OracleBIData/tmp
    Regards,
    Sandeep

  • How to clear old CACHE entries -- RSRCACHE

    Hello gurus: what is the suggested method for clearing yesterday's cache entries in RSRCACHE?
    We are still using 3.1 and will be upgrading soon.
    I will be happy to assign points, and thanks for your help.

    Hi
    You can go to RSRT,
    select the Cache Monitor,
    and at the top you will find the Delete option.
    From there you can delete the main memory cache, or whatever is required.
    Regards
    M.A

  • Big binaries as cache entries

    I know that we can put files as cache entries in Coherence (actually we are using this feature to have a distributed Lucene index, inspired by Infinispan's implementation). But handling big binaries is another kind of animal: I cannot have a 4 GB file in memory (near cache or wherever).
    So I was wondering if there is a way to stream a cache entry instead of reading it whole... is this possible? I was thinking of having a cache for big binaries with no memory space, so everything would be kept in a backing map. Then every time someone gets a key from this cache, they would obtain a 'DataHandler' or 'InputStream' to read the big binary (a sketch of the idea follows below).
    Why?
    Our software uses the cluster to read almost all business data except the files attached to business objects. The idea is to have all the files our software uses on the IMDG, instead of having another piece of software do this for us (i.e. a distributed file system or a network file system, as in our current implementation).
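    A minimal sketch of the streaming idea from the post, under stated assumptions: the file is stored as fixed-size byte[] chunks under "name#index" keys in a cache named "big-binaries" (both naming conventions are placeholders, not from the thread), and an InputStream pulls one chunk at a time so the whole binary never has to be materialized:

        import java.io.IOException;
        import java.io.InputStream;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class ChunkedCacheInputStream extends InputStream {
            private final NamedCache chunks = CacheFactory.getCache("big-binaries");
            private final String fileName;
            private byte[] current = new byte[0]; // chunk being consumed
            private int pos = 0;                  // read position within current
            private int nextChunk = 0;            // index of the next chunk to fetch

            public ChunkedCacheInputStream(String fileName) {
                this.fileName = fileName;
            }

            @Override
            public int read() throws IOException {
                while (pos >= current.length) {
                    byte[] next = (byte[]) chunks.get(fileName + "#" + nextChunk++);
                    if (next == null) {
                        return -1; // no more chunks: end of stream
                    }
                    current = next;
                    pos = 0;
                }
                return current[pos++] & 0xFF;
            }
        }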

    You are right, there are two options, and they are both commonly used.
    Storing on the file system is a bit easier to implement for many and might be a touch more performant, with less encoding and decoding going on. But it has a higher risk of synchronization problems, since there are many ways for the file objects to be modified, moved, or removed without anything changing in the database. Thus one should build logic that checks for the actual existence of the file at the URI before trying to serve it up.
    Storing the file in the database mitigates the synchronization issues. But it takes a bit more to set up and could put a lot more data in the database itself, which may or may not be an issue depending on one's setup.
    Either is an acceptable choice; just work with the various pros and cons of each.

  • Purging the cache entries according to table and SA

    Hi gurus,
    I have a few questions on cache purging:
    1. How many types of purging are there?
    2. Is it possible to purge the cache based on tables, i.e. if entries are generated on table A, only those entries should be purged in the Cache Manager?
    3. Is it possible to purge the cache based on subject areas, i.e. if entries are generated on subject area A, only those entries should be purged in the Cache Manager?
    4. Suppose there are 100 entries in the Cache Manager and I want to purge only selected entries, i.e. the 25th, 50th, 75th, and 100th entries; how can I achieve this?
    Regards,

    You can get the same from the help file.
    To purge the cache manually with the Cache Manager facility:
    1. Use the Administration Tool to open a repository in online mode.
    2. Select Manage > Cache to open the Cache Manager dialog box.
    3. Select Cache or Physical mode by selecting the appropriate tab in the left pane.
    4. Navigate the explorer tree to display the associated cache entries in the right pane.
    5. Select the cache entries to purge, and then select Edit > Purge to remove them.
    In Cache mode, select the entries to purge from those displayed in the right pane.
    In Physical mode, select the database, catalog, schema, or tables to purge from the explorer tree in the left pane.
    In Cache mode, you can purge:
    One or more selected cache entries associated with the open repository.
    One or more selected cache entries associated with a specified business model.
    One or more selected cache entries associated with a specified user within a business model.
    In Physical mode, you can purge:
    All cache entries for all tables associated with one or more selected databases.
    All cache entries for all tables associated with one or more selected catalogs.
    All cache entries for all tables associated with one or more selected schemas.
    All cache entries associated with one or more selected tables.
    SAPurgeCacheByQuery. Purges a cache entry that exactly matches a specified query. For example, using the following query, you would have a query cache entry that retrieves the names of all employees earning more than $100,000:
    select lastname, firstname from employee where salary > 100000;
    The following call purges the cache entry associated with this query:
    Call SAPurgeCacheByQuery('select lastname, firstname from employee where salary > 100000' );
    SAPurgeCacheByTable. Purges all cache entries associated with a specified physical table name (fully qualified) for the repository to which the client has connected.
    This function takes up to four parameters representing the four components (database, catalog, schema and table name proper) of a fully qualified physical table name. For example, you might have a table with the fully qualified name of DBName.CatName.SchName.TabName. To purge the cache entries associated with this table in the physical layer of the Oracle BI repository, execute the following call in a script:
    Call SAPurgeCacheByTable( 'DBName', 'CatName', 'SchName', 'TabName' );
    NOTE: Wild cards are not supported by the Oracle BI Server for this function. Additionally, DBName and TabName cannot be null. If either one is null, you will receive an error message.
    SAPurgeAllCache. Purges all cache entries. The following is an example of this call:
    Call SAPurgeAllCache();
    SAPurgeCacheByDatabase. Purges all cache entries associated with a specific physical database name. A record is returned as a result of calling any of the ODBC procedures to purge the cache. This function takes one parameter that represents the physical database name and the parameter cannot be null. The following shows the syntax of this call:
    Call SAPurgeCacheByDatabase( 'DBName' );
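    If these procedures need to be issued from Java rather than from a command-line script, one hedged option is a plain JDBC statement against an ODBC DSN configured for the BI Server. The DSN name and credentials below are placeholders, not from the help text:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class PurgeBICache {
            public static void main(String[] args) throws Exception {
                // "AnalyticsWeb" is a placeholder DSN pointing at the Oracle BI Server.
                Connection con = DriverManager.getConnection(
                        "jdbc:odbc:AnalyticsWeb", "Administrator", "password");
                Statement st = con.createStatement();
                st.execute("Call SAPurgeAllCache()"); // same ODBC procedure shown above
                st.close();
                con.close();
            }
        }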
    For Q4:
    SAGetSharedRequestKey. An ODBC procedure that takes a logical SQL statement from the Oracle BI Presentation Services and returns a request key value.
    The following shows the syntax of this procedure:
    SAGetSharedRequestKey('sql-string-literal')
    Please mark if this helps.

  • C350- LDAP Default Cache Entries - 10000

    Hi mentors,
    Can someone explain what will happen if I increase the LDAP cache entries on my C350?
    What will happen if I set it to 50,000? What is the impact?
    Thanks for your help,
    kira

    From a cache size standpoint, I would recommend adjusting it based on the number of valid unique email addresses expected within the cache TTL period. If you set the TTL to 86,400 seconds (24 hours), you have 50,000 valid email addresses (unique users, distribution groups, etc.), and there is a likelihood they will all receive an email within those 24 hours, then set the cache size to 50,000.
    Erich

  • Static netbios cache entry

    Hello
    I have two routers connected over DLSw; each router is connected to a token ring. If I want to configure a static cache entry, which command should I use:
    netbios name-cache ...
    or
    dlsw icanreach netbios-name ...
    Thanks for your help
    Louis

    NetBIOS name caching is an older feature which was used in conjunction with RSRB
    and Local Acknowledgement. The cache that DLSw+ keeps for NetBIOS names is a
    completely different cache, optimized for DLSw+ operation. By configuring 'dlsw
    icanreach netbios-name ...' on a router, it advertises to its peers via capabilities exchange
    that it has a route to the NetBIOS station in question, thereby creating static remote entries
    in each of its remote peers. This is the configuration you will most likely want to use; a sketch follows below.
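
    For reference, the command takes the NetBIOS station name directly; a one-line sketch in which SERVER01 is a placeholder name, not from this thread:

        dlsw icanreach netbios-name SERVER01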

  • Suppressing "No Entry" calendar entry when printing in the list view

    How can I go about suppressing the "No Entry" item for a date when printing the whole month in the calendar view? I would prefer to only see printed dates that have an actual entry in them.

    Hi Dwayne,
    In SharePoint 2013, we could manually create a list of all users. Here is the procedure:
    Go to Site Settings > People and Groups > Site Members.
    Modify the URL
    http://sitename/_layouts/15/people.aspx?MembershipGroupId=8 to
    http://sp/sites/tutu/_layouts/15/people.aspx?MembershipGroupId=0 ; now you will see all people in this site.
    Change the view to List view, and copy the list view ID from the URL
    http://sitename/_layouts/15/people.aspx?MembershipGroupId=0&View={viewID}
    Go to Settings > List settings, and copy the list ID from the URL
    http://sitename/_layouts/15/listedit.aspx?List=listeid&Source=....
    Now type this address in IE:
    http://sitename/_vti_bin/owssvr.dll?CS=109&Using=_layouts/query.iqy&List=[LISTID]&View=[VIEWID]&CacheControl , such as
    http://sp/sites/tutu/_vti_bin/owssvr.dll?CS=109&Using=_layouts/query.iqy&List=f3958d27-9c2f-4f8d-b221-89466e816667&View=696BFDC5-0C6E-4E27-818F-0E6292A18407&CacheControl=1
    Save the owssvr.iqy file from the SharePoint site.
    Now you can see the file on your desktop with all users; save it as allusers in Excel.
    Then import it to your SharePoint site: add an app > find an app > Import Spreadsheet.
    Regards,
    Rebecca Tu
    TechNet Community Support

  • Cache entries expiring after doing bulk insert

    We are seeing a peculiar issue of expired entries since we started using bulk loading in Coherence 3.5:
    com.tangosol.net.NamedCache.putAll(<hashmap>)
    Earlier, we were using com.tangosol.net.NamedCache.put(key, value, EXPIRY_NEVER). That code works without any issues and we never saw any missing entries. Here is our Coherence config XML. Any clue?
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>repl-*</cache-name>
          <scheme-name>repl-default</scheme-name>
        </cache-mapping>
        <cache-mapping>
          <cache-name>dist-*</cache-name>
          <scheme-name>dist-default</scheme-name>
        </cache-mapping>
        <cache-mapping>
          <cache-name>GRIDCACHE</cache-name>
          <scheme-name>SampleOverflowScheme</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <local-scheme>
          <scheme-name>SampleMemoryLimitedScheme</scheme-name>
          <high-units>300000</high-units>
          <low-units>250000</low-units>
          <expiry-delay>24h</expiry-delay>
        </local-scheme>
        <overflow-scheme>
          <scheme-name>SampleOverflowScheme</scheme-name>
          <front-scheme>
            <local-scheme>
              <scheme-ref>SampleMemoryLimitedScheme</scheme-ref>
            </local-scheme>
          </front-scheme>
          <back-scheme>
            <external-scheme>
              <scheme-ref>SampleDiskScheme</scheme-ref>
            </external-scheme>
          </back-scheme>
        </overflow-scheme>
        <external-scheme>
          <scheme-name>SampleDiskScheme</scheme-name>
          <lh-file-manager>
            <directory>/apps/uma/smicache/store2</directory>
            <file-name>{cache-name}.store</file-name>
          </lh-file-manager>
        </external-scheme>
        <distributed-scheme>
          <scheme-name>dist-default</scheme-name>
          <backing-map-scheme>
            <class-scheme>
              <scheme-ref>default-backing-map</scheme-ref>
            </class-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <class-scheme>
          <scheme-name>default-backing-map</scheme-name>
          <class-name>com.tangosol.util.SafeHashMap</class-name>
        </class-scheme>
        <replicated-scheme>
          <scheme-name>repl-default</scheme-name>
          <backing-map-scheme>
            <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
        </replicated-scheme>
        <invocation-scheme>
          <service-name>InvocationService</service-name>
          <autostart>true</autostart>
        </invocation-scheme>
      </caching-schemes>
    </cache-config>
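
    One hedged observation that may explain the symptom: the front scheme used by GRIDCACHE sets <expiry-delay>24h</expiry-delay>, and a plain putAll() inserts entries with the scheme's default expiry, whereas put(key, value, CacheMap.EXPIRY_NEVER) overrides it per entry. If that is indeed the cause, bulk loading with an explicit expiry should behave like the old code; a sketch under that assumption:

        import java.util.Map;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.cache.CacheMap;

        public class BulkLoad {
            // putAll() cannot carry a per-entry expiry, so fall back to put()
            // with an explicit EXPIRY_NEVER for each entry of the batch.
            public static void loadForever(Map<Object, Object> batch) {
                NamedCache cache = CacheFactory.getCache("GRIDCACHE");
                for (Map.Entry<Object, Object> e : batch.entrySet()) {
                    cache.put(e.getKey(), e.getValue(), CacheMap.EXPIRY_NEVER);
                }
            }
        }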

    As David already wrote: open the source file in a (hex) editor and check whether all lines are correctly terminated with carriage return + line feed.
    Olaf Helper

  • ARP cache entry of a switch

    Hello...
    I came across a particular question that got me a bit confused.
    Please see the attachment for the network topology. Question: after HostA pings HostB, which entry will be in the ARP cache of HostA to support this transmission?
    a) Interface address: 192.168.4.7; MAC: 000f.2480.8916
    b) Interface address: 192.168.4.7; MAC: 0010.5a0c.feae
    c) Interface address: 192.168.6.1; MAC: 0010.5a0c.feae
    d) Interface address: 192.168.6.1; MAC: 000f.2480.8916
    e) Interface address: 192.168.6.2; MAC: 0010.5a0c.feae
    f) Interface address: 192.168.6.2; MAC: 000f.2485.8918
    The correct answer is D.
    From my understanding, the source and destination IPs don't change. If this is the case, why is the IP in the ARP cache not that of HostB?

    Hi Rajtilak,
    What switch are you using?
    If it is a small business switch (e.g. SG200, SG300, etc.), do you use the CLI or the GUI?
    From the web GUI, go to IP Configuration -> ARP, then click Add.
    Remember to save your config changes.  Hope that helps.
    Best,
    David
    Please rate helpful posts and identify correct answers.
