Slow "ls -al" on /userhome - until nscd caches entries

Hi,
I am running DS5.2 on Sol9 server.
If I run an "ls -al" in the home directory (just over 1000 users), the listing can take over 5 minutes; the second time it takes seconds, because of nscd. Users are in multiple groups.
The "ls -al" takes about the same time whether run directly on the LDAP server or from a client (Solaris 8 and 9).
Any idea how I can speed up the lookups?
Thanks in advance
Geoff
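(For what it's worth, nscd's caching behavior on Solaris is controlled by /etc/nscd.conf, and raising the positive time-to-live for passwd and group entries keeps the lookups that "ls -al" triggers warm between runs. A sketch with illustrative values, not a tuned recommendation:)

```
# /etc/nscd.conf - illustrative values only
enable-cache            passwd          yes
positive-time-to-live   passwd          3600
enable-cache            group           yes
positive-time-to-live   group           3600
```

After editing the file, restart the daemon so the new settings take effect (on Solaris 9, /etc/init.d/nscd stop then start).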

Try a restart.
Do a backup, using either Time Machine or a cloning program, to ensure files/data can be recovered. Two backups are better than one.
Try setting up another admin user account to see if the same problem continues. If Back to My Mac is selected in System Preferences, the Guest account will not work. The intent is to see whether the problem is specific to one account or system-wide. This account can be deleted later.
Isolating an issue by using another user account
Try booting into Safe Mode using your normal account. Disconnect all peripherals except those needed for the test. Shut down the computer, wait 10 seconds, and then power it back up. Immediately after hearing the startup chime, hold down the Shift key and continue to hold it until the gray Apple icon and a progress bar appear, and again when you log in. The boot-up is significantly slower than normal. This resets some caches, forces a directory check, and disables all startup and login items, among other things. When you reboot normally, the initial reboot may be slower than normal. If the system operates normally in Safe Mode, a third-party application may be causing the problem. Try disabling or deleting third-party applications after a restart, using each application's uninstaller; if you don't do them all at once, you will need to restart after each change.
Safe Mode - Mavericks
Safe Mode - About
Performance Guide
Why is my computer slow
Why your Mac runs slower than it should
Slow Mac After Mavericks
Things you can do to resolve slowdowns  see post by Kappy

Similar Messages

  • Cache entry not found error message in java console 1.6.0_05 - Citrix ICA

    Client information:
    Windows XP SP2
    Java Runtime version 1.6.0_05
    Application: Citrix ICA version 9.4.1868
    Slow Citrix ICA client connection, with repeated errors in the Java console stating "cache entry not found". However, when I downgrade to Java Runtime version 1.5.0_10, I do not see the "cache entry not found" errors and the Citrix ICA connection is much faster: it launches and connects in 10 seconds versus 2 minutes.
    Any ideas? Thanks!

    Hi,
    All your classes must be accessible through the web. The easiest solution is to put all your classes in the same folder as your web page.
    If your classes are in a different folder (which must be mapped as a virtual directory), try adding the codebase attribute to your applet.
    Regards,
    Kurt.

  • Expire all local cache entries at specific time of day

    Hi,
    We have a need to expire all local cache entries at a specific time (or times) of day, every day, like a crontab.
    Is this possible through Coherence config?
    Thanx,

    Hi,
    AFAIK there is no out-of-the-box solution, but you can certainly use the Coherence API along with Quartz to develop a simple class that is triggered to remove all entries from the cache at a certain time. You can also define your own custom cache factory configuration; an example is available here: http://sites.google.com/site/miscellaneouscomponents/Home/time-service-for-oracle-coherence
    Hope this helps!
    Cheers,
    NJ
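    Building on the reply above, here is a minimal sketch of a time-of-day eviction trigger. It uses a plain ScheduledExecutorService in place of Quartz, and a ConcurrentHashMap stands in for the Coherence NamedCache; in a real cluster you would clear CacheFactory.getCache("...") instead. All names and the midnight schedule are illustrative.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DailyCacheEvictor {

    // Millis from 'now' until the next occurrence of 'target' (today if still
    // ahead of us, otherwise tomorrow).
    static long millisUntil(LocalDateTime now, LocalTime target) {
        LocalDateTime next = now.toLocalDate().atTime(target);
        if (!next.isAfter(now)) {
            next = next.plusDays(1);
        }
        return Duration.between(now, next).toMillis();
    }

    public static void main(String[] args) {
        // Stand-in for the Coherence NamedCache.
        Map<String, String> cache = new ConcurrentHashMap<>();

        // Daemon thread so this demo JVM can exit; a server would keep it alive.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });

        // Fire at the next midnight, then every 24 hours.
        long initialDelay = millisUntil(LocalDateTime.now(), LocalTime.MIDNIGHT);
        scheduler.scheduleAtFixedRate(cache::clear, initialDelay,
                TimeUnit.DAYS.toMillis(1), TimeUnit.MILLISECONDS);
    }
}
```

    For multiple times per day, schedule one task per target time the same way; Quartz's cron triggers just make that bookkeeping nicer.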

  • How to clear Local-Cache Entries for a Query in BW?

    Hi There,
    I'm a student, and I need your help for my thesis!
    I execute the same query many times in BEx Web Analyzer and note the query response time under ST03N, using a different read mode each time while cache mode is inactive (Query Monitor RSRT).
    The first time I execute the query it reads from the database; the second time it uses the local cache, and that's okay!
    My problem is:
    When I change the read mode and execute the query again, the first run still uses the old entries from the cache, so I get a wrong response time for the first run!
    I know that while cache mode is inactive the local cache will still be used, so how can I delete the local cache entries each time I change the read mode and execute the query? In the cache monitor (RSRCACHE) I find only entries for the global cache, etc.
    I've already tried closing the session and logging in to the system again, but it doesn't solve the problem!
    I don't have permission (access rights) to switch off the complete cache (local and global).
    Any idea, please?
    Thanks and best regards,
    Rachidoo
    P.S.: Sorry for my bad English! I have to refresh it soon :)

    Hi Praba,
    the entries stored in RSRCACHE are for the global cache; there is no entry for my query in the cache monitor!
    I execute the query in RSRT using the Java web button with cache mode inactive, so the results are stored in the local cache.
    This is what I want to do for the performance tests in my thesis:
    1. Run a query for the first time with cache inactive and note its runtime.
    2. Run the query again with cache inactive and note its runtime.
    3. Clear the local cache (I don't know how to do this??).
    4. Change the read mode of the query in RSRT, then run the same query for the first time and note its runtime.
    5. Run the query again and note its runtime.
    I'm doing the same procedure for each read mode.
    The problem is in step 4: the OLAP processor gets the old results from the cache, so I get wrong runtimes for my tests.
    Regenerating the report doesn't help. Any idea, please?

  • Performance for messaging queue - queue implemented as single cache entry.

    Hey Brian/All,
    Has there been any progress on addressing the long-standing performance issues with messaging?
    i.e. messaging stores a queue within a single cache entry, which means it needs to deserialize, add the item, and reserialize every time we add an item to the queue.
    For push replication, this means a burst of data can bring messaging to its knees and cause a cluster to fall over (e.g. a clear of a large cache, or a remote site that is unavailable causing queues to grow to a very large size).
    I have also noticed that when a queue gets very large, the JMX suspend/drain times out and throws an exception.
    Cheers,
    Neville.
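    The cost Neville describes can be made concrete: when the whole queue lives in one cache entry, every add must rewrite the entire serialized value, so total bytes written grow quadratically with queue length. A small self-contained demo (plain Java serialization stands in for whatever serializer the messaging layer actually uses):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.List;

public class SingleEntryQueueCost {

    // Serialize the whole list, as a single-cache-entry queue must on every add.
    static byte[] serialize(List<String> queue) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new ArrayList<>(queue));
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        List<String> queue = new ArrayList<>();
        long bytesRewritten = 0;
        for (int i = 0; i < 1000; i++) {
            queue.add("message-" + i);
            // The entire queue is reserialized for each single add.
            bytesRewritten += serialize(queue).length;
        }
        System.out.println("Bytes (re)serialized across 1000 adds: " + bytesRewritten);
        // With one cache entry per message, each add would serialize only
        // that one message, keeping the cost linear.
    }
}
```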

    Hi Friends,
    Create a function that is called from ejbCreate(). Inside this function make the connections as in the snippet below, and close them in ejbRemove() or on an exception.
    fis = new FileInputStream("D:/MessageRouter_UAT/AppConfig.Properties");
    props.load(fis);
    fis.close();
    String logPath = props.getProperty("Log_path").trim() + ".log";
    logHandler = new FileHandler(logPath);
    logHandler.setFormatter(new SimpleFormatter());
    logWriter.addHandler(logHandler);
    logWriter.setLevel(Level.ALL);
    // Configure the client-mode connection to the queue manager.
    MQEnvironment.hostname = props.getProperty("MQ_HOST_NAME").trim();
    MQEnvironment.port = Integer.parseInt(props.getProperty("MQ_PORT").trim());
    MQEnvironment.channel = props.getProperty("CHANNEL_NAME").trim();
    MQEnvironment.properties.put(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_CLIENT);
    // Open the output queues once and reuse them.
    q_Mgr = new MQQueueManager(props.getProperty("QUEUE_MANAGER").trim());
    queueID = q_Mgr.accessQueue(props.getProperty("ID_Queue").trim(), MQC.MQOO_OUTPUT);
    queueSG = q_Mgr.accessQueue(props.getProperty("SG_Queue").trim(), MQC.MQOO_OUTPUT);
    queueHK = q_Mgr.accessQueue(props.getProperty("HK_Queue").trim(), MQC.MQOO_OUTPUT);
    queueCL = q_Mgr.accessQueue(props.getProperty("CL_Queue").trim(), MQC.MQOO_OUTPUT);
    Thanks,
    Arun Prithviraj

  • Cache entry created a long time after the report runs

    We have a report that returns more than 300,000 records. We get the results quickly, but the cache entry gets created only some time later, around 30 minutes or so. Any idea why this delay? Is it that the report caches 25 records at a time (the default number of rows per page), shows them quickly, and the rest of the records are cached in the background? Is there a way we can optimize this?

    Did you check how much time the entire report takes to execute (even though the first 25 rows come up quickly)? I suspect it is more than 30 minutes.
    OBIEE is not meant as a data-dump tool, and there is little that can be done (except better hardware).

  • The specified cache entry could not be loaded during initialization

    The OBIEE server is failing every few minutes of working.
    I saw in the NQServer.log:
    The specified cache entry could not be loaded during initialization.
    Cache entry, '/home/oracleadmin/OracleBIData/cache/NQS_rprt_734086_39693_00000001.TBL', is invalid due to either non-existing file name, size does not match, exceeding maximum space limit nor exceeding maximum entry limit, and it's been removed.
    I checked the directory and there is no space limitation.
    Can someone please clarify?
    The server/Config/NQSConfig.INI is as follows:
    [ CACHE ]
    ENABLE = YES;
    DATA_STORAGE_PATHS = "/home/oraceladmin/OracleBIData/cache" 2000 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;
    POPULATE_AGGREGATE_ROLLUP_HITS = NO;
    USE_ADVANCED_HIT_DETECTION = NO;
    MAX_SUBEXPR_SEARCH_DEPTH = 7;

    It's more than enough if you are using it for .tmp and .tbl file storage. Have you checked WORK_DIRECTORY_PATHS in the NQSConfig.INI file? Is it on a different mount point? Usually we create a separate mount point and use that as the storage location.
    b. Instead of using SAPurgeAllCache(), can I add a shell script which deletes the files from the cache directory and then executes run-sa.sh? You need to understand the difference between the OBIEE RPD cache and the .tbl and .tmp files. SAPurgeAllCache() purges the cache from the RPD; it won't purge .tmp or .tbl files. Those files are stored in a physical location: when a report starts executing, a .tmp file starts growing, and after the report finishes it is purged by the OBIEE server automatically.
    see this blog for some insight into this topic
    Re: TMP Files uneder /OracleBIData/tmp
    Regards,
    Sandeep

  • How to clear old CACHE entries -- RSRCACHE

    Hello Gurus: What is the suggested method to clear yesterday's cache entries in RSRCACHE?
    We are still using 3.1 and will be upgrading soon.
    I will be happy to assign points & thanks for your help.

    Hi
    You can go to RSRT
    and select the Cache Monitor.
    At the top you will find a Delete option.
    From there you can delete the main memory cache, or whatever is required.
    Regards
    M.A
    Edited by: M.A on May 21, 2008 4:15 PM

  • Big binaries as cache entries

    I know that we can put files as cache entries in Coherence (we are actually using this feature to have a distributed Lucene index, inspired by Infinispan's implementation). But handling big binaries is another kind of animal: I cannot have a 4 GB file in memory (near cache or wherever).
    So I was wondering if there is a way to stream, instead of read, a cache entry... is this possible? I was thinking of having a cache for big binaries with no memory space, so everything is kept in a backing map. Then every time someone gets a key from this cache, they obtain a 'DataHandler' or 'InputStream' to read the big binary.
    Why?
    Our software uses the cluster to read almost all the business data except files attached to business objects. The idea is to have all the files our software uses on the IMDG, instead of having other software do this for us (i.e. a distributed file system or a network file system, as in our current implementation).
    Edited by: ggarciao.com on Dec 11, 2012 3:54 PM
    Edited by: ggarciao.com on Dec 19, 2012 1:42 PM
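    One common workaround for the question above is to chunk a big binary across many small cache entries, so no single entry (or JVM) ever holds the whole file, and a reader can pull one chunk at a time. A minimal sketch; a plain HashMap stands in for the Coherence NamedCache, and the key scheme, chunk size, and class names are all illustrative assumptions, not Coherence API:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class ChunkedBlobStore {
    static final int CHUNK_SIZE = 64 * 1024; // per-entry size, illustrative

    // Store a large byte[] as many small cache entries keyed "<id>#<n>".
    static void put(Map<String, byte[]> cache, String id, byte[] data) {
        int chunks = 0;
        for (int off = 0; off < data.length; off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, data.length - off);
            cache.put(id + "#" + chunks++, Arrays.copyOfRange(data, off, off + len));
        }
    }

    // Reassemble by walking chunk indices until a key is missing. A streaming
    // reader would instead fetch one chunk per read() call and discard it.
    static byte[] get(Map<String, byte[]> cache, String id) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; ; i++) {
            byte[] chunk = cache.get(id + "#" + i);
            if (chunk == null) break;
            out.write(chunk, 0, chunk.length);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        Map<String, byte[]> cache = new HashMap<>(); // stand-in for a NamedCache
        put(cache, "doc", new byte[300_000]);
        // ceil(300000 / 65536) = 5 chunk entries
        System.out.println("entries: " + cache.size());
    }
}
```

    In a partitioned cache the chunk entries would spread across the cluster, which is exactly what keeps any one node from needing the full 4 GB in memory.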

    You are right there are two options, and they are both commonly used.
    Storing on the file system is a bit easier to implement for many, and might be a touch more performant, with less encoding and decoding going on. But it has a higher risk of synchronization problems, since there are many ways for the file objects to be modified, moved, or removed without anything changing in the database. Thus one should build logic that checks for the actual existence of the file at the URI before trying to serve it up.
    Storing the file in the database mitigates the synchronization issues, but it takes a bit more to set up and could put a lot more data in the database itself, which may or may not be an issue depending on one's setup.
    Either is an acceptable choice; just work with the various pros and cons of each.

  • Purging the cache entries accordingto table and SA

    Hi Gurus,
    I have a few questions on cache purging:
    1> How many types of purging are there?
    2> Is it possible to purge the cache based on tables, i.e. if entries were generated on table A, only those entries should be purged in the Cache Manager?
    3> Is it possible to purge the cache based on subject areas, i.e. if entries were generated on subject area A, only those entries should be purged in the Cache Manager?
    4> Suppose the Cache Manager holds 100 entries and I want to purge only selected entries (say the 25th, 50th, 75th, and 100th); how can I achieve this?
    Regards,

    You can get the same from the Help file:
    To purge the cache manually with the Cache Manager facility
    Use the Administration Tool to open a repository in online mode.
    Select Manage > Cache to open the Cache Manager dialog box.
    Select Cache or Physical mode by selecting the appropriate tab in the left pane.
    Navigate the explorer tree to display the associated cache entries in the right pane.
    Select the cache entries to purge, and then select Edit > Purge to remove them.
    In Cache mode, select the entries to purge from those displayed in the right pane.
    In Physical mode, select the database, catalog, schema or tables to purge from the explorer tree in the left pane.
    In Cache mode, you can purge:
    One or more selected cache entries associated with the open repository.
    One or more selected cache entries associated with a specified business model.
    One or more selected cache entries associated with a specified user within a business model.
    In Physical mode, you can purge:
    All cache entries for all tables associated with one or more selected databases.
    All cache entries for all tables associated with one or more selected catalogs.
    All cache entries for all tables associated with one or more selected schemas.
    All cache entries associated with one or more selected tables.
    SAPurgeCacheByQuery. Purges a cache entry that exactly matches a specified query. For example, using the following query, you would have a query cache entry that retrieves the names of all employees earning more than $100,000:
    select lastname, firstname from employee where salary > 100000;
    The following call purges the cache entry associated with this query:
    Call SAPurgeCacheByQuery('select lastname, firstname from employee where salary > 100000' );
    SAPurgeCacheByTable. Purges all cache entries associated with a specified physical table name (fully qualified) for the repository to which the client has connected.
    This function takes up to four parameters representing the four components (database, catalog, schema and table name proper) of a fully qualified physical table name. For example, you might have a table with the fully qualified name of DBName.CatName.SchName.TabName. To purge the cache entries associated with this table in the physical layer of the Oracle BI repository, execute the following call in a script:
    Call SAPurgeCacheByTable( 'DBName', 'CatName', 'SchName', 'TabName' );
    NOTE: Wild cards are not supported by the Oracle BI Server for this function. Additionally, DBName and TabName cannot be null. If either one is null, you will receive an error message.
    SAPurgeAllCache. Purges all cache entries. The following is an example of this call:
    Call SAPurgeAllCache();
    SAPurgeCacheByDatabase. Purges all cache entries associated with a specific physical database name. A record is returned as a result of calling any of the ODBC procedures to purge the cache. This function takes one parameter that represents the physical database name and the parameter cannot be null. The following shows the syntax of this call:
    Call SAPurgeCacheByDatabase( 'DBName' );
    For Q4:
    SAGetSharedRequestKey. An ODBC procedure that takes a logical SQL statement from the Oracle BI Presentation Services and returns a request key value.
    The following shows the syntax of this procedure:
    SAGetSharedRequestKey('sql-string-literal')
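    The purge procedures above can also be issued from a script or program through any ODBC client of the BI Server (nqcmd works the same way). A minimal Java sketch; the DSN, credentials, and the legacy JDBC-ODBC bridge URL are placeholder assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PurgeBiCache {

    // Build the ODBC purge call for a fully qualified physical table.
    static String purgeByTableSql(String db, String cat, String sch, String tab) {
        return String.format("Call SAPurgeCacheByTable('%s', '%s', '%s', '%s')",
                db, cat, sch, tab);
    }

    public static void main(String[] args) throws Exception {
        // Placeholder DSN/credentials; point this at your BI Server ODBC DSN.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:odbc:AnalyticsWeb", "user", "pass");
             Statement st = con.createStatement()) {
            st.execute(purgeByTableSql("DBName", "CatName", "SchName", "TabName"));
        }
    }
}
```

    The same Statement can execute Call SAPurgeAllCache() or Call SAPurgeCacheByDatabase('DBName') verbatim from the list above.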
    Pls mark if helps

  • ICM Statistics - Suppressed Cache Entries

    Well, I've done the usual search in SAP xSearch and Google, not to mention checking the application help, but I can't seem to find the answer.
    We have a CRM ABAP 7.0 system (kernel 7.01).  When I display the ICM statistics for the HTTP Server Cache (txn SMICM), I see:
      Cache Statistics
       Cache Size (Bytes)                         52,428,800
       Occupied Cache Memory (Bytes)              52,422,656
       Maximum Number of Cache Entries                10,000
       Current Number of Cache Entries                 5,968
       Number of Suppressed Cache Entries             55,593
       Total Number of Cache Accesses             27,134,021
       Number of Cachable Accesses                28,224,213
       Number of Cache Hits                       23,749,222             84.14 %
       Number of Failed Cache Accesses             4,474,991             15.86 %
       Number of Cache Hits in Memory             23,749,222             84.14 %
       Maximum Number of UFO Entries                   2,000
       Current Number of UFO Entries                       0
       Number of Accesses to UFO List                    416
       Number of UFO Hits                                 44
    What, pray tell, are "Suppressed Cache Entries" (highlighted above), and what are their effects on the server cache performance? Do I need to 'care' about these suppressed entries?
    Regards,
    bryan

    Thanks to Ashutosh and Clébio for responding to my query.
    Ashutosh, the link you supplied was interesting, and it is possible that web error pages, such as "400 session timed out", could be the so-called suppressed cache pages.
    Clébio, as for "the number of entries in the cache that have been deleted to release space for new ones," I think the message relevant to that action is the CCMS alert "EvictedEntries". I'm not certain, as that is another alert that isn't very well documented.
    bryan

  • C350- LDAP Default Cache Entries - 10000

    Hi Mentors,
    Can someone explain what will happen if I increase the LDAP cache entries on my C350?
    What will happen if I set it to 50,000? What's the impact?
    thanks for your help
    kira

    From a cache-size standpoint, I would recommend adjusting it based on the number of valid unique email addresses expected within the cache TTL period. If you set the TTL to 86,400 seconds (24 hours), you have 50,000 valid email addresses (unique users, distribution groups, etc.), and there is a likelihood they will all receive an email within that 24 hours, then set the cache size to 50,000.
    Erich

  • Static netbios cache entry

    Hello
    I have two routers connected over DLSw; each router is connected to a
    token ring. If I want to configure a static cache entry, which command should
    I use:
    netbios name-cache ...........
    or
    dlsw icanreach netbios-name ...........
    Thanks for your help
    Louis

    NetBios Name-Caching is an older feature which was used in conjunction with RSRB
    and Local Acknowledgement. The cache that DLSw+ keeps for NetBios names is a
    completely different cache, optimized for DLSw+ operation. By configuring 'dlsw
    icanreach netbios-name ...' on a router, it advertises to its peers via capabilities exchange
    that it has a route to the NetBios station in question, thereby creating static remote entries
    in each of its remote-peers. This is the configuration you will most likely want to use.
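    To illustrate the recommendation above, a minimal DLSw+ sketch (the peer addresses and NetBIOS name are placeholders, not taken from the thread):

```
dlsw local-peer peer-id 10.1.1.1
dlsw remote-peer 0 tcp 10.2.2.2
dlsw icanreach netbios-name SERVER01
```

    With this on the router in front of SERVER01, its peers learn via capabilities exchange that the name is reachable there, without flooding name queries.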

  • Firefox is running so slow even after clearing history, cookies, and cache

    My Firefox browser suddenly started running very slow. I have cleared history, cookies, and cache, run scans, and even tried to uninstall and reinstall. It is still running slow.

    flytnn,
    after a fresh install, Spotlight will need to reindex the files on your internal disk; that happens each time a fresh installation takes place.
    I see that you have OS X 10.9.2 installed. Are you planning on installing the 10.9.3 update?
    Your installed version of Java is outdated; the most recent version can be downloaded from here.

  • Everything Slowed Down....Dump All Caches?

    After updating my system last night, my MacBook was very slow on reboot and both Safari and Firefox are very slow as well. Another member mentioned dumping the caches in the home folder and HD folder. If this is recommended, would someone please provide me with exact instructions on which caches and how to dump each. I do not want to lose anything important. Thanks!

    I'd select Safari > Empty Cache from the Safari menubar.
    And make sure you have at least 15% of your boot volume free.
    If that doesn't help, download the (free) OnyX app, from: http://www.titanium.free.fr/index_us.html
    Select the Cleaning icon in the toolbar, then select the various tabs and caches you want to delete.
    (I omit Form Values and cookies from the Internet tab, and most of the log items, but your desires may be different.)
    You also might want to rebuild the Launch Services database, and some or all of the other items in the Maintenance > Rebuild tab.

Maybe you are looking for

  • IPod not recognized by Windows or iTunes- Please Help!!!

    I have had my iPod mini since last summer and so far it has worked flawlessly. However, recently, when I connect my iPod to my computer, it does not charge and is not recognized as being plugged in at all. The iPod displays a battery icon although it

  • Dynamic alias name in query

    Is it possible to use dynamic alias name in query for interactive report? I have this SELECT and I would like to use the parameters used in alias request to build a dynamic crosstab report: SELECT p.annee AS "Année", p.desc_rls AS "RLS", SUM(CASE WHE

  • WVC200: How to set a home default position ?

    From time to time we have to turn the camera off and when the camera is power-on it goes to a home position where it shows the ceiling ! Is there something that can be done to change that home position ?

  • Maximum Content Height

    What is the maximum height for an iWeb page? Can't seem to get a content height of more than 4000 pixels. Would that be it?

  • LD CRMLDB_SALES_MON  & CRMLDB_SERVICE_MON

    These 2 logical db have elements CRMLDB_SERVICE_MON my dept my team my collegues CRMLDB_SALES_MON dept sales collegues sales Are the driven by the org model setup?  in PPOME or PPOMA_CRM? If yes can you point me in the direction to set these.