Is NFS client data caching possible?

Yesterday, I was viewing an HD 1080 video with VLC, and noticed that Activity Monitor was showing about 34MB/sec from my NAS box. My NAS box runs OpenSolaris (I was at Sun for over 20 years, and some habits die hard), and the 6GB video file was mounted on my iMac 27" (10.7.2) using NFSv3 (yes, I have a gigabit network).
Being a long-term UNIX performance expert and a regular DTrace user, I was able to confirm that VLC on Lion was reading the file at about 1.8MB/sec, and that the NFS server was being hit at 34MB/sec. Further investigation showed that the NFS client (Lion) was requesting each 32KB block 20 times!
(Note: the default read size for NFSv3 over TCP is 32KB.)
Digging deeper, I found that VLC was reading the file in 1786-byte blocks. I have concluded that Lion's NFSv3 client issues at least one 32KB read for each application call to read(2), and that no data is cached between reads (this fully accounts for the 20x overhead in this case).
A workaround is to mount with, say, rsize=1024, which will increase the number of NFS ops but dramatically reduce the bandwidth consumption (which means I might yet be able to watch HD video over WiFi).
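For illustration, a mount along these lines (the server name, export path and mount point are placeholders rather than my actual ones) applies the smaller read size on OS X:
sudo mkdir -p /Volumes/media
sudo mount -t nfs -o vers=3,tcp,rsize=1024 nas:/export/media /Volumes/media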
That VLC should issue such small reads is a bug, so I have also posted some notes in the vlc.org forums. But client-side caching would hide the issue from the network.
So, the big question: is it possible to enable NFS client data caching in Lion?

The problem solved itself mysteriously overnight, without any intervention on my part.
The systems are again perfectly happily mounting the file space (650 clients, all mounting up to 6 filesystems from the same server at the same time) and the server is happily serving again, as it has been for the past 2 years.
My guess is that there was a network configuration clash, but considering that the last modification of the NIS hosts file was about 4 weeks ago, when the latest server was installed and has been serving ever since, I have no idea how such a clash could happen without changes to the config files. It is a mystery and I will have to make every effort to unravel it. One does not really like to sweep incidents like this under the carpet uninvestigated.
If anybody has any suggestions or thoughts on this matter, please post them here.
Lydia

Similar Messages

  • Does the client automatically cache the data it gets from the cache server?

    Hi expert,
    I have 2 questions about the client local cache. Would you please help to give me some suggestion?
    1. Will the client automatically cache locally the data it gets from the cache server the first time, and automatically update the data in its local cache when it gets the same data from the cache server again? I went through the API reference but cannot find any API to query the data currently held in the local cache.
    2. If the client does automatically cache the data it gets from the cache server, is there any way for the client to receive the events that happen to its local cache, such as an entry being created in, deleted from, or updated in the local cache? In my opinion, when an entry is fetched from the cache server for the first time, the MapListener's entry-created event should be triggered; when the same entry is fetched again, the entry-updated event should be triggered.
    However, I have tried a client with a replicated cache, a client with a partitioned cache, an extend client with a remote cache and a client with a local cache (the front cache of a near cache), and in each case the client (whose NamedCache object has the MapListener set) does not get any event notification after getting data from the cache server. By the way, my listener is OK, since when putting data the entry-created and entry-updated events are triggered.
    Your suggestion is very appreciated. :)

    Hi
    If I were you I would read this http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/toc.htm
    and particularly the section about Near Caching here http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/nearcache.htm#CDEFEAJG
    which is what you are asking about in your question.
    Near Caching is how Coherence stores data locally on the client - which is the answer to your first question. How Near Caching works is explained in the documentation.
    Events, which you ask about in your second question are explained here http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/delivereventsjava.htm#CBBIIEFA
    It might be that ContinuousQueryCache is closer to what you want. This is explained here http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/queryabledatafabric.htm#sthref38 - a ContinuousQueryCache is like having a sub-set of the underlying cache on the local client, which you can then listen to, etc...
    JK

  • Possible to delete Offline Files content for a specific user from the Client Side Cache (CSC)?

    Hello Everyone,
    We would like to implement a script to delete the offline files in the Client Side Cache (CSC) for a nominated user (on Windows 7 x64 enterprise).
    I am aware that;
    1. We can use a registry value to flush the entire CSC cache (for all users) the next time the machine reboots.
    2. If we delete the user's local profile it appears that Windows 7 also removes their content from the local CSC.
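    For reference, the whole-cache flush in point 1 is usually the following registry value (taken from general Windows 7 Offline Files documentation rather than this thread; it wipes the CSC for every user on the machine, so it does not solve the per-user case):
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\CSC\Parameters" /v FormatDatabase /t REG_DWORD /d 1 /f
    rem reboot afterwards for the flush to take effect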
    However, we would like to just delete the CSC content for a particular nominated user without having to delete their local user profile.
    In our environment we have many users that share workstations but only use them occasionally. We don't use roaming profiles, so we would like to retain all the users' local profiles but still delete the CSC content for any user that hasn't logged on in a week.
    Any ideas or info would be appreciated !
    Thanks, Makes

    Hi,
    I don't think this is possible.
    If you want to achieve it via script, I suggest you post in the official scripting forum for more professional help:
    http://social.technet.microsoft.com/Forums/scriptcenter/en-US/home?category=scripting
    The reason why we recommend posting appropriately is you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
    Karen Hu
    TechNet Community Support

  • Client Copy (With Data/Cubes) Possible in BI 7.0?

    Hello,
    I am a basis team member (a newbie, still learning) for my company and have been asked to copy our development cubes/data to our sandbox environment. We are attempting to do this through a client copy. I am using SCC9 with the SAP_APPL profile, but afterwards I do not see the cubes or any of the InfoProviders etc. in RSA1 in the sandbox. Is there a post-processing step that I need to carry out to have this content show up, or is a client copy not possible? We are trying to avoid a system copy if at all possible. Thanks for any and all suggestions/comments/help. Don't worry, I will award points for helpful answers :).
    Thanks,
    Rob

    Hello,
    See these docs,
    [System Copy Procedures |https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50960d8f-6542-2a10-3ba8-e46dc23dd9b1]
    [System Copy and Migration|System Copy and Migration ]
    [Setting up Business Intelligence Client in NW2004s|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/007af9a7-e48e-2a10-5c85-fcac22d58e82]
    [System Copy for SAP Systems Based on SAP NetWeaver 2004s SR2 ABAP and Java|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/200ebc93-dabe-2910-c1a6-c4ec30b20e04]
    [Homogeneous and Heterogeneous System Copy for SAP Systems Based on SAP NetWeaver 2004s|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f022aa7d-0c01-0010-20a5-c247330d47fa]
    [System Copy for SAP Systems Based on SAP NetWeaver 2004s SR1 ABAP|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d10bf27d-0c01-0010-6995-bbdcdf0118a1]
    [System Copy for SAP Systems Based on SAP NetWeaver 2004s SR2 ABAP|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e08502d4-dabe-2910-bbb4-c1bfc82aed73]
    [System Copy for SAP Systems Based on SAP NetWeaver 2004s SR2 Java|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/20bd5cee-dabe-2910-fb97-c082a7c9c682]
    Thanks
    Chandran

  • Possible to set CacheSize for the single-JVM version of the data cache?

    Hi -
    I'm using Kodo 2.3.2.
    Can I set the size for the data cache when I'm using a subclass of the single-JVM version of the
    data cache? Or is this only a feature for the distributed version?
    Here are more details...
    I'm using a subclass of LocalCache so I can display the cache contents.
    The kodo.properties contains these lines:
    com.solarmetric.kodo.DataCacheClass=com.siemens.financial.jdoprototype.app.TntLocalCache
    com.solarmetric.kodo.DataCacheProperties=CacheSize=10
    When my test program starts, it displays getConfiguration().getDataCacheProperties() and the value
    is displayed as "CacheSize=10".
    But when I load 25 objects by OID and display the contents of the cache, I see that all the objects
    have been cached (not just 10). Can you shed light on this?
    Thanks,
    Les

    The actual size of the cache is a bit more complex than just the CacheSize
    setting. The CacheSize is the number of hard references to maintain in the
    cache. So, the most-recently-used 10 elements will have hard refs to them,
    and the other 15 will be moved to a SoftValueCache. Soft references are not
    garbage-collected immediately, so you might see the cache size remain at
    25 until you run out of memory. (The JVM has a good deal of flexibility in
    how it implements soft references. The theory is that soft refs should stay
    around until absolutely necessary, but many JVMs treat them the same as
    weak refs.)
    Additionally, pinning objects into the cache has an impact on the cache
    size. Pinned objects do not count against the cache size. So, if you have
    15 pinned objects, the cache size could be 25 even if there are no soft
    references being maintained.
    -Patrick
    Patrick Linskey [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Error in Starting Oracle BAM Active Data Cache

    I am not able to start "Oracle BAM Active Data Cache" on my machine.
    The other two components "Oracle BAM Event Engine" and "Oracle BAM Report Cache" are starting properly.
    When I see the event log file of my Computer I could see the details as below:
    Event Type: Error
    Event Source: Oracle BAM Active Data Cache
    Event Category: None
    Event ID: 0
    Date: 2/7/2007
    Time: 3:51:25 PM
    User: N/A
    Computer: CHNANDA-WXP
    Description:
    ActiveDataCache: The Oracle BAM Active Data Cache service failed to start. Oracle.BAM.ActiveDataCache.Common.Exceptions.CacheException: ADC Server exception in Startup(). ---> Oracle.DataAccess.Client.OracleException ORA-12541: TNS:no listener at Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, String procedure)
    at Oracle.DataAccess.Client.OracleConnection.Open()
    at Oracle.DataAccess.Client.OracleConnection.Open()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.GetServerVersion()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.Startup(IDictionary oParameters)
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    --- End of inner exception stack trace ---
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    at Oracle.BAM.ActiveDataCache.Kernel.Server.Server.Startup()
    at Oracle.BAM.ActiveDataCache.Service.DataServer.Run()
    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    Could anyone pls help me?
    Thanks and Regards,
    Chinmaya Nanda

    Hi Chinmaya - can you tell us your company name and project? Your problem is very simple: BAM ADC is not able to reach the Oracle DB. From your DOS prompt, try tnsping <yrDB> [the default is oraclebam or orcl]. Also see the FAQ pages; there is a requirement on the DOS prompt PATH setting, with <clientforBAM> as the first entry.
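    A minimal check along these lines may help (the TNS alias is whatever your BAM ADC was configured with; orcl below is just the common default):
    tnsping orcl
    rem on the database server, confirm the listener is actually running:
    lsnrctl status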

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase, EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some Googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so that those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.
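    Following the data-cache note in that support doc, a rough, hedged sizing check (the 100 KB block size is an assumed example; use the real block size from your database statistics):
    SET LOCKBLOCK HIGH lets Essbase fix up to CALCLOCKBLOCKHIGH = 500 blocks at once
    500 blocks x ~100 KB per block (assumed) = ~50,000 KB
    so the data cache should be at least ~50 MB just to cover the locked blocks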

  • BAM Report and Active data cache

    I am having problems with reports that have automatic active data retrieval.
    It does not work.
    If I load the report it extracts the data correctly from an external data source. If I Reprompt or refresh it brings back changed data from the data source.
    It does not automatically poll for changes even though it is set in the report properties.
    If I stop and start the Active data cache service it starts polling and reconnecting, but not resynching the data.
    In the Active data cache log I have the following entries :
    2005-12-01 09:36:54,062 [3576] ERROR - ActiveDataCache Viewset not found:
    2005-12-01 09:36:54,078 [3576] WARN - ActiveDataCache Exception occurred in method GetChangeList
    Stack trace:
    at Oracle.BAM.ActiveDataCache.Kernel.Viewsets.ViewsetManager.GetViewset(String strViewsetID)
    at Oracle.BAM.ActiveDataCache.Kernel.Viewsets.ViewsetManager.GetChangeList(String strViewsetID, Int32 iTimeout)
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.GetChangeList(String strViewsetID, Int32 iTimeout)
    Has anyone any ideas on what the problem is ?
    Thanks

  • BAM Active data Cache Capacity

    what is the maximum capacity of the active data cache?
    what is the retention period of active data cache?
    does ADC store the data in the internal oracle data base?
    Please provide the documents for the internal implementation of ADC?
    Sriram.S

    Sriram
    Can you identify your company and project?
    a) Maximum capacity of ADC == maximum capacity of your underlying Oracle DB used by BAM as the repository.
    b) Retention period - infinite - data stays in the BAM ADC (i.e. in the BAM repository) until you manually delete it (there is no automatic deletion of old data).
    c) Yes, ADC stores its data internally in the Oracle database (aka the BAM repository).
    d) No - we generally do not publish ADC internals. This is a deliberate decision, since end users will only either put data into ADC or plot reports/dashboards from the BAM GUI. You would have to be very specific about what you are trying to achieve, and we can suggest possible alternatives.

  • Error starting Oracle BAM active data cache service

    Hi
    After installing BAM everything was working fine, but if I restart my system the Oracle BAM Active Data Cache service throws the following error:
    "The Oracle BAM Active Data Cache service on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service."
    The database is running fine.
    The following is the ADC log file error:
    2007-12-07 17:19:29,640 [2928] ERROR - ActiveDataCache The Oracle BAM Active Data Cache service failed to start. Oracle.BAM.ActiveDataCache.Common.Exceptions.CacheException: ADC Server exception in Startup(). ---> System.DllNotFoundException: Unable to load DLL (OraOps10.dll).
    at Oracle.DataAccess.Client.OpsTrace.GetRegTraceInfo(UInt32& TrcLevel, UInt32& StmtCacheSize)
    at Oracle.DataAccess.Client.OraTrace.GetRegistryTraceInfo()
    at Oracle.DataAccess.Client.OracleConnection..ctor(String connectionString)
    at Oracle.DataAccess.Client.OracleConnection..ctor(String connectionString)
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleDataFactory.GetConnection()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.GetServerVersion()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.Startup(IDictionary oParameters)
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    --- End of inner exception stack trace ---
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    at Oracle.BAM.ActiveDataCache.Kernel.Server.Server.Startup()
    at Oracle.BAM.ActiveDataCache.Service.DataServer.Run()
    2007-12-07 17:24:45,250 [1524] ERROR - ActiveDataCache Unable to load DLL (OraOps10.dll).
    2007-12-07 17:24:45,265 [1524] WARN - ActiveDataCache Exception occurred in method Startup
    Please help me in resolving this issue. I am getting it every time.
    Thanks
    BS

    Make sure the path to the ODAC used by BAM (C:\OracleBAM\ClientForBAM\bin) is the first item in the system PATH
    environment variable. Restart your computer after fixing this.
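    A quick, hedged sanity check from a command prompt (the path is the default install location mentioned above; the where command simply confirms the DLL is reachable through PATH):
    echo %PATH%
    dir C:\OracleBAM\ClientForBAM\bin\OraOps10.dll
    where OraOps10.dll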
    If that doesn't fix it, please check the Troubleshooting section in the BAM Install Guide.
    Regards, Stephen

  • NFS client problem "The document X could not be saved"

    Hi,
    Briefly: Debian Linux server (Lenny), OS X 10.5.7 client. NFS server config is simple enough:
    /global 192.168.72.0/255.255.255.0(rw,root_squash,sync,insecure,no_subtree_check)
    This works well with our Linux clients, and generally it is OK with my OS X iMac. The OS X NFS client is configured through Directory Utility, with no "Advanced" options. The client can authenticate with NIS nicely, and NFS, on the whole, works. I can manipulate files with the Finder, and create files on the command line with the usual tools.
    The problem is TextEdit, iWork and some other Cocoa apps (not all). They can save a file once, but subsequently saving the file produces a "The document X.txt cannot be saved" error dialog. If I remove the file on the command line and re-save, then the save succeeds. It is as if re-saving the document with the same name as an existing file causes issues. There seems to be no problem with file permissions. When I save in a directory that is not NFS-exported, everything is fine.
    Has anyone spotted this problem before?
    Lawrence

    I doubt that "OS X NFS is fundamentally broken" seeing as how many people use it successfully.
    tcpdump (or more preferably: wireshark) might be useful in tracking down what's happening between the NFS client and NFS server. Sometimes utilities like fs_usage can be useful in tracking down the application/filesystem interaction.
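    For example (the interface name and server address are placeholders, adjust to your setup), something along these lines captures the NFS traffic and the filesystem calls while you reproduce the failing save:
    sudo tcpdump -i en0 -s 0 -w nfs-save.pcap host nfs-server.example.com and port 2049
    sudo fs_usage -w TextEdit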
    It's usually a good idea to check the logs (e.g. /var/log/system.log) for possible clues in case an error/warning is getting logged around the same time as the failure. And if you can't reproduce the problem from the command line, then that can be a good indication that the issue is with the higher layers of the system.
    Oh, and if you think there's a bug in the OS, it never hurts to officially tell Apple that via a bug report:
    http://developer.apple.com/bugreporter/
    Even if it isn't a bug, they should still be able to work with you to help you figure out what's going on. They'll likely want to know details about what exactly isn't working and will probably ask for things like a tcpdump capture file and/or an fs_usage trace.
    HTH
    --macko

  • [Solved]NFS Client Not Mounting Shares

    Here is my setup:
    I have two Arch boxes that I am attempting to setup NFS shares on.  The box that is going to be the server is headless FYI.  So far, I have installed nfs-utils, started `rpc-idmapd` and `rpc-mountd` successfully on the server, and started `rpc-gssd` successfully on the client.
    The folder I am trying to share is the /exports folder.
    ls -l /exports
    produces
    total 8
    drwxrwxrw-+ 110 daniel 1004 4096 Dec 6 17:26 Movies
    drwxrwxrwx+ 13 daniel users 4096 Jan 8 19:12 TV-Shows
    On the server:
    /etc/exports
    # /etc/exports
    # List of directories exported to NFS clients. See exports(5).
    # Use exportfs -arv to reread.
    # Example for NFSv2 and NFSv3:
    # /srv/home hostname1(rw,sync) hostname2(ro,sync)
    # Example for NFSv4:
    # /srv/nfs4 hostname1(rw,sync,fsid=0)
    # /srv/nfs4/home hostname1(rw,sync,nohide)
    # Using Kerberos and integrity checking:
    # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt)
    # /srv/nfs4/home gss/krb5i(rw,sync,nohide)
    /exports 192.168.1.10(rw,fsid=0)
    On the client:
    showmount -e 192.168.1.91
    Export list for 192.168.1.91:
    /exports 192.168.1.10
    Everything is looking hunky-dory.  However, when I go to mount using
    sudo mount -t nfs4 192.168.1.91:/exports /mnt/Media
    the mount never takes place.  It just sits there and does nothing.  I CAN, however, kill the process with Ctrl-C.
    So does anybody have ANY idea why my shares aren't working?
    EDIT: Just thought I should mention that all of the data in the /exports folder is a mount --bind from /mnt/media.  All of the /mnt/media is contained on a USB external hard drive.  I did notice that there is an ACL.
    getfacl /exports
    getfacl: Removing leading '/' from absolute path names
    # file: exports/
    # owner: root
    # group: root
    user::rwx
    group::r-x
    other::r-x
    Last edited by DaBungalow (2014-01-10 03:18:05)

    I found what the problem was.  Apparently rpc-gssd was causing the problem.  Stopping it fixed everything.
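    For anyone else who hits this: on a systemd-based Arch install with the stock nfs-utils units (an assumption about the setup), that amounts to
    systemctl stop rpc-gssd.service
    systemctl disable rpc-gssd.service
    rpc-gssd is only needed for Kerberos-secured mounts (sec=krb5*), so it is safe to leave off for a plain sec=sys export like the one above.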

  • Dynamic Calc Issue - CalcLockBlock or Data Cache Setting

    We recently started seeing an issue with a Dynamic scenario member in our UAT and DEV environments. When we try to reference the scenario member in Financial Reports, we get the following error:
    Error executing query: The data form grid is invalid. Verify that all members selected are in Essbase. Check log for details.com.hyperion.planning.HspException
    In SmartView, if I try to reference that scenario member, I get the following:
    The dynamic calc processor cannot allocate more than [10] blocks from the heap. Either the CalcLockBlock setting is too low or the data cache size setting is too low.
    The dynamic calcs worked fine in both environments up until recently, and no changes were made to Essbase, so I am not sure why they stopped working.
    I tried to set the CalcLockBlock settings in the essbase.cfg file, and increased the data cache size. When I increased the CalcLockBlock settings, I would get the same error.
    When I increased the data cache size, Financial Reporting would just sit there loading and never show the report. In SmartView, it would give me an error that it had timed out, and suggest increasing the NetRetry and NetDelay values.

    Thanks for the responses guys.
    NN:
    I tried doubling the Index Cache setting and the Data Cache setting, but it appears that when I do that, it crashes my Essbase app. I also tried adding DYNCALCCACHEMAXSIZE and QRYGOVEXECTIME to essbase.cfg (without the cache settings, since they crash the app), and still no luck.
    John:
    I had already had those values set on my client machine, I tried to set them on the server as well, but no luck.
    The exact error message I get after increasing the cache settings is "Essbase Error (1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the olap.server.netConnectTry and/or olap.server.netDelay values in the essbase.properties. Restart the server and try again."
    From the app's essbase log:
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1023040)
    msg from remote site [[Wed Jun 06 10:07:44 2012]CCM6000SR-HUESS/PropBud/NOIStmt/admin/Error(1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the NetRetryCount and/or NetDelay values in the ESSBASE.CFG file. Update this file on both client and server. Restart the client and try again.]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1200467)
    Error executing formula for [Resident Days for CCPRD NOI]: status code [1042017] in function [@_XREF]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Warning(1080014)
    Transaction [ 0x10013( 0x4fcf63b8.0xcaa30 ) ] aborted due to status [1042017].
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1013091)
    Received Command [Process local xref/xwrite request] from user [admin@Native Directory]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008108)
    Essbase Internal Logic Error [1060]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008106)
    Exception error log [E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp] is being created...
    [Wed Jun 06 10:07:46 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008107)
    Exception error log completed E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp please contact technical support and provide them with this file
    [Wed Jun 06 10:07:46 2012]Local/PropBud///4340/Info(1002089)
    RECEIVED ABNORMAL SHUTDOWN COMMAND - APPLICATION TERMINATING
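    The log above points at the NetRetryCount/NetDelay settings in essbase.cfg; as a hedged illustration (the values are examples, not tuned recommendations), the fix it suggests would look like adding the following to essbase.cfg on both client and server and restarting:
    NETDELAY 1000
    NETRETRYCOUNT 1500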

  • Information kept in client browser cache

    Hi all,
    Does anyone know what kind of information is stored in the client browser cache when users are logged into Planning or Workspace?
    We have a concern from the customer's Information Security department about confidential information (data) that could be left behind in the browser cache when users log out of the applications.
    We need to provide confirmation to the customer that any confidential information that could be found in the client browser cache is deleted after the users end their work sessions.
    Thanks in advance,
    Marilia

    The question is what counts as confidential information.
    You can use the Firefox browser to log in and go to Planning, and then in another tab do about:cache
    Using this, the web pages below are in the cache. I browsed through these cached pages and one of them has the application name and database names, which some people may consider confidential depending on the context.
    Suggest you provide them with a view-only login and ask them to use Firefox, and let them determine whether that information is confidential.
    The cached information from my session follows:
    Key: http://bpm11bi01/workspace/browse/workspacepages?moduleID=tools.workspacepages.5&editable=false&accessibilityMode=false&action=4&repository_uuid=HomePage_wsp&theme_dir=themes%2Ftheme_tadpole%2F
    Data size: 55161 bytes
    Fetch count: 2
    Last modified: 2009-07-16 00:09:19
    Expires: 1969-12-31 18:00:00
    Key: http://bpm11bi01/workspace/BpmLauncher.jsp?accessibilityMode=false
    Data size: 13218 bytes
    Fetch count: 2
    Last modified: 2009-07-16 00:08:38
    Expires: 1969-12-31 18:00:00
    Key: http://bpm11bi01/workspace/browse/dyn?page=/jsp/com/hyperion/tools/workspacepages/mrulisting.jsp&cssUri=%2E%2E%2Fthemes%2Ftheme_tadpole%2Fhomepage%2Ecss&showTitle=true&theme_dir=themes%2Ftheme_tadpole%2F
    Data size: 8482 bytes
    Fetch count: 2
    Last modified: 2009-07-16 00:09:20
    Expires: 1969-12-31 18:00:00
    Key: http://bpm11bi01/workspace/browse/dyn?page=/jsp/com/hyperion/tools/workspacepages/workspacePagelisting.jsp&cssUri=%2E%2E%2Fthemes%2Ftheme_tadpole%2Fhomepage%2Ecss&showTitle=true&showItems=4&theme_dir=themes%2Ftheme_tadpole%2F
    Data size: 6958 bytes
    Fetch count: 1
    Last modified: 2009-07-16 00:09:19
    Expires: 1969-12-31 18:00:00
    Key: http://bpm11bi01/workspace/browse/dyn?page=/jsp/com/hyperion/tools/workspacepages/quicklink.jsp&cssUri=%2E%2E%2Fthemes%2Ftheme_tadpole%2Fhomepage%2Ecss&showTitle=true&showItems=4&numThreads=5&theme_dir=themes%2Ftheme_tadpole%2F
    Data size: 6102 bytes
    Fetch count: 3
    Last modified: 2009-07-16 00:09:19
    Expires: 1969-12-31 18:00:00
    Key: http://bpm11bi01/workspace/modules/com/hyperion/tools/cds/repository/bpm/mode/modeApps.jsp
    Data size: 1408 bytes
    Fetch count: 2
    Last modified: 2009-07-16 00:13:22
    Expires: 1969-12-31 18:00:00
    Key: http://bpm11bi01/workspace/index.jsp
    Data size: 3397 bytes
    Fetch count: 3
    Last modified: 2009-07-16 00:08:38
    Expires: 1969-12-31 18:00:00
    Regards,
    John
    http://www.metavero.com

  • Data Cache Settings

    Hello,
    I am coming up against a spreadsheet retrieval error very frequently and get Essbase Error 1130203. I went through the error interpretation and found that the data cache might be causing it. Here are the details of the cache settings on my databases currently. Can somebody please help me understand if they are right or need some changes:
    DataBase A:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 100000
    Block Size: (B) 143880
    Number of existing blocks: 7266
    Page file size: 40034304
    DataBase B:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 300000
    Block Size: (B) 91560
    Number of existing blocks: 1912190
    Page file size (file 1): 2147475456
    Page file size (file 2): 500703056
    DataBase C:
    Data File Cache setting (KB) 300000
    Data cache setting (KB) 37500
    Block Size: (B) 23160
    Number of existing blocks: 26999863
    Page file size: 21 page files = 21 * 2 GB = 42 GB
    If this might not be the issue then please let me know what might be causing it?
    Thanks!
    Edited by: user4958421 on Dec 15, 2009 10:43 AM

    Hi,
    1. For error no. 1130203, here are the possible problems and solutions, straight from the documentation.
    Try any of the following suggestions to fix the problem. Once you fix the problem, check to see if the database is corrupt.
    1. Check the physical memory on the server computer. In a Windows environment, 64 MB is the suggested minimum for one database. In a UNIX environment, 128 MB is the suggested minimum for one database. If the error keeps occurring, add more memory to the server computer.
    2. If you are on a UNIX computer, check the user limit profile.
    3. Check the block size of the database. If necessary, reduce the block size.
    4. Check the data cache and data file cache setting. If necessary, decrease the data cache and data file cache sizes.
    5. Make sure that the Analytic Services computer has enough resources. Consult the Analytic Services Installation Guide for a list of system requirements. If a resource-intensive application, such as a relational database, is running on the same computer, the resource-intensive application may be using the resources that Analytic Services needs.
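    As a rough illustration of suggestions 1 and 4, just adding up the configured caches from the question (this ignores index caches and other Essbase memory, so the real footprint is higher):
    Data caches:      100000 + 300000 +  37500 KB = 437,500 KB (~427 MB)
    Data file caches:  32768 +  32768 + 300000 KB = 365,536 KB (~357 MB)
    Total: roughly 803,000 KB (~784 MB) of cache the server must be able to allocate
    If the server does not comfortably have that much free physical memory, reducing the cache settings (or adding memory) is the direction these suggestions point.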
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/
